
“Will AI take my job?”
I hear some flavor of this question on a weekly basis. It’s asked at all levels, from senior knowledge-worker peers to my Intro to User Experience (UX) Design students at UC Berkeley, who are just starting their industry careers.
My response: “No, but AI will redefine your craft.”
In today’s episode, I’ll break down how AI is reshaping knowledge work, why systems thinking is essential for future-proofing careers (especially in UX), and what that skillset entails.
Redefining Craft with AI
Craft has traditionally referred to technical skill, attention to detail, and a commitment to producing high-quality work. This might include getting your kerning just right, debugging code, or wordsmithing. Compare these tasks to the capabilities of AI tools currently on the market:
“Create custom, on-brand, and attention-grabbing designs in seconds.” (Canva’s Magic Design)
“Start writing in the editor and Copilot will suggest code as you type.” (GitHub’s Copilot)
“Be perfectly professional, clear, and convincing in a few clicks, not a few hours.” (Grammarly)
These AI tools accelerate or offload the completion of individual, lower-level tasks, shifting our focus to bigger-picture orchestration. Craft is evolving into systems thinking, a holistic paradigm that examines how the parts of a system interact and influence one another within a whole.
This shift is already underway. I'll illustrate this using my field of UX as a case study.
The State of UX’s 2024 survey highlights how AI is streamlining design processes and eliminating manual tasks:
“Figma as we know it today won’t be here for much longer. Once your design library is connected to code and AI is smart enough to build ad-hoc interfaces on the fly, the designer's role as an intermediary becomes less important. Soon, Figma’s primary audience will no longer be designers, but anyone in the org - a shift that is already well underway.”
This shift signals the need for the UX practitioner’s role to adapt. What does this adaptation look like?
A Systems Thinking Skillset
Let’s start by unpacking “big-picture orchestration”.
According to the same State of UX survey, UX Research and UX Strategy will become more critical than ever, alongside envisioning (think strategic foresight and speculative design) and conceptual design. AI isn’t currently able to find the right use cases for new technologies, or to figure out the next generation of interaction paradigms for Augmented Reality (AR) or multimodal AI agents.
Even with the rapid emergence of increasingly powerful AI tools, there’s a very good chance that we’ll still need humans guiding AI’s “UX capabilities” (see Apple’s recent paper that found no evidence of formal reasoning in LLMs, and pioneer Yann LeCun’s thoughts on AI being “dumber than a cat”).
To effectively art-direct AI in service of “the big picture”, we need the ability to rapidly frame and reframe the problem space from different stakeholders’ perspectives and altitudes (e.g., individual, society, environment). This helps us align the concrete outputs of AI with the more abstract-but-critical foundations of people’s needs, and anticipate potential consequences (and mitigate risks) of introducing our designs into the world.
I like to call this playing with zoom levels. These skills have been a part of the advanced UX practitioner’s playbook, even before modern AI tools came on the scene. However, it’s now urgent that these skills are introduced and intentionally practiced from Day 0 of a practitioner’s education.
These skills entail not only thinking across current ecosystems, but considering potential futures for those ecosystems, to examine the implications of a given product, service or platform. This skillset will help us overcome the all-too-common flatpack futures, in which future systems are “imagined and readymade products that people will choose to adopt” (from The Manual of Design Fiction).
What do flatpack futures look like? Examples include AI-enabled lapel pins or smart glasses that lack interoperability with our current smartphones. They often don’t consider the societal implications of adoption. In isolation, these solutions are impressive technological feats, but they incorrectly assume that people will immediately adopt the new and discard the old.
After a decade working in extended reality, I find that spatial computing product announcements often feel like déjà vu. For instance, SixthSense, a gesture-based wearable computing system from MIT in the late 2000s, fits right in 15 years later alongside the recent announcement of Meta’s Orion AR glasses. We’ve designed the tech, but we haven’t adequately designed for its societal implications, such as the “Bystander Exploitation Problem”, as spatial computing pioneer Avi Bar-Zeev has discussed.
Thinking in systems can help us design products that fit within people’s lives (read: adoption and return use) and benefit society.
Takeaways
Recapping today’s episode, we covered how:
AI tools are redefining craft for knowledge workers, shifting our roles towards big-picture orchestration.
Systems thinking is essential for future-proofing careers - especially in UX - as AI tools streamline manual tasks.
Key systems thinking skills involve rapidly framing and reframing the problem space from multiple stakeholder perspectives and altitudes, anticipating potential future scenarios, and critically assessing the societal impacts of emerging technologies. These skills are often central to UX Research and Strategy, particularly in practices like strategic foresight and speculative design, and need to become part of foundational UX education.
⏩ Next Week
By now, you might be wondering - how do we effectively think in systems if we haven’t put in the hours as detail-oriented craftspeople? Put another way, how will UX practitioners develop taste that sets them apart in a sea of AI-generated content? I’ll tackle this topic in next week’s newsletter, connecting it to the future of UX education and sharing an exciting announcement.
Human-Computer Interaction News
Humans Treat AI as Social Beings, but This Fades in Younger Generations: Research from Imperial College London found that participants treated AI agents as social beings in a game of Cyberball. Participants compensated for an ostracized agent by tossing the ball to them more frequently, just as they would with an ostracized human. Younger participants were less likely to follow this norm.
Anthropic Announces Upgraded Claude Model that Can Use Computers: Anthropic announced an upgraded Claude 3.5 Sonnet, and a new model, Claude 3.5 Haiku. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. That is, developers can direct Claude to use computers the way people do - by looking at a screen, moving a cursor, clicking buttons, and typing text.
AI Simulation Gives People a Glimpse of their Potential Future Self: 'Future You', a generative AI tool from MIT, allows users to simulate a conversation with a potential version of their future selves. The chatbot is designed to reduce anxiety, boost positive emotions, and guide users toward making better everyday decisions.
Designing emerging technology products? Sendfull can help you find product-market fit. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull next week.