Principle #2: Technology can automate tasks, not jobs.
Principles of Automation Series: Part II of III
“47% of American jobs are at risk of being automated.”
While this technopanic-inducing headline might feel like it was taken from a 2025 news article, it is actually from a 2013 study by Oxford professors Carl Benedikt Frey and Michael Osborne. They estimated the probability of computerization for 702 occupations, landing at the 47% figure. Just three years later, an OECD study revised that number dramatically downward: only 9%. Why the big gap?
Because definitions.
Frey and Osborne used an occupation-based approach, treating jobs as indivisible bundles. In contrast, OECD defined jobs as collections of tasks. This more granular, task-based analysis revealed that even in occupations labeled as “high risk” for automation, workers spend much of their time on subtasks machines can’t easily replicate - think face-to-face interaction, creative problem-solving, or applying context-sensitive judgment. Seen through this lens, automation targets slices of work, not entire jobs.
To make this more concrete, think about the tasks you performed yesterday. For me, this involved:
writing a research plan for a client
three one-on-one meetings
refining a lecture
scheduling an interview for my book
writing this newsletter
Each of these activities was a distinct task, though I still write “design researcher” in the occupation box on official paperwork. A job isn’t a “uniform, indivisible blob of activity” (thanks for the imagery, Daniel Susskind) - it’s a mosaic of tasks.
This is why you should be highly skeptical of the next pitch deck or software ad you hear where someone claims that their agentic workflow will automate design research, customer service, or any other blob of activity. What specific tasks are they targeting? Did they study the context in which specific tasks unfolded before they attempted to automate them? Have they considered that their “solution” might not replace a role, but simply reshape it?
How Economists Analyze Tasks
The task-based view of labor was formalized in a 2003 paper by economists David Autor, Frank Levy, and Richard Murnane. They distinguished between routine tasks, which follow explicit rules and are easily codified, and non-routine tasks, which rely on experiential knowledge or judgment (what Polanyi’s Paradox reminds us is hard to articulate, let alone program). If you can spell out a task step-by-step, a machine can likely do it. If it requires context, empathy, or imagination, it remains stubbornly human.
Polanyi’s Paradox is the observation that people hold “tacit knowledge” - experience-based skills that we can’t fully describe or codify into explicit instructions. As philosopher Michael Polanyi put it, “we can know more than we can tell.” Listing the steps for playing a piece on the violin won’t enable someone new to the instrument to play it.
The authors also found that computerization was associated with reduced labor input for routine tasks and increased labor input for non-routine tasks. That is, automation shifted the role of the human upstream.
Consider the case of bank tellers - a classic illustration of task-specific automation. Automated teller machines - better known as ATMs - began handling the routine cash-dispensing and deposit-taking tasks in the 1980s. One might expect that replacing these tasks would eliminate the bank teller job entirely. Yet the opposite happened: the number of bank tellers in the United States remained steady and even rose slightly in the years after ATMs proliferated.
While the number of routine transactions per teller declined, the demand for relationship-oriented financial service staff grew. The lesson? Automation carves off specific tasks, and humans shift to focus on the Polanyi-esque tasks that remain or newly emerge.
What Now?
By now, you’re hopefully convinced that technology can automate tasks, not jobs (if not, reach out - I want to improve the argument now, before my book is published). You’ve also seen how successful automation results in task redistribution, with the human doing more cognitively demanding, non-routine tasks, and the machine doing the routine, repetitive tasks.
So, how do you automate well in practice?
It starts with understanding your audience and the tasks they perform in context. Methods like field studies and contextual inquiry reveal how work gets done, while techniques like task analysis unpack and visualize those processes.
As automation accelerates, the question isn’t which jobs will vanish, but how tasks will be reshaped. Reframing the discussion this way turns it from predictions of loss to the challenge of designing technology that complements human work.
This is Part II of a three-part series on the Principles of Automation - peek into topics from my forthcoming book with Rosenfeld Media, Designing Automated Futures. 🔗 Sign up to be the first to know about new book releases, sales, and events.
⏪ Recent Episodes
ep. 77: Principle #1: Automation is a Spectrum
ep. 76: A Parking Meter’s Quiet AI Design Wisdom
ep. 75: Still in the Loop
📖 Good Reads
Tapping Into AI’s Conservatism by Jorge Arango
A Design Manager’s Playbook / Compass by Uday Gajendar
Ode to Pre-Run Coffee by Sam Robinson
That’s a wrap 🌯 . More on UX, HCI, and strategy from Sendfull in two weeks!




