Principle #3: Just because you can automate, doesn’t mean you should.
Principles of Automation Series: Part III of III
Imagine a future where technology could reliably perform every task that people get paid to do. Just because we have the technological capability to automate these tasks, should we?
This question usually evokes economists’ work on technological unemployment, the loss of jobs due to new technologies that reduce the need for human labor. For instance, John Maynard Keynes predicted that by 2030, we would have a fifteen-hour work week due to technological progress and capital accumulation.
Something that stands out to me, looking across the economics literature of the 19th and 20th centuries, is a recurring assumption that technology will continue to advance. My take is that the scope of what can be automated will keep expanding, such that the question “Can we automate?” trends toward “yes” for an increasingly wide range of tasks.
Living in the bubble of the San Francisco Bay Area, it is easy to feel like we are uncomfortably close to this automated future - not just physical automation (think “factories of the future” and automated assembly lines), but also cognitive automation. If you don’t believe me, check out the billboards when you drive into San Francisco (see this fantastic August 2025 billboard photodump by Aaron Parecki).

LLM performance on “thinking tasks” has indeed been advancing at an unprecedented rate - perhaps unsurprising, given the vast sums of money being poured into AGI development (10× the Manhattan Project). For example, between 2023 and 2024, AI performance on benchmarks of PhD-level science questions and of multimodal understanding and reasoning increased sharply, by ~49 and ~19 percentage points, respectively.

Let’s try a thought experiment.
Put aside LLMs’ inherent unpredictability and the nontrivial organizational change management required to successfully implement automated workflows. Assume that whatever you could dream of automating could be automated. The technical capabilities are available and reliable. Your organizational readiness to automate is high. Should you?

Should you automate?
Last episode, I talked about the task-based view of labor. Per the classic economics paper The Skill Content of Recent Technological Change: An Empirical Exploration, there are two types of tasks:
Routine tasks: follow explicit rules and are easily codified. Think data entry, scheduling social media posts, and generating weekly sales reports.
Non-routine tasks: rely on experiential knowledge or judgment. Think negotiating a partnership, interpreting ambiguous data patterns, or problem reframing.
Routine tasks are good candidates for automation. Non-routine tasks - not so much, at least historically. As cognitive automation advances, we need to get more nuanced about where and how technology fits. One helpful steer is MIT Sloan’s 2024 EPOCH framework. The authors propose five groups of human capabilities that complement AI:
Empathy and Emotional Intelligence
Presence, Networking, and Connectedness
Opinion, Judgment, and Ethics
Creativity and Imagination
Hope, Vision, and Leadership
The researchers found that work in the United States is shifting toward tasks that draw on EPOCH capabilities. Jobs built around these kinds of tasks have grown faster, are hiring more, and look stronger over the next decade (sidenote: I appreciated their distinction between tasks and jobs).
By contrast, roles made up of tasks that can be fully automated, where AI replaces people rather than supports them, are shrinking. That means full replacement tends to hurt long-term growth and opportunity.
There’s also a middle ground: AI augmentation, aka partial automation. It improves productivity and value creation but doesn’t automatically lead to more jobs. To make it work, teams need to redesign roles and workflows so AI handles repetitive tasks while people focus on high-EPOCH work.
Takeaways
Technological capability doesn’t justify automation. Just because you can automate, doesn’t mean you should.
To answer the question “Should you automate?”, start with some broad rules of thumb:
Tasks that are rule-based and require minimal context to perform successfully are good candidates for automation. Even in highly automated systems, though, humans are usually still involved in oversight, feedback, or exception handling.
Tasks that draw on EPOCH capabilities should remain human-led. If automation is involved, it should play an augmenting role via partial automation.
Even considering these guidelines, the value of automation ultimately depends on how well it supports a person’s job-to-be-done and fits within their context, not merely on what it replaces. Keep an eye out for how “Could you do it?” and “Should you do it?” form the axes of a framework - the Autonomy Decision Matrix - from my upcoming book, which helps teams prioritize what to automate and sequence automation rollout across different levels of partial autonomy.
This is the final installment of a three-part series on the Principles of Automation - a peek into topics from my forthcoming book with Rosenfeld Media, Designing Automated Futures. 🔗 Sign up to be the first to know about new book releases, sales, and events.
🚀 Sendfull in the Wild
CSCW Workshop: Augmenting Collaborative Problem-Solving
When? October 19
What? I’ll be sharing my position paper, Rewriting the Product Development Playbook: Adapting Collaboration in the Age of Generative AI, calling for researchers and industry practitioners to co-develop and test frameworks to help teams navigate the new realities of building with - and for - AI products.
Product Pulse: Navigating AI and the New Product Landscape
When? October 29
What? I’ll be speaking on a panel about how AI is reshaping product responsibilities, tools, and careers, focusing on prototyping, validation, and design.
Where? Online. Get tickets here.
DesignMeets x Business Design Initiative: AI & Human-Centered Design
When? November 19
What? I’ll be speaking on a panel about how human-centered design is evolving in the age of AI.
Where? Rotman School of Management, University of Toronto (extra special for me! My home town and alma mater 🇨🇦). Get tickets here.
⏪ Recent Episodes
ep. 78: Principle #2: Technology can automate tasks, not jobs.
ep. 77: Principle #1: Automation is a Spectrum
ep. 76: A Parking Meter’s Quiet AI Design Wisdom
📖 Good Reads
What electronic music foretells about generative AI: The recent AI slop deluge, courtesy of the Sora app and Meta Vibes launches, has historical precedent in electronic music. The lesson? As creation gets easier, curation becomes both harder and more essential.
Japanese Gen Zers attempt to limit smartphone use to 2 hours per day: The central Japanese town of Toyoake (pop. 69,000) introduced a measure to cap phone time at two hours daily. No penalties are issued if you ignore it - it’s more like public health guidance against online addiction and sleep loss. What would need to be true to make this measure scale beyond Toyoake?
The next internet isn’t for us: OpenAI doesn’t want you to leave their platform, yet that’s exactly what we do. I download my cover image for this newsletter from ChatGPT and import it into Substack. It’s in every AI company’s best interest to keep us within their ecosystem. However, unlike the current internet, the “Model Web” isn’t built for people - it’s built for machines, and for the companies that run them.
That’s a wrap 🌯 . More on UX, HCI, and strategy from Sendfull in two weeks!


