“We’ll just replace the manual review step with AI.”
“Using our agent, customers now resolve their requests in less than 2 minutes.”
“The goal is full automation so we can stop hiring people.”
Sound familiar? Each of these quotes is grounded in recent headlines: Meta replacing human privacy and integrity evaluators with AI review; Klarna touting efficiency metrics after automating customer service (and then rehiring human representatives earlier this year); and Artisan’s controversial “stop hiring humans” ad campaign.
The underlying theme across all three quotes is the overpromise of AI-driven replacement: a belief that automation can fully substitute for human work, without acknowledging how automation reshapes the surrounding system, be it a customer’s workflow or your organization’s broader service ecosystem.

How you automate — how you decide what to delegate to AI, how you sequence the rollout, how you communicate those decisions — is the difference between project success and failure. And we’re currently seeing more of the latter: Asana’s Work Innovation Lab found that two-thirds of organizations are failing to scale AI across their business, and IBM reported that only 25% of AI initiatives have delivered the expected ROI, with just 16% scaling enterprise-wide.
This is a big topic. So big, in fact, that I’m writing a book to help organizations answer the question: How do we automate? For today’s newsletter, I’m scaling down to focus on three cognitive traps that AI teams often fall into, each tied to one of the quotes at the start of this article.
Working on agentic workflows? I’d love to interview you for my upcoming book - reach out at stef@sendfull.com
Three Cognitive Traps to Avoid
Wait, what’s a cognitive trap? The term comes from cognitive psychology and refers to a recurring mental shortcut or flawed assumption that leads people to make poor decisions, often without realizing it.
Trap #1: The Substitution Myth
“We’ll just replace the manual review step with AI.”
The substitution myth is the assumption that automation can directly replace human tasks without fundamentally changing system dynamics. Decades of human factors research have demonstrated that automation doesn’t merely replace human activity; it changes it.
For example, when you move from doing tasks manually (flying a plane) to automating some of those tasks (autopilot), you shift the human’s role from doing to monitoring. Automation can even paradoxically introduce new challenges as this shift takes place — a phenomenon known as the Ironies of Automation. Monitoring tends to be more boring than doing, which can lead to lapses in attention that cause the user to miss important events. Over time, it can also lead to skill atrophy, which becomes a serious problem if the system ever requires manual takeover.
How to avoid the trap? Addressing the substitution myth starts with recognizing that automation never happens in isolation — changing one part of the system changes the broader system. To go one step deeper, use systems thinking tools like the Futures Wheel to explore how automation reshapes roles, processes, and relationships. Ask (and map!): What first-, second-, and third-order changes could occur when this task is automated? What new skills or responsibilities will be required? Identify your highest priority opportunities and risks, and revise your approach accordingly.
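To make that mapping concrete, here’s a minimal sketch of how a team might capture a Futures Wheel as a simple data structure, with a traversal that pulls out risks for triage. The node structure and the example consequences are hypothetical illustrations, not part of any formal Futures Wheel tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    """One node in a Futures Wheel: an effect of the change being mapped."""
    description: str
    order: int          # 1 = first-order effect, 2 = effect of an effect, ...
    kind: str           # "opportunity" or "risk"
    children: list["Consequence"] = field(default_factory=list)

# Hypothetical wheel for "replace the manual review step with AI"
wheel = Consequence("Replace manual review with AI review", 0, "change", children=[
    Consequence("Reviewers shift from doing to monitoring", 1, "risk", children=[
        Consequence("Attention lapses on rare, high-severity cases", 2, "risk", children=[
            Consequence("Review skills atrophy, making manual takeover harder", 3, "risk"),
        ]),
    ]),
    Consequence("Review throughput increases", 1, "opportunity"),
])

def collect(node: Consequence, kind: str) -> list[Consequence]:
    """Flatten the wheel, collecting every effect of the given kind for triage."""
    found = [node] if node.kind == kind else []
    for child in node.children:
        found.extend(collect(child, kind))
    return found

for risk in collect(wheel, "risk"):
    print(f"order {risk.order}: {risk.description}")
```

Even a toy structure like this forces the useful question: which second- and third-order risks deserve mitigation before rollout, not after?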
Trap #2: The McNamara Fallacy
“Using our agent, customers now resolve their requests in less than 2 minutes.”
The McNamara Fallacy is the error of relying exclusively on quantitative metrics to evaluate success, while ignoring qualitative factors that seem harder to measure but are often more important. The fallacy is named after U.S. Secretary of Defense Robert McNamara, who famously prioritized the number of enemy casualties (the “body count”) as a primary success metric during the Vietnam War.
Applying the fallacy to automation: take the case of Klarna. The company measured AI performance on customer service tasks using metrics like response time and chat volume, while overlooking harder-to-quantify outcomes such as customer satisfaction and issue resolution.
A shorter call because your customer rage-quit the session, stuck in a loop with an AI chatbot, is not something to celebrate. Yet it will show up on a dashboard as “shorter call times,” creating a veneer of success while the quality of the user experience declines.
How to avoid the trap? Anchor your automation goals in success as defined by your target audience. This requires returning to foundational questions like: What would a meaningful improvement to this workflow look like to the audience we’re serving? How can we measure that? Techniques like journey mapping and co-defining success metrics with users can help gain clarity on what matters, shifting the focus of measurement from outputs to outcomes.
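As a quick illustration of that shift from outputs to outcomes, here’s a small sketch contrasting an output metric (average handle time) with outcome metrics (user-confirmed resolution and abandonment). The session fields and numbers are invented for the example:

```python
from statistics import mean

# Hypothetical session logs: duration in minutes, whether the user confirmed
# their issue was resolved, and whether they abandoned mid-conversation.
sessions = [
    {"minutes": 1.5, "resolved": False, "abandoned": True},   # rage quit: fast but failed
    {"minutes": 1.8, "resolved": True,  "abandoned": False},
    {"minutes": 2.1, "resolved": False, "abandoned": True},
    {"minutes": 1.2, "resolved": True,  "abandoned": False},
]

# Output metric alone: looks like a win.
print(f"avg handle time: {mean(s['minutes'] for s in sessions):.1f} min")

# Outcome metrics: tell a different story.
resolution_rate = mean(s["resolved"] for s in sessions)
abandonment_rate = mean(s["abandoned"] for s in sessions)
print(f"resolution rate: {resolution_rate:.0%}, abandonment: {abandonment_rate:.0%}")
```

On this invented data, handle time alone looks great, while the outcome metrics surface the rage-quits the dashboard would otherwise hide.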
Trap #3: The Full Automation Fallacy
“The goal is full automation so we can stop hiring people.”
The full automation fallacy is the mistaken belief that all human tasks can, or should, be entirely replaced by automated systems. This fallacy often stems from a desire to reduce costs or streamline operations. However, building fully autonomous systems is far easier said than done: it requires a deep understanding of the current system, plus a coherent vision, strategy, and execution for how the new system will be better, benefiting both the business and the customer.
The stakes of getting full automation wrong are high: eroded customer trust, significant wasted time and resources, and accountability gaps, especially when there’s no clear path for human oversight or escalation.
How to avoid it? Get clear on why you want to build a fully autonomous system. What value will full automation deliver to your target audience? Will it differentiate your company in the market? Reshape your operations?
From there, take a hard look at your organizational readiness. Do you have the right infrastructure, cross-functional alignment, and escalation pathways in place to support automation at scale? Even if something can and should be automated, that doesn’t mean your team is ready to support what comes next. A key topic in my book is how to align your automation vision with your organization’s readiness so you can build a sequenced strategy.
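As one example of what an escalation pathway can look like in practice, here’s a minimal, assumption-laden sketch of a routing policy that keeps a human in the loop. The intent names, confidence threshold, and retry limit are all placeholders a real team would calibrate against its own data:

```python
CONFIDENCE_THRESHOLD = 0.8          # illustrative; tune against real escalation data
SENSITIVE_INTENTS = {"billing_dispute", "account_closure", "legal_complaint"}

def route_request(intent: str, agent_confidence: float, attempts: int) -> str:
    """Decide whether the AI agent handles a request or a human takes over.

    A hypothetical policy: escalate when the agent is unsure, when the topic
    is high-stakes, or when the agent has already failed twice, so no user
    is ever trapped in a loop with no path to a person.
    """
    if intent in SENSITIVE_INTENTS:
        return "human"      # accountability: high-stakes topics get human oversight
    if agent_confidence < CONFIDENCE_THRESHOLD:
        return "human"      # uncertainty: don't guess on the user's behalf
    if attempts >= 2:
        return "human"      # loop-breaker: repeated failures escalate
    return "agent"

print(route_request("password_reset", agent_confidence=0.93, attempts=0))   # -> "agent"
print(route_request("password_reset", agent_confidence=0.55, attempts=0))   # -> "human"
print(route_request("billing_dispute", agent_confidence=0.99, attempts=0))  # -> "human"
```

The point isn’t the specific threshold; it’s that the path to a person is designed in from the start rather than bolted on after trust erodes.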
Human-Computer Interaction News
AI agents are struggling with task performance: Futurism magazine shares a sobering overview of AI agent performance. Counterpoint: these are early days for agents, and per Ethan Mollick’s book Co-Intelligence, “Today’s AI is the worst AI you will ever use.” My $0.02: Even as technical capabilities improve, their success will ultimately depend on how well we integrate them into workflows where they deliver meaningful value. Doing this effectively requires a deep understanding of user journeys and thoughtful service design.
RadGPT helps patients read radiology report information: The Cures Act Final Rule requires that patients have real-time access to their radiology reports, which contain technical language. Stanford researchers built and tested an LLM, RadGPT, to generate accompanying report explanations. The model produced personalized, high-quality explanations and Q&A pairs for each report, with a low risk of generating harmful content.
Four charts showing where AI could go next in the US: The Brookings Institution recently mapped the AI economy — specifically “which regions are ready for the next technology leap”. MIT Technology Review summarized their report into four charts.
That’s a wrap 🌯 . More human-computer interaction news from Sendfull in two weeks!