ep. 64: Isn't it Ironic? What AI Design Can Learn from Decades of Human Factors Research

It’s January 7, 1994. It’s almost 10:30pm, and there’s a snowstorm in Columbus, Ohio. United Express Flight 6291 is preparing to land at Port Columbus International Airport, with eight people aboard - five passengers, two flight crew, and one flight attendant.
The pilot had a history of low confidence in his manual control skills and a heavy reliance on autopilot during nighttime low-visibility approaches. This night was no exception.
The flight crew failed to monitor the aircraft’s airspeed during final approach, and the plane crashed just short of the runway. The pilot, copilot, flight attendant, and two passengers were killed.
This tragic example illustrates a well-documented phenomenon in the human factors literature - overreliance on automation, also known as misuse. It is part of a decades-long body of findings - and recommendations - on how to design effectively for automation.
Guess where else we’re seeing a lot of automation? AI design.
Whenever we use a genAI tool to synthesize key themes across PDFs that we previously would have done ourselves, or generate code that we previously would have written, we are automating parts of our workflow. Sometimes, we overrely on this AI automation, which is where we start to see issues like lower levels of critical thinking - especially in those who are younger and have less education (read: “low confidence in their manual control skills”).
Today’s episode explores what genAI can learn from decades of human factors research.
Human Factors: When Automation Shifts the Burden
Human factors is the science of understanding human capabilities and limitations, and applying that knowledge to design better products and environments. The primary goals of the field, according to the Human Factors and Ergonomics Society, are “to reduce human error, increase productivity, and enhance safety, comfort, and enjoyment for all people”. Pretty great, right?
Historically, human factors has shaped aviation, medical devices, and industrial automation - domains where poor design can have catastrophic consequences. One of the key lessons from decades of research in the field is that automation doesn’t simply replace human work - it reshapes it.
When tasks are automated, the human role shifts from doing to supervising, which paradoxically can increase cognitive load. We can trace this phenomenon back to Bainbridge’s Ironies of Automation (from 1983!), which highlight how automating industrial processes can exacerbate rather than eliminate challenges for human operators.
When automation handles routine tasks, humans are left with oversight responsibilities but may not be engaged enough to stay sharp. Over time, skill degradation can make manual takeovers more difficult. Additionally, supervisory roles tend to be more cognitively demanding, requiring humans to interpret system outputs, diagnose errors, and make higher-order decisions - all of which can increase mental workload. In short, the more advanced automation becomes, the more critical human oversight becomes.
We see these ironies playing out today in AI-assisted creative and analytical work: genAI often “makes easy tasks easier, while making complex tasks harder”. Let’s dive deeper into how these tensions are unfolding with genAI.
Ironies of AI: When Help Becomes Hindrance
The promise of AI is to reduce effort and increase efficiency, yet research shows that in many cases, genAI introduces new friction points:

The Production-to-Evaluation Shift: As AI automates content creation, human users shift from producing work to evaluating AI-generated outputs. This change increases cognitive load, as users must assess reliability, correctness, and relevance - often without full context, leading to inefficiencies and potential errors.
Unhelpful Workflow Restructuring: AI integration can disrupt established workflows by introducing new tasks, such as prompt engineering or excessive revisions, which can reduce efficiency. Rather than streamlining processes, poorly integrated AI can fragment work, forcing users to adapt in counterproductive ways.
Task Interruptions: AI-generated suggestions, while intended to assist, can disrupt user flow by introducing distractions that require additional cognitive effort. Frequent, ill-timed, or unnecessary AI interventions can break concentration, leading to inefficiencies rather than productivity gains.
Task-Complexity Polarization: AI often makes simple tasks even easier while complicating difficult ones. By automating straightforward processes but requiring human oversight for complex decisions, AI can increase the cognitive burden on users when they need to intervene in high-stakes scenarios.
True to the ironies of automation, the more advanced AI systems become, the more crucial human oversight will be. Poorly designed automation often leaves humans underprepared to intervene when needed - arguably the stage we’re in now with current LLM UIs.
Beyond the Substitution Myth: Designing AI as a Collaborator
One of the most persistent misconceptions in AI design is the Substitution Myth - the idea that automation can directly replace human tasks without fundamentally changing system dynamics. This assumption ignores how automation transforms the human role, often in unintended ways. Rather than treating genAI as a mere tool for efficiency, we need to design it as a collaborator that supports human expertise.
“Rather than viewing AI as a simple tool for efficiency, it should be seen as a collaborator that requires intentional design to preserve human creativity and judgment.” - Shukla et al. (2025)
Here are five ways we can design genAI as a collaborator, addressing the productivity challenges we examined earlier, courtesy of Simkute et al. (2024):
Continuous Feedback and Explainability: AI systems should provide ongoing, interpretable feedback to help users understand how and why decisions are made. Without clear explanations, users struggle to assess AI outputs, leading to errors, distrust, and inefficiencies in human-AI collaboration.
System Personalization: AI should adapt to individual users’ expertise and preferences rather than enforcing a one-size-fits-all approach. Personalized settings can help reduce cognitive overload by tailoring AI assistance to user needs, improving efficiency and trust in automation.
Ecological Interface Design: AI should be integrated into workflows in a way that aligns with human cognitive processes, rather than forcing users to adapt to poorly designed interfaces. By presenting information in a structured, intuitive manner, AI can enhance situational awareness and decision-making.
Attend to Interruption Timing: AI interventions should be strategically timed to avoid disrupting users’ focus on primary tasks. Poorly timed AI prompts can break concentration and create inefficiencies, while well-timed assistance can support productivity without adding cognitive strain.
Clear Task Allocation: AI should have clearly defined roles that complement, rather than compete with, human decision-making (more on this with the Cognitive Offloading Matrix). When AI responsibilities are ambiguous, users struggle with misplaced trust or over-reliance, leading to inefficiencies and errors.
A classic human factors paper captures the essence of human-AI coordination: “How do we make them get along together?”
Takeaways
GenAI Design can Learn from Human Factors.
Designing AI that thinks with us, not for us, requires drawing from decades of human factors research. While we’re automating increasingly complex thinking tasks at an unprecedented pace, we are still automating - and there’s a wealth of knowledge on how to design automated systems effectively. (For a related perspective, check out this episode on how 50 years of aviation human factors research can inform spatial computing design.)
Automation Doesn’t Replace Human Work - It Reshapes It.
The ironies of automation remind us that as automation advances, human oversight becomes even more critical: easy tasks become easier, while complex tasks often become harder. This shift redefines our roles - not as producers, but as orchestrators - redefining craft from doing the thing to directing AI to do the thing.
Design for AI as Collaborator.
GenAI shouldn’t just be a tool for efficiency - it should be designed as a collaborator that enhances human expertise. Achieving this requires principles like continuous feedback and explainability, system personalization, ecological interface design, mindful timing of AI interventions, and clear task allocation.
Human-Computer Interaction News
Gemini Robotics brings AI into the Physical World: Google introduced Gemini Robotics, their Gemini 2.0-based model designed for robotics.
A Lifelike Prosthetic Hand: Researchers at Johns Hopkins University have developed a lifelike prosthetic hand that combines soft, flexible joints with a sturdy inner structure and advanced touch sensors. This allows individuals with limb loss to grip and identify objects with high accuracy, outperforming current prosthetics.
“AI Has a Seat in the C-Suite” Survey: A survey conducted by Wakefield Research and sponsored by SAP polled 300 American C-level executives at companies with at least $1 billion in annual revenue. 44% reported they would override a decision they had previously planned based on AI insights.
Designing emerging technology products? Sendfull can help you find product-market fit. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull in two weeks!