ep. 81. What Toyota Can Teach Us About AI Automation
5 min read
“The goal is full automation so we can stop hiring people.”
This sentiment captures a dominant mindset underlying much of AI development. Look no further than the I-80 billboards on the drive into San Francisco—notably Artisan’s controversial 2024 “stop hirring humans” ad campaign.

The typo is intentional. It suggests that Artisan’s AI employees are an improvement over us error-prone flesh bags.
While the campaign was designed to “rage bait”—and Artisan later said that their real goal is to automate work humans don’t enjoy—this wasn’t a baseless PR stunt. The belief that humans are the rate-limiters to maximum efficiency runs deep.
The Dream of Full Automation
Frederick Winslow Taylor—the mechanical engineer who authored The Principles of Scientific Management (1911), named the most influential management book of the twentieth century by the Academy of Management—saw human labor as the weak link in the industrial system.
To him, workers were unreliable machines. He sought to identify the most efficient way to complete a task, and then select and train a worker to complete exactly that task. This approach would get people to operate with the predictability of machines, boosting productivity. Two years later, Henry Ford translated that logic into practice, introducing the first moving assembly line to mass-produce cars, realizing Taylor’s vision of efficiency.
Imagine if Taylor knew about our current efforts to automate cognitive work with AI agents. What could be more efficient than replacing inefficient knowledge workers?
At the root of this worldview is the belief that humans are inherently limited, and that machines can help us transcend those limitations. Sam Altman’s 2016 interview in The Atlantic captures this perspective:
“There are certain advantages to being a machine. We humans are limited by our input-output rate—we learn only two bits a second, so a ton is lost. To a machine, we must seem like slowed-down whale songs.”
If fewer humans mean greater efficiency, then full automation seems like the logical endgame. Artificial General Intelligence is the Bach to our human slowed-down whale songs.
The Cost of Humans Out of the Loop
The trouble is, designing humans out of the loop tends to yield the opposite of limitless productivity. Consider a few examples:
General Motors (1980s): Chairman Roger B. Smith sought to revitalize GM by building a fully automated “factory of the future.” What started as a $52 million investment ballooned into a $40 billion initiative. The result? Robots misidentifying car models, misattaching parts, and painting each other instead of vehicles. The Michigan factory that was the centerpiece of this effort closed in 1992.
Boeing (2015–2019): The company introduced the “Fuselage Automated Upright Build” system to drill and fasten aircraft sections. Except the robots frequently produced damaged or incompletely assembled fuselages, forcing Boeing to bring back human machinists.
Klarna (2024–2025): The fintech firm replaced customer service agents with ChatGPT-powered AI handling over two-thirds of customer chats. Within a year, plummeting service quality and unempathetic or incorrect responses led Klarna to rehire human employees.
It turns out that you do need humans in the loop. The challenge isn’t whether to involve them, but how. The Toyota Production System offers a blueprint for designing automation that keeps humans in the foreground, whether in manufacturing or AI.
The Loom that Knew When to Stop
In 1896, Sakichi Toyoda, founder of Toyoda Automatic Loom Works (the foundation for Toyota), invented the Toyoda Power Loom. It quickly earned a reputation for producing high-quality cloth thanks to a weft-stopping mechanism that automatically halted the machine when a thread broke or ran out. This prevented defective fabric and reduced waste while allowing workers to focus on fixing issues rather than constantly monitoring the loom. The idea became the foundation of jidoka, or “automation with a human touch,” sometimes called autonomation.

Automation With a Human Touch
Fast forward to post-WWII Japan. The country’s manufacturing industry was devastated. Toyota, which had shifted its focus from textiles to automobiles in 1937, struggled to compete with resource-rich American automakers like Ford, which focused on mass production. Toyota worked within its constraints and developed the Toyota Production System, built on two pillars: jidoka and just-in-time manufacturing, which means producing only what is needed, when it’s needed.
The Toyota Production System would transform Toyota into a global leader in the auto industry, improving quality and efficiency while cutting waste and cost. Jidoka was key to this success. Two of its most famous implementations were:
Andon cords: Each worker could pull a cord (or press a button) to stop the line if they noticed an issue, such as a defect, safety hazard, or malfunction. This process empowered front-line workers to halt production, immediately addressing issues that would otherwise degrade quality.

A worker pulls an andon cord on a manufacturing line. Source: Toyota.
Mistake-proofing (poka-yoke): Machines were designed to detect and prevent human error—for example, by refusing to proceed if a bolt wasn’t fully seated. Just like the Toyoda Power Loom, this meant humans didn’t need to watch the machines continuously to maintain quality. This also let an operator manage several machines simultaneously.
Fun fact: Japanese industrial engineer Shigeo Shingo introduced the concept of mistake-proofing at Toyota in the 1960s. Originally termed baka-yoke (“fool-proofing”), he later revised it to poka-yoke (“mistake-proofing”) to show respect for workers. The change reflected a broader philosophical shift, from attributing errors to individuals to designing systems that make mistakes less likely to occur.
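Poka-yoke has a natural analogue in software: design your types so that an out-of-spec value can’t be constructed in the first place, rather than relying on people downstream to double-check it. A minimal Python sketch of the bolt example (the torque limits, names, and spec are invented for illustration, not taken from any real system):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TorquedBolt:
    """A torque reading that can only exist if it passed the check --
    the software analogue of a jig that won't close on a bad part."""
    newton_meters: float

    def __post_init__(self):
        # Illustrative spec limits; real values depend on the fastener.
        if not (40.0 <= self.newton_meters <= 60.0):
            raise ValueError(
                f"Torque {self.newton_meters} Nm out of spec; line stops here."
            )


def record(bolt: TorquedBolt) -> str:
    # Downstream code never re-checks: an out-of-spec reading
    # simply cannot be represented as a TorquedBolt.
    return f"logged {bolt.newton_meters} Nm"
```

The error is caught at the moment it occurs, at its source, instead of surfacing later as a defect someone has to hunt down.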
A Culture of Shared Responsibility
You might wonder: wouldn’t it be terrifying to be the person who pulls the andon cord?
This is where Toyota’s approach gets even more interesting. The andon cord represents a culture of shared responsibility. When a worker pulls the cord to signal a defect or delay, coworkers immediately “swarm” to the spot, and the first thing they do is thank the person who raised the issue. Pulling the cord is seen not as a failure, but as an act of duty and care to teammates, the company, and the customer.
Toyota’s leadership played a key role in creating this culture. Processes and incentives were designed to make accountability feel collective rather than punitive. Managers were expected to coach rather than blame. Continuous improvement (kaizen) was baked into everyday operations.
Where Ford’s production system emphasized technical efficiency, Toyota’s was sociotechnical. It integrated human judgment as an essential part of quality and learning, and built a culture where expressing that judgment simply became the right thing to do.
Takeaways for AI Automation
Toyota succeeded because it designed its manufacturing system for human intervention, not human absence. Human judgment and accountability were rewarded, not replaced. Automation served Toyota’s vision of delivering quality through efficiency, not efficiency alone. If you’re asking how to automate with AI, the same principles apply:
Automation Serves Mission
For Toyota, automation was a means to an end—a way to improve quality and reduce waste. This contrasts sharply with “AI-first” strategies that prioritize technology over purpose. Start with your organization’s mission: how can AI create customer value, not just reduce costs?
Incentivize Accountability
When a model behaves unexpectedly, what’s your equivalent of the andon cord? Do people feel empowered to raise concerns—and will leadership act on them? Build feedback loops where human judgment is expected and rewarded, with clear pathways for surfacing and resolving issues that impact product quality and trust.
Automate with a Human Touch
Toyota’s machines stopped automatically when an error was detected, prompting human review. Apply the same mindset to AI. Design for graceful handoffs between automation and human oversight. Build fail-safes that pause, flag, or escalate anomalies—keeping people in the loop where judgment matters most.
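As a concrete sketch of what an “andon cord” for an AI pipeline might look like: wrap the automated step so that low-confidence output stops the line and is handed to a human reviewer. All names, the threshold value, and the stubbed-in model below are illustrative assumptions, not from any particular framework:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


@dataclass
class Outcome:
    text: str
    handled_by: str  # "ai" or "human"


def with_andon(
    generate: Callable[[str], Draft],
    escalate: Callable[[str, Draft], str],
    threshold: float = 0.8,
) -> Callable[[str], Outcome]:
    """Wrap an automated step with an 'andon cord': low-confidence
    drafts halt automation and are escalated to a human reviewer."""
    def run(request: str) -> Outcome:
        draft = generate(request)
        if draft.confidence < threshold:
            # Pull the cord: pause automation, escalate with full context.
            revised = escalate(request, draft)
            return Outcome(text=revised, handled_by="human")
        return Outcome(text=draft.text, handled_by="ai")
    return run


# Stubs standing in for a real model and a human review queue:
def fake_model(request: str) -> Draft:
    confidence = 0.95 if "refund" in request else 0.4
    return Draft(text=f"Auto-reply to: {request}", confidence=confidence)


def human_reviewer(request: str, draft: Draft) -> str:
    return f"Human-reviewed reply to: {request}"


handle = with_andon(fake_model, human_reviewer, threshold=0.8)
```

Routine requests stay automated; anything the model is unsure about becomes a handoff rather than a silent failure—the same trade the Toyoda Power Loom made when it stopped on a broken thread.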
This episode is a sneak peek into topics from my forthcoming book with Rosenfeld Media, Designing Automated Futures. 🔗 Sign up to be the first to know about new book releases, sales, and events.
🚀 Sendfull in the Wild
Upcoming Events
DesignMeets x Business Design Initiative: AI & Human-Centered Design
When? November 19
What? I’ll be speaking on a panel about how human-centered design is evolving in the age of AI.
Where? Rotman School of Management, University of Toronto. Get tickets here.
Still in the Loop: Leading with Human-Centered Design in the Age of AI
When? November 21
What? I’ll be sharing a framework and tools from my forthcoming book, Designing Automated Futures, to help practitioners navigate and thrive in the evolving AI landscape.
Where? In person at the University of Toronto and online. Get tickets (free!) here.
⏪ Recent Episodes
ep. 80: Automation in the Field: A Conversation with Konstantinos “Kostas” Kandylas
ep. 79: Principle #3: Just because you can automate, doesn’t mean you should.
ep. 78: Principle #2: Technology can automate tasks, not jobs.
📖 Good Reads
A compilation of the latest AI statistics (Exploding Topics)
AI enterprise adoption report (The Wharton School)
Korea rolls back AI-powered textbooks (Rest of World)
That’s a wrap 🌯 . More on UX, HCI, and strategy from Sendfull in two weeks!

