ep. 83. The Lawn Mower that Ate the Soccer Field
5 min read
The German amateur soccer club SG Egels-Poppens recently found their playing field marked by deep ruts and patches of torn earth. Was it vandalism? A spirited scrimmage the night before? The recent rain?
None of these things.
The culprit was the team’s own autonomous lawnmower robot. They’d even given it a name: Robby.

The field was wet after a rainstorm, and Robby got mired in the soggy ground. The robot continued “mowing” long after its control system should have detected something was wrong and halted operation.
The damage was so extensive that the club won’t be able to host matches until spring, when the field can be restored with new grass or turf.
Why This Matters
This incident is both captivating and instructive because Robby should be the textbook example of automation done right. You’d be hard-pressed to pick a better task to automate than mowing the lawn:
It’s repetitive.
It operates in a bounded environment.
It requires minimal expertise.
Most people won’t miss doing it.
It requires relatively little context*.
*Clearly Robby failed on this front, but we can agree that a robot mower operating in a confined space requires less context than, say, negotiating a client deal.
The problem lies in Robby’s design: autonomous systems need fail-safes.
Redesigning Robby
We can look to safety-critical system design for tips on how to redesign Robby, saving SG Egels-Poppens a season of heartbreak. These tips also double as guidance for designing any autonomous system (think agentic workflows).
Wait, what does safety-critical mean? It’s a scenario where system failure could lead to significant harm, such as injury, environmental damage, or major financial loss. Commercial aviation is one example of a safety-critical domain.
Monitoring
The first step to fixing a problem is knowing you have a problem.
The current mower lacked situational awareness, failing to perceive relevant environmental cues (soggy grass, getting stuck in the mud), interpret them against its operational goals (mow grass, not create ruts), and update its actions accordingly.
A redesigned Robby would need continuous feedback loops between sensors, expected system behavior, and decision thresholds. For example, monitoring wheel torque against forward motion would have allowed the system to detect that it was exerting effort without making progress.
With these inputs, Robby’s control logic could have transitioned the mower into a safe, low-risk state (say, moving slower, pausing, backing up to firmer ground) rather than persisting.
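The monitoring logic described above can be sketched in a few lines. This is a hypothetical illustration, not Robby’s actual firmware: the sensor values, threshold constants, and state names are all assumptions made for the example.

```python
# Hypothetical monitoring check for a robot mower: compare motor
# effort against forward progress each control tick. High effort
# with no progress suggests the mower is stuck in soft ground.

STALL_TORQUE = 0.8   # normalized torque (0-1) considered "high effort" (assumed)
MIN_PROGRESS = 0.05  # meters moved per tick considered "making progress" (assumed)

def check_progress(wheel_torque: float, distance_moved: float) -> str:
    """Return the next state: keep mowing, or drop to a safe state."""
    if wheel_torque > STALL_TORQUE and distance_moved < MIN_PROGRESS:
        # Exerting effort without moving: pause, slow down, or back up.
        return "safe_state"
    return "mowing"
```

Run inside the control loop, a check like this gives the system a chance to notice the rut-digging failure mode before it does lasting damage.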
Redundancy
Redundancy supports resilience by providing multiple channels for a system to notice when it is no longer operating within safe limits.
For Robby, redundancy would have ensured there was more than one way to detect and respond to abnormal field conditions. One option would be to combine wheel slip detection with independent measures such as soil moisture or GPS-based movement checks. If any one sensor failed, a second cue could still trigger a halt.
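A minimal sketch of that redundancy, assuming three independent (and hypothetical) signals: any single channel reporting trouble is enough to halt, so one failed sensor can’t mask an abnormal condition.

```python
# Hypothetical redundant stuck-detection: three independent channels,
# combined with OR logic so any one can trigger a halt.

def should_halt(wheel_slip: bool, soil_too_wet: bool, gps_stalled: bool) -> bool:
    """Halt if ANY independent signal reports an abnormal condition."""
    return any([wheel_slip, soil_too_wet, gps_stalled])
```

The design choice here is deliberate asymmetry: a false halt costs a few minutes of mowing, while a missed halt costs a season of matches.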
State Validation & Escalation
It’s ok to ask for help.
Once an operational issue was detected, Robby’s system could have triggered a state validation step: halt operation and request intervention (a.k.a. escalation). Enter human-in-the-loop exception handling. This might look like having the robot send a picture of the field to a human when it detected something was off about its operating conditions.
One problem: Robby runs overnight. No one wants to be woken up at 1am by a notification from their lawn robot.
What if you could get another machine to look at the picture and decide for you? It’s increasingly common for AI agents to verify the work of other agents, so this isn’t far-fetched. While adding an ‘agent checker’ might be overkill for most lawn mowing robots (well, maybe not in the SF Bay Area), it would reduce the number of times a human has to assess the situation.
If you were designing automation for a high-stakes, contextually rich task, you’d want human judgement in the loop immediately upon exception-detection.
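The escalation ladder described above might look something like this. Everything here is an assumption for illustration: the checker verdicts and next-step names are invented, and in practice the “checker” could be an image-classifying agent reviewing the mower’s photo.

```python
# Hypothetical escalation ladder: after the mower halts and snaps a
# photo, an automated checker reviews it first, and a human is only
# notified when the checker can't make a confident call.

def escalate(checker_verdict: str) -> str:
    """Map the agent checker's verdict to the mower's next step.

    checker_verdict: 'safe', 'unsafe', or 'unsure' (assumed labels).
    """
    if checker_verdict == "safe":
        return "resume"        # false alarm: keep mowing
    if checker_verdict == "unsafe":
        return "stay_halted"   # wait for morning, log the incident
    return "notify_human"      # ambiguous: human judgement needed
```

For a high-stakes, contextually rich task, you would collapse this ladder and route every exception straight to `notify_human`.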
The Ghost in the Machine
Big-big picture: No matter how many machines are checking the work of other machines, you still need a human to orchestrate the machines at some point. This is a key principle of human factors research: automation simply shifts human work upstream. You move from doing the thing to verifying the output of your automated system. With AI agents, you might be the human setting up the thing that’s verifying the output (of the thing doing the thing. Say that five times fast).
Sometimes verifying the output can be more taxing than just doing the thing in the first place. This is how automation can paradoxically exacerbate, rather than eliminate, challenges for human operators. Lisanne Bainbridge described this phenomenon in her classic paper, the Ironies of Automation.

That said, in the case of autonomous lawn mowing, even the occasional verification is probably worth not having to regularly mow a giant field.
Now What?
Remember Robby when designing autonomous systems:
Design for Monitoring: Give autonomous systems the ability to notice when their behavior no longer matches expected conditions. Continuous sensing, comparison against norms, and clear decision thresholds prevent silent failure states.
Add Redundancy: Don’t let a single signal (say, sensor input or LLM output) be solely responsible for determining outcomes. Add multiple ways to detect when performance is going off the rails.
Plan for Escalation: Define how and when the system asks for help. Low-stakes tasks may allow automated validation, while complex or high-consequence scenarios require timely human judgement to interpret ambiguity and reset course.
All of that said: Whether it’s orchestration, monitoring, or verification, people are still part of the system. Automation isn’t all-or-nothing. It’s a partnership between humans and machines.
This episode is a sneak peek into topics from my forthcoming book with Rosenfeld Media, Designing Automated Futures. 🔗 Sign up to be the first to know about new book releases, sales, and events.
🚀 Sendfull in the Wild
Recording of my November 21 talk, Still in the Loop: Leading with Human-Centered Design in the Age of AI, at the University of Toronto Faculty of Information.
Recap of the AI & Human-Centered Design Panel at the Rotman School of Management, written by event co-host, DesignMeets.
⏪ Recent Episodes
ep. 82: Live from Toronto: AI x Design Panel Recap & Reflections
ep. 81: What Toyota Can Teach Us About AI Automation
ep. 80: Automation in the Field: A Conversation with Konstantinos “Kostas” Kandylas
📖 Good Reads
Half My Day Is Robots Now by David Kaye: A framework for how you’ll feel about AI, based on what you do for a living.
Where Are You Actually Trying to Go? by haley rose: How to map your long-term goals (not just for work).
The future of AI-powered sales with Vercel COO, Jeanne DeWitt by Lenny Rachitsky: What go-to-market looks like in the AI era.
That’s a wrap 🌯 . More on UX, HCI, and strategy from Sendfull in two weeks!