
In the early 1950s, the Dayak people of Borneo experienced a severe outbreak of malaria. To address this issue, the World Health Organization (WHO) intervened by using DDT, a chemical aimed at eradicating the mosquitoes responsible for spreading the disease. As a result, the mosquito population decreased, leading to a reduction in malaria cases. Problem solved, right?
Not quite.
DDT entered the food web. Insects killed by DDT were eaten by geckos, which were in turn eaten by cats. The cats began to die, and the rat population skyrocketed. The rats brought with them two serious diseases, sylvatic plague and typhus, threatening the very people the DDT intervention was meant to help.
The WHO responded by having the Royal Air Force parachute live cats into Borneo - an effort known as Operation Cat Drop. An estimated 14,000 cats were deployed to hunt the rats and restore equilibrium.
I shared this story at the start of my semester this fall, reprising my role of teaching Introduction to User Experience Design to graduate students at UC Berkeley’s School of Information. What’s the moral of Operation Cat Drop, what’s its place in foundational design education, and what can it teach us about designing for generative AI (genAI)? Today’s episode explores these questions.
Why Design Needs Systems Thinking
Operation Cat Drop vividly illustrates how problem-solving can have unintended consequences when we don't consider how a solution impacts the broader system into which it is introduced. This lesson is highly relevant to design, which has often been framed - and taught - as a problem-solving activity. As design leader Hugh Dubberly has argued, we need to stop describing design as problem solving.
The world is not composed of clearly definable problems that can be neatly and objectively solved. As Dubberly explains, we are not watchmakers replacing a broken gear. Designers work within the reality of systemic issues, connected in dynamic networks of cause and effect. There are numerous stakeholders, and how you define a problem depends on your worldview (more on this in a past episode).
For instance, people designing a new pair of smart glasses might frame the problem as people looking down at their phones, unaware of the world around them. The one-to-one solution is a glasses wearable that surfaces key information to the end user while they stay heads-up. However, looking down at your phone (or holding it up to take a picture) is also a clear signal to bystanders about your intent. To bystanders, a wearable that looks just like everyday glasses may create more problems than it solves (see the original Google Glass "glassholes" phenomenon).

This is not a call for designers to stop innovating - quite the opposite. We must challenge ourselves to apply systems thinking, a holistic paradigm that examines how the parts of a system interact and influence one another within a whole. In the world of technology, this approach can help us design products that fit within systems. This is not only a more responsible way to design; it can also benefit adoption and retention. In the case of smart glasses, that means designing for bystander privacy and for interoperability with the customer's existing technology ecosystem (i.e., you're more likely to buy and wear smart glasses if you're not creeping out the people around you, and if they work well with your existing smartphone).
The Urgency of Systems Thinking for GenAI Design
The need to apply - and teach - systems thinking in design becomes even more urgent for genAI design. GenAI solutions are developing at an unprecedented rate, unlocking new problem spaces as each generation of foundation models makes it possible to cognitively offload increasingly complex tasks. My Problem-Solution Symbiosis Framework offers practitioners tools to guide the design of human-centered AI given these realities.
I’d argue that much of the public concern about genAI stems from designing these tools with little regard for the systems within which they operate.
You can find numerous examples of this in the AI news cycle. Consider Google AI Overviews: recent research suggests they are having a significant, negative impact on the visibility of news publishers in search results. A search for "who are the UK national newspaper editors," for example, yields an AI Overview based mainly on the only source that keeps an up-to-date list, Press Gazette. Yet the Press Gazette article from which Google draws most of its AI-written information is pushed down to the bottom of the second screen of mobile search results.
If we play this out, we can envision a future in which news publishers continue to disappear and the quality of the remaining content suffers. The AI Overviews solution had consequences for the broader system, beyond just getting a reader a quick summary (there's a significant case for futures thinking practice here, but that's for another episode).
This observation echoes a point made by Jodi Forlizzi, HCI leader and professor at Carnegie Mellon University, on Jorge Arango's podcast, The Informed Life:
“…there’s this prevailing notion that technology is bad, and we can use reframing to show that no, technology is not bad. Often it’s the system around the technology…”
Forlizzi gave an example of how AI and automation, especially following the pandemic, negatively impacted hospitality workers in Las Vegas, many of whom are immigrants, women, and older people with limited digital literacy. AI solutions such as algorithmic room assignments are often designed to prioritize efficiency over worker well-being. In other words, they are not designed for the system in which they will operate, and the people they affect are not included in the design process.
The results include greater wear and tear on workers' bodies (e.g., pushing a large cart long distances to clean the next auto-sequenced room, rather than batching rooms by location) and workarounds (e.g., marking all rooms as in-progress), which in turn create disconnects with operations and management (e.g., linens don't get moved).
Forlizzi's research, conducted in collaboration with unions, shifted toward a systems approach: workers were involved in the design of the AI tools - for example, by sequencing their own tasks - improving autonomy and job satisfaction while mitigating the negative effects of automation. This is a great example of applying systems thinking to AI design, considering AI technology within the broader systems of products, services, and people.
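To make the scheduling contrast concrete, here is a minimal sketch of the two approaches. The data, function names, and sort keys are hypothetical illustrations for this newsletter, not taken from Forlizzi's research or any real hotel system:

```python
# Hypothetical illustration: two ways to sequence a housekeeper's room queue.
# All data and names here are invented for this sketch.

rooms = [
    {"room": "1204", "floor": 12, "priority": 1},  # e.g., a checkout needing a fast turn
    {"room": "0310", "floor": 3,  "priority": 2},
    {"room": "1201", "floor": 12, "priority": 3},
    {"room": "0305", "floor": 3,  "priority": 1},
]

def efficiency_first(rooms):
    """Auto-sequence purely by priority: maximizes turnover speed,
    but can send a worker bouncing between distant floors."""
    return sorted(rooms, key=lambda r: r["priority"])

def location_batched(rooms):
    """Batch rooms by floor, then order by priority within each floor:
    respects the worker's physical path at a small cost to turnover."""
    return sorted(rooms, key=lambda r: (r["floor"], r["priority"]))

print([r["floor"] for r in efficiency_first(rooms)])   # [12, 3, 3, 12] - two long trips
print([r["floor"] for r in location_batched(rooms)])   # [3, 3, 12, 12] - one trip per floor
```

The systems-thinking move Forlizzi describes goes a step further than swapping one sort key for another: the workers themselves get a say in how their queue is sequenced.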
A Path Forward
Systems thinking is urgently needed in the practice of design, especially when it comes to genAI solutions. However, it's also a paradigm shift that is typically not emphasized in design education. This is why Operation Cat Drop is part of Week 1 of my Intro to UX course, followed by hands-on stakeholder mapping for a given problem space, considering each stakeholder's values, incentives, and overlaps (sketched below). This activity adapts the problem space ecosystem mapping tool that is part of my Problem-Solution Symbiosis Framework, a new framework for building human-centered AI.
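As a rough illustration of the kind of structure this mapping produces, here is a minimal sketch of a stakeholder map for the smart-glasses example above. The stakeholders, values, and helper functions are hypothetical and simplified; this is not the framework's actual mapping tool:

```python
# Hypothetical, simplified stakeholder map for the smart-glasses example.

stakeholders = {
    "wearer": {
        "values": {"heads-up information", "staying present", "convenience"},
        "incentives": {"less phone-checking", "quick capture"},
    },
    "bystander": {
        "values": {"privacy", "clear signals of intent", "staying present"},
        "incentives": {"not being recorded unknowingly"},
    },
    "platform": {
        "values": {"adoption", "ecosystem interoperability"},
        "incentives": {"retention", "device sales"},
    },
}

def overlaps(a, b):
    """Values shared by two stakeholders - candidate leverage points for design."""
    return stakeholders[a]["values"] & stakeholders[b]["values"]

def tensions(a, b):
    """Values held by one stakeholder but not the other - candidate risks."""
    return stakeholders[a]["values"] ^ stakeholders[b]["values"]

print(overlaps("wearer", "bystander"))  # {'staying present'} - common ground to design toward
print(tensions("wearer", "bystander"))  # e.g., convenience vs. privacy - risks to design for
```

Even this toy version surfaces the earlier design insight: the wearer and the bystander share little by default, so a responsible solution has to create overlap (e.g., a visible recording indicator that restores the bystander's "clear signal of intent").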

In next week’s episode, I’ll walk through an example of how you can map problem space ecosystems to identify new opportunities and risks for a genAI solution. For now, here are three key takeaways from today’s episode:
Design needs to shift from a problem-solving to a systems-thinking mindset: Operation Cat Drop illustrates the unintended consequences of solving problems without considering how the solutions interact with the broader system. Design, often conceptualized as a problem-solving activity, needs a systems thinking approach, given the complexity and interconnectivity of modern technology and the world for which it's designed.
This shift is increasingly urgent given the development speed and potential of genAI capabilities: Designers must understand how genAI solutions can impact not just users but entire ecosystems, in order to build responsible solutions that people value and want to adopt. This involves understanding the systems for which you are designing, as well as the technology's capabilities.
We need tools to help practitioners practice systems thinking: The elephant in the room is that systems thinking can be challenging - not only does it require a paradigm shift away from problem-solving, but it also requires mapping out and understanding complex, interconnected networks. We need new tools to help practitioners start incorporating this thinking into their practice. The Problem-Solution Symbiosis Framework is built to give practitioners tools to design human-centered AI solutions that fit within systems. More on this next week.
📣 A Call to Action
Are you working on genAI tools, especially developing new ideas, features, and market positioning? Let's chat about how we can partner on a case study applying the Problem-Solution Symbiosis Framework. Reach out at stef@sendfull.com.
Human-Computer Interaction News
Can AI Be Conscious?: This Psychology Today article explores the philosophical, biological, and technological aspects of whether AI can be conscious. While some scientists believe that machines could one day simulate human-like awareness, the consensus is that true consciousness - rooted in biological processes and dynamic physical changes - remains exclusive to living organisms.
Brain Patterns Linked to Specific Behavior Using AI: Researchers at the University of Southern California used recurrent neural networks to separate brain patterns related to a particular behavior. This research has the potential to improve brain-computer interfaces and aid the discovery of new neural patterns.
Americans Express Real Concerns About Artificial Intelligence: The latest survey from Bentley University and Gallup reports that, relative to a year ago, Americans still see more harm than good from AI, though fewer now view it as harmful. They remain wary of how it is being used in a multitude of settings, including the workplace. Transparency about how AI is used in business practices emerged as a key action that could help alleviate Americans' concerns.
That’s a wrap 🌯 . More human-computer interaction news from Sendfull next week.