
Last week, we covered the origins of problem framing and reframing research, applications to product design, and takeaways for practicing (re)framing. This week, we examine how problem framing and reframing plays out in gen AI product design, and why it remains a particularly challenging task for practitioners. This is the second of a two-part series on problem framing to “get the right design”, especially for gen AI solutions.
Problem Framing in Gen AI Design
We previously discussed how teams often seek support with problem framing and reframing [1, 2, 3], to help them build a useful gen AI solution - something that addresses user needs, rather than a technical solution in search of a problem. Existing AI product design guidebooks, such as the People + AI Guidebook, help practitioners “get the design right”, but there’s an opportunity for more guidance on “getting the right design”.
We looked at the basic reasoning patterns designers use in problem solving, based on the work of Kees Dorst. The core equation is:

WHAT (thing) + HOW (working principle) → VALUE (aspired)

“What” is an object, service, or system. “How” is a known working principle that will help achieve the value we aim to deliver to customers. Together, the “What” and “How” should achieve this value proposition.
When faced with a new design problem, we typically only know the end value we want to achieve. This looks like:

??? + ??? → VALUE (aspired)
Experienced design practitioners develop or adopt a frame around a problem.
Per Dorst, the logic goes: IF we look at the problem from a certain perspective, and adopt the working principle associated with that perspective, THEN we will create the aspired value. Practitioners work backwards from value (a form of reasoning Dorst calls design abduction) to devise the “how”, and then design a corresponding “what” (e.g., object, system, service).
This framing process is the opposite of what we observe with gen AI systems, which tend to be developed solution-first. As a disruptive innovation, gen AI is a technology solution that upends existing markets while creating new ones. With each of its rapid advancements, the technology becomes capable of being applied to new types of problems. This inverts how human-centered design usually operates: starting with the problem we seek to solve for people, then identifying and building a solution.
Returning to our equation, design practitioners now face the pressure of leading framing with the “what” - that is, the gen AI solution the team is considering:

WHAT (gen AI solution) + ??? → ???
If we take the usual approach of framing based on the “how” and value we aspire to deliver to people, we are left with two sets of question marks, disconnected from the “what”. This might manifest as conducting design research and recommending targeting user needs that gen AI is ill-suited to address, or generating ideas that require near-perfect model performance to be useful to people - both of which are phenomena documented in recent human-computer interaction research.
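To make the contrast concrete, here is a minimal sketch of the two framing situations as data. The `Frame` class, example values, and `unknowns` helper are hypothetical illustrations, not part of Dorst's framework:

```python
# Dorst-style design equation: WHAT + HOW -> VALUE.
# Each scenario records which terms of the equation are known at the outset.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Frame:
    what: Optional[str]   # object, service, or system
    how: Optional[str]    # working principle
    value: Optional[str]  # aspired value delivered to customers


# Classic design abduction: only the aspired value is known.
classic = Frame(what=None, how=None, value="help commuters plan trips")

# Gen AI, solution-first: only the "what" is fixed in advance.
gen_ai = Frame(what="LLM-based assistant", how=None, value=None)


def unknowns(frame: Frame) -> list[str]:
    """Return which terms of the equation still need framing."""
    return [name for name, term in vars(frame).items() if term is None]


print(unknowns(classic))  # ['what', 'how']
print(unknowns(gen_ai))   # ['how', 'value']
```

The point of the sketch is simply that the solution-first case leaves a different pair of question marks - “how” and value - which is why the usual value-first framing moves don't transfer directly.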
An alternative approach is to begin by learning as much as possible about the “what”. Spend hands-on time with frontier models like GPT-4o (OpenAI) and Claude 3 (Anthropic), as well as other gen AI solutions you are considering customizing. If you are building your own foundation model, these solutions can serve as analogs. Through this experiential learning, you will come to understand gen AI’s strengths and limitations - something especially important, given the emergent behavior inherent to gen AI systems. This echoes what Ethan Mollick, Wharton Associate Professor of Management, has advised about gen AI tools: after about 10 hours of playing with one of the frontier models, you will begin to think of numerous gen AI use cases in your product space.
In parallel, learn about the goals, contexts and workflows of your target audience. At what - if any - point could a gen AI solution deliver unique value to their workflow? These points are hypotheses you can test in subsequent research. This approach goes beyond traditional human-centered design, adopting a more holistic approach that acknowledges the symbiosis of problem and solution space.
Takeaways
Gen AI is a disruptive innovation - a technology solution that unlocks new opportunities for application. This runs contrary to the traditional human-centered design approach, grounded in what problem we’re solving for the user. However, simply leading a project solution-first runs the risk of building something that, at best, doesn’t serve people’s needs and is not adopted, and at worst, causes harm (a vast topic for a future episode). To reconcile these approaches, we propose a parallel exploration of both problem and solution space:
Know your gen AI systems: Learn gen AI systems’ strengths and limitations via experiential learning with existing tools (especially frontier models). This will help you understand to which use cases gen AI can best be applied.
Know your audience: Learn about your target customer’s goals, contexts and workflows. Identify points at which gen AI could extend the customer’s capabilities, and treat those points as hypotheses that can be tested in subsequent research. Be mindful that just because you can replace a part of someone’s workflow with gen AI, doesn’t mean you should try - the solution should be creating value, rather than replacing a task that the customer needs or wants to perform (more on that here).
Know your ML engineer: Design practitioners need to work more closely than ever with team members building gen AI solutions. This includes ML engineers, data scientists and research scientists. Given how fast gen AI systems are evolving, we need to make sure cross-functional collaboration is occurring, with fluid knowledge-exchange of system capabilities and customer needs.
Upcoming Event
Founder Stef Hutka presents on Embodied by Design at Kinetech Arts: Y-Exchange tonight May 29th 7-8:30pm at ODC Dance Commons. Movement artist Gizeh Muñiz Vengel will also be performing. RSVP here.
Human-Computer Interaction News
What does the public in six countries think of generative AI in news?: The Reuters Institute and Oxford University surveyed 12,000 people in six countries (Argentina, Denmark, France, Japan, UK, USA) to understand if and how people use gen AI, and attitudes about its use in journalism. ChatGPT was the most widely recognized and used generative AI tool. Many respondents who said they have used gen AI have used it just once or twice, and it is yet to become part of people’s routine internet use. When asked to assess what they think news produced mostly by AI with some human oversight might mean for the quality of news, people tend to expect it to be less trustworthy and less transparent, but more up-to-date and (by a large margin) cheaper for publishers to produce.
NIST launches a new program to advance sociotechnical AI testing and evaluation: The National Institute of Standards and Technology (NIST) launched the Assessing Risks and Impacts of AI (ARIA) program to assess the societal risks and impacts of AI systems (i.e., what happens when people interact with AI regularly in realistic settings). Once deployed, the program will help develop ways to quantify how a system functions within societal contexts.
Apple's 2024 Design Awards have a new Spatial Computing category: Finalists included Sky Guide (stargazing constellation finder), NBA (basketball highlights & stats), djay (music remix app), Synth Riders (freestyle-dance VR rhythm game), Blackbox (sensory puzzle game) and Loóna (cozy puzzle games). The winners will be announced at WWDC 2024, June 10-14.
Want to workshop on new ways to frame the problems you’re solving? Sendfull can help. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull next week.