
Last week, I introduced the Stakeholder Ecosystem Mapping Workshop from my Problem-Solution Symbiosis Framework. The workshop helps teams reframe how they think about designing genAI systems, shifting from problem-solving for an end user to understanding the broader ecosystem your product is entering. This shift helps teams manage risk, identify opportunities, and ultimately build more valuable, sustainable, and scalable genAI solutions.
Today, I share another tool from the Framework: Building GenAI Intuition. Gaining intuition about the genAI solution space can help you and your team understand the strengths and weaknesses of different genAI technologies, and better identify which use cases the tech is best equipped to address. This understanding is especially important given the hype around genAI, its rapid technological development, and emergent behavior that can make output unpredictable.
This episode shares five practical ways you can start intuition-building with intention and structure, both through hands-on experimentation with genAI tools and close partnership with team members developing machine learning (ML) models.
Five Ways to Build GenAI Intuition
The following list walks you through five ways to begin building genAI intuition, starting with the lightest-weight entry point (solo experimentation) and scaling all the way up to orchestrating structured, hands-on playtesting with your team. A few caveats before we dive in:
These approaches aren’t meant as a substitute for adversarial testing, in which a team systematically and deliberately introduces inputs designed to test how the model behaves if exploited by bad actors.
These approaches do not replace model benchmarking.
Steer clear of sharing proprietary or sensitive information with external genAI tools. Check your organization’s policies if in doubt.
With that, let’s get started!
1. Get to Know Foundation Models
Start “playing” with frontier models, such as Anthropic’s Claude, OpenAI’s ChatGPT, and Meta’s Llama. Whatever task you’re working on right now, try “asking AI”. Try the same prompt in each tool. You’ll start to see similarities and differences. After a while, you’ll get a good sense of which tasks each model handles well and which it doesn’t. If you’re curious about development tools, you can follow this approach with genAI APIs (find a curated list here).
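If you do go the API route, the comparison can be made systematic: send the same prompt to several models and line up the responses side by side. The sketch below is a minimal, hypothetical harness; the stub callables stand in for real API clients (e.g., wrappers around the OpenAI or Anthropic Python SDKs), so the model names and outputs here are illustrative, not real behavior.

```python
from typing import Callable, Dict

# A "model" here is any callable mapping a prompt string to a response string.
# In practice these would wrap real API clients; stubs are used below so the
# sketch runs without network access or API keys.
Model = Callable[[str], str]

def compare_models(prompt: str, models: Dict[str, Model]) -> Dict[str, str]:
    """Send the same prompt to each model and collect responses by name."""
    return {name: model(prompt) for name, model in models.items()}

if __name__ == "__main__":
    # Hypothetical stand-ins for real model calls.
    stubs: Dict[str, Model] = {
        "model_a": lambda p: f"[model_a] response to: {p}",
        "model_b": lambda p: f"[model_b] response to: {p}",
    }
    for name, reply in compare_models("Summarize this paragraph...", stubs).items():
        print(f"{name}: {reply}")
```

Running the same prompt through this kind of loop for a week of everyday tasks is a fast way to accumulate the side-by-side observations described above.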
From spending several hours per week using these foundation models in a knowledge worker capacity, I know for instance that ChatGPT-4o can serve as an effective “sparring partner” to improve the structure of a paragraph, but does not excel at generating a shortlist of relevant human-computer interaction news for this newsletter.
This approach of hands-on time with genAI models echoes what Ethan Mollick, Wharton Associate Professor of Management, has recommended: spend at least 10 hours using one of the frontier models to develop a sense of the use cases to which genAI could be applied in your domain of interest.
2. Target Experimentation to Your Product Area
After this more general experimentation with frontier models, you can start using prompts related to tasks in your team’s product area. For example, if you are building a genAI-powered grading solution (extending our example from last week), start by entering an existing essay rubric and sample essay into ChatGPT versus Claude (again, without uploading sensitive or proprietary information). Compare the resulting evaluations. How are they similar? Different? Which one is “better”? How do you know?
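To keep comparisons like this fair, it helps to hold the prompt constant so only the model varies. Here is one minimal way to assemble a grading prompt from a rubric and an essay; the template wording is purely illustrative, not a recommended grading prompt.

```python
def build_grading_prompt(rubric: str, essay: str) -> str:
    """Combine a rubric and an essay into a single grading prompt.

    The template below is illustrative only; a real prompt would be
    iterated on with your team and checked against known-good examples.
    """
    return (
        "You are grading a student essay.\n\n"
        f"Rubric:\n{rubric}\n\n"
        f"Essay:\n{essay}\n\n"
        "Evaluate the essay against each rubric criterion, giving a score "
        "and a one-sentence justification per criterion."
    )

# Paste the identical prompt into each tool you're comparing, then note
# where the evaluations agree, where they disagree, and why.
prompt = build_grading_prompt(
    "Thesis clarity (0-5); Use of evidence (0-5)",
    "Sample essay text...",
)
print(prompt)
```

Because the rubric and essay are fixed, any differences you observe between tools come from the models themselves, which is exactly the intuition you are trying to build.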
If you have existing, internally-built genAI solutions related to your product area, start to experiment with tasks that you hypothesize (or know, from design research!) are important to your target audience.
3. Learn From Your ML Engineer
If you’re not already spending time with the people on your team directly responsible for developing genAI models (e.g., ML engineers, research scientists), it’s time to set up a 1-on-1. Learn about their goals, workflow, and what they view as the model’s primary capabilities. Learn about what benchmarks they reference. Ask how you can try in-development solutions, so you understand how they are built and evolve. Share what you know about your customers, and identify how you can best stay connected to each other’s work.
4. Experience your Competitive Landscape
As discussed in last week’s episode, every product enters an existing ecosystem of people and technology. For a product to be adopted, it needs to work well within that ecosystem and deliver more value than whatever someone is doing today.
Review (or start!) customer research on your target audience. What are their goals and context? What tools do they use today (if any) to accomplish the “job-to-be-done” that your genAI-based solution proposes to accomplish? Spend some time using those tools, learning their capabilities. Keep in mind that these solutions may not be genAI tools.
For example, if you’re developing an AI-grading solution for instructors, learn the specifics of how an instructor goes about developing a rubric and evaluating student work. See how they use learning management systems (e.g., Canvas) in their process, and if possible, spend some time with these tools hands-on.
5. All Together, Now!
You can scale the solo experimentation approaches we’ve covered to a team level, to help foster shared understanding of technology capabilities, as well as connect people across functions. This can be done by leading a playtest workshop for your immediate cross-functional team.
Before the Workshop:
Identify the goal: Identify a goal that is relevant to your customers, that you can ask people in the workshop to perform. For example, if you’re a company building easy-to-use design tools for a general audience, the goal for the session might be: “Generate a custom image for your best friend’s birthday card”.
Identify your tech: In parallel, work with your team (especially engineering) to select relevant technology (ideally something internal and in development), and make sure it’s capable of the goal you want people to try. If it’s not, scale your goal accordingly. Check that team members can easily access the tech for the session.
Set logistics: Prepare a way to record feedback during the workshop, invite people to the workshop (1 hour is a good start), and share instructions on how to get set up with the tech ahead of the workshop.
During the Workshop:
Set context: As the facilitator, ground the workshop in key information about your customers (e.g., goals, motivations) and use cases.
Set the goal: Ask the team to try to accomplish the goal that is relevant to your users, which you identified before the workshop.
Document feedback: As everyone goes hands-on to accomplish this goal, encourage people to exchange thoughts and ideas in real time. Take notes of what you hear. You can also have attendees write feedback in a centralized document, organized under simple headers like “What worked well and why?”, “What didn’t work well and why?”, and “What, if anything, is missing?”, plus an area for other open-ended feedback. Keep these questions open-ended and lightweight.
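The feedback document itself can stay simple. One possible layout (headers only; adapt the tool name and date to your session):

```
Playtest Feedback: [tool name], [date]

What worked well and why?
- ...

What didn’t work well and why?
- ...

What, if anything, is missing?
- ...

Anything else? (open-ended)
- ...
```

Keeping the headers identical across sessions makes it easier to compare feedback as the technology evolves.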
After the Workshop:
Synthesize: Synthesize key themes from your observations and the team document. Review these with key partners, prioritize action items and share them back with the team.
Follow up: Check in with your team about the learnings, seeing how they’re being acted upon and offering clarification or support.
Set a cadence: Consider scheduling these every time there is a major technical development, creating a dedicated time and space for people to build intuition with the latest tech.
I’ll add that this playtesting approach can be valuable with any in-development technology, and is something I’ve regularly used in my practice outside genAI (e.g., you can set up playtests every time a major app build is internally released).
When Should We Start Intuition Building?
It's best to start intuition building early in the product development process, particularly when framing your problem space. While it might seem like this biases your team toward technology-first thinking, the reality is that genAI often starts with a technology solution in mind. For example, teams customizing a foundation model are already working within an existing solution space. This is modeled in the Problem-Solution Symbiosis Framework, in which problem and solution spaces evolve in parallel, requiring an understanding of both from the outset.
Research from Carnegie Mellon University supports this approach. Researchers ran two ideation workshops, with the goal of producing ideas that were feasible and relevant for applications in a hospital intensive care unit (ICU). Each workshop included a team of clinicians (deeply familiar with people problems) and data scientists (deeply familiar with the technology) working to improve critical care medicine in the ICU.
In the first workshop, participants focused solely on ideating based on user problems without considering AI capabilities. In the second, participants were first briefed on AI capabilities before ideating.

The second workshop yielded more impactful, lower-effort solutions relative to the first. Output ideas included AI improving the coordination between clinicians (e.g., generating a schedule for nurses and respiratory therapists for extubation), or systems that improved logistics and resource allocation (e.g., predicting which medications would be needed based on current patients, and pre-ordering from the pharmacy).
In contrast, almost no concepts from the first workshop were low-effort; many involved high-uncertainty situations where the task genAI addressed would be difficult even for trained human experts. For example: using deep learning to help discover the right amount of sedation for a patient on a ventilator. Too little or too much sedation would have serious consequences for the patient, making this a very high-risk use case.
This research demonstrates the importance of understanding genAI capabilities early in the product development process, and how grounding ideation in these capabilities can help identify use cases that are both valuable and feasible.
Takeaways
Today’s episode explored Building Solution Space Intuition - one of the tools from the Problem-Solution Symbiosis Framework. Building intuition involves developing an experientially-based understanding of genAI’s capabilities, helping us better identify the user value the technology can unlock.
5️⃣ Five Practical Ways to Build Solution Space Intuition
We covered five ways you can start building intuition of genAI technology capabilities:
Get to Know Foundation Models: Spend hands-on time with frontier models to understand their strengths and weaknesses.
Target Experimentation to Your Product Area: Use prompts and tasks specific to your domain to see how genAI performs in relevant contexts.
Learn from Your ML Engineers: Collaborate closely with ML engineers to gain insights into model capabilities and limitations.
Experience Your Competitive Landscape: Analyze and use existing tools your target audience currently employs to understand the ecosystem your genAI product will enter.
Team Workshops for Collective Understanding: Organize a cross-functional playtest workshop to build a shared comprehension of genAI capabilities.
👁️ Looking Ahead: The Cognitive Offloading Matrix
Stay tuned for next week's introduction of the Cognitive Offloading Matrix, a tool designed to help identify which tasks can be effectively offloaded to genAI, optimizing your product strategy.
📣 Call to Action
Interested in building genAI intuition with your team? Let’s chat about how we can partner on a case study! Reach out at stef@sendfull.com
Building GenAI Intuition: A How-To Guide © 2024 by Stefanie Hutka, Head of Design Research, Sendfull LLC is licensed under CC BY-SA 4.0
Human-Computer Interaction News
Autonomous Vehicles Could Understand their Passengers Better with ChatGPT: Researchers at Purdue University have run what may be among the first experiments testing how well a real autonomous vehicle can use large language models to interpret commands from a passenger and drive accordingly. For instance, a passenger could say, “I’m in a hurry”, and the vehicle would take them on the most efficient route.
Americans in Both Parties are Concerned over the Impact of AI on the 2024 Presidential Campaign: A Pew Research Center survey reveals that 39% of Americans believe AI will be used primarily for harmful purposes during the presidential campaign, while only 5% think it will be used mostly for good. Additionally, 57% of U.S. adults, with similar shares across Republicans and Democrats, expressed deep concern that AI will be used to create and spread fake or misleading information to influence the election.
AI-Generated Ideas Rated More “Original” Than Experts’: Fifty scientists and an LLM ideation agent each generated research ideas. Reviewers scored AI-generated concepts as more exciting than those written by humans, although the AI’s suggestions scored slightly lower on feasibility. Read the pre-print here.