Welcome back to Sendfull’s three-part series exploring futures thinking approaches and methodologies. Last week, we discussed a design research method called anticipatory ethnography, where we use design fiction “prototypes” to help anticipate user needs and inform 0-to-1 product development.
Today, we share the final episode in the series, focusing on foresight: a mindset and process that turns facts about the present into plausible, provocative, and rational potential futures. We discuss what foresight is (and isn’t), distill the key steps of the practice, and share three things you can do today to start practicing foresight.
What is (and isn’t) foresight?
Foresight is the systematic analysis of patterns of change, used to turn facts about the present into potential, plausible futures. Foresight practice often works on a 10-year timescale: as researcher and game designer Jane McGonigal has found after surveying over 10,000 students, 10 years is enough time for society and your own life to become dramatically different, relative to today.
Importantly, foresight is not about predicting the future - there are far too many variables at play to accurately predict the state of the world in 2034. Instead, foresight helps us imagine many different potential futures, so we can talk about the futures we want to experience and take action to make those futures more likely. This practice can help us think more creatively and flexibly, and communicate our vision of the future more clearly.
You might be wondering if and how foresight practice differs from futures thinking. While closely related, I distinguish between the two. Futures thinking is the broad, umbrella term for the mindset and approach that helps us explore, anticipate and plan for potential futures. In contrast, foresight practice - while under the futures thinking umbrella - tends to refer to a specific, structured process, in which we generate future forecasts using “signals” and “drivers”.
Signals and drivers
Think of the last time something made you stop and think about the possibilities it represented. Maybe it was trying alternative meat for the first time. Maybe it was your first time hearing about cryptocurrency. Or maybe it was a tech company going fully remote. Pay attention to that reaction - it means you have just identified a signal.
The Institute for the Future (IFTF) has referenced science fiction author William Gibson’s famous quote with regard to signals: “The future is already here — it’s just not evenly distributed”. Gibson is referring to “signals of change”: small, often local innovations - people doing things that differ from how the mainstream believes the world works. They might be new ideas, technologies, or habits.
Now, consider broad, long-term trends like climate change, the aging population, and the rise of disinformation. These trends are called drivers in foresight practice. IFTF uses the acronym STEEP - social, technological, environmental, economic, and political - to help identify and categorize drivers. Drivers don’t operate in isolation - they regularly overlap.
Drivers create the conditions necessary for signals to form. IFTF has a great analogy for this: “drivers are to signals as a river is to a whirlpool. The river flows continuously while contributing to the conditions necessary for a particular vortex to form”.
Foresight practice, in practice
How do you put signals and drivers together to anticipate potential futures? I’ll distill the key steps of how I’ve applied foresight practice.
Step 1: Set your focus. Identify the use case you want to focus on. You can quickly go down a rabbit hole exploring different futures, so a clear focus area is important.
Example focus: Creating more accessible extended reality (XR) experiences for people with disabilities, using generative AI. The focus aims to address the challenge of making XR environments more adaptable to various needs, such as visual, hearing, and mobility impairments.
Step 2: Identify your drivers and signals. Leveraging both secondary and primary research, identify the drivers and signals that are relevant to your use case. Remember the “STEEP” acronym for identifying drivers. For signals, think about qualitative observations that provoked a reaction - something that made you think about the possibilities it represented.
Example drivers and signals: Drivers could include the increasing demand for accessible experiences (Social), and the increasing overlap of AI and XR (Technological). Your signal might be observing a friend adjusting the text size in a virtual environment for better readability, indicating a need for customizable accessibility settings in XR.
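It can help to write signals and drivers down in a structured way as you collect them. Here’s a minimal, purely illustrative sketch in Python - the class names and fields are my own shorthand, not part of any standard foresight toolkit:

```python
from dataclasses import dataclass
from enum import Enum

class STEEP(Enum):
    """IFTF's five categories for classifying drivers."""
    SOCIAL = "social"
    TECHNOLOGICAL = "technological"
    ENVIRONMENTAL = "environmental"
    ECONOMIC = "economic"
    POLITICAL = "political"

@dataclass
class Driver:
    """A broad, long-term trend (the 'river')."""
    name: str
    categories: list[STEEP]  # drivers regularly overlap, so one may span several

@dataclass
class Signal:
    """A small, often local observation that provoked a reaction (the 'whirlpool')."""
    observation: str
    drivers: list[Driver]  # the trends creating the conditions for this signal

# The example drivers and signal from the XR accessibility focus above
demand = Driver("Increasing demand for accessible experiences", [STEEP.SOCIAL])
ai_xr = Driver("Increasing overlap of AI and XR", [STEEP.TECHNOLOGICAL])
text_size = Signal(
    observation="A friend adjusts text size in a virtual environment for readability",
    drivers=[demand, ai_xr],
)
```

Capturing signals this way forces a useful question: for every signal you record, which drivers are feeding it?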
Step 3: Create a scenario. Go back to your signals from Step 2. A signal is a specific observation about how the future might change, which will help you create a scenario - a narrative about the future. Start by asking: what does the signal tell you about what we’re moving away from, and what we’re moving towards? What does the world look like if the signal becomes mainstream? From here, you likely have enough information to create your scenario.
Example scenario: XR applications leverage generative AI to dynamically adapt to individual users' needs.
Step 4: Map your scenario to Dator’s Four Futures. The Four Futures are continuation (‘business as usual’ - the world moves forward on its current trajectory), collapse (the current trajectory suddenly halts and comes apart), discipline (new limits are imposed on the current trajectory to stave off collapse), and transformation (new ways of being that transcend the current trajectory).
Example mapping:
Continuation: XR technologies continue to evolve incrementally, with slow integration of AI-driven accessibility features becoming more common but not universal.
Collapse: Regulatory or technological challenges halt the progress of integrating AI into accessibility, leaving many users with disabilities marginalized in XR spaces.
Discipline: The industry adopts strict standards and regulations for accessibility in XR, ensuring that all new developments include AI-driven features to support users with disabilities.
Transformation: A breakthrough in AI and XR technologies enables a new era of fully accessible digital environments, where experiences are personalized in real-time to each user's unique needs, making XR universally accessible and indistinguishable from non-XR interactions for people with disabilities.
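If you think in code, this mapping also condenses into a small data structure. Here’s a minimal, purely illustrative sketch - the enum and dictionary framing are mine, not a standard foresight artifact:

```python
from enum import Enum

class DatorFuture(Enum):
    """Dator's Four Futures - generic shapes a specific future can take."""
    CONTINUATION = "the world moves forward on its current trajectory"
    COLLAPSE = "the current trajectory suddenly halts and comes apart"
    DISCIPLINE = "new limits are imposed to stave off collapse"
    TRANSFORMATION = "new ways of being transcend the current trajectory"

# The XR accessibility scenario from Step 3, mapped across all four futures
scenario = "XR applications leverage generative AI to adapt to individual users' needs"
mapping: dict[DatorFuture, str] = {
    DatorFuture.CONTINUATION: "AI-driven accessibility features grow more common, but not universal",
    DatorFuture.COLLAPSE: "regulatory or technological setbacks leave many users with disabilities marginalized",
    DatorFuture.DISCIPLINE: "strict standards require AI-driven accessibility in all new XR development",
    DatorFuture.TRANSFORMATION: "real-time personalization makes XR universally accessible",
}

print(f"Scenario: {scenario}\n")
for future, outcome in mapping.items():
    print(f"{future.name.title():>14}: {outcome}")
```

Whatever form you use, the point of the exercise is the same: one scenario, stress-tested against all four trajectories.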
Step 5: Diverge and converge. Brainstorm the problems you might be designing for in each future scenario, and ideate on potential solutions. Then dial it back to the present day, learning from analogs that can inform near-term change. Consider how you can start working towards your best ideas today, and discuss how you can future-proof if less favorable futures unfold.
Example divergence and convergence:
Problems and solutions: In a transformation scenario, one might design an XR educational platform that uses AI to adapt learning materials to the student's preferred learning mode (visual, auditory, sign language). Problems to consider include ensuring the AI accurately interprets the user's needs and protecting user privacy.
Near-term analog: Using current text-to-speech and speech-to-text technologies to assist users with visual and hearing impairments in navigating smartphones and computers. This analog can inform the development of similar, more advanced features in XR.
Future-proofing: Developing flexible AI models that can be easily updated as new accessibility needs emerge, ensuring that XR experiences remain inclusive even as technology and user expectations evolve.
Takeaways
Here are three things you can do today to start practicing foresight:
Identify at least one signal and driver you’re currently observing. What’s a driver that is shaping your professional career? Maybe it’s the growth of generative AI. How about a signal? Maybe it's seeing some consumers going analog. Remember, this is just the beginning: signals and drivers are connected (remember the river/whirlpool analogy), and drivers regularly overlap.
Consider your 10-year vision, based on your signals and drivers. Identify what you (or your product team!) might be doing in 10 years, if the signal you’re seeing becomes mainstream. Try swapping out different signals and drivers, and see how that changes your 10-year vision.
Learn more: Keep in mind that the five-step process I shared earlier is a simplification, intended as an overview of how you can apply the foresight process. To learn more, I recommend checking out the Institute for the Future and this publication on Exploring Ethnofutures, published as part of the fantastic applied ethnography conference, EPIC.
This brings our three-part series on futures thinking methods to a close. In case you missed it, you can check out part 1 on futures thinking for emerging product design, and part 2 on anticipating futures with design research, where we discussed anticipatory ethnography using design fiction.
Stay tuned for next week, where I’ll share a case study from a recent talk I presented, demonstrating how I’ve applied futures thinking to form a point of view on outsourcing increasingly complex “thinking tasks” to AI.
Human Computer Interaction News
Meta's Ray-Ban smart glasses are becoming AI-powered tour guides: The overlap between extended reality and AI continues to deepen. Meta’s Andrew Bosworth (Boz) shared a new Look and Ask feature on the Ray Ban smart glasses. Using these features, you can recognize landmarks (e.g., the Golden Gate Bridge) and learn facts about them. They use their in-built camera to scan the scene in front of you, and cross-reference the image with info in Meta AI’s knowledge database.
Google DeepMind’s latest AI agent learned to play Goat Simulator 3: Google DeepMind revealed an AI program capable of learning how to complete tasks in a number of games, including Goat Simulator 3 - a surreal video game in which players take domesticated goats on a series of implausible adventures. The agent learns to tackle games it has never seen before by watching human players. This development signals a step towards more generalized AI that can transfer skills across multiple environments.
Adobe Substance 3D’s AI features can turn text into backgrounds and textures: Two new beta tools for Adobe Substance 3D can generate object textures and staging environments from text descriptions, helping 3D artists quickly produce creative assets and streamlining 3D workflows.
Building a product for anticipated audiences? Sendfull can help. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull next week.