
Generative AI is on a trajectory to think for us, not with us.
This concern has been on my mind for a while. Last fall, I published an article anticipating the immense potential - and risks - of offloading increasingly complex “thinking tasks” to genAI. With the technology advancing at an unprecedented pace, the urgency of this topic has only grown.
Since then, new research has reinforced a troubling trend: genAI is reshaping cognition, and not for the better. For example, the more we use genAI, the less critical thinking we engage in. This effect is most pronounced among users who place high confidence and trust in genAI’s output (and are therefore less likely to fact-check it) and among younger users - raising serious questions about the role of this technology in K-12 education.
In today’s episode, I summarize recent research on how genAI is influencing the way we think, reason, and learn, and what it will take to get genAI thinking with us, not for us.
How is GenAI Reshaping Cognition?
A paper published just last week synthesized research from multiple studies, highlighting five key ways genAI is reshaping our cognitive abilities:
Knowledge Acquisition: GenAI search systems, such as Perplexity.ai, SearchGPT, and Microsoft Copilot, streamline information retrieval by synthesizing content from multiple sources, but this can encourage passive consumption and reduce users' ability to critically evaluate information. By prioritizing widely accepted perspectives and reinforcing pre-existing beliefs, these systems risk amplifying echo chambers, marginalizing alternative viewpoints, and limiting exposure to diverse perspectives.
Reasoning: GenAI can influence decision-making by increasing user trust and susceptibility to misinformation, often leading to over-reliance on AI-generated advice even when it contradicts personal reasoning or context. Simply knowing advice comes from an AI can make people trust it more, amplifying belief in misinformation and weakening independent judgment.
Learning: While genAI can enhance short-term task performance, it does not support deep learning or knowledge retention, often fostering overconfidence and skill atrophy. In education, AI-generated assistance can hinder problem-solving skills, particularly among novice learners, making them overly dependent on AI and leaving them struggling when its support is removed.
Creativity: GenAI can enhance creativity for less experienced users but risks homogenizing ideas and reinforcing repetitive patterns. While AI improves goal-oriented tasks (convergent thinking), it can also hinder exploration and originality (divergent thinking), potentially reducing long-term human creativity by discouraging independent creative effort.
Metacognition and Critical Thinking: GenAI use is linked to reduced critical thinking as users increasingly offload cognitive tasks, leading to diminished self-regulation and engagement. Frequent AI reliance correlates with weaker critical thinking skills, particularly among younger users and novice learners, underscoring the need to explicitly teach metacognitive strategies.
Where Do We Go From Here?
While it might be tempting to say “we should ban genAI” or “slow its development”, neither is a realistic option.
We also have some evidence for genAI’s positive potential. For instance, it has increased productivity across contexts, enhanced artists’ and designers’ creative processes, and shown potential for improving learning experiences - such as scaling personalized support and diversifying learning materials.
Therefore, instead of resisting AI, we must design AI systems to engage, rather than erode, human cognition. There’s ample room to innovate, with best practices and design patterns still being established.
The research we covered earlier supports three recommendations for designing AI tools that think with us, not for us:
Design AI as a thinking partner (or critic!), not an answer engine: Incorporate metacognitive prompts and AI-generated provocations that foster critical thinking by prompting users to reflect on biases, alternative perspectives, and the reliability of AI outputs. Microsoft Research is experimenting with this for Excel, using AI to support the user in evaluating and updating output. (A minimal sketch of this pattern follows this list.)
Add guardrails in educational AI tools: AI use should be minimal in the early stages of learning, limited to formative feedback. Guardrails should then support a progressive expansion of learner-AI interaction, allowing learners to develop and apply independent judgment when engaging with AI for assistance. (See the second sketch below.)
Enhance learning with structured AI interaction: Imagine a schema-based learning tool where you engage with an interactive interface, including a navigable knowledge graph along with a conversational AI agent. This could help learners identify connections between related ideas and build a more comprehensive understanding of a topic. (The third sketch below shows one way to represent such a graph.)
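To make the first recommendation concrete, here is a minimal sketch of the “thinking partner” pattern in Python. `llm_complete` is a stand-in for whatever model call your stack uses, and the reflection prompts are my own illustrative examples; the point is that the tool returns a self-critique and provocations alongside the answer, rather than a bare result.

```python
# A "thinking partner" wrapper: return reflection prompts and a
# self-critique alongside the answer, instead of the answer alone.

REFLECTION_PROMPTS = [
    "What evidence would change this conclusion?",
    "Which assumptions here deserve a fact-check?",
    "What perspective might this answer be missing?",
]

def llm_complete(prompt: str) -> str:
    """Placeholder: swap in a real call to your LLM provider."""
    return f"[model response to: {prompt[:40]}...]"

def thinking_partner(question: str) -> dict:
    answer = llm_complete(question)
    # Ask the model to critique its own output, then surface fixed
    # reflection prompts that pull the user back into the loop.
    critique = llm_complete(
        f"List two weaknesses or unstated assumptions in this answer:\n{answer}"
    )
    return {
        "answer": answer,
        "self_critique": critique,
        "reflect_on": REFLECTION_PROMPTS,
    }
```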
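The guardrails recommendation reduces to a gating policy: which interaction types are unlocked at which stage of learning. The stage names and interaction types below are assumptions for illustration, not taken from the research:

```python
# Progressive guardrails: AI capability unlocks as learners advance.
# Stage names and interaction types are illustrative assumptions.
from enum import IntEnum

class Stage(IntEnum):
    NOVICE = 1       # formative feedback only
    DEVELOPING = 2   # adds hints and worked examples
    PROFICIENT = 3   # adds full solutions, paired with critique prompts

ALLOWED = {
    Stage.NOVICE: {"formative_feedback"},
    Stage.DEVELOPING: {"formative_feedback", "hint", "worked_example"},
    Stage.PROFICIENT: {"formative_feedback", "hint", "worked_example", "solution"},
}

def permit(stage: Stage, interaction: str) -> bool:
    """True if this interaction type is unlocked at this stage."""
    return interaction in ALLOWED[stage]

assert permit(Stage.NOVICE, "formative_feedback")
assert not permit(Stage.NOVICE, "solution")  # novices must attempt it themselves first
```

The useful property here is that the policy is explicit and auditable, rather than buried in a prompt.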
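Finally, a minimal version of the schema-based learning idea, assuming the networkx library for the graph; the concepts and relations are invented for illustration. A conversational agent could call `neighbors_of` to propose the next connection for the learner to explore:

```python
# A tiny navigable knowledge graph for schema-based learning.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("genAI search", "cognitive offloading", relation="is a form of")
graph.add_edge("cognitive offloading", "critical thinking", relation="reduces engagement in")
graph.add_edge("metacognitive prompts", "critical thinking", relation="scaffold")

def neighbors_of(concept: str) -> list[tuple[str, str]]:
    """Related concepts the learner could navigate to next."""
    return [(graph.edges[concept, target]["relation"], target)
            for target in graph.successors(concept)]

print(neighbors_of("cognitive offloading"))
# e.g. [('reduces engagement in', 'critical thinking')]
```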
How to Offload Cognition with Intention Today
If you're an industry practitioner developing an AI tool, you might be wondering how you can offload cognition to AI with greater intentionality today. Perhaps your team is exploring a new product concept or brainstorming AI features to develop. My Cognitive Offloading Matrix provides a framework to determine when to delegate thinking tasks to AI. Beyond product development, this tool can also guide decisions on framing AI capabilities in marketing and integrating AI into your team's workflows.
Human-Computer Interaction News
U.S. Workers Are More Worried Than Hopeful About Future AI Use in the Workplace: Pew Research reports that about a third of workers say AI use will lead to fewer job opportunities for them in the long run; chatbots were seen as more helpful for speeding up work than improving its quality.
12 Most Popular AI Tools: A survey run by Future Publishing (the owner of TechRadar), covering hundreds of tech consumers in the US and the UK, shows AI use is increasing and lists the most frequently used tools.
AI Generates Human-like Goals: Researchers at NYU explored how humans generate their own goals, from childhood play to adulthood, and how computational models struggle to fully capture this richness. They developed a model that learns from human-created games to represent and generate human-like goals.
That’s a wrap 🌯. More human-computer interaction news from Sendfull in two weeks!
Want to explore how the Cognitive Offloading Matrix can guide your genAI strategy? Let’s collaborate on a case study and uncover where genAI can add the most value. Reach out at stef@sendfull.com