
“How do I use this?” I asked, looking at a record player while wearing a pair of Snap’s 5th-generation Spectacles Augmented Reality (AR) glasses.
The glasses quickly returned spatially annotated instructions, digitally “anchored” to different parts of the record player.
The experience, Spatial Tips, is designed to extend beyond the confines of a demo: look at an object, ask your AR glasses how to use it, receive contextual guidance courtesy of AI, and complete the task with AI’s support. This was one of the many experiences I had the opportunity to try last week at Augmented World Expo (AWE), the XR industry's flagship conference.
Since AWE’s founding in 2010, it has grown from a niche gathering to 6,000+ attendees, 300 exhibitors, and nearly 400 speakers. I’ve attended most AWE events over the past decade - walking the expo floor, giving talks, speaking on panels, and serving as a main stage host since 2022.

In this week’s episode, I share takeaways grounded in the three themes I was tracking: the XR x AI user experience, XR x AI adoption in enterprise, and AI-enabled XR creator tools.
The XR x AI User Experience
TLDR
Compared to last year, there’s clear progress - XR x AI is now delivering strong proof-of-concept experiences that are useful and usable, such as low-latency, hands-free spatial instructions.
To move to broader adoption, we’ll need more refined design patterns that reduce cognitive load and maintain situational awareness, along with better hardware (especially battery life) and expanded interoperability with existing devices.
As AI-enabled XR wearables grow more context-aware, we need a greater focus on privacy and data use to protect users and build trust.
Deeper Dive
A key theme at AWE 2025 was how XR - especially AR glasses - and AI are converging to create more intuitive, immersive user experiences.
“We used to ask what the killer app for XR would be. Now we realize it’s AI itself—and vice versa. XR is the killer interface for AI.” - Ori Inbar, AWE Co-founder
Qualcomm demonstrated the first fully on-device generative AI assistant running on RayNeo X3 Pro AR glasses, powered by the new Snapdragon AR1+ chip. The glasses could interpret a voice query (e.g., asking for fettuccine alfredo ingredients) and display the response directly in the wearer’s field of view - no phone or cloud required. While there was some lag, and the output appeared to clutter the user’s field of view, the core interaction model is promising. There’s a significant opportunity for design practitioners to define interaction patterns that keep the user’s visual field uncluttered, helping keep cognitive load low and situational awareness high.
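To make that interaction model concrete, here’s a rough sketch of what the on-device loop might look like - the SpeechRecognizer, OnDeviceLLM, and HeadsUpDisplay interfaces are my own placeholders, not Qualcomm or RayNeo APIs:

```typescript
// Hypothetical sketch of an on-device assistant loop like the one demoed.
// These interfaces are illustrative stand-ins for whatever the platform
// actually exposes for speech, inference, and display.

interface SpeechRecognizer { listen(): Promise<string>; }            // on-device speech-to-text
interface OnDeviceLLM { complete(prompt: string): Promise<string>; } // local generative model
interface HeadsUpDisplay { show(lines: string[]): void; }            // renders text in the field of view

const MAX_HUD_LINES = 3; // keep the visual field uncluttered

async function assistantLoop(asr: SpeechRecognizer, llm: OnDeviceLLM, hud: HeadsUpDisplay) {
  const query = await asr.listen(); // e.g., "ingredients for fettuccine alfredo"
  const answer = await llm.complete(
    `Answer in at most ${MAX_HUD_LINES} short bullet points: ${query}`
  );
  // Truncate rather than scroll: low cognitive load beats completeness here.
  hud.show(answer.split("\n").slice(0, MAX_HUD_LINES));
}
```

The interesting design decision isn’t the plumbing - it’s the cap on how much text ever reaches the display.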
Across the show floor, over 20 smart eyewear devices featured AI assistants, signaling progress toward the long-standing XR vision of embedding computing in the physical world and closing the loop between user intent and system response.
One standout was Snap Spectacles’ Spatial Tips demo: look at an object, ask how to use it, and get contextual AR guidance overlaid in real time. It delivered on a key XR x AI use case - hands-free, heads-up, in-context instruction - offering support that’s more immediate and less cognitively demanding than referencing a mobile phone or printed manual. Spectacles also support translation, recipe guidance, and games that incorporate the physical world into the experience - all triggered by what you see or say.
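For the technically curious, here’s a simplified sketch of the look-ask-anchor pattern behind an experience like Spatial Tips - the types are illustrative placeholders, not Snap’s actual Spectacles API:

```typescript
// Hypothetical look-ask-anchor pipeline: recognize parts of the object in view,
// ask an AI assistant for a tip per part, and pin each tip to its physical location.

interface DetectedPart { name: string; worldPosition: [number, number, number]; }
interface SceneUnderstanding { detectParts(): Promise<DetectedPart[]>; } // what is the user looking at?
interface Assistant { instructionFor(part: string, task: string): Promise<string>; }
interface ARScene { anchorLabel(text: string, at: [number, number, number]): void; }

async function spatialTips(scene: SceneUnderstanding, ai: Assistant, ar: ARScene, task: string) {
  const parts = await scene.detectParts(); // e.g., tonearm, platter, speed switch
  for (const part of parts) {
    const tip = await ai.instructionFor(part.name, task); // "place the needle gently on the edge"
    ar.anchorLabel(tip, part.worldPosition);              // pin the tip to the physical part
  }
}
```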

Another demo, Imagine Together, invited playful experimentation through natural voice prompts that generated corresponding 3D imagery. I’d say, “imagine an ostrich in running clothes”, and moments later, a 3D ostrich in running clothes materialized in my space. Less practical than Spatial Tips, but illustrative of what expressive, real-time media creation and sharing could become (instead of memes or reels, we send spatial creations to one another).
Both demos were responsive, and the glasses were comfortable. Plus, the glasses offer some mobile interoperability - like using your phone as a controller or display. This matters because adding another hardware device to your technology ecosystem is a heavy lift, and glasses aren’t yet at the point where they can replace your phone.
In addition, the glasses’ 45-minute battery life limits use to short, high-impact moments. While the demos were good, they weren’t yet good enough for me to carry glasses (and possibly a battery pack) alongside my phone. Seamless interoperability between AR glasses and mobile devices (and other wearables, like your smartwatch), along with longer battery life, will be important for driving adoption.
Last but critical: privacy. As AI wearables grow more context-aware, privacy must be a core design principle to build and keep user trust. PICO’s SecureMR was one promising example: it processes sensitive sensor data (e.g., video feeds) on device while still letting developers build smart overlays, real-time object recognition, and environment-aware guidance. But across the event, privacy wasn’t treated as central. For XR x AI to scale, trust can’t be retrofitted - it must be built in from the start.
XR x AI Adoption in Enterprise
TLDR
Enterprise now accounts for 71% of the XR market, with AI increasingly integrated to enhance functionality and performance.
AI-driven AR is enabling shared spatial understanding between humans and robots, transforming physical workspaces into responsive, context-aware environments.
Spatial mapping is the backbone of this transformation.
Deeper Dive
The conference underscored how AI-powered XR is gaining momentum in enterprise, now representing 71% of the market. Companies are using AR and AI to enhance training, navigation, and automation.
A standout example was Auki Labs’ Cactus platform, which uses spatial AI to optimize in-store operations. The system creates a “digital twin” of the store, so associates or robots can see AI-generated AR overlays of tasks, inventory, and guidance. (Fun fact: Cactus won Best Enterprise Solution at the AWE 2025 Auggie Awards).
In his keynote, Auki founder Nils Pihl emphasized the need to make the physical world accessible to AI.

Auki Labs’ system supports this by creating accurate 3D maps of physical spaces, helping AR devices and robots understand where they are in the real world.
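Conceptually, a shared spatial map means AR glasses and robots localize against the same 3D map and read or write tasks pinned to real-world positions. Here’s an illustrative sketch - placeholder types, not Auki’s actual API:

```typescript
// Hypothetical shared spatial map: glasses and robots resolve their pose against
// a common 3D map, so work can be attached to physical locations both understand.

type Pose = {
  position: [number, number, number];
  rotationQuaternion: [number, number, number, number];
};

interface SharedSpatialMap {
  localize(sensorFrame: Uint8Array): Promise<Pose>;     // where am I in this store?
  placeTask(taskId: string, pose: Pose): void;          // pin "restock shelf 4B" to a spot
  tasksNear(pose: Pose, radiusMeters: number): string[]; // what work is around me?
}

// A store associate's glasses and a warehouse robot call the same methods:
async function showNearbyTasks(map: SharedSpatialMap, cameraFrame: Uint8Array) {
  const myPose = await map.localize(cameraFrame);
  return map.tasksNear(myPose, 5); // render as AR overlays, or feed a robot's planner
}
```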
Beyond retail, enterprise XR x AI applications are expanding into logistics, manufacturing, and maintenance. Niantic Spatial’s CTO, Brian McClendon, introduced Large Geospatial Models - AI-powered world maps enabling AR devices and robots to navigate with spatial context. In demos, AR headsets guided warehouse workers while AI agents used the same data to move robots safely through factory environments. Niantic also announced a partnership with Snap to build an AI-powered global map for AR glasses, reinforcing the importance of shared spatial intelligence.
AI-enabled XR Creator Tools
TLDR
AI is streamlining XR content creation, making development faster, easier, and more accessible for creators of all skill levels.
Platforms like Snap’s Lens Studio and 8th Wall Studio now feature generative AI tools that handle tasks like 3D asset generation, natural language input, and scene recognition.
These tools lower technical barriers and expand who can build XR experiences, helping fill the content gap that's critical for broader adoption.
Deeper Dive
Many AWE 2025 announcements highlighted how AI is making AR/VR content creation faster and more accessible. Snap introduced updates to their creator tool, Lens Studio, including built-in generative AI and integrations with OpenAI, Google Gemini, and DeepSeek. Developers can now build multimodal AR experiences that understand natural language, recognize scenes, and generate 3D content in real time.
These tools significantly lower the barrier to creating context-aware AR experiences, like hearing a user’s question and overlaying a visual answer in relation to the user’s environment. It’s a notable leap forward from last year, when I spoke on a panel about making XR creation more accessible with generative AI.
Other tools showed similar progress. 8th Wall Studio (part of Niantic), which won the 2025 Auggie Award for Best Developer Tool, debuted its AI-native Asset Lab - a feature that lets creators generate 3D models, characters, and animations from a text or image prompt. This kind of one-click content generation meaningfully lowers the barrier to entry, especially given the steep learning curve of 3D modeling, animation, and interactivity typically required to build XR experiences.
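As a thought experiment, here’s roughly what that prompt-to-asset workflow looks like from a creator’s perspective - placeholder interfaces, not 8th Wall’s or Lens Studio’s real APIs:

```typescript
// Hypothetical prompt-to-asset flow: a text prompt becomes a placeable 3D model.

interface Model3D { meshUrl: string; animations: string[]; }
interface GenerativeAssetService { fromPrompt(prompt: string): Promise<Model3D>; }
interface Scene { spawn(model: Model3D, position: [number, number, number]): void; }

async function addGeneratedAsset(gen: GenerativeAssetService, scene: Scene) {
  // What previously required 3D modeling, rigging, and animation skills
  // collapses into a single text prompt plus placement in the scene.
  const ostrich = await gen.fromPrompt("an ostrich in running clothes");
  scene.spawn(ostrich, [0, 0, -2]); // two meters in front of the user
}
```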

We also saw AI supporting creators through improved world understanding. Niantic’s partnership with Snap to build an AI-powered world map enables developers to anchor content to real locations and deploy AI agents that understand those spaces semantically. This unlocks use cases like location-aware AR storytelling and immersive experiences, as well as AI-guided navigation and contextual in-the-moment assistance.
Takeaways
1. AI-enabled XR experiences are starting to deliver user value, but broader adoption depends on solving for privacy, usability, and hardware limitations.
The combination of XR and AI technology is enabling useful, hands-free experiences like real-time spatial instructions. However, scaling will require stronger privacy protections to build user trust, design patterns that keep cognitive load low, and practical hardware improvements such as longer battery life and strong interoperability with people’s current hardware (e.g., mobile phones).
2. Spatial mapping is the connective layer between XR wearables and robotics, enabling coordinated human-machine interaction.
Accurate, real-time spatial maps are powering shared understanding between people and machines, forming the foundation for more complex collaboration in enterprise environments like retail and manufacturing.
3. AI-powered creator tools are lowering barriers to creating XR experiences and fueling the XR content pipeline.
Generative AI is reducing technical barriers and speeding up development, allowing a broader, more diverse set of creators to produce high-quality AR/VR content. This shift is critical, as the success of XR wearables depends on a steady pipeline of compelling content and applications.
Human-Computer Interaction News
Your Brain on ChatGPT: A new study led by MIT researchers found that relying on ChatGPT for essay writing led to weaker neural engagement, lower memory recall, and reduced perceived ownership of work, compared to using a search engine or no tools at all. Taking breaks from AI tools during the writing process can help protect memory circuits and promote deeper cognitive and linguistic engagement.
The Illusion of Thinking: Researchers at Apple found that new “thinking” models (Large Reasoning Models) perform well on medium-difficulty tasks but break down on more complex ones. This raises important considerations for feasibility (certain tasks may not be reliably automated) and trust (confident but flawed reasoning can mislead users).
Future of Work with AI Agents: A new framework by Stanford researchers maps how workers want AI agents to support or automate tasks, as compared to what AI can currently do. The research highlights critical mismatches and opportunities for AI agent development, and offers early signals of how AI agent integration may reshape core human competencies from information-focused skills to interpersonal ones.
Building an emerging product in XR, AI, or Robotics? Sendfull helps you get to market faster - and with greater confidence - using neuroscience-backed evaluation frameworks. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull in two weeks!