
At the start of 2024, I predicted that AI and extended reality (XR) would become deeply intertwined. Why does this matter for the XR industry?
As AI development advances, it unlocks the value of XR wearables - think interacting with ChatGPT through voice input and audio output on smart glasses, or adaptive training scenarios that respond to your voice and gestures in a VR headset. On the creator side, AI can lower the barrier to creating 3D assets and interactive experiences that bring virtual worlds to life.
As someone who has worked in XR for over a decade, this value unlock feels significant. AR and VR have historically struggled to move from novelty to mainstream appeal. AI feels like the missing piece to bridge that gap. That’s why I’m excited to head to Augmented World Expo (AWE) next week - reprising my role as main stage host and spending hands-on time with the latest tech.
AWE is the XR industry's flagship conference (think CES, but exclusively for XR). This year’s event draws 6,000+ attendees, 300 exhibitors, and nearly 400 speakers. The conference theme is the AI + XR imperative, with the core thesis that AI needs XR to perceive and interact with the real world, while XR needs AI to create valuable experiences - faster and cheaper.
In this week’s episode, I share the three themes I’ll be watching at this year’s conference.
What I'm Watching at AWE 2025
1. The XR x AI User Experience
Nearly every AWE exhibitor will showcase AI integration, reflecting the growing convergence of AI and XR. Qualcomm’s keynote will highlight how on-device AI processing is enabling the next generation of smart glasses, reducing reliance on the cloud and allowing for real-time interaction.
I’m curious how this AI integration will feel from a user perspective. Will AI-powered smart glasses deliver truly real-time responsiveness, minimizing the gap between user intent and system output? How seamlessly will these devices interoperate with users’ existing hardware, like smartphones or smartwatches? How are consumers thinking about user and bystander privacy in a perpetually context-aware wearable?
“We used to ask what the killer app for XR would be. Now we realize it’s AI itself - and vice versa. XR is the killer interface for AI.” - Ori Inbar, AWE Co-Founder
2. Enterprise AI x XR: Use Cases and Adoption
Enterprise use now accounts for 71% of the XR market. AWE 2025 will reflect this with a tailored VIP program for sectors including healthcare, manufacturing, retail, and defense. This momentum highlights the value of applied, industry-specific solutions.
I’ll be watching for how AI is integrated into these XR use cases - and how it’s being adopted. My prediction? Progress will be slow. XR is already challenging to scale, and two-thirds of organizations fail to scale AI across their business.
3. AI-Enabled Creator Tools
Text-to-3D platforms, multimodal interfaces, and tools like Sloyd, Bezi, and Scenario are transforming who gets to create for immersive environments. I explored this shift during an AWE 2024 panel, How Generative AI Can Make XR Creation More Accessible, with Dylan Fox, Sean Dougherty, and Ash Shah.
This evolution is especially meaningful for creators with disabilities, who can now leverage multimodal inputs and outputs throughout the creative process, as well as for those previously excluded due to the technical barriers of 3D workflows. I’ll be watching for the latest creator tools - how they lower the barrier to entry and how they address accessibility.
Summary
AI is starting to unlock the full potential of XR. At this year’s AWE, I’ll be watching closely for how that’s playing out in real-world tools, enterprise applications, and creator workflows.
If you’re planning to attend but haven’t registered yet, feel free to use code SPKR25D for 10% off your pass.
Curious how we got here?
Check out past Sendfull episodes from AWE 2024 for more context and conversation.
Human-Computer Interaction News
AI Trend Report: Mary Meeker and her team compile foundational data on AI trends. Highlights: charts on consumer AI adoption (slide 55), AI agent interest (slide 91), and the rise of physical agents (slide 300).
Teaching AI Models What They Don’t Know: MIT spinout Themis AI has developed a platform that helps AI models identify and communicate their own uncertainty. This enables more reliable outputs in high-stakes applications like drug discovery, autonomous systems, and LLM deployment.
Extending Minds with Generative AI: Cognitive philosopher Andy Clark, in a recent Nature piece, reframes generative AI as a cognitive partner rather than a threat. He argues that humans are “natural-born cyborgs,” and the real challenge is learning when and how to trust these tools.
Designing emerging technology products? Sendfull can help you find product-market fit. Reach out at hello@sendfull.com
That’s a wrap 🌯. More human-computer interaction news from Sendfull in two weeks!