As a Canadian living in the United States, this is a big week for holidays. We had Canada Day on July 1, and Independence Day - aka July 4 - is tomorrow. Keeping with the summer holiday theme, this week's episode is a light read: freshly uploaded recordings from this year's Augmented World Expo conference, and a throwback to a talk I gave in 2020 with wonderful collaborator and design researcher Laura Herman. Have a restful week, everyone!
AWE Talk Roundup
#1: Building for the Future of Human Cognition (watch here)

As XR and AI become increasingly intertwined, we have the potential to extend human cognition in ways that were previously impossible. In my talk, I discussed how extended reality (XR) and AI can augment cognition, described potential UX futures at the intersection of the two, and shared a framework to help teams build experiences that are useful and desirable.
#2: How Generative AI Can Make XR Creation More Accessible (watch here)

Our panel discussed the challenges facing creators with disabilities, the AI capabilities that could support those creators (and, per the curb-cut effect, all creators), and how these developments could positively affect the XR x AI industry as a whole. The panel comprised Dylan Fox, Director of Operations at XR Access; Sean Dougherty, Director of Accessible UX Services at LightHouse SF; Ash Shah, AR Prototype Engineer at Magic Leap; and yours truly.
Want to learn more about XR accessibility? Check out episode 33 for my thoughts on generative AI's role in XR accessibility, and Dylan's write-up on all things accessibility x XR x genAI at AWE.
#3: Multisensory Perception in XR: Insights from Neuroscience and User Research (watch here)

In this talk from AWE 2020, Laura Herman and I discussed how XR technology affects our perception, and what we can learn from our perceptual systems to inform the multisensory design of XR experiences. We covered how the human brain perceives the world, how the brain perceives in immersive experiences, and how we can leverage this understanding to build a more empathetic future of spatial computing.
We noted that computers were "rapidly evolving to better perceive the world through technologies like computer vision and voice interfaces" and that "simultaneously, users increasingly expect multisensory user experiences in lieu of traditional, two-dimensional audiovisual interfaces." Fast-forward to today's multimodal generative AI interfaces and the growing overlap between AI and XR - it looks like we got a few things right! :)
Human-Computer Interaction News
Character.AI now allows users to talk with AI avatars over calls: Back in episode 2, I wrote about the disproportionately high DAU/MAU ratio of Character.AI relative to other AI-first products like Runway and ChatGPT. At the time, people could only interact with their AI character of choice via text. The company recently announced that people can now also call their AI character, enabling a more immersive, dynamic experience. In testing ahead of launch, more than 3 million users made over 20 million calls. Use cases included practicing language skills, mock interviews, and role-playing games.
Figma announces big redesign with AI: Figma's Config conference took place on June 26-27. Two major announcements were a significant UI redesign and new generative AI tools to help people more easily create design projects. One example of the new AI capabilities in use: prompt Figma to create a design for a new restaurant app, and the tool generates an app mock-up with menu listings, a tab bar, and buttons for delivery partners like Uber Eats and DoorDash.
Magical thinking and textual skeuomorphism: prompt engineering incantations is terrible UX: Design leader Mike Kuniavsky wrote a great piece on why we need to shift away from treating prompts as magical incantations and towards developing more practical and insightful interaction frameworks. He argues that prompt engineering embodies a cluster of unhelpful interaction metaphors (and is therefore terrible UX), and calls for designers to think about what the alternative could be (perhaps something like a programming language).
Is your team working on AR/VR solutions? Sendfull can help you test hypotheses to build useful, desirable experiences. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull next week.