ep. 13. New AI-powered devices help us accomplish our goals more quickly and intuitively. Is this enough to drive hardware adoption?
7 min read
Moving beyond apps to accomplish user goals
We’re at a human-computer interaction (HCI) inflection point, where AI-powered devices stand to shorten the loop between user intent and desired output, relative to our current interactions with apps. This helps people accomplish their goals on their devices more efficiently, effectively and with greater satisfaction (in other words: improves usability).
Consider a scenario where I want to schedule lunch with a friend using today's typical tools. We can break this goal into tasks (a method known as task analysis): I start out in my messaging app, where we first discuss lunch plans. I then open my calendar to check my availability. Next, I open Yelp to look for restaurant recommendations, and Google Maps to see how long it will take me to get to a given restaurant. Only then do I return to my messaging app to share options and propose a time to meet. I have used four different apps - and this isn't even getting into the specific subtasks performed within each one.
While this scenario is familiar, it is also high-friction. There is a large delta between conceptualizing my goal and actually accomplishing it. AI devices announced in the last three months, such as the Humane AI Pin, Rabbit R1 pocket companion and Samsung Galaxy S24 smartphone, promise a future in which you use natural language and/or an intuitive gesture to express your goal and near-instantly accomplish it, without all of the apps and taps in between.
Using the Humane AI Pin, you can - for example - set your AI Mic to ambiently listen to a conversation, and then later ask it for key details, both outsourcing memory and obviating the need for a notes app.
Rabbit R1 goes further. It controls apps and processes on your existing phone based on your requests, replicating your interactions with apps instead of you performing them. In the lunch scheduling example, Rabbit theoretically could perform all of the calendar checking, recommendation gathering and so on, informed by past behavior patterns. The result: optimal lunch plans without me having to open a single app.
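To make that concrete, here is a minimal, hypothetical sketch of what this kind of goal decomposition could look like under the hood: one stated goal in, the same subtasks from the task analysis above executed by an agent, one proposal out. The function names and data are illustrative placeholders of my own, not Rabbit's (or any vendor's) actual implementation.

```python
# Hypothetical sketch of goal decomposition: the agent performs the same
# subtasks identified in the task analysis above, instead of the user.
# All function names and data below are illustrative placeholders.

def check_calendar() -> list[str]:
    """Stand-in for opening my calendar app to find free lunch slots."""
    return ["Tue 12:00", "Thu 12:30"]

def find_restaurants(cuisine: str) -> list[str]:
    """Stand-in for browsing a reviews app for recommendations."""
    return ["Noodle Bar", "Taqueria Luz"]

def travel_minutes(place: str) -> int:
    """Stand-in for checking a maps app for travel time."""
    return {"Noodle Bar": 15, "Taqueria Luz": 25}.get(place, 30)

def plan_lunch(cuisine: str = "tacos") -> str:
    """Decompose 'schedule lunch' into subtasks and run them in sequence:
    calendar -> recommendations -> travel time -> proposed plan."""
    slots = check_calendar()
    options = find_restaurants(cuisine)
    best = min(options, key=travel_minutes)  # stand-in for "informed by past behavior"
    return f"How about {best} at {slots[0]}? (~{travel_minutes(best)} min away)"

if __name__ == "__main__":
    print(plan_lunch())  # one proposal back to the user, no app-hopping
```

The point is not the code itself but the shape of the interaction: the user expresses a goal once and gets an actionable result back, with the intermediate apps and taps absorbed by the agent.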
Samsung Galaxy S24’s Galaxy AI introduces several new interaction paradigms that move us away from apps. Some examples include:
Circle to Search with Google: After long-pressing the home button, the user can circle, highlight, scribble on or tap anything on the phone screen to see search results.
Live Translate: Two-way, real-time voice and text translation of phone calls within the native app (incidentally, done on-device to keep conversations private).
Transcript Assist: Transcribe, translate and summarize voice memos (on-device - no internet required).

This HCI future, in which an AI breaks a user's request into different tasks and performs them, informed by past user behavior and current context, sounds intuitive and satisfying to me. We can see how this quickly morphs into anticipating user needs. For example, I'm trying to find my flight gate in a rush. My phone surfaces directions without me asking, because it has all the data it needs to safely infer that this is the information I need to accomplish my goal. (Bonus points if it shows me this heads-up while I'm wearing smart glasses.)
In short, these AI-powered devices are obviously better at meeting your goals and needs than what we have today, so people will just adopt them, right? Right??
The plot thickens: user adoption
I was struck by an X post from Ryan Hoover (founder of Product Hunt), in which he posited that new devices like the Humane AI Pin and Rabbit R1 will have an easier time building new consumer behaviors than incumbents. He also notes a downside: new hardware adoption is much slower than software adoption.

I agree with parts of this.
I do think people will quickly learn and come to prefer these more intuitive ways to interact with machines and get their desired output. Who wouldn't want to reduce friction to accomplish their goals? Assuming we help people discover this new, more intuitive way of interacting with their devices, and the AI works as marketed, I predict people will move into this post-app future and not look back.
And yes, hardware adoption is generally slower than software adoption for a number of reasons (cost, physical logistics, risk of obsolescence, etc.).
Where I disagree: I think "incumbent devices", equipped with functionality like what's on offer with the new Samsung Galaxy S24 smartphone, are a Trojan horse that will make these frictionless, post-app interactions normcore. As much as I admire Humane's and Rabbit's innovation, their devices also mean I now have to carry around another gadget to make my existing device (a phone with apps) more usable. It's an ultra-high-tech workaround.
“But these devices sense my context with cameras and AI microphones!” Yes, but you can make a lot of that happen on a phone, and make all of it work on smart glasses and spatial computing headsets (by virtue of them being on your face and not in your pocket).
Lastly, it is unclear what unique strength an AI pin or pocket companion offers relative to the AI-capable devices we're already familiar with (e.g., smartphones, watches), so these form factors strike me as an awkward but necessary evolutionary step in the broader trajectory of AI hardware development. You could try to leverage the same argument against smart glasses or spatial computing headsets, except that their unique strengths relative to existing devices are clearer: situation awareness and greater user agency, in which the user controls where they direct attention and what they interact with (via sight) - unlike a lapel pin or pocket companion.
Takeaways
AI-powered devices like Humane's AI Pin, Rabbit R1 and the Samsung Galaxy S24 smartphone show us a glimpse of future HCI, where we shorten the loop between user intent and desired output. This model reduces user friction relative to our current app-based interactions and is broadly desirable.
While they offer this frictionless user experience, AI pin and pocket companion form factors are unlikely to be adopted, as they lack unique strengths relative to incumbent devices, which are (or will soon be) capable of reducing friction via AI-driven HCI. Smart glasses and spatial computing headsets may fare better, as they deliver unique strengths like situation awareness and user agency, relative to incumbent devices.
Approaches for identifying opportunities to reduce friction and increase adoption of products include: understanding the user’s goal and context, conducting a task analysis based on those user goals, and honing the unique strengths a product offers to people, relative to incumbents.
Human-Computer Interaction News
Meta pledges to build open-source artificial general intelligence (AGI), centers glasses as key AI user interface: While Mark Zuckerberg's announcement on AGI was the major focal point, he also talked about the need for new devices to more effectively interact with AI. He predicted that glasses will be a central user interface: because they let an AI "see what you see and hear what you hear," they're "always available to help out." Ray-Ban Meta glasses and Meta AI are already headed in this direction, and we're seeing positive reception from early adopters (see: "Smart glasses without displays are surprisingly useful").
AI was center-stage at the World Economic Forum's (WEF) Annual Meeting at Davos: There's been detailed coverage of this already, so I'll instead focus on the comprehensive Global Risks Report 2024 released by the WEF. The top five risks believed to present a "material crisis on a global scale in 2024" were extreme weather (66%), AI-generated misinformation and disinformation (53%), social and/or political polarization (46%), cost-of-living crisis (42%), and cyberattacks (39%). Percentages were based on responses from 1,490 experts across academia, business, government, the international community and civil society.
New McKinsey report on the economic potential of generative AI: A key takeaway was that ~75% of the unique value that generative AI use cases could deliver (relative to previous technologies) falls across four areas: Customer operations, marketing and sales, software engineering, and R&D. Examples included gen AI’s ability to support interactions with customers, generate creative content for marketing and sales, and draft computer code based on natural-language prompts, among many other tasks.
Using brain activity to control robots: New research at Stanford has demonstrated that people can direct robots to move objects, clean countertops, play tic-tac-toe, pet a robot dog, and cook a simple meal - using only their brain signals (measured via electroencephalography, or EEG). The technology is called Neural Signal Operated Intelligent Robots (NOIR), and serves as a proof-of-concept example that AI can “successfully decode complex brain wave activity and turn it into robotic action.”
Want to identify ways to reduce friction in your user experience? Identify the unique strengths of your product relative to competitors? Sendfull can help. Reach out at hello@sendfull.com
That’s a wrap 🌯. More human-computer interaction news from Sendfull next week.