The Humane AI Pin, a much-anticipated screen-less, multi-modal wearable, recently started shipping. While many appreciate the product’s concept of moving us away from screens, user experience (UX) feedback has largely been negative. In this episode, I share feedback themes and recommendations, and offer three lessons that the AI Pin can teach us about building for emerging technology.
What is the Humane AI Pin?

The AI Pin, first announced on November 9, 2023, is a screen-less wearable device in a pin form factor. Its sensors allow you to use voice commands and gestures, take pictures, and recognize objects in the environment. A built-in laser can project menus and text onto your palm. In Humane’s words, the AI Pin lets you “understand, create, communicate and remember, all while being present”. It costs $699 USD, plus a $24 monthly subscription.
The AI Pin recently started shipping in the United States, and reviewers are sharing mostly critical feedback. We’ll discuss three themes across reviews from The Verge, The New York Times, Fast Company, The Washington Post, and Bloomberg, as well as YouTube reviewers like Marques Brownlee.
Three UX Feedback Themes
1. Lacking unique user value relative to existing technology.
The AI Pin was designed to be a phone replacement, with its own phone number, a cellular data connection for placing phone calls and playing music (only via the music streaming service Tidal), and a camera to capture photos and videos. However, reviewers agree that the AI Pin isn’t better than your smartphone at most tasks, leading to user friction and frustration.
For instance, given the choice between taking a picture with the AI Pin on your jacket lapel and looking through your phone’s viewfinder, you’ll likely choose the latter, because the viewfinder gives you immediate feedback that you’re capturing what you intend to capture. You also cannot access frequently used tools from the Pin (e.g., you cannot order an Uber), meaning you still need your phone to accomplish common tasks.
If we were to redesign the AI Pin, we would start from the assumption that people (and culture) change more slowly than technology. No matter how compelling the speculative design vision of a screen-less future, adopting new technology still requires user behavior change. For behavior to change, new technology needs to offer value above and beyond currently used technology, and consider how that technology fits within culture. For instance, even if the AI Pin were at feature parity with your smartphone, we’d still need to consider the UX of connecting with non-Pin users (e.g., if a smartphone user initiates a video call with an AI Pin user, what does that experience look like?) and social acceptability (we’ll get to that in #2).
Recommendation: Offer interoperability with a user’s smartphone, making the AI Pin part of your interconnected technology ecosystem. For instance, I could leverage the AI Pin’s portability and voice recording capability to capture a newsletter idea on the go, without pulling out my phone. From there, the idea is transcribed into something I can copy-paste (e.g., from my Notes app) into my Google Doc when back at my laptop. Focusing on fewer high-value features that prioritize interoperability with existing technology could help increase adoption and potentially lower device cost - not unlike the newly announced Limitless pendant ($99 USD, plus an optional $19 monthly Pro subscription).
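To make that interoperability idea concrete, here is a minimal sketch of the capture-transcribe-sync flow described above. Everything in it is a hypothetical stand-in: the transcription step and the "notes outbox" folder are assumptions for illustration, not Humane’s actual SDK or any notes app’s real API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class VoiceCapture:
    """A voice memo captured on the wearable."""
    audio_path: Path
    captured_at: datetime


def transcribe(capture: VoiceCapture) -> str:
    """Placeholder for on-device or cloud speech-to-text (hypothetical)."""
    # A real system would call a speech recognition service here.
    return f"[transcript of {capture.audio_path.name}]"


def sync_to_notes(transcript: str, outbox: Path) -> Path:
    """Write the transcript where an existing notes app could pick it up."""
    outbox.mkdir(parents=True, exist_ok=True)
    note = outbox / f"pin-memo-{datetime.now(timezone.utc):%Y%m%dT%H%M%S}.txt"
    note.write_text(transcript, encoding="utf-8")
    return note


if __name__ == "__main__":
    capture = VoiceCapture(Path("memo.wav"), datetime.now(timezone.utc))
    print(sync_to_notes(transcribe(capture), Path("notes_outbox")))
```

The point of the sketch is the shape of the flow: the wearable does the lightweight capture, and the user’s existing phone/laptop ecosystem remains the place where the idea actually gets used.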
2. Frustration over voice recognition and response limitations.
The AI Pin is a voice-centric device, which - in its current implementation - limits the contexts in which people can use it. If you’re in a noisy environment, it becomes harder for the device to detect your voice and harder for you to hear its response, leading to increased friction from repeated commands and inaccurate responses. Having voice as the primary input also presents privacy concerns: people can easily overhear you, limiting use around private conversations. Finally, you cannot subtly use the Pin around others. As The Verge puts it, “When you stand in front of a building, tapping your chest and nattering away to yourself, people will notice.” This speaks to the importance of designing for social acceptability (see #1).
Recommendations: To improve voice recognition accuracy, consider how we might better differentiate between user commands and background noise (e.g., by employing machine learning models that adapt to the user’s speech patterns and preferences). Implementing contextual understanding to interpret commands based on the user’s location, past interactions, and current activity (with the user’s permission, of course) could reduce the need to repeat or rephrase commands.
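One way to read that recommendation is as a re-ranking problem: instead of acting on the raw speech-recognition guess, weight candidate intents by opt-in context signals like where the user is and what they were just doing. The sketch below is a simplified, hypothetical illustration of that idea; the intent names, context labels, and priors are assumptions, not the AI Pin’s actual pipeline.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    intent: str            # e.g., "play_music", "navigate", "capture_photo"
    asr_confidence: float  # 0..1 score from the speech recognizer

# Hypothetical priors: how likely each intent is given the user's current context.
CONTEXT_PRIORS = {
    ("commuting", "play_music"): 0.6,
    ("commuting", "navigate"): 0.3,
    ("commuting", "capture_photo"): 0.1,
    ("sightseeing", "capture_photo"): 0.7,
    ("sightseeing", "navigate"): 0.2,
    ("sightseeing", "play_music"): 0.1,
}


def rerank(candidates: list[Candidate], context: str) -> Candidate:
    """Combine recognizer confidence with a context prior (opt-in signals only)."""
    def score(c: Candidate) -> float:
        prior = CONTEXT_PRIORS.get((context, c.intent), 0.05)
        return c.asr_confidence * prior
    return max(candidates, key=score)


if __name__ == "__main__":
    noisy_guesses = [
        Candidate("play_music", 0.55),
        Candidate("capture_photo", 0.50),  # nearly tied with the top ASR guess
    ]
    # In a sightseeing context, the photo intent wins despite lower ASR confidence.
    print(rerank(noisy_guesses, context="sightseeing").intent)
```

The design choice this illustrates: when audio is noisy and ASR scores are close, context can break the tie, which means fewer repeated or rephrased commands for the user.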
To address privacy concerns related to voice commands in public spaces, consider a low-profile interaction mode. This mode could allow for whisper-level voice commands or even non-verbal cues like tapping or subtle hand gestures to activate specific functions without drawing attention. This discreet interaction mode would make the device more socially acceptable and versatile, enabling users to engage with their AI Pin quietly in a variety of settings.
3. Latency between user intent and output.
The AI Pin often lags behind the pace of natural conversation, which can result in users repeating commands or rephrasing questions in an effort to achieve their desired outcome. These delays increase the latency between user intent and output, disrupting the flow of interaction and leading to user disappointment.
Users have also expressed that while AI-based object recognition is a compelling feature (take note, extended reality wearables companies!), latency issues make it less efficient than using a smartphone for similar tasks. Watch this in action when Marques Brownlee compares object recognition of a Cybertruck via the AI Pin and a smartphone - in short, the smartphone returns the response faster than the Pin. This leads to user dissatisfaction and signals to users that they cannot rely on the AI Pin for tasks that require immediate feedback or action.
Recommendation: Until we can enhance AI processing speed (e.g., through on-device processing), design for scenarios in which the AI Pin can offer unique user value over a smartphone. For instance, when does a user not need high-accuracy, immediate feedback? Maybe this looks like capturing data while out (e.g., snapping pictures of interesting restaurants), and then being able to query the AI Pin for synthesized information about what you captured (e.g., recommendations for a vegetarian restaurant that can seat five people tomorrow night). This recommendation applies not just to the AI Pin, but to any AI solution with noticeable latency in returning a response.
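As a sketch of that “capture now, query later” pattern: captures are logged immediately and cheaply, and the slower AI synthesis happens only when the user asks a question later, so latency never sits between the user and the moment of capture. The capture store and keyword query below are hypothetical stand-ins for whatever storage and synthesis the device actually uses.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Capture:
    kind: str          # e.g., "photo", "voice_note"
    description: str   # metadata extracted later, off the critical path
    captured_at: datetime


@dataclass
class CaptureStore:
    items: list[Capture] = field(default_factory=list)

    def add(self, kind: str, description: str) -> None:
        """Fast path: just log the capture; no AI call, no waiting."""
        self.items.append(Capture(kind, description, datetime.now(timezone.utc)))

    def query(self, keyword: str) -> list[Capture]:
        """Slow path: runs when the user asks later (stand-in for AI synthesis)."""
        return [c for c in self.items if keyword.lower() in c.description.lower()]


if __name__ == "__main__":
    store = CaptureStore()
    store.add("photo", "Vegetarian restaurant on the corner, looked spacious")
    store.add("photo", "Coffee shop with a long line")
    for hit in store.query("vegetarian"):
        print(hit.description)
```

The split matters for UX: the user-facing interaction (capturing) stays instant, and the latency-heavy work (synthesis and recommendations) moves to a moment when waiting a few seconds is acceptable.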
While there are other feedback themes (e.g., people praised the hardware’s solid build quality and language translation capabilities; people unanimously noted overheating issues), I focused on the themes above because I see them as most salient to emerging technology adoption. You can read more about the other themes in round-up posts from Tech Radar and Tom’s Guide.
Takeaways
The AI Pin’s UX feedback offers three lessons we can apply to building emerging technologies:
No matter how ambitious our vision, we still need to meet people where they’re at.
People won’t change their behaviors overnight because a product promises a desirable vision of a screen-less future. Instead, we need to identify how to scale that future in a way that delivers distinct benefits to the user today. This requires a more holistic understanding of the user (e.g., understanding their context, goals, attitudes, behaviors, cultural norms), and ways we can connect new technology with their current technology ecosystem, such as interoperability.
Leverage the unique “superpowers” of your emerging technology.
Identify how your technology solution can uniquely benefit the UX relative to other existing technologies. For a smartphone, unique benefits include precise input and familiar interaction patterns. For smart glasses equipped with AI capabilities, it’s the ability to learn about the world around you with greater situational awareness than looking through a phone. For smartwatches, it’s glanceable convenience and health tracking. What can an AI Pin do better than a phone? Smart glasses? A smartwatch? Double down on that use case.
Identify your “Golden Path”. Test it end-to-end with users.
A Golden Path, aka Key User Journey, is the key set of steps that a user takes to find a product’s real value. Looking across reviews, it is difficult to identify the AI Pin’s Golden Path. If the AI Pin’s value props were to “understand, create, communicate and remember”, that’s a lot of Paths to get right, especially when introducing a new interaction paradigm. Assuming you’ve done (1) and (2) with your product, now is the time to get crisp on your Golden Path. You can then iteratively test that end-to-end path with users to refine your offering leading up to launch.
Human Computer Interaction News
Building a social media algorithm that actually promotes societal values: A Stanford research team has developed a social media algorithm that integrates democratic values to decrease partisan animosity. Their study used a modified feed-ranking algorithm that downranked or replaced highly anti-democratic posts, which resulted in reduced partisan hostility among users. The findings suggest that prioritizing societal values in algorithms can maintain user engagement while promoting healthier democratic interactions.
New AI music generator Udio synthesizes realistic music on demand: Udio, an AI tool that can generate high-fidelity musical audio from text prompts, was publicly launched on April 10. Created by former DeepMind employees, Udio first generates lyrics with a large language model similar to ChatGPT, and then synthesizes the music. While the output is technically impressive, it still requires significant human input to achieve high-quality results.
The AI Index Report: Stanford’s Human-Centered AI group shared the seventh edition of the AI Index report, covering trends like technical advancements in AI, public perceptions of the technology, and geopolitical dynamics surrounding its development. One of the top takeaways is that AI beats humans on some tasks, but not all. AI surpassed human performance on image classification, visual reasoning and English understanding. Humans excelled at competition-level math, visual commonsense reasoning and planning. Time to update our Fitts Lists, regarding what machines and humans are better at, respectively.
Looking to understand and enhance the user value of your emerging product? Sendfull can help. Reach out at hello@sendfull.com
That’s a wrap 🌯 . More human-computer interaction news from Sendfull next week.