I keep coming back to an episode of Moonshots, “Humanoid Robots Are Coming to Your Home This Decade,” where Peter Diamandis and David Blundin interview Bernt Børnich, founder and CEO of 1X Technologies. Børnich explains how 1X’s humanoid robots are being trained through a blend of teleoperation and autonomy, emphasizing pragmatic design, real-world learning, and trust in the home.
Teleoperation is the remote control of a machine - especially robots - by a human operator. In 1X’s case, the operator doesn’t puppet every move but gives high-level commands (e.g., “hands here,” “grasp this”). The robot’s perception and control systems figure out the low-level motions.
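To make that shared-control split concrete, here’s a minimal sketch (all names are mine, purely illustrative - not 1X’s actual stack) of how a high-level operator command stays separate from the low-level motion the robot works out itself:

```python
from dataclasses import dataclass

@dataclass
class Command:
    """A high-level teleop command, e.g. grasp the object at (x, y, z)."""
    verb: str                            # "reach", "grasp", "release"
    target: tuple[float, float, float]   # position in the robot's frame

class Controller:
    """The robot's own perception/control stack owns the low-level motion."""

    def execute(self, cmd: Command) -> None:
        # The operator never touches this part: planning and actuation are local.
        for joint_angles in self.plan(cmd):
            self.drive_motors(joint_angles)

    def plan(self, cmd: Command) -> list[list[float]]:
        # Stand-in for real motion planning (IK, collision checks, etc.).
        return [[0.0, 0.1, 0.2]]

    def drive_motors(self, joint_angles: list[float]) -> None:
        print(f"actuating joints -> {joint_angles}")

# The operator issues intent, not joint angles:
Controller().execute(Command(verb="grasp", target=(0.4, 0.0, 0.9)))
```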
These points stood out from the interview:
Teleop (read: partial automation) as a bridge - not a failure: If a task can be achieved via teleop, the hardware is capable; it’s only a matter of getting a neural network to learn the task end-to-end. This is not a shortcoming but a step towards more autonomous performance.
Teleop as training data: Each teleop session generates labeled demonstrations (see the sketch after this list). Work gets done in the moment while the data goes towards improving performance.
Outcome-first philosophy: 1X’s priority is a reliable result in the home. Whether the robot completes a task with full autonomy or with teleop support matters less than the fact that the task is done.
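To illustrate the “teleop as training data” point, here’s a hedged sketch of what a logged demonstration might look like (field names are my assumptions, not 1X’s schema): each session yields labeled examples that can later train an end-to-end policy.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Demonstration:
    """One labeled example harvested from a teleop session."""
    timestamp: float
    observation: dict   # e.g. references to camera frames, joint states
    command: str        # the operator's high-level intent ("grasp cup")
    outcome: str        # "success" / "failure", labeled after the fact

def log_demo(path: str, obs: dict, command: str, outcome: str) -> None:
    # Append one demonstration per line (JSONL) for later policy training.
    demo = Demonstration(time.time(), obs, command, outcome)
    with open(path, "a") as f:
        f.write(json.dumps(asdict(demo)) + "\n")

# The same teleop session that gets the chore done also grows the dataset:
log_demo("demos.jsonl", {"gripper_open": True}, "grasp cup", "success")
```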
In short, 1X’s goal is delivering a reliable UX today while building towards autonomy. Strategic sequencing is favored over jumping to (attempted) full autonomy (e.g., Klarna’s all-in bet on AI customer service, later walked back), because the immediate priority is a reliable outcome for the customer. How the outcome is achieved can evolve over time. AI software development could benefit from this approach.
Themes from recent book interviews echo these points. Organizations using AI in regulated industries, such as investment banking, care about whether a task gets done reliably - and more efficiently than it is done today. Whether the solution is partially or fully autonomous is irrelevant.
Partial autonomy should be the goal because agents are ultimately prediction machines. They don’t understand or care. As the paper “ChatGPT is bullshit” argues, ChatGPT is indifferent to the truth of its outputs (thanks for the reference). This is why we need humans steering these systems.
“The trouble with artificial intelligence is that computers don’t give a damn.” - John Haugeland
AI agents require structured contexts, with a human defining goals, constraining actions, or intervening when needed. Depending on your use case and technical capabilities, maybe what you need is human-in-the-loop (HITL), where the human remains the primary decision maker while the system carries out their plans. Or perhaps you need human-on-the-loop (HOTL), where a system can make and carry out its own decisions while a human supervises. We can map these as a spectrum: Manual —> HITL —> HOTL —> Fully Autonomous. (Fun fact: in the HCI literature, you’ll even find a 10-point autonomy scale!) The steps between “manual” and “fully autonomous” are collectively “partial autonomy.”
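Here’s a minimal sketch of how that spectrum shows up in code (the autonomy levels and function names are mine, purely illustrative): the same action path behaves differently depending on where the human sits.

```python
from enum import Enum

class Autonomy(Enum):
    MANUAL = 0   # human performs the task
    HITL = 1     # human approves each action before it runs
    HOTL = 2     # system acts; human supervises and can intervene
    FULL = 3     # no human in or on the loop

def run_action(action: str, level: Autonomy) -> None:
    if level == Autonomy.MANUAL:
        print(f"human performs: {action}")
        return
    if level == Autonomy.HITL:
        # Human remains the primary decision maker: block until approved.
        if input(f"approve '{action}'? [y/N] ").lower() != "y":
            print("skipped")
            return
    print(f"system executes: {action}")
    if level == Autonomy.HOTL:
        # Human supervises after the fact and can intervene or roll back.
        print("notified supervisor; awaiting possible intervention")

run_action("send refund email", Autonomy.HITL)
```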
What does the LLM equivalent of teleop look like? The agent doesn’t need to handle every step on its own. Instead, it becomes immediately more useful when it shares control and learns from human feedback.
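One way to picture it (a hedged sketch; `propose_step` stands in for any LLM call, and nothing here is a specific vendor API): the agent proposes, the human steers, and every correction doubles as a labeled demonstration - the software analog of a teleop session.

```python
def propose_step(goal: str, history: list[str]) -> str:
    # Stand-in for an LLM call that drafts the next action toward the goal.
    return f"draft action for: {goal}"

def teleop_loop(goal: str) -> list[dict]:
    """Shared control: the agent drafts, the human approves or corrects."""
    history: list[str] = []
    demonstrations = []   # corrections become training data
    for _ in range(3):    # a few steps, for illustration
        proposal = propose_step(goal, history)
        edited = input(f"proposed: {proposal!r} - Enter to accept, or type a fix: ")
        action = edited or proposal
        demonstrations.append({"proposed": proposal, "final": action})
        history.append(action)
    return demonstrations

demos = teleop_loop("summarize this week's support tickets")
```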
Together, these points support a key design principle: Automation is a spectrum, not a binary switch. Whether we’re talking about humanoid robots in the home or AI agents in the workplace, the most successful systems will be those that prioritize reliable outcomes and trustworthy interactions over a premature leap to full autonomy.
The takeaway for product and design leaders building agents and robots? Sequence capability development: deliver value now in ways that respect human context, while progressively expanding the system’s role - where doing so is both desirable and technically feasible.
This is Part I of a three-part series on the Principles of Automation - a peek into topics from my forthcoming book. Working on agentic workflows or robotics? I’d love to interview you - reach out at stef@sendfull.com
⏪ Recent Episodes
ep. 76: A Parking Meter's Quiet AI Design Wisdom
ep. 75: Still in the Loop
ep. 74: What Canoeing Can Teach Us About Human-AI Partnership
📖 Good Reads
Stop Applying to Jobs Like a Desperate College Grad (Even if You Are One) by
2025 Is About Vibe-Coding, 2026 Will Be About Consequences! by
That’s a wrap 🌯 . More on UX, HCI, and strategy from Sendfull in two weeks!