Ramping Figure 03 Production
The transition from functional prototype to scalable fleet is a significant hurdle in humanoid robotics. Figure has cleared that hurdle with BotQ, our high-volume manufacturing facility. The Figure hardware and manufacturing teams have transformed BotQ from a blueprint into a high-output environment, delivering over 350 of our third-generation humanoid robots and increasing our production rate from 1 Figure 03 per day to 1 per hour - a 24x throughput improvement in under 120 days.
This 24x increase represents more than just manufacturing efficiency - it is a fundamental shift in our development velocity. By expanding our fleet, we are generating the data streams required to unlock next-generation autonomous capabilities. The following sections outline how we have scaled production, how we manage a growing robot fleet, and how this scale has enabled our latest technological breakthrough: perception-conditioned whole-body control.

Ramping Robot Production
BotQ has successfully transitioned from the prototype phase to the production phase with dedicated lines for all critical modules of the system. We have demonstrated the one-robot-per-hour cycle time needed for our Figure 03 production targets, and will continue to optimize from here. The scale-up was made possible by developing efficient lines powered by custom-built manufacturing execution software running across over 150 networked workstations.
The largest fundamental challenge was improving yield rates and cycle times. First, we had to drastically improve quality at the supplier level by qualifying hundreds of suppliers against incoming inspection criteria to ensure only good parts make it into the building. Every step of the build process has quality checks along the way, with more than 50 in-process inspection points ensuring that only good subassemblies roll through the lines. Our rigorous focus is already paying off: our end-of-line first-pass yield is now over 80% and improving weekly, our battery line has achieved a 99.3% first-pass yield and shipped over 500 battery packs, and we have produced over 9,000 actuators across more than 10 distinct SKUs.
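The yield figures above can be made concrete with standard manufacturing math. The sketch below is illustrative only - the function names and station counts are assumptions, not Figure's actual tooling - but it shows how first-pass yield (FPY) at one station relates to the rolled throughput yield (RTY) of a multi-station line:

```python
# Illustrative first-pass yield (FPY) math; not Figure's actual tooling.
# FPY = units passing all checks on the first attempt / units started.
# RTY multiplies FPY across sequential stations, which is why per-station
# yields must be very high for an end-of-line yield above 80%.

def first_pass_yield(passed_first_try: int, started: int) -> float:
    """Fraction of units that pass without any rework."""
    return passed_first_try / started

def rolled_throughput_yield(station_fpys: list[float]) -> float:
    """Probability a unit clears every station on the first try."""
    rty = 1.0
    for fpy in station_fpys:
        rty *= fpy
    return rty

# Example: three hypothetical stations at 99% FPY each.
print(round(rolled_throughput_yield([0.99, 0.99, 0.99]), 4))  # 0.9703
```

The multiplicative structure is the point: with 50 in-process inspection points, even a 99.5% per-station yield compounds to roughly 78% end to end, which is why supplier-level quality matters so much.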

We have also implemented sophisticated End-of-Line (EOL) and in-process testing, with each robot subjected to over 80 functional verification tests before sign-off. This includes multi-limb stress testing and comprehensive "burn-in" sessions where robots perform full-body exercises such as squatting, shoulder presses, and jogging at cycle counts in the thousands to replicate real-world use and eliminate early-cycle failures.
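The burn-in process described above can be sketched as a simple sign-off loop. The exercise names come from the text; the `run_exercise` interface, cycle counts, and result structure are hypothetical stand-ins, since Figure's actual EOL harness is not public:

```python
# A minimal sketch of an end-of-line (EOL) burn-in loop, assuming a
# hypothetical run_exercise interface. A real harness would command the
# robot and monitor joint torques, temperatures, and fault flags.

from dataclasses import dataclass

@dataclass
class ExerciseResult:
    name: str
    cycles_completed: int
    passed: bool

def run_exercise(name: str, cycles: int) -> ExerciseResult:
    # Placeholder: assume the robot completes every commanded cycle.
    return ExerciseResult(name=name, cycles_completed=cycles, passed=True)

# Full-body exercises at cycle counts in the thousands, per the text.
BURN_IN_PLAN = [("squat", 2000), ("shoulder_press", 2000), ("jog", 3000)]

def burn_in(plan) -> bool:
    """Robot signs off only if every exercise completes all cycles."""
    results = [run_exercise(name, cycles) for name, cycles in plan]
    return all(r.passed and r.cycles_completed >= cycles
               for r, (_, cycles) in zip(results, plan))
```

Structuring burn-in as an all-or-nothing gate over a declarative plan makes it easy to add new exercises or raise cycle counts as field data reveals new early-cycle failure modes.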
Ramping Robot Operations
The increased volume of robots we are running at HQ and beyond directly accelerates our technical roadmap and operational intelligence. Robots shipped from BotQ are allocated to internal research and development groups, data-collection efforts for robots to perform end-to-end housework, and commercial use-case development. The larger our fleet becomes, the more data we are generating for Helix, our humanoid AI model. This growth also allows us to deploy more robots into the real world, where we can use direct feedback from live operating environments to harden our system and capabilities.
By running more robots for longer durations, we have encountered and resolved failures that were invisible at a smaller scale. This influx of real-world data has enabled us to:
Build Robust Diagnostics: We have implemented an advanced alert system and diagnostics for Failure Analysis, allowing us to pinpoint the root cause of an issue in minutes.
Implement Fallback Ladders: We have engineered software "fallback ladders" that allow a robot to gracefully degrade its performance or safely recover from a non-critical fault, keeping the use case running.
Solve the Long Tail: Having already addressed the high-frequency hardware and software issues, our focus has shifted to the "long tail" of edge-case failures - a stage of maturity that only comes with significant fleet hours.
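The fallback-ladder idea above can be sketched as a simple mode-descent policy. The mode names and fault model here are illustrative assumptions, not Figure's actual software; the point is that a non-critical fault steps the robot down one rung rather than stopping the use case:

```python
# A sketch of a "fallback ladder": on a non-critical fault, descend to
# the next-most-capable mode; on a critical fault, jump to a safe stop.
# Mode names are hypothetical examples, not Figure's actual modes.

FALLBACK_LADDER = [
    "full_speed",     # nominal operation
    "reduced_speed",  # degrade throughput, keep the task running
    "single_arm",     # isolate a faulted limb, continue one-handed
    "safe_stop",      # last rung: controlled stop, await service
]

def next_mode(current: str, fault_is_critical: bool) -> str:
    """Pick the next operating mode after a fault is detected."""
    if fault_is_critical:
        return FALLBACK_LADDER[-1]
    i = FALLBACK_LADDER.index(current)
    return FALLBACK_LADDER[min(i + 1, len(FALLBACK_LADDER) - 1)]
```

For example, `next_mode("full_speed", fault_is_critical=False)` descends one rung to `"reduced_speed"`, while any critical fault lands directly on `"safe_stop"`.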
To sustain this expanding fleet, we are aggressively scaling our service and support infrastructure. We have developed our own internal Field Service Management system and the tooling needed to service Figure 03 in diverse environments, from our HQ to customer sites to residential homes. This service loop is a critical feedback mechanism: field failures are tracked, analyzed, and fed back to the engineering team for hardware revisions. To manage this, we have established rigorous processes for fleet-wide upgrades and recall campaigns, ensuring every robot in the field stays current with our latest engineering standards. We have also built a custom Fleet Management System that coordinates robots autonomously across various use cases while tracking real-time health, location, and operational status. Combined with our Over-the-Air (OTA) software update infrastructure, we can now deploy new behaviors and upgrades to the entire fleet simultaneously.
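One common pattern for fleet-wide OTA deployment is a staged rollout: push the update to a small canary wave first, then to the rest of the healthy fleet. The sketch below illustrates that pattern only - the fleet-state dictionary, robot IDs, and canary fraction are assumptions, as Figure's Fleet Management System internals are not public:

```python
# Illustrative staged OTA rollout planner. Robots reporting faults are
# excluded; a small canary wave receives the update before the rest.

def plan_rollout(fleet: dict[str, str], canary_fraction: float = 0.1):
    """Split healthy robots into a canary wave and a main wave."""
    healthy = sorted(rid for rid, status in fleet.items()
                     if status == "healthy")
    n_canary = max(1, int(len(healthy) * canary_fraction))
    return healthy[:n_canary], healthy[n_canary:]

# Hypothetical fleet snapshot keyed by robot ID.
fleet = {"fig-001": "healthy", "fig-002": "healthy",
         "fig-003": "fault", "fig-004": "healthy"}
canary, main_wave = plan_rollout(fleet)
print(canary)     # ['fig-001']
print(main_wave)  # ['fig-002', 'fig-004']
```

Gating the main wave on canary health metrics is what lets a single update reach the whole fleet without risking a fleet-wide regression.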
New Capabilities - Perception-Conditioned Whole-Body Control
Our capabilities are growing with fleet size. Our most recent unlock is for Helix’s System 0 (S0), an AI model for human-like whole-body control. Until now, S0 reasoned only about the robot's own body - joint state, base motion, and proprioception. It walked confidently across flat ground but was blind to the world in front of it. Stairs, ramps, and uneven terrain required hand-tuned mode switches and operator intervention, because a controller that cannot see cannot anticipate. Closing that gap meant giving the policy a representation of the environment it was about to step into, without sacrificing the stability, robustness, and human-like quality of motion that the existing whole-body controller already delivered.
S0 now has a new capability: it is conditioned on camera perception. RGB images from the onboard head cameras are passed through our stereo model, which lifts them into a 3D representation of the world around the robot. The resulting spatial understanding of the scene is passed to the policy alongside proprioceptive state - so S0 doesn't just feel the ground anymore, it sees it. The policy is trained end-to-end with reinforcement learning in simulation, across thousands of randomized terrains, and the same network weights that learn to climb procedurally generated staircases in sim now traverse real-world stairs on the robot. The transfer happens zero-shot - no real-world fine-tuning, no domain-specific calibration, no operator-in-the-loop adjustments, and across varying lighting conditions. In practice, the sim-to-real gap that has historically gated perception-driven control is no longer the bottleneck for this class of behavior: what S0 learns in simulation is what it does on hardware, with the same human-like gait and recovery characteristics it exhibited on flat ground.
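At the interface level, conditioning a whole-body policy on perception amounts to concatenating proprioceptive state with features extracted from the 3D scene before the policy network. The sketch below is purely conceptual - the dimensions, the flattened-height-map "encoder," the 23-joint action size, and the linear "policy" are all illustrative assumptions, whereas the real S0 policy is a network trained end-to-end with RL in simulation:

```python
# Conceptual sketch: [proprioception | perception features] -> actions.
# All shapes and the linear map are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def encode_terrain(height_map: np.ndarray) -> np.ndarray:
    """Stand-in for the stereo model: flatten a local height map."""
    return height_map.ravel()

def policy(proprio: np.ndarray, terrain_features: np.ndarray,
           weights: np.ndarray) -> np.ndarray:
    """Map the combined observation to joint targets."""
    obs = np.concatenate([proprio, terrain_features])
    return weights @ obs

proprio = rng.standard_normal(32)          # joint state, base motion
height_map = rng.standard_normal((8, 8))   # local 3D scene geometry
features = encode_terrain(height_map)
weights = rng.standard_normal((23, 32 + 64))  # 23 joints (assumed)

action = policy(proprio, features, weights)
print(action.shape)  # (23,)
```

The key design property is that perception enters as just another observation channel, so the same training pipeline that randomizes terrain in simulation also randomizes what the policy sees, which is what makes zero-shot transfer plausible.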
This is a meaningful step toward a fully RL-trained, perception-conditioned controller operating end-to-end across complex environments on our humanoid, and stair traversal is only the first use case. The same architecture - perception in, whole-body control out, trained in simulation and deployed zero-shot - allows access to a much broader class of behaviors where the scene around the robot matters. Stairs are the demonstration; the underlying capability is general.
Conclusion
By overcoming the massive technical hurdles of high-volume manufacturing, Figure has unlocked a major catalyst for humanoid development: physical scale. Every robot that rolls off the BotQ line at our new hourly cadence is more than just a unit of hardware - it is a data-collection engine, a development tool, and a vessel for commercial deployment. With a fleet of this size, we are accumulating the "know-how" of real-world operations faster than anyone else in the space. By mastering the manufacturing, reliability, and orchestration of these robots today, we are ensuring that Figure possesses the data, the infrastructure, and the sheer volume required to lead the next era of robotics.