Figure Unveils Next-Gen Conversational Humanoid Robot With 3x AI Computing for Fully Autonomous Tasks

Silicon Valley’s Figure has taken the wraps off its next-generation Figure 02 conversational humanoid robot, which taps into NVIDIA Omniverse and NVIDIA GPUs for fully autonomous tasks.

Figure said it recently tested Figure 02 for data collection and use-case training on BMW Group’s production line in Spartanburg, South Carolina.

Figure 02 comes just 10 months after Figure launched the first version of its general-purpose humanoid robot. The company has accelerated its development timeline using NVIDIA Isaac Sim — a reference application built on the NVIDIA Omniverse platform — to design, train and test AI-based robots using synthetic data, as well as NVIDIA GPUs to train generative AI models.
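
For readers curious what that simulation-driven workflow can look like in practice, the sketch below shows one way to generate labeled synthetic images headlessly with Isaac Sim and its Omniverse Replicator API. The scene contents, randomization ranges and output directory are illustrative assumptions, not details disclosed by Figure or NVIDIA.

```python
# Minimal sketch: headless synthetic-data generation with Isaac Sim + Omniverse Replicator.
# The scene (a plane and one box), randomization ranges and output folder are assumptions.
from omni.isaac.kit import SimulationApp

# Start Isaac Sim without a GUI (must happen before other omni.* imports).
simulation_app = SimulationApp({"headless": True})

import omni.replicator.core as rep

# Hypothetical scene: a flat surface with a box the perception model should learn to find.
plane = rep.create.plane(scale=2.0)
box = rep.create.cube(semantics=[("class", "box")], position=(0, 0, 0.1), scale=0.1)

camera = rep.create.camera(position=(0.5, 0.5, 0.5), look_at=(0, 0, 0))
render_product = rep.create.render_product(camera, (640, 480))

# Write RGB frames plus 2D bounding-box labels for training a perception model.
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(output_dir="_synthetic_out", rgb=True, bounding_box_2d_tight=True)
writer.attach([render_product])

# Randomize the box pose each frame so the dataset covers varied object placements.
with rep.trigger.on_frame(num_frames=50):
    with box:
        rep.modify.pose(
            position=rep.distribution.uniform((-0.3, -0.3, 0.1), (0.3, 0.3, 0.1)),
            rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
        )

rep.orchestrator.run_until_complete()
simulation_app.close()
```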

“Our rapid progress, marked by advances in speech, vision, dexterity and computational power, brings us closer to delivering humanoid robots to address labor shortages for many industries,” said Brett Adcock, CEO of Figure.

The company added a second NVIDIA RTX GPU-based module on board Figure 02, delivering three times the AI inference compute of the robot’s first iteration for handling fully autonomous real-world tasks.

Figure aims to commercialize industrial humanoid robots to address labor shortages, and it plans to produce consumer versions.

Founded in 2022, the startup has partnered with OpenAI to develop custom AI models, trained on NVIDIA H100 GPUs, that drive the robots’ conversational AI capabilities. Figure recently raised $675 million in funding from leading technology companies, including NVIDIA.

“Developing autonomous humanoid robots requires the fusion of three computers: NVIDIA DGX for AI training, NVIDIA Omniverse for simulation and NVIDIA Jetson in the robot,” said Deepu Talla, vice president of robotics and edge computing at NVIDIA. “Leading companies, including Figure, are tapping into the NVIDIA robotics stack, from edge to cloud, to drive innovation in humanoid robotics.”


Robotic Hands Capable of Handling Real-World Tasks

New human-scale hands, six RGB cameras, and perception AI models trained with synthetic data generated in Isaac Sim enable Figure 02 to perform high-precision pick-and-place tasks required for smart manufacturing applications.
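
As a rough illustration of how such perception output can feed a pick-and-place step (not Figure’s actual pipeline), the hypothetical snippet below back-projects a detected object’s pixel location and depth into a 3D pick target using standard pinhole-camera geometry; the intrinsics and detection values are made up for the example.

```python
# Hypothetical sketch of turning a perception model's output into a pick target.
# This is NOT Figure's pipeline; the camera intrinsics and detection are assumed values.
import numpy as np

def pixel_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel with known depth into camera-frame coordinates (meters)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Assumed intrinsics for one RGB camera (illustrative values only).
fx, fy, cx, cy = 610.0, 610.0, 320.0, 240.0

# Pretend a trained perception model returned an object's pixel center and depth.
detection = {"u": 352, "v": 198, "depth_m": 0.42}

target_cam = pixel_to_camera(detection["u"], detection["v"], detection["depth_m"],
                             fx, fy, cx, cy)
print("Pick target in camera frame [m]:", target_cam)
```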

Figure is among the initial members to join the new NVIDIA Humanoid Robot Developer Program, which provides early access to the latest tools and computing technologies for humanoid robot development. This includes the latest releases of NVIDIA Isaac Sim, Isaac Lab, NIM microservices (RoboCasa and MimicGen), OSMO, Jetson Thor and Project GR00T general-purpose humanoid foundation models.


Featured Product

3D Vision: Ensenso B now also available as a mono version!

This compact 3D camera series combines a very short working distance, a large field of view and a high depth of field, making it well suited to bin-picking applications. Because it can capture multiple objects over a large area, it helps robots empty containers more efficiently. It is now available from IDS Imaging Development Systems.

In the color version of the Ensenso B, the stereo system is equipped with two RGB image sensors, which eliminates the need for additional sensors and reduces installation space and hardware costs. The camera can now also be ordered with two 5 MP mono sensors, which achieve impressively high spatial precision. With the added sharpness and accuracy, it can handle applications where absolute precision is essential.

The great strength of the Ensenso B lies in very precise detection of objects at close range. It offers a wide field of view and an unusually high depth of field, so the region in which an object remains in focus is exceptionally large. At a distance of 30 centimeters between camera and object, the Z-accuracy is approximately 0.1 millimeters; the maximum working distance is 2 meters. The series complies with protection class IP65/67 and is well suited to industrial environments.