Unity Releases Enhancements for Robotics Industry

Computer Vision and ROS Integrations Unlock and Accelerate Robotics Applications for Industrials

Unity, the world's leading platform for creating and operating real-time 3D (RT3D) content, today released its Object Pose Estimation demonstration, which combines computer vision and simulation technologies to illustrate how Unity's AI and Machine Learning capabilities are having real-world impact on the use of robotics in industrial settings. Object Pose Estimation and its corresponding demonstration come on the heels of recent releases aimed at supporting the widely used Robot Operating System (ROS), a flexible framework for writing robot software. Together, these Unity tools open the door for roboticists to explore, test, develop, and deploy solutions safely, cost-effectively, and quickly.


"This is a powerful example of a system that learns instead of being programmed, and as it learns from the synthetic data, it is able to capture much more nuanced patterns than any programmer ever could," said Dr. Danny Lange, Senior Vice President of Artificial Intelligence, Unity. "Layering our technologies together shows how we are crossing a line, and we are starting to deal with something that is truly AI, and in this case, demonstrating the efficiencies possible in training robots."

Simulation technology is highly effective when testing applications in situations that are dangerous, expensive, or rare. Validating applications in simulation before deploying to a physical robot shortens iteration time by revealing potential issues early. Unity's built-in physics engine and the Unity Editor can be combined to create endless permutations of virtual environments, with objects governed by an approximation of the forces that act on them in the real world.
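To make the idea of "an approximation of the forces" concrete, here is a minimal, hypothetical Python sketch of what a physics engine's fixed-timestep update loop approximates: semi-implicit Euler integration of gravity acting on a falling object. Unity's actual engine is far more sophisticated; the constants and function names below are illustrative only.

```python
GRAVITY = -9.81  # m/s^2
DT = 0.02        # a 50 Hz fixed timestep, similar in spirit to Unity's FixedUpdate

def step(position_y, velocity_y):
    """Advance one timestep: apply the force to velocity, then velocity to position."""
    velocity_y += GRAVITY * DT
    position_y += velocity_y * DT
    return position_y, velocity_y

# Drop an object from 1 m and step until it reaches the ground.
y, vy, t = 1.0, 0.0, 0.0
while y > 0.0:
    y, vy = step(y, vy)
    t += DT
print(round(t, 2))  # 0.46 -- close to the analytic free-fall time sqrt(2/9.81) ~ 0.45 s
```

The discrete steps introduce a small error relative to the closed-form answer, which is exactly the sense in which a simulated world approximates, rather than reproduces, real-world dynamics.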

The Object Pose Estimation demo follows the release of Unity's URDF Importer and ROS-TCP-Connector. The URDF Importer is an open-source Unity package for importing a robot into a Unity scene from its URDF file; it takes advantage of enhanced support for articulations in Unity to produce more realistic kinematic simulations. The ROS-TCP-Connector greatly reduces the latency of messages passed between ROS nodes and Unity, allowing the robot to react in near real time to its simulated environment. Today's demo builds on this work by showing how Unity Computer Vision tools and the recently released Perception Package can be used to create vast quantities of synthetic, labeled training data to train a simple deep learning model to predict a cube's position. The demo includes a tutorial on how to recreate the project, which can be extended by applying tailored randomizers to create more complex scenes.
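The synthetic-data workflow above can be sketched in a few lines. The Perception Package's randomizers are C# components inside Unity; the plain-Python sketch below is only a hypothetical analogy showing the core idea: sample scene parameters per frame, and record the ground-truth cube position as the label. All names here (`randomize_scene`, `generate_dataset`) are illustrative, not part of any Unity API.

```python
import random

def randomize_scene(rng):
    """One 'randomizer' pass: sample scene parameters for a single frame,
    loosely analogous to a Unity Perception randomizer."""
    return {
        "cube_position": [rng.uniform(-0.5, 0.5),   # x, in meters
                          rng.uniform(0.0, 0.3),    # y
                          rng.uniform(-0.5, 0.5)],  # z
        "light_intensity": rng.uniform(0.2, 1.0),
        "camera_yaw_deg": rng.uniform(-15.0, 15.0),
    }

def generate_dataset(n_frames, seed=0):
    """Produce n_frames of labeled synthetic data: each frame pairs the
    randomized scene parameters with the ground-truth position label."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_frames):
        scene = randomize_scene(rng)
        dataset.append({"params": scene, "label": scene["cube_position"]})
    return dataset

data = generate_dataset(1000)
print(len(data))  # 1000
```

Because labels come directly from the simulator's ground truth, no manual annotation is needed, which is what makes generating "vast quantities" of training data practical.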

"With Unity, we have not only democratized data creation, we've also provided access to an interactive system for simulating advanced interactions in a virtual setting," added Lange. "You can develop the control systems for an autonomous vehicle, for example, or here for highly expensive robotic arms, without the risk of damaging equipment or dramatically increasing cost of industrial installations. To be able to prove the intended applications in a high-fidelity virtual environment will save time and money for the many industries poised to be transformed by robotics combined with AI and Machine Learning."

To learn more about Unity's work enabling the future of robotics, please visit our Unity Robotics page.

About Unity

Unity (NYSE: U) is the world's leading platform for creating and operating real-time 3D (RT3D) content. Creators, ranging from game developers to artists, architects, automotive designers, filmmakers, and others, use Unity to make their imaginations come to life. Unity's platform provides a comprehensive set of software solutions to create, run and monetize interactive, real-time 2D and 3D content for mobile phones, tablets, PCs, consoles, and augmented and virtual reality devices. The company's 1,800+ person research and development team keeps Unity at the forefront of development by working alongside partners to ensure optimized support for the latest releases and platforms. Apps developed by Unity creators were downloaded more than five billion times per month in 2020. For more information, please visit www.unity.com.

Featured Product

3D Vision: Ensenso B now also available as a mono version!

This compact 3D camera series combines a very short working distance, a large field of view, and a high depth of field, making it well suited to bin-picking applications. With its ability to capture multiple objects over a large area, it can help robots empty containers more efficiently. It is now available from IDS Imaging Development Systems.

In the color version of the Ensenso B, the stereo system is equipped with two RGB image sensors, which eliminates the need for additional sensors and reduces installation space and hardware costs. A new variant is now available with two 5 MP mono sensors, achieving impressively high spatial precision. With this enhanced sharpness and accuracy, the camera can handle applications where absolute precision is essential.

The great strength of the Ensenso B lies in very precise detection of objects at close range. It offers a wide field of view and an impressively high depth of field, so the area in which an object is in focus is unusually large. At a distance of 30 centimeters between the camera and the object, the Z-accuracy is approximately 0.1 millimeters; the maximum working distance is 2 meters. The camera series complies with protection classes IP65/67 and is well suited for use in industrial environments.