EngineAI Releases Comprehensive Open-Source Resources to Accelerate Robotics Development

Shenzhen EngineAI Robotics, an innovator in humanoid robots, has officially released a comprehensive suite of open-source resources, providing developers with structured guidance across key areas of robotics, from modular architecture design to multimodal control systems. This initiative marks a significant step in promoting collaborative development and lowering technical barriers in the robotics industry.


Zhao Tongyang, founder and CEO of EngineAI, said: "EngineAI views open-sourcing as more than a technical offering; it is an ecosystem-building effort. By sharing advanced tools and frameworks, this release aims to empower startups, research institutions, and independent developers. Our long-term vision is to create the world's leading general-purpose humanoid robot and continue to promote revolutionary innovation in embodied intelligence."

At the heart of the open-source release is a dual-framework offering: a training code repository and a deployment code repository. Together, they form an end-to-end solution that enables robotics development from algorithm training to real-world application.

The training framework, EngineAI RL Workspace, is a modular reinforcement learning platform built specifically for legged robotics. It integrates the full development pipeline, from environment setup to algorithm training and performance evaluation. The system is architected with four core clusters: environment modules, algorithm engines, shared toolkits, and integration layers. Each component is independently encapsulated, allowing developers to modify modules without impacting the entire system. This design significantly reduces communication overhead and facilitates multi-person collaboration.
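
The repository itself defines the concrete module layout; purely as a rough illustration of how such an independently encapsulated, registry-based design can work, the Python sketch below uses invented environment and algorithm names rather than the workspace's actual API.

    # Illustrative sketch only: registries and names below are hypothetical,
    # not the actual EngineAI RL Workspace API.
    from typing import Callable, Dict

    # Each cluster exposes its components through a registry, so a module can
    # be swapped or extended without touching code in the other clusters.
    ENV_REGISTRY: Dict[str, Callable] = {}
    ALGO_REGISTRY: Dict[str, Callable] = {}

    def register_env(name: str):
        """Decorator that adds an environment factory to the environment cluster."""
        def wrapper(factory: Callable):
            ENV_REGISTRY[name] = factory
            return factory
        return wrapper

    def register_algo(name: str):
        """Decorator that adds an algorithm factory to the algorithm cluster."""
        def wrapper(factory: Callable):
            ALGO_REGISTRY[name] = factory
            return factory
        return wrapper

    @register_env("biped_flat_terrain")
    def make_biped_env(num_envs: int = 4096):
        # A real environment module would build the simulator scene here.
        return {"name": "biped_flat_terrain", "num_envs": num_envs}

    @register_algo("ppo")
    def make_ppo(env, learning_rate: float = 3e-4):
        # A real algorithm engine would construct networks and optimizers here.
        return {"algo": "ppo", "env": env["name"], "lr": learning_rate}

    if __name__ == "__main__":
        env = ENV_REGISTRY["biped_flat_terrain"](num_envs=2048)
        algo = ALGO_REGISTRY["ppo"](env)
        print(algo)

With this kind of layout, adding a new terrain or a new learning algorithm only touches its own registry entry, which is what keeps changes local to one module.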

EngineAI RL Workspace emphasizes development efficiency through reusable logic structures. Its single-algorithm executor supports both training and inference using a unified execution flow, enabling developers to focus on algorithmic innovation rather than infrastructure repetition. Furthermore, the decoupling of algorithms and environments allows for seamless iteration without altering interface definitions.
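
How the executor is actually built is specified in the repository; the short sketch below, again with hypothetical class names, only illustrates the general pattern of one runner driving both training and inference against a fixed environment interface.

    # Hypothetical sketch of a unified train/inference executor; the names do
    # not correspond to the actual EngineAI RL Workspace classes.
    from abc import ABC, abstractmethod

    class EnvInterface(ABC):
        """Stable environment contract: algorithms depend only on these methods."""
        @abstractmethod
        def reset(self): ...
        @abstractmethod
        def step(self, action): ...

    class CountingEnv(EnvInterface):
        """Toy environment used only to make the sketch runnable."""
        def __init__(self):
            self.t = 0
        def reset(self):
            self.t = 0
            return self.t
        def step(self, action):
            self.t += 1
            return self.t, float(action), self.t >= 5  # obs, reward, done

    class Runner:
        """One execution flow reused for both training and inference."""
        def __init__(self, env: EnvInterface, policy):
            self.env = env
            self.policy = policy

        def rollout(self, train: bool = True):
            obs, done, total = self.env.reset(), False, 0.0
            while not done:
                action = self.policy(obs)
                obs, reward, done = self.env.step(action)
                total += reward
                if train:
                    pass  # a real runner would buffer transitions and update here
            return total

    if __name__ == "__main__":
        runner = Runner(CountingEnv(), policy=lambda obs: 1)
        print(runner.rollout(train=True))   # same code path...
        print(runner.rollout(train=False))  # ...reused for inference

Because the runner talks to the environment only through the interface, either side can be iterated on without rewriting the other.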

To support full-cycle experimentation, the workspace is equipped with tools covering each stage of the project lifecycle: dynamic recording systems that capture video during training and inference, and intelligent version management that keeps experiments consistent, eliminating the need for manual file searches and preventing discrepancies caused by version mismatches.
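
The workspace ships its own tooling for this; purely as an illustration of the idea behind experiment version management, the hypothetical snippet below (not the workspace's own code) stamps each run directory with its code version and configuration so results can be traced back later.

    # Hypothetical illustration of run versioning; not the workspace's tooling.
    import json, subprocess, time
    from pathlib import Path

    def create_run_dir(root: str, config: dict) -> Path:
        """Create a uniquely named run directory stamped with code version and config."""
        try:
            commit = subprocess.check_output(
                ["git", "rev-parse", "--short", "HEAD"], text=True
            ).strip()
        except (subprocess.CalledProcessError, FileNotFoundError):
            commit = "no-git"
        run_dir = Path(root) / f"{time.strftime('%Y%m%d-%H%M%S')}_{commit}"
        run_dir.mkdir(parents=True, exist_ok=True)
        # Persisting the exact config alongside the results prevents mismatches
        # between what was trained and what is later evaluated or deployed.
        (run_dir / "config.json").write_text(json.dumps(config, indent=2))
        return run_dir

    if __name__ == "__main__":
        print(create_run_dir("experiments", {"algo": "ppo", "lr": 3e-4}))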

Complementing the training tools is EngineAI ROS, a ROS2-based deployment framework designed to bridge algorithm models with practical use cases. Moreover, to ensure accessibility, EngineAI has also published detailed user guides for both the training and deployment frameworks, helping developers quickly onboard and integrate the tools into their projects.
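
The deployment framework's actual node structure and topics are covered in its user guide; the minimal ROS 2 (rclpy) sketch below, which assumes a standard ROS 2 Python environment and uses hypothetical topic names, only illustrates the general pattern of feeding robot state into a trained policy and publishing joint commands.

    # Minimal ROS 2 (rclpy) sketch of a policy-deployment node. Topic names and
    # the policy stub are hypothetical, not EngineAI ROS's actual interface.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState

    class PolicyNode(Node):
        def __init__(self):
            super().__init__("policy_node")
            self.latest_state = None
            self.create_subscription(JointState, "joint_states", self.on_state, 10)
            self.cmd_pub = self.create_publisher(JointState, "joint_commands", 10)
            self.create_timer(0.02, self.control_step)  # 50 Hz control loop

        def on_state(self, msg: JointState):
            self.latest_state = msg

        def control_step(self):
            if self.latest_state is None:
                return
            cmd = JointState()
            cmd.name = list(self.latest_state.name)
            # A real node would run the trained model here; this stub simply
            # echoes the current positions as the commanded targets.
            cmd.position = list(self.latest_state.position)
            self.cmd_pub.publish(cmd)

    def main():
        rclpy.init()
        rclpy.spin(PolicyNode())

    if __name__ == "__main__":
        main()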

This open-source initiative demonstrates EngineAI's commitment to open innovation by lowering entry barriers and encouraging global participation, allowing developers worldwide to jointly shape the future of intelligent machines.

To obtain the complete data, please contact: engineai.ai@gmail.com.

About EngineAI

Founded in October 2023 and headquartered in Shenzhen, China, EngineAI is dedicated to developing world-leading general-purpose humanoid robots while continuously accelerating innovation in the embodied intelligence revolution.

Featured Product

3D Vision: Ensenso B now also available as a mono version!

This compact 3D camera series combines a very short working distance, a large field of view and a high depth of field, making it well suited to bin picking applications. With its ability to capture multiple objects over a large area, it can help robots empty containers more efficiently. It is now available from IDS Imaging Development Systems.

In the color version of the Ensenso B, the stereo system is equipped with two RGB image sensors, which eliminates the need for additional sensors and reduces installation space and hardware costs. Models can now also be equipped with two 5 MP mono sensors, achieving impressively high spatial precision; the enhanced sharpness and accuracy make the mono version a fit for applications where absolute precision is essential.

The great strength of the Ensenso B lies in the very precise detection of objects at close range. It offers a wide field of view and an impressively high depth of field, meaning the area in which an object is in focus is unusually large. At a distance of 30 centimetres between the camera and the object, the Z-accuracy is approximately 0.1 millimetres, and the maximum working distance is 2 metres. The camera series complies with protection class IP65/67 and is ideal for use in industrial environments.