Humanoids - Safety Standards for the Next Wave of Robots
Q&A with Nathan Bivans, Chief Technology Officer | Fort Robotics
Please tell us about yourself and your role with FORT Robotics.
As the Chief Technology Officer at FORT Robotics, I oversee the development of safety, security, and control technology for intelligent machines. Our work is squarely focused on ensuring that robots operate safely at all times–especially as they increasingly interact with humans. I previously developed and patented a wireless safety protocol while serving as the CTO of Humanistic Robotics. Prior to that, I did hardware design on the laptop team at Apple, developed distributed IoT systems at Lutron, and architected video and data networking equipment at Motorola USA. Spanning such a large breadth of technologies and markets over the past 25 years has given me an interesting perspective. I hold both a B.S. and M.E. with concentrations in signal processing and computer systems from Rensselaer Polytechnic Institute.
We are seeing tremendous interest and growth in humanoid robotics. What are you seeing in this space?
There certainly has been a fascinating explosion in interest in humanoids. In working with many of them, we’ve seen a couple of new, critical challenges for safety. First, there is the issue of dynamic stability. Unlike traditional industrial robots that are bolted down or wheeled AMRs that simply stop when you cut the power, placing a humanoid into a safe state usually requires a much more sophisticated approach. Simply disconnecting power to a humanoid would most likely cause it to collapse, creating a significant safety hazard for both itself and any nearby people.
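To make the contrast concrete, here is a minimal sketch of what a controlled safe-stop sequence might look like for a dynamically stable robot. All names and phases here are hypothetical illustrations, not FORT Robotics APIs: the point is only that a humanoid must first be brought to a statically stable pose before actuator power can be removed.

```python
from enum import Enum, auto

class SafeStopPhase(Enum):
    BALANCE = auto()      # halt the current task and stabilize posture
    CROUCH = auto()       # lower the center of mass toward the ground
    SETTLE = auto()       # confirm a statically stable pose
    DE_ENERGIZE = auto()  # only now is it safe to cut actuator power

def controlled_safe_stop(robot) -> bool:
    """Step through each phase in order; abort and hold if any phase fails.

    Unlike a wheeled AMR, a humanoid cannot simply have power cut;
    it must reach a statically stable pose first.
    """
    for phase in SafeStopPhase:
        robot.execute(phase)
        if not robot.confirmed(phase):
            robot.hold_position()  # keep actuators energized rather than collapse
            return False
    return True

class StubRobot:
    """Toy stand-in for a real controller, used only to exercise the logic."""
    def __init__(self):
        self.log = []
    def execute(self, phase):
        self.log.append(phase)
    def confirmed(self, phase):
        return True
    def hold_position(self):
        self.log.append("hold")
```

The ordering is the whole point of the sketch: de-energizing is the last phase, gated on every stability check before it succeeding.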
Second is the challenge of versatility, driven by the advent of physical AI. Historically, robots were programmed for a single, specific task. But humanoids, with their advanced perception, physical agility, and AI-based intelligence, are being developed to reason and adapt to countless different situations. To grant them that level of independence, we absolutely must have firmly defined, sophisticated safety standards in place that cover this vast new range of behavior.
Can you describe the most common safety concerns of today, and how you foresee those challenges will evolve in the future?
Today's safety concerns generally boil down to physical component and perception issues. If a robot is in a rugged environment, hardware components degrade and can lead to dangerous malfunctions. Separately, a robot's sensors can easily be fooled by environmental factors like dust, noise, or minor physical damage, causing it to misinterpret a safety risk.
These challenges are evolving fast. More capable robots that operate in more complex environments require much more dynamic and flexible safety that can adapt to this variety. This demands safety that is aware of not just the function of the machine, but also the context in which it is operating. The robot itself must learn to intelligently detect a safety issue in a complex environment and find a way to navigate it while remaining productive. Critically, safety must be treated as an integrated design requirement, not something we bolt on at the end.

How can we define and/or measure “safe behavior” from robots and humanoids?
Safe behavior truly depends on the situations the robots are in – their use cases and form factors. Historically, robotics developers have worked backwards by trying to figure out all the potential dangers and then designing the robotic system to recognize them. However, this approach can be too restrictive. If the environment changes in an unexpected way, the robot may not know how to react and could shut down as a precaution, limiting its functionality. Advances in simulation, backed by real-world testing and data, can help train robots to operate cautiously without limiting their performance. As robots and humanoids become more autonomous, it's essential that the tools we use to develop and validate their safety advance just as quickly.
Why is it crucial to think of safety and security standards in the initial stages of humanoid development?
Safety is a critical component of successful humanoid development – not only because of the close collaboration between humanoids and humans, but also to ensure maximum productivity without limiting robot functionality to the point that it becomes impractical. Humanoids are more capable and connected, so it’s also crucial to consider cybersecurity incidents in which the safety system could be bypassed, underscoring the need for even more safety protocols. Standards provide critical guidance in the concept and design of any system, and can be especially valuable in developing such cutting-edge systems. Without standards guidance, developers are likely to repeat the mistakes of past systems and take on undue liability.
Both software and hardware have a role in ensuring safety. Could you elaborate on how they work together to ensure robust, real-time perception for robot collaboration?
Software and hardware must work together seamlessly to ensure safety. That includes protecting against not just physical malfunctions, but also data leaks and cyber threats. Because a robot’s systems are so deeply interconnected, you cannot secure one component without securing them all.
For many robotics startups, initial budget constraints can prevent them from investing in the high-quality hardware required for this integrated safety and cybersecurity. Our role at FORT Robotics is to help bridge that gap. We do this by emphasizing safety as a core concern throughout development and by providing access to a comprehensive hardware and software stack. This allows developers to build truly reliable and secure systems from the ground up, rather than treating safety as an afterthought.
What role does AI play in developing humanoids’ ability to adapt and perform a range of tasks? Does this help or hinder safety?
A little bit of both. AI dramatically increases robot capabilities and flexibility, but it also introduces entirely new safety challenges. For instance, the advent of end-to-end and foundation physical AI models is advancing humanoid capabilities at a rapid pace, but their complex and opaque nature makes it virtually impossible to apply typical methods of testing and validation to ensure safety. New approaches, such as context-aware safety, are needed to bound these systems to safe operation without explicit knowledge of the robot's end use case. We can no longer simply test our way to safety.
This is forcing us to rethink our approach. Our goal is to train humanoids to process safety information the way humans do with a speed limit sign. A human driver doesn't just blindly follow the posted number; they adapt their driving based on the weather, traffic, and other conditions. Ultimately, we want humanoids to be able to do the same in their own environments.
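The speed-limit analogy above can be sketched in a few lines. This is a hypothetical illustration (the function name, inputs, and scaling factors are all assumptions chosen for clarity, not any actual safety system): the posted limit is treated as an upper bound that gets scaled down as sensed conditions degrade, just as a human driver slows below the limit in fog or near pedestrians.

```python
def context_adjusted_speed(nominal_limit: float,
                           visibility: float,   # 0.0 (none) .. 1.0 (clear)
                           traction: float,     # 0.0 (none) .. 1.0 (full grip)
                           humans_nearby: bool) -> float:
    """Scale a nominal speed limit down as sensed conditions degrade.

    The posted limit is a ceiling, never a target: the robot always
    chooses a speed at or below it based on current context.
    """
    # Degrade speed by the worst of the sensed environmental conditions.
    speed = nominal_limit * min(visibility, traction)
    if humans_nearby:
        # Hypothetical extra cap when people are detected close by.
        speed = min(speed, 0.5 * nominal_limit)
    return max(speed, 0.0)
```

Taking the worst-case condition (the `min`) rather than an average is the conservative design choice: one degraded input alone is enough to slow the robot down.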
What is your prediction on humanoid robots and how they will evolve and be used in the next 5 years?
We are definitely seeing a lot of progress and excitement around humanoids, but there’s still a big gap between the lab and real-world applications. I believe we will see expanded capabilities in the mechanical and control aspects, allowing us to see them in industrial settings and warehousing, in limited use cases to start. As long as we prioritize safety in this process, the possibilities are vast!
The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
