Sporting Velodyne's 3D LiDAR Sensor, NASA/JPL's 'RoboSimian' Competes in 2015 DARPA Robotics Challenge

Industry-Leading HDL-32E Sensor Also Deployed on Boston Dynamics’ SPOT; Helps Robots ‘See’ Danger Lurking in the Aftermath of a Disaster

MORGAN HILL, Calif. June 15, 2015

Sometimes, in order to elude danger, you need to have eyes in the back of your head. Indeed, there's nothing like real-time, 360-degree 3D vision to gain an advantage, especially when you're a four-legged, ape-like robot named "RoboSimian," designed by NASA's Jet Propulsion Laboratory (JPL) with LiDAR sensor technology from Velodyne.

With Velodyne's HDL-32E LiDAR sensor mounted atop its four shoulders, RoboSimian nabbed fifth place among nearly two dozen robots participating in the DARPA Robotics Challenge, held in Pomona, Calif., June 6-7. There, robots and the engineers who created them performed simple tasks in environments that are too dangerous for humans. Japan's 2011 Fukushima nuclear disaster provided the impetus for the Challenge. Partnering with NASA/JPL in the development of RoboSimian were the California Institute of Technology and the University of California, Santa Barbara.

Velodyne's 3D LiDAR sensor was central to RoboSimian's perception system, as well as to that of "SPOT," a robot from Boston Dynamics. SPOT proved to be a spectator favorite, making several appearances over the course of the competition. The HDL-32E sensor, which covers a full 360° horizontally with a 40° vertical spread, enables the robot to "look" up, down and around for the most comprehensive view of its environment.
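The geometry behind that 360° horizontal by 40° vertical coverage is straightforward: each laser return arrives as an azimuth angle, an elevation angle, and a range, which software converts to Cartesian coordinates. The sketch below shows that standard spherical-to-Cartesian conversion for a spinning LiDAR; the function name and example angles are illustrative, not taken from Velodyne's calibration data or API.

```python
import math

def lidar_return_to_xyz(azimuth_deg, elevation_deg, range_m):
    """Convert one spinning-LiDAR return (spherical) to Cartesian XYZ.

    A sensor like the HDL-32E sweeps 360° in azimuth with a 40°
    vertical spread, so elevation_deg would fall roughly within
    that band. Angles here are examples, not calibration values.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # y points straight ahead at azimuth 0; z is up.
    x = range_m * math.cos(el) * math.sin(az)
    y = range_m * math.cos(el) * math.cos(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return straight ahead (azimuth 0°), level (elevation 0°), at 10 m
# lands 10 m forward of the sensor:
print(lidar_return_to_xyz(0.0, 0.0, 10.0))  # → (0.0, 10.0, 0.0)
```

Repeating this conversion for every return in a full sweep yields the 3D point cloud the robot's planners reason over.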

Velodyne is recognized worldwide as the standard for high-definition, real-time 3D LiDAR (Light Detection and Ranging) sensors for autonomous vehicle applications, having created enabling technology for the industry. Velodyne introduced multi-channel, real-time 3D LiDAR during the 2004-2005 DARPA Grand Challenge and has since optimized the technology for a range of other applications, from unmanned aerial vehicles and mobile mapping to robotics and factory automation.

In Pomona, points were awarded based on the number of tasks completed and the time taken to complete them. Team KAIST of South Korea took home first-place honors and a $2 million research award. Robots faced such tasks as driving a vehicle and getting in and out of it, negotiating debris blocking a doorway, cutting a hole in a wall, opening a valve and crossing a field strewn with cinderblocks or other debris. Competitors also were asked to perform two surprise tasks: pulling down an electrical switch and plugging and unplugging an electrical outlet. Each robot in the Challenge had an "inventory" of objects with which it could interact. Engineers programmed the robots to recognize these objects and perform pre-set actions on them, such as turning a valve or climbing over blocks.

Team RoboSimian was in third place after the first day, scoring seven of eight possible points, and ultimately finished fifth overall. RoboSimian moves around on four limbs, making it well suited to traveling over complex terrain, including true climbing.

"The NASA/JPL robot was developed expressly to go where humans cannot, so the element of sight - in this case, LiDAR-generated vision - was absolutely critical," said Wolfgang Juchmann, Ph.D., Velodyne Director of Sales & Marketing. "Velodyne is a worldwide leader in the development of real-time LiDAR sensors for robotics, as well as an array of other applications, including mobile mapping and UAVs. With a continuous 360-degree sweep of its environment, our lightweight sensors capture data at a rate of almost a million points per second, within a range of 100 meters from whatever danger or obstacle may exist."
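The quoted figures imply a substantial per-sweep point budget: dividing the sensor's points-per-second rate by its rotation rate gives the number of returns in each 360° revolution. A rough back-of-the-envelope sketch, assuming an illustrative 10 Hz spin rate (the rotation rate is an assumption on our part, not a figure quoted in the release):

```python
def points_per_revolution(points_per_second, rotation_hz):
    """Rough point budget for one 360° sweep of a spinning LiDAR.

    points_per_second: the release quotes "almost a million
    points per second" for Velodyne's sensors.
    rotation_hz: assumed example spin rate, not a quoted spec.
    """
    return points_per_second / rotation_hz

# ~1,000,000 pts/s at an assumed 10 revolutions per second:
print(points_per_revolution(1_000_000, 10))  # → 100000.0
```

Even under these rough assumptions, each sweep delivers on the order of a hundred thousand range measurements for the robot to interpret in real time.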

About the DARPA Robotics Challenge
According to the Department of Defense, some disasters, due to grave risks to the health and wellbeing of rescue and aid workers, prove too great in scale or scope for timely and effective human response. The DARPA Robotics Challenge seeks to address the problem by promoting innovation in human-supervised robotic technology for disaster-response operations. The primary technical goal of the DRC is to develop human-supervised ground robots capable of executing complex tasks in dangerous, degraded, human-engineered environments. Competitors in the DRC are developing robots that can utilize standard tools and equipment commonly available in human environments, ranging from hand tools to vehicles. To achieve its goal, the DRC is advancing the state of the art of supervised autonomy, mounted and dismounted mobility, and platform dexterity, strength, and endurance. Improvements in supervised autonomy, in particular, aim to enable better control of robots by non-expert supervisors and allow effective operation despite degraded communications (low bandwidth, high latency, intermittent connection). The California Institute of Technology manages JPL for NASA.

About Velodyne LiDAR
Founded in 1983 and based in California's Silicon Valley, Velodyne Acoustics, Inc. is a diversified technology company known worldwide for its high-performance audio equipment and real-time LiDAR sensors. The company's LiDAR division evolved after founder/inventor David Hall competed in the 2004-05 DARPA Grand Challenge using stereovision technology. Based on his experience during this challenge, Hall recognized the limitations of stereovision and developed the HDL-64 high-resolution LiDAR sensor. Velodyne subsequently released its compact, lightweight HDL-32E sensor, available for many applications including UAVs, and the new VLP-16 LiDAR Puck, a 16-channel real-time LiDAR sensor that is both substantially smaller and dramatically less expensive than previous-generation sensors. Market research firm Frost & Sullivan has honored the company and the VLP-16 with its 2015 North American Automotive ADAS (Advanced Driver Assistance System) Sensors Product Leadership Award. Since 2007, Velodyne's LiDAR division has emerged as the leading developer, manufacturer and supplier of real-time LiDAR sensor technology used in a variety of commercial applications including autonomous vehicles, vehicle safety systems, 3D mobile mapping, 3D aerial mapping and security. For more information, visit
