Advancing AMRs with Lidar

Q&A with Nandita Aggarwal, Sr. Lidar Architect | Velodyne Lidar

Tell us about yourself and your role with Velodyne.

Nandita holds master’s degrees in Physics and Optics and has conducted research on fiber optic sensors that resulted in several publications. Her interest in systems led to her first systems engineering role, working on a genome sequencing product at Pacific Biosciences. She then worked on a high-power UV hybrid laser (fiber and free space) at MKS Instruments. At Velodyne, she leads the program for Velabit, the smallest solid state-based lidar sensor.

My job title is ‘Sr. Lidar Architecture and Development Engineer,’ which roughly translates to bringing concepts to systems in production. Day to day, that means collaborating with product management to capture customer requirements, then packaging the right technologies and working with the engineers to build the system, take it through validation, and carry it all the way to mass production.

 

Can you share the limitations associated with the current sensor approach for AMRs that utilize a combination of 2D cameras, time-of-flight infrared depth cameras and 2D lidar?

Autonomous mobile robots (AMRs) face certain limitations in both indoor and outdoor settings. Mobile robots that perform indoor tasks typically rely on this suite of sensors, which is hampered by several disadvantages, including factors that reduce efficiency and performance as well as the significant software and computing effort required to fuse their data.

For example, 2D cameras traditionally struggle in common indoor low-light conditions and, in stereoscopic approaches, produce images that require relatively complex processing to estimate object distances. Time-of-flight depth cameras have relatively poor resolution and limited perception range, maxing out at around 10 m. Because 2D lidar captures only one horizontal line of data, it has an extremely limited vertical field of view and does not enable robust object classification or tracking. Although combining 2D cameras, time-of-flight depth cameras and 2D lidars somewhat mitigates their individual weaknesses, designing a perception system around them still results in AMRs with limited operational versatility and efficiency.
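To make the contrast concrete, the snippet below sketches the basic stereo relationship Z = f·B/d that a stereoscopic camera system must apply after the computationally heavy step of matching pixels between its two images. The function name and all numbers are illustrative only, not taken from any particular camera.

```python
def stereo_depth_m(disparity_px, focal_length_px, baseline_m):
    """Estimate depth from stereo disparity: Z = f * B / d.

    The disparity itself comes from a per-pixel matching step, which is the
    computationally expensive part; at long range, small disparity errors
    translate into large depth errors.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline.
print(stereo_depth_m(disparity_px=8.4, focal_length_px=700.0, baseline_m=0.12))  # ~10 m
```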

In terms of outdoor operations, mobile robots must achieve even higher levels of perception performance. Outdoor AMRs require robust perception to sense and avoid dynamic objects in a broad range of light, weather, and traffic conditions. They also encounter more challenging terrain and novel scenarios. 

 

What are the benefits of integrating advanced 3D lidar technology in a range of AMR applications? 

Advanced 3D lidar, like Velodyne’s sensors, boosts the rapidly expanding autonomous robotics industry by providing real-time 3D perception data for localization, mapping, object classification and object tracking. The sensors provide the opportunity to increase AMRs’ abilities to monitor and respond to their changing surroundings in both indoor and outdoor applications. Sensor data that can be efficiently processed, transmitted and stored can facilitate smooth development of situational awareness.

Advanced 3D lidar sensors emit pulsed light waves into the surrounding environment. These pulses bounce off surrounding objects and return to the sensor, which uses the round-trip time of each pulse to calculate the distance it traveled. Repeating this process millions of times per second generates the data needed to create a real-time 3D map of the environment. In this way, lidar-based solutions deliver the power of vision to robotic systems, providing constantly updated distance measurements between the robot and surrounding objects with centimeter-level accuracy. Because lidar supplies its own illumination, it also functions with accuracy and precision in both low and high light conditions, making the technology well suited to indoor and outdoor situations alike.
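As a rough illustration of the time-of-flight principle described above, the short Python sketch below converts pulse round-trip times and firing angles into 3D points in the sensor frame. The function name and angle conventions are illustrative assumptions, not tied to any particular Velodyne interface.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_to_points(return_times_s, azimuth_rad, elevation_rad):
    """Convert pulse round-trip times and firing directions to 3D points."""
    # Each pulse travels to the target and back, so range is half the path length.
    ranges = C * np.asarray(return_times_s) / 2.0

    # Spherical-to-Cartesian conversion in the sensor frame.
    x = ranges * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = ranges * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = ranges * np.sin(elevation_rad)
    return np.column_stack([x, y, z])

# Example: a pulse returning after ~66.7 ns corresponds to a target ~10 m away.
print(tof_to_points([66.7e-9], [0.0], [0.0]))  # ~[[10.0, 0.0, 0.0]]
```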

 

How does 3D lidar improve efficiency and versatility of AMRs?

In order to accurately classify and track objects, perception sensors must provide AMRs with high-resolution image data and a broad field of view. The sensor’s vertical field of view enables the system to accurately identify objects based on their shape. Lidar sensors with at least 30 degrees vertical field of view enable this capability and represent a significant improvement over 2D lidar sensors that provide only a very narrow vertical stripe of perception. 

Combined with a minimum range of 30 m at 10% reflectivity, these performance specifications define a reasonable baseline level of performance for sensors on AMRs. By utilizing advanced lidar sensor technology meeting these requirements, AMRs can not only detect objects within their surroundings but also identify and track them within a real-time 3D map. Together, these abilities work to enable safe and efficient operation.
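As a back-of-the-envelope illustration of what these figures mean in practice, the hypothetical snippet below estimates the vertical extent a sensor covers at a given range from its vertical field of view; the function and the assumption of a horizon-centered field of view are illustrative.

```python
import math

def vertical_coverage_m(range_m, vertical_fov_deg):
    """Approximate vertical extent covered at a given range,
    assuming the field of view is centered on the horizon."""
    return 2.0 * range_m * math.tan(math.radians(vertical_fov_deg / 2.0))

# At the 30 m baseline range, a 30-degree vertical FOV spans roughly:
print(round(vertical_coverage_m(30.0, 30.0), 1), "m")  # ~16.1 m of vertical coverage

# A single-line 2D lidar (near-zero vertical FOV) sees only a thin stripe,
# which is why shape-based object classification is not practical with it.
```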

The parallels between time-of-flight cameras and lidar can simplify the transition for integrators who are interested in replacing camera-based systems with lidar-centric approaches, a shift that offers a range of benefits, including:

  • Compared to depth cameras, lidar produces measurement data that generates a clearer image at longer ranges.

  • Unlike 2D and stereo cameras, lidar is not adversely affected by low light conditions, such as those found in mines, warehouses, and outdoor settings at night.

  • Unlike radar, lidar detects stationary objects as reliably as it detects moving ones, regardless of their material composition.

 

How does sensor data from lidar sensors contribute to improving the processes of AMRs?

Perception data produced by lidar sensors provides additional efficiency benefits to AMRs. Lidar creates images that are easily readable by computers and do not require extensive processing to detect objects or create maps. Lidar data can also be compressed and transmitted to central systems, human operators, or other robots. Thus, lidar data can be stored, recalled, and shared in a process of aggregate learning: data gathered either by a single robot over time or by a fleet of robots operating simultaneously can be accumulated and combined within a single database.
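A minimal sketch of this aggregation idea, assuming each scan has already been registered into a common world frame, might quantize every robot's points into voxels (a simple, lossy compression) and merge them into one shared set. The class and parameter names below are purely illustrative.

```python
import numpy as np

def voxel_keys(points, voxel_size=0.1):
    """Quantize Nx3 points to integer voxel indices."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

class SharedMap:
    """Toy aggregate map: each robot contributes the occupied voxels it has seen."""
    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.occupied = set()

    def add_scan(self, points_world):
        # points_world: Nx3 array already transformed into a common world frame.
        self.occupied |= voxel_keys(points_world, self.voxel_size)

# Scans from two robots (or one robot at two times) merge into one map.
shared = SharedMap(voxel_size=0.2)
shared.add_scan(np.random.rand(1000, 3) * 10.0)
shared.add_scan(np.random.rand(1000, 3) * 10.0 + 5.0)
print(len(shared.occupied), "occupied voxels")
```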

Moreover, this information must be rapidly updated to account for potentially highly dynamic conditions. For scenarios involving autonomous robots with demanding perception, processing and power requirements, introducing high-performance lidar presents integrators with the opportunity to redesign their systems for optimized performance and efficiency.

 

Why does perception capability play such an important role in revolutionizing AMRs?

Perception capabilities have major implications for AMR applications. For example, in scenarios in which AMRs will operate independently in defined areas, such as in warehouses, farmland, restaurants, retail stores, factories, and hospitals, AMRs can quickly build maps of the environment to use as a familiar reference for navigation. In more dynamic and novel scenarios, such as construction sites, ports, mines and disaster zones, AMRs will be able to transmit their perception data to other robots or human operators with whom they are cooperating.
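As one illustrative sketch of how such a map reference could be built, the hypothetical function below flattens a 3D lidar scan into a 2D occupancy grid in the robot frame; the cell size, grid dimensions and height band are arbitrary example values, not recommended settings.

```python
import numpy as np

def build_occupancy_grid(points, cell_size=0.25, grid_dim=200,
                         min_z=0.1, max_z=2.0):
    """Project lidar points into a 2D occupancy grid centered on the robot.

    points: Nx3 array in the robot frame (x forward, y left, z up).
    Points outside the height band (e.g. the floor) are ignored so that
    only obstacles at the robot's height mark cells as occupied.
    """
    grid = np.zeros((grid_dim, grid_dim), dtype=bool)
    half = grid_dim // 2

    mask = (points[:, 2] >= min_z) & (points[:, 2] <= max_z)
    cells = np.floor(points[mask, :2] / cell_size).astype(int) + half

    in_bounds = np.all((cells >= 0) & (cells < grid_dim), axis=1)
    grid[cells[in_bounds, 0], cells[in_bounds, 1]] = True
    return grid

# Example: a wall-like cluster of points 5 m ahead of the robot.
wall = np.column_stack([np.full(50, 5.0), np.linspace(-1, 1, 50), np.full(50, 1.0)])
print(build_occupancy_grid(wall).sum(), "occupied cells")
```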

 

What industries and environments are AMRs better equipped for with 3D lidar technology? Why?

Any scenario in which mobile robots are interacting with humans or expensive equipment requires precise knowledge of the robots’ relative positioning within the environment. Given these advantages, high-functioning AMRs can be integrated to improve processes across a wide variety of industries, including agriculture, construction, disaster recovery, warehousing, last-mile delivery and many more.

Velodyne is already partnering with AMR OEMs to implement its lidar technology solutions in real-world applications throughout the robotics industry, including partnerships announced this year with Renu Robotics and ANYbotics.

Equipped with Velodyne’s Puck sensors, ANYbotics’ four-legged robot ANYmal performs inspection and monitoring tasks in challenging industrial environments across sectors such as mining and minerals, oil and gas, chemicals, energy and construction. Velodyne’s lidar sensors allow ANYmal to detect obstacles and avoid collisions while navigating harsh environments with a high level of accuracy.

Renu Robotics’ Renubot is equipped with Puck sensors for safe, efficient, high-precision navigation and obstacle avoidance when conducting utility-scale vegetation management in challenging environmental conditions. Additionally, Velodyne’s advanced sensor technologies and development platforms enable the Renubot to conduct precise mowing and grooming autonomously.

 

Most AMRs run on rechargeable batteries, making the power requirements of their components another crucial consideration in design. Will you elaborate on how lidar can benefit power consumption?

Because most AMRs will run on rechargeable batteries, the power requirements of their components represent another crucial consideration in their design, as these will have a direct impact on their operating range and time between charges. Lidar sensors that consume 15 watts or less are ideal for mobile robot applications. By operating at this level of power consumption, advanced lidar sensors provide essential perception data while helping maximize the amount of time AMRs are available to perform their tasks in the field.
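Some rough, illustrative arithmetic shows why a 15-watt budget matters: the sensor's draw sits alongside the drivetrain and compute, so every watt saved translates directly into extra runtime between charges. All figures below are assumptions for illustration, not vendor specifications.

```python
def runtime_hours(battery_wh, sensor_w, drivetrain_w, compute_w):
    """Rough runtime estimate: battery energy divided by total average draw."""
    return battery_wh / (sensor_w + drivetrain_w + compute_w)

# Illustrative AMR budget: 480 Wh battery, 150 W drivetrain, 60 W compute.
print(round(runtime_hours(480, sensor_w=8, drivetrain_w=150, compute_w=60), 2))   # ~2.2 h
print(round(runtime_hours(480, sensor_w=30, drivetrain_w=150, compute_w=60), 2))  # ~2.0 h
```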

Velodyne creates the most power-efficient sensors in the industry. Compared to other lidar sensors in its class, such as those of Luminar and Innoviz, Velodyne’s lidar sensors use less power while maximizing performance and range.

 
 