Autonomous Solutions, Inc. (ASI) receives SBIR funding for Deep Learning architecture to support multiple sensors in GPS-denied environments
ASI receives an SBIR Phase I grant from the U.S. Army to develop a Deep Learning (DL) architecture that will support sensor fusion in environments with limited or no GPS.
Autonomous Solutions, Inc. (ASI) has been awarded an SBIR Phase I grant from the U.S. Army Combat Capabilities Development Command Ground Vehicle Systems Center (formerly TARDEC) to develop a Deep Learning (DL) architecture that will support sensor fusion in environments with limited or no GPS.
"Environmental sensing today typically includes cameras, LiDAR and radar," said Jeff Ferrin, CTO of ASI. "Each of these devices has a specific purpose, but not all of them work well in every situation. For example, cameras are great at collecting high-resolution color information, but do not provide much useful information in the dark."
In addition to the challenges cameras face in poorly lit or degraded visual environments, LiDAR and radar sensors have limitations of their own. LiDAR performs well in most lighting conditions but can yield false positives in heavy rain, fog, snow, or dust, because the light wavelengths it emits scatter off airborne particles. Radar usually penetrates these degraded visual environments but often lacks spatial resolution.
"ASI's goal is to design a deep learning architecture that fuses information from LiDAR, radar and cameras," said Ferrin. "We plan to build upon machine learning techniques we have already developed for LiDAR data."
Deep learning is a branch of artificial intelligence and machine learning that extracts valuable information from large volumes of data. Cameras are often used in deep learning models because they produce dense, regularly sampled data: fixed grids of pixels that map naturally onto standard architectures such as convolutional neural networks.
The case is different for LiDAR and radar. These sensor types do not naturally provide regularly sampled data; a LiDAR sweep, for example, is an unordered set of points whose density varies across a scene, which makes such problems difficult to formulate in current deep learning frameworks. This gap in current research efforts, deep learning for LiDAR and radar, is the focus of the grant.
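To make the contrast concrete, the following minimal sketch shows why image data drops straight into a standard convolution while an irregular point cloud does not, along with one common workaround in the spirit of PointNet. This is purely illustrative and does not represent ASI's architecture; the tensor shapes, layer sizes, and the use of PyTorch are assumptions for the example.

```python
# Illustrative sketch only, not ASI's architecture. Assumes PyTorch.
import torch
import torch.nn as nn

# An image is a dense, regularly sampled grid: a fixed-shape tensor that
# a standard convolution consumes directly.
image = torch.randn(1, 3, 224, 224)            # batch, channels, height, width
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
image_features = conv(image)                   # works out of the box

# A LiDAR sweep is an unordered set of points whose count varies per scan,
# so there is no fixed grid for a convolution kernel to slide over.
scan = torch.randn(1, 4, 2048)                 # batch, (x, y, z, intensity), N points

# One common workaround (in the spirit of PointNet) is a shared per-point
# MLP followed by a symmetric pooling step, which makes the result
# invariant to both point ordering and point count.
class PointFeatureNet(nn.Module):
    def __init__(self, in_channels=4, out_channels=128):
        super().__init__()
        # Conv1d with kernel size 1 applies the same MLP to every point.
        self.mlp = nn.Sequential(
            nn.Conv1d(in_channels, 64, 1), nn.ReLU(),
            nn.Conv1d(64, out_channels, 1), nn.ReLU(),
        )

    def forward(self, points):
        per_point = self.mlp(points)           # (batch, out_channels, N)
        return per_point.max(dim=2).values     # order-invariant global feature

lidar_features = PointFeatureNet()(scan)       # shape: (1, 128)
```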
Improved utilization of data from multiple devices can paint a more accurate picture of a vehicle's surroundings, keeping it safer and making it more efficient. The grant solicitation states, "It is anticipated that harnessing a wide variety of sensors altogether will benefit the autonomous vehicles by providing a more general and robust self-driving system, especially for navigating in different types of challenging weather, environments, road conditions and traffic."
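One simple way to combine such heterogeneous sensors is late fusion: encode each modality separately, then merge the resulting feature vectors before a shared decision head. The sketch below shows this general pattern only; the feature dimensions, module names, and output semantics are assumptions for illustration, not details of ASI's design.

```python
# A minimal late-fusion sketch, illustrative only. Assumes PyTorch and that
# per-modality feature vectors have already been computed by upstream
# encoders (e.g., a CNN for camera, a point-cloud network for LiDAR).
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Concatenate per-modality features, then apply a shared head."""
    def __init__(self, cam_dim=256, lidar_dim=128, radar_dim=64, n_outputs=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(cam_dim + lidar_dim + radar_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_outputs),   # hypothetical outputs, e.g. steer/speed
        )

    def forward(self, cam_feat, lidar_feat, radar_feat):
        fused = torch.cat([cam_feat, lidar_feat, radar_feat], dim=1)
        return self.fuse(fused)

# Toy usage with random stand-in feature vectors.
net = LateFusionNet()
out = net(torch.randn(1, 256), torch.randn(1, 128), torch.randn(1, 64))
```

A design note: late fusion degrades gracefully when one sensor is unreliable (a dark scene for the camera, fog for the LiDAR), since the other modality branches still contribute features, which is precisely the robustness the solicitation describes.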
"In the last few years, we have seen a growing need in the world of robotics to advance industry capabilities in machine learning, deep learning, and other artificial intelligence algorithms to improve performance in these challenging environments," said Ferrin.
ASI is required to demonstrate the feasibility of the deep learning architecture in a simulation environment, including a road-following system that controls an autonomous vehicle on a course with obstacles and a degraded visual environment.
About ASI
Autonomous Solutions, Inc. (ASI) is a world leader in industrial vehicle automation. ASI serves clients across the world in the mining, agriculture, automotive, government, and manufacturing industries with remote control, teleoperation, and fully automated solutions from its headquarters and 100-acre proving ground in northern Utah.