A.I.-equipped Robots to Take on Task of Inspecting Piers and Bridges More Frequently

Waves, winds, currents, wakes from passing boats and eddies swirling around structures make water one of the most complex environments for experienced boat captains, let alone robots. Now, researchers at Stevens Institute of Technology are developing algorithms that teach robots to adapt to the constantly changing dynamics of the sea in order to address one of our nation’s greatest concerns: protecting and preserving our aging water-rooted infrastructure, such as piers, pipelines, bridges and dams.

The work, led by Brendan Englot, a professor of mechanical engineering at Stevens, grapples with an ongoing problem: how frequently these underwater structures get checked. There are far more underwater structures than there are divers to inspect them with desirable frequency, and divers must sometimes descend to extreme and dangerous depths, then spend weeks recovering afterward. Englot is training robots to take on such tasks – but it’s not easy.

“There are so many difficult disturbances pushing the robot around, and there is often very poor visibility, making it hard to give a vehicle underwater the same situational awareness that a person would have just walking around on the ground or being up in the air,” says Englot.

Englot is up for the challenge.

His research group employs a type of artificial intelligence known as reinforcement learning, which uses algorithms that are not based on an exact mathematical model; rather, these goal-oriented algorithms teach robots how to carry out a complex objective by performing actions and observing the results. As the robot collects data, it updates its “policy” to figure out optimal ways to maneuver and navigate underwater.
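The article does not describe the lab's actual algorithms, but the action–observation–update loop it mentions can be illustrated with a minimal, hypothetical sketch: tabular Q-learning on a one-dimensional "channel" where a random current occasionally pushes the robot off course, much like the disturbances described above. All states, actions, and parameters here are made up for illustration.

```python
import random

# Minimal Q-learning sketch: a robot on a 1-D channel of cells tries to
# reach a goal cell while a random "current" occasionally pushes it back.
N_CELLS, GOAL = 10, 9
ACTIONS = (-1, +1)                 # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(state, action):
    drift = random.choice((-1, 0, 0, 0))      # occasional disturbance
    nxt = min(max(state + action + drift, 0), N_CELLS - 1)
    reward = 1.0 if nxt == GOAL else -0.01    # small cost per move
    return nxt, reward

random.seed(0)
for _ in range(2000):                         # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        # the "policy update": nudge Q toward the observed outcome
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy points toward the goal from every cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)}
print(policy)
```

Even with random drift perturbing every move, the learned policy converges on heading toward the goal – the same idea, at toy scale, as a vehicle learning to maneuver despite currents.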

The data comes from sonar, the most reliable tool for navigating undersea. Like a dolphin using echolocation, Englot’s robots send out high-frequency chirps and measure how long it takes the sound to return after bouncing off surrounding structures – collecting data and gaining situational awareness all while being knocked around by any number of forces.
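The ranging principle behind those chirps is simple time-of-flight arithmetic: distance equals the speed of sound in water times the round-trip echo time, divided by two. A small sketch (the ~1500 m/s figure is a standard approximation; it varies with temperature, salinity, and depth):

```python
# Time-of-flight ranging, the principle behind sonar echolocation:
# range = (speed of sound in water * round-trip time) / 2
SPEED_OF_SOUND_WATER = 1500.0  # m/s, approximate; varies with conditions

def range_from_echo(round_trip_s: float) -> float:
    """Distance to a reflecting surface given the echo's round-trip time."""
    return SPEED_OF_SOUND_WATER * round_trip_s / 2.0

print(range_from_echo(0.02))  # a 0.02 s round trip -> 15.0 m
```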

Englot recently sent a robot on an autonomous mission to map a Manhattan pier. “We didn’t have a prior model of that pier,” says Englot. “We were able to just send our robot down and it was able to come back and successfully locate itself throughout the whole mission.” Guided by algorithms created in the Englot lab, the robot moved independently, gathering information to produce a 3D map showing the location of the pier’s pilings.
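The article does not detail how the lab turns sonar returns into a map of pilings, but the general idea of building a map from range measurements can be sketched with a toy 2-D occupancy grid: each echo's bearing and range are converted to a world coordinate, and the corresponding cell is marked occupied. The pose, beam data, and grid parameters below are invented example values, not the Stevens lab's method.

```python
import math

# Toy 2-D occupancy grid: mark cells where sonar echoes indicate a surface.
GRID, RES = 20, 0.5          # 20x20 cells, 0.5 m per cell
occ = [[0] * GRID for _ in range(GRID)]

robot_x, robot_y = 5.0, 5.0  # example robot position in metres
# (bearing in radians, measured range in metres) -- made-up readings
readings = [(0.0, 2.0), (math.pi / 2, 3.5), (math.pi, 1.0)]

for bearing, rng in readings:
    # project the echo's hit point into world coordinates
    hit_x = robot_x + rng * math.cos(bearing)
    hit_y = robot_y + rng * math.sin(bearing)
    i, j = int(hit_y / RES), int(hit_x / RES)
    if 0 <= i < GRID and 0 <= j < GRID:
        occ[i][j] = 1        # mark the cell the echo returned from

print(sum(map(sum, occ)))    # count of occupied cells
```

A real system would also mark the cells along each beam as free space and fuse many noisy scans probabilistically, but the geometry is the same: pose plus bearing plus range yields a point on the map.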

These first steps are encouraging, but Englot is working to expand his robots’ capabilities. He foresees routine inspections by robots of everything from ship hulls to offshore oil platforms. In addition, robots could map the Earth’s vast underwater terrain.

However, achieving these goals means addressing sonar’s limitations. “Imagine walking through a building and navigating the hallways with the same gray-scale, grainy visual resolution as a medical ultrasound,” says Englot.

Once a structure has been mapped, an autonomous robot could plan a second pass: a higher-resolution inspection of critical areas using a camera. Englot further imagines eel-like robots that can weave through crevices and narrow spaces, maybe even assisting in rescues. “To really take advantage of those kinds of designs, first we need to be able to navigate with confidence,” he says. Englot continues to tweak his algorithms to provide that confidence.

Englot is also advancing underwater technology beyond the current patchwork maps tediously created by joystick-controlled robots, like a rover on a faraway planet. “Some of the toughest challenges in robot autonomy are underwater,” he says. There is a long way to go, but overcoming challenges drew Englot to the field of robotics in the first place.
