Researchers turn 3D world into 'projection screen' for better robot-to-human communication

Knowing exactly where and what task a robot will do next can help workers avoid injury.

Researchers at the Georgia Institute of Technology have found a new way to improve human and robot safety in manufacturing by developing a method that lets a robot project its next action into the 3D world and onto any moving object.


"We can now use any item in our world as the ‘display screen instead of a projection screen or monitor," says Heni Ben Amor, research scientist in Georgia Techs School of Interactive Computing. "The robots intention is projected onto something in the 3D world, and its intended action continues to follow the object wherever that moves as long as necessary."

The discovery, born from two algorithms and a spare car door, is ideal for manufacturing scenarios in which humans and robots work side by side on assembly. Instead of controlling the robot with a tablet or from a distant computer monitor, the human worker can safely stand at the robot's side to inspect its precision, quickly make adjustments to its work, or move out of the way as the robot and human take turns assembling an object.


"The goal of this research was to get information out of the virtual space inside the computer and into the real physical space that we inhabit," Ben Amor adds. "As a result of that, we can increase safety and lead to an intuitive interaction between humans and robots."

The discovery was developed over a four-month period by Ben Amor and Rasmus Andersen, a visiting Ph.D. student from Aalborg University in Denmark. The team realized that, by combining existing research at Georgia Tech's Institute for Robotics & Intelligent Machines (IRIM) with new algorithms, plus personal experience with auto manufacturers, they could make "intention projection" possible.

They first refined algorithms that allow a robot to detect and track 3D objects, building on earlier research from Georgia Tech and Aalborg University. They then developed a second, entirely new set of algorithms that display information onto a 3D object in a geometrically correct way. Tying the two pieces together lets the robot perceive an object, identify where on that object to project information and act, and then continuously project that information as the object moves or rotates. Andersen led the coding.
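The geometric core of that second step can be illustrated with a short sketch. If the projector is calibrated like an inverse camera and the tracking stage supplies the object's pose each frame, mapping points defined on the object's surface into projector pixels is a standard pose-plus-intrinsics computation. The sketch below, in Python with OpenCV, assumes placeholder calibration values, transforms, and anchor points (all illustrative, not taken from the published system), and is not the team's actual implementation.

```python
import numpy as np
import cv2

# Minimal sketch of geometrically correct projection onto a tracked 3D object,
# assuming a calibrated projector modeled as an inverse pinhole camera.
# The object pose T_obj_to_cam would come from the 3D detection/tracking stage;
# here it is a fixed dummy value so the math is self-contained.

# Projector intrinsics (assumed values from a prior projector calibration).
projector_K = np.array([[1400.0, 0.0, 960.0],
                        [0.0, 1400.0, 540.0],
                        [0.0, 0.0, 1.0]])
projector_dist = np.zeros(5)  # assume negligible lens distortion

# Rigid transform from the tracking camera's frame to the projector's frame,
# normally obtained from a camera-projector calibration (assumed offset here).
T_cam_to_proj = np.eye(4)
T_cam_to_proj[:3, 3] = [0.10, 0.0, 0.0]

# Pose of the tracked object (e.g. a car door) in the camera frame. In a real
# pipeline this would be updated every frame by the tracker.
T_obj_to_cam = np.eye(4)
T_obj_to_cam[:3, 3] = [0.0, 0.0, 1.5]  # 1.5 m in front of the camera

# Points on the object's surface (object coordinates, meters) where the
# robot's next action should be displayed, e.g. the outline of a drill hole.
anchor_points = np.float32([[0.12, 0.05, 0.0],
                            [0.14, 0.05, 0.0],
                            [0.14, 0.07, 0.0],
                            [0.12, 0.07, 0.0]])

def project_onto_object(points_obj, T_obj_to_cam, T_cam_to_proj, K, dist):
    """Map 3D points in the object frame to projector pixels, so projected
    graphics land on the object's surface and follow it as it moves."""
    T_obj_to_proj = T_cam_to_proj @ T_obj_to_cam
    rvec, _ = cv2.Rodrigues(T_obj_to_proj[:3, :3])
    tvec = T_obj_to_proj[:3, 3]
    pixels, _ = cv2.projectPoints(points_obj, rvec, tvec, K, dist)
    return pixels.reshape(-1, 2)

if __name__ == "__main__":
    # Re-running this every frame with the tracker's latest T_obj_to_cam is
    # what makes the projected marker follow the moving object.
    print(project_onto_object(anchor_points, T_obj_to_cam, T_cam_to_proj,
                              projector_K, projector_dist))
```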

IRIM has contributed previous research to BMW, Daimler AG, and Peugeot. The recent discovery was inspired by what Ben Amor had observed during earlier work with Peugeot in Paris and from Andersen's previous work on interaction with mobile robots. The group next plans to formally publish their research.

About the Georgia Tech College of Computing
The Georgia Tech College of Computing is a national leader in the creation of real-world computing breakthroughs that drive social and scientific progress. With its graduate program ranked 9th nationally by U.S. News and World Report, the College's unconventional approach to education is expanding the horizons of traditional computer science students through interdisciplinary collaboration and a focus on human-centered solutions. For more information about the Georgia Tech College of Computing, its academic divisions and research centers, please visit http://www.cc.gatech.edu.
