Researchers turn 3D world into ‘projection screen’ for better robot-to-human communication

Knowing exactly where and what task a robot will do next can help workers avoid injury.

Researchers at the Georgia Institute of Technology have developed a method that lets a robot project its next action into the 3D world and onto any moving object, improving human and robot safety in manufacturing settings.

"We can now use any item in our world as the ‘display screen' instead of a projection screen or monitor," says Heni Ben Amor, research scientist in Georgia Tech's School of Interactive Computing. "The robot's intention is projected onto something in the 3D world, and its intended action continues to follow the object wherever it moves, for as long as necessary."

The discovery, born from two algorithms and a spare car door, is ideal for manufacturing scenarios in which humans and robots assemble products together. Instead of controlling the robot with a tablet or from a distant computer monitor, a worker can safely stand at the robot's side to inspect its precision, quickly adjust its work, or step clear as robot and human take turns assembling an object.


"The goal of this research was to get information out of the virtual space inside the computer and into the real physical space that we inhabit," Ben Amor adds. "As a result of that, we can increase safety and lead to an intuitive interaction between humans and robots."

The discovery was developed over a four-month period by Ben Amor and Rasmus Andersen, a visiting Ph.D. student from Aalborg University in Denmark. The team realized that, by combining existing research available at Georgia Tech's Institute for Robotics & Intelligent Machines (IRIM) with new algorithms, plus personal experience with auto manufacturers, they could make "intention projection" possible.

They first refined algorithms, building on earlier research from Georgia Tech and Aalborg University, that allow a robot to detect and track 3D objects. They then developed an entirely new set of algorithms that display information onto a 3D object in a geometrically correct way. Tying these two pieces together allows a robot to perceive an object, identify where on that object to project information and act, and continuously project that information as the object moves or rotates. Andersen led the coding.
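The paper describing this system has not yet been published, so the exact pipeline is not public; the core idea, though, is standard projective geometry: each frame, the tracker reports the object's pose, and the annotation is re-projected through a pinhole model so it stays anchored to the same spot on the object. A minimal sketch, assuming hypothetical projector intrinsics and tracker-supplied poses:

```python
import numpy as np

# Hypothetical projector intrinsics (focal lengths, principal point) --
# illustrative values only, not from the Georgia Tech system.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_point(point_obj, R, t):
    """Map a 3D point in the object's own frame to projector pixel coordinates.

    R, t: the object's pose (rotation matrix, translation) as reported
    by the 3D tracker each frame.
    """
    p_proj = R @ point_obj + t      # object frame -> projector frame
    uvw = K @ p_proj                # pinhole projection (homogeneous)
    return uvw[:2] / uvw[2]        # divide out depth -> pixel coords

# Spot on the object where the robot's next action should be drawn
# (coordinates in the object's own frame, in meters).
target = np.array([0.05, 0.02, 0.0])

# Two successive tracker poses: the object slides 10 cm to the right.
pose1 = (np.eye(3), np.array([0.00, 0.0, 1.0]))
pose2 = (np.eye(3), np.array([0.10, 0.0, 1.0]))

px1 = project_point(target, *pose1)   # pixel before the move
px2 = project_point(target, *pose2)   # pixel after: annotation follows
```

Because the pixel is recomputed from the live pose every frame, the projected annotation tracks the object automatically; rotations are handled the same way through `R`.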

IRIM has contributed previous research to BMW, Daimler AG, and Peugeot. The recent discovery was inspired by what Ben Amor had observed during earlier work with Peugeot in Paris and from Andersen's previous work on interaction with mobile robots. The group next plans to formally publish their research.

About the Georgia Tech College of Computing
The Georgia Tech College of Computing is a national leader in the creation of real-world computing breakthroughs that drive social and scientific progress. With its graduate program ranked 9th nationally by U.S. News and World Report, the College's unconventional approach to education is expanding the horizons of traditional computer science students through interdisciplinary collaboration and a focus on human-centered solutions. For more information about the Georgia Tech College of Computing, its academic divisions and research centers, please visit http://www.cc.gatech.edu.
