New study shows methods robots can use to assess their own performance

Establishing human-robot trust isn’t always easy. Beyond the fear of automation going rogue, there is a more basic obstacle: robots simply don’t communicate how they are doing, which leaves humans with little basis for trusting them.

Now, research is shedding light on how autonomous systems can foster human confidence in robots. Broadly, the work suggests that humans have an easier time trusting a robot that offers some kind of self-assessment as it goes about its tasks, according to Aastha Acharya, a Draper Scholar and Ph.D. candidate at the University of Colorado Boulder.

Acharya said we need to start considering what communications are useful, particularly if we want humans to trust and rely on their automated co-workers. “We can take cues from any effective workplace relationship, where the key to establishing trust is understanding co-workers’ capabilities and limitations,” she said. A gap in understanding can lead to improper tasking of the robot and, in turn, to misuse, abuse or disuse of its autonomy.

To understand the problem, Acharya joined researchers from Draper and the University of Colorado Boulder to study how autonomous robots that use learned probabilistic world models can compute and express self-assessed competencies in the form of machine self-confidence. A probabilistic world model accounts for uncertainty in events and actions when predicting which future outcomes a course of action might produce.
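
To make the idea concrete, here is a minimal Python sketch of how a learned probabilistic world model can be sampled. The transition model, policy and numbers are illustrative stand-ins, not details from the study.

    import random

    def sample_transition(state, action):
        # Hypothetical learned transition model: Gaussian noise stands in
        # for uncertainty about how events and actions actually play out.
        drift = random.gauss(0.0, 1.0)
        next_state = state + action + drift
        reward = -abs(next_state)  # illustrative objective: stay near zero
        return next_state, reward

    def rollout(policy, state, horizon=20):
        # Simulate one possible future under the learned model.
        total = 0.0
        for _ in range(horizon):
            action = policy(state)
            state, reward = sample_transition(state, action)
            total += reward
        return total

    # Repeated rollouts give a distribution over outcomes, not one forecast.
    policy = lambda s: -0.5 * s
    outcomes = [rollout(policy, 0.0) for _ in range(1000)]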

In the study, the world models were designed so that the robots could forecast their own behavior and report a self-assessment of their tasking before executing it. With this information, a human can better judge whether a robot is capable enough to complete a task and can adjust expectations to suit the situation.
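
One simple way to turn such a forecast into a reportable self-confidence figure is the predicted probability of meeting the task requirement. The sketch below takes that route; the paper’s actual competency measures are more sophisticated, and the threshold and data here are invented.

    import random

    def self_confidence(forecast, required):
        # Fraction of simulated futures that meet the task requirement;
        # a deliberately simple stand-in for the paper's competency measures.
        return sum(outcome >= required for outcome in forecast) / len(forecast)

    # The forecast would come from world-model rollouts like those sketched
    # above; here it is faked with a Gaussian so the snippet runs on its own.
    forecast = [random.gauss(-25.0, 10.0) for _ in range(1000)]
    confidence = self_confidence(forecast, required=-30.0)
    print(f"Self-assessed competency: {confidence:.0%}")
    if confidence < 0.8:  # acceptance threshold chosen by the operator
        print("Operator may re-task the robot or adjust expectations.")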

To demonstrate their method, the researchers developed and tested a probabilistic world model on a simulated intelligence, surveillance and reconnaissance (ISR) mission for an autonomous uncrewed aerial vehicle (UAV). The UAV flew over a field populated by a radio tower, an airstrip and mountains. The mission was designed to collect data from the tower while avoiding detection by an adversary. The UAV was asked to weigh factors such as detections, collections, battery life and environmental conditions in assessing its competency for the task.
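
Sketched in the same hypothetical style, those mission factors might feed a competency estimate like the one below. The outcome variables, probabilities and success criteria are invented for illustration and do not reproduce the study’s simulation.

    from dataclasses import dataclass
    import random

    @dataclass
    class MissionOutcome:
        # Illustrative factors only; the study's state space is richer.
        data_collected: float  # fraction of the tower's data gathered
        detected: bool         # was the UAV spotted by the adversary?
        battery_left: float    # fraction of charge left at mission end

    def simulate_mission():
        # One hypothetical rollout of the ISR mission under uncertainty.
        wind = abs(random.gauss(0.0, 0.1))  # environmental conditions
        return MissionOutcome(
            data_collected=min(1.0, random.gauss(0.85, 0.10) - wind),
            detected=random.random() < 0.10 + wind,
            battery_left=max(0.0, random.gauss(0.30, 0.10) - wind),
        )

    def success(o):
        return (o.data_collected >= 0.8 and not o.detected
                and o.battery_left > 0.05)

    rollouts = [simulate_mission() for _ in range(5000)]
    competency = sum(success(o) for o in rollouts) / len(rollouts)
    print(f"Self-assessed competency for this tasking: {competency:.0%}")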

Findings were reported in the article “Generalizing Competency Self-Assessment for Autonomous Vehicles Using Deep Reinforcement Learning,” in which the team addressed two important questions: How do we encourage appropriate human trust in an autonomous system? And how do we know that the system’s self-assessed capabilities are accurate?
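
The second question is one of calibration: self-assessments are only useful if tasks a robot rates at, say, 80 percent competency actually succeed about 80 percent of the time. A minimal check might look like the following, with the binning scheme and sample data invented for illustration.

    def calibration_table(predictions, successes, bins=5):
        # predictions: self-assessed success probabilities in [0, 1]
        # successes:   booleans, whether each task actually succeeded
        rows = []
        for b in range(bins):
            lo, hi = b / bins, (b + 1) / bins
            group = [s for p, s in zip(predictions, successes)
                     if lo <= p < hi or (hi == 1.0 and p == 1.0)]
            if group:
                rows.append((lo, hi, sum(group) / len(group), len(group)))
        return rows

    # A well-calibrated robot's high-confidence predictions should mostly succeed.
    table = calibration_table([0.90, 0.85, 0.60, 0.95, 0.30],
                              [True, True, False, True, False])
    for lo, hi, rate, n in table:
        print(f"predicted {lo:.0%}-{hi:.0%}: observed {rate:.0%} over {n} tasks")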

According to the paper, human-machine collaboration lies at the core of a wide spectrum of algorithmic strategies for generating soft assurances, all of which aim to manage trust. “Humans must be able to establish a basis for correctly using and relying on robotic autonomy for success,” the authors said. The team behind the paper includes Acharya’s advisors, Rebecca Russell, Ph.D., of Draper, and Nisar Ahmed, Ph.D., of the University of Colorado Boulder.

The research into autonomous self-assessment is based upon work supported by DARPA’s Competency-Aware Machine Learning (CAML) program.

In addition, funds for this study were provided by the Draper Scholar Program. The program gives graduate students the opportunity to conduct their thesis research under the supervision of both a faculty adviser and a member of Draper’s technical staff, in an area of mutual interest. Draper Scholars’ graduate degree tuition and stipends are funded by Draper.

Since 1973, the Draper Scholar Program, formerly known as the Draper Fellow Program, has supported more than 1,000 graduate students pursuing advanced degrees in engineering and the sciences. Draper Scholars are from both civilian and military backgrounds, and Draper Scholar alumni excel worldwide in the technical, corporate, government, academic, and entrepreneurship sectors.

Draper

At Draper, we believe exciting things happen when new capabilities are imagined and created. Whether formulating a concept and developing each component to achieve a field-ready prototype, or combining existing technologies in new ways, Draper engineers apply multidisciplinary approaches that deliver new capabilities to customers. As a nonprofit engineering innovation company, Draper focuses on the design, development and deployment of advanced technological solutions for the world’s most challenging and important problems. We provide engineering solutions directly to government, industry and academia; work on teams as prime contractor or subcontractor; and participate as a collaborator in consortia. We provide unbiased assessments of technology or systems designed or recommended by other organizations—custom designed, as well as commercial-off-the-shelf. Visit Draper at www.draper.com.
