Friday, March 3, 2023

Competence and Trust for Human-Autonomy Teams

 
I recently attended two talks on how autonomous systems interact with people.  The first considered the human operator who helps an autonomous system; the second studied the human whom the autonomous system is helping.

The 37th AAAI Conference on Artificial Intelligence included a "Bridge Session" on Artificial Intelligence and Robots.  In that session, Connor Basich, a Ph.D. student at the University of Massachusetts, gave a talk with the title "Competence-Aware Autonomy: An Essential Skill for Robots in the Real World."  According to Basich, a competence-aware autonomous system has "the ability to know, reason about, and act on the extent of its own capabilities in any situation in the context of different sources of external assistance."  In his talk, Basich referred to work by himself and Shlomo Zilberstein (his advisor) as well as papers by Sadegh Rabiee and Joydeep Biswas (at the University of Texas) and others.

Their work on competence-aware autonomy is motivated by the need for safe autonomy, including safe autonomous vehicles.  Some autonomous systems will ask a human operator to intervene when they cannot determine a safe action, which is good, but sometimes an autonomous system relies too much upon the operator, which is undesirable.  Thus, the autonomous system needs a better way to determine whether the operator's intervention is needed.

Competence-aware perception (also called introspective perception) can predict which parts of the sensed input are inaccurate or will produce erroneous results downstream in the perception pipeline.  Rabiee and Biswas developed introspective vision for SLAM (IV-SLAM).  This approach uses a convolutional neural network to predict the reliability of each part of the input image and then directs the SLAM algorithm to sample more features from the reliable parts of the image.
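To make the idea concrete, here is a minimal sketch (in Python) of how a predicted reliability map might bias feature sampling.  The model, function names, and sampling scheme are placeholders of my own, not the actual IV-SLAM implementation.

```python
import numpy as np

def reliability_map(image, model):
    """Predict a per-cell reliability score in [0, 1] for an image.

    `model` stands in for a trained convolutional network (hypothetical
    here); the idea is that it has learned where feature extraction
    tends to produce unreliable results.
    """
    return model.predict(image)          # assumed shape: (rows, cols)

def sample_feature_cells(rel_map, n_features, rng=None):
    """Draw feature-extraction locations, biased toward reliable cells."""
    rng = rng or np.random.default_rng()
    weights = rel_map.flatten()
    weights = weights / weights.sum()    # turn scores into a sampling distribution
    idx = rng.choice(weights.size, size=n_features, replace=False, p=weights)
    return np.unravel_index(idx, rel_map.shape)

# Usage (with a hypothetical trained model and camera frame):
# rel = reliability_map(frame, introspection_net)
# rows, cols = sample_feature_cells(rel, n_features=200)
# ...pass those cells to the SLAM front end for feature extraction...
```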

More generally, a competence-aware system has multiple levels of autonomy, and each level calls the human operator under different conditions.  At the manual operation level, the operator does everything.  At a higher level of autonomy, the system first attempts the selected action but calls the operator if the attempt fails.  A competence-aware system can select the optimal level of autonomy for the current situation (Basich, 2020; Basich et al., 2023).
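As a rough illustration of how a system might pick a level, here is a minimal sketch that scores each autonomy level by its expected cost in the current state.  The cost terms, probabilities, and state names are made up for the example; this is not the formulation in Basich's papers, which model the problem far more carefully.

```python
from dataclasses import dataclass

@dataclass
class AutonomyLevel:
    name: str
    operator_cost: float      # expected cost of the human effort this level requires
    failure_prob: callable    # state -> probability that this level fails in that state
    failure_cost: float       # cost incurred when the level fails

def select_level(levels, state):
    """Pick the autonomy level with the lowest expected cost in this state."""
    def expected_cost(level):
        return level.operator_cost + level.failure_prob(state) * level.failure_cost
    return min(levels, key=expected_cost)

# Example with made-up numbers: a clear intersection versus a crowded one.
levels = [
    AutonomyLevel("manual",     10.0, lambda s: 0.0,                              100.0),
    AutonomyLevel("supervised",  1.0, lambda s: 0.05 if s == "crowded" else 0.01, 100.0),
    AutonomyLevel("autonomous",  0.0, lambda s: 0.30 if s == "crowded" else 0.01, 100.0),
]
print(select_level(levels, "clear").name)    # -> autonomous
print(select_level(levels, "crowded").name)  # -> supervised
```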

It appears to me that competence-aware autonomy is an innovative type of metareasoning.  It's metareasoning because it monitors and controls the system's reasoning processes.  For instance, the IV-SLAM approach uses a metareasoning policy to control the SLAM algorithm.  Basich's competence-aware system uses a metareasoning policy to determine the level of autonomy (which encompasses the entire reasoning process).

Dawn Tilbury, Herrick Professor of Engineering and Department Chair of Robotics at the University of Michigan, gave a seminar at the University of Maryland about her research on how much drivers trust automated systems.  In her work, the human subjects drove a simulated car with level 3 autonomy that kept the vehicle in a lane, maintained a constant speed, warned the driver of obstacles, and stopped the vehicle to avoid collisions (emergency braking).  In addition, the subjects were rewarded for their performance on a visual search task that they did while driving, but they were penalized for emergency stops, so they had to steer the vehicle around obstacles when necessary.  In this setup, the warning system was inaccurate (it sometimes gave false alarms, and it sometimes failed to warn the driver).  If the subject trusted the system too much, he spent too much time on the search task and failed to react when the system missed an obstacle.  If the subject trusted the system too little, he spent too little time on the search task and too much time driving.

After developing a way to measure trust, Tilbury and her collaborators gave the automated system the ability to monitor the level of trust and tell the subject when he needed to pay more attention and when he needed to trust the system more.  They found that this feedback successfully nudged the subjects to trust the system appropriately.
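As a rough illustration of the feedback idea (not Tilbury's actual method), here is a minimal sketch of a trust-calibration nudge.  The trust score, thresholds, and messages are placeholders I chose for the example.

```python
def nudge_driver(estimated_trust, target_low=0.4, target_high=0.7):
    """Return a feedback message that pushes trust toward a calibrated band.

    `estimated_trust` is assumed to be a score in [0, 1] from some trust
    estimator (e.g., based on how much the driver relies on the warnings);
    the band thresholds here are placeholders, not values from the study.
    """
    if estimated_trust > target_high:
        return "Caution: the warning system can miss obstacles. Watch the road."
    if estimated_trust < target_low:
        return "The automation is handling the lane. You can return to the search task."
    return None  # trust is in the calibrated band; no nudge needed
```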