Facial Expressions Help Smooth Relations Between Robots And Humans

As robots have become a more common feature of workplaces, and indeed a host of other settings, their interactions with humans have become more important. This has prompted analysis of how robots are designed, and particularly how their faces are designed, in order to understand how best to facilitate interaction between man and machine.

For instance, research from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory suggests that human facial expressions could be crucial in establishing trust between people and machines, at least on the battlefield.

“We wanted to characterize and quantify factors that impact the emotional experience that humans have with trust in automated driving,” the researchers explain. “With this information, we want to develop a robust way to predict decision errors in automation use to eventually enable active, online mitigation strategies and effective calibration techniques when humans and agents are teaming in real-time.”

The researchers used flexible mixture modeling to sort participants into four groups based upon observed differences in traits, such as age and personality, and states, such as trust and stress. They then analyzed facial recognition-based measures of emotional expression and self-reported measures of trust under a range of different automation conditions.
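As a rough illustration of this kind of grouping, the sketch below fits a four-component Gaussian mixture to hypothetical participant data. The feature set, the use of scikit-learn, and the choice of a Gaussian mixture are assumptions for demonstration purposes rather than the researchers' actual pipeline.

```python
# Illustrative sketch (not the study's pipeline): clustering participants into
# latent groups from trait and state measurements with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Hypothetical per-participant features: age, a personality score,
# self-reported trust, and a stress measure.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # stand-in for real participant data

X_scaled = StandardScaler().fit_transform(X)

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X_scaled)        # hard cluster assignment per participant
posteriors = gmm.predict_proba(X_scaled)  # soft membership probabilities

print(np.bincount(labels))  # how many participants fall into each of the four groups
```

A mixture model's soft membership probabilities are a natural fit for this kind of analysis, since an individual may sit between groups rather than belonging cleanly to a single one.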

“It is often stated that for appropriate trust to be developed and effectively calibrated, an individual’s expectations must match the system’s actual behaviors,” the researchers explain. “This research shows that calibration metrics will vary across people; certain groups may be more prone to over-trust automation, while others may mistrust the automation from the start. From a research and development perspective, this approach provides a way to tailor human-automation interaction modalities to individuals without needing to define and validate specific models for each person.”

Realistic expressions

Recent research from Imperial College London explores how robots used in medical settings could be designed to help us interact effectively with them. The researchers wanted to explore how “patient robots” are used to help train medical students, and whether the faces of these machines could better communicate feelings of pain, and therefore help to reduce both errors and bias during physical examinations.

“Improving the accuracy of facial expressions of pain on these robots is a key step in improving the quality of physical examination training for medical students,” the researchers explain.

The researchers asked participants to perform a physical examination on the abdomen of a robotic “patient”, recording the specific force each volunteer applied. This pressure then triggered facial expressions in the robot intended to replicate the kind of pain-related expressions humans exhibit.
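A minimal sketch of how measured palpation force might be turned into a pain intensity that drives the robot's expression is shown below. The thresholds, units, and function are illustrative assumptions, not values or code from the study.

```python
# Illustrative mapping from palpation force to a normalised pain intensity.
def pain_intensity(force_newtons: float,
                   pain_threshold: float = 5.0,
                   max_force: float = 20.0) -> float:
    """Return a pain intensity in [0, 1] from the measured abdominal force."""
    if force_newtons <= pain_threshold:
        return 0.0
    intensity = (force_newtons - pain_threshold) / (max_force - pain_threshold)
    return min(intensity, 1.0)

print(pain_intensity(3.0))   # 0.0 -> no visible pain expression
print(pain_intensity(12.5))  # 0.5 -> moderate expression
print(pain_intensity(25.0))  # 1.0 -> full expression
```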

Expressions of pain

This process revealed the specific order in which the various parts of a robot’s face must be activated in order to accurately express pain, as well as the appropriate speed and magnitude of each expression.

The results showed that the most realistic facial expressions occurred when the upper part of the face was activated first, with expressions around the eyes followed by those around the mouth.
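The sketch below encodes that ordering in a simple controller that activates the region around the eyes before the mouth, with a configurable delay and magnitude. The actuator interface and region names are assumptions for illustration, not the researchers' implementation.

```python
# Illustrative sequencing of facial regions: eyes first, then mouth.
import time
from typing import Callable

def express_pain(activate: Callable[[str, float], None],
                 intensity: float,
                 delay_s: float = 0.2) -> None:
    """Drive facial regions in order to express pain at a given intensity."""
    for region in ("eyes", "mouth"):
        activate(region, intensity)
        time.sleep(delay_s)  # stagger activation; perceived realism varied with this delay

# Example usage with a stand-in actuator that simply prints the commands.
express_pain(lambda region, level: print(f"{region}: {level:.2f}"), intensity=0.5)
```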

The findings also exposed differences in how the “doctors” perceived pain along both gender and ethnic lines, differences that then translated into the force they applied when conducting the physical examination.

For instance, the results showed that white “doctors” tended to perceive shorter delays in facial expressions as more realistic when the robots were also portrayed as white, whereas Asian “doctors” thought longer delays were more realistic. This difference then affected how much force the white and Asian participants applied to the robotic patient during the examination.

Diverse training

The ability to ascertain pain in patients from their facial expressions is an important part of both medical practice and training, but testing this awareness during training can be difficult, as simulators often don’t display pain-related facial expressions in real time. They also limit the range of patient identities in terms of ethnicity and gender.

“Previous studies attempting to model facial expressions of pain relied on randomly generated facial expressions shown to participants on a screen,” the researchers explain. “This is the first time that participants were asked to perform the physical action which caused the simulated pain, allowing us to create dynamic simulation models.”

If robots are to play a greater role in working life, it’s important that they’re able to communicate effectively using both verbal and non-verbal means. Adopting the right facial expression might be key to doing that.
