There is no end of heuristics and clichés about the power of first impressions on a human being. We are apparently capable of inferring all manner of traits and characteristics from that initial glance. Just how accurate those assumptions are is, of course, perhaps a topic for another discussion.
Suffice it to say, however, that for machines this early computation is somewhat harder, but a recent study highlights the progress being made.
Automated first impressions
The researchers used a machine learning algorithm trained to assign each of numerous faces a rating for traits such as trustworthiness or dominance.
As with most machine learning approaches, the first step was to train the model on a set of faces that had been hand-rated by participants on the citizen science website TestMyBrain.org. Over 6,000 faces were rated by 32 different people for traits such as trustworthiness, IQ and dominance. Interestingly, there is no sense of the ratings being in any way right or wrong; they are what they are, a subjective rating of a face by a human being, so in that sense they are an accurate reflection of real life with all of its biases and quirks.
These ratings were then used both to train the algorithm and subsequently to test how well its judgements correlated with those of the human judges. The results are certainly fascinating.
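The train-then-correlate workflow described above can be sketched in a few lines. This is a generic illustration, not the study's actual code: the face features, the ridge regressor, and the fabricated "human ratings" below are all placeholders standing in for whatever representation and model the researchers used.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: each "face" is a 64-dimensional feature vector, and
# each label is a noisy human rating (e.g. mean trustworthiness score).
# A linear relationship is fabricated here purely for illustration.
X = rng.normal(size=(6000, 64))                      # 6,000 rated faces
true_w = rng.normal(size=64)
y = X @ true_w + rng.normal(scale=0.5, size=6000)    # "human ratings"

# Hold out some faces so the model is tested on judgements it never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)

# Evaluate the way such studies do: correlate the model's scores for
# unseen faces with the held-out human ratings.
r, _ = pearsonr(model.predict(X_test), y_test)
print(f"correlation with human judges: {r:.2f}")
```

A high correlation on the held-out faces is what "coming up with largely the same results as the humans" amounts to in practice.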
As we might expect, the machine produced largely the same ratings as the humans, since it had been trained to do just that. What's more, it was capable of doing so for all of the traits the humans were judging. What's interesting, though, is how it does this.
Learning to learn
The paper describes how the researchers would tinker with the faces shown to the machine to home in on which parts of the face were most responsible for various judgements, whether of dominance or trustworthiness.
It emerged that there are in fact key areas used to make each judgement, such as the mouth for trustworthiness or a furrowed brow for dominance. In this way, the machine was judging faces in much the same way humans do.
“These observations indicate that our models have learned to look in the same places that humans do, replicating the way we judge high-level attributes in each other,” the authors say.
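One common way to "tinker with the faces" like this is occlusion sensitivity: cover one patch of the image at a time and see how much the model's rating drops. The sketch below is a generic version of that idea, not the paper's method; the toy image and scoring function are invented for illustration.

```python
import numpy as np

def occlusion_map(image, model_score, patch=8):
    """Slide a blacked-out patch over the image and record how much the
    model's rating drops at each position. Large drops mark the regions
    (mouth, brow, ...) the model relies on for its judgement."""
    h, w = image.shape
    base = model_score(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            heat[i // patch, j // patch] = base - model_score(occluded)
    return heat

# Toy example: a "model" whose rating depends only on the top-left corner,
# so the heat map should light up there and nowhere else.
rng = np.random.default_rng(1)
img = rng.random((32, 32))
score = lambda im: im[:8, :8].sum()
heat = occlusion_map(img, score)
```

Run over real face images and a real rating model, the bright regions of such a map are what lets the authors say the model "looks in the same places that humans do".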
The initial applications of the technology are reasonably glib, in that they’ve used their algorithm to gauge whether actors playing real-world characters have sufficiently similar faces to represent them.
A slightly weightier application may be to test the changes in perception between various cultural or demographic groups over time. It is, of course, very early days, but the study does show the progress being made, and provide a glimpse into some interesting possible applications.
Of course, at the moment the algorithm simply replicates the human process, with all of its quirks and biases, so perhaps the next step will be to properly assess how accurate our first impressions actually are, and then retrain the algorithms on that updated dataset.