Humans have a complex relationship with AI. On the one hand, we expect it to be trustworthy and benevolent, but on the other, we're reluctant to cooperate or compromise with it. That's the finding of new research from the University of London, which examined how humans might behave alongside AI.
The study, which drew its methods from behavioral game theory, used a large-scale online experiment to test how willing we are to cooperate with AI systems compared with fellow humans.
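The article does not name the specific games used in the experiment, but a one-shot Prisoner's Dilemma is a staple of behavioral game theory and illustrates the kind of encounter being modeled. The sketch below is illustrative only; the payoff values and the game itself are assumptions, not details from the study.

```python
# Hedged sketch: a standard one-shot Prisoner's Dilemma, a common tool in
# behavioral game theory. The games and payoffs in the actual study are
# not specified in the article; these values are illustrative.

# Payoff matrix: (row player's payoff, column player's payoff).
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do well
    ("C", "D"): (0, 5),  # row player is exploited
    ("D", "C"): (5, 0),  # row player exploits a cooperative partner
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

def play(human_move: str, agent_move: str) -> tuple:
    """Return (human payoff, agent payoff) for one round."""
    return PAYOFFS[(human_move, agent_move)]

# The pattern the study reports: the human expects the artificial agent
# to cooperate, then defects against it to capture the higher payoff.
human_payoff, agent_payoff = play("D", "C")
print(human_payoff, agent_payoff)  # 5 0
```

In this framing, "algorithm exploitation" is simply choosing D against a partner you expect to choose C: trust in the machine's cooperation is exactly what makes defecting against it profitable.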
The importance of cooperation
Cooperation is vital to the smooth running of society, especially in areas such as traffic, where frequent and often subtle interplays between motorists keep vehicles flowing. Would such interactions work as effectively with an autonomous vehicle, however?
The study found that while people initially treat AI much as they would a human on a first encounter, this spirit of cooperation doesn't last long. Over time, people became less willing to cooperate with the AI, and indeed more likely to exploit it than they would a human.
“We put people in the shoes of someone who interacts with an artificial agent for the first time, as it could happen on the road,” the researchers say. “We modeled different types of social encounters and found a consistent pattern. People expected artificial agents to be as cooperative as fellow humans. However, they did not return their benevolence as much and exploited the AI more than humans.”
Unfair interactions
The behavior of humans was replicated across a number of experiments, with the willingness to betray their AI “colleague” commonplace across each of them.
“Cooperation is sustained by a mutual bet: I trust you will be kind to me, and you trust I will be kind to you. The biggest worry in our field is that people will not trust machines. But we show that they do!” the researchers continue. “They are fine with letting the machine down, though, and that is the big difference. People do not even report much guilt when they do.”
While there have been numerous headlines warning that AI might treat humans unfairly, the research highlights that the reverse can also occur, especially if people believe the AI will be benign and benevolent towards them.
“Algorithm exploitation has further consequences down the line. If humans are reluctant to let a polite self-driving car join from a side road, should the self-driving car be less polite and more aggressive in order to be useful?” the researchers ponder.