How Bots Affect How We Negotiate

Negotiation has seldom had a higher profile, with the likes of Brexit providing almost daily commentary on the negotiations between the UK and the European Union. This commentary portrays the talks as a battle of wits between two people (or two teams of people), with the affair often framed in zero-sum terms in which one side wins at the expense of the other (finding win-win outcomes is perhaps a topic for another article).

This image of negotiation as a dog-eat-dog affair makes it very easy to imagine either party employing a range of underhand tactics to try to get one over on the other side. Suffice it to say, this is unlikely to be the approach actually taken in most negotiations, but a fascinating new study from the University of Southern California's Institute for Creative Technologies explored whether negotiations might be helped (or hindered) by the presence of a bot in the mix.

The study found that the likelihood of a human volunteer engaging in deceptive tactics depended largely on their prior negotiating experience, which is probably to be expected. Where things get interesting is that the virtual agents also played a significant role.

Virtual intermediaries

The findings chime with previous work suggesting that when we use virtual intermediaries, we're generally far more comfortable having these virtual agents employ deceptive tactics than we would be if we were negotiating for ourselves.

“We want to understand the conditions under which people act deceptively, in some cases purely by giving them an artificial intelligence agent that can do their dirty work for them,” the researchers explain.

The researchers highlight how virtual agents are increasingly common, from the virtual assistants on our smartphones to automated bidders on eBay. It's not beyond the realms of imagination to see such agents undertaking negotiations on our behalf, whether we're asking our boss for a pay rise, buying a house, or even settling a legal dispute.

Given this likely scenario, the researchers wanted to understand how these virtual agents affect our behavior, and therefore how they should be designed to secure the best outcomes.

“We wanted to predict how people are going to respond differently as this technology becomes available and gets to us more widely,” they explain.

Virtual aids

The researchers were looking out for a range of unethical behaviors, including lying, applying aggressive pressure, withholding information, and feigning negative emotions, such as anger. Alongside these, they also looked for more positive behaviors, such as building rapport with a negotiating partner and appealing to sympathy. The experiments they conducted saw a mixture of human-to-human interactions and interactions where virtual agents acted as proxies instead.

The results revealed that a number of conditions tended to result in people engaging in deceptive and unethical behaviors. For instance, prior experience of negotiating generally increased deceptive behavior, especially if that experience had been negative. Interestingly, however, people also seemed more willing to act in bad faith when they had a virtual agent acting on their behalf.

It appeared that what people say they will do and what they actually do are not always the same. During the experiment, participants would often program their virtual agents much as if they were employing a lawyer to represent them, and were often more willing for this third party to engage in deceptive tactics on their behalf.

“People with less experience may not be confident that they can use the techniques or feel uncomfortable, but they have no problem programming an agent to do that,” the researchers explain.

A positive influence

The use of virtual agents during negotiations was not purely a slippery slope, however, as the study also showed that when a virtual agent was programmed to be fair and honest, it encouraged the humans it interacted with to be fair and honest too. This had its limits, though: when the virtual agent responded to negative emotional displays with kindness and good manners, it neither prompted its human counterpart to improve their own manners nor reduced their willingness to engage in duplicitous behavior.

So, if virtual agents are to become a more common feature of negotiations in the future, designers should perhaps take account of our apparent willingness to deploy deceptive tactics, both as we gain negotiating experience and when an agent is doing the negotiating for us.

“As people get to use the agents to do their bidding, we might see that their bidding might get a little less ethical,” the researchers conclude. “While we certainly don’t want people to be less ethical, we do want to understand how people really do act, which is why experiments like these are so important to creating real, human-like artificial agents.”
