Codifying the ethics of autonomous driving

The rise of automated vehicles has provoked a range of ethical and moral discussions, largely revolving around constructs such as the trolley problem, which neatly encapsulates the kind of moral decision an autonomous vehicle might be forced to make. Historically, such decision making has been considered beyond the reach of most autonomous systems.

A recent study from the Institute of Cognitive Science at the University of Osnabrück suggests that the kind of moral reasoning humans undertake can in fact be modeled accurately enough for autonomous systems to use.

Virtual training

The system uses immersive virtual reality to study human behavior in a wide range of simulated road traffic scenarios. For instance, participants were asked to drive through a typical suburban setting in foggy weather. Along the way they encountered a number of unexpected dilemmas involving objects, animals, and humans, forcing them to decide which should be spared.

The results were then fed into a statistical model, which helped the team derive a set of rules capable of explaining the observed behavior. The work suggests that our moral decision making can not only be explained well, but also modeled in a way that machines can understand.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” the authors say.
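To make the idea concrete, here is a minimal sketch of what such a value-of-life model could look like in code. The specific weights, entity categories, and lane-choice function are illustrative assumptions for this example; the study's actual values were estimated statistically from participants' choices, not hand-coded.

```python
from dataclasses import dataclass

# Illustrative value-of-life weights (assumed for this sketch; the
# study fit comparable values to observed human decisions in VR).
VALUE_OF_LIFE = {
    "human": 1.0,
    "animal": 0.3,
    "inanimate_object": 0.05,
}

@dataclass
class Obstacle:
    kind: str   # "human", "animal", or "inanimate_object"
    count: int  # number of entities occupying this lane

def choose_lane(left: Obstacle, right: Obstacle) -> str:
    """Steer toward the lane whose obstacles carry the lower total
    value of life, mirroring the simple per-entity comparison the
    authors describe."""
    left_cost = VALUE_OF_LIFE[left.kind] * left.count
    right_cost = VALUE_OF_LIFE[right.kind] * right.count
    return "left" if left_cost < right_cost else "right"

# Example dilemma: one animal in the left lane, one human in the right.
print(choose_lane(Obstacle("animal", 1), Obstacle("human", 1)))  # -> "left"
```

The point of the sketch is only that a model this simple, once its weights are fit to human choices, is enough to reproduce the dilemma behavior the study observed.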

As the study suggests that our moral decision making can be codified, the authors advocate urgent discussions around just what those moral judgements should be, and indeed whether we want machines to be making them.

“We need to ask whether autonomous systems should adopt moral judgements, if yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?” the authors say.

This creates a kind of double dilemma: first we have to determine which moral values are appropriate to include in the guidelines we codify into machines, and then whether we want machines to behave like humans at all, or rather to aspire to something better.

What is clear is that the debate around autonomous vehicles is just the start; with autonomous systems emerging in a growing number of fields, such ethical and moral dilemmas will become increasingly commonplace. Now is the time to start ensuring that such systems have the right rules in place so that the decisions they make are the right ones.
