Autonomous Vehicles And Legal Liabilities

Recently I looked at a paper from Stockholm Business School that explored the various issues surrounding the roll-out of new technologies. It focused especially on autonomous vehicles, and highlighted the legal, social, and ethical barriers that stand in the way.

A recent paper from the University of Brighton focuses on the legalities of autonomous vehicles, and especially the liabilities in the event of a fatal accident. Suffice it to say, it’s not an issue that current laws adequately cater for, as criminal liability typically requires both an action (actus reus) and a mental intent (mens rea).

The paper suggests that three scenarios could unfold in relation to autonomous vehicles:

  1. Perpetrator via another – this is when an offence is committed by someone unable to make decisions for themselves, such as a person lacking mental capacity, or an animal. The individual themselves is not normally liable, but anyone who instructed them can be – the dog’s owner, for instance, rather than the dog. This has clear implications for those who design AI-based machines, as well as those who use them, with the AI itself largely regarded as an innocent party.
  2. Natural probable consequence – this applies when a criminal act is a natural and probable consequence of an AI system’s ordinary operation. It would include the classic paperclip scenario popular among AI theorists, whereby an AI commits criminal acts in the course of its duties because, unintentionally, that is the most effective way to perform them. The question here revolves around whether the programmer could realistically have foreseen such an outcome.
  3. Direct liability – the final scenario would involve both an action and an intent, but even this isn’t as straightforward as it sounds. Whilst actions are easy to prove, intent is much harder to establish.

Taking the stand

Of course, in most legal cases the defendant can mount a defense, but how might this unfold if the defendant is a computer? The paper presents a number of possibilities, including a defense akin to an insanity plea for humans, or claiming infection by a virus, much as humans plead coercion or intoxication.

Indeed, the paper argues that these defenses have already been used successfully in computer-related cases, with defendants arguing that their machines had been infected by malware and were acting without their knowledge.

Then, of course, the issue of punishment must be considered. What would an appropriate punishment be for an autonomous system? It’s a question that currently has no real answer.

It’s an area that has as many questions as answers at the moment, and you sense that whilst society will undoubtedly attempt to prepare the ground for the arrival of AI, there will also inevitably be a good deal of making it up as we go.

Papers like this provide a good starting point from which to begin this work, however, and will hopefully provoke a more detailed exploration by all stakeholders in the safe and successful development of AI-driven technologies.
