Determining Liability In Accidents Involving Driverless Cars

Towards the back end of last year, I covered new research which found that semi-autonomous vehicles are more likely to be blamed for accidents involving human drivers, but the issue of liability in the wake of an accident remains a tricky one. Indeed, the decision from the National Transportation Safety Board split liability between Uber, the vehicle, the victim, the safety driver on board, and the state of Arizona.

It underlines the difficulty policy makers face in properly apportioning losses between the self-driving car and the human driver in the event of an accident. Research from Columbia Law School promises to help the industry move forward, proposing a joint fault-based liability rule that regulates both vehicle manufacturers and human drivers.

The researchers were especially keen to factor in the adaptation that humans must undergo as autonomous vehicles become a more frequent presence on our roads. Will we, for instance, become more careless as autonomous vehicles grow in usage?

“Human drivers perceive AVs as intelligent agents with the ability to adapt to more aggressive and potentially dangerous human driving behavior,” the researchers say. “We found that human drivers may take advantage of this technology by driving carelessly and taking more risks, because they know that self-driving cars would be designed to drive more conservatively.”

Modeling the world

The researchers built a game-theory-inspired model to show how the various actors might interact with one another. These actors include manufacturers, lawmakers, the vehicles themselves, and human drivers, each with differing goals. Game theory allows the team to model how each actor's strategy interacts with the others'.

The aim was to better understand the level of risk each player is willing to take on. The model was tested on a range of examples, and the researchers believe the outcomes offer valuable insights into how the behavior of autonomous vehicles and human drivers will evolve as the technology becomes more widespread.
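
To make the idea concrete, here is a minimal, hypothetical sketch of how such a game-theoretic interaction might be set up. It is not the Columbia team's actual model: the strategies, payoff numbers, and the human_liability_share parameter are all illustrative assumptions, chosen only to show how a liability rule can shift the equilibrium behavior of a human driver and an AV.

```python
# Illustrative sketch (not the researchers' actual model): a two-player
# normal-form game between a human driver and an AV manufacturer.
# All payoff numbers and the liability-share parameter are assumptions
# chosen purely to demonstrate the moral-hazard effect described above.

import itertools

def payoffs(human_liability_share):
    """Return a payoff table keyed by (human_action, av_action).

    human_liability_share: fraction of the expected accident cost borne
    by the human driver under the liability rule (0.0 to 1.0).
    """
    accident_cost = 10.0
    # Assumed accident probabilities for each strategy pair.
    p_accident = {
        ("aggressive", "conservative"): 0.10,
        ("aggressive", "assertive"):    0.30,
        ("cautious",   "conservative"): 0.02,
        ("cautious",   "assertive"):    0.05,
    }
    # Assumed benefits: aggressive human driving saves time; an assertive
    # AV policy improves traffic flow for the manufacturer's customers.
    human_benefit = {"aggressive": 2.0, "cautious": 1.0}
    av_benefit = {"assertive": 2.0, "conservative": 1.0}

    table = {}
    for h, a in itertools.product(human_benefit, av_benefit):
        expected_cost = p_accident[(h, a)] * accident_cost
        human_payoff = human_benefit[h] - human_liability_share * expected_cost
        av_payoff = av_benefit[a] - (1 - human_liability_share) * expected_cost
        table[(h, a)] = (human_payoff, av_payoff)
    return table

def pure_nash_equilibria(table):
    """Find strategy pairs where neither player can gain by deviating."""
    humans = {h for h, _ in table}
    avs = {a for _, a in table}
    equilibria = []
    for h, a in table:
        best_h = all(table[(h, a)][0] >= table[(h2, a)][0] for h2 in humans)
        best_a = all(table[(h, a)][1] >= table[(h, a2)][1] for a2 in avs)
        if best_h and best_a:
            equilibria.append((h, a))
    return equilibria

if __name__ == "__main__":
    # When the human driver bears little of the liability, aggressive human
    # driving paired with a conservative AV is the equilibrium; shifting
    # liability back onto the driver changes the outcome.
    for share in (0.1, 0.9):
        print(share, pure_nash_equilibria(payoffs(share)))
```

Under these assumed numbers, a low human liability share produces an equilibrium in which the human drives aggressively while the AV drives conservatively, whereas a high share flips it: the very moral-hazard dynamic the researchers describe.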

The models suggest that getting the liability policy right is crucial if human drivers are not to adopt excessively risky behavior as autonomous vehicles become more popular. This may also require government subsidies for manufacturers to help get more autonomous vehicles onto the roads, which the researchers believe would significantly improve traffic safety and efficiency.

“The tragic fatality in Arizona involving a self-driving automobile elicited tremendous attention from the public and policy makers about how to draw the lines of legal liability when AVs interact with human drivers, cyclists, and pedestrians,” the researchers say. “The emergence of AVs introduces a particularly thorny type of uncertainty into the status quo, and one that feeds back onto AV manufacturing and design. Legal liability for accidents between automobiles and pedestrians typically involves a complex calculus of comparative fault assessments for each of the aforementioned groups. The introduction of an autonomous vehicle can complicate matters further by adding other parties to the mix, such as the manufacturers of hardware and programmers of software. And insurance coverage distorts matters further by including third party stakeholders. We hope our analytical tools will assist AV policy-makers with their regulatory decisions, and in doing so, will help mitigate uncertainty in the existing regulatory environment around AV technologies.”

As autonomous vehicles become more widespread on our roads, it’s important that we understand how human drivers might respond.  This research provides an interesting step towards doing that.
