The Morality of a Self-Driving AI

The growth of the autonomous vehicle industry has produced plenty of wild predictions about when the first fully self-driving cars will be available for public use. As it stands, however, most of the "self-driving" vehicles on the roads today are driver-assistance systems rather than cars that drive themselves. As most who follow the industry know, no genuinely autonomous vehicle has been developed yet. Still, several manufacturers have made great strides in creating cars that sense and react to the world around them. Test results have been promising, suggesting that these vehicles could become commercially viable in the near future.

Some Kinks to Work Out

A handful of autonomous vehicle crashes have resulted in fatalities. Only a few crashes have occurred to date, but they raise pressing questions about the morality of an AI system, should full autonomy become a reality. The United States has begun moving toward legislation that deals with the morality and ethics of AI systems. As it stands, however, Germany is the only country so far to have issued ethics guidelines for manufacturers of self-driving AI. Germany's rule of thumb is that, should an accident be unavoidable, the AI must not distinguish between potential victims, nor offset victims against one another. The guidelines do permit general programming that minimizes the total number of casualties, even when the vehicle's own passengers are among them.
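
As a rough illustration, a decision rule consistent with those guidelines might look like the sketch below. Everything here is hypothetical, not a real manufacturer's implementation: the only input is a casualty count, so personal attributes such as age or status simply cannot influence the choice.

```python
# A minimal sketch of a guideline-compliant decision rule: the model's
# only input is the number of people harmed by each maneuver. Personal
# attributes (age, status, etc.) are deliberately absent, so the rule
# cannot distinguish between victims or offset them against each other.
# All names and numbers are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # estimated people harmed, passengers included


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the fewest expected casualties."""
    return min(options, key=lambda m: m.expected_casualties)


if __name__ == "__main__":
    options = [
        Maneuver("stay in lane", expected_casualties=3),
        Maneuver("swerve left", expected_casualties=1),
    ]
    print(choose_maneuver(options).name)  # -> "swerve left"
```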

Coding a Moral AI

The distinction between victims is an important one. A study published in the October 2018 edition of the journal Nature found that, based on responses gathered from an international sample, humans rate some decisions as more moral than others. Among the preferences respondents displayed were a tendency to save more people rather than fewer and an inclination to spare younger people over older ones. Respondents in many regions also prioritized social status, preferring high-status individuals over those from lower economic backgrounds.
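
To make the coding question concrete, one could imagine translating those aggregate preferences into scoring weights, as in the hypothetical sketch below. The weight values are invented purely for illustration; the Nature study reports relative preferences, not ready-to-use coefficients, and as the next section notes, any such scheme runs into legal and moral trouble.

```python
# A hypothetical encoding of the study's aggregate preferences as
# scoring weights. The values are invented for illustration only.

PREFERENCE_WEIGHTS = {
    "more_lives": 1.0,    # save larger groups over smaller ones
    "younger": 0.5,       # prefer sparing younger people
    "higher_status": 0.3, # some regions favored higher-status individuals
}


def outcome_score(lives_saved: int, avg_youth: float, avg_status: float) -> float:
    """Score an outcome; higher means 'preferred' under the survey data.

    avg_youth and avg_status are assumed normalized to [0, 1].
    """
    return (PREFERENCE_WEIGHTS["more_lives"] * lives_saved
            + PREFERENCE_WEIGHTS["younger"] * avg_youth
            + PREFERENCE_WEIGHTS["higher_status"] * avg_status)
```

Note that the moment avg_youth or avg_status enters the calculation, the system is distinguishing between victims, which is exactly what Germany's guidelines forbid.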

The Inherent Issues in Equality

The actual moral preferences of humans vary with culture and local sensibilities. While the Nature study provides a broad guide, it is by no means exhaustive. Coding an AI along the lines of that study would render it illegal in Germany, and anywhere else that follows Germany's lead in regulating the ethics and morals of self-driving vehicles. However, it is worth noting that the legislation's enforced equality sits uncomfortably with human intuition, since many people regard certain individuals as more valuable to society than others because of their contributions.

A New-Age Trolley Problem

The trolley problem is an ethical dilemma that philosophers have used since the 1960s to explore individual ethics. The premise is that a runaway trolley is bearing down on five people, and the person posed the question has access to a lever that, if pulled, will divert the trolley onto a second track, saving the five. The catch is that a single person stands on that second track, unaware that the trolley is now headed their way. Even today, people faced with this dilemma pause, and questioners have upped the stakes by attaching emotional or economic value to one or more of the hypothetical individuals involved. If humans have never definitively solved this problem, how realistic is it to expect an AI to solve a similar one? It's a conundrum that those coding the morality of self-driving AI will need to face head-on.
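
Reduced to code, the classic formulation looks deceptively simple, as in the sketch below. The difficulty lies not in the comparison itself but in deciding whether a simple head count is ever the right criterion, and no amount of code resolves that question.

```python
# A minimal sketch of the trolley problem as a decision function.
# Under a pure casualty-count rule the answer is trivial; the hard,
# unresolved part is whether counting is the right rule at all.

def pull_lever(on_main_track: int, on_side_track: int) -> bool:
    """Return True if diverting the trolley harms fewer people."""
    return on_side_track < on_main_track


print(pull_lever(on_main_track=5, on_side_track=1))  # -> True
```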
