
Suppose that you’re driving your car in the right-hand lane of a one-way street on a winter evening. As you approach a red light at an intersection, you tap the brakes and begin to skid. Ahead of you the left lane is closed and is blocked by a concrete barrier in front of a crosswalk. There are no obstructions in the right lane. A pedestrian has legally entered the crosswalk on the right side of the street and is attempting to cross over to the other side. You have just enough time and just enough control of the car to make a decision about which lane to enter, but you cannot stop your car. Should you choose to continue in the right lane, the pedestrian will be struck by the car and will likely die. Should you choose to direct your car into the left lane, the collision of your car with the barrier will save the life of the pedestrian but will very likely kill you, the driver. What do you choose to do?

Now ask yourself that same question, except this time consider that your child is in the car and would likely die from an impact with the barrier. Next consider that the pedestrian in the road is also accompanied by a child. Still further, consider that this time your spouse and child are in your car with you and there are three elderly people in the crosswalk. Has your choice changed? More generally, what is the moral thing to do in each of these situations and is there any commonality between them?

These are modern versions of a philosophical problem known as The Trolley Problem, and there aren’t any easy answers. But aside from being simply an interesting philosophical thought experiment, what possible relevance could a problem like this have in the modern world? As it turns out, this problem has become hugely relevant with the advent and proliferation of autonomous vehicles. Driverless cars themselves are — presently — incapable of making moral judgements on their own, so these types of decisions would need to be pre-programmed into the logic of every driverless car, which, in turn, means that a human would need to input a desired outcome ahead of time. So then who should decide these questions? Legally, we might also ask, who should be responsible for auto accidents in the age of driverless cars? Like similar problems in modern applied ethics, these questions will require firm answers to guide our technology and our laws in the direction our societies deem to be proper.

The Origin of the Trolley Problem in Brief

Thomas Aquinas suggested that a man killing his assailant would be justified in doing so if and only if it was not his intention to kill. This would later become known as the doctrine of double effect.

A modern interpretation of the doctrine of double effect was put forth by Philippa Foot in 1967. The problem (at the time) had nothing to do with driving, but instead was one of a number of thought experiments she used to examine the morality of abortion. As an example of double effect she suggested the following:

“The steering driver faces a conflict of negative duties, since it is his duty to avoid injuring five men and also his duty to avoid injuring one. In the circumstances he is not able to avoid both, and it seems clear that he should do the least injury he can. The judge, however, is weighing the duty of not inflicting injury against the duty of bringing aid. He wants to rescue the innocent people threatened with death but can do so only by inflicting injury himself. Since one does not in general have the same duty to help people as to refrain from injuring them, it is not possible to argue to a conclusion about what he should do from the steering driver case.”

In addition, anyone who has taken an introductory philosophy course is probably most familiar with the “fat man” version of the problem:

“An out-of-control trolley speeds toward five people on a track and will surely kill them should it be allowed to continue unimpeded. If you push a fat man onto the tracks, he will die, but the five will live as his body will slow down the trolley. Do you push the fat man onto the tracks or do you let the group of five die?”


As a more practical example, consider the historical legal debate over the safety of cars. As the automobile replaced the horse as the most common means of conveyance, the law did its best to catch up. A 1909 book, The Law of Automobiles, cited (p. xii) a Georgia Court of Appeals decision regarding the classification of automobiles:

“It is insisted in the argument that automobiles are to be classed with ferocious animals, and that the law relating to the duty of the owners of such animals is to be applied. It is not the ferocity of automobiles that is to be feared but the ferocity of those who drive them. Until human agency intervenes, they are usually harmless.”


In the same book, an entire subsection of a chapter is devoted to “[The] Tendency [of automobiles] to frighten horses” and I can’t help but think that human drivers today are analogous to the horses of the early 20th century. By this I mean that I believe most of the real-world instantiations of these moral hypotheticals will occur when human drivers and autonomous drivers share the road in equal numbers and the imprecision of human driving forces autonomous vehicles into dangerous situations. Perhaps a modern reading of the court decision above would be:

“It is not the ferocity of automobiles that is to be feared, but the ferocity of those who program them.”

Where are we now?

In the US, the Department of Transportation, through the National Highway Traffic Safety Administration (NHTSA), has published the Federal Automated Vehicles Policy, which suggests loose guidelines for the design and implementation of autonomous vehicles and advises autonomous car manufacturers to comply, first and foremost, with relevant state and local law. There are no major federal initiatives (yet) that seek to regulate the behavior of autonomous vehicles in ethically ambiguous situations. A small section of the NHTSA release is devoted to Ethical Considerations, and what follows below is its rather vague description of the problem at hand:

“Various decisions made by an HAV’s computer “driver” will have ethical dimensions or implications. Different outcomes for different road users may flow from the same real-world circumstances depending on the choice made by an HAV computer, which, in turn, is determined by the programmed decision rules or machine learning procedures. Even in instances in which no explicit ethical rule or preference is intended, the programming of an HAV may establish an implicit or inherent decision rule with significant ethical consequences.” (Section I.E.11)

Internationally, a number of forums have been scheduled to discuss these same issues, but as yet there are no clear answers to any of these problems.

So Who Decides?

While the Trolley Problem is presented as a binary choice for humans — live or die — a computer will be better able to use fuzzy logic, probability, and complex decision-making algorithms to navigate ethically ambiguous situations. That is, instead of considering who lives and who dies in an unavoidable collision, an autonomous vehicle may weigh who has a higher probability of surviving in a given situation and respond accordingly. For instance, a computer may be able to determine how best to strike another human being (if a collision is unavoidable) so as to maximize that individual’s chance of survival, as well as the driver’s.
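To make that concrete, here is a minimal sketch of what probability-weighted decision-making might look like. Everything in it is hypothetical: the maneuver names, the survival estimates, and the simple "maximize expected survivors" scoring rule are placeholders for illustration, not anything a real manufacturer has disclosed.

```python
# A minimal sketch of probability-weighted maneuver selection.
# Maneuver names and survival estimates are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_occupant_survives: float    # estimated probability the occupant survives
    p_pedestrian_survives: float  # estimated probability the pedestrian survives

def expected_survivors(m: Maneuver) -> float:
    """Score a maneuver by the expected number of survivors."""
    return m.p_occupant_survives + m.p_pedestrian_survives

options = [
    Maneuver("brake_in_lane",       0.98, 0.20),
    Maneuver("swerve_into_barrier", 0.40, 0.99),
]

best = max(options, key=expected_survivors)
print(best.name)  # "swerve_into_barrier" maximizes expected survivors here
```

A real system would have to weigh far more than two outcomes, and its estimates would carry large uncertainties, but the scoring step above is where an ethical preference quietly becomes an engineering choice.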

Ultimately, the question is to what extent these automated decisions should be standardized. Suppose that a car manufacturer makes an autonomous vehicle that is programmed to protect the life of its occupant above all else. Any consumer who wants to maximize their own safety would find such a vehicle appealing. Without standardization, it’s plausible that a more expensive car model could, by design, provide greater preferential protection for its occupant instead of maximizing the safety of the general public. Similarly, we’d need to decide whether government-owned autonomous vehicles should behave differently than privately owned vehicles. If a driverless bus observes pedestrians in the path of an out-of-control car, should the bus sacrifice itself and risk injury to its passengers to save them? It seems like a strange question to ask, but we should consider whether the civic duties of autonomous public transport should go above and beyond their base functionality for the greater good.
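The standardization worry can be sketched the same way. In the toy scoring rule below, a single "occupant weight" parameter stands in for a manufacturer's tuning choice; the values are hypothetical, but they show how the same scenario can yield opposite decisions depending on one unregulated number.

```python
# Sketch of how an unstandardized "occupant priority" weight could flip the
# same decision between manufacturers. All values are hypothetical.

def score(p_occupant: float, p_pedestrian: float, occupant_weight: float) -> float:
    """Weighted survival score; occupant_weight > 1 favors the car's occupant."""
    return occupant_weight * p_occupant + p_pedestrian

options = {
    "brake_in_lane":       (0.98, 0.20),  # (occupant, pedestrian) survival estimates
    "swerve_into_barrier": (0.40, 0.99),
}

for occupant_weight in (1.0, 3.0):  # "egalitarian" tuning vs. "protect the buyer" tuning
    choice = max(options, key=lambda name: score(*options[name], occupant_weight))
    print(occupant_weight, "->", choice)
# 1.0 -> swerve_into_barrier  (all lives weighted equally)
# 3.0 -> brake_in_lane        (occupant favored, pedestrian struck)
```

Nothing in a snippet like this announces itself as an ethical decision, which is precisely why regulators may want the weight, or whatever plays its role in a real system, to be standardized.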

In the US, car safety is rated by both the IIHS (an independent agency) and the NHTSA (a government organization). How will these organizations rate autonomous vehicles in the future? Should a car with a better crash rating be programmed to chance the safety of its driver over that of pedestrians? There are no clear answers regarding industry standardization, but private owners will surely want a say in how their car behaves, too. More selfless individuals might feel uncomfortable knowing that their car would ever choose to kill a child and might demand custom logic to ensure that their vehicle prioritizes others.

MIT has launched an initiative to gather public feedback on these issues through an interactive website called Moral Machine.

1.jpg1.jpg

Through this initiative, the general public can provide input on a variety of scenarios involving the moral behavior of self-driving vehicles and even design their own. As the mission statement suggests:

“This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.”

In addition to basic scenarios, the Moral Machine project also considers other interesting situations. In certain scenarios generated by Moral Machine, the pedestrians are also criminals. This type of scenario ties in directly with a larger discussion about whether or not society should value all human life equally. Presently, thirty-eight US states have fetal homicide laws, but only some specifically address vehicular homicide, and they vary widely in their approach and their definition of “life”. So, how should an autonomous vehicle value the lives of pregnant women, children, the sick, and the elderly?


The Future

Autonomous vehicle manufacturers will be programming a variety of social, legal, and personal obligations into the hardware of their machines, and serious conflicts are certain to emerge as the industry grows. Even the simplest well-meaning rule can spawn innumerable ethical contingencies or unintended behavior from a computerized vehicle.
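As a hypothetical illustration of how little it takes, consider the toy rule below. The rule, the sensor labels, and the failure scenario are all invented for the example; the point is only that the ethics live in everything the rule leaves unsaid.

```python
# Toy rule: "never hit an obstacle; swerve if the adjacent lane looks clear,
# otherwise brake hard." Entirely hypothetical logic and labels.

def choose_action(obstacle_ahead: bool, adjacent_lane_looks_clear: bool) -> str:
    if not obstacle_ahead:
        return "continue"
    return "swerve" if adjacent_lane_looks_clear else "emergency_brake"

# Unintended consequence: the "obstacle" is a windblown cardboard box, and the
# adjacent lane only looks clear because a cyclist sits outside the sensor's
# field of view. The rule does exactly what it was written to do, and swerves.
print(choose_action(obstacle_ahead=True, adjacent_lane_looks_clear=True))  # swerve
```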

In the US, it is all but certain that we will see a patchwork of laws and regulations that varies from place to place. This makes practical sense for a few reasons. First, there are more variables to consider when driving in New York City than on a highway in the rural Midwest. Economically, too, driverless cars will be more viable or more in demand in some areas than in others. Globally, just as crash standards vary from country to country, we can expect the behavior of autonomous vehicles to follow suit. Overall, however, I see standardization of the decision-making of autonomous vehicles as the only reasonable long-term approach to one of the most profound ethical dilemmas of our age, especially as we begin to rely on autonomous vehicles and craft for trade and commerce across borders.

Until automated vehicles become the norm — and perhaps even more so thereafter — the only question is this: how should a robot value your life?


4 comments

  1. I don’t think autonomous cars can distinguish whether an obstruction is human or inanimate, let alone determine the age, health, pregnancy, and/or criminal backgrounds of the people in front of it. I think such sentience is a very long way off. So this adds another element to the ethical problem: what should the car do if it doesn’t know whether it’s going to hit humans or objects? And how does it know there aren’t ten humans behind the barrier that would likely be killed if it ran into the barrier?

    Without cars having near-human sentience, I don’t think these fine-grained moral questions will come into play. I’m pretty sure these choices will be based on legal liability for the car manufacturers. If the car deviates from the path it’s on, hits an obstruction, and the passengers are killed, the manufacturer is liable. If the car maintains its course but can’t stop due to environmental conditions and kills one or more pedestrians, that’s either no-fault or the driver is liable. You know which one the manufacturer is going to choose.

  2. Thanks for reading, Chris.

    That’s an interesting thought. I imagine a “phase-in” using BRT-type lanes is probably what will happen in a lot of areas, especially with things like freight delivery or mass transit.

    I’m not sure where I stand on this quite yet. You make a great point, but I also wonder if it’d be beneficial to mix HAVs in with human traffic with more immediacy if it helped reduce traffic fatalities off the bat.

  3. Interesting article.

    Autonomous vehicles are not new. Implementing them in an uncontrolled space is. In the past, autonomous vehicles have operated in isolated systems: raised LRTs, airport people movers, mine equipment, etc. The moral dilemmas were minimized because, well, people and outside obstacles shouldn’t be there.

    Can we mix autonomous cars with pedestrian and other traffic on open roadways? Currently I say no; these moral problems are not that simple to solve, and neither are the liability issues. Isolating autonomous vehicle operation to dedicated and protected lanes (e.g., similar to BRT lanes) to avoid these moral decisions might be the easier first step toward broader implementation.
