Commentary

Evslin: Self-driving car’s ethical dilemma could kill you

by Tom Evslin

Suppose you are a skilled artificial intelligence programmer working on the decision-making algorithm for a self-driving car. Most of the decisions are straightforward assuming the car has sufficient information. Stop for red lights. Stop rather than run over people, animals, or things. Accelerate to a safe (and legal?) speed at a rate which takes into account how well the tires are gripping the road. Turn the wheel in the direction of a skid. Pump the brakes when necessary.
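
To make the flavor of those easy decisions concrete, here is a minimal sketch in Python of a prioritized rule check. Everything in it (the class, the field names, the acceleration constant) is invented for illustration; no real self-driving stack is anywhere near this simple.

    from dataclasses import dataclass

    @dataclass
    class Perception:
        """Hypothetical snapshot of what the car's sensors report."""
        light_is_red: bool
        obstacle_in_lane: bool     # person, animal, or thing ahead
        road_grip: float           # 0.0 (glare ice) to 1.0 (dry pavement)
        speed_mph: float
        speed_limit_mph: float

    def choose_action(p: Perception) -> str:
        """Check simple rules in priority order; the first match wins."""
        if p.obstacle_in_lane:
            return "brake_to_stop"   # stop rather than run over people or things
        if p.light_is_red:
            return "brake_to_stop"   # stop for red lights
        if p.speed_mph < p.speed_limit_mph:
            # accelerate at a rate scaled by how well the tires grip the road
            return f"accelerate(rate={0.3 * p.road_grip:.2f}g)"
        return "hold_speed"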

Do you brake for deer? This one’s a little tougher. It depends on road conditions and assumptions about the ability of any vehicle behind you to react to your braking. But the principle is clear: you do what’s best for the occupants of the car. You don’t hit a moose even if you have to brake suddenly, because the moose’s barrel body will come through the windshield and kill someone.

Now the tough one. The car is on a narrow mountain road with a 3,000-foot drop-off to the left and a solid cliff on the right. It comes around a turn and there are four children unaccountably in the road. There is not enough space to stop or even slow down substantially. The car knows that. Going straight will kill the children. If the car turns into the cliff wall, it will careen off and still hit the children. The only way to save the children is to plunge off the road, which will almost surely kill the solo occupant (and owner of the car). The car can’t just give control back to the owner; there’s obviously not enough time.

Is the first rule of robotic cars to protect occupants? Or is the first rule to protect human life in general, so it’s got to go with the fewest fatalities? Does the owner get to set preferences for decisions like this one? That’s not completely unreasonable, since human drivers get to make their own decisions. How would you like to have to choose from these alternatives when you first set up your car?

  1. Always save the lives of those outside the car rather than protecting occupants.
  2. Always save the lives of occupants rather than protecting those outside the car.
  3. Always save the greatest number of human lives.
  4. Protect certain listed occupants (perhaps your children) at all costs.
  5. Protect the lives of those least at fault in setting up the situation.

Etc. And what are the liability consequences of setting these preferences?
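
If owners really were allowed to set such preferences, recording the choice would be the trivial part. Here is a sketch of what the setting might look like, with entirely made-up names that track the list above:

    from enum import Enum, auto

    class CrashEthicsPolicy(Enum):
        """Hypothetical owner-selectable settings, one per alternative above."""
        PROTECT_OUTSIDERS = auto()         # 1. save those outside the car
        PROTECT_OCCUPANTS = auto()         # 2. save the occupants
        MINIMIZE_FATALITIES = auto()       # 3. save the greatest number of lives
        PROTECT_LISTED_OCCUPANTS = auto()  # 4. protect listed occupants at all costs
        PROTECT_LEAST_AT_FAULT = auto()    # 5. protect those least at fault

    # Chosen once, at setup time, and carried into every split-second decision:
    owner_policy = CrashEthicsPolicy.MINIMIZE_FATALITIES

A dozen lines of configuration; the liability questions they raise are the hard part.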

Should an ethical programmer insist that any car sold with his or her code in it carry mouse print spelling out whether or not the car thinks it has to protect its driver at all costs? With a lot of work, code could be written so you could interview your car by giving it scenarios and asking it what it would do in each circumstance.
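
Such an interview might amount to nothing more than a loop over canned scenarios, asking the car’s decision function what it would choose in each. A sketch under that assumption; the scenario fields, the fatality counts, and the stand-in policy are all invented for illustration and drastically simplified:

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        """A hypothetical situation to put to the car."""
        description: str
        deaths_if_straight: int   # total fatalities if the car holds course
        deaths_if_swerve: int     # total fatalities if the car swerves away

    def minimize_fatalities(s: Scenario) -> str:
        """A stand-in decision function: one possible policy, not any real system's."""
        return "go straight" if s.deaths_if_straight <= s.deaths_if_swerve else "swerve"

    def interview(car_decide, scenarios):
        """Ask the car what it would do in each scenario and report its answer."""
        for s in scenarios:
            print(f"{s.description}: the car would {car_decide(s)}.")

    interview(minimize_fatalities, [
        Scenario("Four children in the road, drop-off on the left", 4, 1),
        Scenario("Moose in the lane, open field to the right", 1, 0),
    ])

Whether a manufacturer would ever expose such a function to its customers is, of course, another question.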

I have no idea how these decisions are being made today. I am sure that there are programmers who are dealing with them. I do not think the answer is to ban self-driving cars; I believe they will soon save many lives overall by being better drivers than humans – even though they will kill some people.

Republished from Fractals of Change, a blog published by the author – a Stowe resident, entrepreneur, author, and former high-ranking state official.

4 replies

  1. The ethics as well as the engineering all sound too complicated to me. It’s a good thing that I truly enjoy the freedom and relaxation I derive from driving.

  2. Remember the movie 2001: A Space Odyssey from 50+ years ago? The spacecraft computer put the priority on accomplishing the mission and attempted to kill the entire crew to make that happen. That’s what the programming told it to do. Fast forward to just a few years ago. A new version of the Boeing 737 was apparently programmed to fly into the ground rather than respond to its pilots’ commands to pull up, resulting in the deaths of everyone aboard two planes. In this age of apathy and societal malaise we simply cannot trust the current crop of human programmers to put priority on human life in any scenario. In the Apollo missions, a simple computer took humans to the moon and got them all back safely in the 1970s. Ask yourself: would you entrust any such machine to take you even to the corner store if it was run by Microsoft or Android?

  3. This article gets involved in a theoretical thought experiment and overlooks the huge lifesaving potential of self-driving technology. As a seasoned citizen, I’ve watched a lot of my friends’ driving skills deteriorate. Self-driving cars would allow these folks to remain on the road longer AND ensure they do not become unsafe drivers as they age further. Removing unsafe elderly drivers from the road would, I’m sure, save many lives, and for that reason alone self-driving technology is worth pursuing. Extending the time older drivers can stay on the road would keep them self-sufficient for many more years while still making the roads safer.
