
Would you Buy an Autonomous Car that would Kill You to Save Others?

One of the most interesting questions about Artificial Intelligence these days has to do with ethics and morals. Sure, we can program a car to drive itself and obey the rules of the road. We’d like it to avoid accidents as far as possible. Yes, it should stop if it can – or swerve to avoid a collision. Yes, it should avoid hurting people.

But what if your clever car is facing a dilemma? What if the only options available involve injury or loss of life? What if one option will kill two people, and the other will kill only one? What if that one is you, the passenger?

Morally speaking, an ethical Autonomous Vehicle (AV) ought not to value one life over another – not even that of its owner. According to research reported in Science Magazine, most people think that’s a good idea, as long as they’re not the ones being sacrificed for the greater good.

We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs.
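To make the trade-off concrete, here is a minimal sketch – purely illustrative, not any real vehicle’s control logic – of a utilitarian decision rule: pick whichever action minimises expected deaths, giving the passenger’s life no special weight. All names and numbers below are invented for the example.

```python
# Illustrative only: a toy "utilitarian" chooser for the dilemma above.
# Real AV planners do not work like this; the options and figures are
# invented for the example.

def choose_action(options):
    """Pick the option with the fewest expected fatalities,
    counting the passenger's life the same as anyone else's."""
    return min(options, key=lambda o: o["expected_fatalities"])

options = [
    {"name": "swerve into barrier", "expected_fatalities": 1},  # kills the passenger
    {"name": "continue ahead",      "expected_fatalities": 2},  # kills two pedestrians
]

print(choose_action(options)["name"])  # -> "swerve into barrier"
```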

The dilemma is well illustrated in this video:

While we’re on the subject, we can extend the moral considerations even further. What importance should a moral machine ascribe to the question of personal responsibility on the part of the people involved? If the pedestrians have disobeyed a crossing signal, should they be given the same care as if they were law-abiding?

In another scenario, suppose the vehicle’s passengers have refused to wear their seat belts, thereby greatly increasing their risk of death in a collision. The car knows they are unrestrained. Should it then give a greater priority to the passengers’ safety – perhaps at the risk of other road users – or is it their own fault for not wearing safety belts?
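One way to see how uncomfortable these questions get is to bolt a “personal responsibility” discount onto the toy rule above. The discount factor is an invented parameter – nobody has proposed 0.5 as a real value – but notice that it alone can decide who gets hit:

```python
# Extending the toy rule: discount the weight given to people who broke
# a rule. Every number here is invented; the point is only that the
# discount can flip the decision.

RESPONSIBILITY_DISCOUNT = 0.5  # arbitrary weight applied to rule-breakers

def weighted_cost(option):
    """Weighted expected fatalities for one course of action."""
    return sum(
        (RESPONSIBILITY_DISCOUNT if p["broke_rules"] else 1.0) * p["fatality_risk"]
        for p in option["people_at_risk"]
    )

options = [
    # Braking in a straight line endangers a pedestrian crossing against the signal.
    {"name": "brake straight",
     "people_at_risk": [{"broke_rules": True, "fatality_risk": 0.9}]},
    # Swerving endangers the belted, law-abiding passenger.
    {"name": "swerve",
     "people_at_risk": [{"broke_rules": False, "fatality_risk": 0.8}]},
]

# Unweighted, the car would swerve (0.8 < 0.9); with the discount it
# brakes straight instead (0.45 < 0.8) -- the parameter decides who is hit.
print(min(options, key=weighted_cost)["name"])  # -> "brake straight"
```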

5 Responses to Would you Buy an Autonomous Car that would Kill You to Save Others?

  1. Antoin O Lachtnain July 19, 2016 at 11:43 am #

    I am sorry. This is all ridiculous. These scenarios are just not realistic. Something happened between Humean utilitarianism and autonomous cars: namely, safety culture and quality management. Nowadays systems are (or at least can be) engineered so that, if procedures are followed, there is practically no risk of a human fatality. There is just no realistic scenario where an autonomous car would be travelling so fast that it could not brake if there were oncoming pedestrians. At the very least, it could brake from, say, 50 km/h to 20 km/h, at which speed a pedestrian fatality is extremely unlikely. Remember, the computer driver will be able to react a lot faster than the human driver and will be able to slow down or stop in a much shorter distance as a result.
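    A rough sanity check of those braking figures (assuming round numbers – full braking at about 8 m/s² and a 0.1 s machine reaction time versus 1.5 s for a human – not measured values):

    ```python
    # Back-of-envelope check of the braking claim above. Reaction times
    # and deceleration are assumed round figures, not measured data.

    BRAKING_DECEL = 8.0  # m/s^2, roughly full braking on dry tarmac (assumed)

    def stopping_distance(v0_kmh, v1_kmh, reaction_s):
        """Metres travelled while reacting, then braking from v0 to v1."""
        v0, v1 = v0_kmh / 3.6, v1_kmh / 3.6  # km/h -> m/s
        reaction = v0 * reaction_s
        braking = (v0**2 - v1**2) / (2 * BRAKING_DECEL)
        return reaction + braking

    # Computer driver (~0.1 s reaction) vs human (~1.5 s), 50 -> 20 km/h:
    print(f"computer: {stopping_distance(50, 20, 0.1):.1f} m")  # ~11.5 m
    print(f"human:    {stopping_distance(50, 20, 1.5):.1f} m")  # ~31.0 m
    ```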

    The big issues with autonomous cars are not claptrap like this from ethics philosophers who slept through the 1990s. They are about figuring out liability when something goes wrong – for instance, a failure in a convoy of vehicles travelling closely together at speed.

  2. Conn Ó Muíneacháin July 19, 2016 at 12:17 pm #

    Good point, Antoin. I do find the philosophical questions fascinating, though, however hypothetical they may be.
