One of the most interesting questions about Artificial Intelligence these days has to do with ethics and morals. Sure, we can program a car to drive itself, and obey the rules of the road. We’d like it to avoid accidents as far as possible. Yes, it should stop if it can – or swerve to avoid a collision. Yes, it should avoid hurting people.
But what if your clever car is facing a dilemma? What if every available option involves injury or loss of life? What if one option will kill two people and the other will kill only one? And what if that one person is you, the passenger?
Morally speaking, an ethical Autonomous Vehicle (AV) ought not to value one life over another, not even that of its owner. According to research reported in Science Magazine, most people think that's a good idea, as long as they're not the ones being sacrificed for the greater good.
We found that participants in six Amazon Mechanical Turk studies approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs.
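To make the tension concrete, here is a toy sketch, in Python, of what a strictly "utilitarian" decision rule might look like. The class, function, and numbers are all invented for illustration; nothing here is drawn from the study or from any real vehicle's software.

```python
from dataclasses import dataclass

# Hypothetical sketch of a utilitarian decision rule: every name and number
# below is made up for illustration, not taken from any real AV system.

@dataclass
class Maneuver:
    name: str
    expected_passenger_deaths: float
    expected_pedestrian_deaths: float

    @property
    def expected_total_deaths(self) -> float:
        # A strictly utilitarian controller weighs every life equally,
        # passenger or pedestrian.
        return self.expected_passenger_deaths + self.expected_pedestrian_deaths


def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Pick whichever maneuver minimizes expected total deaths."""
    return min(options, key=lambda m: m.expected_total_deaths)


if __name__ == "__main__":
    options = [
        Maneuver("stay in lane", expected_passenger_deaths=0.0,
                 expected_pedestrian_deaths=2.0),
        Maneuver("swerve into barrier", expected_passenger_deaths=1.0,
                 expected_pedestrian_deaths=0.0),
    ]
    # Prints "swerve into barrier": the rule sacrifices its own passenger to
    # save two pedestrians -- exactly the policy the survey respondents
    # endorsed for other people's cars, but not for their own.
    print(utilitarian_choice(options).name)
```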
The dilemma is well illustrated in this video:
While we’re on the subject, we can push the moral considerations a step further. How much weight should a moral machine give to the personal responsibility of the people involved? If pedestrians have disobeyed a crossing signal, should they receive the same consideration as pedestrians who are obeying the law?
In another scenario, suppose the vehicle’s passengers have refused to wear their seat belts, greatly increasing their risk of death in a collision. The car knows they are unrestrained. Should it then give greater priority to the passengers’ safety, perhaps at the expense of other road users, or is it their own fault for not wearing their seat belts?
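Again purely as a thought experiment, here is how a "responsibility discount" might be bolted onto the earlier sketch. Every weight is invented, and the point is not to recommend the rule but to show how quickly it turns into the machine deciding whose life counts for less.

```python
# Hypothetical extension of the earlier sketch: discount the weight given to
# parties who knowingly increased their own risk (jaywalking, riding
# unbelted).  The factor is arbitrary and illustrative only.

RESPONSIBILITY_DISCOUNT = 0.5


def weighted_deaths(expected_deaths: float, followed_rules: bool) -> float:
    """Scale expected deaths down for parties deemed responsible for their own risk."""
    weight = 1.0 if followed_rules else RESPONSIBILITY_DISCOUNT
    return expected_deaths * weight


# Two unbelted passengers vs. one law-abiding pedestrian:
passengers = weighted_deaths(2.0, followed_rules=False)  # 1.0 after the discount
pedestrian = weighted_deaths(1.0, followed_rules=True)   # 1.0

# The discount makes the two outcomes look "equal" -- a seemingly reasonable
# rule has quietly decided that an unbelted life is worth half a belted one.
print(passengers, pedestrian)
```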