Look out for AI social dilemmas

Is it a problem if you prefer a self-driving car that protects you regardless of the risks to others? After all, the car is yours, and it is your right to put your own safety first. Moreover, you are just one person. By purchasing a self-protecting car, you are only increasing risk to pedestrians ever so minutely. In the grand scheme of things, it is negligible.

But here is the catch. If everyone makes exactly the same decision you have made, the aggregate increase in risk to pedestrians may no longer be negligible. As a result, the total number of traffic fatalities may not be minimized.
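To make the arithmetic concrete, here is a toy model in Python. Every number in it (fleet size, baseline fatalities, per-car risk shift) is an illustrative assumption of mine, not an estimate from our study:

```python
# Toy model of the self-protective car dilemma.
# All numbers are illustrative assumptions, not empirical estimates.

N_CARS = 1_000_000        # hypothetical fleet size
BASE_FATALITIES = 100.0   # expected annual fatalities if every car is impartial

# Assume each self-protective car trades a small cut in passenger risk
# for a slightly larger rise in pedestrian risk: a tiny net increase.
NET_INCREASE_PER_CAR = 5e-5   # extra expected fatalities per self-protective car

def expected_fatalities(n_self_protective: int) -> float:
    """Total expected fatalities given how many cars are self-protective."""
    return BASE_FATALITIES + n_self_protective * NET_INCREASE_PER_CAR

# One buyer's marginal contribution is negligible...
print(expected_fatalities(1) - expected_fatalities(0))       # 5e-05
# ...but when every buyer reasons the same way, the aggregate is not.
print(expected_fatalities(N_CARS) - expected_fatalities(0))  # 50.0
```

The point of the sketch is only the shape of the incentives: each individual decision moves the total by an amount too small to feel, while the sum of those decisions does not.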

This all seems pretty obvious in hindsight. But before we conducted our survey, most of the discussion of AI ethics centered on the perspective of an impartial observer. The question was: What is the right thing to do in a given situation? Our experiment changed the question to: Which outcome would people bring about if they acted in their own self-interest? And as it turns out, the answers to these two questions are not identical.

This type of problem is called a social dilemma. It comes up everywhere. When your car emits CO2, it makes but a tiny contribution to the total amount of pollution in the air. But when everyone’s car does the same, the total amount of pollution may become hazardous to human health.

Similarly, when a fisherman catches a little more fish than his allotment, no one is harmed. But if every fisherman catches a little more, there may no longer be enough fish to sustain the renewal of the population, and all fishermen lose.
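The same structure can be put in a toy simulation. The numbers below (carrying capacity, regrowth rate, quota, fleet size) are invented purely for illustration:

```python
# Toy commons model: a shared fishery with invented, illustrative numbers.

CAPACITY = 1000.0   # carrying capacity of the fishery
GROWTH = 0.2        # 20% regrowth per season
BOATS = 10          # number of fishermen
QUOTA = 15.0        # sustainable per-boat allotment
OVERSHOOT = 5.0     # catching "a little more" than the allotment

def simulate(total_catch: float, seasons: int = 30) -> float:
    """Return the fish stock after the given number of seasons."""
    stock = CAPACITY
    for _ in range(seasons):
        stock = max(stock - total_catch, 0.0)          # the season's harvest
        stock = min(stock * (1.0 + GROWTH), CAPACITY)  # the population regrows
    return stock

print(simulate(BOATS * QUOTA))                # ~1000.0: the stock is sustained
print(simulate(BOATS * QUOTA + OVERSHOOT))    # one boat overshoots: still fine
print(simulate(BOATS * (QUOTA + OVERSHOOT)))  # everyone overshoots: 0.0, collapse
```

One defector leaves the stock essentially untouched; universal defection drives it to zero within a few seasons. That is the signature of a social dilemma.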

References

  • Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
