Consider the incentives of paying customers in shaping AI behavior

Imagine a self-driving car is about to crash into five people crossing the street and will likely kill them all. The car calculates that it has one alternative: it can swerve sharply toward the curb and collide with a large tree, which would kill the passenger in the car.
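To make the dilemma concrete, here is a minimal, hypothetical sketch of the casualty-minimizing calculation described above. The Outcome type and choose_action function are illustrative names, not drawn from any real autonomous-vehicle codebase.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    expected_casualties: int

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # A purely utilitarian policy: pick the action with the fewest
    # expected casualties, regardless of who those casualties are.
    return min(outcomes, key=lambda o: o.expected_casualties)

# The scenario above: stay on course (5 pedestrians) vs. swerve (1 passenger).
dilemma = [
    Outcome("stay_course", expected_casualties=5),
    Outcome("swerve_into_tree", expected_casualties=1),
]
print(choose_action(dilemma).action)  # -> swerve_into_tree
```

Under this rule the car swerves into the tree: one expected casualty beats five, even though the one is its own passenger.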

You’ve been invited to a focus group by a major car manufacturer, which has been struggling with how to program the car to handle this situation. You are asked: what do you think the car should do? You say with righteous conviction: “The car should save more lives, even if that means killing its own occupant.” Indeed, in our own studies, the vast majority of people said the car should always minimize the total number of casualties, even if this entails sacrificing its own passenger.

A few months later, someone from the car company calls to tell you that, thanks to your participation in their survey, you’ve won a lottery for one of their first driverless cars. You can choose between two models: one programmed according to your own suggestion, to sacrifice you if that saves more lives; the other programmed to always put your safety first. Which car will you choose?

In our surveys, most people thought autonomous vehicles (AVs) should be programmed to save more lives, even if this means harming the passenger. Yet the same people said they would not purchase such a car themselves. If you felt this inner conflict, rest assured that you are not alone.

Walmart founder Sam Walton famously said, “There is only one boss. The customer.” By default, then, car makers will do whatever their customers want. We must therefore face the fact that in some situations, an AI’s obligation to the paying customer imposes costs on third parties. This negative impact, such as increased risk to pedestrians, is what economists call a ‘negative externality.’

There is another complication. When you purchase a car with a bull bar, you increase your own safety but decrease the safety of others outside the vehicle, which is why bull bars were banned in some countries. But if the negative externality is a consequence of choices made by an AI, such as the autonomous car’s driving behavior, the externality is no longer visible the way a shiny metal bull bar is. We might call this a black box externality, as the sketch below illustrates.
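To see why such an externality is ‘black box,’ consider a deliberately simplified sketch in which the passenger-versus-pedestrian trade-off is just a numeric weight buried in the car’s software. The parameter name PASSENGER_PRIORITY_WEIGHT, the risk estimates, and the risk model are all invented for illustration; no real AV system is claimed to work this way.

```python
# Hypothetical illustration of a "black box externality": the trade-off
# between passenger and pedestrian risk is a weight hidden in software,
# invisible to anyone outside the vehicle.

PASSENGER_PRIORITY_WEIGHT = 0.9  # set by the manufacturer; outsiders never see it

def weighted_risk(passenger_risk: float, pedestrian_risk: float) -> float:
    # A higher weight shifts risk away from the passenger and onto pedestrians.
    return (PASSENGER_PRIORITY_WEIGHT * passenger_risk
            + (1 - PASSENGER_PRIORITY_WEIGHT) * pedestrian_risk)

# Two maneuvers with (passenger_risk, pedestrian_risk) estimates:
maneuvers = {
    "stay_course": (0.1, 0.5),        # low passenger risk, high pedestrian risk
    "swerve_into_tree": (0.9, 0.05),  # high passenger risk, low pedestrian risk
}
best = min(maneuvers, key=lambda m: weighted_risk(*maneuvers[m]))
print(best)  # -> stay_course: the weight shifts risk onto pedestrians
```

Unlike a bull bar, nothing about this car’s appearance reveals that the weight is 0.9 rather than 0.5; the externality is imposed by a line of code no pedestrian can inspect.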

References

  • Bonnefon, J.-F., Shariff, A. & Rahwan, I. The Social Dilemma of Autonomous Vehicles. Science 352, 1573–1576 (2016).

  • Hardy, B. J. A study of accidents involving bull bar equipped vehicles. TRL Report 243 (1996).

  • Bull bars banned from next year. BBC News (2001).
