Do not use AI as a scapegoat
Consider a doctor using an AI to help determine whether a mole on a patient’s skin is cancerous. If the diagnosis turns out to be wrong and the patient is harmed, there is a temptation for the doctor to blame the mistake on the AI.
The same goes for any expert using AI to inform decision-making. A judge may consult an AI before deciding whether to grant bail, and blame any mistakes on the algorithm. An interviewer may blame a bad hire on the AI that assists in job candidate evaluation, and so on.
Tripat Gill, an expert in consumer psychology at Wilfrid Laurier University, Canada, explored how using AI as a scapegoat can also alter consumer morality. Gill ran experiments in which people imagined themselves driving a car, or being driven by an autonomous vehicle. He then asked them whether the car should avoid harming a pedestrian, at the expense of their own safety.
The data revealed a clear effect of AI as a moral scapegoat. Participants considered harm to a pedestrian more permissible when it was caused by an autonomous vehicle than by a regular car they drove themselves. In fact, the proportion of participants choosing harm to a pedestrian nearly doubled in the autonomous-vehicle condition. This shift in moral judgment was driven by the attribution of responsibility to the autonomous vehicle.
We must ensure that when humans use AI systems, whether as consumers or decision-makers, their ability to blame machines for misdeeds does not weaken their incentive to act morally and vigilantly. Designers of AI systems will be happy to hear this, since it reduces their own liability. Nevertheless, a balance between human and machine responsibility is needed, so that each party (the users and the designers of the AI) is held accountable in proportion to its causal responsibility.
References
Gill, T. Blame It on the Self-Driving Car: How Autonomous Vehicles Can Alter Consumer Morality. J. Consum. Res. 47, 272–291 (2020).