Consider who is responsible for the dilemma
In the Moral Machine experiment, our study participants were more likely to spare those who obeyed the law by crossing the street at the green light than those who were jaywalking. This preference was quite strong: all else being equal, crossing legally increased the chance of survival by almost 40%.
Here, public opinion is partially consistent with the recommendations set out in the world’s first ethical guidelines for autonomous vehicles, developed in Germany. Specifically, the guidelines state that “parties involved in the generation of mobility risks must not sacrifice non-involved parties.” When someone jaywalks or crosses against a red pedestrian light, and an autonomous vehicle faces a dilemma about whether to harm this person or someone else, the jaywalker has contributed to generating the dilemma. While death or serious injury is certainly not a proportionate punishment for crossing the street illegally, it seems reasonable not to punish someone else for such a mistake. Still, the German guidelines do not say that programmers should sacrifice the jaywalker either. In a sense, the guidelines provide a deliberately incomplete answer, while prohibiting programmers from offsetting individuals against one another.
The broader point applies to AI systems beyond autonomous driving. We must consider whether an AI system should take into account how the different stakeholders contributed to the situation in which it finds itself. For example, consider an AI algorithm that makes triage decisions, determining the priority of patients’ treatment when medical resources such as medicine or personnel are scarce. This type of situation became all too real during the COVID-19 pandemic, and it caused much controversy. Triage decisions typically consider the severity of a patient’s condition and their prognosis, i.e. the likelihood of survival. Should they also consider the patient’s personal life decisions, e.g. the decision to smoke, in resolving the dilemma? What if an AI system did so?
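To make the design question concrete, here is a minimal sketch of how such a triage score might be computed, with the contested factor isolated behind a single flag. Every detail here (the fields, the multiplicative score, the 20% penalty) is an illustrative assumption, not a description of any real triage protocol or deployed system.

```python
# Hypothetical triage-priority sketch, for illustration only.
# Field names and weights are assumptions, not drawn from any real system.

from dataclasses import dataclass

@dataclass
class Patient:
    severity: float              # 0.0 (mild) to 1.0 (critical)
    survival_probability: float  # prognosis: estimated chance of survival with treatment
    smoker: bool                 # a "personal life decision" of the kind discussed above

def triage_score(patient: Patient, weigh_responsibility: bool = False) -> float:
    """Higher score means higher treatment priority.

    Standard triage considers only severity and prognosis. The
    weigh_responsibility flag marks the ethically contested step:
    discounting priority based on a patient's own choices.
    """
    score = patient.severity * patient.survival_probability
    if weigh_responsibility and patient.smoker:
        # The contested design choice: penalizing a lifestyle decision.
        # The 20% discount is arbitrary, chosen only to make the dilemma concrete.
        score *= 0.8
    return score

# Two otherwise identical patients diverge in priority only when the
# responsibility flag is switched on.
a = Patient(severity=0.9, survival_probability=0.6, smoker=False)
b = Patient(severity=0.9, survival_probability=0.6, smoker=True)
print(triage_score(a), triage_score(b))              # equal priority
print(triage_score(a, True), triage_score(b, True))  # b is penalized
```

The point of the sketch is that the ethically loaded choice does not live in some exotic component: it is one line of code, and someone must decide whether to write it.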
References
Awad, E. et al. The Moral Machine experiment. Nature 563, 59–64 (2018).
Luetge, C. The German Ethics Code for Automated and Connected Driving. Philos. Technol. 30, 547–558 (2017).
Stanley, A. A Faulty Algorithm Screwed Residents Out of Stanford’s Vaccine Distribution Plan. Gizmodo (2020).