Do not let AI be a pushover

German philosopher Immanuel Kant (1724–1804), known for his ‘categorical imperative’, once wrote: “[H]e who is cruel to animals becomes hard also in his dealings with men.” More than two centuries later, the Montréal Declaration for Responsible Development of Artificial Intelligence stated that AI systems “should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.” The fear is that cruelty towards robots that resemble humans or animals may, in turn, encourage cruelty towards the real thing.

Imagine a different situation. You call a customer service line to return a defective product. You face a sequence of incomprehensible menu options seemingly designed to frustrate you until you give up on your grievance. Finally, you reach a customer service agent, who declares that it is a robot. Given your frustrated state, you may be tempted to use more abusive language towards the robot. After all, you are not hurting any real person. But this abusive behavior may actually delay the resolution of your problem, and it may also encourage you to be more abusive the next time you speak to a human customer service agent.

This scenario is plausible. In research led by my former student Fatimah Ishowo-Oloko, we had people play repeated games with either humans or bots. We found that when people believed they were playing with bots, they were less cooperative overall, even though the algorithm was actually nicer and more cooperative than the humans. We know this because when the bot posed as a human, people were much more cooperative; indeed, the degree of cooperation was even greater than when humans played with each other. This is consistent with recent work led by Jurgis Karpus, which found evidence for a human tendency towards algorithm exploitation: people cooperate less with benevolent AI agents than with benevolent humans, and feel less guilty about doing so.

All these examples highlight the possibility that AI systems may need to push back on human cruelty, rudeness, and non-cooperation. We may have to design machines that hold people to a higher standard of conduct, rather than machines that are merely passive objects on which people can manifest, and perhaps develop, their own evil instincts.

References

  • The Moral Status of Animals. Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/moral-animal/ (2017).

  • Abrassart, C. et al. Montréal Declaration for Responsible Development of Artificial Intelligence. Montréal Declaration https://www.montrealdeclaration-responsibleai.com/the-declaration (2018).

  • Anderson, S. L. The unacceptability of Asimov’s three laws of robotics as a basis for machine ethics. in Machine Ethics 285–296 (Cambridge University Press, 2011).

  • Darling, K. Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. in Robot Law (Edward Elgar Publishing, 2016).

  • Coghlan, S., Vetere, F., Waycott, J. & Barbosa Neves, B. Could social robots make us kinder or crueller to humans and animals? Int. J. Soc. Robot. 11, 741–751 (2019).

  • Ishowo-Oloko, F. et al. Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nat. Mach. Intell. 1, 517–521 (2019).

  • Karpus, J., Krüger, A., Verba, J. T., Bahrami, B. & Deroy, O. Algorithm exploitation: Humans are keen to exploit benevolent AI. iScience 24, 102679 (2021).
