Features of Evil AI: Imperviousness to punishment

In Season 3 of the animated series Futurama, Bender Bending Rodríguez says, “Being a robot's great, but we don't have emotions and sometimes that makes me very sad.” Bender’s existential lament notwithstanding, we can safely assume that for the foreseeable future machines will have no concept of pleasure or pain, happiness or sadness. This holds despite major advances in the field of ‘affective computing’, which enables machines to perceive and express emotions, but not to feel them.

So for the time being, putting an AI in jail, or punishing it physically, would not deter it from misbehaving. Instead, we need to attribute moral responsibility to the human actors behind the AI: they hold the ultimate moral agency, and thus the ultimate responsibility, and they do care about punishment and reward.

Today, corporations are considered ‘legal persons’, as distinct from individual human beings, who are ‘natural persons’. This personhood gives corporations certain rights and responsibilities, such as the right to own property or to enter into a contract. When a corporation breaks the law, say by polluting a river, it may be subject to government fines or held liable for monetary compensation to victims. These measures cause the corporation no physical pain, but they certainly constitute punishments with deterrent force. Ultimately, they deter the humans behind the corporation: shareholders, employees, the board of directors, and so on. In the most serious cases, these individuals may themselves be subject to civil or criminal prosecution.

Some have argued that AI systems should have a kind of legal personhood akin to that of corporations. But this solution works only if the connection between the offending AI and the humans responsible for it is clearly established. Otherwise, we risk misplacing moral responsibility, causal accountability, and legal liability for the AI’s mistakes and misuses.

In the very long term, some people believe AIs could, or even should, acquire full moral rights. Such a development may well require us to devise deterrents suitable for machines, say monetary fines, jail, or software deletion. For the time being, however, we need to focus on the humans behind the AI.

References

  • Picard, R. W. Affective Computing. (MIT Press, 2000).

  • El Kaliouby, R. & Colman, C. Girl Decoded: A Scientist’s Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology. (Currency, 2021).

  • Lima, G., Jeon, C., Cha, M. & Park, K. Will Punishing Robots Become Imperative in the Future? in Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems 1–8 (Association for Computing Machinery, 2020).

  • Chesterman, S. Artificial Intelligence and the Limits of Legal Personality. International and Comparative Law Quarterly 69, 819–844 (2020).

  • Floridi, L. & Taddeo, M. Romans would have denied robots legal personhood. Nature 557, 309 (2018).

  • Gunkel, D. J. Robot Rights. (MIT Press, 2018).

  • Coeckelbergh, M. Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf. Technol. 12, 209–221 (2010).
