Features of Evil AI: Scalability

[Image: Scales of Harm]

One of the characteristics of evil AI is that it reduces the cost of producing intentional harm at scale. Because AI programs can be copied cheaply, evil is also easy to replicate. In economics parlance, AI reduces the marginal cost of evil.

AI can scale up small-scale harm to massive proportions. Consider your interactions with customer support over the phone. Many companies now use automated systems to handle routine issues without human intervention, such as fetching your bank account balance or reporting the bank's opening hours. But we have all experienced how irritating these systems can be. Their supposedly advanced speech-understanding AI never seems to understand what we say (certainly not my accent), and it keeps shuttling us back and forth among the same menu items. How many millions of human hours have been wasted by these AIs? When the same speech-processing system is used by thousands of companies to handle millions of customer calls, behavior that mildly irritates one person becomes a global campaign of irritation and a massive waste of time.

AI can also scale up existential harm. Consider the use of autonomous weapons in warfare. Today, going to war with another nation is a very expensive decision, both economically and politically. If AI makes warfare more economically efficient, governments may become more willing to go to war; they may become more trigger-happy. More importantly, with a fully or largely automated army, the political cost of going to war would drop dramatically. Politicians would no longer have to justify to their citizens why their fellow countrymen and women are returning from the war zone psychologically distressed, severely injured, or dead.

Other forms of evil arise from the use of AI in decision-making by governments and corporations. AI systems increasingly decide who gets a loan, a job, or a certain kind of expensive medical treatment. If these decisions are not based on sound principles, they can lead to evil, such as discrimination on the basis of gender or ethnicity. And when these systems are adopted widely, the cost of any single mistake multiplies very quickly.
