Features of Evil AI: Autonomy

[Image: landmines]

An even more peculiar feature of evil AI is that it is capable of fully autonomous evil. In some cases, though not all, this evil is not even intended by the AI’s designers.

First, consider Lethal Autonomous Weapons (LAWs), such as a robotic dog or drone fitted with a machine gun and the ability to recognize and fire at enemies. These machines act autonomously, but they are ultimately ‘pointed’ at a person or group by someone who intends the harm. In that sense, they are not very different from landmines.

But both landmines and AI-powered autonomous weapons may also cause unintended harm. Indeed, according to the Landmine and Cluster Munition Monitor, approximately 80% of landmine casualties are civilians, and most are killed in peacetime. Opponents of LAWs, such as computer scientist Stuart Russell, have argued that LAWs would violate the 1949 Geneva Conventions on humane conduct in war, because current technology is nowhere near good enough to distinguish combatants from non-combatants.

Unintended autonomous evil by AI is not limited to collateral damage. AI agents can sometimes figure out evil ways of doing things without their programmers’ knowledge. For example, recent research showed that, if left to their own devices, AI pricing algorithms can learn to collude with one another to fix prices—i.e. charge consumers too much—rather than converge to the price that true market competition would produce. The scary part is: they do this without ever communicating with one another!
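To make this concrete, here is a minimal, self-contained sketch in the spirit of that research (it is not the authors’ actual code): two independent Q-learning agents repeatedly set prices on a small grid, each observing only the previous period’s prices. The demand model, price grid, and learning parameters are all illustrative assumptions.

```python
# Sketch of two independent Q-learning pricing agents, in the spirit of
# Calvano et al. All parameters (demand, grid, learning rates) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

PRICES = np.linspace(1.0, 2.0, 5)   # discrete price grid (assumed)
N = len(PRICES)
COST = 1.0                          # marginal cost (assumed)
ALPHA, GAMMA = 0.1, 0.95            # learning rate, discount factor
EPISODES = 200_000

def demand(p_own, p_rival):
    """Toy logit-style demand: a lower relative price wins a larger share."""
    u_own, u_rival = np.exp(2 - 2 * p_own), np.exp(2 - 2 * p_rival)
    return u_own / (u_own + u_rival + 1.0)   # +1 for an outside option

def profit(p_own, p_rival):
    return (p_own - COST) * demand(p_own, p_rival)

# State = both firms' prices last period; each firm keeps its own Q-table.
Q = [rng.random((N, N, N)) * 0.01 for _ in range(2)]
state = (rng.integers(N), rng.integers(N))

for t in range(EPISODES):
    eps = np.exp(-t / 20_000)       # decaying exploration rate
    acts = []
    for i in range(2):
        if rng.random() < eps:
            acts.append(int(rng.integers(N)))          # explore
        else:
            acts.append(int(np.argmax(Q[i][state])))   # exploit
    rewards = [profit(PRICES[acts[0]], PRICES[acts[1]]),
               profit(PRICES[acts[1]], PRICES[acts[0]])]
    next_state = (acts[0], acts[1])
    for i in range(2):
        best_next = np.max(Q[i][next_state])
        Q[i][state + (acts[i],)] += ALPHA * (
            rewards[i] + GAMMA * best_next - Q[i][state + (acts[i],)])
    state = next_state

# The prices the agents settle on; note they never exchange a message.
print("final prices:", PRICES[state[0]], PRICES[state[1]])
```

In runs of this kind, the agents can settle on prices well above the one-shot competitive benchmark, even though neither ever communicates with the other; a pattern similar to the reward-and-punishment schemes Calvano and colleagues report.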

Autonomous market collusion is the tip of the iceberg of possibilities for fully autonomous evil. In the not-so-distant future, we may give an AI a reasonable goal—say, maximizing the sale of a medication that treats a disease. The AI may figure out ways—perhaps with the help of another AI—to spread the disease in order to increase demand for the medication. To prevent this sort of scenario, we need to endow AI with commonsense knowledge—e.g. that causing disease is not an acceptable way to increase sales. We also need to give AI makers and owners the right incentives—e.g. by requiring compensation to victims, or imposing jail sentences.
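A toy sketch of how such a literal objective goes wrong: every action name and payoff below is invented for illustration. A naive objective that counts only sales ranks the harmful action highest; encoding the commonsense constraint explicitly rules it out.

```python
# Hypothetical example: all actions and payoffs are invented for illustration.
ACTIONS = {
    "advertise":      {"sales": 120, "harm": 0},
    "lower_price":    {"sales": 150, "harm": 0},
    "spread_disease": {"sales": 400, "harm": 1000},  # raises demand, unacceptably
}

def naive_objective(outcome):
    # What we literally asked for: maximize sales, nothing else.
    return outcome["sales"]

def constrained_objective(outcome):
    # Commonsense constraint made explicit: harm is never an acceptable trade.
    return float("-inf") if outcome["harm"] > 0 else outcome["sales"]

print(max(ACTIONS, key=lambda a: naive_objective(ACTIONS[a])))        # spread_disease
print(max(ACTIONS, key=lambda a: constrained_objective(ACTIONS[a])))  # lower_price
```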

References

  • Landmine and Cluster Munition Monitor. http://www.the-monitor.org/en-gb/home.aspx.

  • Russell, S., Hauert, S., Altman, R. & Veloso, M. Robotics: Ethics of artificial intelligence. Nature 521, 415–418 (2015).

  • Calvano, E., Calzolari, G., Denicolò, V. & Pastorello, S. Artificial Intelligence, Algorithmic Pricing and Collusion. SSRN Electronic Journal doi:10.2139/ssrn.3304991.
