Threats of Evil AI: The threat to institutions
Can AI algorithms run our educational, governmental or economic institutions? In many ways, they already do. For example, AI is increasingly used to grade essays and detect plagiarism in schools and universities. During the COVID-19 pandemic, which sent millions of students home to study, automated remote proctoring of exams became a booming industry. In addition to creeping students out, it raised a host of security and privacy concerns.
AI is also increasingly used in law enforcement. Algorithmic policing uses AI-driven analytics to guide patrolling and stop-and-frisk decisions. This raises the possibility that, if the algorithms are not trained carefully, they may perpetuate racial and other biases, for example by directing officers to disproportionately stop innocent people from a particular demographic, all while lending those decisions an illusion of algorithmic objectivity.
Another application in law enforcement is risk assessment, where a judge consults an algorithm when deciding whether to grant someone parole or pretrial bail. Again, these prediction algorithms can be biased in various ways: ProPublica's Machine Bias investigation, for instance, found that a widely used risk score flagged Black defendants who did not go on to reoffend as high risk at nearly twice the rate of white defendants. One simple diagnostic for this kind of disparity is sketched below.
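To make that concrete, here is a minimal sketch, using made-up records rather than any real dataset, of the kind of check ProPublica's analysis rests on: comparing, across demographic groups, how often people who did not reoffend were nonetheless flagged as high risk (the false positive rate).

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, reoffended) tuples."""
    flagged = defaultdict(int)   # non-reoffenders flagged as high risk, per group
    innocent = defaultdict(int)  # non-reoffenders, per group
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            innocent[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / innocent[g] for g in innocent}

# Hypothetical records: (group, predicted_high_risk, reoffended)
sample = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", True, False),
    ("B", False, False), ("B", False, False), ("B", True, True), ("B", True, False),
]
print(false_positive_rates(sample))  # approx. {'A': 0.67, 'B': 0.33}: group A is wrongly flagged twice as often

A score can look equally "accurate" for both groups overall and still distribute its mistakes unevenly, which is exactly the pattern such group-by-group checks are designed to surface.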
AI is also being used by healthcare institutions, not only to assist medical diagnosis but also to identify who is eligible for extra care. A recent study analyzed a widely used algorithm that scores patients on their need for additional (expensive) care. Because the algorithm predicted healthcare costs as a proxy for health needs, and less money is spent on Black patients than on equally sick white patients, Black patients received considerably lower scores than white patients of comparable sickness, and so were much less likely to be given extra care.
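A toy simulation makes this proxy effect visible. The numbers below are entirely synthetic and hypothetical, not taken from the study: two groups have identical health needs, but one receives less care (and so generates lower costs) at the same level of sickness, and a score trained to predict cost therefore understates that group's need.

import random

random.seed(0)

def simulate_patient(group):
    # True health need has the same distribution in both groups.
    need = random.gauss(50, 10)
    # Group "B" receives less care per unit of need (a hypothetical 30% gap).
    access = 1.0 if group == "A" else 0.7
    # Observed spending: the proxy label that a cost-trained score learns to predict.
    cost = need * access + random.gauss(0, 5)
    return need, cost

patients = [(g, *simulate_patient(g)) for g in ("A", "B") for _ in range(5000)]

for g in ("A", "B"):
    needs  = [need for grp, need, cost in patients if grp == g]
    scores = [cost for grp, need, cost in patients if grp == g]  # a cost-trained score tracks cost
    print(f"group {g}: mean need {sum(needs)/len(needs):.1f}, mean score {sum(scores)/len(scores):.1f}")

# Both groups have the same mean need (about 50), yet group B's mean score is
# roughly 30% lower, so any enrollment cutoff applied to the score would admit
# far fewer equally sick group-B patients.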
Does this mean we should eliminate the use of AI in our institutions? Not so fast. As economist Sendhil Mullainathan put it in a New York Times op-ed, algorithms may well be biased, but so are people. In fact, “biased algorithms are easier to fix than biased people.” Using AI for decision-making is forcing us to be more explicit about our expectations, and inviting us to scrutinize and improve how our institutions work.
So while AI poses a threat to our institutions, it also presents an opportunity to improve the status quo. But this will not happen without concerted effort, which may require redesigning those very institutions.
References
Chin, M. Exam anxiety: how remote test-proctoring is creeping students out. The Verge (2020).
Lyall, S. The dangerous rise of policing by algorithm. Prospect Magazine (2021).
Angwin, J., Larson, J., Mattu, S. & Kirchner, L. Machine Bias. ProPublica (2016).
Obermeyer, Z., Powers, B., Vogeli, C. & Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (2019).
Mullainathan, S. Biased algorithms are easier to fix than biased people. The New York Times (2019).