Prevent algorithm capture
In politics, ‘regulatory capture’ occurs when a political entity or regulatory agency is co-opted to serve the commercial, political or ideological interests of a small group, whose special interests are prioritized over those of the general public. For example, taxi owners may capture transportation regulators in order to ban ride-sharing services that take some of their market share, and oil companies may capture environmental protection authorities to reduce oversight of the environmental impact of their activities.
AI affords a new kind of regulatory capture: whoever controls the algorithm wins. This phenomenon, which I call ‘Algorithm Capture,’ can be especially nefarious because computer code is insidious: no employee shows up with a fancy car in the parking garage of the regulatory agency, raising suspicions of bribery.
Furthermore, Algorithm Capture can extend to the realm of ideological warfare. Consider a highly skilled AI software developer responsible for programming a search engine's ranking algorithm, or the news-ranking algorithm of a social media platform, which decides how to prioritize news and posts in your feed. Such a programmer could increase censorship of conservative or liberal content, alter the user interface, or nudge the voting behavior of certain groups in order to manipulate elections.
As another example, consider the emergence of Algorithmic Hiring—the use of AI to help make hiring or college admission decisions by analyzing applicants’ resumes, test scores, interview footage, etc. A programmer may design the AI to implement a kind of ‘affirmative action’ policy, increasing the representation of particular groups based on their gender, race, nationality, etc. Another programmer may design the AI to be fair in the sense that it ignores such demographic factors. A third programmer may alter the algorithm to subtly favor ‘native accents’ in the interview video, excluding immigrants even when the job does not require such language skills. Regardless of where you stand on this controversial topic, you may be concerned if hiring policies that impact potentially millions of people reflect the ideologies of individual programmers without any oversight.
To preempt this risk, we first need awareness of the possibilities, including a solid scientific understanding of how algorithms shape societal outcomes. Second, we need to be able to audit algorithms deployed in sensitive domains, in order to flag illegal behavior such as discrimination in hiring or the manipulation of elections.
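To make the auditing idea concrete, one basic check that such an audit could include is the ‘four-fifths rule’ from US employment-discrimination guidance: if any group's selection rate falls below 80% of the highest group's rate, the outcome is flagged as potential adverse impact. The sketch below is a minimal, hypothetical illustration of that check; the group names, numbers and function names are invented for the example and do not come from any real hiring system.

```python
# Hypothetical audit sketch: the "four-fifths rule" check for adverse
# impact in hiring outcomes. All data below is illustrative.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (num_selected, num_applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's selection rate (potential adverse impact)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

if __name__ == "__main__":
    # Illustrative audit data: (hired, applied) per demographic group.
    audit = {"group_a": (50, 100), "group_b": (30, 100), "group_c": (45, 100)}
    # group_b's rate (0.30) is below 0.8 * 0.50, so it is flagged.
    print(four_fifths_flags(audit))
```

Real audits are far richer (they examine features, proxies and model internals, not just outcome rates), but even this simple outcome-level test shows that parts of the oversight problem can themselves be automated and standardized.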
References
Hardin, G. The Tragedy of the Commons. Science 162, 1243–1248 (1968).
Tynan, D. Facebook accused of censorship after hundreds of US political pages purged. The Guardian (2018).
Bond, R. M. et al. A 61-Million-Person Experiment in Social Influence and Political Mobilization. Nature 489, 295–298 (2012).
Engler, A. Auditing employment algorithms for discrimination. Brookings Institution https://www.brookings.edu/research/auditing-employment-algorithms-for-discrimination/ (2021).
Guszcza, J., Rahwan, I., Bible, W., Cebrian, M. & Katyal, V. Why we need to audit algorithms. Harvard Business Review (2018).