Eliminate bad algorithmic advisors

In the mythology of the three Abrahamic religions (Judaism, Christianity, and Islam), the first two humans, Adam and Eve, are enjoying life in Paradise. They can have anything they want; God asks only that they not eat the forbidden fruit of one particular tree. The devil tempts them into doing so, and they are expelled, along with all their descendants, from the Garden of Eden into the world with all its suffering. This is arguably the worst advice in the history of humankind, or, if you are not religious, the worst advice ever imagined.

AI systems have been giving us advice for a long time already. GPS-powered mapping apps suggest routes. Search engines and social media feeds recommend news and entertainment. Algorithmic advisors recommend investment products and strategies. Today, millions of people interact with conversational AI assistants like Amazon’s Alexa and Apple’s Siri, and with advice-giving chatbots like Replika.

Advice-giving AIs are also widely used by professionals. AI systems now advise sales agents on improving their pitch, doctors on medical diagnoses, judges on jail and bail decisions, and law enforcement agencies on how to schedule their patrols. The list continues to grow.

In all these domains, AI systems can give people bad advice that harms them, for example by steering their money into scams or overly risky funds, or by recommending a self-treatment routine that ends up worsening their health. Today, there are strict guidelines about who may give investment, medical, or tax advice. Similar guidelines and certification mechanisms will likely be developed to control which AIs can give what kinds of advice to people. This could go a long way toward eliminating bad advice.
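To make the idea concrete, here is a minimal Python sketch of what such a certification gate might look like in software. Everything in it is hypothetical: the registry, the advisor names, and the domain labels are invented for the example, not drawn from any real certification scheme.

```python
from dataclasses import dataclass

# Hypothetical certification registry: which advisor systems are cleared
# to give which kinds of advice. All names here are invented for the example.
CERTIFIED_DOMAINS = {
    "medbot-v2": {"medical"},
    "taxhelper-1": {"tax"},
    "fundpicker": {"investment"},
}

@dataclass
class Advice:
    advisor_id: str
    domain: str  # e.g. "medical", "investment", "tax"
    text: str

def gate_advice(advice: Advice) -> str:
    """Relay advice to the user only if the advisor is certified for its domain."""
    allowed = CERTIFIED_DOMAINS.get(advice.advisor_id, set())
    if advice.domain not in allowed:
        raise PermissionError(
            f"{advice.advisor_id} is not certified to give {advice.domain} advice"
        )
    return advice.text

# A certified advisor's advice passes through.
print(gate_advice(Advice("medbot-v2", "medical", "See a doctor about that cough.")))

# An uncertified advisor is blocked before its advice reaches the user.
try:
    gate_advice(Advice("fundpicker", "medical", "Skip the doctor; buy supplements."))
except PermissionError as err:
    print(err)
```

In a real deployment the registry would of course be maintained by a regulator or certification body rather than hard-coded, but the principle is the same: advice is filtered by who is certified to give it.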

Bad AI advisors may also nudge people to perform illegal or immoral acts. This will likely be a bigger problem among professionals, for example a judge or a law enforcement officer justifying racial profiling because “the AI said so.” When AI advice is seen as authoritative, psychological phenomena like obedience to authority may kick in. Holding the human decision-makers accountable is key, not least because it gives them an incentive to pick good AI advisors.
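Accountability is easier to enforce when every assisted decision leaves a trace. The sketch below, again hypothetical in all its field names, logs what the AI recommended alongside what the human actually did, so that “the AI said so” can never stand in for the human’s own judgment.

```python
import json
import time

# Hypothetical audit trail: whenever a professional acts on an AI
# recommendation, record who decided, what the AI suggested, and what the
# human actually did. All field names are invented for the example.
def log_decision(log_path, user_id, advisor_id, recommendation, action_taken):
    record = {
        "timestamp": time.time(),
        "user_id": user_id,              # the accountable human
        "advisor_id": advisor_id,        # which AI gave the advice
        "recommendation": recommendation,
        "action_taken": action_taken,    # may differ from the recommendation
        "followed_ai": recommendation == action_taken,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a judge overriding an AI bail recommendation leaves a trace,
# and so does a judge who follows it, so responsibility stays with the human.
log_decision("decisions.jsonl", "judge-042", "bail-advisor-v1",
             recommendation="deny bail", action_taken="grant bail")
```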

References

  • Gong.io. https://www.gong.io/.

  • Sutton, R. T. et al. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digital Medicine 3 (2020).

  • Grgic-Hlaca, N., Engel, C. & Gummadi, K. P. Human Decision Making with Machine Assistance: An Experiment on Bailing and Jailing. SSRN Electronic Journal (2019). doi:10.2139/ssrn.3465622.

  • Tambe, M. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. (Cambridge University Press, 2011).

  • Milgram, S. Obedience to Authority: An Experimental View. (Harper & Row, 1974).
