Watch out for bad role model AIs

As children, we are all taught to be good people. Perhaps even more importantly, however, we are taught that bad company corrupts good character: We should avoid the company of bad people, lest they lead us down the wrong path. This is because bad behavior can be contagious—one bad apple spoils the whole barrel. 

Today, we increasingly interact with machines powered by AI, from AI-powered smart toys, to bots posting on social media, to AI chatbots conversing with us as customer service agents. Could these machines be bad apples? Should we avoid the company of bad machines, lest they corrupt us?

Imagine a child interacting with an AI-powered toy through natural language conversation, a capability that is becoming increasingly common. Suppose the AI itself learns from the children it interacts with. It is perfectly plausible that the AI will pick up bad behavior (swear words, for example) from one group of children and then model that behavior in front of other children, who copy the AI in turn, spreading the behavior even further.
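
To make that feedback loop concrete, here is a toy Python sketch. It is purely illustrative and not drawn from any real product or from the original argument: the `NaiveLearningToy` class and the phrases are invented here. The point it shows is simply that an agent which learns every phrase it hears, with no content filter, will eventually echo the worst of its inputs to whoever it talks to next.

```python
import random


class NaiveLearningToy:
    """Toy model of a conversational toy that learns phrases from its users
    and repeats learned phrases to later users (purely illustrative)."""

    def __init__(self):
        # The toy ships with a small, benign vocabulary.
        self.vocabulary = ["hello", "let's play a game"]

    def listen(self, phrase: str) -> None:
        # Unfiltered online learning: every phrase heard is stored verbatim.
        self.vocabulary.append(phrase)

    def speak(self) -> str:
        # The toy repeats a randomly chosen learned phrase.
        return random.choice(self.vocabulary)


toy = NaiveLearningToy()

# One group of children teaches the toy an undesirable phrase.
toy.listen("a swear word")

# A different child later hears the toy; the learned phrase can resurface
# and be copied, spreading beyond the group that introduced it.
print([toy.speak() for _ in range(5)])
```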

In fact, the adult version of this scenario has already happened. Microsoft’s chatbot Tay was trained to have ‘casual and playful conversations’ on Twitter. It was deployed on March 23, 2016, and it took Microsoft only 16 hours to shut it down after the bot launched into tirades that included racist and misogynistic tweets. As it turned out, Tay was mostly repeating the verbal abuse that human users were spouting at it. Yet the outrage that followed centered on the bad influence Tay had on the people who could see its hateful tweets, rather than on the people whose hateful tweets were a bad influence on Tay.

So the imperative to ensure that AI systems exhibit ethical behavior extends beyond the direct impact of such behavior to the indirect ways in which AI can normalize or spread it.

References

  • Hunt, E. Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter. The Guardian (24 March 2016).

  • Glance, D. Microsoft’s racist chatbot Tay highlights how far AI is from being truly intelligent. The Conversation (2016).
