Avoid over-trusting machines

As part of a 2009 social experiment, New York-based artist Kacie Kinzer set the Tweenbot, a cute, 10-inch cardboard robot with a smiley face, loose to wander aimlessly through Washington Square Park. It held a sign asking passers-by to guide it from one corner of the park to the other. With the help of 29 people, the little bot completed its journey in 42 minutes. As Kinzer recounted, “Every time the robot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal.” The Tweenbot is now in the permanent collection of the Museum of Modern Art (MoMA) in New York.

Tweenbots were extremely simple: a robot-shaped piece of cardboard with a smiley face mounted on a battery-powered chassis that could only drive straight ahead. Imagine what an AI-powered bot that can read and express emotions could do. Could robots use their ability to elicit positive emotional responses to exploit people?

A group of researchers from Harvard and Brown universities set out to test just that. They had a robot stand outside a secure student dormitory. By simply posing as a food delivery robot, the bot got more than 75% of people to let it in. Even worse, 13 of the 15 people who identified the bot as a bomb threat still let it into the building. And that robot did not even have a smiley face!

Over-trust in machines is not limited to infiltrating buildings; it extends to escaping them, too. In a study by researchers at the Georgia Institute of Technology, participants watched a robot perform poorly on a navigation task. Yet in a subsequent simulated emergency, they all blindly followed the same robot around. The robot managed to lead the majority of participants into a dark room with no exit.

There are many possible reasons for over-trusting machines. One is simple human gullibility, say, toward robots with smiley faces. Another is a lack of understanding of the AI’s level of competence. We may over-trust self-driving cars, and allow their mass production, if we lack a sufficient understanding of how safe they are compared to human drivers. We may over-trust a medical diagnosis system if we do not know how accurate it is, not just on average, but for specific groups of people. The foundation of trust is predictability, and to predict, we must understand.
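
To make the last point concrete, here is a minimal sketch, in Python with entirely made-up numbers, of how a reassuring overall accuracy can hide a subgroup for which a diagnostic model is no better than a coin flip:

```python
# Hypothetical counts, for illustration only: a classifier that is ~95%
# accurate on a large majority group but only 50% accurate on a small
# minority group still reports a high overall accuracy.
groups = {
    "majority": {"n": 950, "n_correct": 903},  # ~95% accurate
    "minority": {"n": 50,  "n_correct": 25},   # 50% accurate, a coin flip
}

total = sum(g["n"] for g in groups.values())
total_correct = sum(g["n_correct"] for g in groups.values())
print(f"overall accuracy: {total_correct / total:.1%}")  # 92.8%, looks trustworthy

for name, g in groups.items():
    # The per-group breakdown reveals what the headline number conceals.
    print(f"{name:8s} accuracy: {g['n_correct'] / g['n']:.1%}")
```

Reporting per-group numbers alongside the headline figure is one simple way to keep trust calibrated to actual competence.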

References

  • Tweenbots. http://www.tweenbots.com/

  • For The Love Of Robots. The Daily Dish, The Atlantic https://www.theatlantic.com/daily-dish/archive/2009/04/for-the-love-of-robots/202967/ (2009).

  • Forrest, B. Tweenbots: Cute Beats Smart. O’Reilly Radar (2009).

  • Booth, S. et al. Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security. in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction 426–434 (Association for Computing Machinery, 2017).

  • Robinette, P., Li, W., Allen, R., Howard, A. M. & Wagner, A. R. Overtrust of robots in emergency evacuation scenarios. in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 101–108 (2016).
