Avoid under-trusting machines

[Image: shark vs. puppy]

Putting too much trust in a faulty or dangerous machine is bad. But it is also bad to put too little trust in a machine that performs well. For example, suppose autonomous vehicles become substantially safer than human drivers, but people distrust them for some irrational reason. This would be a cause for concern, considering that traffic accidents are a major cause of death, especially among young people. The same goes for other safety-critical domains in which AI may outperform humans, such as medical diagnosis.

People may under-trust machines for many reasons. One reason is ignorance: they may simply not know that the machine is indeed superior. If we have robust scientific evidence that self-driving cars are safer than most human drivers, but this information is not communicated effectively to consumers, few will buy into the technology.

Fear, whether mild anxiety or outright phobia, is another reason behind under-trust in machines. We already have many examples of this. Fear of flying is a very common phenomenon, despite the fact that the riskiest part of any flight is the car trip to the airport.

A related phenomenon is skewed risk perception. People overestimate the likelihood of dramatic but rare risks, such as aircraft crashes and shark attacks, relative to far more common ones like car accidents. And it doesn’t help that such rare events receive disproportionate media attention, not to mention their presence in Hollywood movies like Steven Spielberg’s 1975 blockbuster Jaws. Now you know why some of the current self-driving car prototypes look like cute puppies, rather than something like a ‘shark mouth’ nose art on a World War II bomber.
