Lucubrate Magazine, Issue 37, August 17th, 2018
We are at the tipping point of a new digital divide. While some embrace artificial intelligence, many people will always prefer human experts, even when those experts are wrong.
Unless you live under a rock, you have probably been inundated with recent news about machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems as if AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even when those experts are wrong.
If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.
Should you trust Dr. Robot?
As a case in point, IBM’s attempt to promote its Watson for Oncology programme was a PR disaster. Using one of the world’s most powerful supercomputer systems to recommend the best cancer treatment to doctors seemed like an audacious undertaking straight out of sci-fi movies. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.
But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in its recommendations.
The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind and more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent enough (or blame the unorthodox solutions on system failures). What is more, the machine couldn’t explain why its treatment was plausible, because its machine-learning algorithms were simply too complex to be fully understood by humans. This caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise in oncology.
As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two-thirds of cases.
Photo: IBM
The origins of trust issues: It’s a human thing
Many experts believe that our future society will be built on effective human-machine collaboration. But a lack of trust remains the single most important factor stopping this from happening.
The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.
Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to understand. Interacting with something we don’t understand can cause anxiety and make us feel as if we’re losing control. Many people are also simply unaware of the many instances in which AI works as intended, because this tends to happen in the background. Instead, they are acutely aware of instances where AI goes terribly wrong:
- A Google algorithm that classifies people of color as gorillas.
- A self-driving Uber that runs a red light in San Francisco.
- An automated YouTube ad campaign that displays ads next to anti-semitic and homophobic videos.
- An Amazon Alexa device that starts offering adult content to children.
- A Pokémon Go algorithm that replicates and amplifies racial segregation.
- A Microsoft chatbot that decides to become a white supremacist in less than a day.
- A Tesla operating in Autopilot mode that was involved in a fatal accident.
These unfortunate examples have received a disproportionate amount of media attention, emphasizing the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t.
Photo: Terminator Genisys (movie wallpaper)
The effects of watching Terminator: A new AI divide in society?
Feelings about AI also run deep. But why do some people embrace AI, while others are deeply suspicious of it?
In December 2017, my colleagues and I ran a set of experiments in which we asked people from a range of backgrounds to watch various science-fiction films about AI and fill out questionnaires about their opinions on automation, both before and after watching. We asked them about their general attitudes towards the Internet, their experience with AI technology and their willingness to automate specific tasks in everyday life: which tasks they would be happy to hand over to a hypothetical AI assistant and which they would insist on carrying out themselves.
Surprisingly, it didn’t matter whether films like Terminator, I, Robot, Ex Machina or Her depicted a utopian or dystopian future. Regardless of whether the film portrayed AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI, indicating that they were eager to automate even more everyday tasks. Conversely, skeptics became even more guarded in their attitudes toward AI: they doubted its potential benefits and were more willing to actively resist AI tools used by their friends and families.
The implications of these findings are concerning. On the one hand, they suggest that people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. We believe that this cognitive bias was the main driving force behind the polarising effects we observed in our study.
On the other hand, as AI is reported and represented more and more in popular culture and in the media, these polarised attitudes could contribute to a deeply divided society, split between those who believe in (and consequently benefit from) AI and those who reject it.
More pertinently, given the unrelenting pace of technological progress, refusing to partake in the advantages offered by AI could place a large group of people at a serious disadvantage. Differences in AI trust could lead to differential access to job opportunities and, consequently, to differences in socio-economic status. The resulting clashes between AI followers and AI deniers could prompt governments to step in with heedless regulation that stifles innovation.
Illustration: which-50.com
An exit out of the AI trust crisis
Distrust of AI could become the biggest dividing force in society. If AI is to live up to its full potential, we therefore have to find a way to get people to trust it, particularly when it produces recommendations that radically differ from what we are used to. Fortunately, we already have some ideas about how to improve trust in AI; there is light at the end of the tunnel.
- Experience: One solution may be to provide more hands-on experience with automation apps and other AI applications in everyday situations (like a robot that can fetch you a beer from the fridge). Instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we would recommend making these kinds of innovations accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. This is especially important for the general public, who may not have a very sophisticated understanding of the technology. Evidence also suggests that the more you use technologies like the Internet, the more you trust them.
- Insight: Another solution may be to open the “black box” of machine-learning algorithms and be more transparent about how they work. Companies such as Google, Airbnb and Twitter already release regular transparency reports providing information about government requests and surveillance disclosures. A similar practice for AI systems could help people better understand how algorithmic decisions are made. Providing people with a top-level understanding of machine-learning systems could go a long way towards alleviating algorithmic aversion.
- Control: Lastly, creating a more collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have found that involving people more closely in the AI decision-making process can improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people some control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior and were more likely to use it in the future. A minimal sketch of what such bounded control could look like follows this list.
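To make the idea of bounded control concrete, here is a small, hypothetical sketch in Python. It assumes a model that outputs a numeric forecast and lets the user nudge that forecast within a fixed margin before it is acted on; the function name constrained_forecast and the 10% adjustment cap are illustrative assumptions, not part of the Pennsylvania study or of any particular product.

```python
# Hypothetical sketch: give users bounded control over an algorithmic forecast.
# The function name and the 10% adjustment cap are illustrative assumptions.

def constrained_forecast(model_prediction: float,
                         user_adjustment: float,
                         max_adjustment_ratio: float = 0.10) -> float:
    """Let a user nudge the model's prediction, but only within a fixed margin.

    This mirrors the finding that people trust an algorithm more when they can
    modify its output slightly, while the bound keeps the final value close to
    the (usually more accurate) algorithmic forecast.
    """
    limit = abs(model_prediction) * max_adjustment_ratio
    # Clamp the requested adjustment to the allowed band.
    bounded_adjustment = max(-limit, min(limit, user_adjustment))
    return model_prediction + bounded_adjustment


if __name__ == "__main__":
    prediction = 42.0  # e.g. an algorithm's demand or risk forecast
    print(constrained_forecast(prediction, user_adjustment=10.0))   # capped at 46.2
    print(constrained_forecast(prediction, user_adjustment=-1.5))   # within the band: 40.5
```

The design choice reflects the research finding: the user gets a genuine say, which appears to build trust, while the bound keeps the final decision close to the algorithmic forecast.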
These guidelines (experience, insight, and control) could help to make AI systems more transparent and comprehensible to the individuals affected by their decisions. Our research suggests that people might trust AI more if they had more experience with it and control over how it is used rather than simply being told to follow orders from a mysterious computer system.
People don’t need to understand the intricate inner workings of AI systems, but if they are given at least a bit of information about and control over how they are implemented, they will be more open to accepting AI into their lives.
The photo on top: IBM Watson Health: Oncology & Genomics Solutions
This article is reprinted from the OECD Forum Network.