Roughly half of Americans subscribe to some sort of conspiracy theory, and their fellow humans haven’t had much success coaxing them out of their rabbit holes.
Perhaps they could learn a thing or two from an AI-powered chatbot.
In a series of experiments, an AI chatbot was able to make more than a quarter of people feel uncertain about their most cherished conspiracy belief. The average conversation lasted less than 8½ minutes.
The results were reported Thursday in the journal Science.
The failure of facts to convince people that we really did land on the moon, that Al Qaeda really was responsible for the 9/11 attacks, and that President Biden really did win the 2020 election, among other things, has fueled anxiety about a post-truth era that favors personal beliefs over objective evidence.
“People who believe in conspiracy theories rarely, if ever, change their mind,” said study leader Thomas Costello, a psychologist at American University who investigates political and social beliefs. “In some sense, it feels better to believe that there’s a secret society controlling everything than believing that entropy and chaos rule.”
But the study suggests the problem isn’t with the persuasive power of facts — it’s our inability to marshal the right combination of facts to counter someone’s specific reasons for skepticism.
Costello and his colleagues attributed the chatbot’s success to the detailed, customized arguments it prepared for each of the 2,190 study participants it engaged with.
For instance, a person who doubted that the twin towers could have been brought down by airplanes because jet fuel doesn’t burn hot enough to melt steel was informed that the fuel reaches temperatures as high as 1,832 degrees Fahrenheit, enough for steel to lose its structural integrity and trigger a collapse.
A person who didn’t believe Lee Harvey Oswald had the skills to assassinate President John F. Kennedy was told that Oswald had been a sharpshooter in the Marines and wouldn’t have had much trouble firing an accurate shot from about 90 yards away.
And a person who believed Princess Diana was killed so Prince Charles could remarry was reminded of the 8-year gap between Diana’s fatal car accident and the future king’s second wedding, undermining the argument that the two events were related.
The findings suggest that “any type of belief that people hold that is not based in good evidence could be shifted,” said study co-author Gordon Pennycook, a cognitive psychologist at Cornell University.
“It’s really validating to know that evidence does matter,” he said.
The researchers began by asking Americans to rate the degree to which they subscribed to 15 common conspiracy theories, including that the virus responsible for COVID-19 was created by the Chinese government and that the U.S. military has been hiding evidence of a UFO landing in Roswell, N.M. After performing an unrelated task, participants were asked to describe a conspiracy theory they found particularly compelling and explain why they believed it.
The request prompted 72% of them to share their feelings about a conspiracy theory. Among this group, 60% were randomly assigned to discuss it with the large language model GPT-4 Turbo.
The conversations began with the chatbot summarizing the human’s description of the conspiracy theory. Then the human rated the degree to which he or she agreed with the summary on a scale from 0 to 100.
From there, the chatbot set about making the case that there was nothing fishy going on. To make sure it wasn’t stretching the truth in order to be more persuasive, the researchers hired a professional fact-checker to evaluate 128 of the bot’s claims about a variety of conspiracies. One was judged to be misleading, and the rest were true.
The bot also turned up the charm. In one case, it praised a participant for “critically examining historical events” while reminding them that “it’s vital to distinguish between what could theoretically be possible and what is supported by evidence.”
Each conversation included three rounds of evidence from the chatbot, followed by a response from the human. Afterward, the participants revisited their summarized conspiracy statements. Their ratings of agreement dropped by an average of 21%.
In 27% of cases, the drop was large enough for the researchers to say the person “became uncertain of their conspiracy belief.”
Meanwhile, the 40% of participants who served as controls also got summaries of their preferred conspiracy theory and scored them on the 0-to-100 scale. Then they talked with the chatbot about neutral topics, like the U.S. medical system or the relative merits of cats and dogs. When these people were asked to reconsider their conspiracy theory summaries, their ratings fell by just 1%, on average.
The researchers checked in with people 10 days and 2 months later to see if the effects had worn off. They hadn’t.
The team repeated the experiment with another group and asked people about their conspiracy-theory beliefs in a more roundabout way. This time, discussing their chosen theory with the bot prompted a 19.4% decrease in their rating, compared with a 2.9% decrease for those who chatted about something else.
The conversations “really fundamentally changed people’s minds,” said co-author David Rand, a computational social scientist at MIT who studies how people make decisions.
“The effect didn’t vary significantly based on which conspiracy was named and discussed,” Rand said. “It worked for classic conspiracies like the JFK assassination and moon landing hoaxes and Illuminati, stuff like that. And it also worked for modern, more politicized conspiracies like those involving 2020 election fraud or COVID-19.”
What’s more, being challenged by the AI chatbot about one conspiracy theory prompted people to become more skeptical about others. After their conversations, their affinity for the 15 common theories fell significantly more than it did for people in the control group.
“It was making people less generally conspiratorial,” Rand said. “It also increased their intentions to do things like ignore or block social media accounts sharing conspiracies, or, you know, argue with people who are espousing those conspiracy theories.”
In another encouraging sign, the bot was unable to talk people out of beliefs in conspiracies that were actually true, such as the CIA’s covert MK-Ultra project that used unwitting subjects to test whether drugs, torture or brainwashing could enhance interrogations. In some cases, the chatbot discussions made people believe these conspiracies even more.
“It wasn’t like mind control, just, you know, making people do whatever it wants,” Rand said. “It was essentially following facts.”
Researchers who weren’t involved in the study called it a welcome advance.
In an essay that accompanied the study, psychologist Bence Bago of Tilburg University in the Netherlands and cognitive psychologist Jean-Francois Bonnefon of the Toulouse School of Economics in France said the experiments show that “a scalable intervention to recalibrate misinformed beliefs may be within reach.”
But they also raised several concerns, including whether it would work on a conspiracy theory that’s so new there aren’t many facts for an AI bot to draw from.
The researchers took a first pass at testing this the week after the July 13 assassination attempt on former President Trump. After helping the AI program find credible information about the attack, they found that talking with the chatbot reduced people’s belief in assassination-related conspiracy theories by 6 or 7 percentage points, which Costello called “a noticeable effect.”
Bago and Bonnefon also questioned whether conspiracy theorists would be willing to engage with a bot. Rand said he didn’t think that would be an insurmountable problem.
“One thing that’s an advantage here is that conspiracy theorists often aren’t embarrassed about their beliefs,” he said. “You could imagine just going to conspiracy forums and inviting people to do their own research by talking to the chatbot.”
Rand also suggested buying ads on search engines so that when someone types a query about, say, the “deep state,” they’ll see an invitation to discuss it with an AI chatbot.
Robbie Sutton, a social psychologist at the University of Kent in England who studies why people embrace conspiracy beliefs, called the new work “an important step forward.” But he noted that most people in the study persisted in their beliefs despite receiving “high-quality, factual rebuttals” from a “highly competent and respectful chatbot.”
“Seen this way, there is more resistance than there is open-mindedness,” he said.
Sutton added that the findings don’t shed much light on what draws people to conspiracy theories in the first place.
“Interventions like this are essentially an ambulance at the bottom of the cliff,” he said. “We need to focus more of our efforts on what happens at the top of the cliff.”