Recent research published in Science shows that fact-based conversations with AI chatbots can help people who believe in conspiracy theories abandon those beliefs. The study, led by Thomas Costello and his team at the Massachusetts Institute of Technology, offers hope for tackling conspiracy theories, some of which erode trust in public institutions and scientific evidence.
While some conspiracy theories, like the belief that Finland does not exist, may seem harmless, others can have serious consequences: discouraging vaccination, for instance, or stalling action against climate change. In extreme cases, belief in conspiracy theories has even been linked to loss of life.
Conspiracy theories are notoriously difficult to debunk once individuals have embraced them. This is due to various factors, including the sense of community that conspiracy theorists often find within their circles and the extensive research they have conducted to support their beliefs. When individuals no longer trust scientific evidence or sources outside their community, changing their beliefs becomes a formidable challenge.
The rise of generative AI has raised concerns about the spread of false information, as these systems can easily create convincing fake content. Even when used with good intentions, AI systems can still produce incorrect information, and they may carry biases that promote negative beliefs about certain groups of people.
Given these concerns, it is striking that conversations with AI chatbots, which are often criticised for producing misinformation, can actually lead some people to abandon their conspiracy theories, with the change in beliefs lasting for at least two months.
However, this research also presents a dilemma. While it is encouraging that AI chatbots can shift conspiracy theorists' beliefs, the same persuasive power raises questions about their influence on true beliefs. If chatbots are effective at dispelling sticky, anti-scientific beliefs, what might they do to beliefs grounded in accurate information?
The study involved over 2,000 participants across two experiments. After describing the conspiracy theory they believed in, every participant conversed with an AI chatbot. The "treatment" group, comprising 60% of the participants, talked with a chatbot personalised to their specific conspiracy theory and their reasons for believing it; over three rounds of conversation, this chatbot used factual arguments to persuade them that their beliefs were incorrect. The remaining participants had a general discussion with a chatbot.
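The treatment procedure described above can be sketched as a simple conversation loop. This is a hypothetical illustration only: `generate_counterargument` is a stand-in for a call to a large language model, not part of any real API, and the example theory and reason are invented.

```python
# Hypothetical sketch of the three-round "treatment" dialogue described
# in the study. generate_counterargument is a stand-in for an LLM call;
# it is not a real API.

def generate_counterargument(theory: str, reason: str, round_no: int) -> str:
    """Stand-in for a model call that returns a factual rebuttal
    tailored to the participant's stated theory and reason."""
    return (f"Round {round_no}: factual evidence addressing the claim "
            f"'{theory}' and the stated reason '{reason}'")

def run_treatment_dialogue(theory: str, reason: str, rounds: int = 3) -> list:
    """Run the personalised, fact-based dialogue: one tailored
    counterargument per round, collected into a transcript."""
    transcript = []
    for round_no in range(1, rounds + 1):
        transcript.append(generate_counterargument(theory, reason, round_no))
    return transcript

# Invented example input, for illustration only.
dialogue = run_treatment_dialogue(
    theory="the Moon landings were staged",
    reason="the flag appears to wave in a vacuum",
)
print(len(dialogue))  # three rounds of tailored counterargument
```

The key design point, per the study, is personalisation: each round responds to the participant's own theory and stated reasons rather than delivering generic debunking.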
The researchers found that, on average, belief in the chosen conspiracy theory dropped by roughly 20% in the treatment group following the conversation. When the researchers followed up two months later, most participants still showed diminished belief. The chatbots' factual claims were also assessed and found to be largely accurate.
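Belief in the study was rated on a 0-to-100 scale before and after the conversation, and an average relative reduction is one way to summarise such results. A minimal sketch of that arithmetic follows; the ratings below are made up for illustration and are not the study's data.

```python
# Illustrative only: invented pre/post belief ratings on a 0-100 scale.
# These are NOT the study's actual data.
pre = [80, 90, 70, 100, 60]
post = [60, 75, 55, 85, 50]

# Mean relative reduction in belief across participants:
# (before - after) / before, averaged.
reduction = sum((before - after) / before
                for before, after in zip(pre, post)) / len(pre)
print(f"{reduction:.0%}")  # roughly a 20% average reduction
```

Note that an average reduction of this kind can reflect large drops for a minority of participants and little change for the rest, which matches the study's finding that the effect was concentrated in some believers.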
This research demonstrates that a three-round conversation with an AI chatbot can persuade some individuals to abandon their conspiracy theories. Chatbots show promise in addressing two challenges posed by false beliefs. Firstly, as computers, they are often perceived as unbiased and trustworthy, especially by people who have lost faith in public institutions. Secondly, chatbots can construct arguments tailored to the individual, which is more persuasive than simply presenting facts.
However, chatbots are not a panacea. The study revealed that they were more effective for individuals who did not have strong personal reasons for believing in conspiracy theories. Therefore, they may not be as helpful for those whose beliefs are deeply rooted in their community.
It is important to approach chatbots with caution when fact-checking information. While they can be persuasive in promoting accurate information, they can also perpetuate misinformation or conspiracy beliefs if their underlying data is flawed or biased. Chatbots may also mirror biased prompts, much as search engines return results that match biased search terms, reinforcing individuals' existing false beliefs. Ultimately, chatbots are tools, and their effectiveness depends on the skill and intentions of their creators and users. Conspiracy theories originate from people, and it will be people who ultimately put an end to them.