Chatbots risk increasing polarization

Seeking information through chatbots can increase the risk of polarization, according to AI researchers. Archive image. Photo: Janerik Henriksson/TT

Chatbots and AI often give us the answers they think we want. In this way, they can reinforce more extreme ideas and contribute to increased polarization, warn researchers at Johns Hopkins University.

Anyone who asks a chatbot a question and assumes it will provide unbiased, comprehensive information may be mistaken. According to AI researchers, the chatbot risks delivering only the opinions you already hold and providing answers that confirm your existing view.

“Many people think that just because they read a text created by AI, they will get unbiased and fact-based answers,” said Ziang Xiao of Johns Hopkins University, the study’s lead author, who researches AI-human interaction.

Chatbot or search engine

The researchers first had 272 study participants write down their thoughts and opinions on a number of topics, such as healthcare and student loans. The participants were then tasked with finding more information about the same topics, either via a standard search engine or through a chatbot.

Afterwards, the participants wrote another text on the same topic and also answered questions about it. The researchers concluded that the participants who used the chatbot before writing the second text became more entrenched in their reasoning and their positions than those who searched for information through a search engine.

Confirms opinions

According to the study, which was presented at a scientific conference, those who sought information via a chatbot produced answers and wording that stayed closer to their original opinions, rather than broadening their reasoning and considering other perspectives. The chatbot users also reacted more strongly than the search engine users to opinions and questions that challenged their views.

According to the researchers, this is because chatbot users type specific questions such as “How much does public healthcare cost?” rather than just entering keywords. The chatbot then answers exactly that question, reporting how much healthcare costs, but does not mention its benefits.

“There is always a risk that the information we seek will be an echo of what we already think. With AI and chatbots, that effect will be even greater,” says Ziang Xiao.
