AI Chatbots Tackle Conservative ‘Conspiracy Theories’

This article was originally published on Liberty Nation - Opinion.

The moment nobody’s been waiting for has finally arrived: An artificially intelligent chatbot can help people diminish their beliefs in conspiracy theories. No, seriously. Researchers discovered that a brief conversation with a large language model (LLM) called DebunkBot, similar to OpenAI’s ChatGPT, could reduce a person’s belief in the discussed conspiracy by about 20% on average. The study’s authors are already trying to conjure up ways to incorporate this technology into the real world – “for a more targeted approach.” One author suggested placing the chatbot in doctors’ offices “to help debunk misapprehensions about vaccinations” or in online forums where “unfounded beliefs” commonly circulate.

Because the term “conspiracy theory” too often seems linked to right-wing viewpoints, the researchers’ “targeted approach” sounds like a propagandistic scheme to persuade people who think outside the progressive box to fall in line with more liberal ideas. After all, this is the same technology shown to favor left-leaning positions on political issues. What’s the real plan here?

Chatbots and Falsehoods

First, to be clear, a chatbot is the AI interface with which users interact, while an LLM is the underlying technology powering it. The terms are often used interchangeably, but they are not the same thing. Chatbots simulate conversation. Some LLMs understand and generate text yet can’t converse with their users on their own. DebunkBot is a chatbot that uses an LLM as its engine.
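To make that distinction concrete, here is a minimal, hypothetical sketch in Python: the “chatbot” is little more than a loop that keeps the conversation history and hands it to an LLM API each turn. The OpenAI client and the gpt-4-turbo model name are assumptions for illustration only; the study’s actual DebunkBot implementation is not reproduced here.

```python
# Hypothetical illustration: the "chatbot" is just an interface loop;
# the LLM behind it does the actual text generation.
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chatbot_reply(history: list[dict], user_message: str) -> str:
    """Append the user's message and ask the LLM for the next turn."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4-turbo",   # the LLM "engine"; model choice is an assumption
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


# The "chatbot" part: a simple interactive loop wrapped around the LLM call.
if __name__ == "__main__":
    history = [{"role": "system", "content": "You are a polite, factual assistant."}]
    while True:
        user_message = input("You: ")
        if not user_message:
            break
        print("Bot:", chatbot_reply(history, user_message))
```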

The study, published in the journal Science, included 2,000 participants. Each talked with the LLM GPT-4 Turbo for three rounds. Participants began by telling the chatbot about a conspiracy theory they believed, explained why, and provided some evidence that led them to think the theory was true. In a friendly and personalized manner, the chatbot provided “facts” opposing the participants’ evidence.
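As a rough illustration of that setup – not the study’s published code – the three-round exchange could be scripted around the sketch above: the participant’s stated theory and evidence become the opening context, and the model replies with tailored counterevidence for a fixed number of turns. In the actual study the participant typed each reply; here a fixed prompt stands in.

```python
# Hypothetical sketch of the three-round protocol described above.
# Builds on the chatbot_reply() helper from the previous example.

def run_debunk_session(theory: str, evidence: str, rounds: int = 3) -> list[str]:
    """Hold a fixed number of rounds in which the LLM rebuts the stated theory."""
    history = [{
        "role": "system",
        "content": (
            "You are a friendly assistant. The user believes this theory: "
            f"{theory}. Their stated evidence: {evidence}. "
            "Respond politely with specific factual counterevidence."
        ),
    }]
    replies = []
    prompt = "Here is why I believe this. What do you make of my evidence?"
    for _ in range(rounds):
        reply = chatbot_reply(history, prompt)
        replies.append(reply)
        # In the study, the participant wrote this; a canned follow-up stands in.
        prompt = "I'm still not fully convinced. Can you address my strongest point?"
    return replies
```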

Some of the conspiracy theories participants believed centered on JFK’s assassination, how the Twin Towers fell on 9/11, and, of course, whether the last US election was stolen.

The authors of the study claimed:

“Widespread belief in unsubstantiated conspiracy theories is a major source of public concern and a focus of scholarly research. Despite often being quite implausible, many such conspiracies are widely believed. Prominent psychological theories propose that many people want to adopt conspiracy theories (to satisfy underlying psychic ‘needs’ or motivations), and thus, believers cannot be convinced to abandon these unfounded and implausible beliefs using facts and counterevidence.”

“Underlying psychic needs”? Be that as it may, probably not every person who believes a conspiracy theory does so for psychological reasons. There appears to be growing distrust of government institutions worldwide; perhaps that plays a part. Another reason might be that algorithms trap people in bubbles, filtering out stories and posts that contradict users’ preferences and reinforcing their existing beliefs. But let’s skip to a different issue: Who decides what is and isn’t a conspiracy theory?

Definitions aside, all this neglects a significant factor: How can chatbots reliably persuade people to latch onto specific ideas, such as vaccination advice, when LLMs are notorious for giving false or misleading information? They do it often enough that there’s even a term for it: hallucinations.

Hallucinations

According to a paper posted on the preprint server arXiv, AI hallucinations are not just “occasional errors but an inevitable feature of these systems” – one that can never be fully eliminated, not even “through architectural improvements [or] dataset enhancements.” The authors explained:

“Hallucinations in large language models (LLMs) occur when the models generate content that is false, fabricated, or inconsistent with their training data. These happen when the model, in an attempt to produce coherent responses, fills in gaps with plausible-sounding but incorrect information. Hallucinations can range from subtle inaccuracies to completely fictional assertions, often presented with high confidence. It is important to note that LLM hallucinations can occur even with the best training [and] fine-tuning.”

Do the researchers behind the debunking study know LLMs hallucinate? They must. They even hired a “professional fact-checker” to review the chatbot’s accuracy. Of a sample of 128 claims it made, 99.2% were rated true and 0.8% were “deemed misleading.” Wait – only 128 claims? That seems odd, considering more than 2,000 people participated in the study. Let’s assume there’s a good reason. Of the claims reviewed for accuracy, none were “entirely” false. Okay. Maybe the chatbot had a decent first run. But what happens if this technology becomes widely used in an attempt to change people’s opinions? Would somebody always be monitoring the chatbot’s claims?

Suppose the plan is to alter people’s thinking, whatever the reasons, and the chatbot has a lasting influence 50% of the time, produces questionable responses 5% of the time, and gets mixed or no results the rest of the time. The people orchestrating such an endeavor would presumably consider those winning odds. But the bigger problem here is an ugly reality many people would prefer to ignore: Whether the intentions are moral, evil, capitalistic, altruistic, or for the “greater good,” when people set out to change others’ opinions, their real goal is usually to modify behavior. Big Tech, aside from censoring people, thrives on predicting and altering behavior. Why would AI be any different?

Behavior Modification

In 2012, the scientific journal Nature published an article titled “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” Facebook researchers authored the article, which detailed a study conducted around the 2010 US congressional midterm elections. “[T]he researchers experimentally manipulated the social and informational content of voting-related messages in the news feeds of nearly 61 million Facebook users,” wrote Shoshana Zuboff in The Age of Surveillance Capitalism. Because of the manipulated messages, an estimated 340,000 additional voters cast ballots in the 2010 midterm elections.

Zuboff highlighted a statement written by a former Facebook product manager:

“Experiments are run on every user at some point in their tenure on the site . . . The fundamental purpose of most people at Facebook working on data is to influence and alter people’s mood and behavior. They are doing it all the time to make you like stories more, click on more ads, to spend more time on the site. This is just how a website works, everyone does this and everyone knows that everyone does this.”

What will we learn about AI and the companies deploying it after the technology has been around for ten or twelve years? Many websites have already replaced their online customer support staff with chatbots. AI is a standard feature on many new cellphone models. Some researchers from the debunking study have already “considered buying ads that pop up when someone searches a keyword related to a common conspiracy theory,” The New York Times noted. How long before LLMs are spouting messages at bus stops, from tablets at cash registers, in drive-thrus, and so on?

Given that LLMs lean politically toward liberal stances, it is not a far leap to imagine more chatbots dishing out left-wing ideas, attempting to strip people of right-wing views, and influencing them to conform to a progressive mold.
