
For the most part, AI is expensive garbage


This article was originally published on The Expose. You can read the original article HERE


If ever there was a case of “garbage in, garbage out,” then AI is it. And, ultimately, it has all been driven by the objective of censoring information that does not fit the politically correct narrative.




Why AI ‘misinformation’ algorithms and research are mostly expensive garbage

By Professor Norman Fenton

The Hunter Biden laptop story is just one of many stories which were deemed by the corporate media (and most academics) to be “misinformation” but which were subsequently revealed as true. Indeed Mark Zuckerberg has now admitted that Facebook (Meta), along with the other big tech companies, were pressured into censoring the story before the 2020 US election and also subsequently pressured by the Biden/Harris administration to censor stories about covid which were wrongly classified as misinformation.

The problem is that the same kind of people who decided what was and was not misinformation (generally people on the political left) were also the ones who were funded to produce artificial intelligence (“AI”) algorithms to “learn”:

  1. which people were “spreaders of misinformation”; and
  2. what new claims were “misinformation.”

Between 2016 and 2022, I attended many research seminars in the UK on using AI and Machine Learning to “combat misinformation and disinformation.” From 2020, the example of Hunter Biden’s laptop was often used as a key “learning” example, so algorithms classified it as “misinformation” with subclassifications like “Russian propaganda” or “conspiracy theory.”

Moreover, every presentation I attended invariably started with, and was dominated by, examples of “misinformation” claimed to be based on “Trump lies,” such as those among what the Washington Post claimed were the “30,573 false or misleading claims made by Trump over 4 years.” But many of these supposedly false or misleading claims were already known to be true to anybody outside the Guardian/New York Times/Washington Post reading bubble. For example, they claimed that Trump said “Neo-Nazis and white supremacists were very fine people” and that anybody denying it was pushing misinformation, whereas even the far left-leaning Snopes had debunked that in 2017. Similarly, they claimed that “evidence that Biden had dementia” or that “Biden liked to smell the hair of young girls” was misinformation despite multiple videos showing exactly that – so, don’t believe your lying eyes. Indeed, as recently as one week before Biden’s dementia could no longer be hidden during his live Presidential debate performance, the corporate media were adamant that such videos were misinformation “cheap tricks.”

But the academics presenting these Trump, Biden and other political examples ridiculed anybody who dared question the reliability of the self-appointed oracles who determined what was and was not misinformation. At one major conference, held on Zoom, I posted in the chat: “Is anybody who does not hate Trump welcome in this meeting?” The answer was “No. Trump supporters are not welcome and if you are one you should leave now.” Sadly, most academics do not believe in freedom of thought, let alone freedom of expression, when it comes to any views that challenge the “progressive” narrative on anything.

In addition to the Biden and Trump-related “misinformation” stories which turned out to be true, there were also multiple examples of covid-related stories (such as those claiming very low fatality rates and lack of effectiveness and safety of the vaccines) classified as misinformation that also turned out to be true. In all these cases, anybody pushing these stories was classified as a “spreader of misinformation,” “conspiracy theorist,” etc. And it is these kinds of assumptions which drove how the AI “misinformation” algorithms developed and implemented by organisations like Facebook and Twitter worked.

Let me give a simplified example. The algorithms generally start with a database of statements which are pre-classified as either “misinformation” (even though many of these turned out to be true) or “not misinformation” (even though many of these turned out to be false). For example, the following were classified as misinformation:

  • “Hunter Biden left a laptop with evidence of his criminal behaviour in a repair shop.”
  • “The covid vaccines can cause serious injury and death.”

The converse of any statement classified as “misinformation” was classified as “not misinformation.”

A subset of these statements is used to “train” the algorithm and the rest to “test” it.

So, suppose the laptop statement is one of those used to train the algorithm and the vaccine statement is one of those used to test the algorithm. Then, because the laptop statement is classified as misinformation, the algorithm learns that people who repost or like a tweet with the laptop statement are “misinformation spreaders.” Based on other posts these people make, the algorithm might additionally classify them as, for example, “far-right.” The algorithm is likely to find that some people already classified as “far-right” or “misinformation spreader” – or people they are connected to – also post a statement like: “The covid vaccines can cause serious injury and death.” In that case, the algorithm will have “learnt” that this statement is most likely misinformation. And, hey presto, since it gives the “correct” classification to the “test” statement, the algorithm is “validated.”

Moreover, when presented with a new test statement such as, “The covid vaccines do not stop infection from covid” (which was also pre-classified as “misinformation”) the algorithm will also “correctly learn” that this is “misinformation” because it has already “learnt” that the statement, “The covid vaccines can cause serious injury and death” is misinformation and that people who claimed the latter statement – or people connected with them – also claimed the former statement.
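The guilt-by-association “learning” described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the circular logic the article describes – all statement names, sharers and data are invented for the example, and no real platform’s system is being reproduced here:

```python
# Hypothetical sketch of the guilt-by-association "learning" described above.
# All statements, users and labels are illustrative assumptions.

# Pre-classified training statement (label assigned by the fact-checkers,
# regardless of whether the statement is actually true).
train_labels = {"laptop": "misinformation"}

# Who shared which statement (a toy stand-in for a social graph).
shares = {
    "laptop": {"alice", "bob"},
    "vaccine_injury": {"bob", "carol"},    # held-out "test" statement
    "vaccine_infection": {"carol"},        # new, unseen statement
}

def classify(statement, train_labels, shares, spreaders):
    """Label a statement 'misinformation' if any of its sharers is
    already flagged as a 'spreader'; truth is never consulted."""
    if statement in train_labels:
        return train_labels[statement]
    if shares[statement] & spreaders:
        return "misinformation"
    return "not misinformation"

# Step 1: anyone who shared a training statement labelled "misinformation"
# becomes a "misinformation spreader".
spreaders = set()
for stmt, label in train_labels.items():
    if label == "misinformation":
        spreaders |= shares[stmt]

# Step 2: the test statement is shared by bob, a flagged spreader, so it
# inherits the "misinformation" label -- and the algorithm is "validated"
# because this matches its pre-assigned classification.
print(classify("vaccine_injury", train_labels, shares, spreaders))

# Step 3: flag its sharers in turn; the new statement then inherits the
# label via carol, again with no assessment of whether it is true.
spreaders |= shares["vaccine_injury"]
print(classify("vaccine_infection", train_labels, shares, spreaders))
```

Note that nothing in this loop ever evaluates a statement’s truth: the labels simply propagate outward from the initial, human-chosen classifications through the sharing graph, which is precisely why the initial classifications matter so much.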

The way I have outlined how the AI process is designed to detect “misinformation” is also the way that “world-leading misinformation experts” set up their experiments to “profile” the “personality type” that is susceptible to misinformation. The same methods are also now used to profile and monitor people whom the academic “experts” claim are “far-right” or racist.

Hence, an enormous amount of research funding was (and still is) spent on developing “clever” algorithms which simply censor the truth online or promote lies. Much of the funding for this research is justified on the grounds that “misinformation” is now one of the greatest threats to international security. Indeed, in January 2024 the World Economic Forum declared that “misinformation and disinformation were the biggest short term global risks.” European Commission President Ursula von der Leyen also declared that “misinformation and disinformation are greater threats to the global business community than war and climate change.” In the UK alone, the Government has provided many hundreds of millions of pounds of funding to numerous University research labs working on misinformation. In March 2024 the Turing Institute alone, which has several dedicated teams working on this and closely related areas, was awarded £100 million of extra Government funding – it had already received some £700 million since its inception in 2015. Somewhat ironically, the UK HM Government 2023 National Risk Register includes “harmful misinformation and disinformation” as a chronic risk – yet the Government continues to prioritise research funding in AI to combat this very risk!

As Mike Benz has made clear in his recent work and interviews (backed up with detailed evidence), almost all of the funding for the Universities or research institutes worldwide doing this kind of work, along with the “fact checkers” that use it, comes from the US State Dept, NATO and the British Foreign Office who, in the wake of the Brexit vote and Trump election in 2016, were determined to stop the rise of “populism” everywhere. It is this objective which has driven the mad AI race to censor the internet. Look at this video in which Mike Benz walks us through an event that took place in 2019:

It was hosted by the Atlantic Council, a NATO front organisation, to train journalists from mainstream organisations all around the world on how to “counter misinformation.” Note how they make it clear that, for them, “misinformation” includes “malinformation,” which they define as information that is true but which might harm their own narrative. They explain how to muzzle such “malinformation,” especially from the (then) President Trump’s social media posts in advance of the 2020 election. Despite denials that any of this happened (indeed, any claims that it did were themselves classified as misinformation), the journalists involved subsequently boasted very publicly not only that they did it but that it prevented Trump’s re-election in 2020.

Update: Two highly relevant articles from colleagues:

About the Author

Norman Fenton is Professor Emeritus of Risk Information Management at Queen Mary University of London. He is also a Director of Agena, a company that specialises in risk management for critical systems. He is a mathematician by training whose current focus is on critical decision-making and, in particular, on quantifying uncertainty using causal, probabilistic models that combine data and knowledge (Bayesian networks). The approach can be summarised as “smart data rather than big data.”



