OpenAI Whistleblower Warns of 70% Chance AI Could Destroy Humanity
This article was originally published on Breitbart. You can read the original article HERE

A former OpenAI governance researcher has made a chilling prediction: the odds of AI either destroying or catastrophically harming humankind sit at 70 percent.

In a recent interview with the New York Times, Daniel Kokotajlo, a former OpenAI governance researcher and signatory of an open letter alleging that employees are being silenced when they try to raise safety concerns, accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are enthralled with the technology's possibilities. “OpenAI is really excited about building AGI,” Kokotajlo stated, “and they are recklessly racing to be the first there.”

OpenAI logo seen on screen with ChatGPT website displayed on mobile seen in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

OpenAI boss Sam Altman (Kevin Dietsch/Getty)

Kokotajlo’s most alarming claim is that the chance AI will wreck humanity is around 70 percent—odds that would be unacceptable for any major life decision—yet OpenAI and its peers are barreling ahead anyway. The term “p(doom),” which refers to the probability that AI will usher in doom for humankind, is a subject of constant controversy in the machine learning world.

After joining OpenAI in 2022 and being asked to forecast the technology’s progress, the 31-year-old became convinced not only that the industry would achieve AGI by 2027 but also that there was a great probability it would catastrophically harm or even destroy humanity. Kokotajlo and his colleagues, including former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the “Godfather of AI” who left Google last year over similar concerns, are asserting their “right to warn” the public about the risks posed by AI.

Kokotajlo became so convinced of the massive risks AI posed to humanity that he personally urged OpenAI CEO Sam Altman to “pivot to safety” and spend more time implementing guardrails to rein in the technology rather than continue making it smarter. Although Altman seemed to agree with him at the time, Kokotajlo felt it was merely lip service.

Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI. “The world isn’t ready, and we aren’t ready,” he wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”

Read more at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

