A former OpenAI governance researcher has made a chilling prediction: the odds of AI either destroying or catastrophically harming humankind sit at 70 percent.
In a recent interview with the New York Times, Daniel Kokotajlo, a former OpenAI governance researcher and signatory of an open letter alleging that employees are being silenced from raising safety concerns, accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are enthralled with its possibilities. “OpenAI is really excited about building AGI,” Kokotajlo stated, “and they are recklessly racing to be the first there.”
Kokotajlo’s most alarming claim was that the chance AI will wreck humanity is around 70 percent: odds that would be unacceptable for any major life event, yet OpenAI and its peers are barreling ahead anyway. The term “p(doom),” which refers to the probability that AI will usher in doom for humankind, is a subject of constant controversy in the machine learning world.
After joining OpenAI in 2022 and being asked to forecast the technology’s progress, the 31-year-old became convinced not only that the industry would achieve AGI by 2027 but also that there was a high probability it would catastrophically harm or even destroy humanity. Kokotajlo and his fellow signatories, including current and former employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the “Godfather of AI” who left Google last year over similar concerns, are asserting their “right to warn” the public about the risks posed by AI.
Kokotajlo became so convinced of the massive risks AI posed to humanity that he personally urged OpenAI CEO Sam Altman to “pivot to safety” and spend more time implementing guardrails to rein in the technology rather than continuing to make it smarter. Although Altman seemed to agree with him at the time, Kokotajlo felt it was merely lip service.
Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI. “The world isn’t ready, and we aren’t ready,” he wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
Read more at the New York Times.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.