This article was originally published on Technocracy News.
This wild-west approach to developing software that will change the world is a recipe for disaster. So far, Altman has smooth-talked around the issue to assure people he is trustworthy. I don’t buy it. ⁃ Patrick Wood, TN Editor.
OpenAI, the developer of the popular AI assistant ChatGPT, has seen a significant exodus of its artificial general intelligence (AGI) safety researchers, according to a former employee.
Fortune reports that Daniel Kokotajlo, a former OpenAI governance researcher, recently revealed that nearly half of the company’s staff focused on the long-term risks of superpowerful AI have left in the past several months. The departures include prominent researchers such as Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and cofounder John Schulman. These resignations followed the high-profile exits in May of chief scientist Ilya Sutskever and researcher Jan Leike, who co-headed the company’s “superalignment” team.
OpenAI, founded with the mission to develop AGI in a way that “benefits all of humanity,” has long employed a significant number of researchers dedicated to “AGI safety” – techniques for ensuring that future AGI systems do not pose catastrophic or existential dangers. However, Kokotajlo suggests that the company’s focus has been shifting towards product development and commercialization, with less emphasis on research to ensure the safe development of AGI.
Kokotajlo, who joined OpenAI in 2022 and quit in April 2024, stated that the exodus has been gradual, with the number of AGI safety staff dropping from around 30 to just 16. He attributed the departures to individuals “giving up” as OpenAI continues to prioritize product development over safety research.
The changing culture at OpenAI became apparent to Kokotajlo even before the boardroom drama of November 2023, when CEO Sam Altman was briefly fired and then rehired, and three board members focused on AGI safety were removed. Kokotajlo felt that this incident sealed the deal, with no turning back, and that Altman and president Greg Brockman had been consolidating power ever since.
While some AI research leaders consider the AI safety community’s focus on AGI’s potential threat to humanity to be overhyped, Kokotajlo expressed disappointment that OpenAI came out against California’s SB 1047, a bill aimed at putting guardrails on the development and use of the most powerful AI models.
Despite the departures, Kokotajlo acknowledged that some remaining employees have moved to other teams where they can continue working on similar projects. The company has also established a new safety and security committee and appointed Carnegie Mellon University professor Zico Kolter to its board of directors.
As the race to develop AGI intensifies among major AI companies, Kokotajlo warned against groupthink, in which majority opinion and internal incentives lead companies to conclude that their winning the race is inherently good for humanity.