ChatGPT, OpenAI's artificial intelligence program, displayed on a PC screen.

OpenAI Employees: AI Companies Dodge Oversight, Threaten Humanity

By Dom Nicastro
Former and current OpenAI employees and others want better governance and accountability for companies like OpenAI.

The Gist

  • AI giants evade oversight. Leading AI companies avoid effective oversight due to strong financial incentives, according to former and current employees.
  • Weak AI accountability. The lack of sufficient accountability and regulatory structures poses serious risks, including potential human extinction.
  • Experts call for AI oversight. Experts call for increased guidance and oversight from the scientific community, policymakers and the public to mitigate these risks.

Leading artificial intelligence companies avoid effective oversight because of financial incentives and operate without sufficient accountability from government or industry standards, former and current employees said in a letter published today.

In other words, they get away with a lot — and that's not great news for a technology that comes with risks including human extinction.

"We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public," the group wrote in the letter titled, "A Right to Warn about Advanced Artificial Intelligence." "However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."

The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. It was also endorsed by AI powerhouses Yoshua Bengio, Geoffrey Hinton and Stuart Russell.

AI Poses Serious Risks

While the group believes in the potential of AI technology to deliver unprecedented benefits to humanity, it says risks include:

  • Further entrenchment of existing inequalities
  • Manipulation and misinformation
  • Loss of control of autonomous AI systems potentially resulting in human extinction

"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the group wrote. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

The list of employees who shared their names (others were listed anonymously) includes: Jacob Hilton, formerly OpenAI; Daniel Kokotajlo, formerly OpenAI; Ramana Kumar, formerly Google DeepMind; Neel Nanda, currently Google DeepMind, formerly Anthropic; William Saunders, formerly OpenAI; Carroll Wainwright, formerly OpenAI; and Daniel Ziegler, formerly OpenAI.

This isn't the first time Hilton has spoken publicly about his former company, and he was vocal on X again today.

Kokotajlo, who worked on OpenAI's governance team, quit last month and was vocal about it in a public forum as well. He said he "quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI (artificial general intelligence)." Saunders, also on the governance team, departed along with Kokotajlo.

Wainwright's time at OpenAI dates back at least to the debut of ChatGPT. Ziegler, according to his LinkedIn profile, was with OpenAI from 2018 to 2021.

Related Article: Musk, Wozniak and Thousands of Others: 'Pause Giant AI Experiments'

AI Companies Won't Be Transparent

Leading AI companies won't give up critical information surrounding the development of AI technologies on their own, according to this group. In the absence of effective government oversight, current and former employees are among the few who can hold these companies accountable to the public.

"Yet," the group wrote, "broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."

These employees fear various forms of retaliation, given the history of such cases across the industry.

Related Article: OpenAI Names Sam Altman CEO 5 Days After It Fired Him

Promoting a Culture of Open Criticism and Risk Reporting

Here's the gist of what this group calls on leading AI companies to do:

AI companies should not:

  • Enter into or enforce any agreement prohibiting "disparagement" or criticism of the company for risk-related concerns.
  • Retaliate against individuals for risk-related criticism by hindering any vested economic benefit.
  • Retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. "Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public."

AI companies should: 

  • Facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, regulators and an appropriate independent organization with relevant expertise
  • Support a culture of open criticism by allowing current and former employees to raise risk-related concerns about its technologies to the public, the company's board, regulators and an appropriate independent organization with relevant expertise
  • Ensure trade secrets and other intellectual property interests are appropriately protected.

OpenAI had no public response to the group's letter. In its most recent tweet, it shared a post about deceptive uses of AI.


"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content," the company wrote May 30. "That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."

About the Author
Dom Nicastro

Dom Nicastro is editor-in-chief of CMSWire and an award-winning journalist with a passion for technology, customer experience and marketing. With more than 20 years of experience, he has written for publications such as the Gloucester Daily Times and Boston Magazine. He has a proven track record of delivering high-quality, informative, and engaging content to his readers. Dom works tirelessly to stay up-to-date with the latest trends in the industry to provide readers with accurate, trustworthy information to help them make informed decisions.

Main image: Rokas