An angled shot of the White House facade, emphasizing its columns and lighting.

Biden’s Magnificent Seven: Top AI Titans Sign Key Pledge

By Jennifer Torres
Seven leading AI companies sign on to the Biden administration's pledge, committing to safer, more secure and more transparent AI.

The Gist

  • AI safety. Major AI firms sign Biden administration pledge to prioritize safety in technology development.
  • Industry collaboration. Tech companies vow to work together to ensure AI transparency.
  • Tech accountability. Leading AI businesses accept responsibility for secure AI practices.

At the invitation of President Biden, representatives from seven of the country’s top artificial intelligence companies gathered at the White House to add their signatures to a pledge confirming their commitment to “advancing the safe, secure, and transparent evolution of AI technology.”

On Friday, the Biden-Harris administration secured voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, affirming their individual responsibility to assure safety, uphold the highest standards and “ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI,” the White House said in a statement. “As the pace of innovation continues to accelerate, the Biden-Harris administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.”

Beyond securing the commitments, the White House is also in the process of crafting an executive order it hopes will position America at the forefront of conscientious technological innovation.

Overall, these commitments will likely necessitate changes in how these companies develop, test and communicate about their AI models. So, exactly what did these leading AI companies agree to? The pledge focuses on three key issues: safety, security and trust.

Let’s take a closer look...

Related Article: Microsoft, Google, OpenAI Respond to Biden's Call for AI Accountability

Safeguarding AI: Commitment to Thorough Safety Checks, Enhanced Transparency and Collaborative Standards

Under the "Safety" banner, the administration seeks commitments for comprehensive internal and external reviews (red teaming) of AI models. These reviews will focus on mitigating potential misuse like societal and national security threats, including bio, chemical and cyber vulnerabilities such as bias and discrimination. In effect, these companies may now find it necessary to involve independent domain experts in their red-teaming processes — or be compelled to reveal more about their safety procedures to the public.

Further, the pledge asserts a commitment to advance AI safety research, particularly in making AI decision-making more understandable. It also advocates collaboration between companies and governments to share information about safety risks, emergent threats and attempts to bypass security measures. By signing, the companies promise to participate in, or establish, forums for developing, enhancing and adopting shared standards that serve as platforms for sharing information about frontier capabilities, emerging risks and threats, and to engage with governments, civil society and academia as needed.

Boosting AI Security: Investing in Cybersecurity Measures and Encouraging Third-Party Detection of Issues and Vulnerabilities

For AI models that have yet to be released, the companies have agreed to treat the underlying details, such as model weights, like trade secrets, maintaining limited access to the particulars. Only those who need to know the specific details for their work will have access to them.

The companies have also agreed to create strong programs to detect insider threats and to store and handle these details securely, lowering the chance of an unauthorized leak. They also acknowledge that even after red-teaming, AI systems might still have flaws and weak spots. To handle this, they promise to set up reward systems, such as bounty programs, contests or prizes, for those who responsibly report these flaws or unsafe behaviors in their AI systems, promoting a culture of vigilance and proactive threat management.

Related Article: UK Authority Launches Initial Review of Artificial Intelligence Models

Trust in AI: Companies Pledge Collaboration and Transparency for AI-Generated Content

The White House believes it's important for the public to be able to tell the difference between content made by humans and content made by AI. To enhance trust, the companies were asked to adhere to the following commitments.

  • Enabling users to discern AI-generated audio or visual content through robust mechanisms such as provenance and watermarking, including the development of tools that can confirm whether content was generated by their systems (a minimal illustrative sketch follows this list).
  • Regularly disclosing details about their AI models or systems, including capabilities, limitations and appropriate use scenarios, and sharing reports that discuss societal risks such as fairness and bias implications.
  • Prioritizing research on societal risks associated with AI systems, with a focus on harmful bias, discrimination and privacy protection.
  • Using frontier AI systems to tackle pressing societal challenges such as climate change, early cancer detection and cyber threats, including initiatives that support public AI education and understanding.
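
To make the provenance idea concrete, here is a minimal, hypothetical sketch of how a provider might attach and later verify a tamper-evident tag on a piece of generated content using a keyed hash. This is an illustration only, not any signatory's actual scheme: the key, the function names (tag_content, verify_content) and the workflow are invented for this example, and real watermarking approaches typically embed a statistical signal in the model's output itself rather than shipping a detached tag.

```python
# Illustrative sketch only: a hypothetical provenance tag using an HMAC.
# Real deployed watermarking/provenance schemes are more sophisticated.
import hashlib
import hmac

# Assumption: a signing key held privately by the AI provider.
SECRET_KEY = b"provider-held-signing-key"

def tag_content(content: bytes) -> str:
    """Produce a provenance tag the provider can later verify."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Return True only if the content is unmodified and was tagged
    by the holder of SECRET_KEY."""
    return hmac.compare_digest(tag_content(content), tag)

if __name__ == "__main__":
    audio = b"...generated audio bytes..."
    tag = tag_content(audio)
    print(verify_content(audio, tag))         # True: intact, provider-tagged
    print(verify_content(audio + b"x", tag))  # False: content was altered
```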

“The companies developing these pioneering technologies have a profound obligation to behave responsibly and ensure their products are safe,” the White House said in a statement. “The voluntary commitments that several companies are making today are an important first step toward living up to that responsibility.”

About the Author
Jennifer Torres

Jennifer Torres is a Florida-based journalist with more than two decades of experience covering a wide range of topics. She formerly served as a staff reporter at CMSWire, where she tackled subjects ranging from artificial intelligence and customer service and support to customer experience and user experience design. She is also the author of 10 mystery and suspense novels and formerly served as marketing officer at the Florida Institute of Technology.

Main image: rrodrickbeiler on Adobe Stock Photo