In a significant move towards regulating artificial intelligence (AI), President Joe Biden signed an executive order on Oct. 30 that establishes new guidelines to ensure the “safe, secure and trustworthy” development of artificial intelligence. The initiative aims to balance the potential of advanced AI systems with rigorous security measures, before making these technologies accessible to the public.
Biden's Executive Order
The order's introductory statement outlines its ambitious scope, focusing on AI safety and security, Americans’ privacy, advances equity and civil rights, consumer and worker protections, innovation and maintaining American global leadership.
However, as an executive order its direct impact is limited. It instructs relevant government agencies to draft regulations that could evolve into enforceable laws.
The agencies in question, which include Homeland Security, Justice and Defense departments as well as the intelligence community among others, are also limited. While all can participate in formulating future policy, neither they, nor the White House itself, have received the authority to regulate AI from Congress.
So where does that leave AI and the guardrails that are envisaged in the order?
Related Article: Why Regulating AI Is Going to Be a Challenge
It's a Start
Despite the administrations proactive stance, the order's immediate efficacy is uncertain. Rohan Kulkarni of HFS Research consultancy calls the order a starting point, but not much more.
He says AI's potential is only limited by the imagination of its user — whether a good actor or a bad. The fact that the administration is acknowledging the risks and potential of AI and working to create a framework to guide development is positive.
“We should expect that the federal framework will motivate state and private enterprises to act with guidance. It signals to the world that our approach will be based on the values of our nation,” he said.
That said, he expects a lot more debate before the legislation is unveiled, including discussions on enforcement.
“The administration must remain focused on AI beyond a press event and framework, must attract talent to progress the ball, and must use this as a catalyst to trigger the next major era of innovation. It is possible, we should expect it, but time will tell if this administration can drive the AI train responsibly,” Kulkarni added.
Forward-Looking AI Regulation
After reading the full executive order, O'Reilly Media founder Tim O'Reilly was far less positive about the impact of what appeared to be the first steps towards a robust disclosure regime, a necessary precursor to effective regulation.
The order will have no impact on the operations of current AI services like ChatGPT, Bard and others under current development, since its requirements that model developers disclose the results of their “red teaming” only apply to future models trained with magnitudes-more compute power than any current model, he said. Red teaming is the developer practice of hacking, or simulating damage to a system in a way that emulates an actual attack, to identify a model's behaviors and risks.
In short, the AI companies have convinced the Biden administration that the only risks worth regulating are the science-fiction existential risks of far future AI rather than the clear and present risks in current models.
While O'Reilly acknowledged the work various agencies are doing to address current risks such as discrimination in hiring, criminal justice applications, and housing, as well as impacts on the job market, healthcare, education, and competition in the AI market, those efforts are in their infancy.
"The most important effects of the EO, in the end, turn out to be the call to increase hiring of AI talent into those agencies, and to increase their capabilities to deal with the issues raised by AI. Those effects may be quite significant over the long run, but they will have little short-term impact," O'Reilly said. "In short, the big AI companies have hit a home run in heading off any effective regulation for some years to come.”
Related Article: Can We Trust Tech Companies to Regulate Generative AI?
Content Authentication and Deepfakes
Not everyone is as pessimistic. While the order regulates foundation models, most organizations won't be training foundation models.
The provision will have minimal direct impact to most organizations, IANS Research faculty member Jake Williams said.
He does flag potential issues related to the provisions around the detection of and authentication of AI generated content. While it may appease some, as a practical matter generative technologies will always outpace those used for detection. Furthermore, many AI detection systems would require unacceptable levels of privacy intrusion.
Williams points to the dedicated funding for research into privacy preserving technologies with AI as the order's most significant move and lauds the emphasis on privacy and civil rights in AI use.
“The Biden EO makes it clear,” he said. "Privacy, equity and civil rights in AI will be regulated. In the startup world of move fast and break thing, where technology often outpaces regulation, this EO sends a clear message on the areas startups should expect more regulation in the AI space.”
The executive order also proposes the use of watermarking to establish the origins of text, audio and visual content to simplify the identification of AI creations. Watermarking would be at least a partial solution to Al-enabled problems such as deepfakes and disinformation, said Dana Simberkoff, chief risk, privacy and information security officer at AvePoint.
She also thinks the order is a good start, pointing specifically to the requirements for critical infrastructure, national security and healthcare, and financial institutions to manage AI-specific cybersecurity risk assessments and best practices. When used in combination with critical data governance and data quality management, it will be possible to tap the remarkable promise of AI, while maintaining the necessary guardrails for privacy and security.
A Roadmap, Not a Completed Strategy
Skillsoft SVP Asha Palmer is also optimistic, even if this is just the start of a long process. Organizations should treat the order as a roadmap rather than a completed strategy, and continue to focus on their own governance.
Business leaders should therefore be proactively working on their own governance structure, analyzing internal structures, defining objectives, identifying, and prioritizing, managing risks effectively and preparing for what lies ahead, she said. While governance should be the first step, Palmer was quick to add it wasn't the last and suggested AI training should quickly follow.
The work isn't done once training and governance is in place, he warned. Any guidelines created today will require ongoing improvements as more guidance emerges. "We are just at the beginning — what we put in place today will not be the same one year from now," she said.
Related Article: AWS's Diya Wynn: Embed Responsible AI Into How We Work
The US and Regulation
The order positions the U.S. as a leader in AI regulation and has the potential to be the most far-reaching regulation developed globally, said Markkula Center for Applied Ethics at Santa Clara University's Ann Skeet.
The mix of departments and agencies it distributes responsibility to, as well as the creation of new oversight boards and tools for research and innovation indicates the president is balancing the need for security with demands to continuously innovate.
The order's guardrails, coupled with the AI initiatives announced a few days later by Vice President Harris, go far towards assuring people that the government is looking out for their interests and not just the interests of Big Tech.