A dog chef wearing a chef's hat gets ready to prepare a dish with a great abundance of ingredients in a piece3 about President Joe Biden's executive order and "kitchen-sink" AI governance.
Editorial

Executive Order on AI: A Needed Step or Kitchen-Sink AI Governance?

8 minute read
Frank Palermo avatar
By
SAVED
President Biden's AI executive order attempts to provide a middle-of-the-road approach, but it lacks some needed details and practicalities.

The Gist

  • AI governance. The executive order is very broad, outlines standards for testing and reviewing AI technologies but does not provide details to vendors on what standards and the process will be to enforce these standards.
  • A little bit good, a little bit bad. While AI may need stronger governance as it matures, will early broad-based regulation hurt our global competitiveness in AI, causing us to fall behind China and other superpowers?
  • Setting a precedent? Will this regulation lead to broader regulation of innovation and software development as AI has become ubiquitous in all software platforms?

President Biden’s recent AI governance, the executive order on AI, seems to be, on the surface, in the best interest of the people as it promotes the safe, secure and trustworthy use of AI technology.

However, this is a very complex situation that requires a balance between early regulation and the ability to innovate quickly. We have to be careful that regulation doesn’t affect our ability to continue to lead the world in the discovery and applications of new technologies like AI.

On the contrary, we have to have some early guardrails to ensure that a powerful technology like AI matures without creating hazards and protects against misuse of the technology.

A red guardrail lines a road with building and trees and a blue sky with with clouds in the background in piece about President Joe Biden's AI executive order.
We have to have some early guardrails to ensure that a powerful technology like AI matures without creating hazards and protects against misuse of the technology.2ragon on Adobe Stock Photos

Artificial intelligence is still very much an evolving technology. It has the potential to be tremendously assistive to humans in their daily activities. Algorithms are unlikely to replace humans, but humans assisted by algorithms will likely rise ahead of those without. If used incorrectly it could also present harm.

The executive order attempts to provide a middle-of-the-road approach, but it lacks some of the details and practicalities of what it will take to operationalize these orders. The result could be inefficient regulatory processes, higher costs and the stifling of smaller innovative AI companies. More AI startups raised first-time capital in the United States last year than in the next seven countries combined. 

However, with too much regulation, it might be harder for those companies to gain traction, and we run the risk of AI technologies and platforms getting concentrated in the few large tech vendors we have today.

AI governance and safety are a paramount concern as the technology matures. However, we need to ensure we navigate this landscape in a balanced, carefully orchestrated manner.

AI Governance: What’s Actually in This Executive Order?

The executive order went into effect on Oct. 30, but there aren’t any specific actions for businesses yet. Those will come in the coming months, as government agencies fulfill the requirements of the order and set forth AI governance and specific guidelines.

None of this should come as much of a surprise since the White House has been messaging intentions to regulate AI since the launch of ChatGPT last November. Pressure continued to mount throughout the year for some government intervention.

In July they announced that they secured “voluntary commitment” for responsible and ethical use of AI from the CEOs of four American companies at the forefront of AI innovation — Google, Anthropic, Microsoft and OpenAI. The commitment underscores their responsibility and emphasizes the importance of driving responsible, trustworthy and ethical AI with safeguards that mitigate risks and potential harms to individuals and our society. 

The 111-page AI governance order covers eight key areas — safety and security, data privacy, civil rights, consumer rights and protection, workforce development, innovation and competition, leadership initiatives, and government use of AI. It calls on government agencies to set safety and security protocols to protect Americans against misuse of the technology. It protects against irresponsible uses of AI that can deepen discrimination, bias and other abuses in justice, healthcare and housing.

The National Institute of Standards and Technology (NIST) issued a comprehensive AI risk management framework in January 2023 that serves as the foundation of much of this executive order. It’s not clear which agencies will oversee the review and testing of models, but NIST will most likely play a role in coordinating with other governmental agencies.

There is also work that has been published in the AI Bill of Rights, which is focused on ensuring there is no algorithmic discrimination by making automated systems work for the American people.

Related Article: Biden's AI Executive Order: Balancing the Potential and the Pitfalls

What Does This Mean for Businesses?

For many of the large AI players like OpenAI, Google, Meta, Microsoft, IBM, and others this will undoubtedly mean a lot more formal processes, disclosures and testing of technology. It will also force many companies that historically have been closed about what’s under their technology hood, to be more transparent about their technology.

The executive order particularly highlights and probes on vulnerabilities of very large-scale general-purpose AI models (i.e. LLMs) trained on massive amounts of data, such as the models that power ChatGPT or DALL-E. The order requires companies that build large AI systems to notify the government and share the results of tests. The testing uses a process called “red teaming” that forces an adversarial relationship with an AI system or network. The goal is to make the AI system produce harmful results.

While the executive order provides some very extensive parameters on AI governance, it falls short of providing clear actionable steps to vendors. Vendors need a better understanding of exactly what the standards will be and how they will be enforced.

It's important to recognize that an executive order is not legislation. Executive orders are issued from the executive branch of the government, specifically the U.S. President. An executive order is not a law as it does not go through the legislative process (i.e. approved by Congress, etc.). It is not binding on everyone, only on employees of the executive branch.

Moreover, executive orders can be easily overturned by future administrations so much of the ideas within this executive order must be legislated for them to have lasting impact.

Related Article: Is AI Executive Order a Data Privacy Compass for Customer Experience?

Classifying AI May Not Be as Easy as You Think!

It might seem counterintuitive, but how exactly are we going to classify AI systems? Almost all software these days makes use of AI techniques in one way or another. Similar to how cloud computing is now ubiquitous, so is AI. How do we separate AI from traditional software development? If we are unable to do that, are we basically saying all software now needs to be regulated?

Much of the executive order is focused on regulating systems and methods rather than regulating outcomes and applications. Let’s take the example of “deep fakes.” The regulation could enforce that any software that creates or uses deep fakes will be against the law. This approach may be more scalable than creating numerous AI oversight functions within each government agency to regulate software companies and ask them to routinely submit code and models for review and testing.

The speed at which these AI models and specifically Large Language Models (LLMs) are maturing is staggering. It's only been since 2017 (five years ago) when the transformer approach became the standard for current LLMs. The executive order provides very detailed parameters for the size and types of models they expect to inspect and regulate. As an example, they are expecting companies that develop LLMs with a certain number of parameters to self-report to the government. However, in two to three years much of what is specified in the executive order will most likely no longer be valid as model development rapidly evolves.

And having AI algorithmic transparency is not a panacea. Knowing how an AI system works does not necessarily give you information on why it made a particular decision at a certain point in time.

Back in 2021, the European Union unveiled similar strict regulations called the AI Act of 2021 to govern the use of artificial intelligence. This was a first-of-its-kind policy that outlined how companies and governments can use AI technology. However, this was right before the new wave of generative AI platforms arrived which quickly made a majority of the act obsolete.

Learning Opportunities

Related Article: Safeguard Generative AI to Protect Customer Privacy

Will Marketing Ever Be the Same?

A big piece of the executive order is around the protection of consumer data and the transparency of AI algorithms. This has big implications for many commercial marketing tools that use customer behavioral data and predictive models to deliver highly targeted and personalized content.

This means marketers will need to proactively audit algorithms, minimize consumer data collection and provide transparency about the use of AI systems. There will also be a focus on mitigating bias in AI systems, which will lead to audits of marketing tools for discrimination in areas like ad delivery and dynamic pricing.

Data and AI models are currently at the heart of generating deep insights into customer behavior that routinely improve marketing campaigns and conversions. Much of this constitutes a “secret sauce” of how a marketer segments, targets and personalizes an experience with a brand. In the future will all of this be in the open, neutralizing the benefit marketers gain from their platforms and customer data?

With this AI governance, marketers should expect to see more monitoring of their data, models and AI marketing tools.

Related Article: AI in Marketing: More Personalization in the Next Decade

Haven’t Machines Altered Our Content for Years?

Another part of the executive order requires the labeling of synthetic AI content with watermarks or other equivalent labels to notify that AI was leveraged in the creation of the asset.

This is inconsistent as we’ve been using tools like Adobe Photoshop to alter images for over 30 years. Aren’t those images also being altered by software and in some cases AI?In music, applications like AutoTune are used to modify and alter a singer's voice. Do those now need to get labeled as well? Many top movies make use of CGI sequences which are computer-generated. Do these now also need to be regulated? 

More broadly, we need to better understand what constitutes AI content. Is it that 100% of the image content needs to be generated by AI? What if it’s 90% does that classify? What if only 1% is generated by AI does that content need to be watermarked as well? In the future will all content be watermarked?

Related Article: Staying Human While Using Generative AI Tools for Content Marketing

Is the Federal Software Commission (FSC) Next?

The future could unfold through the creation of a new government commission overseeing innovation, AI, and more broadly software development. Similar to the role the Federal Drug Administration (FDA) plays in ensuring drug and food safety, how the Federal Trade Commission (FTC) regulates trade, and how the Federal Communications Commission (FCC) regulates communications, we could be headed for the Federal Software Commission (FSC) to oversee the regulation of technology and AI related platforms.

While this may seem extreme, it could simplify the executive order direction where each agency needs to appoint its own AI Officer, which may make operationalizing reviews more burdensome for companies. A centralized approach could have benefits in streamlining the governance process.

This executive order is a good first step into AI governance, as it establishes some standards and guidelines while welcoming business participation to have a seat at the table. We just need to equally embrace the evolution of these technologies and their ability to unleash their power to do good in the world.

fa-solid fa-hand-paper Learn how you can join our contributor community.

About the Author
Frank Palermo

Frank Palermo is currently Chief Operating Officer (COO) for NewRocket, a prominent ServiceNow partner and a leader in providing enterprise Agentic AI solutions. NewRocket is backed by Gryphon Investors, a leading middle-market private investment firm. Connect with Frank Palermo:

Main image: primopiano on Adobe Stock Photos
Featured Research