The Gist
- Regulatory vacuum. No comprehensive federal data privacy or AI regulations exist in the US.
- CEO support. Tech CEOs advocate for federal government intervention in AI regulation.
- EU leadership. The EU takes the lead in AI regulation, setting potential global standards.
One would think, with the endless array of data breaches due to poor corporate governance and security policies, that the US federal government would want to ensure the smooth continuation of business by regulating data privacy and security at the federal level. Well, one would be wrong.
In fact, as far as anyone can tell, there are no plans for, nor even serious discussions about, data privacy regulation equivalent to the EU’s comprehensive General Data Protection Regulation (GDPR) at the US federal level. While companies may not like regulations at the outset and fight them tooth and nail, the reality is they are needed to ensure trust, a stable commercial marketplace, and an ongoing relationship between brand and consumer. Knowing the US federal government dropped the ball (actually, it never even picked it up) on data privacy and data security, it may not seem shocking that it has yet to enact any substantive regulation on artificial intelligence (AI).
This has led to an extraordinary reaction in which the CEOs of technology companies themselves are demanding that the US government step up and, heaven forbid, do its job by defining and implementing standards and regulations around AI and AI-generated content. Wouldn’t that be a novel idea? That the government is responsible for providing the regulatory framework necessary for the safe and purposeful implementation of AI technology.
Crazy stuff, I know. Well, it may be happening, or it may amount to just some PR and scapegoating; only time will tell. So, let’s find out where we are right now and what is proposed for AI regulation going forward.
The Senate Subcommittee Hearing on AI
A three-hour hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law addressed the risks of generative AI, the effect it could have on jobs and employment, and what government regulations would be needed. The hearing is supposed to be the start of a series that will determine what regulations, if any, are needed around AI to address legal, ethical and security concerns.
The hearing kicked off with what sounded like a recording of Sen. Richard Blumenthal of Connecticut, which was in fact a deepfake: an AI-generated audio recording made to sound like him.
“Too often we have seen what happens when technology outpaces regulation. The unbridled exploitation of personal data, the proliferation of disinformation and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want,” the AI voice said.
In what will hopefully become a more common public position from technology CEOs on government accountability for technology growth and development, OpenAI CEO Sam Altman told members of the Senate subcommittee that government intervention would be required to mitigate the potential risks of AI technology.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” said Altman. “My worst fears are that we cause, we the field, the technology, the industry, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong.”
And he was clear that the industry cannot self-regulate alone, but must rely on federal government regulation to ensure the sector grows and expands responsibly.
“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” Altman said.
Also testifying at the hearing were Christina Montgomery, vice president and chief privacy and trust officer for IBM, and Gary Marcus, professor emeritus at New York University.
Montgomery told Congress it needs to “adopt a precision regulation approach to AI. This means establishing the rules to govern the deployment of AI in specific use cases, not regulating the technology itself.”
Interesting, but it hardly sounds robust and inclusive enough to protect consumers. Maybe enough to protect corporate profits, though.
Professor Marcus told the subcommittee it should consider a new federal agency that would review AI programs before they are released publicly.
“There are more genies to come from more bottles,” he said. “If you are going to introduce something to 100 million people, somebody has to have their eyeballs on it.”
So, oversight. That makes sense, although he offered little in the way of details.
The Impending Impact of AI on Jobs
Everyone agrees AI is going to impact jobs — and a lot of them. But what that will look like, and what jobs AI will create, is still to be determined.
“There will be an impact on jobs,” CEO Altman said. “We try to be very clear about that, and I think it’ll require partnership between industry and government, but mostly action by government, to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be,” he added.
IBM VP Montgomery said the “most important thing we need to do is prepare the workforce for AI-related skills” through training and education.
But the reality is that all jobs, not just those of Hollywood writers, will be impacted by AI. Even corporate CEOs could be replaced by AI. No one is immune to its influence.
EU Leads the Way with Landmark AI Regulation
As history has shown, Europeans seem to be on the cutting edge of technology regulation, while the US lags behind, perhaps purposefully. The US seems more interested in protecting companies’ right to profit than in protecting its citizens from potential threats.
Just recently, lawmakers in the European Parliament approved a draft of the EU AI Act, which would be the first law on AI from a major regulator anywhere in the world. The law assigns AI applications to three primary risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.
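For readers who think in code, the Act’s tiered structure maps naturally onto a simple classification scheme. Here is a minimal, illustrative Python sketch of that logic; the use-case labels and the default-to-lowest-tier fallback are assumptions made for demonstration, not language drawn from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to specific legal requirements"
    MINIMAL = "largely unregulated"

# Hypothetical use-case labels mapped to the Act's three tiers as the
# article describes them; these strings are illustrative assumptions,
# not entries from the Act's actual annexes.
USE_CASE_TIERS = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "cv-scanning applicant ranking": RiskTier.HIGH,
    "email spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Anything not explicitly banned or listed as high-risk falls into
    # the largely unregulated tier, mirroring the draft's structure.
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)

for case in ("government-run social scoring",
             "cv-scanning applicant ranking",
             "a chatbot for recipes"):
    print(f"{case}: {classify(case).value}")
```

Note the design of the default branch: everything not explicitly banned or flagged as high-risk lands in the largely unregulated tier, which is exactly the structure critics of the draft point to.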
Just like with the GDPR, it is possible the EU AI Act could become the world standard on which other regulatory efforts are based.
But is this even enough? At the very least, it seems that people have a right to know whether the material they are consuming is created or assisted by AI. In the late 19th century, a wide array of perilous patent medicines flooded the market. These quack cures contained morphine, cocaine and various other addictive drugs while claiming miraculous curative properties.
When the US regulated this industry with the Pure Food and Drug Act of 1906, forcing manufacturers to list their ingredients on the label, people stopped buying the addictive concoctions. The simple act of labeling, of being transparent, drove a huge beneficial shift in public behavior.
This is what the AI industry needs: transparency. People overall do not trust AI, and building trust is the first step. And to be honest, I’m not sure that is even a real topic of discussion at these hearings.
OpenAI’s Stunning AI Regulation Reversal
After I’d written this story, OpenAI CEO Altman came out and made his position on AI regulation clearer. It turns out his testimony to Congress may have been feel-good PR after all: once he was faced with actual AI regulation from the EU, he was very much opposed to rules that might impact his business model.
In fact, Sam Altman told Time that his company could cease operations in the EU if it cannot comply with the EU AI Act. Based on the language in the Act, OpenAI's generative AI chatbot ChatGPT could be designated a high-risk AI system.
“If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible,” he said.