News Analysis

AI Insight Forum Raises Problems From the Start. One Is Closed Doors

By David Barry
The AI Insight Forum gathered legislators, tech leaders and civic leaders to discuss a regulatory framework for AI. The problems started from the get-go.

On Sept. 13, the U.S. government took another step in the long, labyrinthine path towards regulating AI with the inaugural meeting of the AI Insight Forum.

Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella, Nvidia president Jensen Huang, Google CEO Sundar Pichai and X chair Elon Musk were among those who attended in response to an invitation from Senate Majority Leader Chuck Schumer (D-NY) to discuss how the industry, the wider business community and civil society should respond to what is a largely untried technology.

Talking About Openness Behind Closed Doors

Schumer outlined the challenges in his opening remarks. In a press statement released following the meeting, he argued that government needs to take a leading role in the process.

“Government must play a role in requiring these safeguards. Because even if individual companies promote safeguards, there will always be rogue actors, unscrupulous companies, and foreign adversaries that seek to harm us,” he said. “And on the transformational side, other governments, including adversaries like China, are investing huge resources to get ahead. We could fall behind to the detriment of our national security.”

Despite much of the recent debate about AI focusing on openness and accountability, the session was held behind closed doors, with even the media excluded.

The irony wasn’t lost on everyone. As reported by Forbes, Sen. Elizabeth Warren (D-Mass.) told reporters all the senators in attendance showed up “to sit there and ask no questions.”

She added, speaking to Fox News Digital: "I do not understand why the press has been barred from this meeting … What most of the people have said is we want innovation, but we have got to protect safety."

OpenAI CEO Sam Altman told NBC he was surprised, given the format, by the broad agreement in the room on “the need to take this seriously and treat it with urgency.”


What Leverage Does the Government Have for Regulation? 

The soundbites from participants show the wide range of opinions on the impact of AI and the approach legislation should take, Hyperscience CTO Tony Lee told Reworked.

But in his opinion, safety and innovation are the two principal issues that need to be addressed, with two very different approaches on the table. One side, he said, favors lighter legislation so as not to impede innovation, pointing in particular to China as a competitive threat.

The other considers safety paramount and calls for an oversight body, presumably with some teeth. The approach taken in this initial legislation may profoundly affect the near-term trajectory and impact of AI for the U.S. and U.S.-based workforces.

However, he said the issue is far more complex than simply reaching a consensus on these two points. He pointed to past examples, such as cloud adoption, which revealed a wide range of risk tolerances across industries, even within the U.S.

“Many companies entered the cloud, while others still favor on-premises software deployments to manage privacy and security concerns,” he said. “It is somewhat self-organizing and self-regulating without direct government intervention.”

In this context, the question becomes whether it is permissible — or even desirable — to allow AI adoption in the corporate environment to follow the same approach. The alternative is for Congress to take a more prescriptive approach, such as that taken with the food and drug industry, where public health concerns mandated stricter controls.

Though enforcement will be difficult, he said, it is possible, given the need to compete in a global market. But rather than an unfettered private-sector rush toward AI to remain competitive with China, that may point to more of a nationalized approach, comparable to the space race with the Soviet Union.

“It may come down to who consumers trust more: the U.S. government or Big Tech,” he said.

Asked about Elon Musk’s suggestion that there should be some kind of regulatory body like the FCC, Lee said he is unconvinced that approach would work.

The FCC and the FAA, he noted, regulate specific physical things that give them leverage, such as radio frequencies and U.S. airspace. That creates controllable surfaces through which those agencies can stay relevant and enforce rules.

“What would a Federal AI Agency have?” he asked. “Any company can create a model and deploy it. Customers can use it without knowing it. What are the points of control that a federal agency can manage? Perhaps certification like SOC 2 or HIPAA? Would consumers care? Overall, it will be a tough hill to climb.”

Like the Fox Regulating the Henhouse

Opentensor Foundation co-founder and CEO Jacob Steeves sees regulation of this nature as problematic and believes it will come with pitfalls. The Opentensor Foundation is committed to developing artificial intelligence technologies that are open and accessible to the public, and created Bittensor, a peer-to-peer machine-learning protocol.

"This is a call to regulation by those that effectively already hold the monopoly on AI,” he said. “Another way to look at it is they are closing the door behind them after they’ve created a powerful tool that can be integrated into almost any everyday task."

The result, he said, is that it becomes much harder for smaller startups and entities to compete, likely forcing them to use the services of the larger companies rather than developing their own AI.

In a regulated landscape, for example, a small startup may opt to use OpenAI’s services rather than build its own AI solutions, to sidestep the regulatory burden of creating AI. That will increase the profit margins of the larger players and stifle AI innovation as a result, he said.

While he thinks regulating AI development is possible in theory, it raises problems. He compares it to music piracy in the early 2000s or jailbreaking iPhones: regulation would create a slew of ‘illegal’ content, including datasets, models and code for developing AI, shared via underground internet channels.

“We are approaching a forking point for mankind; down one road is the centralization of power and resources, in large, regulated industries, who have entrenched and overbearing access to the best-in-class intelligence and computers,” he said. “Down the other road is the potential for sharing these resources through open protocols, via technological foundations, which enable global participation and ownership.”



Pockets of Progress, But Long-Term Success?

Success is not assured for the AI Insight Forum, said Kirk Sigmon, a lawyer with Banner & Witcoff Ltd. who specializes in artificial intelligence and the use of content for building AI models.

“It's very understandable that leaders are meeting in Washington, D.C. to discuss artificial intelligence and machine learning, as there have been many concerning developments regarding artificial intelligence in the last year or two that arguably counsel for legislation,” he said.

Here he noted the lack of clarity around whether using copyrighted material to train machine learning models constitutes fair use, and pointed to some concerning developments in the world of deepfakes.

The AI Insight Forum could help set the standard for how many major players approach AI. One positive sign, he said, is that many companies have promised to watermark AI-generated content, as it suggests that they will endeavor to make it harder for users to pass off AI-generated content as authentic.

That might be cumbersome from a workplace perspective (as adding watermarks to everything generated with the help of AI would get annoying), but such an approach might be a necessary evil, he said.

However, he remains skeptical that U.S. legislators can do much to remedy the societal problems that AI poses. Even if there were an FCC for AI, and even with the right legislation, the proverbial dark corners of the internet would inevitably find ways to use AI for nefarious purposes. “Legislation cannot undo the fact that the necessary software for doing questionable things with AI is already available to the public,” he said.

About the Author
David Barry

David is a Europe-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Andy Feliciotti | Unsplash