The Senate Hall inside the Palace of Parliament

States vs. Washington: Who Regulates AI Now?

By Sharon Fisher
Congress killed a 10-year state AI moratorium. See how the AI Action Plan, FCC/FTC reviews and funding leverage could reshape regulation.

How much regulation should the AI industry have, and who should be in charge of it?

That question has been circulating around the government since before President Donald Trump took office. Three days after his second inauguration, he announced an executive order intended to remove barriers to American leadership in AI. He also rescinded the AI executive order that former President Joe Biden had signed on October 30, 2023.

At the time, David Sacks, whom Trump appointed as AI and cryptocurrency czar and head of the President’s Council of Advisors on Science and Technology, characterized Biden’s order as a hundred pages of “burdensome regulations” the industry hated. “We’re going to replace it with something much better,” he said.

AI Czar David Sacks in the Oval Office with President Donald Trump and former government official Bo Hines
The White House

That happened on July 23, when Trump released his AI Action Plan along with three executive orders supporting it, which included several actions intended to reduce regulatory oversight of the AI industry.

“AI is far too important to smother in bureaucracy at this early stage, whether at the state or federal level,” the plan noted. 

Congress Kills 10-Year Ban on State AI Rules 

“A 10-year moratorium when states are the ones grappling with the real harms of AI?” 

- Eleanor Gaetane

Director of Public Policy and VP, NCOSE

Trump released the AI Action Plan three weeks after Congress voted down a 10-year moratorium on state AI regulations. The House had passed such a restriction on May 22 as part of the so-called Big Beautiful Bill, and Sen. Ted Cruz rewrote it for the Senate amid concerns it would violate the Byrd Rule, which requires that provisions passed through reconciliation be limited to budgetary matters. Cruz’s rewrite added a provision tying state AI regulations to federal broadband funding in an attempt at compliance.

However, many people opposed both versions of the bill, and on July 1 the Senate voted 99-1 in favor of an amendment to strip it from the Big Beautiful Bill, with Sen. Thom Tillis casting the sole vote against.

Proponents of such a moratorium said it was difficult for companies to comply with the increasing number of AI regulations imposed by the states. Opponents were concerned that such a moratorium would make it difficult to promote online safety, particularly for children. “There was a huge bipartisan collaboration” against the bill, said Gaia Bernstein, a law professor at Seton Hall Law School and visiting fellow at the Brookings Institution since July 2025.

For example, under the moratorium, Texas would likely be unable to enforce its law regarding age verification, which the Supreme Court upheld in late June, said Eleanor Gaetane, director of public policy and vice president of the National Center on Sexual Exploitation. AI products have been launched with almost no testing of how they affect children, she added, noting that it took years for people to recognize the harms that social media can cause. 

The long time period was also a concern, especially with how quickly the AI industry is progressing, according to Gaetane. “A 10-year moratorium when states are the ones grappling with the real harms of AI?” 

Related Article: Who Is the US AI Czar David Sacks?

Will Congress Deliver a Federal AI Law?

 “If you can get federal regulation which is well thought out, I would be all for it, but from what I have seen so far, that has not happened.”

- Gaia Bernstein

Professor, Seton Hall Law School

Instead of a plethora of state AI regulations, the industry preferred a more uniform law on the federal level, Bernstein explained. “They were maybe hoping for a law that was less restrictive” and that would pre-empt the already-passed state laws. Major technology companies had initially been against such laws, but became more interested in a federal law after states such as California began passing AI restrictions, she said. 

“Any time you could have a universal standard for development of technology, that’s going to be welcome,” similar to consumer privacy laws, agreed Goli Mahdavi, partner and co-leader of the AI service line for the law firm of Bryan Cave Leighton Paisner.

Neither was sanguine that Congress could agree on such federal regulations. “If you can get federal regulation which is well thought out, I would be all for it, but from what I have seen so far, that has not happened,” Bernstein said. The Senate passed the Kids Online Safety Act in July 2024, but it has since been held up in the House.

Voting down the moratorium was an acknowledgement that the likelihood of a broad federal AI law is “slim to none,” Mahdavi added. “It’s very hard to get consensus around.” If there had been such a federal law in the wings, the moratorium might have had more support. “Having an AI moratorium without any sort of federal initiative was unpalatable.”

How the White House AI Action Plan Could Pressure States

President Trump speaking at the White House AI Summit on July 23, 2025

Exactly what the AI Action Plan will do about state and federal AI regulations is unclear, but it includes the following provisions:

  • Launch a Request for Information about federal regulations that hinder AI innovation and adoption, and work with federal agencies to take appropriate action.
  • Work with federal agencies to identify, revise or repeal regulations and other government actions that unnecessarily hinder AI development or deployment.
  • Work with federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.
  • Evaluate whether state AI regulations interfere with the Federal Communications Commission’s (FCC) ability to carry out its obligations and authorities under the Communications Act of 1934.
  • Review all Federal Trade Commission investigations and other actions commenced under the previous administration to ensure they do not advance theories of liability that unduly burden AI innovation.

“I have significant concerns that [the AI Action Plan] would affect states’ ability to legislate to protect kids from excessive screen use and other online harms,” Bernstein wrote on LinkedIn.

While the AI Action Plan does not implement a moratorium on state legislation of AI, she has concerns about the provision limiting federal funding to states with burdensome AI regulations. Laws targeting social media, addictive features of online platforms and AI companions do regulate AI, she noted. “This could produce a chilling effect on states’ willingness to legislate.”

The operative aspect will be how the Office of Management and Budget, which is charged with implementing this portion of the plan, chooses to do so.

Related Article: Trump Unveils Massive AI Strategy: ‘We Will Not Allow Any Foreign Nation to Beat Us’

Is State-Led AI Regulation Coming? Here’s What to Expect

With the 10-year moratorium voted down, proponents of state-level regulation said they expect to see more state AI laws in the next couple of years.

But even on a state level, Mahdavi expects to see laws that are more narrowly focused rather than broad, given how fast the industry is progressing. “How do you regulate something that is moving so quickly?” she asked.


Even the potential penalties in the AI Action Plan might not stymie further regulations, Mahdavi explained. “I don’t know how much this threat of withholding funding or potential FCC investigation is going to influence behavior,” especially for large states such as Texas and California that have already passed such legislation. “There’s a real appetite at the state level to regulate AI.”

Bernstein expects the industry to continue challenging state AI regulations on either First Amendment grounds or under Section 230 of the Communications Decency Act, which precludes online providers and users from being held liable for information provided by a third party. “There’s a lot of experimentation on which laws can withstand these challenges,” she said. “We’ll keep seeing this until it gets to the Supreme Court.”

About the Author
Sharon Fisher

Sharon Fisher has written for magazines, newspapers and websites throughout the computer and business industries for more than 40 years and is the author of “Riding the Internet Highway” as well as chapters in several other books. She holds a bachelor’s degree in computer science from Rensselaer Polytechnic Institute and a master’s degree in public administration from Boise State University. She has been a digital nomad since 2020 and has lived in 18 countries so far.

Main image: MoiraM on Adobe Stock