
AI Regulation in the US: A State-by-State Guide

By Scott Clark
AI laws are changing fast — can your business keep up? Navigate the patchwork of state regulations and avoid compliance risks before it’s too late.

US states are stepping in to address AI’s biggest implications through legislation.

From comprehensive frameworks to targeted measures, state-level AI regulation is changing fast, creating a patchwork of rules that enterprises must navigate. For AI leaders, understanding these emerging laws isn’t just a compliance issue — it’s a strategic necessity.

This article examines key state-level AI regulatory efforts, analyzing their potential impacts, their pros and cons, and what this legislation reveals about the broader direction of AI governance in America.


An Introduction to AI Regulation

The patchwork of AI regulation emerging across the United States is becoming a critical focus for both policymakers and enterprise leaders. While federal guidelines are still in development, states are moving swiftly to address concerns about AI's impact on privacy, employment and ethical decision-making. States including California, New York and Illinois have introduced legislation targeting specific applications of AI, such as data privacy, algorithmic bias and automated employment decisions.

Navigating this evolving regulatory environment requires staying ahead of legal mandates while aligning AI practices with ethical and consumer expectations.

Suriel Arellano, AI consultant, told CMSWire that as AI continues to transform industries throughout the US, state governments are taking the lead in dealing with its implications — in the same way they often do when Big Tech unveils new products for society.

"This is both an opportunity and a challenge for us, as the AI ecosystem — businesses, developers, policymakers and the artificial intelligence industry — must now contend with an evolving patchwork of state regulations that could very well shape the future of our industry," explained Arellano.


AI Laws by State

In 2024, US states introduced nearly 700 AI-related legislative proposals, marking a significant surge in regulatory activity compared to 191 bills in 2023.

Of these, 113 were enacted into law, covering areas such as high-risk AI uses, digital replicas, deepfakes and government AI applications, and another 77 advanced through at least one legislative chamber. With 45 states actively addressing AI legislation in 2024, an even more aggressive legislative push is expected in 2025, particularly in Connecticut, Texas and California.

Craig Albright, SVP for US government relations at the Business Software Alliance, noted during a press call in October that while a unified model for comprehensive AI legislation has yet to emerge, states are providing strong examples through their individual efforts. By the end of 2024, most states had engaged with AI legislation to some degree.

States with enacted AI legislation include:

California

The Bolstering Online Transparency (BOT) Act, enacted in July 2019, prohibits the use of undisclosed bots to interact with people online in order to drive sales or transactions or to influence elections. The law defines a "bot" as an automated account that performs actions or posts without human involvement.

Colorado

In May 2024, Colorado enacted the Colorado Artificial Intelligence Act (CAIA), which takes effect in February 2026. It requires developers of high-risk AI systems to take reasonable care to prevent algorithmic discrimination and to provide detailed documentation. Deployers must conduct annual impact assessments, disclose AI use and allow data correction and appeals.

Delaware

Delaware has taken a proactive approach to AI regulation, establishing the Delaware Artificial Intelligence Commission in July 2024 to advise on AI use and safety while inventorying generative AI across state agencies to identify high-risk areas. 

In October 2024, the Delaware Supreme Court adopted an interim policy allowing cautious use of generative AI by judges and court staff, requiring administrative approval and ensuring human accountability for decisions.

Florida

In May 2024, Florida enacted legislation requiring political advertisements containing AI-generated content to include clear disclaimers, enhancing transparency in political communications. Additionally, the state criminalized the creation and distribution of AI-generated child sexual abuse material, closing previous legal loopholes and aligning with federal efforts to combat such content.

Illinois

Legislation was enacted to regulate the use of AI in employment decisions. Effective January 1, 2026, amendments to the Illinois Human Rights Act will prohibit employers from using AI in ways that result in discrimination based on protected characteristics or using zip codes as proxies for such classes. Employers are also required to notify employees when AI is used in employment-related decisions. 

Additionally, the 2019 Artificial Intelligence Video Interview Act mandates that employers inform job applicants and obtain consent before using AI to analyze video interviews. 

Indiana

In 2024, Indiana established an Artificial Intelligence Task Force to evaluate AI use in state agencies and recommend policies ensuring ethical and secure deployment. State agencies must conduct maturity assessments before launching AI projects, aligning with new policies from the Office of the Chief Data Officer. 

Additionally, Indiana enacted a law requiring disclaimers on political materials featuring AI-generated content to prevent voter deception.

Louisiana

In 2023, Louisiana tasked its Joint Legislative Committee on Technology and Cybersecurity with studying AI's impact on state operations, procurement and policy through Senate Concurrent Resolution No. 49 and House Resolution 66. These measures aim to assess AI’s implications and recommend regulations to ensure public safety and effective governance.

Maryland

Maryland requires the Department of Information Technology to establish policies and procedures for the development, procurement, deployment, use and assessment of AI systems by state government units.

New Hampshire

In 2024, New Hampshire enacted laws regulating AI use by state agencies, banning its application for manipulation or surveillance without a warrant and requiring human oversight in decisions. 

It also criminalized fraudulent deepfakes and adopted an AI Code of Ethics emphasizing fairness, transparency and accountability, ensuring ethical AI integration in government and public interactions.


New Jersey

In 2024, New Jersey launched the Next New Jersey Program, offering $500 million in tax credits to attract AI businesses and data centers while partnering with Princeton University to create an AI hub. 

Governor Murphy’s Executive Order established an AI Task Force to study AI technologies and their societal impacts, positioning the state as a leader in ethical AI development and innovation.

New Mexico

In May 2024, New Mexico enacted House Bill 182, mandating that political advertisements disclose the use of AI-generated content to prevent voter deception. 

Additionally, House Bill 184, the Government Use of Artificial Intelligence Transparency Act, requires state agencies to inventory and assess their AI systems, promoting transparency and accountability in government AI applications.


New York

New York City's Local Law 144 of 2021, which went into effect in July 2023, regulates the use of automated employment decision tools (AEDTs). The law requires employers and employment agencies to conduct independent audits of these tools to assess bias and prohibits their use unless the audit results are made publicly available. Additionally, employers must notify candidates when such tools are used and allow them to request alternative evaluation methods. 

Oregon

In 2024, Oregon enacted SB1571 requiring political campaigns to disclose the use of AI-generated content in advertisements to ensure transparency. 

Additionally, Governor Tina Kotek established the State Government Artificial Intelligence Advisory Council to develop an action plan guiding AI adoption in state government, emphasizing transparency, privacy and equity. 

Pennsylvania

In 2024, Pennsylvania enacted legislation mandating clear disclosures when AI-generated content is used in consumer-facing materials, aiming to enhance transparency and protect consumers from potential deception. 

Additionally, the state criminalized the creation and distribution of AI-generated child sexual abuse material, closing legal loopholes and aligning with federal efforts to combat such content. 

Puerto Rico

Senate Bill 1179 and House Bill 2027 mandate ethical AI deployment in state agencies by establishing an Artificial Intelligence Officer within the Puerto Rico Innovation and Technology Service (PRITS) and creating policies for PRITS to oversee, evaluate and authorize AI use. These measures aim to ensure transparency, accountability and responsible integration of AI in government operations.

South Dakota

Senate Bill 79 clarifies that possession of computer-generated child pornography is a Class 4 felony, updating statutes to address AI-generated content.

Tennessee

In March 2024, Tennessee enacted the Ensuring Likeness Voice and Image Security (ELVIS) Act, amending the state's Personal Rights Protection Act to prohibit the unauthorized use of AI to replicate an individual's voice or likeness without consent. This legislation aims to protect artists and performers from AI-generated impersonations, addressing concerns over voice cloning and deepfakes in the music industry.

Utah

In March 2024, Utah enacted S.B. 149, the Artificial Intelligence Policy Act, effective May 1, 2024, marking a significant step in AI regulation. This legislation requires businesses to disclose the use of generative AI in consumer interactions upon request and holds them accountable for any consumer protection violations resulting from AI-generated content. It also established the Office of Artificial Intelligence Policy to oversee AI-related programs within the state.

Virginia

In 2024, Virginia issued Executive Order 30 to guide ethical AI use in state agencies, emphasizing privacy, security and education. The state also formed an AI Task Force to advise on policy standards and assist with responsible AI adoption, highlighting its commitment to ethical and effective integration. 

Washington

In 2024, Washington enacted ESSB 5838, creating an AI Task Force to guide legislative recommendations, and Governor Inslee issued Executive Order 24-01, directing state agencies to adopt ethical AI guidelines, emphasizing transparency and public trust in government use of AI.

West Virginia

In 2024, West Virginia passed House Bill 5690, creating a task force to define AI for legislative purposes, develop public sector best practices and recommend policies protecting individual rights and data privacy. A report is expected by July 2025.

Wisconsin

In February 2024, Wisconsin enacted legislation requiring political candidates and committees to disclose the use of AI-generated content in campaign advertisements, with violations subject to fines up to $1,000 per infraction. Additionally, the state criminalized the creation and possession of AI-generated child sexual abuse material, aligning penalties with existing laws on such offenses. 

The flurry of AI-related legislative activity in 2024 highlights a growing recognition among state lawmakers of the profound impact AI technologies have on society. While states like Colorado and California have taken proactive steps to implement comprehensive AI regulations, others are experimenting with task forces, ethics codes and targeted measures addressing areas such as employment, political transparency and public safety. 

These diverse approaches reflect an evolving understanding of the opportunities and challenges AI presents, setting the stage for an even more robust legislative push in 2025 as states work to balance innovation with accountability and public trust.

Where Federal and State Legislation Intersect

The interplay between state and federal governance has created both opportunities and challenges for businesses and regulators.

According to Deniz Celikkaya, a technology lawyer and compliance officer specializing in AI governance with Atka Legal, “Many businesses are beginning to take proactive steps, like establishing AI ethics committees or hiring AI governance officers, even in the absence of enforceable regulations.” This trend emphasizes how state-level legislation is influencing organizational behavior, potentially paving the way for future federal policies. 

“As state-level AI laws currently fill what is a regulatory void, many ongoing discussions have emerged regarding the laws’ likely role in future AI governance,” said Arellano. “The most timely and relevant debate...concerns whether they will lead to the creation of a solid, federal AI regulatory framework or if those laws will instead keep us on a path toward decentralized governance.” Advocates for a federal approach argue that it would alleviate compliance challenges for businesses operating across multiple states, creating a more harmonized regulatory environment.

In the federal sphere, Senator John Hickenlooper (D-Colo.) has emphasized the need for transparency in AI, advocating for standardized methods to identify AI-generated content. Speaking with the Center for Strategic & International Studies in November 2024, he described the need for systems that can clearly distinguish between real and computer-generated media: “What is AI? What are — when you see a video online, is that computer generated? Is that fake? Or is that the real person saying that? And have some sort of a watermark, or if it’s an audio you have a little bell, a little chime can go off.”

Hickenlooper argued against creating a new federal regulatory agency, suggesting instead that the National Institute of Standards and Technology (NIST) should lead the effort. “We have NIST. That’s the right place where the work should be done to make sure we set a set of standards, in conjunction with industry.”

Senator Edward J. Markey (D-Mass.), a member of the Senate Committee on Commerce, Science and Transportation, has championed the Artificial Intelligence Civil Rights Act, which aims to put guardrails on the use of algorithms in consequential decisions. Described as the most comprehensive AI civil rights legislation introduced in Congress, the bill seeks to eliminate bias, ensure fairness in algorithmic decision-making and bolster public trust in AI. Supported by 80 civil rights, labor, housing, LGBTQ+, disability and immigration organizations, along with AI experts, the legislation reflects growing concern over the societal impact of unchecked AI technologies.

During a November 2024 floor speech, Markey addressed the need for legislation to protect marginalized communities from AI’s adverse effects. “We cannot allow AI to stand for Accelerating Injustice in our country. We have a choice.” He argued that while AI innovation offers transformative potential, it must not come at the expense of vulnerable populations: “We can have an AI revolution, while also protecting the civil rights and liberties of everyday Americans.”

The interplay between state and federal AI regulation efforts was highlighted in August 2024 when Speaker Emerita Nancy Pelosi publicly opposed California Senate Bill 1047. While acknowledging the bill’s good intentions, Pelosi described it as “well-intentioned but ill-informed,” citing concerns shared by other prominent California lawmakers, including Representatives Zoe Lofgren, Anna Eshoo and Ro Khanna, who argued the legislation could inadvertently stifle innovation. Pelosi emphasized California’s pivotal role in AI development and the need for thoughtful legislation: “AI springs from California. We must have legislation that is a model for the nation and the world.” 

This debate highlights the challenges of balancing consumer protections with fostering an AI ecosystem that supports small entrepreneurs, academia and innovation, rather than exclusively benefiting large technology firms. Pelosi summarized the stakes, stating that "we have the opportunity and responsibility to enable small entrepreneurs and academia — not big tech — to dominate."

OpenAI, a prominent opponent of SB 1047, has argued that it could stifle growth and harm US competitiveness. In a letter to Senator Scott Wiener, OpenAI Chief Strategy Officer Jason Kwon wrote, “The AI revolution is only just beginning, and California’s unique status as the global leader in AI is fueling the state’s economic dynamism. SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”

Wiener has acknowledged that federal legislation would be ideal but expressed skepticism about Congress’s ability to act swiftly on the issue. 

“As I’ve stated repeatedly, I agree that ideally Congress would handle this. However, Congress has not done so, and we are skeptical Congress will do so,” Wiener said in a press release responding to OpenAI’s concerns. He defended SB 1047 as a necessary measure to address safety risks posed by large AI models, adding, “SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk.”


Key Trends and Issues in State-Level AI Regulation

Navigating the varied requirements of state-level AI laws remains a significant challenge for businesses. 

Celikkaya highlighted the complexity, stating that “you can imagine many companies will have customers from different states and consequently they will have to juggle multiple regulations at the same time as they are doing with other regulations such as privacy laws.” This regulatory diversity requires adaptable governance frameworks, such as explainable AI policies, that address overlapping transparency requirements across states.

Celikkaya also stressed the growing focus on fairness and accountability, exemplified by Illinois’ AI Video Interview Act, which she says obliges companies to comply with transparency requirements and bias audits in AI hiring tools. “These laws encourage companies to perform algorithmic audits and implement fairness testing, ensuring their AI systems do not disproportionately harm protected stakeholder groups."

Celikkaya suggested that states could promote both innovation and compliance by creating regulatory sandboxes, where companies experiment with AI under supervision. These approaches, combined with clear and fair compliance requirements, could help bridge the gap between regulatory oversight and technological progress.

As businesses navigate the complex, fragmented patchwork of state-level AI laws, compliance has become a critical challenge, particularly for those operating across multiple jurisdictions. “The growing quilt of state AI laws presents serious compliance problems for businesses, especially those that serve multiple states,” said Arellano. “Each state's understanding of privacy and AI governance is different, which makes the legal landscape hard to navigate and likely raises our costs.” 

To address these challenges, Arellano advocates for strategies such as crafting robust compliance programs, enforcing strong data privacy and management policies, auditing algorithms regularly, and conducting "truth checks" to ensure AI aligns with ethical and legal standards. “Keeping a close eye on new developments in state and federal AI legislation” is also critical, he added, reflecting the proactive measures required to navigate this evolving regulatory environment.

Arellano underscored the importance of addressing bias and discrimination through robust regulatory frameworks. “AI laws being created at the state level are strongly focused on reducing bias and discrimination in AI systems. They do this in several ways. One is through impact assessments, which are mandated in the Colorado AI Act for ‘high-risk’ AI systems and which key AI experts have said should be a baseline for any law governing AI systems.” By mandating impact assessments, Colorado’s approach attempts to quantify potential risks, including hidden bias, setting a standard for effective regulation.

More AI Legislation Is Ahead 

State-level AI regulation in the US showcases a proactive approach to addressing the ethical, societal and technological challenges posed by AI. 

With nearly 700 AI-related bills introduced in 2024 and a growing focus on transparency, bias prevention and privacy, states are laying the groundwork for a robust regulatory framework. As activity accelerates, these diverse efforts will not only shape local priorities but also influence future federal policies, aiming to balance innovation with accountability and ensure AI’s benefits are equitably distributed.

About the Author
Scott Clark

Scott Clark is a seasoned journalist based in Columbus, Ohio, who has made a name for himself covering the ever-evolving landscape of customer experience, marketing and technology. He has over 20 years of experience covering information technology and 27 years as a web developer. His coverage spans customer experience, AI, social media marketing, voice of the customer, diversity & inclusion and more. Scott is a strong advocate for customer experience and corporate responsibility, bringing together statistics, facts and insights from leading experts to produce informative and thought-provoking articles.

Main image: Paul Hakimata on Adobe Stock