Editorial

AI Safety Was the Story. Power Was the Result.

By Emily Barnes
AI isn't being regulated because it causes harm. It's being regulated because it threatens powerful IP holders.

For years, artificial intelligence existed in a regulatory gray zone. The technology advanced quickly enough to reshape labor markets, education, media and public trust, yet not in ways that produced sustained political action.

Evidence of harm accumulated openly. Researchers documented algorithmic discrimination in hiring systems, credit scoring and predictive policing. Educators reworked assessment models midstream as generative AI tools entered classrooms. Artists watched distinctive styles absorbed into training data without consent or compensation. Policymakers acknowledged these developments, convened hearings, issued ethical principles and praised voluntary standards.

Very little of that activity translated into enforceable law.

That outcome did not reflect ignorance. The evidence was widely available. What was missing was incentive.


The AI Ownership Question

The regulatory stalemate broke only when generative AI began to pose a credible threat to entrenched intellectual-property regimes. Between 2023 and 2025, copyright litigation, licensing negotiations and enforcement actions reframed artificial intelligence from an abstract social risk into a concrete asset problem.

Once generative systems demonstrated an ability to reproduce recognizable styles, narrative structures and culturally embedded creative patterns at scale, the question shifted. Artificial intelligence was no longer primarily about innovation or harm. It became about ownership.

The clearest public signal arrived when major rights holders began coordinating enforcement and access simultaneously. Disney's late-2025 licensing and investment agreement with OpenAI, governing the controlled use of Disney, Pixar, Marvel and Star Wars intellectual property in generative tools, coincided with intensified enforcement against unauthorized AI platforms. Disney mattered less as a cultural brand than as an economic actor with leverage. The implication was direct: generative access would occur through negotiation, not assumption.

That episode did not initiate the shift. It revealed one already underway.

Regulation became politically viable once corporations required predictability around licensing, liability and control. Artificial intelligence is being regulated because powerful interests now demand legal insulation. Public protections will emerge, but incidentally, as a consequence of corporate risk management rather than as a primary design objective. That sequence is familiar across modern regulatory history.

Related Article: AI Copyright Law: Latest US & Global Policy Moves

Regulation Follows Power, Not Harm

American political culture often treats regulation as a response to social injury. Historical practice suggests a different pattern.

Warnings about systemic risk preceded the 2008 financial crisis by years. Regulators and economists identified predatory lending, opaque derivatives and institutional fragility well before markets collapsed. Those warnings produced reports and guidance, not structural reform. Comprehensive regulation followed only after the failure of Lehman Brothers and the threat of cascading institutional collapse. Dodd–Frank was framed publicly as consumer protection, yet its central function was stabilizing financial institutions whose failure endangered the global economy.

Consumers absorbed harm first. Regulation arrived once capital itself was at risk.

Artificial intelligence has followed the same trajectory. For nearly a decade, peer-reviewed research and federal risk frameworks documented bias in AI. Those findings produced voluntary standards and best-practice guidance. What accelerated federal attention was not discrimination. It was liability.

When Getty Images sued Stability AI over unauthorized training on proprietary photo libraries, the issue moved from ethics to enforceable economic loss. When major publishers and media organizations initiated coordinated litigation against OpenAI and other model developers, congressional urgency followed. Governance shifted into courtrooms amid legislative paralysis. Regulation responded not to warning signs, but to exposure.

Intellectual Property Is Already Governing AI

Long before statutory intervention, artificial intelligence began operating under private governance.

The 2023 licensing agreement between OpenAI and Axel Springer illustrated the dynamic clearly. The arrangement granted access to proprietary journalism from outlets including Politico and Business Insider while imposing explicit constraints on attribution, usage and content surfacing. That agreement did not merely compensate publishers. It shaped system behavior. Certain outputs became permissible because access was licensed; others disappeared because leverage was absent.

The Getty Images litigation produced similar effects. Training disclosures tightened. Dataset sourcing narrowed. Licensing negotiations became standard practice. Years of academic debate failed to produce comparable change. Litigation succeeded quickly.

European competition authorities have since warned that licensing asymmetries already produce uneven capabilities across foundation models. Access to proprietary content increasingly determines performance and market position. Regulation is not forthcoming; it is already operating through contracts and courts. Legislation will formalize boundaries that power has already drawn.

Once governance occurs through enforcement, statutory law becomes less about deliberation and more about codification.

Child Safety as Political Cover

Major US technology regulations rarely advance without a moral narrative capable of surviving Congress. Child protection routinely supplies that narrative.

COPPA (the Children's Online Privacy Protection Act) offers the template. Enacted in 1998, the statute was presented as a safeguard for children's privacy. Its operational effect was structural. Compliance demands favored large platforms with legal and technical capacity while discouraging smaller competitors. Market concentration followed enforcement.

The same pattern appears in the Kids Online Safety Act. Introduced after public revelations about social-media harm to adolescents, KOSA focused on duty-of-care standards, risk mitigation and content moderation — mechanisms already embedded within dominant platforms. Civil-liberties organizations warned that enforcement would consolidate power rather than alter incentives. The concern was well founded.

Artificial intelligence regulation now follows the same script. Child safety provides the political justification that renders intellectual-property enforcement urgent and palatable. It is not the motive. It is the wrapper.

What the Public Gains, Incidentally

None of this renders regulation meaningless for ordinary people. Benefits will exist. They will not be primary.


The European Union's AI Act prioritizes systemic risk, foundation model governance and copyright compliance. Transparency and explanation rights appear largely as downstream obligations designed to support institutional accountability. Individuals benefit indirectly.

In the United States, litigation pressure has improved dataset traceability and documentation. Creators gain leverage. Users gain limited visibility. These outcomes matter, yet they emerge as side effects of legal exposure rather than ethical realignment.

Related Article: Grok’s Spicy Mode Turns AI Into a Weapon of Exploitation

The Question That Remains

Once intellectual property becomes the organizing principle of AI regulation, creativity itself becomes regulated infrastructure. Compliance costs determine who can build models. Licensing determines who can train them. Legal exposure determines who can experiment.

The central risk is not censorship. It is consolidation.

The debate did not end through consensus. It ended through implementation. Regulation will arrive framed as protection. Underneath that language sits a familiar sequence: power moved first; protection followed where convenient.

Years from now, this period will not be remembered as the moment artificial intelligence was regulated. It will be remembered as the moment authority over speech, knowledge, creativity and acceptable participation was quietly formalized. The story emphasized safety. The result redistributed power.

The only unresolved question is whether anyone noticed while the structure hardened.


About the Author
Emily Barnes

Dr. Emily Barnes is a leader and researcher with more than 15 years in higher education. Her work focuses on using technology, AI and machine learning to innovate education and to support women in STEM and leadership, and she teaches and develops curricula in those areas. Her academic research and operational strategies are informed by her educational background: a Ph.D. in artificial intelligence from Capitol Technology University, an Ed.D. in higher education administration from Maryville University, an M.L.I.S. from Indiana University Indianapolis and a B.A. in humanities and philosophy from Indiana University.

Main image: MoiraM | Adobe Stock