
AI Copyright Law 2025: Latest US & Global Policy Moves

By David Gordon
AI copyright law is in flux as courts and regulators worldwide define fair use, licensing and authorship in the generative AI era.

A courtroom in California fills with stacks of lyrics. Across the country in New York, piles of newsprint become exhibits. The defendants aren’t rival publishers or songwriters, but AI companies whose chatbots and search engines are rewriting the rules of authorship.

Anthropic faces music publishers who say its Claude model swallowed thousands of songs for training. In March 2025, Judge Eumi Lee declined to issue an injunction, calling the publishers’ request too broad and their evidence of harm too thin. Meanwhile, Dow Jones and the New York Post are pursuing Perplexity, arguing its “answer engine” lifts entire news articles and repackages them in a way that diverts readers and revenue. In August 2025, Perplexity failed to persuade a court to dismiss or move the case, leaving publishers with momentum.

These disputes offer a window into how copyright law is being reshaped, with training data, outputs and licensing models under scrutiny from judges and industries with billions at stake.


AI Copyright Law in the US: Where Courts Are Drawing the Line

Out in San Francisco, engineers feed AI models everything from songs to news articles to books. Creators hear uncanny echoes of their own voices, sometimes credited but more often not.

Generative AI writes headlines, ad copy, film scripts and even patent sketches. Enterprises see transformation in workflows, audience reach and creative volume. Creators sense the undercurrent, their work and style flowing into pipelines they never imagined.

Recent developments reveal how courts and agencies are beginning to draw sharper lines between fair use, licensing and outright infringement, especially in cases where generative AI systems rely on copyrighted material for training: 

  • The US Copyright Office released Part 3 of its report on copyright and artificial intelligence, Generative AI Training, in May 2025. The report maps how copyrighted works are gathered and used for training and emphasizes that fair use turns on purpose, market effect, lawful access and the degree of transformation.
  • Artists and publishers continue pressing for licensing, transparency and compensation as generative AI expands.
  • In Thomson Reuters v. Ross Intelligence, a federal court ruled that copying Westlaw headnotes to build a competing legal research tool was not fair use, pointing to commercial purpose, similarity and market harm as decisive factors.
  • In Andrea Bartz et al. v. Anthropic, Judge William Alsup held that training a large language model on copyrighted books — even when obtained legally — was “exceedingly transformative” and constituted fair use. Mike Keyes, Partner at Dorsey & Whitney, drew a crucial boundary: “This case is about inputs, not about outputs.”

Related Article: Inside Anthropic’s $1.5B Generative AI Copyright Lawsuit Settlement

AI Training vs. AI Output: Where Legal Risk Really Lies

Roanie Levy, Legal and Licensing Advisor for the Copyright Clearance Center, has followed these shifts from courtroom to policy memo, and warns that inputs and outputs are distinct in law but related in risk.

"US Courts view substitutive outputs as evidence that undermines fair use for training, and even without a smoking-gun output, mass commercial copying for training faces tough scrutiny," said Levy. "Licensing inputs helps, but it does not clear outputs, so clear both stages.”

Courtrooms are beginning to separate lawful ingestion from unlawful acquisition, a distinction that can decide whether an AI product scales or stalls. 

Global AI Copyright Laws: A Patchwork Still in Progress

Washington

In Washington, a line has been drawn around authorship. The Copyright Office insists that a work created entirely by a machine cannot carry protection, and federal courts have reinforced that stance. The Thaler v. Perlmutter decision reads almost like a reminder from another century. Copyright flows through human hands, even if those hands are typing prompts into a system that can mimic art, prose or photography at scale. 

Brussels

Brussels, often described as the capital of paperwork turned into art form, has chosen a path of obligations layered carefully and deliberately. The EU AI Act requires developers of general-purpose models to publish summaries of their training data, while the older text and data mining rules let rightsholders pull the handbrake through a machine-readable opt-out. Together these rules force builders to think about provenance in a way Silicon Valley has often postponed, and the penalties for ignoring them run high.
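
The EU does not prescribe a single format for that opt-out, but in practice many rightsholders express the reservation through crawler directives. A minimal sketch of a robots.txt that reserves a site's content against well-known AI training crawlers (the user-agent tokens shown are published by their operators; the list is illustrative, not exhaustive):

    # robots.txt: a machine-readable reservation against AI training crawlers
    User-agent: GPTBot            # OpenAI's training crawler
    Disallow: /

    User-agent: CCBot             # Common Crawl, widely used in training corpora
    Disallow: /

    User-agent: Google-Extended   # opts content out of Gemini model training
    Disallow: /

    User-agent: *                 # ordinary search indexing remains allowed
    Allow: /

Whether directives like these satisfy the text and data mining rules' "machine-readable" standard is itself debated, which is why some publishers layer them with HTTP headers and explicit site terms.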

London

London sits in the middle of its own policy debate. A sweeping exception for text and data mining once seemed close to approval until publishers and artists raised loud objections. The government retreated and opened consultations that now drift toward the Brussels approach, with transparency and opt-outs taking the stage. For enterprises in the United Kingdom, the moment feels like a long intermission with everyone waiting for the curtain to rise on the next act.

Asia

Asia has chosen a looser path. Japan's 2018 amendment to its Copyright Act opened the door by allowing copyrighted works to be used for data analysis, which in practice stretches to AI training. Singapore followed in its Copyright Act 2021 with an exception for computational data analysis, giving developers a freer runway than their Western peers. The contrast is striking. A developer in Tokyo can train with relative ease, while a counterpart in Paris is expected to document every stage and publish a data summary.

Global AI Copyright Law Remains a Work in Progress 

The global map is jagged, and for companies the consequences are practical. Hiring a developer in Tokyo opens one set of freedoms, deploying a product in Paris brings a different set of reporting duties, and the rules in Washington turn on whether a human guided the process. The story of AI copyright (and really all AI legislation) is still being drafted, and the ink has barely dried.

For companies operating across borders, the variability of legal standards creates a compliance matrix few can afford to guess wrong.

“Fair use is not a business model,” Levy said. “It is a narrow, fact driven defense that varies across datasets and use cases. At scale that variability becomes unmanageable risk, and competitive uses are far less likely to be fair use. For multinationals, the risk multiplies because exceptions differ by country and jurisdictions like the EU do not recognize fair use at all.”

The Corporate Response to AI Copyright Chaos

While lawmakers argue about statutes and precedent, companies are improvising their own score.

Licensing Deals 

Some of the biggest AI players have started buying peace through licensing. OpenAI signed with the Associated Press, pulling decades of reporting into its training vaults. Shutterstock now treats its photo archive like jet fuel for machine learning, cutting deals with developers who once scraped images for free. These contracts feel less like experiments and more like prototypes for an entire licensing economy.

Provenance Standards 

Others are trying to win the trust game with provenance. Adobe attaches “content credentials” to AI outputs, essentially a digital receipt that says who made what and how. Startups market “clean” datasets scrubbed of copyrighted work, betting that courts will favor AI models built on sanctioned material rather than pirated archives. Provenance becomes both a legal hedge and a sales pitch.
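
The idea beneath those credentials is simpler than the standards that implement it. As a rough illustration (not Adobe's actual Content Credentials, which follow the C2PA specification and are cryptographically signed), a provenance receipt can be modeled as a record binding a hash of the content to claims about its origin. A minimal Python sketch, with field names invented for illustration:

    import hashlib
    import json
    from datetime import datetime, timezone

    def make_credential(content: bytes, generator: str, human_author: str) -> dict:
        """Build a simplified provenance record for a piece of generated content.

        An illustrative stand-in for a real C2PA manifest: it binds a SHA-256
        hash of the output to claims about who and what produced it. A real
        system would sign this record so the claims can be verified later.
        """
        return {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,          # the model or tool that produced it
            "human_author": human_author,    # the person who guided the output
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    if __name__ == "__main__":
        record = make_credential(b"generated image bytes", "example-model-v1", "jdoe")
        print(json.dumps(record, indent=2))

Even this toy version shows why provenance doubles as a legal hedge: when a dispute arises, the deployer can point to when a piece of content was made and under whose direction.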

Stronger Attribution

Even the defendants in headline cases are bending their products around the courtroom spotlight. Perplexity, under fire from publishers, has promised stronger rules for attribution and source handling. Anthropic, facing the music industry, agreed to block Claude from coughing up entire song lyrics on demand. These are tactical moves, maybe even fig leaves, but they show how litigation is already steering product design.
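
The lyrics guardrail hints at a broader pattern: output-side filtering. Anthropic has not published its implementation, but the basic idea is easy to sketch: compare candidate output against a protected corpus and refuse anything that reproduces long verbatim runs. A hypothetical Python n-gram filter (the threshold, n-gram size and corpus handling are all invented for illustration):

    def ngram_overlap(candidate: str, protected: str, n: int = 8) -> float:
        """Fraction of the candidate's word n-grams found verbatim in a protected text."""
        def ngrams(text: str) -> set[tuple[str, ...]]:
            words = text.lower().split()
            return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        cand = ngrams(candidate)
        if not cand:
            return 0.0
        return len(cand & ngrams(protected)) / len(cand)

    def should_block(candidate: str, corpus: list[str], threshold: float = 0.2) -> bool:
        """Refuse output that substantially reproduces any protected work."""
        return any(ngram_overlap(candidate, work) > threshold for work in corpus)

Production systems are far more elaborate, but the legal logic is the one Levy describes: what comes out of the model matters as much as what went in.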

Can the Old Media Model Save the New AI Economy?

Meanwhile, rights groups are sketching out collective licensing systems that look suspiciously like old media playbooks. The idea resembles the way radio stations pay royalties, pooling money that then flows to musicians. Whether this model takes root for AI training remains unclear, but the fact that collective schemes are even on the table suggests the market is inventing its own infrastructure faster than lawmakers can legislate.


As Levy put it, “Creators should prepare for more transparency and more choice. Opt outs, registry tools and collective licenses will sit alongside direct deals. Those who make their terms clear, and preferably machine readable where possible, will shape the market.”

Related Article: Judge Backs Anthropic: AI Training on Legal Books Ruled Fair Use

Frequently Asked Questions

How is AI changing copyright law?

AI copyright law introduces new questions around authorship, ownership and fair use. Unlike traditional works, AI-generated content may lack a human author, which means it often doesn’t qualify for copyright protection under current US and EU rules. The law now must address both the training process (how data is used) and the output (what AI produces).

Can AI-generated works be copyrighted?

As of 2025, US law requires a “human author” for copyright protection. Works created entirely by AI — with no meaningful human contribution — are not protected. However, hybrid works, where AI is a tool guided by human input and creative decisions, may qualify for partial protection depending on the degree of human authorship.

Is training AI on copyrighted material fair use?

That depends on several factors, including the purpose of the use, its commercial nature, the amount of material used and its impact on the original work’s market. Some courts have found training “transformative,” while others see it as a substitute that harms creators. The legal consensus remains in flux, and fair use is evaluated case by case.

How do AI copyright laws differ around the world?

  • US: Focuses on case law and fair use, with rulings evolving through lawsuits.
  • EU: Emphasizes transparency and accountability, requiring developers to publish summaries of training data.
  • Asia: Generally more permissive; Japan and Singapore allow data mining for AI training under broad exemptions.

The result is a fragmented global landscape that complicates compliance for multinational companies.

What risks do enterprises face when deploying generative AI?

Legal exposure. If an enterprise deploys generative AI trained on copyrighted material without proper licensing, it could face lawsuits or reputational damage. Companies should document data provenance, apply human oversight and align with jurisdiction-specific rules, especially when deploying AI across borders.

About the Author
David Gordon

David Gordon investigates executive leadership for a global investment firm, freelances in tech and media and writes long-form stories that ask more questions than they answer. He’s always chasing the narrative that undoes the easy version of the truth.
