UK High Court Rules in "Getty Images v Stability AI" Lawsuit

AICU media

On November 4, 2025, the UK High Court handed down its judgment in the landmark "Getty Images v Stability AI" lawsuit.

Neutral Citation Number: [2025] EWHC 2863 (Ch). Judgment by Mrs Justice Joanna Smith. https://www.judiciary.uk/judgments/getty-images-v-stability-ai/


The judgment was substantially in favor of the defendant, Stability AI. Getty's principal copyright claim (secondary infringement) was dismissed, and although trademark infringement was partially upheld, the court described its scope as "historic and extremely limited."

In summary, although Getty Images succeed (in part) in their Trade Mark Infringement Claim, my findings are both historic and extremely limited in scope. The Secondary Infringement Claim fails.

Below is a summary of the important points of the judgment.

1. Copyright Infringement (Secondary Infringement): Victory for Stability AI

Getty argued that the AI model (Stable Diffusion) was itself "made" through training that involved copyright-infringing copies of images, and that the model therefore constitutes an "infringing copy"⭐︎ under UK law.

Judgment: This claim was dismissed.

Getty Images’ claim of secondary infringement of copyright is dismissed.

The judge concluded that the AI model (Stable Diffusion) does not constitute an "infringing copy." The main reasons are as follows:

AI models do not "store" or "reproduce" works

The court explicitly described Stable Diffusion as "an AI model such as Stable Diffusion which does not store or reproduce any Copyright Works (and has never done so)." In other words, the judiciary recognized the technical reality that the model does not "memorize" the training data, but rather learns statistical "patterns" from it.

...an AI model such as Stable Diffusion which does not store or reproduce any Copyright Works (and has never done so) is not an “infringing copy”...(as is agreed by the Experts) Stable Diffusion does not itself store the data on which it was trained.

"Making Process" and "Final Product" are Separate

Getty argued that because copyright infringement (reproduction of images) occurred in the process of "making" the model (i.e., training), the resulting model is itself an infringing article.

However, the judge accepted Stability's argument: even if copies were used during training, the final product, the AI model itself (its model weights), does not store or contain those copies, and therefore the model cannot be called an "infringing copy."

In my judgment, it is not an infringing copy. It is not enough, as it seems to me, that (in Getty Images’ words) “the time of making of the copies of the Copyright Works coincides with the making of the Model”... by the end of that process the Model itself does not store any of those Copyright Works; the model weights are not themselves an infringing copy and they do not store an infringing copy....in its final iteration Stable Diffusion does not store or reproduce any Copyright Works and nor has it ever done so.

2. Getty's "Abandonment" of Major Claims

Notably, Getty themselves abandoned many of their original claims shortly before closing arguments.

Claim of Training in the UK (Primary Infringement)

Getty acknowledged that there was no evidence that the training and development of Stable Diffusion took place in the UK, and withdrew this claim.

...it is now acknowledged by Getty Images that (i) there is no evidence that the training and development of Stable Diffusion took place in the United Kingdom (such that what has been called "the Training and Development Claim" has been abandoned);

Copyright Infringement Claim Regarding AI "Output"

The so-called "Outputs Claim," which argued that the images generated by AI (output) themselves infringe copyright, was also abandoned.

(ii) ...the relief to which Getty Images would have been entitled in respect of their allegations of primary infringement of copyright (referred to as "the Outputs Claim") has now been substantially achieved. Thus the Outputs Claim has also been abandoned;

Infringement of Database Rights

The claim for database rights infringement related to the above was also abandoned.

and (iii) given its inherent link to the Training and Development Claim and the Outputs Claim, a claim for database rights infringement ("the Database Rights Infringement Claim") can now no longer be advanced.

3. Trademark Infringement: Recognized as a Limited and "Past" Issue

Getty argued that Stable Diffusion sometimes generates distorted signs (referred to as watermarks⭐︎ in the judgment) resembling its registered trademarks such as "gettyimages" and "iStock," and that this constitutes trademark infringement.

[Image: AI-generated monster image bearing a sign that "strongly resembles" the Getty Images watermark]

Judgment: This claim was partially accepted.

However, the court recognized infringement only in the following older models:

  • v1.x models⭐︎ (iStock trademark): infringement found under Section 10(1) (identical mark, identical goods/services) and Section 10(2) (likelihood of confusion).
  • v2.x models⭐︎ (Getty Images trademark): infringement found under Section 10(2) (likelihood of confusion).

Important Limitations

The court rejected all trademark infringement claims regarding the newer models SDXL⭐︎ and v1.6⭐︎, finding no evidence that any real-world user in the UK had generated watermarked images with these models.

I do not consider there to be any evidence that one real life user in the UK has generated a watermark using either SD XL or v1.6. The Getty Watermark Experiments failed to produce a result for either of these Models...

In addition, all claims based on Section 10(3) (unfair advantage or damage to reputation) were rejected.

This reflects the technical improvements Stability AI has made since SDXL (including an opt-out mechanism), which resolved the problems seen in the older models; accordingly, the judgment treats the infringement as a "historic" issue.

UK Ruling Acknowledges "Legitimacy of Technology" and Ends "Inefficient Conflict"

The judgment is substantially in favor of Stability AI: the copyright claim (secondary infringement) was dismissed, and the trademark findings, while partly in Getty's favor, were deemed "historic and extremely limited" in scope.

AICU has consistently argued that fair compensation for "creators" and the construction of an ecosystem are essential for the development of creative AI. From that perspective, this ruling is highly significant in that it legally rejects the simplistic criticism that "AI is an illegal copy" and opens the way for more constructive discussion.

Summary: AICU's Perspective and the Challenges of the "Creator Ecosystem"

This UK ruling made an important judgment regarding the legal status of AI technology: "AI models (weight files) are not 'replicas' of the original images used for training." This means that the judiciary has drawn a clear line against arguments that seek to regard AI models themselves as illegal pirated copies.

However, this ruling does not solve the more fundamental problems faced by creators.

  • Difference in Arguments: This ruling focused on the legal legitimacy of the "product," the AI model itself. Many creators, however, are questioning the transparency of the "learning process" (the data the AI is trained on) and its economic fairness, questions that persist even after the post-SDXL shift to opt-out.

  • Remaining Issues: The ruling holds that "the model is not a copy," but it does not answer whether it is ethical, or economically fair, to train a model on countless works without permission. Moreover, had that training taken place in Japan, it may well have been "legal" in the first place.

  • Creator's Dilemma: Many creators are caught in a gray zone: "I want to use AI, but that AI may be exploiting the works of my peers without permission." The legality of the technology (settled by this ruling) and the fairness of the creator ecosystem must be discussed separately.

  • Limitations of "Opt-Out": Stability AI introduced an opt-out mechanism after SDXL, but a scheme in which "those who wish to refuse must apply themselves" shifts costs and responsibilities that AI developers should bear onto creators, and may not be a fundamental solution.

Decisive Difference from Japanese Law (Copyright Act Article 30-4)

In this UK lawsuit, Getty was forced to abandon its claim of training in the UK (primary infringement) for lack of evidence. As a result, the dispute shifted to the more technical question of whether the AI model, as a product, constitutes an "infringing copy" (secondary infringement).

However, if this training had been conducted in Japan, it is highly likely that the AI side (Stability AI) would have argued that the training act itself is lawful under Article 30-4 of the Japanese Copyright Act (exploitation not aimed at enjoying a work, which covers reproduction for information analysis).

In that case, Getty would have faced an even harder fight than in the UK. The legality of the training act itself, which never became an issue in the UK, might have been decided in Japan in the AI side's favor, undermining the very core of the lawsuit.

Since the judgment notes that training was carried out on AWS, clarifying the cloud region in which it took place could help settle the jurisdictional question if it were ever disputed.

However, Japan's Article 30-4 is not a panacea, and the following important points remain.

  1. Existence of the "Proviso": Article 30-4 contains a proviso excluding uses that "unreasonably prejudice the interests of the copyright owner." Getty would have argued strongly that old models generating low-quality watermarked images that compete directly with its licensing market fall under this proviso.
  2. Trademark Rights Are a Separate Issue: Article 30-4 is merely an exception under copyright law. AI-generated watermarks⭐︎ (trademarks) raise a question of trademark infringement, which copyright law does not govern. So even if training were judged legal in Japan, trademark infringement would likely have been disputed separately, just as in this UK ruling.

For AICU's goal of "coexistence between AI and creators," it is essential to establish the legal legitimacy of technology like this UK ruling, as well as to build "transparency of learning data" and "fair compensation (consideration) mechanisms."

We should not take this ruling as a simplistic conclusion that "AI is legal," but rather as an opportunity to accelerate discussions on fair rules (such as AI taxes or levies modeled on spectrum usage fees) that allow AI development companies and creators to "co-create" as truly equal partners.

AICU media wants to continue to follow the latest lawsuits around the world from the perspective of "creating creators in the AI era."

Related Activities

The "legality of technology" shown by this ruling is distinct from the problem of "fairness of the learning process" that we face. To address the latter, AICU has launched the following signature campaign.

Commercial AI is profiting from Japanese works, but no compensation? ~Please oblige commercial AI models to "disclose learning data" and "fair compensation"~ https://www.change.org/AI-eco-for-creators


We are also conducting research on the creator economy: "Creator Survey 2025.10 in the Generative AI Era." First-round responses are open until 23:59 on Tuesday, November 11, 2025 (sharing welcome). Shortened URL: https://j.aicu.ai/R2511


⭐︎Glossary

  • Infringing copy: A term defined in Section 27 of the UK Copyright, Designs and Patents Act 1988 (CDPA). Getty argued that the AI model's "weight files" themselves fall under this definition because copyright infringement (copying of images) occurred in the process of "making" the model.
  • Watermarks: In the judgment, Mrs Justice Joanna Smith used this term to distinguish the watermark-like signs synthetically generated by the AI from Getty's real watermarks (the registered Marks).
  • v1.x, v2.x, SDXL, v1.6: Versions of the Stable Diffusion model released by Stability AI. The v1.x models were released in cooperation with CompVis; v2.x and later were released under Stability AI's leadership. The judgment specifically found no evidence of watermark generation by the newer models (SDXL, v1.6).