Why Getty Images v Stability AI Judgment Will Not Answer Our Key Questions
The recent High Court trial in the Getty Images v Stability AI case raised some critical questions concerning the complex relationship between AI models and copyright law. But, keeping things relatively simple, let's unpack why we will not receive the answers that we most want.
Key claim dropped
Towards the end of the three-week trial, the closely watched case took a somewhat surprising turn. Getty dropped its primary copyright infringement claim. One of its main allegations had been that Stability AI’s Stable Diffusion model was trained on millions of Getty photos without permission, and that the act of training involved copyright infringement. With that claim abandoned, the key question of infringement by training will not be answered in this trial.
Jurisdictional hurdles
Why did Getty withdraw this fundamental claim? In short: jurisdictional challenges. Most of the AI training seems to have occurred on US servers, making it hard to establish a sufficient connection to the UK for UK copyright law to apply. That law is territorial, generally only covering infringing acts that occur in the UK. Getty seems to have conceded that it was struggling to prove the alleged copying took place within UK jurisdiction, given the preponderance of evidence showing that training occurred abroad. In other words, the heart of the primary infringement lay outside the High Court’s reach.
Another dropped claim: That the outputs infringe
Getty also gave up on its claim that Stable Diffusion’s outputs infringed copyright. It appears the photo agency giant realised it faced too much of a challenge in demonstrating that any AI-generated image replicated a “substantial part” of a Getty photo. And since Stability AI had already adjusted its system to stop the generation of Getty’s watermark in outputs, winning on that issue became less critical. In the face of these difficulties, and perhaps coupled with the additional task of demonstrating that any output infringement was attributable to Stability and not to the deployer or end user, Getty made what it called a “pragmatic decision” to focus on claims it saw as stronger.
For legal observers, the dropping of the primary infringement claims covering both inputs (training) and outputs was a blow: there would be no judgment addressing some truly fundamental issues. The trial continued, however, and some other very interesting questions will still be decided by the Honourable Mrs Justice Joanna Smith DBE.
Secondary infringement – A novel strategy
In the face of the problems with its primary infringement claims mentioned above, Getty was left relying on a secondary copyright infringement claim. Under UK law, even if the original copying happened outside the UK, the importation of an infringing article into the UK can amount to an infringement. As we understand it, Getty’s claim is that the Stable Diffusion model itself is, for the purposes of the Copyright, Designs and Patents Act 1988, an “infringing article”, built from Getty’s protected images. In other words, Getty says that even though the training may have taken place abroad, offering the trained model for use in the UK amounts to bringing an unlawful copy into the country.
This is a creative legal strategy, but it comes with challenges. First, Getty needs to argue that an intangible AI model constitutes an “article” for the purposes of the UK legislation, and that there is no requirement for it to be a physical article. Second, the judge is asked to treat a machine-learning model – essentially a complex dataset of weights and code – like a collection of unlawful copies of images. Proving that the model encodes Getty’s photos in a way that infringes (and pinning liability on Stability for its distribution) is far from straightforward. Stability AI, for its part, rejects the idea that Stable Diffusion contains “copies” of Getty’s works and denies that its model infringes third-party rights. We will just have to wait and see what the court makes of this secondary infringement theory. While not hopeless, it does feel difficult.
Trade mark infringement and passing off: The watermark issue
Aside from copyright, Getty is also suing for trade mark infringement (and the tort of passing off) over Stable Diffusion occasionally reproducing the “Getty Images” watermark logo in outputs. Getty argues that these AI-generated watermarks could mislead people into thinking the images are from, or are endorsed by, Getty, thus infringing its trade mark and co-opting or damaging its goodwill and reputation. Stability AI strongly disputes this, arguing that no reasonable user would think an AI-generated picture with a stray Getty watermark is a real Getty image or an endorsement. The argument is that it is obviously a quirk of the AI, not a deliberate act of branding. In Stability’s view, the watermark artefact is not a “commercial message” or badge of origin from the company at all, and the relevant public will understand that. This part of the case will test how trade mark law applies when an AI system replicates a brand’s logo. It is an interesting issue, albeit somewhat of a sideshow compared to the core copyright questions.
A missed opportunity
Dropping the main copyright claims means the biggest questions about AI and copyright remain unanswered. The secondary infringement and trade mark claims will keep the judge deeply occupied over the next few months preparing her judgment. The decision will certainly yield some fascinating insights – on whether an AI model can be treated as an infringing copy, or how trade marks are handled in AI outputs. But these issues are narrower proxies for the most important (in my view) questions: Can an AI’s training on unlicensed data violate copyright? And, perhaps almost as significantly, can the outputs from an AI system infringe copyright? Those questions will unfortunately not get a direct answer in this case.
We will look instead to the UK government’s next move in addressing the question of AI and copyright through legislative changes. In particular, will we see an AI industry-friendly, EU-style system whereby copyright works can be used for training unless the owner has expressly opted out? At the same time, eyes will turn across the pond to see how the courts handle the many AI-related cases currently underway in the US. In a couple of recent decisions from the Northern District of California, AI training was found to constitute fair use, although we should be chary of reading too much into them as they are so fact-specific. Meanwhile, in the EU there has also been a referral to the European Court of Justice asking it to decide on some of these complex issues. Progress towards greater judicial clarity on these tough and novel questions is being made.
(*Thanks are due to Rebecca Newman on LinkedIn who, together with her colleagues, helped keep the rest of us informed as the trial unfolded.)