
    Expert Insights

Using Generative AI and staying on the right side of the law

A guide for Creative Digital Businesses and in house teams


Introduction

Creative digital businesses, freelancers and in-house marketing teams are seeing the benefits of a wide range of AI tools, and many are moving ahead rapidly with adoption.

In November 2023, Bristol Creative Industries, a UK-focused trade group for the creative digital sector, published the findings of its member survey. The survey showed the most common AI tools were ChatGPT, Microsoft Copilot and Character AI for text and code; Midjourney, Stable Diffusion and DALL·E 3 for image generation; and Parker AI, Runway and Google Gemini for multimodal work (combining text, images and video). If the members were repolled now, just six months later, the list of tools named would be significantly longer – such is the speed at which new AI tools are becoming available.

Having predicted that this sector would be brave and early AI adopters (see our 2024 predictions), the use cases for generative AI within creative digital businesses and teams are already vast. Whether automating routine tasks (like testing software), generating ideas for pitches or simply showcasing creative opportunities at speed and scale, the myriad opportunities need to be rolled out with an understanding of the legal and ethical risks.

This guide aims to explain how generative AI can be used effectively and legally within creative digital businesses and creative teams, and identifies the key risks so that users can adopt best practices.

Best practice tips for using Generative AI:

  1. Keep an eye on forthcoming AI regulation globally.
  2. Consider IPR infringement risks with prompts, tool use and outputs.
  3. Review and assess AI tools' terms of use. 
  4. Draft an AI Use policy. 
  5. Review and adjust Customer Contracts and SOWs.
  6. Amend Employee, Freelancer and Supplier terms.
  7. Check your insurance. 
  8. Repeat steps 1–7: the AI landscape will keep evolving, and you will need to keep adapting to new challenges and risks. 

1. AI Regulation Globally 

AI Regulation in the UK  

The UK is adopting a pro-innovation approach to AI. It has adopted a framework for regulating AI, relying on existing regulators such as the ICO, IPO, FCA and Ofcom to implement sector-based guidance and apply existing laws.

For some this does not go far enough, and a UK private member's bill, the Artificial Intelligence (Regulation) Bill, has passed its second reading in the House of Lords and is proceeding to a third reading. It is unclear whether this bill will be adopted by the government, but with an imminent election, a potential change of government and an increasing focus on AI security worldwide, it is a possibility. In tandem, the TUC has launched its own Artificial Intelligence (Regulation and Employment Rights) Bill, which seeks to regulate the use of AI systems by employers. This bill has yet to be introduced in Parliament or adopted as part of governmental policy.

EU AI Regulation 

On 13 March 2024, the European Parliament formally adopted the EU AI Act – almost the final step in a long process which began back in April 2021.

US AI Regulation 

The US, like the UK, has yet to formally adopt a federal AI Act, although there have been Executive Orders (in particular in October 2023) directing federal policy and requiring action by federal agencies. The October Order tasked government departments and agencies with evaluating the safety and security of AI technology and implementing relevant processes and procedures around AI use. In addition, several states have proposed laws on various aspects of AI.

The Bletchley Declaration 

Beyond the UK, EU and the US, a number of countries, including China and India have proposed AI regulation in some form.  

Recognising that local regulation will not be sufficient to manage a technology with worldwide reach, representatives of the UK, EU, US and 25+ other countries, including China, signed the Bletchley Declaration. The Declaration identifies the opportunities and risks of AI, along with proposals for collaboration on AI scientific research. Crucially, it seeks to build a global understanding of and commitment around these risks. The AI Seoul Summit in May 2024 will seek to build on the Bletchley Declaration.

Although future AI regulation is something to be aware of, as a user of AI tools today you should focus on the law as it stands now – many of the legal issues around the use of generative AI are already addressed by existing regulations, legislation, principles or common law.

2. Common AI Terms

A few common terms used in this guide:

  • Prompt – the initial request given to an AI tool. It is the input that triggers the model to generate a response. The output can then be re-prompted to create a subsequent output.
  • Output – The response generated by an AI tool in reaction to a prompt. The output is what the model predicts as the most appropriate continuation or answer to the prompt.
  • Blend – any technique used to create an output in the style of a prompt or reference, assuming the AI tool has been trained on that reference (such as an artwork by a famous painter or a book by a well-known author). The technique blends the initial prompt with the style of a subsequent prompt or reference, so the output looks like the requested style. Blend techniques are used to create deep fakes. 

3. Intellectual Property Rights & AI (in tools used and in outputs and prompts)

One of the key considerations when using AI tools is the risk of Intellectual Property Rights (IPR) infringement. The relevant IPR in this context is most likely to be copyright (films, sound recordings, written and artistic works etc.) and trade marks (logos etc.), but could also include other rights such as patents, database rights and design rights.

AI & Copyright 

Copyright protects “original literary, dramatic, musical or artistic works”; “sound recordings, films or broadcasts”; and “typographical arrangements of published works”. The owner of a copyright-protected work has the exclusive right to:

  • Copy or issue copies to the public;
  • Rent or lend the work;
  • Perform, show or play the work;
  • Communicate to the public;
  • Make adaptations or do any of the above to an adaptation. 

AI & Trademarks Law

Trade marks can appear inadvertently in images generated by AI tools. A simple image prompt to Midjourney, for example, to “generate an image of a female rower winning at the Olympics” can create an output showing a female rower with a branded logo on her clothing. If this were then used in a marketing campaign, it may trigger trade mark or passing off claims from the brand/rights owners. 

AI & Moral Rights

The author of certain copyright works also has moral rights: a right to object to derogatory treatment of the work and a right to be identified as its author.

So copyright owners can prevent AI tools and users from copying, adapting and sharing their copyright-protected work where they have not granted a licence or assigned their rights. Further, where moral rights have not been waived, authors can insist on being identified, or object to derogatory treatment, in certain situations. 

AI & IPR in prompts and outputs 

The risk of IPR infringement arises at three key stages: (i) the prompt stage; (ii) creation of the output; and (iii) subsequent use of the output.

In the context of prompting, any copyright-protected work (e.g. an image, article or video) added to a prompt could infringe a third party’s IPR if there is no appropriate licence, exemption or defence in place (more below). This can be easily fixed by ensuring you have a suitable licence to use the rights in the underlying material in any prompts for the purpose (including within the outputs). Harder to manage is the risk of IPR infringement in works which may have been used to train the models on which the AI tools operate. These underlying works could be adapted, or included wholesale, in any output.

In many cases, we have little transparency over which works have been used to train AI tools, and therefore whether outputs (especially when blending techniques are used) infringe third-party IPR. Many of the most popular tools (such as Midjourney) currently have no “in tool” ability to acquire a suitable licence of the underlying rights in the outputs, and provide no IPR indemnities or commitments. 

There is also a misconception amongst users that taking content from channels such as YouTube without an appropriate licence, and using it as an initial prompt to create content via generative AI tools, will be “safe”. The misconception is that by “diluting” such content with multiple prompts, users eliminate any copyright infringement risk (because the result no longer looks like the original). Whilst this may make infringement harder to detect, it will not eradicate the risk.

As detection tools improve, the sensible approach is to consider whether your use falls within one of the copyright exemptions or defences. These include exemptions for temporary copies and incidental inclusion, and defences of research and private study, caricature, parody and pastiche. Tread carefully here: many of these exemptions won’t be applicable due to commercial and deliberate use, and where defences might be suitable, there are conditions you will need to meet. For example, when relying on a defence of pastiche you will need to show there was no commercial purpose, the original work was publicly available, and your use was limited to no more than was needed.

In such cases, sufficient acknowledgement of the author of the underlying work must also be made – something which is often missed. If you are going to rely on this type of defence, we’d strongly recommend you get legal advice on the nuances of this area of law for your specific use. 

AI Generative Tools Terms & IPR Infringement 

Reviewing a generative AI tool’s terms of use is vital. Here we can assess whether the tool provider offers any help and protection to users against the risk of IPR infringement claims. Typically, the terms of use for most tools do not offer enough protection for commercial use, and the terms range dramatically. Midjourney**, for example, says users own all IPR in any outputs, except where ownership is subject to the terms of use and the rights of third parties. Midjourney makes no warranties as to whether the output will infringe third-party rights. Unsurprisingly, Midjourney tries to exclude all liability, and expects users to indemnify Midjourney against all IPR claims relating to outputs and use of the service. This is a shift from the previous terms, which stated Midjourney would “come and find you” if you infringed someone’s IPR and it cost Midjourney money.

In comparison, if we look at Microsoft’s terms*, with effect from October 2023 paid business users of Copilot and Bing get the benefit of a copyright commitment. This commitment covers most third-party IPR claims (note it doesn’t extend to trade mark or defamation claims) arising from output created via the Copilot suite of tools. The commitment excludes claims relating to prompts, modifications to the outputs, or situations where the user knows the output infringes the rights of others. It is also financially capped and excludes indirect and consequential damages. If you are subject to an IPR claim involving output created by Copilot, the copyright commitment, whilst helpful, will not fully indemnify you; so you need to consider this risk before embarking on the commercial use of outputs. 

Turning to the OpenAI terms, these make it clear that the user owns the IPR in the output, except where the output is not unique. This signals to users that other users may obtain the same, or similar, output. Users represent and warrant that they have all rights needed to provide the prompts and that the prompts don’t infringe anyone’s rights. This seeks to ensure users haven’t misused confidential data or personal data, and have a suitable licence for any underlying IPR in the prompts (all fair and reasonable).

As with Midjourney and other tools, you must actively opt out if you do not want OpenAI to use your outputs. This is easily misunderstood: essentially, OpenAI may feed your outputs back into the training model unless you opt out – clearly this could undermine any exclusive content you may have been seeking to create. It is also a risk if you are using confidential information or personal data within prompts. 

Another easily overlooked condition is that OpenAI, like many generative AI tool providers, requires you to make clear that any output is not human generated.

The safest option for avoiding IPR infringement via generative tools is to use tools trained on “home grown” content, such as Getty’s AI generator. The Getty AI generator terms* transparently promise that the model is trained on Getty’s own creative content and data, supported by an uncapped IP indemnification from Getty. Rightly, Getty is proud to have created a generative AI model which compensates content creators for the use of their work and protects users of the tools. This model must be the way to go, balancing providers’ financial investment in generative AI tools with the rights of content creators.  

We recommend a legal review of the terms of use for all AI tools before you embark on their use, so that you fully understand and comply with the terms and flow down relevant conditions (more on this below) to freelancers, employees, suppliers, subcontractors, customers and clients. 

4. AI & Quality of Outputs & Relevance

Whilst generative AI will certainly speed up creative and design processes, it also enables less qualified creatives to expand the range of work they can produce. Graphic designers can now be content producers in minutes. Projects which five years ago would have taken 10 days to create can now be generated in a few hours. 

Given the range of users of these tools, it is important to remember that generative AI will be biased, will discriminate and will hallucinate. It is also inaccurate at times. Careful QA of outputs is needed to avoid embarrassment or, worse still, negligence, discrimination, equality or defamation claims.

5. Create an AI Usage Policy

Creating an internal AI policy is a good starting point for any organisation, and getting input across divisions and teams on your ethical and moral position on generative AI is essential. The policy should apply to your internal use, including use by employees and freelancers. Where you act for clients, it should also cover your use of generative AI in client deliverables and services.

This policy will differ between organisations. Some edgy brands may want to use generative AI in a disruptive way, despite the legal risks. For others, avoiding an IPR claim in marketing and advertising campaigns is crucial. There may also be regulatory or other commercial considerations. 

Based on this position, you can build a process for which tools can be used and for what purpose, and assess the associated risks. What data and content can be used within the permitted tools, and what checks will you carry out? What is your QA process for outputs?

Consider what training is required, who will have access and, importantly, what you will do if something goes wrong. 

Your policy should also set out your approvals process for generative AI tools within the business. 

Once you have prepared the internal AI policy, you should consider how this needs to apply to your external suppliers, freelancers and subcontractors.

6. AI & Contractual Obligations 

Review contracts with clients if you are providing services or deliverables, and work with your legal team to make suitable adjustments based on your AI policy and the generative tools you plan to use. For example, if your client terms say your client will own the copyright in any deliverables you create for them, and you have now agreed that you can use Midjourney at the discovery stage of a project, you will need to adjust those terms. Equally, you will need to reconsider warranty terms and liability caps. 

7. AI Employee & Freelancer Guidance, Contracts and Policies

Ensure your employee and freelancer contracts, and any relevant supply contracts, incorporate compliance with your AI policy. Consider collaboration, training and how you will continue to monitor compliance and risk.

8. Confidentiality

Whilst you should already have guardrails around the use and protection of confidential information and data within your organisation, these should be reviewed in this era of generative AI.  

Many AI tools use prompts to train their models; some allow an opt-out, but this is not always obvious to users. If confidential information is disclosed in this way and regenerated as outputs for other users, this potentially lifts the confidentiality wrapper on the information itself (meaning it is no longer protected by confidentiality). If third-party confidential information is disclosed in this way, you are highly likely to be in breach of confidentiality terms, for which liability is typically uncapped. Where the information involves personal data, there will also be exposure to fines from regulators such as the ICO in the UK.

9. Insurance 

Finally, check your insurance policies so you know the level and scope of any cover in relation to generative AI use. Many policies are currently silent on AI use, but this may well change, especially in light of the increase in IPR claims.  

If you have any questions, please contact Rebecca Steer, rebecca.steer@crsblaw.com


* Up to date as of 1 May 2024. 
** Terms up to date as of 1 May 2024; review based on UK terms – other terms may be applicable. 
