
    Quick Reads

Digital Deception: The Rise of Deepfakes

Deepfakes are manipulated audio, video, or images that use artificial intelligence (AI) to create highly realistic content that can be difficult to distinguish from reality. The term “deepfake” derives from the deep learning techniques used to create them. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence.

In machine learning, a system uses training data to build a model for a specific task; the more robust and complete the training data, the better the model becomes. In deep learning, a model automatically discovers representations of features in the data that permit classification of that data. Such models are effectively trained at a “deeper” level. [1]

Deepfakes actually represent a subset of the general category of “synthetic media” or “synthetic content.” Synthetic media is defined as any media which has been created or modified through the use of AI or machine learning, especially if done in an automated fashion.

While this technology certainly has the potential for positive applications, the misuse of deepfakes presents new and complex challenges for individuals and businesses alike.

Reputational Risks 

Businesses need to be aware of the potential for deepfakes to spread misinformation about a particular topic, industry, person, or entity. Deepfake technology can be used to create convincing videos of CEOs and other public figures saying or doing things that never actually occurred, inflicting serious financial and reputational damage.

As seen in a recent case in Hong Kong, deepfakes are increasingly being used to commit financial crimes by impersonating individuals within a company in order to obtain sensitive information or funds. An employee in a multinational firm’s Hong Kong office was duped into attending a video call with what he thought were several other members of staff, including the company’s chief financial officer, all of whom were in fact deepfake recreations. Believing everyone else on the call was real, the worker agreed to remit a total of HK$200 million (about US$25.6 million) to the fraudsters.

Claims for Defamation 

The increase in the creation and dissemination of malicious deepfake content is also likely to lead to an increase in the number of defamation claims. However, the context in which the deepfake content is produced will play an integral part in the success of any claim. For example, a claim over content that was intended as parody would be unlikely to succeed. By contrast, where the reasonable viewer is not aware of a video’s falsity, it may be possible to bring a claim against the creator and/or publisher of the video, such as the host website.

AI in Hollywood 

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strike in 2023 also highlighted some of the unresolved legal and reputational issues for talent that have been brought about by the increased use of AI in the entertainment industry.

The use of this technology has its benefits: the creation of realistic digital characters, the enhancement of special effects, and even the generation of entire storylines. However, this progress has also given rise to AI-generated content that blurs the line between reality and fiction. Correspondingly, questions arise around consent if, for instance, a production company unilaterally regenerates an actor’s likeness, and around remedies for musicians if they (or their work) are recreated using AI technology without their permission. We are likely to see some interesting cases down the road as courts try to address these issues.

Data protection

The implications of deepfake technology also extend into the realm of data protection. It is arguable that, in processing the personal data required to create a deepfake, the creator is a controller subject to strict obligations on how the source material is processed. In the absence of any lawful basis for processing an individual’s face and voice, the creator may be liable.

Intellectual property

A deepfake may also breach intellectual property (IP) rights such as copyright, which may be relevant where other original works have been substantially copied in a deepfake creation. AI technology needs to be trained to know what the individual who is the subject of the deepfake looks like. It does this by combing the internet for photos, music, or videos of the person it is copying. However, it is the owners of the copyright in those photos or videos, rather than the individual depicted (unless they happen to be the copyright owner), who will have a cause of action for infringement if their works are copied without permission.

Future Considerations 

The rise of deepfakes presents complex legal and operational issues for businesses that require a multifaceted approach. Science and technology are constantly advancing. Deepfakes, along with automated content creation and modification techniques, merely represent the latest mechanisms developed to alter or create visual, audio, and text content. The key difference they represent, however, is the ease with which they can be made – and made well. 

Businesses should consider reviewing their current policies and procedures and implementing more robust controls to verify the authenticity of audio, video, and other media content before relying on it for important decisions. Technological solutions, such as digital watermarking and blockchain authentication, can also aid the detection and prevention of the spread of deepfakes. By embedding these technologies into disseminated media content, it becomes easier to trace its origins and verify its authenticity.
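As a minimal illustration of the verification idea behind these technologies (a simplified stand-in, not a full watermarking or blockchain scheme), a business could compare the cryptographic fingerprint of a received media file against a digest published by the original source; any alteration to the file changes the fingerprint. The function names below are hypothetical.

```python
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_authentic(path: str, published_digest: str) -> bool:
    """Check a received file against the digest published by its source."""
    return fingerprint(path) == published_digest
```

This only proves a file is byte-identical to the published original; real-world solutions such as content-credential watermarks aim to survive benign re-encoding while still exposing tampering.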

We have already started acting on projects involving the use of AI in aggregation tools and the AI replacement of primary talent in existing television commercials, for example. As the use of deepfakes looks set to continue to grow, it is important to take proactive steps to safeguard against their misuse. 

[1] United States Department of Homeland Security, 2023.

