
    Quick Reads

Digital Deception: The Rise of Deepfakes

Deepfakes are audio, video or images manipulated with artificial intelligence (AI) to create highly realistic content that can be difficult to distinguish from reality. The term “deepfakes” derives from the deep learning techniques used to create them. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence.

In machine learning, an algorithm uses training data to build a model for a specific task. The more robust and complete the training data, the better the model becomes. In deep learning, a model automatically discovers representations of features in the data that permit its classification; such models are, in effect, trained at a “deeper” level. [1]
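
For readers curious what “learning from training data” looks like in practice, the following toy sketch (illustrative only, not drawn from any real deepfake system) fits a one-parameter model to example data by gradient descent, the same basic mechanism that, at vastly greater scale, underpins deep learning:

```python
# Minimal illustrative sketch: a model "learns" a parameter from training
# data. A one-parameter linear model y = w * x is fitted by gradient
# descent to data generated from the (hidden) true rule y = 3 * x.
# The richer and cleaner the training data, the better the model
# recovers the underlying rule.

def train(samples, epochs=200, lr=0.01):
    """Fit w in y = w * x by minimising squared error over the samples."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y
            w -= lr * error * x  # gradient step on (w*x - y)^2 / 2
    return w

# Training data drawn from the true rule y = 3 * x
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = train(data)
print(round(w, 2))  # close to the true coefficient 3.0
```

Deep learning replaces this single hand-chosen parameter with millions of parameters arranged in layers, which is what allows the model to discover its own feature representations rather than being told what to look for.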

Deepfakes represent a subset of the broader category of “synthetic media” or “synthetic content.” Synthetic media is any media that has been created or modified through the use of AI or machine learning, especially where this is done in an automated fashion.

While this technology certainly has the potential for positive applications, the misuse of deepfakes presents new and complex challenges for individuals and businesses alike.

Reputational Risks 

Businesses need to be aware of the potential for deepfakes to spread misinformation about a particular topic, industry, person or entity. Deepfake technology can be used to create convincing videos of CEOs and other public figures saying or doing things that never actually occurred, inflicting serious financial and reputational damage.

As we have seen in a recent case in Hong Kong, deepfakes are increasingly being used to commit financial crimes by impersonating individuals within a company in order to obtain sensitive information. An employee in a multinational firm’s Hong Kong office was duped into attending a video call with what he thought were several other members of staff, all of whom were in fact deepfake recreations. Believing everyone else on the call was real, the worker agreed to remit a total of HK$200 million (about US$25.6 million) to the fraudsters.

Claims for Defamation 

The increase in the creation and dissemination of malicious deepfake content is also likely to lead to an increase in defamation claims. However, the context in which the content is produced will play an integral part in the success of any claim. For example, claims over content intended as parody would be unlikely to succeed. By contrast, where the reasonable viewer is not aware of a video’s falsity, it may be possible to bring a claim against the creator and/or publisher of the video, such as the host website.

AI in Hollywood 

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strike in 2023 also highlighted some of the unresolved legal and reputational issues for talent brought about by the increased use of AI in the entertainment industry.

The use of this technology has its benefits, through the creation of realistic digital characters, the enhancement of special effects and even the generation of entire storylines. However, this progress has also given rise to AI-generated content that blurs the line between reality and fiction. Correspondingly, questions arise around consent if, for instance, a production company unilaterally regenerates an actor’s likeness, and around remedies for musicians if they (or their work) are recreated using AI technology without permission. We are likely to see some interesting cases down the road as courts try to address these issues.

Data protection

The implications of deepfake technology also extend into the realm of data protection. It is arguable that, in processing the personal data required to create a deepfake, the creator is a controller subject to strict obligations on how the source material is processed. In the absence of any lawful basis for processing an individual’s face and voice, the creator may be liable.

Intellectual property

A deepfake may also infringe intellectual property (IP) rights such as copyright, which may be relevant where other original works have been substantially copied in a deepfake’s creation. AI technology needs to be trained on what the subject of the deepfake looks like, which it does by combing the internet for photos, music or videos of the person being copied. However, if those works are copied without permission, it is the owners of the copyright in the photos or videos, rather than the individual subject (unless the subject also owns the copyright), who will have a cause of action for infringement.

Future Considerations 

The rise of deepfakes presents complex legal and operational issues for businesses that require a multifaceted approach. Science and technology are constantly advancing. Deepfakes, along with automated content creation and modification techniques, merely represent the latest mechanisms developed to alter or create visual, audio, and text content. The key difference they represent, however, is the ease with which they can be made – and made well. 

Businesses should consider reviewing their current policies and procedures and implementing more robust measures to verify the authenticity of audio, video and other media content before relying on it for important decisions. Technological solutions, such as digital watermarking and blockchain authentication, can also aid the detection and prevention of the spread of deepfakes. By embedding these technologies into disseminated media content, it becomes easier to trace its origins and verify its authenticity.
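
One simple building block behind such authentication schemes is publishing a keyed fingerprint alongside the content, so that anyone holding the key can check the bytes have not been altered. The toy sketch below (illustrative only; the key and media bytes are placeholders, not a real product or scheme) uses an HMAC-SHA256 tag for this purpose:

```python
import hashlib
import hmac

# Hypothetical shared signing key, for illustration only
SECRET_KEY = b"demo-signing-key"

def fingerprint(media_bytes: bytes) -> str:
    """Return a hex tag cryptographically binding the content to the key."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def is_authentic(media_bytes: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the received bytes."""
    return hmac.compare_digest(fingerprint(media_bytes), tag)

original = b"frame-data-of-genuine-video"
tag = fingerprint(original)

print(is_authentic(original, tag))                # True: content unaltered
print(is_authentic(b"tampered-frame-data", tag))  # False: content changed
```

Real provenance standards build on the same idea but use public-key signatures and signed metadata manifests, so that verification does not require sharing a secret key with every recipient.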

We have already started acting on projects involving the use of AI in aggregation tools and the AI replacement of primary talent in existing television commercials, for example. As the use of deepfakes looks set to continue to grow, it is important to take proactive steps to safeguard against their misuse. 

[1] United States Department of Homeland Security, 2023.

