
    Quick Reads

Digital Deception: The Rise of Deepfakes

Deepfakes are manipulated audio, video or images that use artificial intelligence (AI) to create highly realistic content that can be difficult to distinguish from reality. The term “deepfakes” is derived from the use of deep learning techniques. Deep learning represents a subset of machine learning techniques which are themselves a subset of artificial intelligence.

In machine learning, an algorithm uses training data to develop a model for a specific task. The more robust and complete the training data, the better the model gets. In deep learning, a model is able to automatically discover representations of features in the data that permit classification of the data. Such models are effectively trained at a “deeper” level. [1]

Deepfakes represent a subset of the general category of “synthetic media” or “synthetic content.” Synthetic media is defined as any media which has been created or modified through the use of AI or machine learning, especially if done in an automated fashion.

While this technology certainly has the potential for positive applications, the misuse of deepfakes presents new and complex challenges for individuals and businesses alike.

Reputational Risks 

Businesses need to be aware of the potential of deepfakes to spread misinformation about a particular topic, industry, person, or entity. Deepfake technology can be used to create convincing videos of CEOs and other public figures saying or doing things that haven’t actually occurred, inflicting serious financial and reputational damage.

As we have seen in the recent case in Hong Kong, deepfakes are increasingly being used to commit financial crimes by impersonating individuals within a company in order to obtain sensitive information. An employee in a multinational firm’s Hong Kong office was duped into attending a video call with what he thought were several other members of staff, all of whom were in fact deepfake recreations. Believing everyone else on the call was real, the worker agreed to remit a total of HK$200 million (about US$25.6 million) to the fraudsters.

Claims for Defamation 

The increase in the creation and dissemination of malicious deepfake content will likely also lead to an increase in the number of defamation claims. However, the context in which the deepfake content is produced will likely play an integral part in the success of any claim. For example, claims surrounding content that was intended as a parody would be unlikely to succeed. Where the reasonable viewer is not aware of a video’s falsity, however, it may be possible to bring a claim against the creator and/or publisher of the video, such as the host website.

AI in Hollywood 

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) strike in 2023 also highlighted some of the unresolved legal and reputational issues for talent brought about by the increased use of AI in the entertainment industry.

The use of this technology has its benefits, through the creation of realistic digital characters, the enhancement of special effects, and even the generation of entire storylines. However, this progress has also given rise to AI-generated content that blurs the lines between reality and fiction. Correspondingly, questions are raised around consent if, for instance, a production company unilaterally regenerates an actor’s likeness, and around remedies for musicians if they (or their work) are recreated using AI technology without their permission. We are likely to see some interesting cases down the road as courts try to address these issues.

Data Protection

The implications of deepfake technology also extend into the realm of data protection. It is arguable that, in processing the personal data required to create a deepfake, the creator is a data controller subject to strict obligations on how the source material is processed. In the absence of any lawful basis for processing an individual’s face and voice, the creator may be liable.

Intellectual Property

A deepfake may also breach intellectual property (IP) rights such as copyright, which may be relevant where other original works have been substantially copied in a deepfake creation. AI technology needs to be trained to know what the individual who is the subject of the deepfake looks like. It does this by combing the internet for photos, music or videos of the person it is copying. However, it is the owners of the copyright in those photos or videos, rather than the individual depicted (unless they happen to own that copyright), who will have a cause of action for infringement if their works are copied without permission.

Future Considerations 

The rise of deepfakes presents complex legal and operational issues for businesses that require a multifaceted approach. Science and technology are constantly advancing. Deepfakes, along with automated content creation and modification techniques, merely represent the latest mechanisms developed to alter or create visual, audio, and text content. The key difference they represent, however, is the ease with which they can be made – and made well. 

Businesses should consider reviewing their current policies and procedures and implementing more robust ones to verify the authenticity of audio, video, and other media content before relying on it for important decisions. Technological solutions, such as digital watermarking and blockchain authentication, can also aid in detecting deepfakes and preventing their spread. Embedding these technologies into disseminated media content can make it easier to trace its origins and verify its authenticity.

We have already started acting on projects involving the use of AI in aggregation tools and the AI replacement of primary talent in existing television commercials, for example. As the use of deepfakes looks set to continue to grow, it is important to take proactive steps to safeguard against their misuse. 

[1] United States Department of Homeland Security, 2023.

