Expert Insights

EU AI Act: Key provisions now in force

On 2 February 2025, the first of many compliance deadlines imposed by the European Union’s (EU) Regulation (EU) 2024/1689, commonly known as the EU AI Act (the Act), took effect.

The first provisions of the Act to take effect include both a positive and a negative requirement:

  • positive, in that providers and deployers of AI Systems must proactively ensure that individuals (including, but not limited to, their employees) have sufficient AI literacy to operate AI Systems; and
  • negative, in so far as those looking to leverage AI must not use AI Systems for certain AI practices deemed to pose unacceptable risk (Prohibited AI Systems).

Published two days later, on 4 February 2025, the EU Commission Guidelines, C(2025) 884 (the Guidelines), provide much-needed perimeter guidance as firms address the cost and complexity of complying with the Act. The Guidelines clarify what constitutes a Prohibited AI System and touch upon instances where such systems interact with the AI literacy requirements.

AI literacy requirements

Article 4 of the Act (Article 4) requires entities involved in the provision and deployment of AI Systems to take steps to ensure that those operating AI Systems possess a sufficient degree of AI literacy.

The Act defines AI literacy as having the requisite skills, knowledge, and understanding to appreciate the potential benefits and risks inherent in the use of AI, including any potential for harm.

When assessing the necessary level of AI literacy, firms must consider the experience, educational background and training of the relevant staff, as well as the context in which a given AI system will be used. Note that the obligation to ensure AI literacy potentially extends beyond staff (subject to the guidance below).

Now that Article 4 has come into effect, AI literacy must be embedded by design and by default across all aspects of a firm’s business model, rather than being confined to a select few, such as the firm’s IT team or chief technology officer.

Firms’ compliance must also be comprehensively documented. Firms will need to have implemented robust AI governance policies and continuing professional development (CPD) AI training programmes for all relevant staff.

Firms would do well to note that merely having a generic AI governance policy in place is highly unlikely to satisfy the AI literacy requirement. At a minimum, any such policy must be tailored to the firm’s specific AI use cases.

While Article 4 does not prescribe specific penalties or fines for non-compliance, regulatory authorities are likely to consider violations of Article 4 when determining the appropriate sanctions for other infringements of an organisation's duties under the Act.

Prohibited AI Systems

Article 5 of the Act (Article 5) effects a ban on Prohibited AI Systems. The ban is predicated on the principle that certain uses of AI pose unacceptable risk, including:

  • AI Systems that implement social scoring to assess or categorise individuals or groups based on their conduct or their actual, assumed, or anticipated traits;
  • AI Systems that use subliminal techniques to manipulate or deceive with a view to altering a person’s or group’s behaviour by impairing their capacity to make informed choices;
  • AI Systems that exploit the vulnerabilities of certain individuals or groups to alter their behaviour;
  • AI Systems that apply profiling or personality trait evaluations to predict criminal conduct;
  • AI Systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or from CCTV footage;
  • AI Systems that infer an individual’s emotions in a workplace or educational setting;
  • Real-time biometric identification systems used in publicly accessible spaces for law enforcement purposes; and
  • Biometric categorisation AI Systems that use biometric data to infer characteristics such as political views, race, trade union membership, beliefs, or sexual orientation.

By way of context, these Prohibited AI Systems are deemed to be in conflict with European Union values, the rule of law, and fundamental rights.

Exceptions to certain Prohibited AI Systems are available in limited circumstances – more on this below.

The provisions of the Act setting out the penalties for breaches of Article 5 are set to come into force on 2 August 2025. Infringements of Article 5 could cost firms up to €35 million or 7% of their total worldwide annual turnover for the preceding financial year, whichever is higher. By way of illustration, a firm with a worldwide annual turnover of €1 billion could face a fine of up to €70 million. Compounding this, of course, is the potentially severe reputational damage of being amongst the cohort of non-compliant firms.

Interaction between Articles 4 and 5 of the Act

The Guidelines illustrate the nuanced interplay between the two Articles detailed above.

For example, in the context of real-time Remote Biometric Identification (RBI) systems (the use of which may be prohibited under Article 5(1)(h) of the Act), the Guidelines confirm that no decision that would adversely affect an individual may be taken solely on the basis of the output of the real-time RBI system. Here, the firm seeking to deploy the real-time RBI system should, in its Fundamental Rights Impact Assessment (FRIA), clarify the role of the human agent in verifying and interpreting the output, and provide training on how to operate the system. The Guidelines further prescribe that the individual in charge of human oversight must have “sufficient AI literacy, training and authority to understand how the system functions and when it underperforms or malfunctions.”

Emotion recognition AI Systems

The Guidelines provide the following clarifications in relation to the Article 5(1)(f) prohibition on emotion recognition AI Systems in workplaces and educational settings:

  • Firms looking to use AI Systems to infer an individual’s emotions in a work or educational context for medical or safety reasons will be able to do so, subject to the other requirements of the Act. This exception is, however, caveated: the Guidelines explicitly confine “safety reasons” within this exception to “the protection of life and health and not to protect other interests, for example property against theft or fraud.”
  • Firms with employees likely to be exposed to specific heightened risks, such as professional pilots or drivers, will benefit from Recital 18 of the Act, which excludes physical states such as pain or fatigue from the definition of emotion recognition AI Systems.
  • Even if an emotion recognition system does not fulfil the conditions for it to be subject to the prohibition in Article 5(1)(f) of the Act, it will be categorised as a high-risk AI System (HRAI System) under Article 6(2) and Annex III, point (1)(c) of the Act. Firms using emotion recognition AI Systems classified as high risk will be subject to enhanced compliance obligations in respect of, inter alia, data governance, transparency and human oversight.

Subliminal techniques

The Guidelines also provide further detail on the ban on the use of subliminal techniques in Article 5(1)(a) of the Act. This Article is not a blanket ban on all subliminal techniques: the Guidelines confirm that such techniques are prohibited only where all the other conditions listed in Article 5(1)(a) are also met. The Guidelines provide the following non-exhaustive examples of subliminal techniques:

  • Subliminal visual messaging: AI can subtly flash images or text during videos; such messaging is too quick for individuals to consciously register but is still capable of influencing attitudes and behaviours.
  • Subliminal audio messaging: AI might introduce sounds or spoken words quietly or amidst other noises, affecting listeners subconsciously.
  • Subliminal tactile feedback: AI could induce barely noticeable physical sensations, influencing emotions or actions without conscious perception.
  • Undetectable cues: AI could present visual or auditory stimuli, such as ultra-fast flashes or ultra-low volumes, that escape normal human detection.
  • Embedded imagery: AI can embed images within visuals that elude conscious recognition but can still subconsciously impact behaviour.
  • Misdirection: AI may focus user attention on certain elements in order to obscure others, leveraging cognitive biases and vulnerabilities impacting individuals’ attention.
  • Temporal manipulation: AI may modify the perceived flow of time during interactions, potentially affecting user behaviour by fostering either impatience or reliance.

Whether these techniques will be caught by the prohibition will be largely fact-dependent. The potential for perimeter uncertainty here is exacerbated by the lack of a definition of AI Systems deploying ‘purposefully manipulative techniques’. In addressing this point, the Guidelines draw a distinction between non-AI systems that may seek to manipulate or influence human behaviour, and AI Systems that are able to adapt and respond to individuals’ particular circumstances and vulnerabilities; the rationale being that it is this adaptability that heightens AI’s potential for harm in this context.

Article 5(1)(a) is particularly pertinent to advertisers. The Guidelines give the example of a “chatbot that is designed… to use subliminal messaging techniques, such as flashing brief visual cues and embedding inaudible auditory signals or to exploit emotional dependency or specific vulnerabilities of users in advertisements.” Here, the Guidelines confirm that, if the other conditions in Article 5(1)(a) of the Act were fulfilled, in particular the requirement of significant harm, such a system would be likely to fall within the scope of the prohibition.

Firms seeking to create efficiencies via the use of AI chatbots as part of, for example, the sales pipeline ought to familiarise themselves with the Guidelines, particularly as no one-size-fits-all assessment is prescribed; the Guidelines emphasise the need for a case-by-case approach throughout their examples.

For further information, please see our ‘Beginner’s Guide’ to the Act here.

Let’s talk

If you would like to understand how your firm may be affected by the EU AI Act, please contact Partners Racheal Muldoon and Mark Bailey.
