Regulating AI – the impact of two key recent proposals: the UK’s National AI Strategy and the EU’s proposed Artificial Intelligence Regulation
With the hype surrounding artificial intelligence continuing to gather pace, it is useful to pause and consider some of the proposed regulatory changes that may affect the UK, particularly focussing on the UK’s National AI Strategy and the EU’s proposed Artificial Intelligence Regulation.
A recap of the key issues
Before we do so, it is useful to briefly recap some of the key legal issues affecting AI, as this context is essential to understanding how potential regulation could shape them:
- How to define AI. As yet, there is no universally accepted definition and, as such, there is a degree of variance. Whilst most definitions share some common ground (for example, making complex decisions autonomously (without human oversight), learning new tasks without being explicitly programmed to perform them, or simply mimicking intelligent human behaviour), the absence of a settled definition makes it difficult to determine what is, or is not, being regulated.
- Data protection, transparency and algorithmic bias. This is one of the key issues regarding AI. If an AI application is self-learning, it must learn from the data it processes, and if such data comprises personal data, data protection law will apply. Whilst the underlying questions may be the same as where data is used in any technology (Is the data being used fairly, lawfully and transparently? Do people understand how their data is being used? How is data kept secure?), the stakes are raised in complex AI systems. Moreover, if systems are trained on real-world data sets, there is potential for any human bias baked into those data sets to be magnified. This is obviously problematic, with the potential to cause real harm, but it is also difficult to spot where a black-box system takes ‘data set A’ and produces ‘result B’ without anyone, including the developer, necessarily understanding how it reached the conclusion it did.
- Contract practices and product liability. Who is at fault if an AI system fails to perform or, even worse, causes damage (think the driverless car that causes a road traffic accident)? The owner, the developer, the system itself (with distinct legal personality)? These are key concerns and existing contractual practices may need to adapt in order to address them.
For completeness, there are of course many other issues to consider, some of which are omitted simply in the interest of column space (for example, intellectual property issues) but others because their full scope is not yet known to anyone.
The two proposals
Starting with the most recent development to affect the UK landscape, on 22 September 2021 the UK government published its National AI Strategy (the “UK AI Strategy”). As with any government strategy document, the focus is policy driven rather than regulatory. The UK AI Strategy’s core purpose is to set out how the UK will become a world leader in AI in the coming years, based around three pillars: investment; ensuring AI benefits all sectors and regions; and governance. As such, it is a broad document covering areas such as investment, re-skilling and diversity. However, the third pillar, governance, is of particular interest from a legal perspective.
By contrast, on 21 April 2021, the European Commission published a far more concrete proposal: a draft Artificial Intelligence Regulation (the “EU AI Regulation”). Whilst this will of course not be directly applicable in a post-Brexit UK, the proximity and alignment of the UK and EU digital strategies mean that it will likely be an influential initiative for the UK, regardless of how it develops. The EU AI Regulation takes the form of a fully formed draft regulation, more narrowly focussed than the UK AI Strategy but also more precise, primarily seeking to govern the use of high-risk artificial intelligence systems and to ensure that the safety and fundamental rights of EU citizens are protected in their development and deployment.
So what do these key documents have to say on the key issues identified above?
How to define AI
Unsurprisingly, the UK AI Strategy largely ducks the challenge of defining AI, noting that the “clarity needed for legislation” is not required at this stage. Instead, it adopts a broad working definition, referring to mimicking facets of human intelligence and the ability to learn: “Machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks.”
The EU AI Regulation, however, is a legislative proposal and as such requires more clarity. AI is defined as “software that is developed with one or more of the techniques and approaches [including machine learning, logic and knowledge-based approaches and statistical approaches] . . . and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” It is a broad and yet technical definition. However, a key sub-set of this definition is the definition of high-risk AI, which includes AI systems that are intended to be used as a safety component of a product (e.g. a collision avoidance system), but also stand-alone systems that generally have the capacity to harm or significantly prejudice a person.
Data protection, transparency and algorithmic bias
Whilst the UK AI Strategy specifically addresses the issue of determining the role of data protection in wider AI governance and cross-refers to the DCMS ‘Data: a new direction’ consultation (closing on 19 November 2021) and how it may impact future AI regulation, it does not provide any unequivocal direction. The approach to data governance is very much one of ‘wait and see’.
By contrast, the narrower yet more focussed approach of the EU AI Regulation is also mirrored in its treatment of privacy. In short, given that the GDPR is considered technology neutral, privacy is not really what the regulation is about. That said, high-risk AI systems are to be heavily regulated, and an AI system may be high risk precisely because of its impact on privacy (for example, deploying subliminal techniques to distort behaviour, or exploiting vulnerabilities of a specific group of individuals).
Contract practices and product liability
The UK AI Strategy does not directly deal with liability issues. This is perhaps one of the areas where the market will be left to shape developments as much as regulation. The strategy does note, however, that the UK is taking a global approach to shaping technical standards, stating that “the government has established a strategic coordination initiative with the British Standards Institution (BSI) and the National Physical Laboratory to explore ways to step up the UK’s engagement in global standards developing organisations.”
Likewise, the EU AI Regulation does not specifically address these issues, but it does indirectly include provisions that may shape liability and contract practices, through the specific requirements it imposes on developers and users of high-risk AI systems. For example, the user of a high-risk system must abide by the “provider’s instructions on the use of the system, and take all technical and organizational measures indicated by the provider to address residual risks of using the high-risk AI system”. A failure to take such measures may affect judgments of fault and liability.
In terms of next steps, for the UK AI Strategy the UK government will publish a White Paper in early 2022 providing proposals on governing and regulating AI, which should provide further clarity. With respect to the EU AI Regulation, the draft proposals will now go through the EU legislative process, involving a further 12 months of debate and consultation.
In the meantime, it is worth noting that whilst the contrast between the UK AI Strategy and the EU AI Regulation is clearly shaped by the differing form of the initiatives (policy v. legislation), it perhaps also reflects the different priorities of the UK (seeking to project a post-Brexit, ‘open for business’, innovation-friendly image) and the EU (with a more confident but prescriptive approach, buoyed by the knowledge that the GDPR has already become a quasi-global standard for other jurisdictions to consider). This is certainly an area where developments should be closely watched.