Top 5 tips for businesses to mitigate legal and regulatory risks associated with AI
Artificial intelligence is playing a pivotal role in today’s rapidly evolving business landscape. The development of generative AI, in particular, holds promise for revolutionising business operations. However, the potential for transformation is not without significant challenges.
In our AI Business Guide, we dissect the complex legal and regulatory risks that accompany the advancements in AI technology.
But what can businesses do to start mitigating those risks? Here are our top 5 tips:
1. Prepare and implement an AI acceptable use policy (AI AUP)
Your AI AUP should reflect what your organisation is, and is not, comfortable with when employees use AI tools in their roles. It should explain the guardrails in place to ensure that AI is used responsibly, ethically and legally. But it’s also important that the policy is flexible, so that it can reflect the changing regulatory landscape, evolving business needs and your risk appetite. This could be done by establishing a “live list” of prohibited and pre-approved types of AI that is updated periodically.
2. Embed an AI Risk Assessment Process (AI RAP)
An AI RAP should build on your organisation’s existing risk management processes to:
- help identify how and where regulatory, legal and ethical risks arise in the AI life cycle
- identify possible controls to help reduce risk
- identify when a data protection impact assessment may be required
- assess the residual risk after appropriate controls have been implemented
- ensure that AI risks are regularly reviewed
3. Ensure that the new and specific risks associated with AI are reflected in contracts
There is a plethora of contractual considerations that need to be taken into account when procuring or supplying an AI system, in particular regarding explainability, unlawful discrimination, data protection (especially important in relation to training data) and cybersecurity. Another key consideration is how to address the changes that will be required as a result of the EU AI Act (which has not yet been finalised and so remains a moving target) and any other applicable changes in law.
4. Ensure your Supplier Code of Conduct reflects the positions in your AI AUP
Accountability and governance are key components of the UK’s non-statutory, principles-based approach to AI. Therefore, ensuring that your supply chain uses AI in a responsible manner will help demonstrate good AI governance.
5. Increase AI literacy and training amongst your staff
Educate your employees on how AI works, how to identify AI risks and what their personal responsibilities are under the AI AUP. Whilst this will inevitably result in more queries for you from the business, it should help reduce overall risk and enable you to understand which areas of your organisation are most impacted by the rise of AI.
If you wish to delve deeper into the legal implications of AI, we invite you to explore our AI Business Guide. This resource is a collaborative effort, bringing together insights from experts across various practice areas within our firm to offer a holistic view of the intricate legal challenges posed by AI.
If you have any questions or concerns regarding AI, please get in touch.