AI and Dispute Resolution: Managing Legal Risk in an Evolving Landscape
The use of AI by businesses in their services, operations and products has increased rapidly over the past few years, but as businesses seek to unlock the benefits of AI, there are potential litigation risks of which parties should be aware.
The complexity of AI systems, and of the commercial agreements that govern their use, means that the potential for disputes is heightened, whether those disputes arise between contracting parties over the use of AI systems or from legal liabilities that use of a system could create in relation to third parties.
Contractual disputes arising between AI suppliers and customers
Disputes between providers of AI services and their customers are likely to increase as the technology becomes more embedded in the day-to-day operations of businesses. Contractual disputes between customers and suppliers of AI systems can be complicated by the following factors:
- difficulties pursuing claims against a supplier for breach of contract due to uncertainties over deliverables;
- uncertainties as to what constitutes a breach of the agreement (for example, failure to conform to the specification, objectives or standards agreed for the system itself);
- alternative sources of fault, such as defects in the data set on which the AI is trained rather than in the AI system itself;
- complex issues of causation and loss;
- risk of reliance on implied terms (for example, software is not a ‘good’ attracting the associated implied terms unless it is supplied on a physical storage medium); and
- the extent of indemnities given as to losses caused to third parties.
These issues, and measures that could be taken to obviate the underlying concerns, are considered further below.
Liability for AI errors will of course depend on context, but the tort of negligence and implied statutory terms may give rise to further causes of action, unless and until specific AI liability legislation is introduced.
It is worth noting that the European Commission has considered this issue at length as part of reforms to its product liability regime. However, having proposed AI-specific legislation adapting non-contractual fault-based civil liability rules to AI, it has since indicated that this proposal will not proceed.
Care should be taken when drafting to keep abreast of legislative changes and to ensure that commercial agreements remain up to date.
Liability to third parties arising from the use of AI systems
As well as inter-party disputes, there is also the potential for the use of an AI system to create, in certain circumstances, liabilities to third parties. This has been touched on elsewhere in this Guide but may include areas such as:
- IP infringement;
- data protection breaches;
- discrimination (for example, in employment);
- product liability; and
- regulatory breaches or concerns.
In the area of product liability, the EU has recently adopted a revised Product Liability Directive, which lays down common rules on the liability of economic operators for damage suffered by natural persons caused by defective products, including certain software and AI-enabled products.
All of these disputes will, of course, be fact-dependent, but before implementing any AI system, care must be taken to consider how the system will be used and where the risk of that use lies.
Minimising contractual and operational risk in AI projects
Tips to minimise risk at the contract drafting stage include:
- agree in the contract the correct forum for resolving disputes concerning the AI system;
- agree the standards, specification and objectives with which the AI system should conform and against which it should perform;
- ensure good governance and thorough review by the customer;
- maintain clear records and engage with relevant processes (relevant to causation and loss); and
- agree robust warranties and indemnities in the contract to minimise impact on customer/supplier (as the case may be) and deal with potential liabilities to third parties.
Resolving AI disputes through arbitration and ADR
In agreements concerning the use of AI, care should be taken over which forum is selected for the resolution of disputes.
Arbitration should be weighed against litigation in the national courts, as the confidentiality of arbitral proceedings may be beneficial in keeping a business's use of AI out of the public court system.
Arbitral institutions are increasingly introducing bespoke rules for AI disputes (see, for example, the JAMS AI Disputes Rules, introduced in 2024) that may help resolve such disputes expeditiously. These rules typically seek to keep business-sensitive data as confidential as possible whilst allowing tribunals to rule on the matters in dispute.
Other forms of ADR (alternative dispute resolution) should also be considered: the UK courts will be keen to establish that there has been an attempt to resolve a dispute before the parties resort to the court system. At some point, it may also be necessary to consider whether the dispute itself should be decided by AI.
Managing AI-related dispute risk
Managing AI-related legal risks requires a proactive, multi-layered approach as UK regulation evolves. Key concerns include data protection, IP rights, contractual liability, and algorithmic bias. Organisations must comply with laws such as the UK GDPR and the Data Protection Act 2018, especially when AI handles personal data or makes automated decisions.
IP disputes are increasing, particularly around copyrighted content used in AI training. Clear contracts are vital to define responsibilities when AI systems fail or produce biased results. Legal teams should adopt governance frameworks, conduct fairness audits, and participate in regulatory sandboxes. As AI grows more autonomous, assigning accountability remains a major legal challenge.
What’s next for AI and Dispute Resolution law?
AI is set to transform dispute resolution law, blending tech innovation with evolving UK regulations. As tools like generative AI assist with drafting, research, and arbitration, legal professionals face challenges around accountability, transparency, and fairness.
UK firms are cautiously adopting AI to boost efficiency while maintaining human oversight to prevent errors. The future points to hybrid models in which AI supports, but does not replace, human judgment. Regulators are expected to introduce clearer guidelines on ethical use, data integrity, and the admissibility of AI-generated evidence. The key challenge remains balancing innovation with justice and due process.