
    Expert Insights

AI and Employment Law: Fairness, Transparency and Workplace Risk


AI tools are used in various aspects of the employment relationship and have been for some time. From job advertising and initial sifts in the recruitment process, to managing absences and determining rotas, automated decision-making is playing a part, and this will only increase as further AI tools are deployed. Despite the opportunity for enhanced business productivity and time-saving, over-reliance on AI tools, without careful management, risks undermining the personal nature of the employment relationship and the nuanced decision-making often required to manage a workplace empathetically, at best, and producing discrimination and bias at worst.

What is the impact of AI on recruitment and hiring decisions?

Employers are increasingly using AI tools to sift initial applications and CVs and to search social media profiles for key terms. This can lead to automated decision-making in which applications are rejected without any direct human involvement. 2025 data collected by DemandSage indicated that 87% of companies now use AI in recruitment, and these numbers will no doubt continue to increase. Under data protection legislation there is a right (in some circumstances) to human review of a decision made by a solely automated decision-making process, but this limited right does not provide sufficient protection against potential bias in the algorithms used.

Should employers be concerned about bias and discrimination in AI decisions?

AI bias is often a product of the way the tool has been modelled and the type of data fed into it during training and development. In a machine learning context, the potential problems were highlighted by Amazon's use of automated CV screening several years ago. Trained on Amazon's historic recruitment data, the algorithm, through machine learning, "taught" itself that male candidates were preferable to female candidates. Amazon abandoned the tool, but it stands as a warning of the discrimination that may arise. Where AI tools continue to develop as they receive new information, it becomes harder to know what the underlying algorithm is basing its decisions on, making it difficult for employers to justify their decision-making as the process becomes more opaque.

Issues can even arise before the application stage, when jobs are advertised. In 2023, Global Witness conducted research which revealed clear gender bias in Meta's Facebook algorithm: in the Netherlands, a receptionist role was advertised to female Facebook users in 97% of cases and a mechanic role to male Facebook users in 96% of cases, with similar statistics in France. In February this year, the Netherlands Institute for Human Rights found that Meta was not fulfilling its duty of care to its Dutch users by using algorithms with a discriminatory effect. Similarly, the French equalities regulator said in a ruling last month that Meta's algorithms are sexist and in breach of France's anti-discrimination laws, giving the company three months to propose measures to rectify this. This sets a European precedent for treating algorithmic bias as discrimination in law, expanding the reach of equality regulation and making it clear that transparency, fairness and human supervision are required when deploying such systems.

Uber has also faced recent criticism over the facial-identification AI software it required its drivers to use when logging on to its driver app. The software employed a photo comparison tool to verify drivers' identities by matching their pictures against those stored in its database at app login. The app reportedly struggled to accurately recognise individuals with darker skin tones, leaving numerous workers unable to access the app and secure work. Testing revealed that the software had a failure rate of 20.8% for females with darker skin and 6% for males. A tribunal claim brought by one affected driver, who was removed from the app due to "continued mismatches" in the photos he was submitting, settled before its 17-day hearing listed late last year.

Other AI tools have the potential to discriminate against employees with disabilities. For example, automated shift-allocation tools that use AI to assess data on workers' past availability and productivity may offer fewer shifts, and consequently reduced pay, to an employee whose availability or productivity is affected by a disability. Employers must therefore be alive to these sorts of issues to avoid discrimination claims, which can be brought as one or a number of: direct discrimination, indirect discrimination, harassment, discrimination arising from disability, or failure to make reasonable adjustments (in a disability claim).

How can employers ensure compliance with employment laws and regulations when using AI?

The use of AI technology in, for example, a redundancy process would make it much more difficult for an employee to understand whether a decision to dismiss is rational and fair unless the AI model offers appropriate transparency and explainability. The laws protecting against unfair dismissal and discrimination require an employer to act fairly and appropriately. Without a transparent and explainable understanding of the underlying model, employers will find it difficult to defend claims, leaving them exposed. From both employee-relationship and risk-management perspectives, employers need to be certain they can show that the decisions they make are objective and non-discriminatory.

Key tips for employers using AI when making employment decisions

  • Conduct a thorough risk assessment to identify potential biases in AI systems and their impact on recruitment and employment processes.
  • Perform due diligence on third-party AI providers to understand their algorithms and data sources.
  • Request transparency from vendors about how their AI systems make decisions and any measures they have in place to mitigate bias.
  • Offer training to hiring managers on the ethical use of AI in recruitment, focusing on recognising and mitigating bias.
  • Ensure hiring managers understand the limitations of AI tools and the importance of human oversight in decision-making.
  • Encourage hiring managers to provide feedback on AI systems to improve their accuracy and fairness.
  • Continuously monitor AI systems for signs of bias or unfair outcomes, using metrics and audits to assess their performance.
  • Establish a process for reviewing AI-driven decisions to ensure they align with organisational values and legal requirements.
  • Be prepared to make reasonable adjustments for candidates who may be disadvantaged by AI tools, ensuring equal opportunities.
  • Consider alternative assessment methods for candidates who request adjustments due to disability or other factors.
  • Regularly review adjustment policies to ensure they meet the needs of diverse candidates and comply with legal obligations.
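The monitoring and audit steps above can be sketched in code. The following is a minimal, hypothetical illustration, not a legal compliance tool: it applies the "four-fifths rule", a widely cited heuristic for spotting adverse impact by comparing group selection rates, as one example of a bias metric an employer might track. The group names, figures and 0.8 threshold are all assumptions for illustration only.

```python
# Illustrative sketch (not legal advice): auditing an AI screening tool's
# outcomes for adverse impact using the "four-fifths rule" heuristic.
# All data in this example is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are conventionally flagged for human review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes from an AI CV sift
outcomes = {
    "group_a": (60, 100),  # 60% selection rate
    "group_b": (40, 100),  # 40% selection rate
}
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is 40/60, roughly 0.67
print(flagged)  # ['group_b'] -> warrants human review
```

A ratio below the threshold does not itself establish discrimination; it is a trigger for the kind of human review and documented justification the tips above describe.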

What’s next for AI and Employment Law?

As highlighted in our recent Insight on AI and Regulation and Ethics, there are currently no specific UK AI laws in force. However, other frameworks are gathering momentum in the employment space. In April 2024, the Trades Union Congress published a draft Artificial Intelligence (Employment and Regulation) Bill, setting out a potential framework for regulating AI in the workplace. Employers in the EU will already be facing some AI-related obligations arising from the new EU AI Act, some provisions of which are already in force.

With the recent decision of the French equalities regulator, there is growing recognition of the need for supervision, open scrutiny and accountability in automated decision-making. Greater regulation, and an influx of AI-related case law, appear likely.
