Expert Insights

We asked ChatGPT about harnessing AI in the workplace – this is what it said

When tasked with writing a Passle about the rise of AI in the workplace, where better to turn than to ChatGPT itself? Having heard the now-famous program recommended as a time-poor lawyer’s new best friend for getting the ball rolling on blog posts, I signed up and generated this article, Transforming the Workplace: The Expanding Role of AI, in a matter of moments.

So far so good – but was it me or ChatGPT? This leads to an interesting question: what counts as ‘cheating’ if an employee is asked to do a task and uses AI to do it? The use of tools like ChatGPT by individual employees has not yet been fully examined from a workplace perspective, particularly when it comes to drawing the line between a useful resource and an employee not doing their job properly.

Knowledge vs Information

In particular, employers want to ensure employees have ‘knowledge’ (which includes understanding and familiarity through experience) and don’t just rely on getting ‘information’ from using AI. This is particularly important as the information given by AI tools, while it may sound convincing, is often inaccurate. Therefore, it is vital that employees treat ChatGPT as a fallible resource, much like Google, and review any work or solutions it gives.

With that in mind, ChatGPT can be a useful tool, particularly for HR professionals. Lucy Heath, HR and People Consultant at Seedfield HR Solutions, notes: “If you feed in the right cocktail of requests, you can receive well-formed templates for policies and processes as well as objective outcomes to employee issues. I’ve received great ‘advice’ from ChatGPT on motivating teams and I know of a colleague who requested that ChatGPT design a candidate scoring matrix for them; while it wasn’t perfect, it provided a good starting point.”

Reviewing and assessing ChatGPT’s work will also ensure employees engage with and understand the work they produce, which is vital for effective learning and development. Therefore, companies need to think now about issuing guidance and protocols on the use of AI to their staff to draw a clear line on what’s acceptable. At the same time, employers should include staff in their wider AI plans, with HR Consultant Geoff Smith commenting: “Recent press coverage may create a fear of AI in society, and in the workplace, which will need to be addressed by business leaders. The workforce will need to be taken on a journey to ensure that the process of selecting and adopting AI-based tools is collaborative and consultative.”

Dangers of Discrimination

Just as employers will want to ensure their workers aren’t overly reliant on AI, they should be aware of the dangers of over-reliance themselves. In particular, a growing use of AI in the workplace is in the recruitment of new staff. It may be tempting to streamline the recruitment process, for example by using automation or AI to filter out certain applicants at the first hurdle. However, employers should be careful about the criteria they set, and how the AI is trained, to avoid an unintentional breach of discrimination laws. For example, deciding to filter out all applicants who are not based in the UK could amount to indirect race discrimination; from an employment law perspective, it is an objective assessment of skills and experience that should drive the initial filter on any new appointment.

Employers should also be aware that AI may teach itself to filter on certain criteria. For example, Reuters reported that Amazon’s AI recruitment tool taught itself that male candidates were preferable, because it was trained on historical CV patterns in which men’s CVs far outnumbered women’s, reflecting the gender imbalance in the tech industry. The tool was subsequently scrapped.
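To make that mechanism concrete, the sketch below is a deliberately simplified toy illustration (not a reconstruction of Amazon’s tool or any real system) of how a screening score learned purely from who was hired in the past can end up rewarding and penalising words that merely correlate with gender. All keywords and figures are invented for illustration.

```python
# Toy illustration only: a naive CV-screening score learned from historical
# hires. All keywords and figures are invented; this is not any real tool.
from collections import Counter
import random

random.seed(0)

MALE_PROXIES = ["rugby", "captain"]         # words correlated with male CVs
FEMALE_PROXIES = ["netball", "chairwoman"]  # words correlated with female CVs
SHARED_SKILLS = ["python", "sql", "teamwork"]

def make_cv(gender):
    """Build a synthetic CV: two genuine skills plus one gender-correlated word."""
    proxies = MALE_PROXIES if gender == "M" else FEMALE_PROXIES
    return random.sample(SHARED_SKILLS, 2) + [random.choice(proxies)]

# Historical hires are heavily imbalanced: 80 men, 20 women.
past_hires = [make_cv("M") for _ in range(80)] + [make_cv("F") for _ in range(20)]

# "Training": weight each keyword by how often it appears among past hires.
weights = Counter(word for cv in past_hires for word in cv)

def score(cv):
    """Score a new CV by summing the learned keyword weights."""
    return sum(weights[w] for w in cv)

# Two equally skilled applicants, differing only in a gender-correlated word.
print(score(["python", "sql", "captain"]))     # scores higher
print(score(["python", "sql", "chairwoman"]))  # scores lower, despite equal skills
```

Nothing in the sketch refers to gender directly, yet the learned keyword weights favour the male-correlated proxy simply because the historical hires were imbalanced – which is exactly the pattern reported in the Amazon example.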

There is also the risk of applicants themselves using AI, with Lucy Heath again noting: “I recently had a candidate who applied for a role with a cover letter which had obviously been written by feeding the job description into a bot. Therefore from a recruitment perspective, my team and I remain cognisant that the covering letter may not have come from the candidate themselves – which is why interviewing the candidate and having thorough selection criteria remains a must!”

Benefits of AI in Recruitment 

However, there are of course positive reasons for using AI in recruitment, provided employers are aware of the possible issues and monitor constantly to safeguard against discrimination. For example, Unilever reported in 2019 that its use of AI had saved approximately 100,000 hours of interviewing time and nearly £1m annually. In addition, the logic of machine learning can have its advantages. Research in the US last year found that when human interviewers assessed five candidates for a role, all of whom were completely unsuitable, they still ended up selecting one – perhaps due to social pressures or the sunk cost fallacy. An AI interviewer, on the other hand, had no difficulty rejecting all five. And while much has been made of the discriminatory tendencies of certain AI technologies, humans are prone to the same biases. AI is generally trained on past patterns, so tends to replicate the biases found in humans – blaming AI for discrimination may therefore be looking at things the wrong way around.

Legislation and Regulations

New York City is now looking to regulate the use of AI recruitment tools to counteract potential human-driven biases. Local Law 144, which is due to be enforced from 5 July 2023, will require employers to conduct bias audits of automated employment decision tools, including those harnessing artificial intelligence and similar technologies. This appears to be the start of a new surge of legislation attempting to regulate AI in recruitment, with states such as California and Washington considering or enacting rules of their own. This side of the pond, the EU’s proposed Artificial Intelligence Act will also attempt to regulate AI systems, with high-risk applications, such as CV-scanning tools, facing requirements around transparency with users and adequate risk assessment and mitigation measures. Italy’s data protection regulator has already gone further, temporarily banning ChatGPT over privacy concerns.
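As an illustration of what a bias audit of this kind involves, the sketch below compares selection rates across groups and calculates an impact ratio against the most-selected group. The data, group labels and the 0.8 ‘four-fifths’ benchmark are illustrative assumptions for the sketch, not figures taken from Local Law 144 itself.

```python
# Minimal sketch of the arithmetic behind a bias audit: selection rates per
# group and impact ratios relative to the most-selected group.
# All data below is invented for illustration.
from collections import defaultdict

# Hypothetical tool outcomes: (group, was_selected)
outcomes = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for group, selected in outcomes:
    counts[group]["total"] += 1
    counts[group]["selected"] += int(selected)

rates = {group: c["selected"] / c["total"] for group, c in counts.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    # The "four-fifths rule" (0.8) is a common benchmark for adverse impact.
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

A real audit is of course broader than this toy example, but a selection-rate comparison of this kind sits at its heart.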

So far there are no similar laws in the UK, with the government looking to adopt a ‘light touch’ approach given the challenges of legislating in such a fast-moving area. It recently published a white paper setting out this approach and is now running a consultation to seek feedback on its proposals. The white paper acknowledges that ‘AI can have a significant impact on people’s lives, including … recruitment outcomes’ and that ‘AI systems should not produce discriminatory outcomes’. It notes that the EHRC, the ICO and the EASI will be encouraged to work together, alongside other similar organisations, to issue joint guidance on the use of AI systems in recruitment and employment.

This light touch approach may make sense given that so much is still unknown and new issues will no doubt arise, with Geoff Smith flagging some possible developments: “Employers will need to be careful how they manage employees whose role has been eliminated due to the use of AI. I wonder if trade unions and creative lawyers will develop arguments to suggest that the AI tool cannot properly and adequately perform the role of the employee? And will the AI tools be capable of using language which amounts to bullying and harassment? If so, companies will need to deploy new resolution processes in the workplace.”

In the meantime, employers should be mindful of their reliance on any AI tools and ensure there is still some human review or input. As Lucy Heath comments: “I am reluctant to get swept up in the tsunami of change; particularly in HR, there remains a need to approach the workplace on a personal level to allow for empathy, accountability and compromise to inform the employee experience alongside best practice and workplace policies.” Geoff Smith adds: “The use of data more widely and openly will give rise to privacy concerns. This is just one argument for companies needing to ensure that AI-generated work has the appropriate human checks and balances.”

This time last year few people had heard of ChatGPT, and it is now on everyone’s lips and top of the legislative agenda in many countries worldwide. That alone shows how fast-paced the rise of AI is, and the importance of employers drafting clear guidance for workers and monitoring their own use closely – whilst ensuring they are flexible enough to adapt to the next big change around the corner.

For more on how to reduce the risks associated with AI, read this article. And for ChatGPT’s take on these issues, see Transforming the Workplace: The Expanding Role of AI.
