AI and recruitment: An invaluable tool, or source of discrimination?

The Topline

“There are clear advantages to employers in using AI to streamline and facilitate recruitment processes. With an increasing number of UK employers (currently 1 in 5) using AI for this very purpose, the trend for employers using AI recruitment tools appears to be here to stay.”

Laura Oxley, Director, Employment & Immigration

AI in this context can, however, have a darker side. It is important that employers are aware of the possible discrimination implications of its use and how to mitigate those effects.

Stage 1: Advertising a role

From the very outset of the recruitment process, employers should be cautious to avoid discriminating against potential candidates for the role. Under the Equality Act 2010, employees and job applicants alike are protected from discrimination in respect of the nine ‘protected characteristics’ (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation).

Whilst AI chatbots can produce detailed job descriptions almost instantly, a level of human oversight should be maintained to ensure the details are accurate and non-discriminatory. In particular, job adverts should not show a predetermined bias and/or dictate requirements that are not necessary to the role but could deter particular groups of people from applying. For example, recruiting for a “Barmaid” would suggest a predetermined intention to recruit a woman rather than a man, and stating that a vacancy “would suit candidates in the first five years of their career” could be discriminatory on the basis of age [1].

Employers should also be cautious when placing job advertisements online, as third-party websites may use discriminatory algorithms. In 2021, it was discovered that Facebook’s algorithms targeted advertisements at certain users based on the data collated, with men accounting for 96% of the people shown an advertisement for mechanic jobs, and women comprising 95% of those shown an advertisement for nursery nurse jobs [2]. It is also worth remembering that online advertisements may not be easily accessible for older or disabled applicants – and should ideally be supplemented with more traditional forms of advertising, such as industry magazines, to encourage a more diverse range of applicants.

Stage 2: The application process

It is important for employers to consider the applicant experience during the application process, as there can be a number of pitfalls beyond those arising from the use of AI.

A common area of complexity for employers is the duty to make reasonable adjustments for job applicants with a disability. This duty arises when a disabled person either applies for employment or notifies the employer that they may apply. Whilst online application processes are increasingly becoming the norm, employers should be mindful that these will not always be accessible to applicants with a disability.

The recent decision of AECOM v Mallon [3] has demonstrated that an employer does not need to know the specifics of a person’s disability to be required to make reasonable adjustments in the application process. In this case, a job applicant requested a telephone interview to supplement his online application as a reasonable adjustment, due to difficulties he would experience in completing the online application as a result of his dyspraxia diagnosis. The applicant did not specify the particular difficulties he would experience, and did not respond to emails from the employer asking him to elaborate. The tribunal considered that the employer had failed to make reasonable enquiries of the applicant in the circumstances, finding that a reasonable employer would have telephoned an applicant with a dyspraxia diagnosis to understand more; the employer was therefore found to have had constructive knowledge of his substantial disadvantage. This case demonstrates the lengths to which employers are expected to go to understand the disadvantages that a disabled applicant may experience during an application process (in order that measures can then be taken to mitigate those effects). Here, it was held that the employer did not go far enough to understand the individual’s needs.

Analogies can be drawn with AI here. If an AI-driven application process is utilised which places an individual with a disability at a disadvantage, a duty to make reasonable adjustments is likely to arise. For example, requiring someone with learning difficulties to complete an assessment within the same timeframe as candidates without learning difficulties could place them at a substantial disadvantage, and consequently give rise to a duty on the employer to make reasonable adjustments (such as providing them with additional time). To ensure that reasonable adjustments are made where necessary, employers should ensure that there is a level of human oversight built into the AI-driven recruitment process. Whilst the duty to make reasonable adjustments arises only in relation to individuals with a disability, any candidate with a protected characteristic could have cause to claim discrimination if they are disadvantaged during the recruitment process.

Stage 3: Shortlisting and interviewing

As many employers already appreciate, AI can be a valuable tool in identifying and shortlisting the top talent, but it is not without its risks. AI algorithms that are trained using biased data will perpetuate that bias, as they can struggle to distinguish between causation and correlation. This was demonstrated by Amazon’s CV screening algorithm, which was trained using its own historic recruitment data. Using that data, the algorithm taught itself that male candidates were preferable, and therefore penalised CVs that included the word ‘women’s’ and downgraded graduates of certain all-women’s colleges [4]. Amazon eventually scrapped this AI recruitment tool as it was ultimately too flawed to rectify, even with human intervention.

Some employers are also deploying AI into the interview stage of their recruitment processes, asking candidates to respond to a number of pre-determined questions within a set amount of time via video. The AI tool then assesses and scores candidates using various visual, verbal and vocal criteria, such as the candidates’ facial expressions, vocabulary and speech tone. As has been made clear by the Uber case [5], facial recognition technology can experience difficulties in identifying black and other ethnic minority individuals, which could result in lower scores being given to such applicants. Further, adaptations to the process would also be required for candidates with a disability impacting their communication.

Employers therefore need to exercise caution when using such AI tools. For example, the technology should be audited and tested regularly to ensure accessibility and fairness. If it appears that there is bias, the AI tool should be refined immediately. Again, retaining a level of human oversight over the AI tool is key to helping spot and remedy any mistakes and/or biases.
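By way of illustration only, one simple check that an employer’s technical team might run as part of such an audit is to compare the rates at which an AI tool shortlists candidates from different groups, and flag any group whose rate falls well below the highest. The Python sketch below is a hypothetical example of this kind of check; the group labels, data and 80% threshold are illustrative assumptions, not a prescribed legal standard.

```python
# Illustrative audit sketch only: compare shortlisting rates across
# candidate groups to flag possible bias in an AI screening tool's output.
# All group names, data and the 80% threshold are hypothetical examples.

def shortlist_rate(outcomes):
    """Fraction of candidates in a group who were shortlisted (1 = yes, 0 = no)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def flag_disparity(group_outcomes, threshold=0.8):
    """Return groups whose shortlist rate falls below `threshold` times
    the highest group's rate -- a simple rule-of-thumb disparity check."""
    rates = {group: shortlist_rate(o) for group, o in group_outcomes.items()}
    best = max(rates.values())
    return {group: rate for group, rate in rates.items()
            if best and rate < threshold * best}

# Hypothetical screening outcomes from an AI tool.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 shortlisted (75%)
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 shortlisted (25%)
}
print(flag_disparity(outcomes))  # group_b is flagged: 25% is below 80% of 75%
```

A flag from a check like this does not itself establish discrimination, but it is the sort of output that should prompt the human review of the tool described above.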

Future Regulation

Whilst the use of AI evidently carries risks, there is currently no legislative framework in the UK specifically regulating the use of AI by employers, although there are tangential requirements as outlined above, and any use will be subject to compliance with data protection legislation. Unlike the EU, the UK government currently takes the view that there is no need for AI-specific regulation because existing regulators, such as the ICO and the FCA, are best equipped to manage AI risk in their domains. This has led some commentators to suggest that HR aspects may fall between the cracks, as there is no “Employment Regulator”.

The government has just hosted the AI Safety Summit at Bletchley Park, and its approach to regulation appears unchanged. Read more about our conclusions from the summit here.

This approach stands in stark contrast to that being taken by the EU, which is in the process of finalising the ‘EU AI Act’, a legally binding framework that will classify AI systems according to the risk they pose, with each risk level attracting a different degree of regulation. As currently drafted, any use of AI for recruitment or progression would be considered a “high risk” system, requiring various processes and criteria to be maintained in relation to that system.

Practical considerations

Despite the current lack of AI-specific legislation in the UK, employers are likely to be liable under existing provisions for discrimination arising out of their use of AI tools. To help avoid such claims, employers should ensure that:

  • Full compliance with data protection laws is maintained.
  • A level of human oversight is retained over recruitment decisions, to pick up on any obvious biases or mistakes produced by the technology.
  • There is transparency around how the AI tool uses data and makes decisions (whilst outside the remit of this article, data protection considerations are key here too).
  • The recruitment process is accessible to all.


Find out more about our Employment & Immigration team, and how they can support you here.


[1] Rainbow v Milton Keynes Council 1200104/2007

[2] Global Witness calls on EHRC and ICO to investigate Facebook for breaking anti-discrimination and data protection laws | Global Witness

[3] AECOM Ltd v Mr C Mallon [2023] EAT 104

[4] Amazon scraps secret AI recruiting tool that showed bias against women | Reuters

[5] Manjang v Uber Eats Ltd and others ET/3206212/21
