Employers large and small are increasingly turning to AI systems to assist in talent acquisition. In a 2020 report, Sage (the British software company – not the Scientific Advisory Group for Emergencies – not everything is Covid-19 related!) found that 24 percent of businesses were already using AI for recruitment, and that figure looks set to more than double, with 56 percent planning to adopt it within the next year. It seems the Covid-19 pandemic (ok, most things right now are Covid-19 related…) will only expedite this process: companies are fast-tracking digital transformations, lockdown rules require candidates to interview remotely, and more people are losing jobs – meaning more applicants for limited vacancies.

AI in recruitment is not a new thing. As early as 2014, Amazon started using an algorithm to review CVs for the appointment of top talent (it subsequently stopped using that particular AI system when it appeared to be sexist in its recommendations of applicants). Today there are dedicated companies providing a variety of tailored video interview software solutions and platforms that use AI to help select the ‘best’ candidates. This short episode of Moving Upstream from the Wall Street Journal provides a great insight into how one such company, HireVue, works. Candidates are interviewed by an AI robot, and their behaviour – tone of voice, word clusters and micro-expressions (smiling, frowning, etc.) – is assessed during the course of the interview and then quantified against a desired list of attributes.

How to do it right

Using AI to make these life-changing decisions carries a significant risk: algorithms can exacerbate issues of fairness and inequality. The UK Information Commissioner’s Office (ICO) has been exploring the use of algorithms and automated decision-making, and the risks and opportunities they pose, in an employment context. It has highlighted six key points to consider when using AI in recruitment. We summarise each point below and extract the key ICO recommendation for each.

1. Bias and discrimination are a problem in human decision-making, so they are a problem in AI decision-making too. An AI model is only as good as the data it is fed – as an early IBM programmer pithily put it: “garbage in, garbage out”. Programmers must be mindful of the bias that might be reflected in past data: using that data to train an AI system will propagate past unfairness into the future. The ICO says that “AI is not currently at a stage where it can effectively predict social outcomes or weed out discrimination in the data sets or decisions”. ICO recommendation: employers should assess whether AI is a necessary and proportionate solution to the problem. (The UK’s Centre for Data Ethics and Innovation (CDEI) has published a report reviewing bias in algorithmic decision-making. For a summary of the CDEI’s recommendations, see our earlier blog.)
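The “garbage in, garbage out” point can be made concrete with a deliberately tiny, purely illustrative sketch. The data and the word-scoring “model” below are invented for this post – no real recruitment tool works this simply – but they show the mechanism: if past human decisions favoured CVs mentioning one activity over another, a model trained on those decisions learns to replicate that preference.

```python
# Toy sketch (synthetic data): a model trained on biased past hiring
# decisions learns to penalise words associated with one group.
from collections import Counter

# Invented historical CVs and (biased) human decisions: reviewers
# hired CVs mentioning "rugby" and rejected those mentioning "netball".
history = [
    ("captain rugby team", "hired"),
    ("rugby club treasurer", "hired"),
    ("netball team captain", "rejected"),
    ("netball club secretary", "rejected"),
    ("rugby and netball player", "hired"),
]

def train(history):
    """Score each word by how often it appears in hired vs rejected CVs."""
    hired, rejected = Counter(), Counter()
    for text, label in history:
        (hired if label == "hired" else rejected).update(text.split())
    return {w: hired[w] - rejected[w] for w in hired | rejected}

def score(weights, cv):
    """Sum the learned word weights for a new CV."""
    return sum(weights.get(w, 0) for w in cv.split())

weights = train(history)
print(score(weights, "experienced rugby captain"))    # 3: favoured
print(score(weights, "experienced netball captain"))  # -1: past bias propagated
```

Nothing in the code mentions a protected characteristic, yet the learned weights reproduce the historical pattern – which is exactly why the ICO asks employers to question whether AI is necessary and proportionate before deploying it.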

2. It is hard to build fairness into an algorithm. Any AI system must comply with the data protection principle of fairness, and fairness under UK law is not prescriptive. ICO recommendation: at the very beginning of the AI lifecycle, determine and document how you will sufficiently mitigate bias and discrimination as part of your data protection impact assessment, then put in place the appropriate safeguards and technical measures during the design and build phase. For international employers: consider whether an algorithm trained to comply with one jurisdiction’s requirements of fairness will meet another jurisdiction’s standards.

3. The advancement of big data and machine learning algorithms is making it harder to detect bias and discrimination. Machine learning systems trained on big data can develop patterns that are unintuitive and hard to detect, and some of those patterns may act as proxies for protected characteristics, resulting in correlations that discriminate against groups of people. ICO recommendation: monitor changes and invest time and resources to ensure you continue to follow best practice in this area and that your staff remain appropriately trained.
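This “proxy” problem is worth spelling out, because it explains why simply removing a protected characteristic from the data does not make a system fair. The sketch below uses invented data: postcodes and groups are hypothetical, and the shortlisting rule is deliberately simplistic. The rule never looks at group membership, yet because postcode correlates with group, the selection rates still diverge.

```python
# Toy sketch (synthetic data): a "group-blind" rule can still
# discriminate if an input it uses is correlated with group membership.

# Invented candidates: (postcode, group, past_outcome). Postcode "A"
# is mostly group "X", and past decisions favoured postcode "A".
candidates = [
    ("A", "X", 1), ("A", "X", 1), ("A", "Y", 1),
    ("B", "Y", 0), ("B", "Y", 0), ("B", "X", 0),
]

def shortlist(postcode, group):
    """Rule learned from past outcomes; group is deliberately ignored."""
    return postcode == "A"

def selection_rate(group):
    """Share of candidates in the group who get shortlisted."""
    picked = [shortlist(p, g) for p, g, _ in candidates if g == group]
    return sum(picked) / len(picked)

print(selection_rate("X"))  # ~0.67: group X mostly shortlisted
print(selection_rate("Y"))  # ~0.33: group Y mostly screened out
```

In a real system the proxy might be far subtler than a postcode – a word pattern, a hobby, a gap in employment history – which is the ICO’s point about investing in ongoing monitoring rather than assuming a system is fair at launch.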

4. You must consider data protection law AND equalities law when developing AI systems. A wide range of laws could make AI decision-making unlawful, and while the obligations under the different pieces of legislation overlap, there are differences. Data protection law, in particular, prescribes various actions that an employer must undertake to address unjust discrimination, including using appropriate technical and organisational measures to prevent discrimination when processing personal data for profiling and automated decision-making. ICO recommendation: organisations must consider their obligations under both areas of law separately. Compliance with one will not guarantee compliance with the other.

5. Using solely automated decisions for private sector hiring is likely to be unlawful under the GDPR. Article 22 of the GDPR prohibits solely automated decision-making that has a legal or similarly significant effect; there are exemptions, but they are unlikely to apply in the case of private sector hiring. ICO recommendation: AI in recruitment is better used as a supplementary tool to enhance human decisions – not to make decisions on its own.

6. Algorithms and automation can also be used to address the problems of bias and discrimination. AI-led video interviews allow decision-makers to put the same interview, with the same questions, to every candidate – helping to eliminate one source of inconsistency and bias. In addition, algorithms are themselves being developed to detect bias and discrimination in the early stages of a system’s lifecycle. ICO recommendation: whilst we may never be able to remove the most ingrained human biases, automation can improve how we make decisions.
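To illustrate how automated bias detection can work in practice, here is a minimal sketch of one well-known check: the “four-fifths rule” from US employment guidance, which flags a selection process if any group’s selection rate falls below 80 percent of the highest group’s rate. It is a US rule of thumb, not a UK legal test, and the selection rates below are invented – but it shows how simple automated monitoring can surface a disparity for human review.

```python
# Sketch of an automated bias check based on the US "four-fifths rule".
# Illustrative only - this is a rule of thumb, not a UK legal test.

def four_fifths_check(rates):
    """rates: dict mapping group name -> selection rate (0 to 1).
    Returns the groups whose rate is below 80% of the highest rate."""
    top = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * top]

# Hypothetical selection rates from a recruitment funnel.
rates = {"group_a": 0.50, "group_b": 0.30}
print(four_fifths_check(rates))  # ['group_b']: 0.30 < 0.8 * 0.50
```

A check like this could run at each stage of a system’s lifecycle – exactly the kind of early, automated detection the ICO has in mind, with flagged results handed back to a human decision-maker.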