


Biased algorithms

On 16 October, I had the pleasure of attending the “Big Data in the workplace” conference at the University of St. Gallen (Switzerland) and speaking about the legal pitfalls surrounding workforce analytics and the discrimination risks of electronic recruitment tools. The conference aimed to give participants (data protection officers, compliance officers, corporate lawyers dealing with HR issues, as well as HR managers) an overview of current legal practice and case law on workforce analytics.

The use of people analytics, algorithms and machine learning in human resources management creates new opportunities for performance and behavioural monitoring and for efficient recruiting. Software can sift through countless applications in a short period of time and select the best candidates, reducing HR departments’ workload and eliminating personal impressions or unconscious prejudices. However, the use of algorithms and analytics raises issues of its own. A few weeks ago, for instance, Amazon had to stop a recruiting project because it discriminated against women, albeit unintentionally. Amazon had developed software for the project that was designed to select the best of hundreds of incoming applications by checking them against keywords critical to the job. The software was fed and trained with successful applications from existing employees. However, as is common in the technology industry, Amazon’s workforce is predominantly male. The software picked up on this pattern and concluded that men were better suited, with the result that applications from women were filtered out.
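
To make the mechanism concrete, here is a minimal, purely hypothetical sketch (not Amazon’s actual system, and with invented data) of how a model trained on historically male-dominated hiring decisions can learn to penalise a CV feature that merely correlates with gender:

```python
# Hypothetical illustration: a classifier trained on historically
# male-dominated hiring decisions learns to penalise a feature that
# merely correlates with being a woman. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

gender = rng.integers(0, 2, n)                 # 0 = man, 1 = woman (never shown to the model)
skill = rng.normal(0, 1, n)                    # genuine, job-relevant signal
womens_keyword = (gender == 1).astype(float)   # e.g. "women's chess club" on the CV

# Historical labels: skill mattered, but past hiring also favoured men,
# so women were hired less often at equal skill.
hired = (skill + (gender == 0) + rng.normal(0, 0.5, n)) > 0.5

# The model only ever sees the CV features, not gender itself.
X = np.column_stack([skill, womens_keyword])
model = LogisticRegression().fit(X, hired)

print(model.coef_)  # the keyword gets a strongly negative weight,
                    # so equally skilled women score lower than men
```

Nothing in the code or the features refers to gender explicitly; the bias enters entirely through the historical hiring labels the software was trained on.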

Discrimination by technology is not always as evident and easily recognizable as in the Amazon case. In another case, the algorithm of a recruiting tool had established that employees with long commutes quit sooner, as soon as they found a job closer to home. Consequently, based on the data it was given, it filtered out applicants with long journeys to work. Because people belonging to ethnic minorities often live in the suburbs of big cities, the algorithm could indirectly discriminate against an entire group of applicants, even though it was pursuing a completely different goal.
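
The following toy simulation (the numbers and group shares are invented for illustration) shows how such a facially neutral commute-distance rule can produce a disparate impact without ever referring to ethnicity:

```python
# Toy simulation of proxy discrimination: a facially neutral
# commute-distance filter disproportionately excludes a minority group
# because group membership correlates with where people live.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

minority = rng.random(n) < 0.3                  # 30% of applicants (illustrative)
commute_km = np.where(minority,
                      rng.normal(25, 8, n),     # outlying suburbs, longer commute
                      rng.normal(12, 8, n))     # closer to the office

shortlisted = commute_km < 20                   # the "neutral" rule the tool learned

for group, mask in [("minority", minority), ("majority", ~minority)]:
    print(f"{group}: {shortlisted[mask].mean():.0%} shortlisted")
# The rule never mentions ethnicity, yet minority applicants are
# shortlisted far less often.
```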

Employers are the ones primarily responsible for preventing discrimination. They need to be aware of the risks associated with the use of a specific tool and, together with their tech providers, ensure that it does not discriminate. In addition, a few test runs should be performed before a tool goes live, allowing any issues to be identified and tackled in time.
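
What might such a test run look like? One simple check compares selection rates between groups; the sketch below uses the 0.8 threshold of the US “four-fifths rule” purely as an illustration, not as legal advice for any particular jurisdiction:

```python
# A minimal sketch of one such test run: comparing selection rates
# between two groups. The 0.8 threshold is the US "four-fifths rule",
# used here purely for illustration.
def impact_ratio(selected_a: int, total_a: int,
                 selected_b: int, total_b: int) -> float:
    """Ratio of the lower selection rate to the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical test run: 40 of 100 men vs. 20 of 100 women shortlisted.
ratio = impact_ratio(40, 100, 20, 100)
print(f"impact ratio: {ratio:.2f}")    # 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold: investigate the tool.")
```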

Discrimination is not the only challenge that organisations using algorithms and analytics must be aware of. Another major legal risk area employers should look at is the processing of big data and the data privacy issues it raises. To find out more about the legal implications of using these tools, please read our briefing.
