
Freshfields TQ


Profiling and prejudice

Fresh out of college, you decide that the next step in your adult life is getting a job. You draft your resume and (after a thorough read) submit it via the website of a big multinational corporate. You are sure that getting this dream job will be just a matter of time.

After two weeks of hearing nothing, you decide to pick up the phone and call the HR department. Surely employers must be crazy not to hire you, so what's wrong? You are told that your resume does not fit any of the jobs on offer: all resumes received are checked by an algorithm to find the perfect candidate, and yours was simply not selected. You are speechless. How can a computer decide whether you're fit for the job or not?

Automated decision making

Under the GDPR, as a rule, there is a general prohibition on fully automated individual decision-making, including profiling, that has a legal or similarly significant effect. What is profiling? Profiling is defined as ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person (…)’. By way of exception, a data subject can be made subject to automated decision making if (i) the data subject has given his or her explicit consent, or (ii) the automated decision making is necessary for entering into, or the performance of, a contract. If neither of these exceptions applies, the data controller is prohibited from using automated decision making.

The rationale behind this is that an algorithm, programmed by a human, is fundamentally a set of instructions required to complete a certain task. Once the algorithm is designed, it will run its course without intermediate checks and balances. If this set of instructions contains a flaw or bias, the data subject will be the victim of that flaw. In the example of the job application, the algorithm will probably use data from the last couple of years to determine who the ‘perfect’ candidate for the job is. However, if, by coincidence, only men were selected for a particular job during those years, the algorithm will never select women going forward. In other words, by using a combination of data and a (self-learning) algorithm, decisions are not based solely on the merits of the individual, but on other factors that should not matter at all.
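To make that mechanism concrete, here is a deliberately simplified, hypothetical Python sketch (not any real recruitment system; all names, fields and scores are invented). A ‘model’ that scores applicants by their similarity to past hires will favour whatever the historical hires had in common, including gender, even when two candidates are otherwise identical.

```python
# Hypothetical toy example: a screening "model" trained only on historical
# hires reproduces past bias. Not any real recruitment system.
from collections import Counter

# Historical hires: by coincidence, only men were selected in recent years.
past_hires = [
    {"gender": "male", "degree": "engineering"},
    {"gender": "male", "degree": "engineering"},
    {"gender": "male", "degree": "business"},
]

def learn_weights(hires):
    """'Learn' how often each attribute value appears among past hires."""
    counts = Counter()
    for hire in hires:
        for attribute, value in hire.items():
            counts[(attribute, value)] += 1
    total = len(hires)
    return {key: count / total for key, count in counts.items()}

def score(candidate, weights):
    """Score a candidate by similarity to the historical 'perfect' hire."""
    return sum(weights.get(item, 0.0) for item in candidate.items())

weights = learn_weights(past_hires)

# Two equally qualified candidates; only gender differs.
candidate_a = {"gender": "male", "degree": "engineering"}
candidate_b = {"gender": "female", "degree": "engineering"}

print(score(candidate_a, weights))  # ~1.67: boosted for matching past hires' gender
print(score(candidate_b, weights))  # ~0.67: penalised despite identical merits
```

The scoring logic never mentions gender explicitly; the disparity emerges purely from the historical data it was given, which is exactly why such systems can discriminate without anyone having intended it.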

Only theory?

The above discussion might sound theoretical. In fact, it is not. In October 2018, Amazon scrapped its ‘sexist AI’ recruiting tool after it showed bias against women. On the other side of the world, the Chinese financial conglomerate Ping An uses AI cameras to read the facial expressions of potential borrowers, to check that they are not lying during their credit application and to help decide on it. In the future, we will surely be confronted with more automated decisions in our everyday life.

The human touch

OK, so the big corporate is prohibited from making its applicants subject to automated decision making. However, the corporate does not have the manpower to go through all the resumes it receives. Now what?

The two exceptions mentioned above (consent and the entering into or performance of a contract) may not work. First, even if the job applicant gives consent when submitting a resume online, this may very well prove not to be freely given: how free is a data subject to withhold consent when he or she wants the job? Second, one could argue that, in normal cases, automated decision making is not necessary for entering into or performing a contract. The guidelines do give the example of a business receiving tens of thousands of applications for a job, in which case it is practically impossible to identify fitting candidates without first using automated means to sift out irrelevant ones. But what if the big corporate only receives a thousand applications? Where do you draw the line?

Does this mean that the big corporate can only use automated decision making on a case-by-case basis? No, it is not as simple as that. To optimise compliance with the GDPR, the corporate could request consent (even though this may prove not to be freely given) and should add something else to the process: human intervention.

This is because the prohibition under the GDPR only applies to decisions based solely on automated processing. If human intervention is added to the process, the prohibition will therefore not apply. To qualify, the corporate must ensure that the human intervention is ‘meaningful, rather than just a token gesture’. Within the big corporate, a person should oversee the decisions made by the algorithm and should have the authority and competence to change them. In the Netherlands, the Data Protection Authority takes the strong view that, if an organisation is unable to prove that one of the exceptions applies, a new decision will have to be made that does involve human intervention. By adding a little (just enough) human touch to the process, this discussion can be pre-empted. Finally, all legal arguments aside, isn't it only wise to give our job applicant some human attention anyway? After all, the applicant will be working with human colleagues, not with the algorithm that selected them.

