
Six things to consider when using algorithms for employment decisions

When developed and used responsibly, algorithms can transform society for the better. But there is also significant risk that algorithms can exacerbate issues of fairness and inequality. This often impacts the most vulnerable or marginalised people. 

Algorithms do not just impact society; society also impacts the use of algorithms. This year two significant global events could lead to important changes in the use of algorithms for employment-based decision-making.

First, with many people losing their jobs due to the COVID-19 pandemic, more people will be applying for limited vacancies. This could see employers looking at algorithms to ease the burden on HR departments.

Second, the Black Lives Matter movement has led to employers looking for ways to address racism and other forms of bias in their workplace. Algorithms may be one of the tools used to address discriminatory hiring practices.

The ICO has been exploring the use of algorithms and automated decision-making and the risks and opportunities they pose in an employment context.

It has highlighted six key points that organisations must consider before implementing algorithms for hiring purposes.

1. Bias and discrimination are a problem in human decision-making, so they are a problem in AI decision-making

German researchers sent out job applications with pictures of three fictitious female characters with identical qualifications. One applicant had a German name, one a Turkish name, and the third had a Turkish name and wore a headscarf.

The applicant with the German name was called back 19% of the time, the one with a Turkish name 14% and the candidate with a Turkish name who wore a headscarf, just 4%.

The study showed that human bias and discrimination are a problem in recruitment.

The training data fed into AI systems has been shaped by the results of human decision-making and is therefore rife with our prejudices.

AI is not currently at a stage where it can effectively predict social outcomes or weed out discrimination in the data sets or decisions. Therefore, to pick the “best candidate” using just an algorithm is to reinforce the existing problem.

So, you must assess whether AI is a necessary and proportionate solution to a problem before you start processing. This assessment should form part of your data protection impact assessment.

The ICO has written about what you need to consider when undertaking data protection impact assessments for AI in its guidance on AI and data protection.

2. It is hard to build fairness into an algorithm

All algorithms need to comply with the data protection principle of fairness. This means that if you process personal data in an AI system, it must not have any unjustified adverse effects on individuals. This includes discrimination against people who have a protected characteristic.

In the US, some organisations have used a hard metric to measure fairness in an employment context. In some contexts, US legal guidelines maintain that discrimination has occurred if the employment selection rate for a protected group is less than 80% of the selection rate for the majority group.
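
As a rough illustration only, the sketch below (in Python, with invented figures and group labels) shows how that 80% comparison is typically calculated: the selection rate for the protected group is divided by the selection rate for the majority group, and a ratio below 0.8 is flagged as possible adverse impact.

```python
# Illustrative sketch of the US "80%" adverse-impact check.
# All figures and group labels are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants from a group who were selected."""
    return selected / applicants

# Hypothetical hiring outcomes
majority_rate = selection_rate(selected=60, applicants=200)   # 0.30
protected_rate = selection_rate(selected=12, applicants=100)  # 0.12

impact_ratio = protected_rate / majority_rate                  # 0.40

if impact_ratio < 0.8:
    print(f"Impact ratio {impact_ratio:.2f}: below the 80% threshold, "
          "indicating possible adverse impact.")
else:
    print(f"Impact ratio {impact_ratio:.2f}: meets the 80% threshold.")
```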

UK equalities law is less prescriptive than this. The Equality Act 2010 states that indirect discrimination can be justified as long as it is proportionate. This suggests fairness needs to be assessed on a case-by-case basis.

To do that, you should start at the very beginning of the AI lifecycle. You must determine and document how you’ll sufficiently mitigate bias and discrimination as part of your data protection impact assessment. You can then put in place the appropriate safeguards and technical measures during the design and build phase.

The ICO explains how you should address risks of bias and discrimination in its guidance on AI and data protection.

UK-based organisations also need to remember there is no guarantee that an algorithm designed to meet US standards will meet UK fairness standards.

3. The advancement of big data and machine learning algorithms is making it harder to detect bias and discrimination

Big data and machine learning (ML) algorithms are increasingly being used to make automated decisions that significantly impact many aspects of people’s lives, including decisions related to employment. There are valid concerns that decisions made with these technologies are unfair, because the AI systems pick up correlations that discriminate against groups of people. These patterns tend to be unintuitive and therefore harder to detect.

For example, when Amazon discovered its AI recruiting tool showed bias against women, it reprogrammed the tool to ignore explicitly gendered words like “women’s”. However, the revised system still picked up on implicitly gendered words – such as verbs that were more strongly correlated with men than with women – and used those to make decisions.
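
One way this kind of implicit correlation can be surfaced is to audit candidate features against a protected attribute before training. The sketch below is a simplified, hypothetical illustration (the feature names and data are invented, and it does not describe Amazon’s actual system): it flags CV-derived features that remain strongly correlated with gender even after explicitly gendered words are removed.

```python
import pandas as pd

# Hypothetical CV-derived features; "gender" is excluded from training
# but retained here purely for auditing purposes.
data = pd.DataFrame({
    "gender":           [1, 1, 1, 0, 0, 0, 1, 0],   # 1 = male, 0 = female (audit only)
    "mentions_womens":  [0, 0, 0, 1, 1, 0, 0, 1],   # explicitly gendered word
    "verb_executed":    [1, 1, 0, 0, 0, 1, 1, 0],   # implicitly gendered phrasing
    "years_experience": [5, 7, 3, 6, 4, 8, 2, 5],
})

# Correlation of each candidate feature with the protected attribute.
# Features that remain strongly correlated can act as proxies even after
# explicitly gendered words are removed from the input.
correlations = data.drop(columns="gender").corrwith(data["gender"]).abs()
proxies = correlations[correlations > 0.5].sort_values(ascending=False)
print("Possible proxy features:\n", proxies)
```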

This is an area where best practice and technical approaches continue to develop. You should monitor changes and invest time and resources to ensure you continue to follow best practice and your staff remain appropriately trained.

4. You must consider data protection law AND equalities law when developing AI systems

Data protection law addresses unjust discrimination in several ways:

  • Under the fairness principle, AI systems must process personal data in ways an individual would reasonably expect.
     
  • The fairness principle requires any adverse impact on individuals to be justified.
     
  • The law aims to protect individuals’ rights and freedoms with regard to the processing of their personal data. This includes the right to privacy but also the right to non-discrimination.
     
  • The law states businesses must use appropriate technical and organisational measures to prevent discrimination when processing personal data for profiling and automated decision-making.
     
  • Organisations must undertake a data protection impact assessment when processing data in this way and ensure they build in data protection by design. These accountability mechanisms force organisations to consider how their processing might infringe on people’s rights and freedoms, including through discrimination and bias.

These aspects of data protection law go some way towards addressing elements of equality law (notably the UK Equality Act 2010), but not fully. For example, a DPIA requires you to assess the risk of discrimination and to mitigate it, but it does not require you to eliminate that risk completely. Equalities law is different on this point.

So, although both address unjust discrimination, organisations must consider their obligations under both laws separately. Compliance with one will not guarantee compliance with the other.

5. Using solely automated decisions for private sector hiring purposes is likely to be illegal under the GDPR

Solely automated decision-making that has a legal or similarly significant effect on individuals is prohibited under the General Data Protection Regulation (GDPR), unless one of three exceptions applies:

  • You have the individual’s explicit consent;
     
  • The decision is necessary to enter into a contract; or
     
  • It is authorised by Union or Member State law.

However, these are unlikely to be appropriate in the case of private sector hiring. This is because:

  • Consent is unlikely to be freely given due to the imbalance of power between the employer and the job candidate;
     
  • Solely automated decision-making is unlikely to be genuinely necessary, because it could almost always be replaced with a process involving human decision-making; and
     
  • The exception for authorisation by Union or Member State law does not apply to private businesses.

Organisations should therefore consider how they can bring a human element into an AI-assisted decision-making process.

6. Algorithms and automation can also be used to address the problems of bias and discrimination

Bias in automated decision-making is a persistent problem with no easy fix. However, the same technology may also be part of the solution.

New uses of algorithms and automation are being developed to address some of the problems of bias and discrimination in automated employment decisions. For example, algorithms can be used to detect bias and discrimination in the early stages of a system’s lifecycle.
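
As a simple illustration of what such an early-stage check might look like, the sketch below (using invented data and group labels) audits the historical outcomes in a training set for group-level disparities before any model is trained, since a model trained on skewed labels is likely to reproduce that skew.

```python
import pandas as pd

# Hypothetical historical hiring outcomes that would be used to train a model.
training_data = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Positive-outcome rate per group in the historical labels.
rates = training_data.groupby("group")["hired"].mean()
disparity = rates.min() / rates.max()

print(rates)
print(f"Ratio of lowest to highest hiring rate: {disparity:.2f}")
# A low ratio suggests the historical labels already encode unequal outcomes,
# which a model trained on them is likely to reproduce.
```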

So, whilst we may never be able to remove the most ingrained human biases, we can use automation to improve how we make decisions.

Alister Pearson is Senior Policy Officer, Technology and Innovation Service, at the ICO.