AI Bias and Discrimination in Employment: Employer Liability
Algorithmic bias can produce unlawful discrimination under the Equality Act 2010. UK employers are liable for discriminatory AI outcomes, even when using third-party tools.
AI bias in employment is not a theoretical concern. Multiple documented cases internationally have shown AI recruitment and performance tools producing systematically discriminatory outcomes. UK employers using these tools carry legal liability under the Equality Act 2010, and this liability cannot be contracted away by blaming the vendor.
How Algorithmic Bias Arises
Algorithmic bias has several root causes, all of which are relevant to employment AI:
Historical data bias: Machine learning systems learn from past data. In employment contexts, this typically means historical hiring decisions, performance ratings, or promotion records. If those historical decisions were influenced by conscious or unconscious bias against certain groups, the AI learns to replicate that bias.
Proxy discrimination: AI systems often identify correlations in data that serve as proxies for protected characteristics. Postcode can correlate with race, name with ethnicity, and career gaps with maternity leave. The AI is not explicitly discriminating by protected characteristic, but the proxy produces the same outcome (a simple way to test for proxies is sketched after this list).
Training data composition: If an AI tool was trained on data from companies with non-diverse workforces, it learns that success looks like the majority group and scores others lower.
Feature selection bias: The person who chooses which data features the AI uses to make predictions may inadvertently include features that correlate with protected characteristics, or exclude features that would reduce bias.
Feedback loop amplification: When AI outputs are used to make decisions, those decisions generate new data. If the AI produces biased shortlists, the people hired from those shortlists reinforce the "successful" profile in the training data, amplifying the bias over time.
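As a concrete illustration of the proxy problem, the sketch below checks whether an apparently neutral input feature carries information about a protected characteristic. It is a minimal example only: the column names, groups, and data are hypothetical, and a statistical association is a prompt for investigation, not proof of unlawful discrimination.

```python
# Illustrative proxy check: does a "neutral" feature (postcode area)
# carry information about a protected characteristic (ethnicity)?
# All column names and values here are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

applicants = pd.DataFrame({
    "postcode_area": ["N1", "N1", "E3", "E3", "SW2", "SW2", "N1", "E3"],
    "ethnicity":     ["A",  "A",  "B",  "B",  "A",   "B",   "A",  "B"],
})

# Cross-tabulate the candidate feature against the protected group.
table = pd.crosstab(applicants["postcode_area"], applicants["ethnicity"])
chi2, p_value, dof, _ = chi2_contingency(table)

print(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value suggests the feature is associated with the protected
# characteristic and may act as a proxy if the model relies on it.
```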
Protected Characteristics Under the Equality Act 2010
The Equality Act 2010 protects nine characteristics. AI bias can produce discrimination on any of them:
| Protected characteristic | AI bias mechanism |
|---|---|
| Age | CV filtering by years of experience; graduation year inference |
| Disability | Penalising career gaps; output-based performance metrics |
| Sex | Historical male-dominated success data; language bias in job descriptions |
| Race | Name-based bias; postcode filtering; university preference |
| Religion or belief | Availability patterns; cultural name signals |
| Sexual orientation | Lifestyle inference from social media analysis |
| Pregnancy and maternity | Penalising career breaks; availability assumptions |
| Marriage and civil partnership | Lifestyle inference |
| Gender reassignment | Name changes; employment history gaps |
Types of Discrimination That AI Bias Can Produce
Indirect discrimination: The most common form. An AI tool applies what appears to be a neutral criterion (years of experience, specific university attended, no gaps in employment history) that in practice puts people with a protected characteristic at a particular disadvantage.
Direct discrimination: Less common but possible where an AI tool explicitly uses a protected characteristic as an input, or where the criterion it applies is so closely tied to a protected characteristic as to be effectively the same thing.
Harassment: AI-powered communication analysis tools that flag communications as problematic based on characteristics of the sender could contribute to a hostile environment.
Victimisation: Automated systems that track employee complaints or grievances and factor them into performance or risk scores could constitute victimisation.
Employer Liability for Third-Party AI Tools
The Equality Act 2010 provides no defence that a discriminatory outcome was produced by a third-party system you purchased. You are responsible for the employment decisions made in your name, for the processes you put in place, and for the outcomes that affect candidates and employees.
The Equality and Human Rights Commission (EHRC) has made clear that employers must take an active approach to preventing discrimination, not simply a passive approach of not intending it.
Practically, this means:
- You cannot defend a discrimination claim by pointing to your AI vendor's terms of service
- You cannot disclaim responsibility for a discriminatory shortlist by saying the algorithm produced it
- You cannot avoid liability by arguing you were not aware the tool was biased
You can, however, demonstrate that you took reasonable steps to identify and mitigate bias. This is relevant to the remedies available but does not automatically provide a complete defence.
What Reasonable Steps to Mitigate Bias Look Like
To demonstrate that you took reasonable steps to prevent AI discrimination, you need:
Pre-deployment due diligence: Before using an AI recruitment or performance tool, ask the vendor:
- Has the tool been audited for bias across protected characteristics?
- What training data was used?
- Can you provide statistical output data broken down by demographic group?
- What is your data processing agreement and where is data processed?
Bias testing before go-live: Run historical applications or performance data through the tool before using it live. Check outputs for patterns across protected characteristic groups. If you cannot access appropriate historical data, use synthetic datasets.
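A minimal sketch of this kind of output check follows. It assumes you can pair the tool's shortlisting decisions with monitoring data on a protected characteristic; the column names are hypothetical, and the 80% threshold is the US "four-fifths" rule of thumb, used here only as an illustrative screening heuristic, not an Equality Act test.

```python
# Illustrative pre-deployment check: compare the tool's shortlisting
# rate across protected-characteristic groups. Column names and the
# 80% screening threshold are assumptions, not a legal standard.
import pandas as pd

results = pd.DataFrame({
    "sex":         ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "shortlisted": [0,   1,   0,   0,   1,   1,   0,   1,   1,   0],
})

# Selection rate per group, then the ratio of lowest to highest rate.
rates = results.groupby("sex")["shortlisted"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"impact ratio (lowest/highest selection rate): {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Large disparity between groups: investigate before go-live.")
```

A disparity flagged by a check like this does not establish discrimination by itself, but it tells you where to look, and a record of having run it is evidence of reasonable steps.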
Ongoing monitoring: Regularly audit the tool's outputs for discriminatory patterns. Audits should take place at least annually, and after any significant update to the tool.
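The same selection-rate comparison can be repeated per review period so that drift shows up between audits. The sketch below is again an assumption-laden illustration: the field names, groups, and threshold are hypothetical.

```python
# Illustrative ongoing monitor: recompute the selection-rate ratio per
# review period and flag periods below a chosen screening threshold.
# Field names, groups, and the threshold are assumptions.
import pandas as pd

log = pd.DataFrame({
    "period":      ["2024-Q1"] * 4 + ["2024-Q2"] * 4,
    "group":       ["A", "A", "B", "B", "A", "A", "B", "B"],
    "shortlisted": [1,   1,   1,   1,   1,   1,   1,   0],
})

THRESHOLD = 0.8  # screening heuristic only, not a legal standard

for period, chunk in log.groupby("period"):
    rates = chunk.groupby("group")["shortlisted"].mean()
    ratio = rates.min() / rates.max()
    flag = "REVIEW" if ratio < THRESHOLD else "ok"
    print(f"{period}: impact ratio {ratio:.2f} [{flag}]")
```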
Human review of consequential decisions: Ensure a human manager reviews and takes responsibility for every significant decision that has been informed by AI output.
Documentation: Keep records of your bias assessments, the steps you took to mitigate identified risks, and the basis for consequential decisions. This is evidence that you took reasonable steps.
Vendor contracts: Include contractual protections in agreements with AI vendors: bias audit rights, notification obligations if bias is discovered, and data processing terms that comply with UK GDPR.
When the Equality and Human Rights Commission Gets Involved
The EHRC has the power to investigate employers where there is evidence of systematic discrimination. An AI tool that produces demonstrably discriminatory recruitment outcomes at scale is exactly the kind of situation that can attract EHRC attention. EHRC investigations are time-consuming, public, and reputationally damaging, and can result in binding agreements that dictate how you must operate in future.
Steps to Take if You Suspect Your AI Tool Is Biased
If you have reason to suspect your AI tool is producing biased outputs:
- Stop using the tool for consequential decisions while you investigate
- Audit recent decisions made using the tool for discriminatory patterns
- Contact the vendor for bias audit information
- Take advice from an employment solicitor on your exposure from past decisions
- Consider whether past decisions need to be reviewed or reversed
- Document all steps taken
Do not simply continue using a tool you suspect is biased in the hope that nothing happens. Continuing to use a biased tool once you are aware of the issue significantly worsens your legal position.
This is guidance, not legal advice. Discrimination law is complex and the application to AI systems is an evolving area of UK law. If you are concerned about bias in AI tools you are using, take specialist advice from an employment solicitor or equality specialist.
Related answers
AI in Performance Management: Employer's Legal Guide
Using AI for performance tracking and automated scoring in the UK triggers UK GDPR Article 22 rights. Understand your obligations before automating performance management.
AI in Recruitment Screening: Legal Risks for UK Employers
Using AI to screen CVs and shortlist candidates carries real legal risks. Understand your obligations under the Equality Act 2010 and UK GDPR before automating recruitment.
Equality Act 2010: Employer's Guide
Understanding the Equality Act for employers. Protected characteristics, types of discrimination, reasonable adjustments, and avoiding claims.
Frequently Asked Questions
- Can I be held liable for discrimination caused by an AI tool I bought from a vendor?
- Yes. Under the Equality Act 2010, you are responsible for decisions and processes that produce discriminatory outcomes, regardless of whether a third-party tool generated those outcomes. Using an AI tool that produces racially biased shortlists, for example, is not a defence to a discrimination claim. You have a duty to take reasonable steps to ensure the tools you use are not producing unlawful results.
- How does algorithmic bias cause discrimination?
- AI systems learn from historical data. If that data reflects past discriminatory patterns - for example, if your previous workforce was predominantly male, or if historically successful candidates all came from certain universities - the AI will replicate and potentially amplify those patterns. This produces indirect discrimination: a practice that appears neutral but disproportionately disadvantages people with a protected characteristic.
- What is indirect discrimination in the context of AI?
- Indirect discrimination under the Equality Act 2010 occurs when you apply a provision, criterion, or practice that appears neutral but puts people sharing a protected characteristic at a particular disadvantage, and you cannot objectively justify it. An AI screening tool that systematically downgrades applications from women, older candidates, or candidates from ethnic minorities - even without intending to - can constitute indirect discrimination.