AI in Performance Management: Employer's Legal Guide
Using AI for performance tracking and automated scoring in the UK triggers UK GDPR Article 22 rights. Understand your obligations before automating performance management.
AI performance management tools, ranging from productivity dashboards to systems that generate automated performance scores, are increasingly used in UK workplaces. Used correctly, they can provide useful data. Used incorrectly, they create significant legal exposure under UK GDPR, the Equality Act 2010, and employment law more broadly.
What AI Performance Management Includes
AI tools are now used at multiple stages of the performance management cycle:
- Continuous productivity tracking: Real-time dashboards measuring output, activity, and performance metrics
- Automated performance scoring: Systems that aggregate data to produce an overall performance score or rating
- AI-generated performance review drafts: Tools that produce written performance assessment text based on data inputs
- Objective-setting tools: AI that suggests or sets performance targets based on role and historical data
- Sentiment and engagement analysis: Tools that analyse communication patterns to infer employee engagement
- Absence pattern flagging: Systems that identify and flag absence trends
Each of these raises different legal questions.
UK GDPR Article 22: The Core Legal Constraint
Article 22 of UK GDPR is the primary legal constraint on automated performance management. It provides that individuals have the right not to be subject to a decision based solely on automated processing that produces a legal or similarly significant effect concerning them.
In the performance management context, "legal or similarly significant effect" includes:
- Pay decisions (salary increases, bonus allocation, pay cuts)
- Promotion or demotion
- Disciplinary action including written warnings
- Dismissal
- Denial of development opportunities
Placing an employee on a performance improvement plan (PIP) based solely on an AI-generated performance score, without any genuine human review of the underlying data and circumstances, is likely to engage Article 22.
What "solely automated" means: Article 22 applies when the automated system makes the decision. If a human manager receives an AI-generated performance score and then genuinely reviews it, considers the context, discusses it with the employee, and takes responsibility for the final decision, Article 22 is not engaged, because the decision is not "solely" automated. The key word is "genuinely": a rubber-stamping process in which a human simply approves whatever the AI recommends does not satisfy this requirement.
Required Safeguards Under Article 22
Where Article 22 applies to automated performance decisions, you must implement the safeguards below. Even where Article 22 is not fully engaged, they represent good practice:
- Inform employees: Your privacy notice and performance management policy must explain that AI tools are used, what data they process, and how the output is used in performance decisions
- Allow human review: Employees must be able to request that a human reviews any AI-generated performance assessment that affects them
- Allow challenge: Employees must be able to challenge the accuracy of data used and the conclusions drawn
- Provide an explanation: You must be able to explain, in terms the employee can understand, why an AI tool produced a particular score or assessment
Fairness Obligations Under Employment Law
Beyond UK GDPR, UK employment law imposes fairness obligations on performance management that sit uncomfortably with fully automated systems.
Unfair dismissal protection: Employees with two or more years' service have the right not to be unfairly dismissed. Dismissal for poor performance requires a genuine investigation, a fair procedure, and evidence that the employee was made aware of the concerns and given an opportunity to improve. An AI-generated performance score is data, not a fair procedure.
Disability discrimination: If an employee's performance is affected by a disability, the Equality Act 2010 requires you to make reasonable adjustments before initiating performance action. An AI performance system that flags an employee based on output metrics without any awareness of disability-related context can produce a discriminatory outcome.
ACAS guidance: The ACAS Code of Practice on Disciplinary and Grievance Procedures sets out procedural requirements that apply to capability dismissals. AI performance data does not replace the need to follow the ACAS Code.
The Bias Problem in AI Performance Tools
AI performance management tools are not neutral. They are trained on historical data that reflects the performance patterns of employees who succeeded in your organisation under previous conditions. This creates bias risk:
- Part-time and flexible workers: Many productivity metrics disadvantage employees who work non-standard hours
- Disabled employees: Output-based metrics do not account for periods of reduced capacity
- Employees returning from maternity or parental leave: Career breaks and adjusted duties affect metrics in ways that penalise returners
- Protected characteristics correlated with output styles: Communication style analysis tools have been shown to produce racially biased results
Under the Equality Act 2010, you are liable for discriminatory outcomes produced by tools you choose to use, even if the bias is in the tool rather than in your explicit decision-making.
Practical Framework for Using AI Performance Tools Fairly
If you use AI performance tools, build these safeguards into your process:
- Document the purpose: Define specifically what business problem the tool is solving and why AI is the appropriate solution
- Carry out a Data Protection Impact Assessment (DPIA): Required for high-risk processing; AI performance management almost certainly qualifies
- Configure tools thoughtfully: Many tools have settings that allow you to exclude certain data types or apply weightings; use these to reduce bias risk
- Train managers: Managers who receive AI performance data must understand what it does and does not show, and must be able to make genuine independent judgements
- Build in human review checkpoints: Every performance decision with significant consequences (written warning, PIP, dismissal) must involve documented human review
- Give employees access to their data: Be prepared to provide performance data in response to subject access requests and to explain how it was generated
- Monitor for discriminatory patterns: Regularly audit performance scores across protected characteristic groups
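The final step above, auditing scores across groups, can be sketched in code. This is a minimal illustrative example, not a statutory test: the column names, sample data, and the 80% comparison threshold (borrowed from the US "four-fifths rule", which has no formal status in UK law) are all assumptions chosen for demonstration. A real audit would need larger samples, appropriate statistical testing, and legal input.

```python
# Illustrative sketch: compare mean AI performance scores across groups
# and flag any group whose mean falls well below the top group's mean.
# Data, keys, and the 0.8 threshold are hypothetical assumptions.
from statistics import mean

def audit_scores(records, group_key, score_key="score"):
    """Group scores by a characteristic and flag groups whose mean
    is below 80% of the highest-scoring group's mean."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec[score_key])
    means = {g: mean(scores) for g, scores in groups.items()}
    top = max(means.values())
    return {g: {"mean": round(m, 2), "flag": m < 0.8 * top}
            for g, m in means.items()}

# Hypothetical sample data for demonstration only
employees = [
    {"working_pattern": "full-time", "score": 82},
    {"working_pattern": "full-time", "score": 78},
    {"working_pattern": "part-time", "score": 55},
    {"working_pattern": "part-time", "score": 60},
]
print(audit_scores(employees, "working_pattern"))
# Part-time mean (57.5) is below 80% of the full-time mean (80), so it is flagged
```

A flag from a check like this is not proof of discrimination, but it tells you where to investigate, for example whether a productivity metric is penalising non-standard working patterns, before the scores feed into any decision.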
What Performance Conversations Must Look Like
When using AI performance data in a performance review conversation:
- Share the data with the employee before the meeting, not during it
- Explain clearly how the data was generated and what it does and does not measure
- Give the employee a genuine opportunity to challenge the data or provide context
- Document the employee's response and take it into account in your assessment
- Record your decision as a human decision, supported by data, not as an AI determination
- Never tell an employee "the system says you are underperforming" without being able to explain and take responsibility for that conclusion
This is guidance, not legal advice. If you are considering implementing AI performance management tools or are facing a challenge to an AI-driven performance decision, take advice from an employment solicitor.
Related answers
Right to Explanation: AI Decisions in the Workplace
UK GDPR Article 22 gives employees the right to meaningful information about AI decisions that affect them. Understand what counts as a solely automated decision and how to provide compliant explanations.
Employee Rights When AI Makes HR Decisions
UK GDPR Article 22 gives employees the right not to be subject to solely automated HR decisions. Understand what rights apply, when, and what employers must do to comply.
Disciplinary Procedure Steps UK
A step-by-step guide to running a fair disciplinary procedure in the UK. Follow these steps to stay ACAS-compliant and reduce your tribunal risk.
Frequently Asked Questions
- Can I use AI to generate performance reviews for employees?
- You can use AI as a drafting aid, but you cannot rely on AI-generated performance assessments as the sole basis for significant HR decisions. Under UK GDPR Article 22, employees have the right not to be subject to decisions based solely on automated processing that significantly affects them. Performance reviews leading to pay decisions, promotions, or disciplinary action meet this threshold. A human manager must be genuinely involved in reviewing and taking responsibility for the assessment.
- What is UK GDPR Article 22 and when does it apply to performance management?
- Article 22 of UK GDPR gives individuals the right not to be subject to solely automated decisions that produce a legal or similarly significant effect. In performance management, this applies when AI output directly drives decisions about pay, promotion, demotion, or disciplinary action. Simply using AI as a tool that informs a human decision does not trigger Article 22, provided the human genuinely reviews and can override the AI output.
- Can I dismiss an employee based on AI performance data?
- No. Dismissal requires a fair reason, a genuine investigation, and a fair procedure under the Employment Rights Act 1996. Using AI-generated performance data as the primary evidence without a genuine human review process, without giving the employee the chance to challenge the data, and without following a fair procedure is likely to result in an unfair dismissal finding at tribunal.