Right to Explanation: AI Decisions in the Workplace
UK GDPR Article 22 gives employees the right to meaningful information about AI decisions that affect them. Understand what counts as a solely automated decision and how to provide compliant explanations.
The right to explanation for AI decisions is one of the most practically important data protection rights in the employment context. As AI tools take on more roles in HR - from screening candidates to generating performance scores - the ability to explain automated decisions to employees and candidates is both a legal requirement and a practical necessity for maintaining fair workplaces.
The Legal Framework
UK GDPR Article 22 provides that individuals shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces a legal or similarly significant effect concerning them.
Where such processing does take place, the data controller must:
- Implement suitable measures to safeguard the individual's rights and freedoms
- At minimum, ensure the right to obtain human intervention, to express a point of view, and to contest the decision
- Provide meaningful information about the logic involved in the automated decision
This right to meaningful information about the logic - set out in UK GDPR Articles 13(2)(f), 14(2)(g) and 15(1)(h), which apply where Article 22 processing takes place - is the right to explanation. It is a concrete legal right, not an aspiration.
The UK data protection framework (as updated by the Data Protection Act 2018) also requires, in the context of automated decisions, that individuals be able to obtain an explanation of the decision reached after meaningful human review.
What "Solely Automated" Actually Means
The definition of "solely automated" is critical because Article 22 applies only where a decision is solely automated. Where that boundary falls determines when the right to explanation is engaged.
Solely automated: The AI system makes the decision with no meaningful human involvement. Examples:
- An applicant tracking system (ATS) that auto-rejects applications below a score threshold, with no human reviewing individual rejections
- An AI performance system whose annual ratings feed directly into pay review without individual manager review
- An automated attendance system that triggers formal warnings based on pattern rules
Not solely automated (where Article 22 does not apply, but good practice still requires transparency):
- A manager who receives an AI-generated shortlist but reviews each candidate individually and can override the recommendation
- A performance system that generates suggested ratings which managers review, adjust, and take responsibility for
- An AI tool that flags attendance concerns for a manager to then investigate and make a decision about
The grey area: The most difficult cases are where human review is nominal rather than genuine. A manager who approves AI recommendations for all 50 candidates without reading their individual files is not providing meaningful human involvement even though they clicked "approve". Courts and tribunals will look at the substance of the process, not its form.
What a Meaningful Explanation Must Include
The ICO's guidance on explaining decisions made with AI (produced jointly with the Alan Turing Institute) sets out that a meaningful explanation must include:
The logic of the decision: What the system was designed to do, what factors it was programmed to consider, and how it weighted them. You do not need to disclose proprietary algorithms, but "the system considered various factors" is not sufficient.
The significance of the decision: What the automated decision means for the individual - what the output was and what it resulted in.
The data used: What personal data was input into the automated system to produce the output about this individual.
The envisaged consequences: The anticipated effect of the processing on the individual.
For practical employment contexts, this means you need to be able to tell an employee something like the following (a short sketch of generating such an explanation appears after these examples):
- "The productivity system scored you based on your output volume, quality ratings from our system, and response times. Your score of X placed you in the lower quartile, which is why you were selected for a performance discussion."
- "The recruitment screening tool assessed your application against the job requirements. It scored you highly on technical skills but flagged that you did not meet the minimum years of experience required. Your application was therefore not shortlisted."
Explanations Must Be Understandable
The ICO is clear that an explanation must be in plain language that the individual can understand. Technical jargon, references to model architectures, or statistical language that a non-specialist cannot follow does not satisfy the requirement.
Test your explanations against this standard: if you read the explanation to a person with no data science background, would they understand what happened and why? If not, the explanation needs to be simplified.
Implementing the Right to Explanation in Practice
Document how each AI tool works: Before deploying any AI HR tool, document in plain terms what data it uses, what it is designed to produce, and how its outputs are used. This documentation is the foundation of any explanation you give to employees.
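As a sketch of what that documentation could look like in a structured form (the fields here are assumptions to adapt to your own governance process, and the tool name is hypothetical):

```python
# A minimal sketch of the kind of record you might keep for each AI HR tool.
# Fields are assumptions about what such documentation could capture.

from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                      # the vendor/product name
    purpose: str                   # what the tool is designed to produce
    input_data: list[str]          # personal data fields fed into the tool
    factors_considered: list[str]  # factors the model weighs, in plain terms
    output_used_for: str           # how the output feeds into HR decisions
    human_review_step: str         # who reviews outputs, and how
    vendor_explainability: str = "unknown"  # what the vendor can explain

record = AIToolRecord(
    name="ExampleScreen",  # hypothetical tool name
    purpose="Rank job applications against the advertised requirements",
    input_data=["CV text", "application form answers"],
    factors_considered=["technical skills", "years of experience"],
    output_used_for="Producing a shortlist for recruiter review",
    human_review_step="Recruiter reads each shortlisted and rejected file",
)
```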
Train managers to explain AI outputs: Managers who use AI tools to inform HR decisions must understand those tools well enough to explain them. "The system said so" is not an explanation. Managers should be able to describe the factors the system considered and why those factors were assessed as they were.
Build explanation templates: For common AI-assisted HR decisions (performance ratings, absence flags, shortlisting outcomes), create template explanations that managers can customise for individual cases. This ensures consistency and completeness.
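A minimal sketch of such a template, with hypothetical placeholder names - the substance is that every explanation covers the same ground: the decision, the factors, the individual's data, the result, and the route to challenge it:

```python
# A hypothetical template sketch for a common AI-assisted decision.
# Placeholder names are assumptions; adapt to your own decision types.

EXPLANATION_TEMPLATE = """\
Decision: {decision}
The system considered: {factors}.
Your data used: {data_used}.
Your result: {result}.
What this means: {consequence}.
A human reviewer ({reviewer}) checked this outcome before it was confirmed.
You can contest this decision or request a further human review by
contacting {contact}.
"""

print(EXPLANATION_TEMPLATE.format(
    decision="Selection for a performance discussion",
    factors="output volume, quality ratings, response times",
    data_used="your recorded output and response times for Q1-Q2",
    result="a score of 58/100, in the lower quartile",
    consequence="you have been invited to a performance discussion",
    reviewer="your line manager",
    contact="hr@example.com",
))
```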
Establish a request process: Employees need to know how to request an explanation of an automated decision affecting them. Include this in your employee privacy notice and staff handbook.
Timescales: Handle explanation requests promptly. UK GDPR does not set a specific deadline for explanations (unlike subject access requests, which must be answered within one calendar month), but unreasonable delay is itself a compliance risk.
When the System Is a Black Box
Some AI tools genuinely cannot explain their outputs because the underlying model is an opaque neural network. If the vendor cannot tell you how the model reaches its outputs, you have a significant compliance problem.
Using a black-box AI tool for decisions with legal or significant effects on employees is not UK GDPR compliant. You cannot satisfy the right to explanation if you do not understand how the tool works.
When evaluating AI HR tools, explicitly ask the vendor:
- Can you explain how the model reaches its outputs?
- Can you provide a plain-English description of the factors the model considers?
- What explainability features does the tool provide (feature importance, decision trees, etc.)?
If the vendor cannot answer these questions, the tool is not suitable for consequential HR decisions.
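The simplest explainable systems make per-factor contributions trivial to compute. The sketch below assumes a linear scoring model with illustrative weights; for genuinely opaque models, model-agnostic explainability techniques such as SHAP or LIME exist, and whether the vendor supports anything like them is precisely the question to ask:

```python
# A hedged sketch of the simplest kind of explainability: per-factor
# contributions for a linear scoring model. Weights are assumptions.

WEIGHTS = {"output_volume": 0.4, "quality_rating": 0.4, "response_time": 0.2}

def contributions(inputs: dict[str, float]) -> list[tuple[str, float]]:
    """Return each factor's contribution to the score, largest first."""
    parts = [(name, WEIGHTS[name] * inputs[name]) for name in WEIGHTS]
    return sorted(parts, key=lambda p: p[1], reverse=True)

for name, value in contributions(
    {"output_volume": 80, "quality_rating": 60, "response_time": 30}
):
    print(f"{name}: contributed {value:.1f} points to the overall score")
```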
The Right to Human Review After Explanation
Providing an explanation is not the end of the process. Having received an explanation, an employee can:
- Contest the accuracy of the data used as input
- Argue that the factors considered were not appropriate
- Provide context that the automated system could not have considered
- Request that a human review the decision in light of this additional information
Your process for handling these challenges must be documented and genuine. A human reviewer who simply confirms the AI's output without considering the employee's points is not providing meaningful review.
Practical Scenarios
Scenario 1 - Performance rating: An employee receives a performance rating generated by an AI performance management system. They request an explanation. You must be able to explain what metrics were used (output volume, quality scores, meeting attendance, 360 feedback), how each was weighted, what their individual scores were on each metric, and what the combined score was. The employee can then challenge the accuracy of any of these inputs.
Scenario 2 - Recruitment rejection: A candidate is rejected after automated screening. They request an explanation and a human review. You must explain what criteria were assessed, how the candidate scored on each, and why the overall score did not meet the threshold. A human must then genuinely review whether the automated assessment was fair in this individual's case.
Scenario 3 - Absence management flag: An AI absence management system flags an employee's absence record and recommends a return-to-work meeting. The employee asks why they were flagged. You explain that the system assessed the number of absences in a rolling 12-month period, the pattern (Monday/Friday absences), and the total days lost. The employee explains that their Monday absences were related to a medical appointment schedule agreed with their previous manager. A human must review this in light of the disability discrimination implications.
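The rule described in this scenario is simple enough to implement - and therefore to explain - explicitly. A minimal sketch, with assumed thresholds:

```python
# A hedged sketch of the kind of rule described in Scenario 3: flagging an
# absence record on a rolling 12-month window and a Monday/Friday pattern.
# The thresholds are assumptions; a real system's rules should be documented
# so they can be explained (and challenged) in exactly these terms.

from datetime import date, timedelta

ABSENCE_THRESHOLD = 8    # assumed: absences in 12 months that trigger a flag
PATTERN_THRESHOLD = 0.5  # assumed: Mon/Fri share that counts as a pattern

def flag_absences(absences: list[date], today: date) -> dict:
    """Explainable absence flag: returns the flag and the reasons behind it."""
    window_start = today - timedelta(days=365)
    recent = [d for d in absences if d >= window_start]
    mon_fri = [d for d in recent if d.weekday() in (0, 4)]  # Mon=0, Fri=4
    share = len(mon_fri) / len(recent) if recent else 0.0
    return {
        "flagged": len(recent) >= ABSENCE_THRESHOLD or share >= PATTERN_THRESHOLD,
        "absences_in_window": len(recent),
        "monday_friday_share": share,
    }
```

Because the rule is explicit, the explanation ("you were flagged because N of your M absences in the last 12 months fell on a Monday or Friday") falls straight out of the system, and the employee's medical-appointment context can then be weighed against it by a human reviewer.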
This is guidance, not legal advice. The right to explanation for AI decisions is a developing area of UK data protection law. If you are uncertain whether your AI HR processes meet the right to explanation requirements, take advice from a data protection specialist.
Related answers
AI in Performance Management: Employer's Legal Guide
Using AI for performance tracking and automated scoring in the UK triggers UK GDPR Article 22 rights. Understand your obligations before automating performance management.
AI in Recruitment Screening: Legal Risks for UK Employers
Using AI to screen CVs and shortlist candidates carries real legal risks. Understand your obligations under the Equality Act 2010 and UK GDPR before automating recruitment.
Employee Rights When AI Makes HR Decisions
UK GDPR Article 22 gives employees the right not to be subject to solely automated HR decisions. Understand what rights apply, when, and what employers must do to comply.
Frequently Asked Questions
- What is the right to explanation for AI decisions at work?
- Under UK GDPR Article 22, employees subject to decisions based solely on automated processing that produce a legal or similarly significant effect have the right to obtain meaningful information about the logic involved in that decision. This does not mean you must disclose the full algorithm, but you must be able to explain what data was used, what factors the system considered, and why the output was what it was, in terms the employee can understand.
- What counts as a 'solely automated decision' in HR?
- A decision is solely automated when there is no meaningful human involvement in reaching it. If a manager receives an AI recommendation and genuinely reviews it, considers context the AI does not have, and takes responsibility for the final decision, it is not solely automated. If the manager automatically approves whatever the AI produces without real scrutiny, it is effectively solely automated regardless of the human being nominally involved.
- Can employees ask for an explanation of an AI recruitment rejection?
- Yes. Candidates have the same UK GDPR rights as employees in this context. If an AI tool played a significant role in a recruitment decision, the candidate can request meaningful information about the logic involved. They can also make a subject access request for the personal data the employer holds about them. If the AI tool shortlisted or rejected them without meaningful human review, Article 22 protections apply.