AI Employee Monitoring: What's Legal in the UK
AI-powered monitoring tools, including keystroke loggers, webcam tracking, and sentiment analysis, carry specific legal requirements. Here's what UK employers can and cannot do.
AI-powered employee monitoring goes significantly further than traditional monitoring. Tools that use machine learning to analyse patterns, flag anomalies, or score employee behaviour in real time require a higher level of compliance scrutiny than conventional oversight.
What AI Employee Monitoring Includes
AI monitoring tools for the workplace fall into several categories:
Activity and productivity tracking
- Keystroke logging and mouse movement analysis
- Application and website usage monitoring
- Screenshot capture at intervals or continuously
- Idle time detection and productivity scoring
Communication analysis
- Email content scanning for keywords or sentiment
- Instant message analysis
- Meeting transcription and analysis tools
Physical and video monitoring
- AI-powered CCTV with behaviour analysis
- Webcam monitoring for remote workers
- Attendance and movement tracking in premises
Wellbeing and sentiment tools
- Sentiment analysis of written communications
- Tools that attempt to infer employee mood or stress
- Absence pattern prediction systems
Each category carries different legal risks. The more invasive and continuous the monitoring, and the more it relies on inferred data rather than observable facts, the greater the compliance burden.
The Proportionality Test
The ICO's Employment Practices Code and its 2023 guidance on monitoring workers set out a proportionality test that applies to all workplace monitoring. AI monitoring must satisfy all three elements:
- Legitimate purpose: You have a specific, documented reason for the monitoring (not vague rationales like "productivity")
- Necessity: The monitoring is actually required to achieve that purpose - less intrusive alternatives would not work
- Proportionality: The intrusion on employee privacy is proportionate to the benefit you are seeking
Continuous keystroke logging of all employees to ensure remote workers are working is unlikely to pass this test unless you have specific, documented reasons why that level of oversight is necessary. Monitoring specific employees under an existing performance improvement plan, with their knowledge, is more likely to be proportionate.
UK GDPR Requirements for AI Monitoring
AI monitoring tools process employee personal data. As the employer and data controller, you must:
Establish a lawful basis: For employee monitoring, this is typically Article 6(1)(f) - legitimate interests. You must document your Legitimate Interests Assessment, weighing your business interest against the employee's right to privacy. Consent is not appropriate as the basis for workplace monitoring because employees cannot freely refuse when there is a power imbalance.
Carry out a DPIA: Processing that is likely to result in high risk requires a Data Protection Impact Assessment before you begin. AI monitoring of employees is specifically listed by the ICO as a high-risk activity. Your DPIA must identify risks and document how you will mitigate them.
Be transparent: Employees must be informed, in writing, before monitoring begins. Your monitoring policy, staff handbook, or a specific notice must set out:
- What is being monitored
- How the AI tool works and what it analyses
- What data is collected and retained
- How long data is kept
- Who can access monitoring data and for what purposes
- How the data may be used in disciplinary or performance processes
Respect data subject rights: Employees have the right to access their monitoring data (subject access requests), the right to object to processing, and if the AI makes significant automated decisions about them, the right to human review under Article 22.
What the ICO Says About AI-Specific Monitoring
The ICO has taken a specific interest in AI-powered monitoring tools, noting that they can:
- Process significantly more data than traditional monitoring
- Produce inferred data (mood, intent, loyalty) from observable behaviour that may be unreliable
- Create a chilling effect on employees that damages workplace relations
- Generate discriminatory outcomes if the underlying model is biased
The ICO expects employers using AI monitoring tools to understand how those tools work, not simply rely on vendor assurances. If you cannot explain to an employee why they received a particular productivity score, you are not meeting your transparency obligations.
Tools That Are Particularly High Risk
Sentiment analysis of communications: Tools that infer employee mood or attitude from emails and messages can amount, in effect, to processing special category data (data concerning health, including psychological or mental state). This requires explicit justification and a higher legal threshold.
Webcam monitoring of remote workers: Continuous or near-continuous video monitoring of home workers is almost certainly disproportionate. Intermittent screenshots tied to specific performance concerns, with full transparency, may be justifiable in limited circumstances.
Keystroke logging and idle time tracking: These are highly intrusive. The ICO guidance is clear that employers should not use such tools routinely across all staff without specific justification.
Behaviour prediction tools: Tools that claim to predict employee flight risk, disengagement, or misconduct based on communication patterns are ethically problematic and legally risky. Inferred data is unreliable, and acting on it without the employee's knowledge may constitute unfair treatment.
What You Can Do Lawfully
AI monitoring is not prohibited outright. Lawful uses include:
- Network security monitoring to detect data breaches or unauthorised access
- Monitoring of call centre staff who have been informed of call recording
- Access logging to sensitive systems for security and audit purposes
- Checking on output and deliverables rather than tracking process in real time
- Performance management with agreed, transparent metrics
The key is transparency, proportionality, and documentation.
Steps to Implement AI Monitoring Compliantly
If you have a legitimate need for AI monitoring, follow these steps:
- Define the specific purpose before choosing a tool
- Carry out a DPIA documenting risks and mitigations
- Select the least intrusive tool that meets your purpose
- Conduct due diligence on the AI vendor (where data is processed, retention, security)
- Update your employment contracts and staff handbook to include monitoring provisions
- Brief all affected employees in writing before monitoring begins
- Establish a retention and deletion schedule for monitoring data
- Document who can access monitoring data and under what circumstances
- Review the monitoring arrangement regularly to confirm it remains necessary and proportionate
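The retention and deletion step above can be automated. The following is a minimal sketch, assuming each monitoring record carries a category and a collection timestamp; the field names and retention periods are illustrative, not legal minima, and should follow your own documented retention schedule:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention periods per monitoring category -- set these
# to match your documented retention schedule, not these example values.
RETENTION = {
    "access_logs": timedelta(days=90),
    "productivity_reports": timedelta(days=180),
}

def purge_expired(records, now=None):
    """Return only records still within their category's retention period.

    Records in an unrecognised category are dropped rather than kept,
    so nothing is retained without a documented schedule entry.
    """
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        if limit is not None and now - rec["collected_at"] <= limit:
            kept.append(rec)
    return kept
```

Running a purge like this on a schedule, and logging what was deleted and when, also gives you the documentation trail the ICO expects.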
Consequences of Non-Compliance
Unlawful monitoring can result in:
- ICO enforcement action and fines of up to £17.5 million or 4% of annual global turnover, whichever is higher
- Employment tribunal claims for breach of the implied term of mutual trust and confidence (potentially constructive dismissal)
- Exclusion of monitoring evidence from disciplinary or performance proceedings, undermining your case
- Damage to employee relations and retention
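The statutory maximum fine is the higher of the two figures, so for larger employers the turnover-based cap is the one that bites. A quick illustrative calculation:

```python
def max_uk_gdpr_fine(annual_global_turnover_gbp: float) -> float:
    """Maximum UK GDPR fine: the higher of the fixed cap (GBP 17.5m)
    or 4% of annual global turnover."""
    return max(17_500_000, 0.04 * annual_global_turnover_gbp)
```

For a business turning over £100 million the fixed £17.5 million cap applies; at £1 billion of turnover the maximum rises to £40 million.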
This is guidance, not legal advice. If you are considering implementing AI monitoring tools, take advice from an employment solicitor and data protection specialist before proceeding.
Frequently Asked Questions
- Can I use AI monitoring tools on my employees without telling them?
- No. Covert monitoring is unlawful in almost all circumstances. The ICO guidance on employee monitoring is clear: you must inform employees about what monitoring takes place, why, and how the data is used, before the monitoring begins. Undisclosed AI monitoring breaches UK GDPR transparency requirements and can undermine the fairness of any disciplinary action based on the data collected.
- Is it legal to use AI to analyse employee productivity remotely?
- It can be, if you meet the proportionality test: the monitoring must be for a legitimate purpose, necessary to achieve it, and the least intrusive means available. You must inform employees in writing before monitoring begins, carry out a DPIA, and have a clear data retention policy. AI productivity monitoring of remote workers is a high-risk activity that requires careful compliance steps.
- Can an employee refuse AI monitoring?
- Employees cannot simply opt out of monitoring that is a legitimate and proportionate business requirement set out in their employment contract or company policy. However, if monitoring is excessive or unlawful, employees can raise a grievance, make a subject access request for data collected about them, and complain to the ICO. Any disciplinary action based on unlawfully obtained monitoring data is likely to be procedurally unfair.