Using AI in Disciplinary Investigations: What Employers Must Know
AI can assist with gathering evidence and drafting investigation reports, but disciplinary fairness requires human judgment. Understand what's permissible and what creates unfair dismissal risk.
AI tools are increasingly being used to assist disciplinary investigations: analysing communication logs, searching email archives, producing summaries of evidence, and drafting investigation reports. Used carefully and within a sound procedural framework, they can make investigations more thorough. Used improperly, they create unfair dismissal risk and potentially worse.
What the ACAS Code of Practice Requires
The starting point for any disciplinary investigation is the ACAS Code of Practice on Disciplinary and Grievance Procedures. Employment tribunals must take this Code into account when assessing the fairness of disciplinary decisions. If you fail to follow it without good reason, any award against you can be increased by up to 25%.
The Code requires:
- A fair investigation before any disciplinary action is taken
- Written notification to the employee of the allegations against them
- Copies of the evidence relied upon provided to the employee
- A reasonable opportunity for the employee to respond to the evidence
- The right to be accompanied at any disciplinary hearing
- A decision made by an impartial manager (not the investigator)
- A right of appeal
These requirements apply regardless of whether AI was used in the investigation. AI can help you gather and organise evidence, but it cannot replace the procedural requirements.
Where AI Can Legitimately Assist
Evidence gathering from digital sources: If you have lawfully implemented monitoring of emails, communications, or computer activity, and employees have been informed of this monitoring, AI tools can assist in searching and analysing large volumes of communications to identify relevant evidence. This can be significantly faster than manual review.
Pattern identification: AI can identify patterns across communications or activity logs that would be difficult to spot manually - frequency of contact, timing patterns, volume of activity.
Document summarisation: AI can summarise large volumes of documentary evidence to help the investigator understand the material more quickly.
Investigation report drafting: Once the investigator has gathered and assessed evidence, AI can assist in drafting the investigation report. The investigator must review and take ownership of the report - it cannot simply be an AI output.
Transcription of investigation interviews: AI transcription tools can produce detailed records of investigation interviews, reducing the risk of disputed accounts of what was said. The transcript should still be checked against the recording before being relied upon.
Where AI Creates Risk
Evidence not disclosed to the employee: If AI analysis is used to build a case against an employee but the specific AI outputs are not shared with the employee as part of the evidence pack, the employee cannot meaningfully respond. This is a fundamental procedural failure.
Evidence from unlawful monitoring: If AI is used to analyse communications that were not subject to disclosed monitoring, or where the monitoring was disproportionate, the evidence may be inadmissible and the investigation may be tainted.
AI replacing the investigator's judgment: The investigator must assess the credibility of witnesses and the weight of evidence as a human. If an AI tool produces conclusions about what the evidence shows and the investigator simply adopts them without genuinely reviewing the evidence themselves, the investigation is not independent or fair.
Algorithmic analysis of behaviour patterns: Using AI to conclude that an employee's behaviour pattern indicates misconduct, without evidence of particular acts, is risky. A tribunal will want to see evidence of specific acts, not statistical patterns.
Failure to consider the employee's explanation: If AI analysis points to misconduct but the employee has provided an explanation that the AI has not been programmed to assess, the human investigator must genuinely consider that explanation. An AI tool that produces a conclusion cannot weigh the employee's response.
Procedural Requirements When Using AI Evidence
If AI-generated evidence forms part of your disciplinary case, these procedural requirements apply:
- Disclose all AI-generated evidence to the employee before the disciplinary hearing, as part of the evidence bundle. This includes summaries, analysis reports, and raw outputs.
- Explain how the evidence was generated: The employee and their companion must understand what data was analysed, how the AI tool works in general terms, and what conclusions the AI drew. An unexplained black-box output is not fair evidence.
- Give the employee adequate time to respond: The employee (or their trade union representative or companion) may need time to understand and respond to complex AI-generated evidence. Do not rush the process.
- Maintain the investigator's independence: The investigator must have genuinely reviewed the AI evidence and formed their own view. If challenged, the investigator must be able to explain their conclusions in their own words, not simply refer back to the AI output.
- Do not use AI evidence as the sole basis for a serious finding: Disciplinary decisions, particularly those leading to dismissal, should be based on a range of evidence, not solely on AI analysis. AI evidence can corroborate other evidence, but relying on it alone for a finding of gross misconduct creates serious risk.
Specific Scenarios
Email investigation: You suspect an employee has been sharing confidential company information externally. AI tools can search email archives for relevant communications. Disclosure of the specific emails to the employee is required. The investigator must read and assess the emails, not simply accept an AI conclusion that misconduct occurred.
Social media investigation: AI tools can be used to gather and analyse publicly available social media posts. Private posts accessed through means other than the platform's public interface raise serious legal questions. Anything obtained through a third party without the employee's knowledge may breach data protection law.
Time and activity log analysis: AI analysis of access logs showing an employee was not working during contracted hours is evidence that can be used, provided monitoring was disclosed and proportionate. The employee must be given the data and the opportunity to explain.
Communication sentiment analysis: Be extremely cautious about using AI sentiment analysis of employee communications as disciplinary evidence. The reliability of such analysis is not established, it is not an objective factual record, and it will be challenged vigorously. Stick to factual content of communications, not AI assessments of tone or intent.
Protecting Yourself at Tribunal
If a disciplinary decision is challenged at employment tribunal, you will need to demonstrate:
- That your investigation was reasonable and thorough
- That the employee was aware of all the evidence used against them
- That the employee had a genuine opportunity to respond
- That the decision-maker genuinely considered the evidence and the employee's response
- That the sanction imposed was within the band of reasonable responses
If AI was used in the investigation, be prepared to explain how it was used, what outputs it produced, and how those outputs were verified and assessed by a human investigator. A clear investigation report that documents these steps is your best protection.
This is guidance, not legal advice. Disciplinary investigations carry significant legal risk, particularly where dismissal is a possible outcome. Take advice from an employment solicitor if you are dealing with a complex case or if you are considering using AI tools as part of an investigation.
Frequently Asked Questions
- Can I use AI to analyse employee communications as part of a disciplinary investigation?
- You can use AI to analyse communications you have lawful access to, but only if employees were told monitoring was in place before the communications were sent, the monitoring was proportionate, and you are using it for a purpose covered by your monitoring policy. Evidence gathered through covert or disproportionate monitoring is likely to be challenged at tribunal, and relying on it without giving the employee the chance to see and challenge it breaches procedural fairness.
- Does the ACAS Code of Practice apply to AI-assisted disciplinary processes?
- Yes. The ACAS Code of Practice on Disciplinary and Grievance Procedures applies to all disciplinary processes regardless of the tools used. This means: the employee must know the allegations against them, they must have the opportunity to respond, they must be given copies of all evidence relied upon, they must have the right to be accompanied, and there must be an appeal. An AI tool cannot replace any of these procedural requirements.
- Can AI-generated evidence be used in an employment tribunal?
- AI-generated evidence can be submitted to an employment tribunal, but it must be disclosed to the employee in advance, the employee must have had the opportunity to challenge it, and the tribunal will assess its reliability. Evidence that was obtained in breach of the employee's data protection rights or through disproportionate monitoring may be excluded or may weigh against you.