Catch-22: Negligence In The Age Of AI-Enhanced Safety Management
Negligence is a safety manager’s worst nightmare, and it can arise in many ways: a missing risk assessment, improper vetting, substandard process design or poor decision-making, to name a few. Fortunately, a range of centralized digital tools can help safety managers oversee key safety workflows, such as training, contractor management and process safety management. While these technologies can by no means eliminate negligence, they can reduce the likelihood of negligence-related incidents.
Despite the technology available, there is still much work to do to reduce negligence: even one incident is too many. To address this, vendors are integrating AI into their management systems to, for example, assist with data entry, automatically produce summaries of key safety indicators or identify root causes of accidents. These capabilities are all designed to help decision-makers optimize safety management, resulting in fewer cases of negligence.
However, AI systems are not infallible. They can hallucinate, produce inaccurate information or otherwise malfunction. If a safety manager unintentionally relies on faulty outputs that then contribute to an incident, the manager could be deemed professionally negligent. Any incorrect or misleading advice generated by an AI system ultimately remains the responsibility of the professional who chooses to act upon it. It is, therefore, the user’s obligation to verify the validity, factual accuracy and appropriateness of AI-generated information before applying it in a professional or operational context. Given that current AI-enhanced safety management systems vary greatly in quality, from highly reliable to inconsistent, this issue represents the most significant negligence risk associated with AI for safety managers in 2025.
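To make that verification obligation concrete, the sketch below shows one way a safety platform could enforce a human sign-off gate before AI-generated content enters the official safety record. This is a minimal illustration only, not any vendor’s actual implementation; all class, function and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch: AI output is quarantined as a draft and cannot enter
# the official safety record until a named professional signs it off.
# All names here are hypothetical, not a real product's API.

@dataclass
class AIDraft:
    content: str                       # e.g. an AI-suggested incident root cause
    source_model: str                  # which AI system produced the output
    verified_by: str | None = None     # human reviewer; None until sign-off
    verified_at: datetime | None = None

    def sign_off(self, reviewer: str) -> None:
        """Record that a named professional has checked this output."""
        self.verified_by = reviewer
        self.verified_at = datetime.now(timezone.utc)

def commit_to_safety_record(draft: AIDraft, record: list[str]) -> None:
    """Refuse to file unverified AI output, preserving an audit trail."""
    if draft.verified_by is None:
        raise PermissionError(
            f"Output from {draft.source_model} has not been human-verified"
        )
    record.append(
        f"{draft.content} [verified by {draft.verified_by} "
        f"at {draft.verified_at:%Y-%m-%d %H:%M} UTC]"
    )

# Usage: committing fails until a safety manager explicitly signs off.
record: list[str] = []
draft = AIDraft("Root cause: guard interlock bypassed", source_model="vendor-llm-v1")
draft.sign_off("J. Smith, Safety Manager")
commit_to_safety_record(draft, record)
```

The design point is simple: the system refuses to file unverified output, so the audit trail always records which professional checked the AI’s work before it was acted upon.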
But decision-makers face a double-edged sword: once AI becomes widely accepted as a standard tool for improving safety performance, the decision not to use AI could itself become a primary source of negligence risk. In other sectors, such as healthcare, liability for professional negligence is often assessed against what a ‘reasonable body of opinion’ within the profession would consider acceptable practice. If AI adoption becomes part of that accepted practice, declining to use it could fall below the expected standard of care.
Ultimately, safety managers will have to use AI-enhanced technology with extreme care to avoid negligence claims. Verdantix predicts that using AI for safety management, in some form, will be ubiquitous among firms within five years. Existing and planned implementations are already widespread: the majority of respondents to the 2025 Verdantix EHS global corporate survey noted that they are using or looking to use AI for a broad range of safety use cases. This will entail complexity in terms of investment, deployment and governance, posing business-wide challenges as noted in the Verdantix AI Applied Radar: AI Applied To Safety Management.
To read more about emerging technologies and the role of AI in EHS, check out the Verdantix research page and tune into our upcoming webinar AI In EHS Software: Building The Business Case.
About The Author

Moses Makin
Industry Analyst