Varying Levels Of AI Adoption And Lagging Governance Strategies Take Over #RISK Expo Europe 2025

Corporate Risk Leaders
Blog
15 Dec, 2025

At #RISK Expo Europe in London, the topic dominating the agenda this year – perhaps unsurprisingly – was AI. Not all vendors, however, are looking at AI in the same way. There are significant differences in the level of AI integration in GRC solutions, driven by varying views on both the right pace of AI investment and how it should be directed.

The former is natural: the pace of AI investment will vary from firm to firm based on factors such as digital maturity, budget allocation and board priorities. The more interesting element is the latter: where that investment is directed.

On one end of the spectrum, there are organizations using AI in a relatively basic form. Their approach focuses on using large language models (LLMs) to support filling in fields across multiple sections of their solutions. These firms are taking a ‘wait-and-see’ approach that could be useful if they are prepared to invest heavily in the next two years, otherwise they risk lagging behind.

At the other extreme, some entities have taken an aggressive leap forward and invested significantly in the technology. They are integrating full predictive analytics models – along with AI agents – across multiple modules of their solutions. This first-mover strategy presumes that their current and potential clients have a high level of data hygiene (see below). That is a large assumption, and if it proves wrong it could leave investors with unmet expectations.

One area where everybody seems to agree, though, is the need to keep a ‘human in the loop’ (HITL). This approach, which maintains an element of genuine human oversight, has undergone a shift in recent months. Unlike the early days of AI, when human analysts seemed all but done for, many leading vendors are now reconciling human and machine.

The feeling that the analyst-AI gap needs to be closed is primarily based on two factors. First, many firms are realizing that a copilot (as opposed to an autopilot) has wider use cases in high-stakes, high-profile domains – accountability doesn’t outsource well. Second is an acute realization that some of the highest-value insights still depend on human judgment. Automation matters most in the collection, extraction and arrangement of information, precisely because it leaves the analyst with refined, actionable data to interpret.

Because of this shift, we are seeing more vendors design their AI systems with HITL in mind, with features like mandatory reviews and sign-offs, configurable escalation pathways, and embedded feedback loops that allow analysts to review and correct actions in real time. However, HITL alone is not an AI governance strategy. Firms looking to invest in AI must also consider their:

  • Data hygiene.
    When training an AI model, input data must meet minimum criteria in terms of architecture, labels, completeness and digitization to provide valuable outputs and tangible business outcomes. Ultimately, AI capabilities depend on the data the model is trained on, and the data it draws on at inference time. Remember: ‘garbage in, garbage out’.
  • Industry and use cases to ensure investment in the correct technology.
    Not all industries and use cases will support every AI technology, and vice versa. For example, a maintenance engineer looking to record an equipment defect (e.g. a cracked valve) would require image-to-text AI capabilities to extract visible defect details and log them into maintenance tracking records for audit evidence.
  • AI governance and controls frameworks.
    Without a defined, practical and regularly reviewed plan to govern AI, organizations risk exposing themselves to fines, penalties and reputational damage arising from AI misuse. Defining an AI governance plan and incorporating AI controls within internal control environments must be the first step of any implementation.
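To make the HITL features described above concrete – mandatory reviews and sign-offs, escalation pathways, and feedback loops – here is a minimal sketch of how such a gate might be structured. All names, thresholds and the `Proposal`/`HITLGate` classes are hypothetical illustrations, not any vendor’s actual product design:

```python
from dataclasses import dataclass, field

# Hypothetical HITL gate: the AI proposes an action; low-confidence or
# high-impact proposals escalate to an analyst for sign-off; every
# decision is logged as feedback for later model and control review.

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; would be tuned per use case

@dataclass
class Proposal:
    action: str
    confidence: float
    high_impact: bool = False  # configurable escalation criterion

@dataclass
class HITLGate:
    feedback_log: list = field(default_factory=list)

    def review(self, proposal, analyst_approves):
        """Auto-approve only confident, low-impact proposals;
        everything else requires an explicit analyst sign-off."""
        needs_human = (proposal.high_impact
                       or proposal.confidence < CONFIDENCE_THRESHOLD)
        approved = analyst_approves(proposal) if needs_human else True
        # Embedded feedback loop: record what was escalated and decided.
        self.feedback_log.append((proposal.action, needs_human, approved))
        return approved

gate = HITLGate()
# Confident, low-impact: passes without human review.
auto = gate.review(Proposal("close low-risk finding", 0.97),
                   analyst_approves=lambda p: False)
# High-impact: escalates regardless of confidence; analyst signs off.
escalated = gate.review(Proposal("suspend supplier", 0.97, high_impact=True),
                        analyst_approves=lambda p: True)
print(auto, escalated)  # → True True
```

The design choice here is that escalation criteria (confidence, impact) are data, not code, so risk teams can tighten or relax the pathway without redeploying the system.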

For further insights on AI risks and innovations, see Verdantix risk management research and the Verdantix AI Applied Radar: AI Applied To Risk Management. To find out what to expect from risk management in 2026, tune in to our predictions webinar.
