Opening The AI Black Box: A Practical Guide To Building Your AI Governance Framework
Security, risk and governance considerations are major hurdles to AI adoption that enterprises need to tackle, particularly in the absence of strong regulatory guidance. In fact, nearly two-thirds of firms (65%) consider cybersecurity a significant or the most significant barrier to AI adoption.
Within a broader enterprise risk context, AI-related risk remains front and centre. Recent discussions at the #Risk AI conference in London emphasized the pressure on IT and risk leaders to strengthen governance frameworks and address AI integration risks. To support decision-makers facing these demands, Verdantix has distilled three practical takeaways for firms embarking on an AI governance journey:
- Governance frameworks should build upon existing best practices.
Continuous AI innovation creates an evolving risk profile, with new threats and attack vectors. For instance, Asana’s adoption of the Model Context Protocol (MCP), introduced in November 2024 but widely implemented this year, led to an exposure incident costing $7.5 million in remediation. When mitigating adoption risks, established frameworks and standards provide a solid grounding: GDPR and SOC 2 offer a baseline; ISO 27001 and ISO 42001 deliver structured governance; and OWASP resources further inform secure product development and integration. Leveraging these frameworks and standards alongside internal risk appetite statements and AI approval workflows is a practical starting point for risk professionals constructing a governance framework (see the first sketch after this list).
- Organizations should enable controlled AI usage, rather than impose blanket restrictions.
Shadow AI, or the unsanctioned use of AI within organizations, creates risks spanning data leakage, compliance breaches and technical vulnerabilities. Verdantix research shows that nearly half of firms lack full oversight of AI use. Mitigation requires controlled AI adoption rather than outright bans: prohibitive policies combined with easy access to commercial tools will simply drive unsanctioned use. Organizations should provide approved AI tools and ensure employees receive guidance on secure practices, balancing risk management with accessibility to reduce exposure while supporting productivity (see the allowlist sketch after this list).
- AI literacy is a foundational component of AI governance.
AI literacy is essential for embedding risk awareness throughout the AI life cycle and should be a core element of governance frameworks. AI risks are often obscured by technical terminology and opaque functionality: users input data and receive outputs without understanding the underlying processes. A lack of awareness can lead to extreme outcomes, such as an employee at a multinational corporation in Hong Kong authorizing a £20 million transfer after an AI-enabled video call scam. Corporate IT and risk controls are most effective when paired with strong knowledge of AI’s capabilities and threats. Organizations are responding with large-scale literacy programmes; IKEA, for example, is training over 160,000 employees on AI fundamentals, distributing governance responsibility to the individual level.
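To make the first takeaway concrete, the minimal sketch below shows one way an AI approval workflow might encode a risk appetite statement as machine-checkable rules. Every name here (ToolRequest, RiskPolicy, the attestation labels and classification tiers) is illustrative rather than any vendor’s actual API; a production workflow would add human review, logging and exception handling.

```python
# Hypothetical sketch: routing an AI tool request through risk appetite checks.
# All class names, attestation labels and thresholds are illustrative.
from dataclasses import dataclass, field


@dataclass
class ToolRequest:
    tool_name: str
    vendor_attestations: set[str]   # e.g. {"SOC 2", "ISO 27001", "ISO 42001"}
    data_classification: str        # "public" | "internal" | "confidential"


@dataclass
class RiskPolicy:
    # A risk appetite statement expressed as checkable rules (assumed shape).
    required_attestations: set[str] = field(
        default_factory=lambda: {"SOC 2", "ISO 27001"}
    )
    max_data_classification: str = "internal"
    _RANK = {"public": 0, "internal": 1, "confidential": 2}  # class attribute

    def evaluate(self, request: ToolRequest) -> tuple[bool, list[str]]:
        """Return (approved, reasons); any failure routes to manual review."""
        reasons: list[str] = []
        missing = self.required_attestations - request.vendor_attestations
        if missing:
            reasons.append(f"missing attestations: {sorted(missing)}")
        if self._RANK[request.data_classification] > self._RANK[self.max_data_classification]:
            reasons.append(
                f"data classification '{request.data_classification}' exceeds risk appetite"
            )
        return (not reasons, reasons)


request = ToolRequest("example-assistant", {"SOC 2"}, "confidential")
approved, reasons = RiskPolicy().evaluate(request)
print(approved, reasons)  # False, two findings -> escalate to risk review
```

The value of expressing the policy this way is auditability: each rejection carries explicit reasons that map back to the risk appetite statement.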
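For the second takeaway, the sketch below illustrates how a secure web gateway or proxy might distinguish sanctioned from unsanctioned AI tools, redirecting shadow AI traffic towards an approved alternative rather than silently blocking it. The domains and policy values are hypothetical.

```python
# Hypothetical sketch: an approved-AI-tools allowlist check at the network edge.
# Domains are placeholders, not recommendations.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {                 # sanctioned tools (illustrative)
    "copilot.example-enterprise.com",
    "chat.internal-llm.example.com",
}
KNOWN_AI_DOMAINS = {                    # common shadow AI destinations (illustrative)
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
} | APPROVED_AI_DOMAINS


def classify_request(url: str) -> str:
    """Return 'allow', 'block-and-redirect' or 'allow-non-ai' for a request."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return "allow"                  # sanctioned tool: permit and log usage
    if host in KNOWN_AI_DOMAINS:
        # Blocking alone drives workarounds; redirecting users to the approved
        # alternative also captures a demand signal for the risk team.
        return "block-and-redirect"
    return "allow-non-ai"


print(classify_request("https://chat.openai.com/c/123"))            # block-and-redirect
print(classify_request("https://copilot.example-enterprise.com/"))  # allow
```

Treating redirects as telemetry, not just enforcement, tells risk teams which AI capabilities employees actually want, informing which tools to sanction next.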
The absence of strong regulatory guidance, combined with AI’s immaturity, has produced divergent approaches to enterprise AI governance and risk. Organizations should apply these practical takeaways to ground their governance approach in best practices for AI adoption. To learn more about AI governance frameworks and best practices for mitigating AI risk, join our webinar – Mind The AI Governance Maturity Gap – on November 5. For further insights on AI innovations and emerging threats, visit the Verdantix AI Applied insights page or schedule an analyst inquiry.
About The Author

Aleksander Milligan
Analyst