Singapore Authorities Consider Stepping Up AI Risk Oversight In The Financial Industry
On November 13, the Monetary Authority of Singapore (MAS) issued a consultation paper asking market participants to comment on proposed guidelines on AI risk management by January 31, 2026. The authority’s key objective is to establish a regulatory framework ensuring that the use of AI in Singapore’s financial sector does not introduce systemic or otherwise preventable risks. This is a step in the right direction and sets a benchmark for other markets to follow. However, some financial institutions (FIs) may find the new requirements difficult and expensive to implement.
The proposed guidelines build on the existing MAS principles of fairness, ethics, accountability and transparency (FEAT), first set out in 2019. This consultation process is part of a larger AI regulatory wave across the world; the European Union’s AI Act came into force in 2024 (see Verdantix Market Insight: The Evolving AI Regulation Landscape).
The paper sets out MAS’s supervisory expectations for the use of AI by FIs, covering key aspects such as AI risk management systems, policies and procedures. In addition, the authority is requesting comments on the application of risk controls throughout the AI life cycle – and is particularly looking for FIs to adopt comprehensive AI model oversight with direct reporting to the board of directors. The risk framework should also include a risk materiality assessment of the impact of, complexity of and reliance on each AI model deployed internally, as well as a full inventory of AI models.
The MAS believes that AI risk implications should be considered broadly, covering traditional risks (financial and operational risks), as well as new areas such as conduct, financial crime and reputational risk. In the case of generative AI, the authority is considering the following additional risks: security risk, legal and intellectual property risk, privacy risk and third-party risk.
An interesting element is the oversight of AI adopted by third parties. The MAS expects FIs to check that their partners, providers and vendors adopt AI using a risk framework similar to their own. FIs would have to request evidence that any third parties they work with have AI controls consistent with their risk materiality and are subject to appropriate reviews.
The MAS’s approach places responsibility for AI risk management with the FI itself. This may lead to tensions between chief risk officers (CROs) and chief technology officers (CTOs) within the regulated entities. While this approach seeks responsible AI development and deployment, it will be burdensome. What is clear is that CROs need to start ensuring they can manage the risks of AI implementation by preparing their staff and embracing robust oversight policies.
For more risk management content, check out Verdantix insights, and to learn more about how to navigate the intersections between risk, regulations and safety, register to attend the Verdantix Transform event in Amsterdam in March 2026.
About The Author

Luis Nino
Principal Analyst