Singapore Sets Global AI Risk Standards for Finance

Nov 14, 2025
Interview
Singapore Sets Global AI Risk Standards for Finance

Today, we’re thrilled to sit down with Vernon Yai, a renowned data protection expert with deep expertise in privacy protection and data governance. With a career dedicated to risk management and pioneering detection and prevention techniques, Vernon is the perfect guide to help us unpack the latest developments in AI risk management within the financial sector. Our conversation dives into the groundbreaking proposals from the Monetary Authority of Singapore (MAS) on holding boards and senior managers accountable for AI-related risks, explores the unique challenges posed by this technology, and examines why such guidelines are critical in today’s rapidly evolving landscape.

Can you walk us through the core of what the Monetary Authority of Singapore is proposing with their new AI risk management guidelines for the financial sector?

Absolutely. The MAS has put forward a consultation document that aims to make boards and senior managers directly responsible for managing risks tied to artificial intelligence in financial institutions. It’s about ensuring that those at the top aren’t just signing off on AI adoption but are deeply involved in understanding and mitigating its risks. This means setting clear expectations for oversight, assigning specific responsibilities, and making sure there’s accountability if things go wrong. It’s a bold move to embed governance right at the executive level, tailored to fit organizations of varying sizes and risk profiles.

How does this proposal stand out when compared to AI regulations in other regions, like the European Union’s AI Act?

What’s striking about the MAS proposal is the level of detail and specificity around the role of boards and senior management. While the EU’s AI Act also emphasizes accountability, it’s broader and more legislative in nature, focusing on categorizing AI systems by risk level and imposing corresponding obligations. The MAS guidelines, on the other hand, drill down into practical expectations for financial institutions, offering a principles-based yet comprehensive framework. It’s less about heavy-handed laws and more about proportionate, actionable guidance, which could position Singapore as a model for others to follow.

What do you think is driving the timing of these guidelines from MAS right now?

The timing is no accident. AI adoption is booming in Singapore’s financial sector, mirroring global trends. We’re seeing major players like DBS, OCBC, and UOB heavily investing in AI, from retraining their workforce to automating day-to-day operations. This rapid integration increases dependency on AI, and with that comes heightened risk. MAS is stepping in proactively to set guardrails before these technologies become even more entrenched, ensuring that innovation doesn’t outpace oversight.

Could you elaborate on the specific responsibilities boards of directors would have under these proposed rules?

Under the MAS guidelines, boards aren’t just there to approve AI initiatives—they’re expected to have a solid grasp of AI’s implications to provide effective oversight. This means actively challenging decisions, assessing risks at every stage of AI implementation, and designating individuals or committees to handle specific risk areas. It’s a shift from passive approval to active engagement, ensuring that boards are equipped to foresee and address potential issues before they escalate.

What are some of the key risks AI introduces to the financial sector, as highlighted by MAS?

MAS has flagged several critical risks. For one, AI could cause unexpected service disruptions if systems behave unpredictably. There’s also the danger of failing to detect financial crime due to flawed models. Bias in AI systems is another big concern, as is the reputational damage from customer-facing tools like chatbots delivering incorrect information. These risks aren’t just technical—they can directly impact trust and stability in the financial ecosystem.

With generative AI becoming more common, how do these risks get amplified according to the MAS perspective?

Generative AI takes these risks to another level because it’s inherently unpredictable and tough to test thoroughly before deployment. MAS points out issues like data poisoning, where bad data corrupts the AI’s outputs, and prompt injection, where malicious inputs trick the system into harmful actions. There are also legal and ethical concerns, like using data without consent or facing outages in underlying AI services. The unpredictability of generative AI makes it a wild card that can magnify operational and reputational damage if not managed carefully.

For those who aren’t tech-savvy, can you break down terms like data poisoning and prompt injection, and explain their potential impact on a bank’s operations?

Sure, let’s simplify it. Data poisoning happens when an AI system is trained on corrupted or malicious data, leading it to make wrong decisions—like approving fraudulent transactions because it learned from tainted examples. Prompt injection is when someone manipulates the AI by inputting sneaky commands, potentially causing it to leak sensitive information or act against the bank’s interests. For a bank, this could mean financial losses, breaches of customer data, or even operational shutdowns if critical systems are compromised. It’s like teaching a guard dog the wrong signals—it might end up letting intruders in.

How can financial institutions mitigate these specific risks tied to generative AI?

Mitigation starts with robust design and testing. Banks need to ensure their AI systems are trained on clean, verified data and regularly audited for vulnerabilities. Implementing strict access controls and monitoring for unusual inputs can help counter prompt injection. There’s also a need for ongoing training for staff and boards to spot red flags. Beyond that, adopting a ‘secure by design’ approach—building safety into AI from the ground up—and collaborating with regulators for best practices can significantly reduce these risks.

MAS has raised concerns about AI models used for risk assessments. What could go wrong if these models underperform?

If AI models for risk assessment don’t work as intended, the consequences can be severe. Poor performance might mean misjudging credit risks or market trends, leading to substantial financial losses for the bank. On the customer side, it could result in unfair treatment—like denying loans to deserving applicants—or even direct financial harm if faulty advice is given. Operationally, unexpected behaviors in these systems could disrupt critical functions, grinding key processes to a halt and eroding confidence in the institution.

Looking ahead, what is your forecast for how AI risk management will evolve in the financial sector globally, especially with initiatives like these from MAS?

I believe we’re at a turning point where AI risk management will become a core pillar of financial regulation worldwide. The MAS guidelines could set a precedent, inspiring other regulators to adopt similar accountability frameworks tailored to their markets. We’ll likely see a push for global alignment on standards, especially as AI systems cross borders. At the same time, I expect more emphasis on real-time monitoring and adaptive governance as AI evolves. The challenge will be balancing innovation with safety, but proactive steps like these from MAS are a strong start in shaping a resilient future.

Trending

Subscribe to Newsletter

Stay informed about the latest news, developments, and solutions in data security and management.

Invalid Email Address
Invalid Email Address

We'll Be Sending You Our Best Soon

You’re all set to receive our content directly in your inbox.

Something went wrong, please try again later

Subscribe to Newsletter

Stay informed about the latest news, developments, and solutions in data security and management.

Invalid Email Address
Invalid Email Address

We'll Be Sending You Our Best Soon

You’re all set to receive our content directly in your inbox.

Something went wrong, please try again later