How Will Singapore’s New Online Safety Law Protect Users?

Oct 21, 2025
Interview
As we delve into the evolving landscape of online safety, I’m thrilled to sit down with Vernon Yai, a renowned data protection expert specializing in privacy protection and data governance. With a career dedicated to risk management and pioneering detection and prevention techniques, Vernon offers invaluable insights into safeguarding sensitive information in the digital age. Today, we’re exploring Singapore’s groundbreaking approach to online safety through a new commission and related legislation. Our conversation touches on the commission’s objectives, the specific online harms it aims to combat, the powers it will wield, and how this initiative fits into the broader context of digital regulation in the region.

Can you walk us through the purpose and vision behind Singapore’s new online safety commission?

Absolutely. This new commission is a significant step toward creating a safer digital environment in Singapore. Its primary goal is to address harmful online content and protect users from various digital threats. It’s designed to tackle issues that have been escalating with the growth of social media and online interactions. The commission is expected to be fully operational by the end of the first half of 2026, giving authorities time to set up the necessary frameworks and partnerships with tech platforms to ensure effective implementation.

What types of online harms is this commission prioritizing at the start?

Initially, the focus will be on some of the most pressing and damaging forms of online harm, such as online harassment, doxxing, stalking, the abuse of intimate images, and child pornography. These are areas where victims often suffer significant emotional and psychological distress, and the commission aims to provide swift intervention. There are also plans to expand the scope over time to include other harms like the non-consensual sharing of private information and content that incites enmity, ensuring the framework adapts to emerging threats.

How will the commission enforce its authority over harmful content on social media platforms?

The commission will have robust powers to direct social media platforms to restrict access to harmful material within Singapore. This means they can order the removal or blocking of specific posts or content that violate safety standards. Beyond that, they’re empowered to give victims a right to reply, which is a crucial step in restoring some control to those affected. They can also ban perpetrators from accessing platforms if the severity of the offense warrants it, though such actions would likely be guided by clear criteria to ensure fairness.

What role do internet service providers play in the commission’s strategy to combat online harms?

Internet service providers are a key part of the enforcement mechanism. The commission will have the authority to order these providers to block access to specific online locations, which could include group pages or even entire websites if they’re deemed sources of harmful content. This approach ensures that harmful material can be curtailed at a broader level, especially when dealing with platforms or pages that are unresponsive to direct requests for content moderation.

What prompted Singapore to establish this commission, and what challenges have they identified in the current online landscape?

The urgency for this commission stems from research by the Infocomm Media Development Authority, which revealed in February that over half of legitimate user complaints about harmful content—issues like child abuse and cyber-bullying—were not addressed promptly by platforms. The government has also pointed out that social media platforms often fail to act on reports of genuinely harmful content, leaving victims without recourse. This gap in responsiveness highlighted the need for a dedicated body to step in and enforce accountability.

Could you shed some light on the new online safety bill tied to this commission?

Certainly. The bill was introduced to parliament on Wednesday, marking a formal step toward establishing the commission’s legal foundation. It’s set to be debated at the next available parliamentary session, which will likely involve discussions on its scope, powers, and implementation timeline. This legislative process is critical to ensuring the commission has the authority and resources it needs to function effectively while balancing user rights and platform responsibilities.

How does this initiative tie into other recent digital safety laws in Singapore?

This commission builds on other recent efforts, like the Online Criminal Harms Act, which came into effect in February 2024. One of the first actions under that act was directed at a major social media company, with the government issuing an order to address impersonation scams. In September, the home affairs ministry threatened significant fines if the company didn’t implement measures like facial recognition to curb these scams. While it’s unclear whether full compliance was achieved, this demonstrates how Singapore is weaving together multiple legal tools to hold platforms accountable for user safety.

What has been the government’s perspective on the responsibility of social media platforms in managing harmful content?

The government has been quite vocal about its expectations. They’ve emphasized that platforms bear a significant responsibility to act on harmful content reported by users. There’s a clear frustration with the inconsistent or delayed responses from some platforms, which often leave victims vulnerable. This stance is driving the push for stronger oversight through mechanisms like the new commission, ensuring that platforms can’t simply ignore or delay addressing serious issues.

What is your forecast for the future of online safety regulations in regions like Singapore?

I believe we’re going to see a continued push for stricter and more comprehensive online safety regulations, not just in Singapore but across the globe. As digital threats evolve, governments will likely adopt more proactive and collaborative approaches with tech companies, while also empowering users through education and legal recourse. In Singapore specifically, I expect the commission’s scope to broaden over time, potentially addressing emerging issues like deepfakes or AI-generated misinformation. The balance between safety, innovation, and free expression will remain a key challenge, but with frameworks like this, there’s a strong foundation to build on.
