Will The UK Hold X Accountable For AI Deepfakes?

Jan 7, 2026
Interview

With the rapid rise of generative AI, platforms are grappling with new forms of harmful content, and regulators are scrambling to keep pace. The recent outcry over non-consensual, AI-generated intimate images created by X’s chatbot, Grok, has brought this conflict to a head, drawing sharp rebukes from officials in the UK, EU, France, and India. To unpack this complex issue, we spoke with Vernon Yai, a leading data protection expert who specializes in the intersection of privacy, risk management, and technology governance. We explored the practicalities of regulatory enforcement, the challenge of inconsistent corporate messaging, the fragmented global legal landscape, and the technical hurdles in moderating AI at the source.

Technology Minister Liz Kendall labeled the Grok-generated images “absolutely appalling.” Beyond public statements, what specific enforcement actions can a regulator like Ofcom take to ensure X complies with its legal duties, and what would a step-by-step remediation plan for the platform typically involve?

Ofcom’s “urgent contact” is far more than a simple phone call; it’s the first step in a formal enforcement process. Legally, tech platforms in Britain have a duty to prevent users from encountering illegal content, and this applies directly to AI-generated material. Ofcom will demand to see the specific steps X and xAI are taking. This isn’t about promises; it’s about evidence. A remediation plan would involve X demonstrating robust, proactive filtering of prompts that lead to these demeaning images. They’d have to provide data on their takedown speeds, prove they are suspending accounts as their Safety team claims, and show how they are re-training the Grok model to refuse such requests. If they fail to provide this, Ofcom has the power to impose significant fines and other binding measures to force compliance.

The article notes X’s conflicting public stances: its Safety account vows to remove illegal content, while the company told Reuters “Legacy Media Lies.” From a crisis management perspective, how does this dual messaging impact trust, and what specific metrics should regulators demand to verify X’s actual enforcement efforts?

This dual messaging is incredibly damaging to any semblance of trust. On one hand, you have a formal Safety account issuing statements about removing illegal content, which is the bare minimum expected. On the other, you have the company and its leadership brushing off the entire issue with dismissive comments and laughing emojis. For a regulator, this signals that the company’s leadership may not be taking its legal obligations seriously, rendering the official statements hollow. To cut through this, regulators must demand verifiable metrics, not just words. They should be asking for the volume of user reports on this type of content, the percentage of those reports that result in content removal or account suspension, and the average time it takes from a report to action. Transparency on these numbers is the only way to hold the company accountable and verify whether its enforcement actions match its public safety claims.
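
To make those numbers concrete, here is a minimal sketch of how such transparency metrics could be computed from a platform’s report log. The log format, field names, and figures below are hypothetical illustrations; X has not published any such schema or data.

```python
from datetime import datetime
from statistics import median

# Hypothetical report log; field names and values are illustrative only.
reports = [
    {"reported": datetime(2026, 1, 5, 9, 0),  "actioned": datetime(2026, 1, 5, 10, 30), "action": "removed"},
    {"reported": datetime(2026, 1, 5, 9, 15), "actioned": None,                         "action": "none"},
    {"reported": datetime(2026, 1, 6, 14, 0), "actioned": datetime(2026, 1, 6, 14, 20), "action": "suspended"},
]

total = len(reports)
actioned = [r for r in reports if r["action"] in {"removed", "suspended"}]

# Metric 1: share of user reports that led to content removal or account suspension.
action_rate = len(actioned) / total if total else 0.0

# Metric 2: median hours from user report to enforcement action.
hours = [(r["actioned"] - r["reported"]).total_seconds() / 3600 for r in actioned]
median_hours = median(hours) if hours else float("nan")

print(f"reports: {total}")
print(f"action rate: {action_rate:.0%}")
print(f"median time-to-action: {median_hours:.1f} h")
```

The point of metrics like these is that they are auditable: a regulator can compare the reported action rate and time-to-action against independent sampling rather than relying on the platform’s public statements.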

The UK’s action is part of a broader international outcry from the EU, France, and India. How do the legal frameworks for tackling AI-generated harmful content differ between these jurisdictions, and what operational challenges does this patchwork of regulations present for a global platform like X?

It’s a significant operational nightmare for a global platform. While the goal is the same—to stop the proliferation of these unlawful images—the mechanisms are different. The European Commission is approaching this from a systemic risk perspective under its digital services laws, which can carry massive fines. French officials have taken a more direct, punitive route by reporting X to prosecutors, treating it as a criminal matter. Meanwhile, Indian authorities are demanding immediate explanations under their own tech rules. This patchwork means X cannot have a single global policy. It requires a highly sophisticated, region-specific compliance strategy, with legal teams and content moderation systems tailored to each jurisdiction’s demands. It’s a complex and costly challenge that tests a platform’s ability to operate responsibly on a global scale.

Given that Grok creates this content on demand, what are the primary technical hurdles in proactively preventing the generation of non-consensual deepfakes, rather than just reactively removing them? Please detail the specific moderation technologies or processes that could be implemented at the AI model level.

The core challenge is moving from reactive takedowns to proactive prevention at the point of creation. Since these images are made on demand, the intervention has to happen before the image is even generated. The first technical hurdle is sophisticated prompt analysis. People trying to create this content use coded language, so simple keyword filters are easily bypassed. You need advanced natural language processing models that can understand the intent behind a prompt, even if it avoids explicit terms. Beyond that, the most robust solution is building safety directly into the AI model’s architecture. This means training Grok not just on data, but on a set of ethical rules, so it learns to recognize and refuse requests that are demeaning, degrading, or aimed at creating non-consensual intimate imagery. It’s the difference between putting a lock on a door and teaching the person inside not to open it for bad actors.
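
As a rough illustration of that pre-generation gate, here is a minimal sketch in Python. The intent classifier is a trivial stand-in (a real deployment would use a trained NLP model rather than substring matching, precisely because coded language defeats simple filters), and every name, category, and prompt here is hypothetical rather than anything Grok or xAI actually uses.

```python
from dataclasses import dataclass

# Illustrative policy categories an intent model would be trained to detect.
BLOCKED_INTENTS = {"non_consensual_intimate_imagery", "targeted_degradation"}

@dataclass
class Verdict:
    allowed: bool
    reason: str

def classify_intent(prompt: str) -> str:
    """Stand-in for a trained intent classifier.

    A real gate would score the intent behind a prompt with a fine-tuned
    model; the substring check below only keeps the sketch self-contained,
    and is exactly the kind of filter coded language bypasses.
    """
    lowered = prompt.lower()
    if "without consent" in lowered or "undress" in lowered:
        return "non_consensual_intimate_imagery"
    return "benign"

def safety_gate(prompt: str) -> Verdict:
    """Decide, before any image is generated, whether to refuse the request."""
    intent = classify_intent(prompt)
    if intent in BLOCKED_INTENTS:
        return Verdict(False, f"refused at generation time: {intent}")
    return Verdict(True, "ok")

def handle_request(prompt: str) -> str:
    verdict = safety_gate(prompt)
    if not verdict.allowed:
        # Refuse at the point of creation and log the attempt for review.
        return verdict.reason
    return "handing prompt to the image model"  # generation happens only past the gate

if __name__ == "__main__":
    print(handle_request("a watercolor of a mountain lake"))
    print(handle_request("generate an intimate image of a real person without consent"))
```

The key design point is that the refusal happens before any image is rendered, which is what separates prevention at the source from reactive takedown.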

What is your forecast for the intersection of generative AI and social media regulation over the next five years?

I forecast a dramatic and swift shift from regulating content to regulating the AI models themselves. For years, the debate was about how quickly platforms removed harmful posts. Now, the focus is moving upstream to the tools that create the content. We will see new legislation demanding “safety by design” for generative AI, compelling companies like X and xAI to conduct rigorous risk assessments before deploying models like Grok to the public. Regulators will require transparency about the data used to train these models and mandate built-in safeguards to prevent the generation of illegal material. The era of releasing powerful AI tools and then shrugging off the consequences is coming to a close; it will be replaced by a legally enforced framework of proactive responsibility.
