Poor Data Architecture Is Why Your AI Fails to Scale

Dec 16, 2025
Interview

We are joined by Vernon Yai, a renowned expert in data protection and privacy governance. In an era dominated by the promise of artificial intelligence, many organizations are discovering that the biggest obstacle isn’t the sophistication of their algorithms, but the fractured state of their data. Vernon argues that a fundamental architectural shift is underway, where the concept of “AI gravity” is pulling computation back to a secure, sovereign data core. We’ll explore his insights on why fixing the data foundation is the only way to truly unlock the potential of AI.

You’ve said that while ‘agentic systems’ are the new force multiplier for businesses, siloed data is the primary hurdle. Can you describe what it looks like when a company successfully untangles this data sprawl to scale one of these intelligent systems and the kind of impact they see?

It’s a night-and-day difference. The vast majority of companies are stuck in this frustrating cycle of running AI pilots in isolated stacks, and when they try to scale, they hit a wall of inherited complexity. But the small percentage of firms that get this right are operating on a completely different level. We’re talking about companies that are seeing five times the ROI and are able to deploy twice as many of these agentic implementations. For them, data isn’t a bottleneck; it’s a competitive moat. They’ve established true AI and data sovereignty, which means their secured, controlled data is available where, when, and how it’s needed, allowing these intelligent systems to drive real decisions and automate operations seamlessly across the enterprise.

The data shows that firms mastering data sovereignty achieve incredible returns, with regions like Germany leading the way. What are they doing differently from a policy or architectural perspective, and if a company is lagging, what are the first few practical steps they should take to catch up?

What sets regions like Germany, Saudi Arabia, and the UAE apart is that they view sovereignty as the absolute foundation of modern AI, not an add-on. They are deeply focused on the proximity, security, and governance of their data from the very beginning. For a company that’s behind, the first step is a mindset shift. You have to accept that you don’t have an AI problem; you have a data architecture problem. The second practical step is to confront that complexity head-on by mapping out the data sprawled across all your systems, users, and vendors. Finally, you must commit to building a unified, governed data platform that will serve as the new “center of gravity” for your entire enterprise. Without that foundational work, you’ll just be spinning your wheels.
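
As a rough illustration of that second step, a first-pass data map can be as simple as walking every database you know about and recording what lives where. The sketch below is a minimal example, assuming Python with the psycopg driver and a hand-maintained list of Postgres connection strings; the source names and DSNs are hypothetical, and a real inventory would also cover SaaS vendors, file stores, and data shared with third parties.

```python
# First-pass data inventory: list tables and approximate row counts
# across a set of Postgres instances. DSNs below are placeholders.
import psycopg

SOURCES = {
    "crm": "postgresql://readonly@crm-db.internal/crm",
    "billing": "postgresql://readonly@billing-db.internal/billing",
    "warehouse": "postgresql://readonly@warehouse.internal/analytics",
}

def inventory(name: str, dsn: str) -> None:
    # pg_stat_user_tables gives a cheap, approximate view of what each system holds.
    with psycopg.connect(dsn) as conn:
        rows = conn.execute(
            "SELECT schemaname, relname, n_live_tup "
            "FROM pg_stat_user_tables ORDER BY n_live_tup DESC"
        ).fetchall()
        for schema, table, approx_rows in rows:
            print(f"{name}: {schema}.{table} ~{approx_rows} rows")

if __name__ == "__main__":
    for name, dsn in SOURCES.items():
        inventory(name, dsn)
```

Even a crude map like this makes the scale of the sprawl visible, which is usually what convinces leadership that the unified, governed platform is worth the investment.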

You argue that many CIOs don’t have an ‘AI problem’ but a ‘data architecture problem,’ and that AI must move closer to the data. Could you walk us through what this architectural shift looks like in practice and highlight the common pitfalls leaders should avoid?

Certainly. For years, the default approach was to pull data out of core systems and push it to specialized AI stacks. This creates massive issues with cost, risk, and speed. The architectural shift we’re seeing now flips that model entirely. Instead of moving the data, you move the AI compute closer to where the data lives, creating a stable, secure center of gravity. The single biggest pitfall is continuing to run AI pilots in those isolated environments. It feels like progress in the short term, but it just builds up technical debt and complexity that makes it nearly impossible to scale into a secure, compliant production environment later on. That inherited complexity is what kills AI initiatives.
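
To make that contrast concrete, here is a minimal sketch of the two patterns, assuming a Postgres source and the psycopg driver; the table, columns, and DSN are hypothetical. The structural point is that in the old model the data is bulk-exported into a separate AI stack, while under AI gravity the filtering and aggregation run where the data lives and only a small, relevant result set ever reaches the model.

```python
# Two ways to feed an AI workload from a transactional database.
# Table and column names are illustrative only.
import psycopg

DSN = "postgresql://app@core-db.internal/orders"

def export_to_ai_stack(conn: psycopg.Connection) -> list[tuple]:
    # Anti-pattern: copy everything out, then filter and score in a separate stack.
    # Duplicates sensitive data outside its governed home and adds cost and risk.
    return conn.execute("SELECT * FROM orders").fetchall()

def ai_gravity_pattern(conn: psycopg.Connection, customer_id: int) -> list[tuple]:
    # Compute moves toward the data: push filtering and aggregation down to the
    # database so only the small context the model needs ever leaves it.
    return conn.execute(
        """
        SELECT product_id, SUM(amount) AS total_spend
        FROM orders
        WHERE customer_id = %s
          AND placed_at > now() - interval '90 days'
        GROUP BY product_id
        ORDER BY total_spend DESC
        LIMIT 20
        """,
        (customer_id,),
    ).fetchall()

if __name__ == "__main__":
    with psycopg.connect(DSN) as conn:
        context = ai_gravity_pattern(conn, customer_id=42)
        # `context` would be handed to a model prompt or agent next, rather than
        # exporting the whole table into an isolated pilot environment.
        print(context)
```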

Your research highlights a massive trend, with 97% of enterprises wanting to build their own AI platforms, and many are turning to Postgres. Beyond its open-source appeal, what specific capabilities make it so well-suited for this new model of handling transactional, analytical, and AI workloads all in one place?

The market is definitely moving from thinking about a single database to architecting a complete data platform, and Postgres is uniquely positioned for this. Its versatility is its greatest strength. It has always been a robust engine for structured and unstructured data, but now, in the age of LLMs, it has become central to the conversation around context data and retrieval. This means a single, unified platform built on Postgres can handle the transactional workloads that run your business, the analytical workloads that provide insights, and now the AI-driven workloads that power intelligent systems. Our research showed one in four enterprises are already building their own sovereign platforms on Postgres for exactly this reason—it’s an open, extensible foundation that avoids vendor lock-in and supports everything they need.
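
As a simplified illustration of what "one platform for transactional and AI workloads" can look like on Postgres, the sketch below uses the pgvector extension to store embeddings alongside the operational rows and answer a retrieval query in place. It assumes pgvector is installed and that the embeddings come from some external model; the table, DSN, and tiny three-dimensional vectors are hypothetical stand-ins.

```python
# One Postgres database serving a transactional write and an AI retrieval query.
# Requires the pgvector extension; the vectors are placeholders for real embeddings.
import psycopg

DSN = "postgresql://app@core-db.internal/platform"

with psycopg.connect(DSN) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS support_tickets (
            id        bigserial PRIMARY KEY,
            body      text NOT NULL,
            embedding vector(3)   -- real systems use hundreds of dimensions
        )
        """
    )

    # Transactional workload: the write that runs the business.
    conn.execute(
        "INSERT INTO support_tickets (body, embedding) VALUES (%s, %s::vector)",
        ("Cannot log in after password reset", "[0.12, 0.90, 0.33]"),
    )

    # AI workload: nearest-neighbour retrieval for LLM context, in the same database.
    question_embedding = "[0.10, 0.85, 0.30]"  # embedding of the user's question
    similar = conn.execute(
        """
        SELECT id, body
        FROM support_tickets
        ORDER BY embedding <-> %s::vector
        LIMIT 5
        """,
        (question_embedding,),
    ).fetchall()
    print(similar)
```

Rows retrieved this way become the context handed to an LLM or agent, so the transactional record, the analytical view, and the AI retrieval path all sit behind a single set of access controls.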

The idea of ‘AI gravity’ pulling compute towards the data is a compelling one. If we look three years down the road, what does a typical enterprise data stack look like under this model, and how does that unified platform change the day-to-day reality for a data scientist or developer?

In three years, I believe the most successful enterprise stacks will be far less fragmented. The era of stitching together dozens of niche tools will give way to a sovereign and open data foundation that serves as the core. For a data scientist or developer, this is a revolutionary change. Their daily work will shift from spending months navigating bureaucracy and security hurdles just to access the right data, to building and deploying AI-powered applications in a matter of days. This is what our low-code AI factory approach is all about. When you have a unified platform where data is already governed, secure, and available, the entire innovation lifecycle accelerates dramatically. It empowers them to realize the ambition of turning their company into its own self-sufficient AI and data powerhouse.

What is your forecast for the ‘architect’s dilemma’? As AI gravity reshapes the landscape, which legacy data practices are most at risk of extinction, and what new role do you see emerging for data architects in this sovereign, AI-centric world?

The legacy practice facing the fastest extinction is the constant, high-risk duplication and movement of data into separate, siloed stacks for every new AI project. That model is simply too slow, too costly, and too insecure for the agentic era. As for the data architect, their role is becoming more critical than ever, but it’s evolving. They are moving from being plumbers who connect disparate systems to being the master planners of the enterprise’s new digital core. Their new primary function will be to design, build, and govern this unified, sovereign data platform—that “center of gravity” I mentioned. They will be the ones ensuring that a single, secure foundation can power transactional, analytical, and AI systems all at once, which is an enormously compelling and strategic responsibility for the future of the enterprise.
