Use Governance to Guide, Not Block, AI Innovation

Dec 31, 2025
Interview

As AI rapidly moves from experimental technology to a core business driver, leaders are grappling with a critical challenge: how to innovate without inviting unacceptable risk. We sat down with Vernon Yai, a data protection and governance expert, to navigate this complex landscape. He sheds light on how organizations can transform governance from a perceived barrier into a strategic enabler for AI adoption. Our conversation explores building collaborative oversight, tailoring compliance to market needs, creating safe spaces for experimentation, and even using AI itself to fortify risk management, offering a clear roadmap for turning regulatory hurdles into competitive advantages.

Given McKinsey’s finding that over half of organizations using AI report negative consequences, how can leaders build a collaborative governance model? Could you walk us through the key roles and processes involved in establishing an effective cross-functional AI governance forum?

That statistic from McKinsey is a sobering, but not surprising, reality. It highlights that you can’t simply unleash AI and hope for the best. The key is to treat governance as a team sport from day one. At RAC, they established an AI governance forum, which is a model I strongly endorse. It’s not about IT or data teams working in a silo; you must bring together specialists from across the business—think information security, legal, compliance, and representatives from the various lines of business that will actually use the technology. The process begins with a shared mindset that you have to embrace AI’s potential, not be a business that’s scared of it. The first thought for any leader considering a new technology should be to involve these governance partners immediately. It’s about feeling your way through the challenge together, having the right relationships, and never trying to sweep potential issues under the carpet. This ensures your change processes are carefully monitored so you don’t inadvertently do things wrong.

Charlotte Bemand suggests governance should evolve with organizational maturity. How do you advise leaders to balance strict compliance in sensitive markets with the need for agility in others? What specific frameworks or check-ins can ensure guardrails are adjusted appropriately over time?

This is a crucial point; governance is not a one-size-fits-all, static rulebook. It’s a dynamic framework that must breathe with the organization. The best leaders recognize this and create a tight match between their guardrails and their company’s maturity. For instance, in a business that serves highly regulated, super-sensitive end markets, the degree of compliance activity is naturally going to be much higher and more rigid. But that same company might also have customers in other markets who expect rapid innovation and agility. The art is in balancing both. You can achieve this by creating tiered governance frameworks. This might involve mandatory, stringent reviews for high-risk applications in sensitive sectors, while allowing for more streamlined, checklist-based self-assessments for lower-risk experiments in more agile areas. Regular check-ins, perhaps quarterly reviews by the AI governance council, are essential to ensure these guardrails are still fit for purpose. As the organization learns and the technology matures, you can adjust the framework, perhaps loosening some restrictions or tightening others based on real-world outcomes.
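To make the idea of a tiered framework concrete, here is a minimal sketch, in Python, of how a proposed AI use case might be routed to a review path. The tiers, fields, and review steps are illustrative assumptions rather than anything the interviewees prescribe; the value of writing the routing down is that it gives the quarterly check-ins something specific to loosen or tighten.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers and review paths; names and thresholds are
# illustrative assumptions, not a prescribed framework.
class RiskTier(Enum):
    HIGH = "high"      # regulated, sensitive end markets
    MEDIUM = "medium"
    LOW = "low"        # internal experiments on non-sensitive data

@dataclass
class AIUseCase:
    name: str
    handles_personal_data: bool
    customer_facing: bool
    regulated_market: bool

def classify(use_case: AIUseCase) -> RiskTier:
    """Route a proposed AI use case to a governance tier."""
    if use_case.regulated_market or use_case.handles_personal_data:
        return RiskTier.HIGH
    if use_case.customer_facing:
        return RiskTier.MEDIUM
    return RiskTier.LOW

REVIEW_PROCESS = {
    RiskTier.HIGH: "Mandatory review by the AI governance forum before deployment",
    RiskTier.MEDIUM: "Checklist self-assessment plus legal/compliance sign-off",
    RiskTier.LOW: "Checklist self-assessment, logged for the quarterly council review",
}

if __name__ == "__main__":
    pilot = AIUseCase("claims-triage-pilot", handles_personal_data=True,
                      customer_facing=True, regulated_market=True)
    tier = classify(pilot)
    print(f"{pilot.name}: {tier.value} -> {REVIEW_PROCESS[tier]}")
```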

Shruti Sharma reframes governance as establishing “clarity” through boundaries rather than bureaucracy. What are the first practical steps to creating safe sandbox environments for AI exploration? Please detail how to define the right remit and access controls to encourage innovation without introducing organizational risk.

I love that reframing of governance as “clarity.” It shifts the perception from a bureaucratic burden to an enabling structure. The first practical step to creating a safe sandbox is to stop thinking about a 100-page rulebook. Instead, focus on defining clear boundaries. Start by establishing personas and role-based access. Not everyone needs access to all data or all tools. Define who your “explorers” are and what specific, non-sensitive data sets they can use. The second step is to clearly define the remit of the sandbox. What types of problems are they allowed to solve? What tools can they use? The goal is to give people the freedom to experiment within a space that is safe for the organization. This clarity is empowering; it means your teams aren’t paralyzed by uncertainty. They can explore and innovate confidently, knowing they are operating within approved, low-risk parameters, which prevents the rise of shadow AI and other blind spots that can introduce serious organizational risk down the line.
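As a rough illustration of those boundaries, the following Python sketch encodes hypothetical personas, the datasets and tools each may touch, and a default-deny check. Every name in it (personas, datasets, tools) is an assumption for illustration, not a recommended policy; the point is that a short, explicit allowlist is the "clarity" being described, and anything outside it is denied by default.

```python
# A minimal sketch of persona-based sandbox boundaries: which roles may use
# which tools and datasets. All names below are hypothetical examples.
SANDBOX_POLICY = {
    "explorer": {
        "datasets": {"synthetic_claims", "public_benchmarks"},
        "tools": {"approved_llm_playground", "notebook_env"},
        "remit": "prototype ideas on non-sensitive data only",
    },
    "data_scientist": {
        "datasets": {"synthetic_claims", "anonymised_ops_logs"},
        "tools": {"approved_llm_playground", "notebook_env", "model_registry"},
        "remit": "build candidate models for governance-forum review",
    },
}

def is_allowed(persona: str, dataset: str, tool: str) -> bool:
    """Check a requested experiment against the sandbox boundaries."""
    policy = SANDBOX_POLICY.get(persona)
    if policy is None:
        return False  # unknown persona: deny by default
    return dataset in policy["datasets"] and tool in policy["tools"]

if __name__ == "__main__":
    print(is_allowed("explorer", "synthetic_claims", "notebook_env"))  # True
    print(is_allowed("explorer", "customer_pii", "notebook_env"))      # False: outside the boundary
```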

The article notes Paul Neville’s view that risk and opportunity are a continuum. How can a leader effectively communicate this vision to a team that is, as he described, so “focused on risk that they couldn’t move forward”? What mechanisms, like an AI advisory council, work best?

It was quite sad to hear that story of a leader so fixated on today’s problems that they couldn’t see a different future. This paralysis is a real danger. The most effective way to communicate that risk and opportunity are two sides of the same coin is through visionary leadership. You have to paint a vivid picture of a better tomorrow, one where automation and AI allow you to do things in a completely new way. You acknowledge the risks head-on but frame them as manageable challenges on the path to a significant opportunity. An AI advisory council, like the one established at The Pensions Regulator, is a fantastic mechanism for this. Crucially, their council is chaired by the COO, not the head of technology, which gives it independence and a business-wide perspective. By bringing in both internal and external members, the council can “kick the tires” on new initiatives, provide an ethical viewpoint, and challenge the team to consider opportunities, not just governance. This creates a forum where the conversation naturally shifts from “What could go wrong?” to “How do we manage the risks to achieve this great outcome?”

The Heico Companies achieved a 60% reduction in compensation costs using AI for risk management. For a company inspired by this, what are the first three steps to identify and deploy an AI tool to improve its own governance and compliance functions effectively?

That 60% reduction is a powerful testament to how AI can supercharge governance rather than just being a subject of it. For a company wanting to replicate that success, the first step is not to chase the technology. Instead, as Mike Bray at RS advises, start with a crystal-clear understanding of the problem or opportunity. What is your biggest compliance headache? Is it managing incident reports, keeping up with a flood of new regulations, or assessing risk across global operations? Only when you’ve defined the problem can you move to step two: scanning the market for the right AI solution. Heico, for example, faced an ever-growing raft of guidelines and needed to simplify risk management. They identified an AI tool specifically designed to extract and summarize details from incident reports. The third step is implementation and measurement to build credibility. Deploy the tool, track the outcomes—like the reduction in workplace incidents at Heico—and present that hard data to leadership. When they see tangible results, like a massive drop in compensation costs, it completely changes the conversation. They see the AI not as a risk to be managed, but as a critical tool that gets them to what’s most important.
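Heico’s actual tool isn’t described beyond extracting and summarizing details from incident reports, so purely as an illustration of that kind of step, here is a minimal sketch assuming an OpenAI-style chat-completions client. The model name, prompt wording, and output fields are all assumptions, not the deployed system.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fields to pull out of each free-text incident report. The field list,
# prompt, and model name are illustrative assumptions only.
FIELDS = ["date", "location", "injury_type", "root_cause", "corrective_action"]

def summarize_incident(report_text: str) -> dict:
    """Extract structured details from a free-text incident report."""
    prompt = (
        "Extract the following fields from this workplace incident report "
        f"and return them as JSON with keys {FIELDS}. Use null for anything "
        f"not stated.\n\nReport:\n{report_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example = ("On 3 March a technician slipped on an oiled walkway in Plant 2, "
               "spraining an ankle. The walkway has since been resurfaced.")
    print(summarize_incident(example))
```

Tracking the outputs of a step like this over time (incidents logged, claims closed, costs avoided) is what produces the hard data the interview describes presenting back to leadership.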

What is your forecast for the evolution of AI governance over the next five years?

I believe we’re moving from a phase of reactive caution to one of proactive, integrated governance. Right now, many organizations are still grappling with the basics, driven by the fear of looming regulations like the EU’s AI Act. However, over the next five years, I forecast three major shifts. First, governance will become deeply embedded in the AI development lifecycle, not an after-the-fact checkpoint. Second, we will see a surge in the use of “AI for governance”—tools that automate compliance, monitor model behavior, and manage risk in real time, much like The Heico Companies did. Finally, the conversation will elevate from pure risk mitigation to strategic enablement. Mature organizations will have frameworks that are so well-honed and agile that their governance model becomes a genuine competitive advantage, allowing them to innovate faster and more safely than their peers who are still stuck seeing compliance as a barrier. The future of AI governance isn’t just about putting up fences; it’s about building smarter, more responsive, and ultimately more innovative organizations.
