When Vendors Define Responsible AI: Risks and Challenges

Oct 17, 2025
Interview

Today, we’re thrilled to sit down with Vernon Yai, a renowned expert in data protection and privacy governance. With years of experience in risk management and a passion for developing cutting-edge techniques to protect sensitive information, Vernon has become a trusted voice in the realm of AI ethics and data security. In this conversation, we’ll explore the complexities of responsible AI, the roles different teams play in AI adoption, the influence of vendors on governance, and the ongoing challenges of ensuring AI systems align with organizational values over time.

How do you define “responsible AI” within the context of data protection and privacy governance?

To me, responsible AI is about ensuring that systems are designed and used in ways that prioritize transparency, accountability, and the protection of individual rights. It’s not just about building a tool that works well today; it’s about anticipating how it might impact privacy and security tomorrow. In my field, this means embedding safeguards into AI systems to prevent misuse of personal data and making sure there are clear mechanisms for oversight. It’s a commitment to balancing innovation with ethical boundaries, and that requires constant vigilance.

Why do you think the concept of responsible AI can vary depending on whether it’s defined by a vendor or by an organization’s internal teams?

Vendors often frame responsible AI through the lens of their product features—think built-in filters or settings that seem to address ethical concerns. Their definition is naturally shaped by business goals and market demands. On the other hand, internal teams, especially those focused on data governance or HR, bring a perspective rooted in the organization’s specific values and long-term risks. The disconnect happens when a vendor’s version of “responsible” doesn’t fully align with an organization’s unique needs or when internal teams lack the authority to challenge that definition.

Can you explain why safety and responsibility in AI are not just one-time fixes but ongoing outcomes that need continuous effort?

Absolutely. AI systems aren’t static; they evolve based on new data, user interactions, and updates from vendors. What looks safe at launch can become problematic as the system learns or as its context changes. Responsibility is an outcome of active governance—regular audits, monitoring for unintended behaviors, and having the power to intervene when things go off track. Without that ongoing effort, you’re just crossing your fingers and hoping the system stays aligned with your principles.

What are some of the biggest challenges you’ve encountered in maintaining responsible AI over the long term?

One major challenge is drift—when an AI system starts behaving in ways that no longer match the original intent or values. This can happen subtly, through data shifts or vendor updates, and it’s tough to catch without robust monitoring. Another issue is resource allocation; many organizations don’t budget for continuous oversight, so governance gets sidelined. Finally, there’s often a lack of alignment between teams like HR and data leaders, which creates blind spots in how AI impacts both people and systems over time.

In your experience, who typically takes the lead in deciding whether to adopt AI tools within an organization?

It varies, but often HR or department heads focused on employee experience are at the forefront, especially for tools tied to workforce management. They’re evaluating whether the tool fits culturally and improves workflows. However, IT or data leaders should be equally involved, particularly when it comes to security and compliance. When that balance isn’t there, decisions can skew too heavily toward usability and miss critical data risks.

How do you see HR and data leaders collaborating—or failing to collaborate—on AI adoption decisions?

When collaboration works, it’s powerful. HR brings insight into employee well-being and cultural fit, while data leaders focus on privacy, security, and long-term system behavior. But too often, these teams operate in silos due to different priorities, budgets, or even just a lack of shared language. I’ve seen cases where HR adopts a tool without fully consulting data teams, only to later face issues like data leaks or compliance violations that could’ve been avoided with early partnership.

What are some common barriers that prevent HR and data teams from working together effectively on AI governance?

A big barrier is structural—different reporting lines, separate budgets, and distinct goals. HR might be incentivized to improve engagement, while data teams are focused on risk mitigation, and those objectives can clash. There’s also often a knowledge gap; HR leaders might not feel confident asking technical questions about data risks, and data leaders might not fully grasp cultural nuances. Without intentional efforts to bridge those gaps, like joint decision-making frameworks, collaboration just doesn’t happen.

How can organizations ensure that both cultural fit and data security are prioritized when selecting an AI system?

It starts with bringing the right people to the table from day one. HR and data leaders need to co-evaluate any AI tool, asking both “Will this support our people?” and “Does this protect our data?” Organizations should also establish clear criteria for adoption that cover both angles—cultural alignment and security standards. Finally, having a governance framework that includes veto power for data risks, not just cultural misfits, ensures neither aspect gets overlooked.

How much influence do vendors have in shaping what responsible AI looks like within organizations today?

Vendors can have a huge influence, especially if organizations lack strong internal governance. Many companies rely on vendor-provided features or assurances as their definition of “responsible,” without questioning whether those align with their own values. This is particularly true for smaller organizations or those without dedicated data expertise. The risk is that the vendor’s interpretation becomes the default, and over time, that can erode an organization’s control over its own ethical standards.

What are the potential downsides of allowing vendors to set the rules for how an AI tool operates over time?

The biggest downside is loss of autonomy. If a vendor defines what’s responsible, they can update the tool in ways that prioritize their interests—say, pushing new features or data collection practices—over your organization’s needs. There’s also the risk of drift, where the tool’s behavior shifts without your input or even knowledge. Without independent oversight, you’re essentially outsourcing your ethics, and that can lead to reputational damage or legal issues if things go wrong.

How can organizations retain control over AI governance instead of outsourcing it to vendors?

It’s about building internal muscle for oversight. Organizations need clear policies on how AI tools are monitored and updated, with explicit boundaries on vendor-driven changes. Regular audits—both internal and external—help ensure the tool stays aligned with your values. Most importantly, there has to be authority to say “no” to vendor decisions, whether that’s through contract terms or exit strategies. Governance isn’t just a checkbox; it’s a commitment to owning the rules.

Can you share a time when you had to challenge a vendor’s definition of responsible AI behavior, and how that played out?

I once worked with an organization adopting an AI tool for employee feedback, and the vendor touted built-in features as proof of ethical design. But when we dug deeper, we found the tool was collecting more data than necessary and lacked transparency on how it was used. We pushed back, demanding stricter data minimization and clearer user consent processes. It wasn’t easy—the vendor initially resisted—but with a strong governance framework and legal backing, we got them to adjust. It reinforced for me that organizations must be proactive in setting their own standards.

What does the concept of “drift” in AI mean to you, and why is it such a critical issue?

Drift in AI refers to when a system starts to behave in ways that deviate from its intended purpose or ethical guidelines, often due to changes in data, usage, or vendor updates. It’s critical because it can happen gradually, without anyone noticing, until there’s a major issue—like biased outputs or privacy breaches. In data protection, drift can mean a tool that once safeguarded information starts exposing vulnerabilities, and that’s a nightmare for trust and compliance.

What steps do you take to monitor and address drift in AI systems within an organization?

Monitoring starts with setting baseline expectations for how the AI should behave and then using automated tools and manual reviews to track deviations. We look at outputs, user feedback, and data patterns regularly. If drift is detected, we investigate the root cause—whether it’s new data or a vendor update—and decide whether to recalibrate the system or, in extreme cases, shut it down. Having predefined triggers for intervention is key; you can’t wait for a crisis to act.
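As a concrete illustration of what "baseline expectations plus automated tracking" can look like in practice, here is a minimal sketch in Python using NumPy. It compares a current sample of model scores against a stored baseline with the Population Stability Index, one of several common drift metrics; the function names, the 0.25 alert threshold, and the synthetic data are illustrative assumptions, not part of any particular vendor's tooling.

```python
# Minimal drift-monitoring sketch: compare current model outputs against a
# stored baseline using the Population Stability Index (PSI). Names,
# thresholds, and data below are illustrative assumptions.
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """Compare two score distributions; higher PSI means more drift."""
    # Fix bin edges from the baseline so both samples use the same scale.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)

    # Convert counts to proportions, with a small floor to avoid log(0).
    expected_pct = np.clip(expected / expected.sum(), 1e-6, None)
    actual_pct = np.clip(actual / actual.sum(), 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Predefined trigger: a common rule of thumb treats PSI above ~0.25 as
# significant drift worth investigating.
ALERT_THRESHOLD = 0.25

def check_for_drift(baseline_scores: np.ndarray,
                    current_scores: np.ndarray) -> None:
    psi = population_stability_index(baseline_scores, current_scores)
    if psi > ALERT_THRESHOLD:
        # In practice this would open a ticket or pause the integration,
        # per the organization's predefined intervention policy.
        print(f"Drift alert: PSI={psi:.3f} exceeds {ALERT_THRESHOLD}")
    else:
        print(f"Within baseline expectations: PSI={psi:.3f}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 5_000)  # scores captured at launch
    current = rng.normal(0.4, 1.2, 5_000)   # scores after a vendor update
    check_for_drift(baseline, current)
```

In a real deployment, the same comparison would run on a schedule against production outputs and user feedback signals, with the threshold and response steps agreed on by the governance team in advance rather than improvised after an incident.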

What is your forecast for the future of responsible AI governance in organizations over the next decade?

I think we’ll see a growing recognition that responsible AI isn’t a product feature but a strategic priority. Organizations will invest more in cross-functional teams—HR, data, and legal working together—to build robust governance frameworks. There’ll likely be stricter regulations globally, pushing companies to prioritize transparency and accountability. My hope is that we move away from vendor-driven definitions toward a model where organizations truly own their AI ethics, backed by technology and policies that make drift and misuse far less likely.
