Vernon Yai is a sentinel in the digital age, standing at the high-stakes intersection of global finance and sophisticated data governance. As a premier expert in privacy protection and risk management, he has spent his career building the defensive perimeters that keep our financial systems resilient against ever-evolving threats. With the recent explosion of generative AI, Yai has become a pivotal voice for institutions like JPMorgan Chase and Goldman Sachs as they walk the fine line between technological breakthrough and systemic vulnerability. In this conversation, we explore the strategic shift from experimental AI projects to enterprise-wide integration and the rigorous security frameworks required to manage this “superpower” responsibly.
The following discussion examines the rapid acceleration of AI adoption in the banking sector, moving from routine automation to the deployment of autonomous digital employees. We delve into the complexities of “frontier models” that can identify thousands of software vulnerabilities in seconds, the massive capital investments driving cloud migration, and the collaborative defense programs designed to turn AI from a threat into a formidable security asset.
New frontier AI models can now discover thousands of software vulnerabilities across major operating systems. How are you integrating these tools into defensive security, and what specific protocols ensure these capabilities don’t inadvertently create new risks during testing? Please elaborate with step-by-step details on your testing process.
The introduction of frontier models like Claude Mythos Preview has fundamentally shifted the landscape, as these systems can identify thousands of serious vulnerabilities across browsers and operating systems that human eyes might miss for years. We integrate these tools through a strictly tiered “Glasswing” protocol, which begins by isolating the AI in a sandboxed environment where it can poke and prod at a digital twin of our infrastructure without touching live assets. The first step involves a controlled “red-teaming” exercise where the model attempts to exploit known and unknown gaps; we then move to a secondary verification stage where human experts audit the AI’s findings to ensure they aren’t “hallucinating” risks. Finally, we implement a closed-loop patching cycle where the AI-discovered vulnerabilities are remediated in real-time before the model is allowed to scan the next segment of the network. It feels like a high-stakes chess match where the board is constantly shifting, but by pulling this “superpower” into our own environment, we transform a potential weapon into a primary shield.
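The tiered loop described above can be sketched in a few lines of code. This is a minimal illustration, not Yai's actual implementation: the function name and the scan/audit/patch callbacks are hypothetical stand-ins for the sandboxed red-teaming, human verification, and closed-loop patching stages he describes.

```python
def glasswing_cycle(segments, scan, audit, patch):
    """Hypothetical sketch of a tiered scan-audit-patch protocol.

    scan  -- tier 1: AI red-teaming of a sandboxed digital twin of one segment
    audit -- tier 2: human verification that a reported finding is real
    patch -- tier 3: remediation, applied before the next segment is scanned
    """
    verified_findings = []
    for segment in segments:
        findings = scan(segment)                  # model probes one isolated network segment
        real = [f for f in findings if audit(f)]  # experts filter out hallucinated risks
        for finding in real:
            patch(finding)                        # closed loop: remediate before moving on
        verified_findings.extend(real)
    return verified_findings
```

The key property is ordering: remediation of a segment completes before the model is allowed to touch the next one, so the scanner never operates against a network whose known gaps are still open.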
Financial institutions are increasing AI budgets by over 30% to move toward enterprise-wide integration. What does a successful migration from “small wins” to a unified AI operating model look like, and what metrics demonstrate that this shift is actually driving productivity? Provide a few specific examples.
We are seeing a massive surge in capital commitment, with projected AI spend for banks reaching an average of $177 million in the first quarter of 2026, representing a 33% increase from the previous quarter alone. A successful migration looks like the “One Goldman Sachs 3.0” model, where the technology isn’t just a siloed tool for one department but a foundational layer of the entire firm’s operating system. We measure success through concrete metrics such as the reduction in time-to-onboard for new clients and the acceleration of code deployment cycles for our internal developers. When an institution reports that 80% of its executives are now embedding cyber and data security directly into their AI budgets, it’s a clear signal that the industry is moving past the “novelty” phase and into a period of disciplined, scaled growth. You can feel the energy in the room during earnings calls; there is a palpable sense that we are no longer just talking about “what if” but are now executing on “what is.”
Some firms are deploying hundreds of AI products, including agentic digital employees that work alongside human staff. How do you manage the governance of these autonomous agents, and what step-by-step processes do you follow to ensure they improve service without compromising compliance? Share an anecdote regarding their implementation.
Managing a fleet of agentic digital employees—like the more than 200 AI products currently in use at BNY—requires a governance framework that treats these agents with the same level of accountability as a human hire. Our process starts with “Role Definition,” where we strictly limit the agent’s permissions to specific datasets, followed by “Continuous Shadowing,” where every decision the AI makes is logged and periodically audited for compliance with financial regulations. We then move to “Intervention Logic,” which creates a hard stop if the AI’s actions deviate from established risk parameters, requiring a human supervisor to sign off before the agent can proceed. I remember a recent implementation where a digital employee was tasked with streamlining a manually intensive process; the air in the operations center changed when staff realized the AI wasn’t there to replace them, but to take over the “grunt work,” allowing them to focus on complex client relationships. It is this synergy, where the machine handles the end-to-end process improvement while the human provides the ethical and emotional oversight, that defines the modern banking workflow.
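The three governance stages above map naturally onto a small amount of control logic. The sketch below is illustrative only, with assumed class and method names; it shows how Role Definition (least-privilege dataset access), Continuous Shadowing (a decision log), and Intervention Logic (a hard stop pending human sign-off) might compose around a single agent action.

```python
class GovernedAgent:
    """Hypothetical sketch of the governance loop; all names are illustrative."""

    def __init__(self, role, allowed_datasets, risk_limit):
        self.role = role
        self.allowed_datasets = set(allowed_datasets)  # Role Definition: least-privilege access
        self.risk_limit = risk_limit
        self.decision_log = []                         # Continuous Shadowing: every decision logged

    def act(self, dataset, action, risk_score, human_signoff=False):
        if dataset not in self.allowed_datasets:
            self.decision_log.append(("denied", dataset, action))
            return "denied"                            # outside the agent's defined role
        if risk_score > self.risk_limit and not human_signoff:
            self.decision_log.append(("escalated", dataset, action))
            return "escalated"                         # Intervention Logic: hard stop for a human
        self.decision_log.append(("executed", dataset, action))
        return "executed"
```

Note that every branch writes to the log, including denials and escalations, so compliance auditors can reconstruct not just what the agent did but what it attempted.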
Scaling AI solutions requires heavy investment in cloud migration and extreme data accuracy. How are you aligning your infrastructure to support these demands, and what specific hurdles must be overcome to ensure your data is clean enough for high-stakes deployment? Describe the technical milestones you prioritize.
The transition to high-performance AI is impossible without a massive pivot toward cloud-native infrastructure, which is why we see firms doubling down on cloud migration as a top-tier priority. The biggest hurdle is the “data swamp”—the decades of legacy information that is often fragmented or improperly labeled—which we must cleanse using automated data-lineage tools to ensure absolute accuracy for high-stakes trading or risk models. We prioritize three technical milestones: first, the establishment of a unified data lake that breaks down departmental silos; second, the implementation of real-time data validation engines that catch errors at the point of entry; and third, the achievement of “zero-trust” architecture within our cloud environments. It is a grueling, invisible labor, but without this foundation of clean data, even the most sophisticated frontier model is essentially flying blind. We are methodically building the “pipes” of the organization so that when we turn on the AI “faucet,” the information coming out is pure, actionable, and secure.
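The second milestone, catching errors at the point of entry, is the most amenable to a brief illustration. The rule set below is a hypothetical sketch: the field names and checks are assumptions about what a trade record might require, not a description of any firm's actual schema.

```python
import datetime

# Illustrative validation rules for an incoming trade record; field names are assumed.
VALIDATION_RULES = {
    "trade_id": lambda v: isinstance(v, str) and v.strip() != "",
    "notional": lambda v: isinstance(v, (int, float)) and v > 0,
    "trade_date": lambda v: isinstance(v, datetime.date),
}

def validate_at_entry(record, rules=VALIDATION_RULES):
    """Check a record as it enters the data lake; return (ok, failed_fields)."""
    failed = [field for field, check in rules.items()
              if field not in record or not check(record[field])]
    return (not failed, failed)
```

Rejecting a record here, before it lands in the unified data lake, is what keeps the "data swamp" from re-forming: a risk model downstream never has to guess whether a notional amount is negative or a date is a free-text string.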
Large banks are joining collaborative defense programs and sharing data with AI providers to harden their systems. What are the practical trade-offs of this transparency, and how do you maintain a competitive edge while sharing security intelligence with industry peers and government agencies? Explain the long-term impact.
There is a fascinating tension between the need for secrecy and the necessity of collaborative defense, such as the industry’s participation in OpenAI’s Trusted Access for Cyber program. The trade-off is clear: by sharing our threat intelligence with peers and the government, we lose some level of proprietary “stealth,” but we gain a collective immunity that no single bank could achieve on its own. We maintain our competitive edge by focusing our proprietary efforts on how we use the technology to serve clients—the “secret sauce” of our unique algorithms and customer service—while treating the underlying security layer as a public utility that everyone must contribute to. Long-term, this transparency will lead to a more stable global financial system where a “win” for one bank’s security is a win for the entire ecosystem. As we’ve seen with initiatives like Project Glasswing, the goal is to ensure that while AI makes the threat landscape more complex, our collective defensive capabilities evolve at an even faster rate.
What is your forecast for AI in the banking industry?
I believe we are entering an era of “Superpowered Banking,” where the distinction between a financial firm and a technology firm will vanish entirely as AI becomes the primary driver of both defensive resilience and revenue generation. Over the next few years, we will see the total integration of agentic AI into every layer of the bank, from the front office interacting with clients to the back office neutralizing cyber threats in milliseconds. However, the true winners will not be the firms with the largest budgets, but those that master the “responsible growth” aspect—aligning their risk, compliance, and business functions to ensure the AI superpower is always used for good. We will see a shift where cybersecurity is no longer a cost center, but a competitive advantage that builds the deep, unshakable trust clients require in a digital-first world. The technology is evolving in ways none of us can entirely predict, and while it certainly makes the environment “harder” as Jamie Dimon noted, it also gives us the tools to be better, faster, and more secure than ever before.


