Vernon Yai is a preeminent figure in the landscape of data governance and hardware security, known for his deep technical expertise in protecting complex silicon architectures. With an illustrious career focused on risk management and the creation of advanced detection protocols, he provides critical oversight for companies navigating the transition to next-generation computing. As the industry moves away from traditional monolithic designs toward modular chiplet architectures, Vernon’s insights become essential for understanding how to secure the backbone of AI data centers and autonomous vehicles.
In this conversation, we explore the shifting paradigms of semiconductor design, focusing on the security implications of global supply chains and the move toward multi-die systems. Vernon details the challenges of verifying silicon integrity in overseas fabrication, the necessity of rigorous identity and authentication across modular components, and the specific failover mechanisms required to maintain safety in mission-critical applications like self-driving cars.
Traditional monolithic chip designs are being replaced by modular chiplets to increase flexibility in AI and automotive applications. How does this shift specifically expand the attack surface, and what unique risks are introduced when stitching together silicon components from a variety of global vendors?
The transition to chiplets fundamentally changes the security perimeter because we are moving from a single-vendor, “closed-loop” design to a modular, multi-vendor environment. In a traditional monolithic setup, one company controls the fabrication of the entire circuit on a single die, but with chiplets, you are stitching together components that may have been designed, validated, and manufactured in entirely different parts of the world. This complexity is often not taken seriously enough, yet it introduces significant risks such as hardware Trojans or backdoors hidden within a single module. If even one chiplet in a multi-die system is compromised, it can sabotage the integrity of the whole system, allowing a malicious actor to snoop on data or execute man-in-the-middle attacks across what was supposed to be a secure circuit.
Many companies rely on overseas fabrication where verifying the integrity of the silicon “baking” process is notoriously difficult. What concrete steps can organizations take to establish provenance for individual chiplets, and how can they move away from blindly trusting bulk orders to ensure no hardware Trojans are present?
The reality is that while you can design a perfect chip on a computer in the US, you often have no direct visibility into how that design is actually “baked” into silicon at an overseas foundry. To counter the habit of blindly trusting bulk orders, organizations must implement rigorous traceability protocols that treat every chiplet as a potentially untrusted entity until proven otherwise. This involves moving beyond paper certifications to physical audits; for instance, procurement teams should literally send experts to the factories to review claims and inspect the manufacturing environment firsthand. By taking more of the design and development in-house and reducing the number of third-party suppliers to a bare minimum, companies can reclaim control over the provenance of their hardware.
Security in multi-die systems requires a different approach than single-circuit designs, particularly regarding identity and authentication. How should secure-boot mechanisms be implemented across multiple chiplets, and what are the primary technical challenges in maintaining a chain of trust when components originate from different distributors?
In a multi-die architecture, the concept of a “chain of trust” must be much more granular because you can no longer assume that the proximity of components equals security. Every single chiplet needs its own unique identity and a robust authentication process to ensure it hasn’t been swapped or tampered with during the distribution process. The technical challenge lies in the secure-boot sequence; the system must verify each component individually before the entire circuit is allowed to function, preventing a compromised chiplet from hijacking the boot process. We have to move away from the idea of inherent trust and instead focus on a zero-trust hardware model where identity is verified at every interface between the stitched components.
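The zero-trust boot sequence Vernon describes can be illustrated with a small sketch: each chiplet presents an identity and a keyed measurement of its firmware, and the coordinator refuses to bring the system up unless every component verifies individually. The chiplet names, provisioned keys, and HMAC-based attestation below are illustrative assumptions, not a real secure-boot API.

```python
import hmac
import hashlib

# Hypothetical per-chiplet secrets provisioned at manufacturing time.
PROVISIONED_KEYS = {
    "cpu-die": b"key-cpu",
    "ai-accel": b"key-ai",
    "io-die": b"key-io",
}

def attest(chiplet_id: str, firmware: bytes, tag: bytes) -> bool:
    """Verify a chiplet's reported firmware measurement against its key."""
    key = PROVISIONED_KEYS.get(chiplet_id)
    if key is None:
        return False  # unknown identity: reject outright
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def secure_boot(reports) -> bool:
    """Zero-trust rule: every chiplet must pass before any is enabled."""
    return all(attest(cid, fw, tag) for cid, fw, tag in reports)

# A single tampered or swapped chiplet blocks the entire boot.
fw = b"firmware-v1"
good = [(cid, fw, hmac.new(key, fw, hashlib.sha256).digest())
        for cid, key in PROVISIONED_KEYS.items()]
tampered = good[:2] + [("io-die", b"evil-firmware", good[2][2])]
```

The key design point, in line with the answer above, is that verification happens at every interface: there is no "trusted neighborhood" of chiplets, only individually attested identities.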
In autonomous driving, a single defective chiplet can trigger massive safety failures or costly vehicle recalls. How do independent hardware schedulers and failover mechanisms protect against malicious code injection, and what specific protocols ensure the last cipher state is maintained when one chiplet takes over for a compromised one?
In high-stakes environments like Level 3 or Level 4 autonomous driving, we use hardware schedulers that operate entirely independently of software control to create “whitelisted” processes. This is a critical defense because it makes it incredibly difficult for an attacker to inject malicious code, as the hardware itself monitors and restricts what actions can be taken regardless of software vulnerabilities. If a chiplet fails or detects an attack, failover mechanisms allow a secondary chiplet to take over the workload instantaneously. For this to be secure, the “last cipher,” or the exact state of encrypted data, must be maintained across all capable chiplets, ensuring that there is no window of vulnerability or data loss during the transition.
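The failover requirement Vernon outlines can be sketched in miniature: two redundant chiplets mirror a cipher counter, so that when the primary is flagged as compromised, the standby resumes the keystream at exactly the next block, with no gap and no keystream reuse. The counter-based keystream and the `Chiplet` class are illustrative assumptions, not a real automotive protocol.

```python
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    """Toy counter-mode keystream (illustration only, not real crypto)."""
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

class Chiplet:
    def __init__(self, name: str, key: bytes):
        self.name, self.key = name, key
        self.counter = 0  # the replicated "last cipher" state

    def encrypt_block(self, block: bytes) -> bytes:
        ks = keystream_block(self.key, self.counter)
        self.counter += 1
        return bytes(a ^ b for a, b in zip(block, ks))

def replicate_state(src: Chiplet, dst: Chiplet) -> None:
    """Mirror cipher state to the standby (done after every block in practice)."""
    dst.counter = src.counter

key = b"shared-provisioned-key"
primary, standby = Chiplet("primary", key), Chiplet("standby", key)

c1 = primary.encrypt_block(b"sensor-frame-001")
replicate_state(primary, standby)   # state mirrored before any failure
# primary now flagged as compromised; standby takes over seamlessly
c2 = standby.encrypt_block(b"sensor-frame-002")
```

Because the standby picks up the exact counter value, the combined output stream is indistinguishable from one produced by a single healthy chiplet, which is precisely the "no window of vulnerability" property described above.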
While industry consortiums are developing new standards, the rapid evolution of AI threats often outpaces regulation. How can developers balance the demand for low-cost production with the need for rigorous factory audits, and what metrics should be used to evaluate the security of a multi-vendor supply chain?
There is always a tension between the desire to make chips as cheap as possible and the necessity of securing the supply chain, but in sectors like automotive, the cost of a recall far outweighs the cost of security. Developers must adhere to international standards like ISO 26262 and ISO 21434, which mandate that security is addressed at every link in the chain, but they must also go further by implementing their own audit mechanisms. We should evaluate security based on metrics of traceability and the depth of vendor transparency, rather than just performance or price. Because threats evolve alongside AI, we cannot rely on static standards; instead, we need a dynamic approach where health monitoring and power management are treated as core safety functions that are audited regularly.
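One way to make the metrics Vernon mentions concrete is a weighted vendor score in which traceability and transparency dominate price. The weights, field names, and scores below are illustrative assumptions for the sake of the sketch, not an industry-standard formula.

```python
# Hypothetical weighting: security dimensions outweigh price,
# reflecting the point that a recall costs far more than an audit.
WEIGHTS = {
    "traceability": 0.4,    # can every chiplet be traced to its foundry run?
    "transparency": 0.3,    # depth of vendor disclosure and audit access
    "audit_recency": 0.2,   # how recently a physical audit was performed
    "price_score": 0.1,     # cheapness, deliberately the smallest factor
}

def supply_chain_score(vendor: dict) -> float:
    """Weighted 0-1 score for ranking chiplet vendors."""
    return sum(WEIGHTS[k] * vendor[k] for k in WEIGHTS)

vendors = [
    {"name": "A", "traceability": 0.9, "transparency": 0.8,
     "audit_recency": 1.0, "price_score": 0.3},
    {"name": "B", "traceability": 0.4, "transparency": 0.3,
     "audit_recency": 0.2, "price_score": 1.0},
]
ranked = sorted(vendors, key=supply_chain_score, reverse=True)
```

Under this weighting, the cheap but opaque vendor B ranks below the more expensive, fully auditable vendor A, capturing the trade-off described above.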
What is your forecast for chiplet security?
I anticipate that as the AI era accelerates, we will see a massive shift toward “sovereign silicon” where major tech players bring almost all design and validation in-house to mitigate the risks of global distribution. The industry will likely move toward a standardized “security-by-design” framework for chiplets, led by organizations like the UCIe Consortium, making independent hardware monitoring a mandatory feature for any mission-critical system. Ultimately, the companies that succeed will be those that stop viewing hardware security as a checkbox and start treating the silicon supply chain with the same level of scrutiny we currently apply to high-level software ecosystems.