In the rapidly evolving world of artificial intelligence, the clash between state-level regulation and federal oversight is reaching a boiling point. New York’s recent passage of the Responsible AI Safety and Education (RAISE) Act places it directly at the center of this debate. To help us navigate this complex landscape, we’re speaking with Vernon Yai, a leading expert in data protection and technology governance. He’ll shed light on the impending legal battles facing New York, the immense operational hurdles companies like OpenAI must overcome to comply with new mandates, and the fundamental tension between technological innovation and public safety.
Governor Hochul signed the RAISE Act despite a recent executive order from President Trump. Given the creation of an AI Litigation Task Force, what specific legal challenges do you anticipate for New York, and what steps might the state take to defend its new law?
The primary legal challenge will undoubtedly be a federal lawsuit arguing that the RAISE Act unconstitutionally regulates interstate commerce. The President’s executive order explicitly created the AI Litigation Task Force for this very purpose, and it has already signaled its intent by naming Colorado’s law as problematic. I expect the task force to argue that a patchwork of state laws creates an unmanageable burden for companies that operate nationally. To defend itself, New York will frame this not as an economic regulation, but as a fundamental public safety measure, an area where states have traditionally held strong authority. They will also lean heavily on the argument that by aligning with California’s framework, they are not creating chaos but are actively building a “unified benchmark,” demonstrating a responsible, multi-state approach to a national issue that the federal government has failed to address.
The law requires developers to report safety incidents within 72 hours. Can you walk us through the step-by-step operational and technical infrastructure a company like OpenAI would need to build by 2027 to reliably detect, verify, and report these incidents within that tight deadline?
That 72-hour window is incredibly demanding and requires a sophisticated, almost military-grade command structure. First, you need automated detection systems that are constantly monitoring model outputs for defined “safety incidents,” such as a model engaging in autonomous behavior or a critical failure of its own technical controls. This can’t be a person watching a screen; it has to be a robust, 24/7 automated alert system. Second, when an alert is triggered, a pre-designated rapid response team must immediately begin verification. They have to quickly determine if it’s a false positive or a genuine incident of theft, malicious use, or unauthorized access. Finally, there needs to be a clear, streamlined protocol to escalate the verified incident to the designated senior compliance officer, who must then compile and submit a formal report to New York State. To meet the 2027 deadline, companies need to be building and stress-testing this entire chain of command right now.
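To make that chain of command concrete, the following is a minimal, purely illustrative sketch in Python of the detect, verify, and escalate steps Yai describes, assuming the 72-hour clock starts at detection. Every name in it (SafetyIncident, IncidentType, the 0.9 severity threshold) is a hypothetical placeholder rather than a reference to any real system or to the statute's actual definitions.

```python
# Hypothetical sketch of a 72-hour incident reporting pipeline.
# All names, thresholds, and categories are illustrative, not taken from the
# RAISE Act's text or from any company's actual infrastructure.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

REPORTING_DEADLINE = timedelta(hours=72)  # the statutory reporting window

class IncidentType(Enum):
    AUTONOMOUS_BEHAVIOR = "autonomous_behavior"
    CONTROL_FAILURE = "critical_control_failure"
    THEFT = "model_weight_theft"
    MALICIOUS_USE = "malicious_use"
    UNAUTHORIZED_ACCESS = "unauthorized_access"

@dataclass
class SafetyIncident:
    incident_type: IncidentType
    detected_at: datetime
    details: str
    verified: bool = False

def detect(alert: dict) -> SafetyIncident | None:
    """Step 1: 24/7 automated monitoring raises a candidate incident."""
    if alert.get("severity", 0.0) >= 0.9:  # threshold is illustrative
        return SafetyIncident(
            incident_type=IncidentType(alert["category"]),
            detected_at=datetime.now(timezone.utc),
            details=alert.get("summary", ""),
        )
    return None

def verify(incident: SafetyIncident, is_genuine: bool) -> SafetyIncident | None:
    """Step 2: the rapid response team confirms or dismisses the alert."""
    if not is_genuine:
        return None  # false positive, nothing to report
    incident.verified = True
    return incident

def escalate(incident: SafetyIncident) -> dict:
    """Step 3: the senior compliance officer compiles the formal state report."""
    deadline = incident.detected_at + REPORTING_DEADLINE
    if datetime.now(timezone.utc) >= deadline:
        raise RuntimeError("72-hour reporting window already missed")
    return {
        "incident_type": incident.incident_type.value,
        "detected_at": incident.detected_at.isoformat(),
        "report_due_by": deadline.isoformat(),
        "details": incident.details,
    }
```

The design point the sketch tries to capture is that if the clock starts at detection, verification and escalation have to happen in hours rather than days, which is why Yai stresses automation and a pre-designated chain of command.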
The article mentions New York’s law builds on California’s framework to create a “unified benchmark.” In your view, how closely do these two laws align, and can you provide an example of a compliance challenge a developer might face navigating this emerging state-by-state regulatory patchwork?
While the intention to create a unified benchmark is commendable, the reality is that even small differences can create major compliance headaches. Imagine a scenario where California’s law defines a reportable incident based on a specific threshold of “unauthorized access,” while New York’s RAISE Act has a slightly broader definition of “malicious use.” A developer might build a sophisticated reporting system perfectly tailored to California’s rule, only to find it doesn’t capture the specific nuances required by New York. This forces them to run parallel compliance systems or constantly re-engineer their existing one, which is incredibly costly and inefficient. It’s precisely this kind of friction—navigating divergent definitions and reporting requirements—that fuels the tech industry’s outcry about a “burdensome patchwork” and gives the federal task force ammunition for its legal challenges.
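As a purely hypothetical illustration of that friction, imagine the two states' incident definitions encoded as a small rules table; the category sets below are invented for the example and do not quote either law.

```python
# Invented per-state reporting rules, for illustration only; they do not
# reflect the actual statutory definitions in California or New York.
STATE_REPORTING_RULES = {
    "CA": {"unauthorized_access", "model_weight_theft"},
    "NY": {"unauthorized_access", "model_weight_theft", "malicious_use"},
}

def states_requiring_report(incident_category: str) -> list[str]:
    """Return every jurisdiction whose (hypothetical) definition covers the incident."""
    return [
        state
        for state, categories in STATE_REPORTING_RULES.items()
        if incident_category in categories
    ]

# Under these invented rules, a "malicious_use" incident is reportable in
# New York but not California, forcing parallel compliance logic.
print(states_requiring_report("malicious_use"))  # -> ['NY']
```

Even in this toy version, a single divergent category means the developer's reporting logic can no longer be written once and reused everywhere, which is the patchwork problem in miniature.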
State Senator Gounardes’s quote suggests a view that tech companies prioritize profits over safety. Beyond the public statements, what specific metrics or internal protocols would a company need to demonstrate to regulators to prove they are genuinely committed to safety and not just avoiding penalties?
To truly move beyond the perception Senator Gounardes described, companies have to demonstrate that safety is a core operational value, not just a line item in the legal budget. This starts with transparency in their testing protocols. They could proactively publish detailed reports on their "red-teaming" efforts, showing how they tried to break their own models, what flaws they found, and how they fixed them before launch. Internally, the single most powerful signal would be the organizational placement and authority of the senior compliance officer. If that person is a C-level executive with the power to delay or veto a product launch over safety concerns, the commitment is real. It shows the company is willing to sacrifice short-term profit for long-term safety, which is a far more convincing argument than simply paying a fine of up to $1 million, or even $3 million, after something has already gone wrong.
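To give a sense of what that transparency could look like in practice, here is a minimal, hypothetical sketch of a structured red-team finding of the kind a developer might publish before launch; the field names and the example entry are invented for illustration, not an industry or statutory standard.

```python
# Hypothetical schema for a published red-team finding; the fields and the
# sample entry are illustrative, not an established reporting format.
from dataclasses import dataclass

@dataclass
class RedTeamFinding:
    attack_description: str   # how testers tried to break the model
    observed_failure: str     # what flaw or unsafe behavior surfaced
    severity: str             # e.g. "low", "medium", "critical"
    mitigation: str           # what was changed before launch
    verified_fixed: bool      # whether a retest confirmed the fix

example = RedTeamFinding(
    attack_description="Prompt-injection chain targeting tool-use permissions",
    observed_failure="Model executed an unauthorized external API call",
    severity="critical",
    mitigation="Added permission gating and retested refusal behavior",
    verified_fixed=True,
)
```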
What is your forecast for the state-level AI regulatory landscape in the next two years, especially concerning the tension with the federal AI Litigation Task Force?
I forecast a period of intense and strategic conflict. We are going to see the federal AI Litigation Task Force make a very public example of one or two states to create a chilling effect and discourage others from passing their own laws. However, I don’t believe states like New York and California will back down. Instead, I anticipate they will double down, working to bring more states into their “unified benchmark” to create a stronger coalition. This will lead to a high-stakes legal battle that will likely escalate through the court system, ultimately forcing a much-needed national conversation to define the boundaries between federal and state authority in governing this transformative technology. The next two years won’t be about finding harmony; they will be about drawing the battle lines for the future of AI governance in America.


