Today, we’re diving into the complex world of AI regulation with Vernon Yai, a renowned data protection expert whose work in privacy and data governance has shaped industry standards. With a focus on risk management and innovative strategies for safeguarding sensitive information, Vernon offers a unique perspective on how tech giants like Meta are navigating the evolving landscape of state AI laws. In this interview, we explore the motivations behind Meta’s recent lobbying efforts, the implications of state-level regulation for innovation, and the broader tech industry’s stance on these policies. Join us as we unpack the intersection of technology, policy, and corporate influence.
Can you walk us through what might have driven Meta to create the American Technology Excellence Project (ATEP) as a super PAC?
Meta’s launch of ATEP seems to stem from a growing concern over the patchwork of state AI laws popping up across the U.S. With over 1,100 AI-related bills introduced in 2025 alone, the company is worried about inconsistent regulations that could create a fragmented environment for AI development. Their primary fear is that overly restrictive laws at the state level might stifle innovation, making it harder for U.S. tech companies to maintain a competitive edge globally. It’s a strategic move to influence policy early on, before these laws solidify into barriers.
How do you think Meta views the impact of these state AI laws on the broader landscape of AI innovation in the United States?
From Meta’s perspective, state AI laws could act as a significant roadblock to innovation. They argue that varying regulations across states create compliance challenges, especially for technologies like AI that inherently operate across state lines. There’s a real concern that if every state has its own set of rules, it could slow down the deployment of new AI tools and services, ultimately harming the U.S. position as a tech leader. Meta seems to believe that a more unified approach, perhaps at the federal level, would be less disruptive to progress.
What do you see as the core objectives of ATEP in supporting state political candidates?
ATEP’s main goal appears to be electing state lawmakers who align with Meta’s vision for tech-friendly policies. This means backing candidates who prioritize minimal regulation on AI and support the growth of U.S. tech industries. It’s not just about opposing restrictive laws; it’s also about shaping a legislative environment that champions technological advancement and defends American tech leadership on a global stage. ATEP’s messaging also highlights empowering parents, which might tie into policies around parental controls and online safety for minors.
How do you think Meta decides which candidates to support, regardless of their political affiliation?
Meta has made it clear that party lines aren’t the deciding factor for ATEP’s endorsements. Instead, they’re likely looking at candidates’ stances on key tech issues—whether they advocate for policies that promote AI development and protect U.S. tech interests. I’d imagine they’re evaluating voting records, public statements, and even direct engagements to gauge how a candidate might approach AI regulation. It’s a pragmatic approach, focusing on policy alignment over partisan loyalty.
Why do you think Meta is targeting state-level politics rather than focusing on federal AI policies?
State-level politics have become a battleground for AI regulation because federal action has been slow to materialize. Without a cohesive national framework, states are stepping in to fill the gap, and that’s where the immediate impact on tech companies is felt. Meta likely sees state lawmakers as more accessible and influential in shaping policies that directly affect their operations right now. It’s a tactical choice—address the problem where it’s most active, rather than waiting for a federal solution that might never come.
In what ways do you think the lack of federal AI regulation is influencing companies like Meta to engage with state lawmakers?
The absence of federal AI regulation creates a vacuum that states are filling with their own rules, and that uncertainty pushes companies like Meta to act at the state level. Without a unified federal standard, they face the risk of navigating a maze of conflicting laws, which is costly and inefficient. Engaging with state lawmakers allows Meta to try to shape these policies early, potentially preventing harsher regulations from taking hold. It’s a defensive strategy born out of necessity.
How do you interpret Meta’s strategy of using a super PAC to influence state AI laws without directly donating to candidates?
Meta’s use of a super PAC like ATEP is a clever workaround to influence policy without direct campaign contributions. Super PACs can raise and spend unlimited funds on independent expenditures, but they’re legally barred from donating to candidates or coordinating with their campaigns. That structure lets Meta fund ads, grassroots campaigns, or other initiatives that sway public opinion and indirectly support lawmakers who share their views. It’s a way to wield significant influence while staying within legal boundaries, focusing on issue advocacy rather than direct financial ties to candidates.
What’s your perspective on the tech industry’s broader resistance to state AI regulations, as seen with other major players?
The tech industry’s pushback, including Meta’s efforts, reflects a deep-seated concern that even moderate state AI regulations could set a precedent for stricter oversight down the line. Companies worry that these laws, while not overly burdensome now, could evolve into barriers that limit experimentation and growth in AI. There’s also a fear of a domino effect—once one state passes a tough law, others might follow, creating a regulatory environment that’s hard to navigate. It’s a preemptive stance to protect their ability to innovate freely.
Are there specific state proposals or laws that you think are particularly worrisome for companies like Meta?
While I can’t speak for Meta directly, proposals like California’s SB 1047, which Governor Newsom vetoed in 2024 but would have imposed safety testing and shutdown requirements on developers of large frontier AI models, likely raised red flags. Such bills, even if they don’t pass, signal a trend toward accountability measures that could impose compliance costs or limit how AI models are deployed. For Meta, any state law that mandates strict guardrails or transparency could be seen as a threat to their operational flexibility and competitive edge.
What’s your forecast for the future of AI regulation at the state level in the U.S.?
I expect state-level AI regulation to continue growing in both number and complexity over the next few years, especially as public concerns about AI’s risks—like misinformation or privacy breaches—intensify. Without federal intervention, states will likely remain the primary battleground, leading to an even more fragmented regulatory landscape. This could push more tech companies to ramp up lobbying efforts like Meta’s, creating a tug-of-war between innovation and oversight. The real question is whether this patchwork approach will ultimately force a federal response or if states will set lasting precedents on their own.