As a leading voice in data protection and privacy law, Vernon Yai brings unparalleled expertise to the table. With a career dedicated to safeguarding sensitive information and pioneering risk management strategies, Vernon has become a trusted thought leader in navigating the complex intersection of technology and privacy rights. In this interview, we dive into the recent high-profile Flo class action verdict against Meta, exploring the implications of the ruling under a decades-old California privacy law, the potential financial fallout for Meta, and the broader impact on data governance in the digital age. We also unpack the nuances of how personal data was allegedly accessed, the scale of the affected class, and what this case signals for the future of privacy litigation.
Can you walk us through the core issue in the recent Flo class action case against Meta in San Francisco?
Certainly. The crux of the case revolves around Meta being found liable for violating user privacy through its integration with the fertility tracking app Flo. A federal jury in San Francisco determined that Meta accessed highly sensitive personal information—think details about menstrual cycles, pregnancies, and intimate aspects of users’ lives—without proper consent. This was allegedly facilitated by software development kits that Flo incorporated from Meta, which enabled data analytics but also, according to the plaintiffs, allowed unauthorized access to private data despite assurances of confidentiality.
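To make that mechanism a bit more concrete for readers, here is a minimal, purely illustrative sketch in Python. It is not Meta's or Flo's actual code; the AnalyticsSDK client, event names, and parameters are all invented for illustration, and real mobile SDKs are typically called from Kotlin or Swift rather than Python. The point is only to show how a custom in-app event, logged for routine analytics, can carry sensitive fields to a third party along with it.

```python
# Hypothetical illustration only: how a health app might log a custom
# analytics event through an embedded third-party SDK. All names here are
# invented; this does not reproduce Meta's or Flo's actual code.

class AnalyticsSDK:
    """Stand-in for a third-party analytics client bundled into an app."""

    def __init__(self, app_id: str):
        self.app_id = app_id

    def log_event(self, event_name: str, parameters: dict) -> None:
        # In a real SDK, this payload would be serialized and transmitted to
        # the vendor's servers. Printing it here simply shows what leaves
        # the app when the event fires.
        print(f"[{self.app_id}] event={event_name} params={parameters}")


# The developer intends this as routine usage analytics...
sdk = AnalyticsSDK(app_id="example-app")

# ...but if the event parameters include health details the user entered,
# those details travel to the third party as part of the event.
sdk.log_event(
    "goal_selected",
    {"goal": "trying_to_conceive", "cycle_day": 14},
)
```

Whether any given transmission of that kind amounts to an unlawful interception is, of course, exactly what the jury was asked to decide.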
How does a law from 1967, the California Invasion of Privacy Act, apply to modern internet data practices in this context?
The California Invasion of Privacy Act, enacted in 1967, was originally designed to protect against covert eavesdropping or recording of telephone conversations. It’s a bit of a relic, but its language around privacy intrusion has been interpreted to cover modern data collection practices. In this case, the court applied it to Meta’s alleged interception of personal data through Flo’s app, equating each instance of data access to a distinct privacy violation. It’s a fascinating stretch of an old statute to fit the digital age, raising questions about how far such laws can be adapted before new legislation is needed.
What could this verdict mean financially for Meta if the plaintiffs’ damages request is approved?
The financial stakes are enormous. If the plaintiffs’ damages request is granted in full, Meta could face a payout of up to $8 billion. That figure comes from an estimated 1.6 million California subclass members potentially receiving $5,000 each under the state privacy law’s statutory penalty. Compared to past penalties, such as the $1.4 billion settlement with Texas in 2024 over facial recognition misuse or the $5 billion FTC fine in 2019 for privacy violations, this could dwarf previous amounts and set a new benchmark for class action privacy cases.
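As a back-of-envelope check on where the $8 billion figure comes from, the arithmetic is simple, assuming the roughly 1.6 million California subclass members and the $5,000 statutory penalty per person cited above; the precise class size and any final award remain for the court to determine.

```python
# Back-of-envelope arithmetic for the reported exposure, using the
# approximate figures cited in this interview.
subclass_members = 1_600_000   # estimated California subclass size
penalty_per_person = 5_000     # statutory penalty per person under the state law

estimated_exposure = subclass_members * penalty_per_person
print(f"${estimated_exposure:,}")  # $8,000,000,000
```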
Can you explain how damages are being calculated in this case and any concerns about proportionality?
The plaintiffs’ lawyer proposed a straightforward method: $5,000 per person for a single incident, avoiding astronomical figures like a quadrillion dollars that were floated earlier based on per-data-entry violations. Judge Donato seemed somewhat receptive to this cap, noting it helps avoid a due process issue where the punishment might be disproportionate to the harm. He’s clearly mindful of balancing accountability with fairness, ensuring the penalty doesn’t spiral into something unreasonable given the scale of the class and the nature of the violation.
What kind of personal information was reportedly accessed by Meta through Flo, and why is this so concerning?
The data in question is deeply personal—users of Flo were entering details about their menstrual cycles, sexual health, and pregnancies, believing this information was confidential. The plaintiffs allege that Meta’s software kits embedded in the app allowed access to this sensitive data, breaching trust and privacy. This is particularly troubling because health-related information is among the most intimate and protected categories of data, and its exposure can have profound personal and emotional consequences for users.
How have other companies involved in this lawsuit, like Google and Flo, responded to the allegations?
Google opted for a settlement early on, agreeing to pay $48 million while denying any wrongdoing. Flo, on the other hand, settled mid-trial for $8 million, with its lawyers highlighting that the company has never turned a profit, suggesting financial constraints influenced its decision. Both settlements contrast sharply with Meta’s approach of contesting the verdict, showing varying strategies in handling such high-stakes privacy litigation.
With an estimated 1.6 million subclass members in California alone, how does the scale of this class action impact the case’s significance?
The sheer number of affected individuals—1.6 million in California, with a national class potentially reaching 13 million—makes this case a landmark in privacy law. It underscores the massive reach of tech platforms and the potential for widespread harm when data protections fail. The scale also amplifies the financial and reputational risks for companies like Meta, while setting a precedent for how large class actions might be managed in terms of damages and claimant verification in future privacy disputes.
What is your forecast for the future of privacy litigation in light of this verdict?
I anticipate a surge in privacy class actions as users become more aware of their rights and courts continue to adapt older laws to digital contexts. This verdict could embolden plaintiffs to challenge tech giants over data practices, especially with sensitive information like health data. We’re likely to see stricter scrutiny of software integrations and data-sharing agreements, alongside calls for updated legislation that directly addresses internet-era privacy challenges. For companies, the pressure to prioritize robust data governance and transparency will only intensify as the legal landscape evolves.