We are joined today by Vernon Yai, a data protection expert who has spent his career examining the intersection of technology, government surveillance, and personal privacy. With the rapid deployment of facial recognition tools like Mobile Fortify by U.S. immigration agencies, the line between security and civil liberties has become increasingly blurred. We will delve into the real-world consequences of treating a “possible match” as a definitive identification, explore how procedural shortcuts are leading to vast, long-term biometric databases, and uncover how recent policy shifts have dismantled crucial privacy safeguards, allowing this technology to proliferate with minimal oversight.
Experts state that facial recognition technology is only for generating leads, not positive identification. What are the specific risks when an agent treats a “possible match” from an app like Mobile Fortify as a definitive ID, and what are the potential consequences for that individual?
The risks are profound and immediate. The entire industry, from manufacturers to police departments, consistently states this technology is for generating leads, not for positive identification. When an agent in the field treats a “possible match” as gospel, they are bypassing the entire investigative process. For the individual, the consequences are devastating. Imagine being stopped on the street, perhaps because of your accent or the color of your skin. An agent points a phone at you, and an app returns a photo of someone else. Suddenly, that “maybe” becomes the sole basis for your detention. You could be handcuffed, interrogated, and entered into a federal database, all based on a flawed algorithm’s guess. It gives a veneer of certainty to what is, in reality, a high-tech shot in the dark, and the person on the receiving end pays the price.
In one documented case, an agent got two different identities for the same person. Considering factors like poor lighting and head tilt, how does this reflect the technology’s reliability in street encounters, and what safeguards, like confidence scores, appear to be missing from this process?
That Oregon case is a perfect, and frankly terrifying, illustration of the system’s unreliability in the field. You have a woman in custody, handcuffed and looking down, yelping in pain as the agent repositions her just to get a photo. The app returns one name. When she doesn’t respond to it, they take another picture and get a completely different identity. This isn’t a bug; it’s a feature of how these systems break down in uncontrolled, “wild” conditions. Federal testing from NIST confirms that even top-tier algorithms falter with something as simple as a head tilt or bad lighting. The most alarming part is what’s missing. The agent testified there was no confidence score—no percentage telling him how likely the match was. He was just left to eyeball it, comparing a grainy photo on his phone to a distressed person in front of him. Without those basic safeguards, the technology becomes a tool of arbitrary enforcement, not accurate identification.
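To make the missing safeguard concrete, here is a minimal, purely illustrative sketch of what a confidence-score gate could look like. The threshold value, the function name review_match, and its inputs are hypothetical assumptions, not features of Mobile Fortify; the point is simply that a numeric score is surfaced and weak candidates are never presented as identifications.

```python
# Purely illustrative sketch; nothing here is drawn from Mobile Fortify's code.
# CONFIDENCE_FLOOR and review_match are hypothetical names used only to show
# what a basic confidence-score safeguard could look like.

CONFIDENCE_FLOOR = 0.85  # assumed minimum similarity before a candidate is even shown


def review_match(candidate_name: str, similarity: float) -> str:
    """Surface a numeric score with every candidate instead of a bare name."""
    if similarity < CONFIDENCE_FLOOR:
        # Below the floor, the system declines to name anyone rather than
        # presenting a weak guess as if it were an identification.
        return f"No reliable candidate (best score {similarity:.0%} is below the floor)"
    return f"Possible lead only: {candidate_name} (score {similarity:.0%}); verify independently"
```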
Agents are reportedly instructed to use face scans in the field before collecting fingerprints back at an office. Given that fingerprints are a stronger identifier, what does this procedural choice suggest about the system’s primary goal—is it accurate identification or rapid, mass biometric collection?
This procedural choice speaks volumes, and it points directly to a primary goal of rapid, mass biometric collection. Fingerprints are the gold standard for biometric identification; they are far more reliable. The fact that agents are instructed to prioritize a quick, often inaccurate face scan on the street over a more definitive fingerprint scan back at an office tells us that positive identification is not the immediate priority. The goal is to capture as much biometric data from as many people as possible, as quickly as possible. A face scan requires just a phone and a moment. It lowers the barrier for collection, sweeping up not just targeted individuals but also bystanders, protesters, and even U.S. citizens into a vast surveillance net. The data is then funneled into a system where it can be stored for up to 75 years, fundamentally turning routine encounters into permanent biometric data harvesting operations.
Data from street-level face scans, including those of U.S. citizens, is reportedly stored for up to 75 years in various databases and watch lists. What are the long-term civil liberties implications of this, and what kind of redress, if any, is available for someone wrongly included?
The long-term implications are chilling. We are building a society where any public encounter with law enforcement can result in your face print being stored in a federal intelligence database for the better part of a century. This data isn’t just sitting there; it’s being cross-referenced against watch lists like the “Seizure and Apprehension Workflow” or the “Fortify the Border Hotlist.” A “derogatory hit” doesn’t mean you’ve committed a crime, yet it could shadow you for life. For a U.S. citizen wrongly swept into this system, the path to redress is practically nonexistent. The records show there is no clear appeals process for these watch lists. How do you prove you were wrongly included in a secret list you don’t even know you’re on? This creates a permanent digital cloud of suspicion over people, impacting their ability to travel, their interactions with law enforcement, and their fundamental right to privacy and due process.
In early 2025, DHS reportedly removed its department-wide directive that constrained facial recognition use. What specific protections were lost, and how did this policy change pave the way for a tool like Mobile Fortify to be deployed without the traditional headquarters-level privacy review?
The removal of Directive 026-11 was a watershed moment that effectively opened the floodgates. We lost a critical set of guardrails that were, until then, the bare minimum of protection. That directive explicitly stated that facial recognition could not be the sole basis for an enforcement action, and it prohibited wide-scale, indiscriminate surveillance. Crucially, it gave U.S. citizens the right to opt out of biometric collection in non-law enforcement contexts. Most importantly, it centralized oversight, requiring any use by agencies like ICE and CBP to get a headquarters-level privacy review. When that directive vanished from the DHS website just three weeks after the inauguration, that central check was gone. Suddenly, component agencies like CBP could self-approve their own privacy reviews, which is exactly what happened with Mobile Fortify. This dismantling of oversight, seemingly steered by individuals with a history of advocating for weaker central control, created the perfect policy vacuum for a tool this invasive to be fast-tracked into agents’ hands.
The system’s underlying technology is designed to balance speed and accuracy, sometimes timing out and returning the “best available” guess. How does this trade-off function in real-time street encounters, and what is the likelihood of getting a wrong match when a person is poorly photographed?
This trade-off is at the heart of the problem. In a controlled environment, you can take your time to get a perfect, visa-quality photo. But on the street, an agent needs an answer in seconds. The patents for the NEC technology used in Mobile Fortify explicitly describe tuning the system to favor speed. This means setting a very short time window for the search—maybe just a few seconds. If a perfect match isn’t found in that time, the system doesn’t return nothing; it returns the highest-scoring candidate it found before time ran out, even if that score is too low for an automated match. Now, combine that with a poorly framed photo from a cell phone—blurry, bad angle, weird lighting. When the initial image quality is low, the odds are high that the correct person is filtered out early in the search. Any match returned in that scenario is not just likely to be wrong; it’s almost guaranteed to be wrong. It’s a guess, presented as a lead.
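As a rough illustration of that speed-over-accuracy tuning, here is a minimal sketch of a time-boxed gallery search. Every name, threshold, and time budget below is an assumption made for illustration, not NEC’s or CBP’s actual implementation; the point is that when the clock runs out, whatever scored highest so far is returned, even if it never cleared the automated-match threshold.

```python
# Illustrative sketch only: thresholds, time budget, and function names are
# hypothetical assumptions, not details of the NEC or Mobile Fortify systems.
import time

MATCH_THRESHOLD = 0.90   # assumed score needed to count as an automated match
SEARCH_BUDGET_S = 3.0    # assumed short time window that favors speed


def timed_gallery_search(probe, gallery, score_fn):
    """Return the best-scoring candidate found before the deadline expires."""
    deadline = time.monotonic() + SEARCH_BUDGET_S
    best_candidate, best_score = None, 0.0
    for candidate in gallery:
        if time.monotonic() > deadline:
            break  # time ran out: stop searching and keep whatever we have
        score = score_fn(probe, candidate)
        if score > best_score:
            best_candidate, best_score = candidate, score
    # The "best available" candidate comes back even when its score never
    # cleared the threshold: a lead, not a positive identification.
    return best_candidate, best_score, best_score >= MATCH_THRESHOLD
```

With a blurry, badly lit probe photo, the scoring function ranks the true identity low or never reaches it before the deadline, so the candidate handed to the agent is simply whoever happened to score highest among the records checked in time.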
What is your forecast for the use of facial recognition technology in immigration enforcement?
My forecast is, unfortunately, one of rapid and unchecked expansion unless there is significant legislative or judicial intervention. The technological infrastructure and the policy voids are already in place. We are seeing a clear shift from controlled, port-of-entry surveillance to a model of pervasive, interior enforcement where every street corner is a potential digital checkpoint. The agencies have demonstrated a clear appetite for these tools, and without strong central oversight from DHS or new laws like the proposed ICE Out of Our Faces Act, the trend will be to deploy more powerful, more mobile, and more integrated systems. This technology will become a default component of an agent’s toolkit, normalizing the nonconsensual scanning of faces—citizens and noncitizens alike—and feeding a surveillance apparatus that will become increasingly difficult to dismantle. The future is a digital dragnet that grows wider and more permanent with each passing day.


