AI-Powered Fraud Attacks Surge by an Astounding 1,210%

Feb 13, 2026
Interview

In the rapidly evolving landscape of cybersecurity, few threats are as daunting as the rise of artificial intelligence in the hands of malicious actors. To shed light on this new frontier of digital crime, we sat down with Vernon Yai, a data protection expert specializing in risk management and innovative fraud prevention. We explored the tectonic shift from manual fraud to an automated, AI-driven supply chain, discussing how deepfakes and generative AI are weaponized for sophisticated impersonation schemes, the specific tactics used to target the retail sector with low-dollar refund scams, and the urgent need for businesses to overhaul their internal security processes to counter these advanced threats.

We’re seeing fraud shift from manual efforts to what’s described as an “automated supply chain.” Could you elaborate on this shift and explain how tools like deepfakes and generative AI make these scams faster, cheaper, and harder for businesses to detect in real-time?

Last year truly was the watershed moment when the nature of fraud fundamentally changed. What we’re witnessing is a move away from isolated, manual attacks to a fully automated, industrial-scale operation. Generative AI and deepfake technologies have become the engine of this new supply chain. They allow attackers to launch thousands of attempts simultaneously across every communication channel—voice, video, chat, and email—using synthetic identities that look and sound frighteningly legitimate. This is why we saw a staggering 1,210% increase in attacks against major U.S. companies last year. These AI-driven scams are faster to deploy, significantly cheaper to run, and incredibly difficult to spot because they hit businesses at their most vulnerable points: the real-time decision-making in contact centers, urgent payment requests, and high-pressure executive impersonations.

With AI enabling sophisticated impersonations of both job candidates and executives, what are the primary goals of these different attacks? Could you share some red flags or metrics that might indicate a company is being targeted by one of these deepfake-driven social engineering schemes?

The goals differ depending on the target, but the underlying method is the same: using convincing, AI-backed schemes to manipulate people. When fraudsters impersonate job candidates with AI-generated videos and voices, their primary objective is to infiltrate the organization to gain internal system access. Once inside, they can plant malware, steal data, or lay the groundwork for a larger attack. On the other hand, impersonating high-level executives is a classic social engineering tactic supercharged by AI. The goal here is to pressure employees into making fraudulent payments or revealing sensitive information. A major red flag is simply the volume of attacks. We know that nearly three-quarters of U.S. companies experienced a surge in AI-powered fraud attempts last year. This isn’t a trickle; it’s a flood. When your finance and HR teams are flagging a significant increase in suspicious requests, especially those that come with a sense of urgency, it’s a strong indicator that you’re being targeted by an automated campaign.
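As a rough illustration of that "volume" red flag, a finance or security team could simply track flagged requests per week and alert when the latest count jumps well above its recent baseline. The sketch below is a minimal Python example using made-up counts and an assumed weekly-tally feed; it is not a metric Yai prescribes, just one way the idea could be monitored.

```python
from statistics import mean, pstdev

def surge_alert(weekly_flagged_counts, multiplier=3.0):
    """
    Return True if the latest week's count of suspicious requests
    (urgent payment changes, odd executive asks, etc.) sits far above
    the baseline of the preceding weeks.
    """
    *baseline, latest = weekly_flagged_counts
    if len(baseline) < 4:          # need some history before alerting
        return False
    mu, sigma = mean(baseline), pstdev(baseline)
    return latest > mu + multiplier * max(sigma, 1.0)

# Hypothetical counts: a steady trickle of flagged requests, then a spike.
history = [3, 4, 2, 5, 3, 4, 19]
print(surge_alert(history))  # True -> treat as a possible automated campaign
```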

The retail industry is seeing a significant rise in automated, low-dollar refund scams. Can you detail how fraudsters use bots to execute these attacks at scale while staying under the radar? What steps can retailers take to differentiate between a legitimate customer and a fraudulent bot?

The retail sector has become a prime target for a very specific and clever type of automated fraud. Attackers deploy bots that are programmed with common scripts to methodically initiate return requests on retail websites. The strategy is all about volume and subtlety. They intentionally target low-dollar refunds, keeping each fraudulent transaction below a certain threshold—just enough to avoid triggering automatic suspicion or manual review. By doing this thousands of times, they accumulate significant profits while each individual act appears negligible. To fight this, retailers must move beyond simple threshold-based fraud rules. They need to invest in systems that can analyze behavioral patterns. A real customer might browse, hesitate, or use the site in an idiosyncratic way, whereas a bot will execute its script with machinelike precision and speed, often from multiple accounts originating from similar IP ranges. Differentiating between the two requires technology that can spot those subtle, inhuman patterns of behavior.
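To make that distinction concrete, here is a minimal sketch, in Python, of the kind of behavioral scoring that goes beyond a single dollar threshold: it looks at how uniformly a requester moves through the refund flow, whether the amount sits just under the review cutoff, and how many requests cluster on nearby IP addresses. The field names (`refund_amount`, `click_intervals`, `source_ip`) and the threshold are assumptions for illustration, not a production fraud engine.

```python
from collections import Counter
from dataclasses import dataclass
from statistics import pstdev

# Hypothetical review threshold: refunds under this amount skip manual review.
REVIEW_THRESHOLD = 50.00

@dataclass
class RefundRequest:
    account_id: str
    source_ip: str                 # e.g. "203.0.113.42"
    refund_amount: float
    click_intervals: list          # seconds between page events in the session

def ip_prefix(ip: str) -> str:
    """Group requests by /24 prefix so clusters from similar ranges stand out."""
    return ".".join(ip.split(".")[:3])

def bot_score(req: RefundRequest, recent: list) -> float:
    """Score 0..3: higher means more bot-like. `recent` is a window of other requests."""
    score = 0.0

    # 1. Machine-like timing: real customers hesitate; bots click at a near-constant pace.
    if req.click_intervals and pstdev(req.click_intervals) < 0.2:
        score += 1.0

    # 2. Amount engineered to sit just below the manual-review threshold.
    if 0.8 * REVIEW_THRESHOLD <= req.refund_amount < REVIEW_THRESHOLD:
        score += 1.0

    # 3. Many refund requests from the same IP range in the current window.
    prefix_counts = Counter(ip_prefix(r.source_ip) for r in recent)
    if prefix_counts[ip_prefix(req.source_ip)] >= 10:
        score += 1.0

    return score

# Usage: route anything scoring 2 or more to step-up verification instead of auto-approval.
```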

Given that many finance leaders report their internal processes haven’t kept pace with the growing risk of AI fraud, what are the primary obstacles they face? Could you provide a step-by-step approach for how a company can begin to update its security protocols to address these new threats?

The fundamental obstacle is that AI has completely raised the baseline for what constitutes a credible threat, yet many internal processes are still designed to catch the frauds of five years ago. Companies are struggling because their traditional verification methods, which often rely on manual checks and human intuition, are simply too slow and unreliable to counter automated, AI-generated attacks. We’ve seen that one in four companies has already reported six-figure losses due to this gap. To start closing it, the first step is acceptance: leadership must recognize that the risk has fundamentally changed and that existing protocols are inadequate. The second step is a comprehensive risk assessment to identify the most vulnerable processes, particularly around payments and identity verification. Third, companies must invest in modern, AI-powered defensive tools that can detect synthetic media and automated behavior in real time. Finally, this can’t be a one-time fix; it requires continuous training, process refinement, and a proactive security posture to keep pace with the attackers.

What is your forecast for AI-driven fraud over the next two to three years?

I believe we are just at the beginning of this wave. The “automated supply chain” for fraud will become even more sophisticated, efficient, and accessible over the next few years. We can expect to see deepfake technology become virtually indistinguishable from reality, making it even harder for the untrained eye to spot a scam. Furthermore, as these AI tools become cheaper and easier to use, they will empower a wider range of criminals, not just highly technical groups. Businesses will face a constant barrage of attacks that are more personalized, more convincing, and deployed at an unprecedented scale. The only viable path forward is to fight fire with fire—organizations must aggressively adopt their own AI and machine learning defenses, not just to detect fraud but to predict it before it even happens.
