ComfyUI Instances Targeted in Global Botnet and Mining Campaign

Apr 8, 2026

While artists and developers continue to celebrate the meteoric rise of generative media, a sophisticated global network of cybercriminals is quietly transforming high-performance AI servers into profitable, hijacked nodes for clandestine cryptomining and dark web proxy services. This exploitation represents a grim irony in the modern technology landscape: the very hardware designed to fuel the next generation of human creativity is being turned into a silent profit engine for threat actors. As the demand for high-end graphics processing units reaches a fever pitch, these servers have become the most coveted “digital real estate” on the internet, leading to an aggressive surge in automated exploitation campaigns that target specialized AI platforms.

The transition from traditional server compromises to AI-specific targeting marks a significant evolution in cybercrime. Previously, botnets focused on low-power IoT devices or standard web servers, but the intensive computational requirements of generative AI mean that these machines possess hardware capabilities that are orders of magnitude greater than typical targets. Hackers are no longer content with simple CPU cycles; they are hunting for the massive CUDA cores and VRAM found in modern AI workstations. For many users, the first sign of trouble is the sudden disappearance of system resources or a massive spike in electricity costs, often occurring long before they realize their “one-click” cloud deployment has been assimilated into a global botnet.

The GPU Gold Rush: Why Your AI Server Is Now a Cybercriminal Target

The rapid expansion of the AI sector has created a specialized infrastructure that is as powerful as it is poorly defended. These high-performance machines are the lifeblood of the “GPU gold rush,” but their immense processing power serves as a double-edged sword. To a cybercriminal, an AI server is not a tool for image generation; it is a high-yield mining rig that can be seized without the overhead of purchasing hardware. By turning creative platforms into silent laborers, hackers can generate significant revenue while the legitimate owner bears the operational costs. This shift has turned specialized AI platforms into the new front line for botnet expansion, moving beyond the traditional targets of the past.

Moreover, the prevalence of automated exploitation scripts has lowered the barrier to entry for these attacks. Threat actors are no longer manually hunting for individual targets; instead, they deploy relentless scanners that comb through cloud IP ranges looking for any instance that displays a signature of AI-related services. This industrialized approach to hacking means that any server connected to the internet without robust security is likely to be discovered within minutes. The hidden cost of convenience, particularly in the realm of “one-click” AI deployments, is becoming increasingly apparent as more users realize that simplicity often comes at the expense of necessary security hardening.

The Intersection of Generative AI Popularity and Infrastructure Vulnerability

ComfyUI has quickly ascended to a position of dominance within the Stable Diffusion ecosystem due to its modular, graph-based approach to image generation. However, this popularity has inadvertently painted a target on its back. Because the platform is often viewed as a “niche” tool for researchers and hobbyists, many deployments occur in what threat actors consider a “dark corner” of the internet. These environments frequently lack the enterprise-grade monitoring and firewall protections typically found in more established web services. This perceived isolation provides a perfect staging ground for threat actors to operate undetected for long periods, slowly building their infrastructure under the radar of traditional security teams.

The global shift toward “bulletproof” hosting and the increasing commodification of cloud-based botnets have further complicated this landscape. As generative AI becomes more mainstream, the infrastructure supporting it is being folded into a larger market of illicit services. Cybercriminals can now rent access to compromised AI servers to perform resource-intensive tasks, such as brute-forcing passwords or hosting anonymous proxy nodes. This convergence of popularity and vulnerability creates a cycle where the more successful an AI tool becomes, the more likely its users are to be targeted by sophisticated campaigns seeking to capitalize on unhardened infrastructure.

Inside the Campaign: Automation, Exploitation, and Monetization

The anatomy of a modern attack on ComfyUI begins with the relentless probing of Python-based scanners. These tools are designed to identify the specific footprints of AI software and check whether the interface is accessible without authentication. If an instance is found to be open, the attackers do not stop at simple access; they weaponize the “ComfyUI-Manager” itself. This popular extension, meant to help users install new features, is abused to download and install vulnerable custom nodes that provide a gateway for remote code execution. This allows the attacker to bypass the need for a traditional software exploit by using the platform’s own extensibility against it.
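The same unauthenticated check the scanners automate can be run defensively against your own host. The sketch below is a minimal self-audit, assuming ComfyUI's default HTTP port of 8188; the function name `check_exposed` is illustrative, not part of any real tool. If it returns True, the UI answered an anonymous request with no auth challenge, which is exactly the signature the campaign's scanners hunt for.

```python
import urllib.request
import urllib.error

def check_exposed(host: str, port: int = 8188, timeout: float = 3.0) -> bool:
    """Return True if the web UI at host:port answers an unauthenticated
    GET with 200 OK -- the same signal automated scanners look for."""
    url = f"http://{host}:{port}/"
    req = urllib.request.Request(url, headers={"User-Agent": "self-audit"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200  # served content with no auth challenge
    except urllib.error.HTTPError:
        return False  # 401/403 etc.: some auth layer answered first
    except (urllib.error.URLError, OSError):
        return False  # closed port, connection refused, or timeout
```

Run it from outside your network perimeter as well as inside: an instance that looks protected from the LAN may still be reachable from the public internet through a cloud provider's default firewall rules.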

Once the initial breach is successful, the deployment of a primary staging script, often referred to as ghost.sh, initiates a complete system takeover. This script acts as the command center for the infected host, establishing persistence and preparing the machine for its new role in the botnet. The monetization strategy is multifaceted, deploying two miners in tandem: XMRig for Monero and lolMiner for Conflux. While Monero provides a privacy-focused steady income, Conflux is specifically chosen because it thrives in GPU-heavy environments, making it the “holy grail” for modern cryptojackers who have gained access to high-end NVIDIA hardware.

Furthermore, the integration of the Hysteria V2 protocol transforms these compromised servers into anonymous proxy nodes. This allows the threat actors to sell access to these high-speed connections on the dark web, creating a secondary revenue stream. These “proxies-for-hire” are particularly valuable because they originate from legitimate cloud providers and residential IP ranges, making them difficult for security filters to block. This combination of cryptomining and proxy services ensures that the attacker maximizes the profit from every single core and gigabyte of memory available on the hijacked system.

The “Malware Wars”: Neutralizing the Competition

A fascinating aspect of this campaign is the emergence of “malware wars,” where different threat groups fight for control over the same hardware. In this tactical hijacking, the current campaign actively seeks out and displaces a rival botnet known as “Hisana.” When the ghost.sh script detects the presence of a competitor, it does not simply delete the rival’s files; it performs a sophisticated takeover. The script redirects the rival’s mining configuration to the new attacker’s wallet, effectively stealing the work already performed by the previous infection. This parasitic behavior demonstrates a level of strategic planning that goes beyond simple destruction.

To prevent the original attackers or the system owners from recovering the machine, the campaign occupies critical command-and-control ports, such as 10808, with dummy listeners. This prevents rival malware from re-establishing a connection with its own servers. By occupying these digital “foxholes,” the threat actors ensure they remain the sole occupants of the high-performance hardware. This constant state of conflict between different malware strains highlights just how valuable these AI resources have become, as groups compete fiercely for the processing power required to sustain their illicit operations.
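One practical consequence of this port squatting is that a simple listening-port inventory can surface an infection. The sketch below is an illustrative check, not a tool from the campaign write-up: it probes a local TCP port and reports whether anything is bound there, so the ports the article names (such as 10808) can be compared against a known-good baseline for the host.

```python
import socket

def port_is_listening(port: int, host: str = "127.0.0.1") -> bool:
    """Probe a local TCP port; a successful connect means *something*
    is bound there -- possibly a dummy listener squatting on a rival
    botnet's command-and-control port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Ports named in the article as contested between rival malware strains.
SUSPECT_PORTS = [10808]
```

A listener on one of these ports is only an indicator, not proof: cross-check with `ss -tlnp` (or equivalent) to see which process owns the socket before drawing conclusions.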

Technical Sophistication and the Fight for Persistence

Expert analysis of the Linux-level evasion techniques used in this campaign reveals a high degree of technical proficiency. To ensure their malware remains on the system even after a reboot or an attempted cleanup, attackers utilize the chattr +i command. This attribute makes the malware binaries immutable, preventing even the root user from deleting or modifying them through standard means. This type of deep-system manipulation is designed to frustrate average users and less-experienced system administrators, ensuring the botnet’s longevity on the compromised host.
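The immutable attribute set by chattr +i can be detected programmatically. The following is a minimal Linux-only sketch using the same ioctl that lsattr relies on; the constants come from linux/fs.h, and the helper name `is_immutable` is illustrative. Filesystems without attribute support simply report False.

```python
import fcntl
import os
import struct

FS_IOC_GETFLAGS = 0x80086601  # from linux/fs.h (64-bit long argument)
FS_IMMUTABLE_FL = 0x00000010  # the "+i" bit set by `chattr +i`

def is_immutable(path: str) -> bool:
    """Return True if the inode carries the immutable attribute that
    blocks deletion or modification even by root."""
    fd = os.open(path, os.O_RDONLY)
    try:
        buf = fcntl.ioctl(fd, FS_IOC_GETFLAGS, struct.pack("l", 0))
        return bool(struct.unpack("l", buf)[0] & FS_IMMUTABLE_FL)
    except OSError:
        return False  # filesystem does not support inode attributes
    finally:
        os.close(fd)
```

During cleanup, an immutable malware binary must first have the attribute cleared with chattr -i (as root) before it can be removed; sweeping suspicious paths with a check like this explains why an "undeletable" file resists rm.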

Persistence is further bolstered through process hiding techniques, specifically the use of LD_PRELOAD hooks. These hooks allow the malware to intercept system calls and hide its watchdog processes from standard monitoring tools like top and ps. To a casual observer, the server might appear to be running normally, even as its GPUs are pinned at 100% capacity for the attacker’s benefit. The reliance on “bulletproof” command-and-control infrastructure, such as that provided by the Aeza Group, suggests that these operations are backed by established criminal organizations. Evidence also points to broader operations, linking these ComfyUI exploits to brute-force attempts on Redis databases, indicating a wide-reaching net of digital exploitation.
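Because LD_PRELOAD hooks sit in user space, their usual footholds can be enumerated directly. The sketch below is a minimal Linux-only indicator sweep, assuming nothing beyond /proc and the standard preload mechanisms; the helper name `preload_indicators` is illustrative. It flags a populated /etc/ld.so.preload and any process started with LD_PRELOAD in its environment.

```python
import os

def preload_indicators() -> list:
    """Collect simple LD_PRELOAD persistence indicators on Linux:
    a populated /etc/ld.so.preload file, or LD_PRELOAD set in any
    running process's environment."""
    hits = []
    try:
        with open("/etc/ld.so.preload") as f:
            libs = f.read().split()
        if libs:
            hits.append(f"/etc/ld.so.preload loads: {libs}")
    except (FileNotFoundError, PermissionError):
        pass  # absent on a clean host, or unreadable without privileges
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/environ", "rb") as f:
                env = f.read().split(b"\0")
        except OSError:
            continue  # process exited, or access denied
        for var in env:
            if var.startswith(b"LD_PRELOAD="):
                hits.append(f"pid {pid}: {var.decode(errors='replace')}")
    return hits
```

A hit is grounds for investigation, not proof of compromise: LD_PRELOAD has legitimate uses, so compare any flagged library against your known software inventory. Note also that a sufficiently thorough rootkit can hook the very calls this sweep depends on, so corroborate findings from a trusted environment such as a rescue boot.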

Securing the AI Frontier: Strategies for Defense and Mitigation

The critical necessity of authentication cannot be overstated in this new era of AI research. Any public exposure of a ComfyUI interface is an open invitation for exploitation, and the speed at which scanners operate means that a “security through obscurity” approach is doomed to fail. Implementing a robust VPN or a secure reverse proxy with mandatory authentication is the most effective way to prevent these automated attacks from reaching the application layer. These practical steps for hardening ComfyUI are no longer optional for anyone hosting their tools on public-facing cloud instances.
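As one illustration of the reverse-proxy approach, the fragment below sketches an nginx front end with TLS and basic authentication in front of a ComfyUI instance bound to loopback on its default port 8188. The hostname, certificate paths, and htpasswd location are placeholders to adapt, not values from the campaign analysis.

```nginx
# Illustrative hardening sketch: ComfyUI listens only on 127.0.0.1,
# and all outside traffic must pass HTTPS + basic auth first.
server {
    listen 443 ssl;
    server_name comfy.example.com;              # placeholder hostname

    ssl_certificate     /etc/ssl/comfy.crt;    # placeholder cert paths
    ssl_certificate_key /etc/ssl/comfy.key;

    auth_basic           "ComfyUI";
    auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:8188;       # ComfyUI's default port
        proxy_set_header Host $host;
        # WebSocket upgrade -- the graph UI streams over a websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The key design point is that ComfyUI itself never binds to a public interface: even if the application layer has an unpatched flaw, automated scanners hit the authentication wall first.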

Audit protocols for custom nodes must also become a standard part of the AI workflow. Users should be wary of any extension that requests high-risk permissions or allows for raw shell execution, specifically nodes identified as “Shell-Executor” or those with unknown origins. Moving away from a convenience-first configuration and adopting a “security-by-default” mindset is essential for the longevity of the AI creative community. In the end, protecting these powerful machines ensures that they remain dedicated to the pursuit of innovation rather than the enrichment of global cybercriminal networks.
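A first-pass audit of installed custom nodes can be automated with a crude static scan. The sketch below is illustrative (the function name `audit_custom_nodes` and the pattern list are assumptions, not an established tool): it walks a custom_nodes directory and reports lines that reference shell-execution or dynamic-evaluation primitives for manual review.

```python
import pathlib
import re

# Crude static markers of shell-execution or dynamic-eval capability.
RISK_PATTERNS = re.compile(r"os\.system|subprocess\.|eval\(|exec\(|popen")

def audit_custom_nodes(nodes_dir: str) -> dict:
    """Scan every .py file under a ComfyUI custom_nodes directory and
    report line numbers matching high-risk patterns, for human review."""
    findings = {}
    for py in pathlib.Path(nodes_dir).rglob("*.py"):
        text = py.read_text(errors="replace")
        hits = [
            f"{lineno}: {line.strip()}"
            for lineno, line in enumerate(text.splitlines(), 1)
            if RISK_PATTERNS.search(line)
        ]
        if hits:
            findings[str(py)] = hits
    return findings
```

Many legitimate nodes use subprocess for benign reasons, so a match is a prompt to read the surrounding code, not an automatic verdict; nodes of unknown origin that match these patterns are the ones to remove first.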

The broader lesson for the community is that relying on default settings in high-performance environments is a recipe for disaster. Security researchers emphasize that the rapid adoption of new tools must be accompanied by equally fast updates to security protocols. Developers are beginning to integrate automated security checks directly into their deployment scripts, while cloud providers are starting to offer more robust monitoring for GPU-intensive anomalies. These proactive measures help ensure that high-performance hardware remains under the control of its rightful users. Through a combination of better authentication and more rigorous audits, the threat landscape can shift toward a more resilient and secure future for artificial intelligence infrastructure.
