April 11, 2026 Newsletter
Google’s latest threat intel confirms something defenders have worried about for years: malware is no longer just written with AI — it’s now running with AI in the loop.
In a new report highlighted by Cybersecurity Dive, Google’s Threat Intelligence team describes five malware families that use large language models (LLMs) to rewrite themselves, generate new capabilities on demand, and hide from traditional defenses.
This isn’t “AI helps a script kiddie write ransomware.” This is malware dynamically querying an LLM during execution to decide what to do next. That’s a different class of problem.
From AI helper to AI operator
Historically, attackers leaned on AI to:
Write more convincing phishing lures
Clean up code, translate languages, or automate mundane tasks
AI-assisted malware existed, but it wasn’t the dominant use case. Google’s new findings show a shift: AI is now embedded into malware as an active decision-making component during the attack itself, not just during development.
Google calls out five emerging malware families: FRUITSHELL, PROMPTFLUX, PROMPTSTEAL, PROMPTLOCK and QUIETVAULT. These tools share a few core traits:
They use LLMs to generate or mutate malicious scripts in real time
They obfuscate or regenerate their own code to bypass static detection
They can create attack functions “just in time,” rather than shipping a fixed payload
Result: detection rules based on known strings, hashes, or static behavior are easier to evade.
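To see why hash-based indicators age out so quickly, consider a minimal sketch (the script strings below are invented for illustration): rewriting even a comment or variable name yields a completely different SHA-256, so a blocklist entry for one variant says nothing about the next.

```python
import hashlib

# Two scripts with identical behavior but different surface text.
# An LLM-assisted rewrite only needs to change comments, names,
# or whitespace to produce a brand-new file hash.
script_v1 = "import os\n# collect host info\nprint(os.uname())\n"
script_v2 = "import os  # enumerate host details\nprint(os.uname())\n"

h1 = hashlib.sha256(script_v1.encode()).hexdigest()
h2 = hashlib.sha256(script_v2.encode()).hexdigest()

# Same intent, different indicator: a hash match on v1 tells a
# defender nothing about v2.
print(h1 == h2)  # False
```

A sample that regenerates itself hourly, as one PROMPTFLUX variant reportedly did, runs through this cycle 24 times a day.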
PROMPTFLUX: Malware that rewrites itself
PROMPTFLUX is one of the clearest examples of what “AI-powered” actually means in practice.
Key behaviors:
Uses Google’s Gemini to rewrite its own source code on a recurring basis (one observed sample regenerated itself every hour).
Drops the regenerated file into the Windows Startup folder so it persists quietly across reboots.
Uses AI-driven obfuscation techniques specifically tuned to dodge security tools.
Google notes several components in PROMPTFLUX are still inactive and its interaction with Gemini is throttled, suggesting it’s under active development and not yet at full capability. They’ve also taken steps to disable the assets and API usage tied to this activity. Importantly, PROMPTFLUX itself isn’t a break-in tool; it needs another vector to land on the endpoint first.
But the pattern matters: attackers are experimenting with malware that can continuously re-camouflage itself with AI’s help.
PROMPTSTEAL: Just-in-time commands from an LLM
PROMPTSTEAL pushes the model-in-the-loop idea further. Instead of shipping a static set of commands, the malware:
Calls an LLM hosted on Hugging Face to generate short Windows commands on demand
Masquerades as an image-generation tool for the victim
Runs reconnaissance and data theft activities in the background by dynamically generating new scripts at runtime
Because the commands are generated “just in time,” defenders can’t rely on finding fixed code snippets or known script bodies on disk. Every run can look slightly different, even if the intent is the same.
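One way defenders can adapt is to match behavioral templates rather than exact script bodies. The sketch below (the normalization rules and sample commands are illustrative, not taken from the report) strips run-specific literals so that generated variants of the same command collapse to a single pattern:

```python
import re

def normalize(cmd: str) -> str:
    """Collapse run-specific literals so generated variants of the
    same command map to one behavioral template."""
    cmd = re.sub(r'"[^"]*"', '"<STR>"', cmd)          # string literals
    cmd = re.sub(r"\b[A-Za-z]:\\\S+", "<PATH>", cmd)  # Windows paths
    cmd = re.sub(r"\b\d+\b", "<NUM>", cmd)            # numeric values
    return cmd.lower().strip()

# Three "different" generated commands, one underlying behavior:
runs = [
    'powershell -c Get-ChildItem C:\\Users\\alice\\Documents',
    'powershell -c Get-ChildItem C:\\Users\\bob\\Desktop',
    'POWERSHELL -c Get-ChildItem C:\\Users\\carol\\Downloads',
]
templates = {normalize(r) for r in runs}
print(len(templates))  # 1: same template despite varying text
```

Real EDR products apply far richer normalization, but the principle is the same: key detections on the shape of the behavior, not the bytes of any one run.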
Google has observed PROMPTSTEAL in use by APT28, a Russia-linked group associated with the GRU, in operations against Ukrainian targets. Those incidents mark the first time Google has seen malware actively querying an LLM in the wild as part of an attack chain.
The broader AI abuse pattern
The report makes it clear these families are early signals, not yet the dominant threat. Most real-world breaches still pivot on familiar tactics: phishing, credential theft, vulnerable edge services, misconfigurations. But AI is now visibly embedded in offensive toolchains, especially among advanced actors.
Examples from Google’s observations:
A China-linked group using Gemini to:
Craft lure content
Design technical infrastructure
Develop exfiltration tooling
They bypassed safety controls by pretending to be participants in a capture-the-flag (CTF) exercise and asking for “help” exploiting lab systems — then repurposed those instructions for real targets.
An Iran-linked group attempting to use Gemini to build custom malware, but in the process revealing details of its command-and-control infrastructure. That operational sloppiness gave Google enough insight to disrupt activity.
This reinforces a key point: AI is a force multiplier for both sides. Attackers can move faster and adapt; defenders get more telemetry and can use the same models to detect anomalies, cluster behavior, and hunt.
Why static defenses will lose this race
Google’s core warning: as AI-enabled malware moves from experiment to standard practice, static detection alone will fail more often.
Implications for security programs:
Signatures and simple YARA rules degrade faster. When code can be regenerated hourly, traditional indicators age out quickly.
Behavioral and anomaly-based detection becomes mandatory. You need tools that notice “this process keeps spawning short-lived scripts and beaconing in odd patterns,” not just “this file hash is bad.”
Model-aware monitoring matters. If endpoints or services can call external AI APIs, you need visibility into:
Which processes are making those calls
What data they’re sending
How often and to which models
Supply-chain and platform risk increases. Malware leaning on public AI platforms (Gemini, Hugging Face-hosted models, etc.) blurs the line between legitimate and malicious traffic. Controlling where and how your environment can talk to those services becomes part of endpoint and network hardening.
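As a rough sketch of that kind of model-aware egress control (the process names and log format are hypothetical; the hostnames are well-known public AI API endpoints), flag any process reaching an AI API host that is not on an explicit allowlist:

```python
# Hostnames of public AI API endpoints worth watching on egress.
AI_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face hosted models
    "api.openai.com",
}

# Processes expected to talk to AI APIs in this (hypothetical) fleet.
ALLOWED_PROCESSES = {"chrome.exe", "code.exe"}

# Simplified egress log: (process name, destination host).
egress_log = [
    ("chrome.exe", "generativelanguage.googleapis.com"),   # expected
    ("svchost_helper.exe", "api-inference.huggingface.co"),# suspicious
    ("update_tool.exe", "example.com"),                    # not an AI host
]

alerts = [
    (proc, host)
    for proc, host in egress_log
    if host in AI_API_HOSTS and proc not in ALLOWED_PROCESSES
]
print(alerts)  # [('svchost_helper.exe', 'api-inference.huggingface.co')]
```

In practice this belongs in a proxy, firewall, or EDR policy rather than a script, but the control is the same: an unfamiliar binary beaconing to a model-hosting API is exactly the signal PROMPTSTEAL-style tradecraft produces.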
Bottom line
The Cybersecurity Dive piece on Google’s findings marks a clear inflection point: AI is now embedded inside malware, not just sitting in the attacker’s toolbox off to the side. FRUITSHELL, PROMPTFLUX, PROMPTSTEAL, PROMPTLOCK and QUIETVAULT are still early-stage, but they show where things are going — toward adaptive, self-obfuscating malware that can generate new capabilities on demand.
Defensive strategies built around known-bad artifacts, static signatures, and once-a-year playbooks will not survive that shift. Behavioral analytics, AI-aware controls, and continuous threat intelligence ingestion move from “nice to have” to required baseline.