In a shocking revelation, Google has identified two new strains of AI-enhanced malware, PromptFlux and PromptSteal, capable of using AI models to dynamically rewrite their code or generate attack commands, making them more adaptable and harder to detect.
Short Summary:
- Google’s Threat Intelligence Group discovered the first known AI-based malware used in cyber attacks.
- PromptFlux rewrites its own code using Google’s Gemini AI to better resist detection.
- PromptSteal, employed by Russian hackers, allows real-time command execution via prompts.
Google has turned heads in the cybersecurity realm with findings reported on Wednesday that uncover **two groundbreaking AI-driven malware variants**, signaling an unsettling evolution in cyber warfare. Dubbed **PromptFlux** and **PromptSteal**, these malware strains leverage the capabilities of AI, suggesting that hackers are not only adopting new technologies but actively integrating AI models to supercharge their cyberattacks.
In its research, Google’s Threat Intelligence Group (GTIG) noted that these malware types can “dynamically generate malicious scripts and obfuscate their code to evade detection,” a finding that hits hard for organizations relying on traditional security measures. The discovery of **PromptFlux** in particular marks a notable shift: it can rewrite its own **VBScript** code on the fly by querying Google’s Gemini AI for enhanced evasion techniques. The implication? Malware that actively adapts to become invisible to existing security frameworks.
“Although the self-modification function is commented out, its presence… indicates the author’s goal of creating a metamorphic script that can evolve over time,” Google noted in its report.
The findings emerged after researchers observed suspicious samples uploaded to **VirusTotal**, a widely used malware scanning service. PromptFlux appears to be under continuous development, suggesting an ambitious effort by threat actors who may be using the uploads to test how well it avoids detection.
Meanwhile, PromptSteal represents a more immediate threat. Identified in cyberattacks against Ukrainian entities and attributed to **APT28**, the notorious Russian military hacking group, this malware lets hackers issue commands in a conversational format, much like querying a chatbot. That interaction model is a step forward for malicious actors seeking to gather information or incite chaos without the need for static commands hardcoded into the malware.
“What we’re concerned about there is that with Gemini, we’re able to add guardrails and safety features… but as hackers download these open-source models, are they able to turn down those guardrails?” stated **Billy Leonard**, tech lead at Google GTIG, during a discussion about PromptSteal’s potential.
Both PromptFlux and PromptSteal appear nascent in their operational capabilities, but their mere existence confirms a fear cybersecurity professionals have held for years: future cyber threats may be grounded not only in pure technical sophistication but also in **generative AI** that enables evolving attack strategies.
A recent analysis shows that the underground market for AI tools has matured significantly, granting even less experienced cybercriminals access to advanced capabilities. Researchers have cited advertisements promoting tools that can craft convincing **phishing** emails, create **deepfakes**, or uncover software vulnerabilities. This evolution suggests we may see a dramatic increase in the frequency and complexity of cyberattacks that use generative AI.
Expanding on PromptFlux’s malicious capabilities, Google elaborates that it operates through a **Thinking Robot** module that interfaces directly with Gemini’s API, further showcasing the malware’s sophisticated design. The malware sends highly specific queries asking Gemini to produce altered VBScript aimed at circumventing traditional antivirus signatures. These adaptive features could make it a persistent threat, capable of modifying itself regularly to maintain a foothold in compromised environments.
“While there are still many hurdles to overcome, we expect these AI-driven threats to further evolve as threat actors become more comfortable with integrating AI into their malicious strategies,” remarked **Steve Miller**, technology lead at Google.
Other notable entries in this emerging landscape include **FruitShell**, a PowerShell-based reverse shell that carries hardcoded prompts aimed at evading AI-powered security measures. The credential-stealing malware **QuietVault** uses JavaScript to scrape sensitive tokens from **GitHub** and **NPM**, while an experimental ransomware known as **PromptLock** tests the boundaries of AI’s potential in real-world cyber threats.
This troubling trend doesn’t stop at malware; it extends to **state-sponsored actors** as well. Google has documented how groups from countries such as **Iran**, **North Korea**, and **China** are following the same path by experimenting with AI models in their operations. For instance, some Chinese actors have used Gemini to craft technically sophisticated propaganda material and to undermine enterprise defenses through social engineering.
In light of these developments, experts from GTIG have voiced concerns that AI could become the new norm in cybercrime. Its accessibility and potential rewards make adopting AI increasingly alluring. As these technologies continue to penetrate various sectors, including information security, questions arise over the speed of patching and the adequacy of present defenses.
As we traverse the digital age, this insight into AI-based malware serves as a compelling reminder to bolster our defenses against evolving attacks. Solutions like **[Autoblogging.ai](https://autoblogging.ai)** can enhance how we communicate about security, creating awareness of these emerging threats in an SEO-optimized manner.
The cybersecurity landscape is undeniably on the verge of transformation as AI integration into malicious activities becomes more prevalent. With tools like **[Autoblogging.ai’s AI Article Writer](https://autoblogging.ai)**, we can better shape our digital narratives, defining clear policies and practices for engaging with new realities in an age where cybercriminals embrace the potential of AI just as readily.
Ultimately, as we continue to explore the powerful capabilities that AI provides, whether for positive application or malicious intent, aligning our strategies toward both leveraging AI for good and staying abreast of the evolving terrain of cyber threats ensures that vigilance remains our best defense.
Do you need SEO Optimized AI Articles?
Autoblogging.ai is built by SEOs, for SEOs!
Get 30 article credits!

