Menace actors are trying to leverage a newly launched synthetic intelligence (AI) offensive safety instrument referred to as HexStrike AI to take advantage of not too long ago disclosed safety flaws.
HexStrike AI, in response to its web site, is pitched as an AI‑pushed safety platform to automate reconnaissance and vulnerability discovery with an intention to speed up licensed crimson teaming operations, bug bounty looking, and seize the flag (CTF) challenges.
Per info shared on its GitHub repository, the open-source platform integrates with over 150 safety instruments to facilitate community reconnaissance, internet software safety testing, reverse engineering, and cloud safety. It additionally helps dozens of specialised AI brokers which are fine-tuned for vulnerability intelligence, exploit growth, assault chain discovery, and error dealing with.
However in response to a report from Verify Level, menace actors try their arms on the instrument to achieve an adversarial benefit, making an attempt to weaponize the instrument to take advantage of not too long ago disclosed safety vulnerabilities.
“This marks a pivotal second: a instrument designed to strengthen defenses has been claimed to be quickly repurposed into an engine for exploitation, crystallizing earlier ideas right into a broadly accessible platform driving real-world assaults,” the cybersecurity firm mentioned.
Discussions on darknet cybercrime boards present that menace actors declare to have efficiently exploited the three safety flaws that Citrix disclosed final week utilizing HexStrike AI, and, in some instances, even flag seemingly weak NetScaler cases which are then supplied to different criminals on the market.
Verify Level mentioned the malicious use of such instruments has main implications for cybersecurity, not solely shrinking the window between public disclosure and mass exploitation, but additionally serving to parallelize the automation of exploitation efforts.
What’s extra, it cuts down the human effort and permits for routinely retrying failed exploitation makes an attempt till they grow to be profitable, which the cybersecurity firm mentioned will increase the “general exploitation yield.”
“The quick precedence is evident: patch and harden affected programs,” it added. “Hexstrike AI represents a broader paradigm shift, the place AI orchestration will more and more be used to weaponize vulnerabilities rapidly and at scale.”
The disclosure comes as two researchers from Alias Robotics and Oracle Company mentioned in a newly revealed examine that AI-powered cybersecurity brokers like PentestGPT carry heightened immediate injection dangers, successfully turning safety instruments into cyber weapons by way of hidden directions.
“The hunter turns into the hunted, the safety instrument turns into an assault vector, and what began as a penetration take a look at ends with the attacker gaining shell entry to the tester’s infrastructure,” researchers Víctor Mayoral-Vilches and Per Mannermaa Rynning mentioned.
“Present LLM-based safety brokers are essentially unsafe for deployment in adversarial environments with out complete defensive measures.”