What Actually Happened
Google's Threat Intelligence Group, the team within Google that monitors and responds to advanced cyber threats, detected that a group of hackers had used an AI model to analyse software systems, identify a previously unknown vulnerability, and begin planning a mass exploitation operation targeting that flaw. Google describes this as the first confirmed case of AI being used by threat actors to discover and weaponise a zero-day vulnerability.
Google's proactive counter-discovery (finding the attack before it launched) appears to have prevented what could have been a large-scale exploitation event. The company stated that it does not believe its own Gemini model was used in the attack, though it did not identify which AI model the attackers used.
A zero-day vulnerability is a software flaw that the developers and vendors are not yet aware of, meaning no patch exists and no defence has been built. Discovering these vulnerabilities traditionally requires significant human expertise and time. AI dramatically accelerates this process, potentially allowing attackers to find and exploit flaws faster than defenders can detect and patch them.
How the AI-Powered Attack Process Worked
Why This Changes the Cybersecurity Landscape
The Anthropic Connection: The Mythos Delay
This incident is closely related to a decision Anthropic made earlier in 2026: delaying the public rollout of its advanced Mythos model specifically because the company was concerned that bad actors could use it to exploit software vulnerabilities before they could be patched. At the time, some observers questioned whether that concern was theoretical. Google's disclosure confirms it was not theoretical at all.
The timing of Anthropic's delay and Google's disclosure paints a consistent picture: major AI companies are genuinely aware that their most capable models can be used as tools for cyberattack planning, and some are making deliberate decisions about deployment timelines based on that risk assessment.
In April 2026, Anthropic delayed its Mythos model rollout over concerns about vulnerability exploitation. In May 2026, Google confirmed that hackers had actually used AI to find a zero-day and plan a mass attack. Taken together, these two events show that the AI cybersecurity threat is not hypothetical: it is active, and both AI companies and security teams are responding to it in real time.
What This Means for Organisations
For any organisation running software (which is every organisation), this event carries a clear message: the window between a vulnerability existing and being exploited is shrinking. AI-accelerated vulnerability discovery means that software flaws will be found and weaponised faster than before, and the traditional patch cycle (discover, develop patch, test, deploy) may simply not be fast enough in this environment. Several defensive priorities follow:
- Patch management becomes more urgent: known vulnerabilities must be patched faster than before
- AI-powered defence tools: organisations need AI on the defensive side to match the speed of AI-powered attacks
- Zero-trust architecture: assuming breach and limiting blast radius becomes more important as attack speed increases
- Threat intelligence investment: teams like Google's Threat Intelligence Group that proactively hunt for attacks become strategically critical
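The first point, that patching urgency rises as the exploitation window shrinks, can be made concrete with a toy prioritisation score. This is a purely hypothetical sketch, not any vendor's or standard's algorithm: it weights a vulnerability's severity by how long the flaw has been public, on the assumption that AI-accelerated exploitation makes age matter more than it used to.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str        # identifier (placeholder values below)
    cvss: float        # base severity score, 0.0-10.0
    days_public: int   # days since public disclosure

def patch_priority(v: Vulnerability) -> float:
    """Illustrative urgency score: severity weighted by how long the
    flaw has been public. The age factor saturates after about four
    weeks, so long-known unpatched flaws dominate the queue."""
    age_factor = min(v.days_public / 7, 4.0)
    return v.cvss * (1.0 + age_factor)

backlog = [
    Vulnerability("CVE-0000-0001", cvss=9.8, days_public=2),
    Vulnerability("CVE-0000-0002", cvss=6.5, days_public=30),
    Vulnerability("CVE-0000-0003", cvss=7.2, days_public=1),
]
for v in sorted(backlog, key=patch_priority, reverse=True):
    print(v.cve_id, round(patch_priority(v), 1))
```

Note the design choice: with age weighted this heavily, a month-old medium-severity flaw (score 32.5) outranks a two-day-old critical one (score 12.6), reflecting the idea that exposure time, not just severity, drives risk in a faster threat environment.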
This is the event that cybersecurity professionals have been warning about for years, and it arrived faster than most expected. What concerns me most is not the specific attack Google stopped, but what it signals about the trajectory. If attackers are already using AI to find zero-days and plan mass exploitation operations in 2026, the capabilities will only improve from here. The defenders have AI tools too, but the question is whether the defensive use of AI can keep pace with offensive use. Google stopping this particular attack is genuinely good news. What we don't know is how many similar operations are underway right now that haven't been detected yet.
Frequently Asked Questions
What is a zero-day vulnerability?
A zero-day vulnerability is a software flaw that is unknown to the software's developers and vendors, meaning there are zero days of awareness and therefore zero days of patches or defences in place. Once discovered, a zero-day can be exploited immediately by attackers because no protection exists. They are among the most valuable and dangerous types of security vulnerabilities.
Which AI model did the hackers use?
Google did not identify which AI model the attackers used in its public disclosure; it confirmed only that it does not believe its own Gemini model was involved.
How did Google detect the attack before it happened?
Google's Threat Intelligence Group uses proactive counter-discovery methods: actively monitoring for signs of planned attacks, unusual patterns in how systems are being probed, and intelligence gathered from the broader threat landscape. The specific detection methods Google used have not been publicly disclosed, as revealing those methods could help attackers avoid detection in future operations.
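As an illustration only (again, Google has not disclosed its methods), "unusual patterns in how systems are being probed" can be caricatured as a simple rate check: a source that touches far more distinct endpoints than its peers is worth a closer look. A minimal sketch, using hypothetical access-log data:

```python
# Hypothetical sketch of probe-pattern flagging. Real threat-hunting
# pipelines are vastly richer; this only shows the core idea of
# comparing each source's breadth of probing to a population baseline.

def flag_probers(events, threshold=2.0):
    """events: (source_ip, endpoint) pairs from access logs.
    Returns the set of IPs that touched more than `threshold` times
    the mean number of distinct endpoints."""
    touched = {}
    for ip, endpoint in events:
        touched.setdefault(ip, set()).add(endpoint)
    counts = {ip: len(eps) for ip, eps in touched.items()}
    mean = sum(counts.values()) / len(counts)
    return {ip for ip, n in counts.items() if n > threshold * mean}

events = [("10.0.0.1", "/login"), ("10.0.0.1", "/health"),
          ("10.0.0.2", "/login"),
          ("10.0.0.9", "/admin"), ("10.0.0.9", "/debug"),
          ("10.0.0.9", "/backup"), ("10.0.0.9", "/.env"),
          ("10.0.0.9", "/cgi-bin"), ("10.0.0.9", "/old"),
          ("10.0.0.9", "/test"), ("10.0.0.9", "/api/v0")]
print(flag_probers(events))  # → {'10.0.0.9'}
```

The toy data makes the pattern obvious: two sources hit one or two ordinary endpoints, while one sweeps across admin and configuration paths. Production systems work from far noisier data and many more signals, but the asymmetry being hunted is the same.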