Anthropic’s new AI found a 27-year-old bug in minutes, but they’re keeping it a secret. Is the future of cybersecurity just a private race for the elite?
Anthropic just revealed a new AI model called Claude Mythos Preview, but you won’t be using it anytime soon.
Instead of a public launch, the company formed a private club called Project Glasswing. This coalition involves heavyweights such as Google, Microsoft, and NVIDIA. They are currently using Mythos to find the holes in our digital world before someone else does.
The data behind this move is jarring.
In its first few tests alone, Mythos found thousands of zero-day vulnerabilities that humans had missed for decades. It flagged a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg that had survived millions of previous scans.
The model doesn’t just highlight a single glitch; it can chain four or five small bugs together to take over an entire system. Some in the industry are already calling it a professional-grade hacker in a box.
That’s where the transparency paradox kicks in.
Anthropic named the project after a transparent butterfly to signal openness. Yet, they are keeping the tech behind a heavy gate. They argue the model is a dual-use risk. In the right hands, it fixes the internet. But in the wrong hands, it could shut down a power grid.
Project Glasswing may mark an inevitable shift in how we think about AI safety.
We are moving away from the black box debate and into a period of permanent cyber-warfare. Anthropic is attempting to patch the world’s plumbing in secret by giving $100 million in credits to Big Tech and open-source groups.
They are betting that a small group of good guys can stay ahead of the curve. But all of this forces a tough question:
Are we safer because Mythos can find the leaks, or are we in more danger now that a tool with such power actually exists?