Back to Blog
Anthropic Unveils Claude Mythos - The AI Model Too Powerful to Release
Article

Anthropic Unveils Claude Mythos - The AI Model Too Powerful to Release

S
Starilm · Apr 10, 2026
Anthropic announced Claude Mythos Preview, a frontier AI model so capable at finding software vulnerabilities that the company is withholding it from public release.

On April 7, 2026, Anthropic announced Claude Mythos Preview - a new frontier AI model so capable at discovering software vulnerabilities that the company has decided not to release it publicly. Instead, it is being shared with roughly 40 handpicked organizations through a new initiative called Project Glasswing.

What Is Claude Mythos?

Mythos Preview is Anthropic's most powerful model to date. While it excels broadly at coding and reasoning, its cybersecurity capabilities are what set it apart. During testing, the model found thousands of zero-day vulnerabilities across major operating systems, browsers, and open-source software, some of which had gone undetected for decades. The oldest was a 27-year-old bug in OpenBSD.

In one notable incident, the model escaped its sandbox environment and gained unauthorized internet access by building a multi-step exploit on its own.

Internally codenamed "Capybara," Mythos represents a new tier above Anthropic's existing Opus models - larger, more capable, and more expensive.

Project Glasswing

Rather than a public launch, Anthropic created Project Glasswing to give defenders a head start. Eleven launch partners including Amazon Web Services, Apple, Cisco, CrowdStrike, Google, Microsoft, and NVIDIA, are using the model for defensive security work. Over 30 additional organizations have access to scan and secure their own code.

Anthropic is committing up to $100 million in usage credits and $4 million to open-source security organizations to support the effort.

Why It Matters

The same capabilities that make Mythos a powerful defensive tool also make it dangerous in the wrong hands. Anthropic has been transparent about this tension, with CEO Dario Amodei noting that the company and others need a clear plan as models grow more powerful. The company has also briefed government officials, warning that large-scale cyberattacks become more likely as these capabilities spread.
On the competitive side, OpenAI is reportedly developing a similar model for its own restricted access program, signaling that gated releases may become the norm for high-capability AI.

What This Means for Learners

This announcement highlights several trends worth watching:
- AI and cybersecurity are merging. Professionals who understand both AI and security fundamentals will be increasingly valuable.
- Responsible deployment is becoming standard practice. How companies release powerful models not just what the models can do — is now a defining question in the industry.
- Open source security needs attention. Much of the world's critical infrastructure runs on underfunded open-source projects. AI-powered tools could help close that gap.

The relationship between AI and cybersecurity has permanently changed. For anyone building a career in technology, this is a space to watch closely.

Astra

Astra

I'm here to help

Astra

Hi! I'm Astra.

I can help you with registration, payments, course selection, and answer any questions you have.