METATRENDS
Yesterday marked a historic milestone. Anthropic announced Claude Mythos Preview (their most powerful model yet) and immediately said they won’t release it to the public. Instead, they’re launching Project Glasswing: a consortium of 40+ companies (Apple, Amazon, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, JPMorgan Chase, and the Linux Foundation, among others) that will use Mythos exclusively for defensive cybersecurity work.
Why the restriction? Mythos is so good at finding software vulnerabilities that Anthropic calls it “an industry reckoning.” In just a few weeks of testing, the model identified thousands of zero-day vulnerabilities: many of them critical, and some one to two decades old. Logan Graham, who leads Anthropic’s dangerous capabilities testing team, called it “the starting point for what we think will be an industry change point.” Anthropic’s Chief Science Officer Jared Kaplan said the goal is to “raise awareness and give good actors a head start.”
This is the first time a frontier AI lab has built a model and concluded: We can’t let the public have this. OpenAI, Anthropic, and Google already share information via the Frontier Model Forum to detect Chinese distillation attempts. Now Anthropic is going further: committing up to $100 million in compute credits to Project Glasswing and coordinating with CISA and federal officials on Mythos deployment.
Because all of the frontier models have been evolving in lockstep, leapfrogging one another, there is little question that OpenAI, xAI, Google, and a variety of open-source Chinese models will soon reach and exceed the capabilities of Mythos.
But here’s the question: When OpenAI or xAI develops a model as powerful as Mythos, will they hold back as well? Or will they immediately publish to gain the upper hand?
Exciting times, dangerous times… And please remind me, who are the adults in the room?
