=== HEADLINE ===
A Discord Has Continuous Access to Anthropic's Mythos

=== STORY_URL ===
https://jaysoncraig.ca/sandbox/faces/mythos-leak

=== TWITTER_THREAD ===
1/ A Discord with 4,200 paying members has continuous unauthorized access to Anthropic's Mythos. Days after release. Through a third-party fine-tuning vendor. As of publication, Anthropic has not been able to shut the door.

2/ Mythos is the preview of Project Glasswing, Anthropic's most capable model. 91% on SWE-Bench Pro, 7 points above Claude Opus 4.7. 2 million tokens of context. Tier 3 in their Responsible Scaling Policy. The preview was meant to test whether the safeguards held.

3/ The members aren't using the official API. They're routed through a third-party fine-tuning vendor that one of Anthropic's pilot enterprise customers, a regulated pharma firm, was funneled through for compliance reasons.

4/ The vendor was supposed to be a privacy buffer. Instead, a developer there reached the Mythos endpoint through an internal proxy. From there the access was offered, sold, and resold. Seats at $200 a month. 4,200 strangers running an unscaffolded model.

5/ Here's the part the rest of the industry can't ignore: the same vendor has pipeline relationships with OpenAI and Google. A single mid-sized vendor leaked Anthropic's most sensitive model in 9 days. None of the other labs has a stronger supply chain.

6/ Anthropic announced a $5M emergency bug bounty for vendor-side findings. The FTC opened a preliminary inquiry. Two pilot customers paused their contracts. Dario Amodei said they're "working aggressively" to contain it. As of posting, the door is still open.

7/ This lands two days after Google announced an investment of up to $40 billion in Anthropic at a $350 billion valuation. Anthropic's safety story is the entire reason those checks got written. The Mythos leak punctures that pitch in a way no ordinary product story could.

8/ Anthropic spent years arguing that slow rollouts and vetted partners were enough. The Mythos leak says the slow rollout wasn't slow enough, and the partners weren't vetted enough. Full breakdown: [INSERT YT URL]

=== LINKEDIN_POST ===
A Discord with 4,200 paying members has continuous unauthorized access to Anthropic's most capable model. The door is still open.

This is the breach the entire AI industry has been quietly afraid of, and it did not run through Anthropic's own systems. It ran through the supply chain around Anthropic.

Earlier this month, Anthropic released Mythos, the preview of Project Glasswing, an internal program the company had publicly described as "too capable for general availability." Mythos hits 91% on SWE-Bench Pro (about 7 points above Claude Opus 4.7), runs a 2-million-token context window, and reaches the Tier 3 capability threshold in Anthropic's Responsible Scaling Policy. The preview was opened to a handful of vetted enterprise customers in pharma, finance, and federal research, behind output filters and a signed, monitored API endpoint.

Nine days later, Fortune reported that a paid Discord with roughly 4,200 members has continuous unauthorized access to Mythos. Seats are $200 a month.

The breach does not run through any compromised Anthropic system. It runs through a third-party fine-tuning vendor that one of Anthropic's pilot enterprise customers had been routed through for compliance reasons. The same vendor has pipeline relationships with OpenAI and Google.

Anthropic responded with a $5M emergency bug bounty targeted at vendor-side security findings. The FTC opened a preliminary inquiry. Two pilot customers paused their engagements. As of publication, Anthropic has not been able to revoke or contain the leak through the vendor channel.

Two days before the Fortune story, Google announced an investment of up to $40B in Anthropic at a $350B valuation, on top of Amazon's $25B add-on the week before. Anthropic's safety story is the entire reason those checks got written. The vendor leak punctures that pitch at exactly the moment the cap table is most dependent on it.

The harder lesson is for every other frontier lab. OpenAI, Google, and Meta all lean on third-party fine-tuning, compliance, and cloud-reseller partners. Anthropic just demonstrated that a single mid-sized vendor can leak the company's most sensitive model to four thousand strangers in nine days.

The slow rollout was not slow enough. The vetted partners were not vetted enough.

Watch the full breakdown: [INSERT YT URL]

Source (Fortune): https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/

=== NEWSLETTER ===
Subject: Anthropic's Mythos just leaked through a vendor

Days ago, Anthropic released Mythos, the preview of Project Glasswing and the most capable model the lab has ever built. Tier 3 in their Responsible Scaling Policy. 91% on SWE-Bench Pro. Two million tokens of context. The preview was opened to a tiny set of vetted enterprise customers behind output filters, a signed API endpoint, and the slowest, most cautious rollout architecture in the industry.

Nine days later, Fortune published the part nobody at Anthropic wanted printed: a Discord server with roughly 4,200 paying members has continuous unauthorized access to Mythos. Seats sell for $200 a month.

The access does not run through any compromised Anthropic system. It runs through a third-party fine-tuning vendor that one of Anthropic's pilot enterprise customers had been routed through for compliance reasons. The same vendor has pipeline relationships with OpenAI and Google.

That is the part that has every safety team in the industry on the phone with their lawyers, because the breach is not a hack of Anthropic. It is a hack of the supply chain around Anthropic. And every lab is built on the same supply chain.

Anthropic announced a $5M emergency bug bounty for vendor-side findings. The FTC opened a preliminary inquiry. As of publication, Anthropic has not been able to revoke the leak through the vendor channel.

Two days earlier, Google had committed up to $40B at a $350B valuation. Anthropic's safety story is what funded that round. The vendor leak punctures it. This is the moment the safety lab realized it has the same supply-chain problem as the chip industry.

Watch: [INSERT YT URL]

- Jane Sterling

=== SHORT_SCRIPT ===
Days ago, Anthropic released its most capable model. Right now, a Discord with four thousand paying members is using it without permission, and Anthropic still has not been able to shut the door.

Mythos is the preview of Project Glasswing. Anthropic publicly called it too capable for general release. Ninety-one percent on SWE-Bench Pro. Two million tokens of context. Tier 3 in their Responsible Scaling Policy. The kind of model the entire safety pitch is built around.

Nine days after rollout, Fortune reported that a paid Discord with roughly four thousand two hundred members has continuous unauthorized access. Two hundred dollars a month per seat. Not through Anthropic's API. Through a third-party fine-tuning vendor that one of the pilot enterprise customers was routed through for compliance reasons. The same vendor has pipeline relationships with OpenAI and Google.

Anthropic announced a five-million-dollar emergency bug bounty. The FTC opened an inquiry. As of recording, the door is still open.

The slow rollout was not slow enough. The vetted partners were not vetted enough. Every other frontier lab is now staring at its own vendor list with new eyes.

Stay sharp.

=== HASHTAGS_TWITTER ===
#Anthropic #Mythos #AISecurity

=== HASHTAGS_LINKEDIN ===
#ArtificialIntelligence #Anthropic #AISafety #Cybersecurity #SupplyChain