=== TAG === Cybersecurity === HEADLINE === A Discord Has Continuous Access to Anthropic's Mythos === META_DESC === Days after Anthropic released its most restricted model, a paid Discord with roughly four thousand members has continuous unauthorized access to Mythos through a third party fine tuning vendor. As of publication, Anthropic has not closed the door. === DATE === April 23–25, 2026 === AUTHOR === Jane Sterling === READ_TIME === 9-minute read === HERO_IMG === img/content.png === SCRIPT_LABEL === Video Script (9 min, clean transcript for captioning) === SCRIPT === Days ago, Anthropic released what it called the most capable model it had ever built. Right now, a Discord channel with four thousand paying members is using it without permission, and Anthropic still has not been able to shut the door. Let me explain what just happened to the safest lab in artificial intelligence. Earlier this month, Anthropic announced the limited rollout of a model code named Mythos. Mythos was the preview release of Project Glasswing, an internal program the company had publicly described as too capable for general availability. The preview was wrapped in restrictions. A handful of vetted enterprise customers in pharma, finance, and federal research. Outputs filtered through a separate safety layer. A signed and monitored API endpoint. Strict rate limits. The whole rollout was the kind of cautious, scaffolded release Anthropic has spent years building its brand on. Nine days after Mythos went live to those customers, Fortune published the part nobody at Anthropic wanted printed. A Discord server, founded last year to trade Claude jailbreaks, has continuous unauthorized access to Mythos. The membership had ballooned to roughly four thousand two hundred users by the time the story broke. The server was charging a flat two hundred dollars a month. The access did not flow through any compromised Anthropic system. 
It flowed through a third party fine tuning vendor that one of the enterprise pilot customers had been routed through for compliance reasons. That is the part of this story that has every safety team in the industry on the phone with their lawyers. Because the breach is not a hack of Anthropic. It is a hack of the SUPPLY CHAIN around Anthropic. And every lab is built on the same supply chain. Now let me back up. Why does this model matter. Why are people paying for a Discord seat to use it. Mythos is not a regular model release. According to leaked benchmark numbers that started circulating before the access did, Mythos hits ninety one percent on SWE-Bench Pro. That is roughly seven points above Claude Opus 4.7, which Anthropic released in February. It scores in the high nineties on the biology and chemistry knowledge tests Anthropic uses internally. It has roughly two million tokens of context. And in the safety evaluations the company published in its system card, Mythos reached the Tier 3 capability threshold, the level at which the company had publicly committed never to deploy a model without unprecedented safeguards. The whole point of the Mythos preview was to test whether those safeguards held under real customer load. What the Discord access does is rip those safeguards off. The unauthorized clients are running Mythos through the vendor connection without the output filters Anthropic layered on top. They are extracting capabilities at the raw model level. They are sharing benchmarks. They are sharing exploit chains. They are circulating screenshots that show the model writing code, planning attacks, and producing biology content that the production Anthropic API would have refused before generating a single token. Fortune's reporting also made one detail very explicit. This was not a one time download. The members of the server have CONTINUOUS access. Meaning at the moment the story published, the door was still open. 
Anthropic had not been able to revoke, rotate, or contain the leak through the vendor channel. Nine days into the preview, with hundreds of millions of dollars of safety engineering behind it, the most capable model in the company's history was running for paying strangers on a Discord. So how did this happen. The pattern, as best it can be reconstructed, is the kind of mistake that will eventually be taught in computer security textbooks. One of Anthropic's pilot enterprise customers, a regulated pharmaceutical firm, was contractually unable to send raw research data straight to a third party model API. So Anthropic and the customer went through a compliance approved fine tuning vendor. The vendor was supposed to act as a privacy buffer, sitting between Anthropic's API and the customer's data, isolating sensitive prompts. The vendor's security review had been signed off months earlier as part of standard procurement. What Anthropic did not seem to fully account for is that the vendor was running multiple customer pipelines on the same infrastructure. A separate developer at the vendor, working on what looked like an unrelated tooling project, was eventually able to reach the Mythos endpoint through an internal proxy. From there the access was offered, sold, and resold. Fortune named the vendor only obliquely. The lawsuits have not landed yet, but the people I trust on this story believe they will land within the week. The vendor is not a small player either. It is a Bay Area compliance startup that had become the default privacy intermediary for several pharma and life sciences customers using frontier AI. Anthropic was not its only frontier model partner. OpenAI and Google had pipeline relationships with the same vendor. That detail is going to matter a lot in the litigation that is coming, because every lab using that vendor is now a possible second leak. The reactions started arriving within hours of the Fortune piece. 
Dario Amodei issued a short written statement on X saying the company was, quote, working aggressively with the vendor and federal partners to contain the access, end quote. Anthropic announced an emergency five million dollar bug bounty aimed specifically at vendor side security findings. The Federal Trade Commission opened a preliminary inquiry. Two of the pilot customers paused their engagements pending review. Anthropic security engineers spent the weekend rotating credentials at every external pipeline they could identify. Whether that was enough to actually shut the leaking endpoint down is, as of this recording, still an open question that the company has not publicly answered. And inside the AI policy community, the responses were sharper. Researchers who had publicly criticized Anthropic for what they called safety theater pointed out that Anthropic's own threat model documented this exact category of failure two years ago. Other safety advocates went the other direction and argued that no other lab, anywhere, would have detected this leak in nine days. Both arguments contain something true. Anthropic was the first lab to publicly catch the breach. It was also the lab that allowed the breach. Now let me sit with the bigger picture for a minute. Because this story is bigger than Anthropic. Two days before the Fortune piece dropped, Google announced an investment of up to forty billion dollars in Anthropic, anchoring the company at a three hundred and fifty billion dollar valuation. That was on top of Amazon's twenty five billion dollar add on the week before. Anthropic's safety story is the entire reason those checks got written. The pitch to enterprise buyers, to regulators, to the federal government, has always been the same. Anthropic is the lab where a model this capable gets released RESPONSIBLY. The vendor leak punctures that pitch in a way that no normal product story would. It also lands in the middle of an active policy cycle. 
The White House's NSTM-4 memo, released just two days earlier, had warned about industrial scale extraction of American AI by Chinese labs. The Mythos vendor leak is the domestic version of the same problem. The question is no longer whether frontier models can be exfiltrated. It is who is doing it, through which pipe, and how quickly the company can close the pipe before the model gets into the wrong hands. In this case, the answer was a Discord and a fine tuning vendor in California, and the closing has not actually happened yet. The other lesson here is for every other frontier lab. OpenAI uses third party vendors. Google uses third party vendors. Meta uses third party vendors. Every enterprise rollout of a frontier model touches a fine tuning provider, a data residency provider, a compliance gateway, a cloud reseller. Each of those is a potential Mythos style leak. Anthropic just demonstrated that a single mid sized vendor can leak the company's most sensitive model to four thousand strangers in nine days. None of the other labs has a stronger supply chain, and most have weaker safety scaffolding around it. Even Sam Altman, who has spent the last year openly mocking Anthropic's slow rollout philosophy, kept his commentary unusually careful. The risk that this story is a preview of his own next breach is not subtle. So where does this leave us. Anthropic spent years arguing that the way to deploy frontier AI safely was tight scaffolding, vetted partners, and slow rollouts. The Mythos leak says the slow rollout was not slow enough, and the vetted partners were not vetted enough. The next ninety days are going to determine whether Anthropic's brand absorbs this or whether the safety story that funded its rise quietly cracks. Either way, the rest of the industry is watching their own vendor lists with new eyes. This is the moment the safety lab realized it has the same supply chain problem as the chip industry. Stay sharp. Jane Sterling, Sterling Intelligence. 
=== SCRIPT_HTML === === ANNOTATED_LABEL === Annotated Script (with b-roll & cut cues) === ANNOTATED_HTML === [TALKING HEAD — hook]

Days ago, Anthropic released what it called the most capable model it had ever built. Right now, a Discord channel with four thousand paying members is using it without permission, and Anthropic still has not been able to shut the door.

[STAT CARD: "4,200 Discord members with continuous access"] [CUT] [TALKING HEAD — transition]

Let me explain what just happened to the safest lab in artificial intelligence.

[VOICEOVER — scene 1] [B-ROLL: company-logo:anthropic] [B-ROLL: ai-abstract]

Earlier this month, Anthropic announced the limited rollout of a model code named Mythos. Mythos was the preview release of Project Glasswing, an internal program the company had publicly described as too capable for general availability. The preview was wrapped in restrictions. A handful of vetted enterprise customers in pharma, finance, and federal research. Outputs filtered through a separate safety layer. A signed and monitored API endpoint. Strict rate limits. The whole rollout was the kind of cautious, scaffolded release Anthropic has spent years building its brand on.

[B-ROLL: data-center] [B-ROLL: screen-capture:fortune-headline]

Nine days after Mythos went live to those customers, Fortune published the part nobody at Anthropic wanted printed. A Discord server, founded last year to trade Claude jailbreaks, has continuous unauthorized access to Mythos. The membership had ballooned to roughly four thousand two hundred users by the time the story broke. The server was charging a flat two hundred dollars a month. The access did not flow through any compromised Anthropic system. It flowed through a third party fine tuning vendor that one of the enterprise pilot customers had been routed through for compliance reasons.

[STAT CARD: "9 days from preview release to public leak"] [STAT CARD: "$200 per month Discord seat price"] [/VOICEOVER] [TALKING HEAD — transition]

That is the part of this story that has every safety team in the industry on the phone with their lawyers. Because the breach is not a hack of Anthropic. It is a hack of the SUPPLY CHAIN around Anthropic. And every lab is built on the same supply chain.

[CUT] [TALKING HEAD — transition]

Now let me back up. Why does this model matter. Why are people paying for a Discord seat to use it.

[VOICEOVER — scene 2] [B-ROLL: code-terminal] [B-ROLL: finance-charts]

Mythos is not a regular model release. According to leaked benchmark numbers that started circulating before the access did, Mythos hits ninety one percent on SWE-Bench Pro. That is roughly seven points above Claude Opus 4.7, which Anthropic released in February. It scores in the high nineties on the biology and chemistry knowledge tests Anthropic uses internally. It has roughly two million tokens of context. And in the safety evaluations the company published in its system card, Mythos reached the Tier 3 capability threshold, the level at which the company had publicly committed never to deploy a model without unprecedented safeguards. The whole point of the Mythos preview was to test whether those safeguards held under real customer load.

[STAT CARD: "91% on SWE-Bench Pro"] [STAT CARD: "+7 points vs Claude Opus 4.7"] [STAT CARD: "2 million tokens of context"] [STAT CARD: "Tier 3 capability threshold reached"] [B-ROLL: ai-abstract]

What the Discord access does is rip those safeguards off. The unauthorized clients are running Mythos through the vendor connection without the output filters Anthropic layered on top. They are extracting capabilities at the raw model level. They are sharing benchmarks. They are sharing exploit chains. They are circulating screenshots that show the model writing code, planning attacks, and producing biology content that the production Anthropic API would have refused before generating a single token.

[B-ROLL: stills:discord-screenshot]

Fortune's reporting also made one detail very explicit. This was not a one time download. The members of the server have CONTINUOUS access. Meaning at the moment the story published, the door was still open. Anthropic had not been able to revoke, rotate, or contain the leak through the vendor channel. Nine days into the preview, with hundreds of millions of dollars of safety engineering behind it, the most capable model in the company's history was running for paying strangers on a Discord.

[/VOICEOVER] [CUT] [TALKING HEAD — transition]

So how did this happen.

[VOICEOVER — scene 2 cont.] [B-ROLL: courtroom]

The pattern, as best it can be reconstructed, is the kind of mistake that will eventually be taught in computer security textbooks. One of Anthropic's pilot enterprise customers, a regulated pharmaceutical firm, was contractually unable to send raw research data straight to a third party model API. So Anthropic and the customer went through a compliance approved fine tuning vendor. The vendor was supposed to act as a privacy buffer, sitting between Anthropic's API and the customer's data, isolating sensitive prompts. The vendor's security review had been signed off months earlier as part of standard procurement.

[B-ROLL: code-terminal]

What Anthropic did not seem to fully account for is that the vendor was running multiple customer pipelines on the same infrastructure. A separate developer at the vendor, working on what looked like an unrelated tooling project, was eventually able to reach the Mythos endpoint through an internal proxy. From there the access was offered, sold, and resold. Fortune named the vendor only obliquely. The lawsuits have not landed yet, but the people I trust on this story believe they will land within the week.

[B-ROLL: company-logo:openai] [B-ROLL: company-logo:google]

The vendor is not a small player either. It is a Bay Area compliance startup that had become the default privacy intermediary for several pharma and life sciences customers using frontier AI. Anthropic was not its only frontier model partner. OpenAI and Google had pipeline relationships with the same vendor. That detail is going to matter a lot in the litigation that is coming, because every lab using that vendor is now a possible second leak.

[B-ROLL: stills:dario-amodei-portrait] [B-ROLL: news-studio]

The reactions started arriving within hours of the Fortune piece. Dario Amodei issued a short written statement on X saying the company was, quote, working aggressively with the vendor and federal partners to contain the access, end quote. Anthropic announced an emergency five million dollar bug bounty aimed specifically at vendor side security findings. The Federal Trade Commission opened a preliminary inquiry. Two of the pilot customers paused their engagements pending review. Anthropic security engineers spent the weekend rotating credentials at every external pipeline they could identify. Whether that was enough to actually shut the leaking endpoint down is, as of this recording, still an open question that the company has not publicly answered.

[STAT CARD: "$5 million emergency bug bounty"] [B-ROLL: ai-abstract]

And inside the AI policy community, the responses were sharper. Researchers who had publicly criticized Anthropic for what they called safety theater pointed out that Anthropic's own threat model documented this exact category of failure two years ago. Other safety advocates went the other direction and argued that no other lab, anywhere, would have detected this leak in nine days. Both arguments contain something true. Anthropic was the first lab to publicly catch the breach. It was also the lab that allowed the breach.

[/VOICEOVER] [CUT] [TALKING HEAD — transition]

Now let me sit with the bigger picture for a minute. Because this story is bigger than Anthropic.

[VOICEOVER — scene 3] [B-ROLL: company-logo:google] [B-ROLL: finance-charts]

Two days before the Fortune piece dropped, Google announced an investment of up to forty billion dollars in Anthropic, anchoring the company at a three hundred and fifty billion dollar valuation. That was on top of Amazon's twenty five billion dollar add on the week before. Anthropic's safety story is the entire reason those checks got written. The pitch to enterprise buyers, to regulators, to the federal government, has always been the same. Anthropic is the lab where a model this capable gets released RESPONSIBLY. The vendor leak punctures that pitch in a way that no normal product story would.

[STAT CARD: "$40 billion Google investment"] [STAT CARD: "$350 billion Anthropic valuation"] [STAT CARD: "$25 billion Amazon add-on"] [B-ROLL: stills:white-house-exterior]

It also lands in the middle of an active policy cycle. The White House's NSTM-4 memo, released just two days earlier, had warned about industrial scale extraction of American AI by Chinese labs. The Mythos vendor leak is the domestic version of the same problem. The question is no longer whether frontier models can be exfiltrated. It is who is doing it, through which pipe, and how quickly the company can close the pipe before the model gets into the wrong hands. In this case, the answer was a Discord and a fine tuning vendor in California, and the closing has not actually happened yet.

[B-ROLL: company-logo:openai] [B-ROLL: company-logo:meta] [B-ROLL: data-center]

The other lesson here is for every other frontier lab. OpenAI uses third party vendors. Google uses third party vendors. Meta uses third party vendors. Every enterprise rollout of a frontier model touches a fine tuning provider, a data residency provider, a compliance gateway, a cloud reseller. Each of those is a potential Mythos style leak. Anthropic just demonstrated that a single mid sized vendor can leak the company's most sensitive model to four thousand strangers in nine days. None of the other labs has a stronger supply chain, and most have weaker safety scaffolding around it. Even Sam Altman, who has spent the last year openly mocking Anthropic's slow rollout philosophy, kept his commentary unusually careful. The risk that this story is a preview of his own next breach is not subtle.

[/VOICEOVER] [CUT] [TALKING HEAD — sign-off]

So where does this leave us.

Anthropic spent years arguing that the way to deploy frontier AI safely was tight scaffolding, vetted partners, and slow rollouts. The Mythos leak says the slow rollout was not slow enough, and the vetted partners were not vetted enough. The next ninety days are going to determine whether Anthropic's brand absorbs this or whether the safety story that funded its rise quietly cracks. Either way, the rest of the industry is watching their own vendor lists with new eyes.

This is the moment the safety lab realized it has the same supply chain problem as the chip industry.

Stay sharp.

Jane Sterling, Sterling Intelligence.

=== ARTICLE_HTML ===

Anthropic spent years building a brand around the idea that frontier AI could be released slowly, carefully, and with vetted partners. The Mythos leak just told the world that the weak point in that strategy was never the lab. It was the vendor list around the lab.

On April 23, 2026, Fortune reported that a paid Discord server with roughly four thousand two hundred members has continuous unauthorized access to Anthropic's most capable model, code-named Mythos. The access did not flow through any compromised Anthropic system. It flowed through a third-party fine-tuning vendor that one of Anthropic's enterprise pilot customers had been routed through for compliance reasons.

This is what the breach looks like, what Mythos actually is, and why every other frontier lab is now staring at its own vendor list.


What Mythos Is, And Why It Matters

Mythos is the preview release of Project Glasswing — an internal Anthropic program the company has publicly described as "too capable for general availability." Leaked benchmarks place Mythos at 91% on SWE-Bench Pro (about 7 points above Claude Opus 4.7), in the high 90s on Anthropic's internal biology and chemistry tests, and at the Tier 3 capability threshold of the company's Responsible Scaling Policy. The preview was opened to a handful of vetted enterprise customers in pharma, finance, and federal research, behind output filters and a signed, monitored API endpoint.


The Discord

The Discord server at the center of the story was founded last year to trade Claude jailbreaks. By the time Fortune published, membership had grown to roughly 4,200 users, with seats sold for $200 per month. The members are not running Mythos on the official Anthropic API. They are routing through a third-party fine-tuning vendor's pipeline, which means they bypass the output filters Anthropic layered on top of the model.

The access is continuous. As of publication, Anthropic has not been able to revoke, rotate, or contain the leak through the vendor channel. The model is, in effect, running unscaffolded for paying strangers.


How The Vendor Pipeline Failed

One of Anthropic's pilot enterprise customers, a regulated pharmaceutical firm, was contractually unable to send raw research data straight to a third-party model API. Anthropic and the customer routed through a compliance-approved fine-tuning vendor that was supposed to act as a privacy buffer between the API and the customer's data.

What Anthropic did not seem to fully account for is that the vendor was running multiple customer pipelines on the same infrastructure. A developer at the vendor, working on what looked like an unrelated tooling project, was eventually able to reach the Mythos endpoint through an internal proxy. From there the access was offered, sold, and resold.
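For readers who think in code, the reported failure pattern can be sketched in a few lines. To be clear, everything below is hypothetical: the tenant names, routes, and credential are invented for illustration, and nothing here describes the vendor's actual stack. The sketch shows the generic mistake of a multi-tenant proxy that authenticates who is calling but never checks which upstream route that caller is entitled to, while forwarding every request with one shared service credential.

```python
# Hypothetical sketch of the reported failure mode. All names, routes,
# and credentials are invented; this is not the vendor's actual code.

SHARED_UPSTREAM_KEY = "svc-shared-key"  # one service credential for every tenant

UPSTREAM_ROUTES = {
    "restricted-preview": "https://api.lab.example/v1/restricted-model",
    "general-model": "https://api.lab.example/v1/general-model",
}

def forward_unscoped(caller: str, route: str) -> dict:
    """Broken: any authenticated internal caller can select any route."""
    return {"url": UPSTREAM_ROUTES[route], "authorization": SHARED_UPSTREAM_KEY}

# The fix: authorize the (caller, route) pair before the upstream hop,
# ideally backed by per-tenant upstream credentials so a single leaked
# key cannot open every route.
ROUTE_GRANTS = {
    "pharma-pilot": {"restricted-preview"},
    "tooling-team": {"general-model"},
}

def forward_scoped(caller: str, route: str) -> dict:
    """Fixed: per-tenant route authorization before any upstream call."""
    if route not in ROUTE_GRANTS.get(caller, set()):
        raise PermissionError(f"{caller!r} is not granted route {route!r}")
    return {"url": UPSTREAM_ROUTES[route], "authorization": SHARED_UPSTREAM_KEY}
```

In this toy version, `forward_unscoped("tooling-team", "restricted-preview")` cheerfully hands back the restricted endpoint plus the shared key, while `forward_scoped` refuses. It also suggests one plausible reason credential rotation alone can fail to close such a hole: if the leak path runs through the proxy's own shared upstream credential, rotating customer-facing keys does not touch it.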


Why The Vendor Identity Matters

The vendor is a Bay Area compliance startup that had become the default privacy intermediary for several pharma and life-sciences customers using frontier AI. Anthropic was not its only frontier-model partner — OpenAI and Google had pipeline relationships with the same vendor. That fact is now a problem for every lab in that vendor's pipeline, because the same shared-infrastructure failure mode applies.


Anthropic's Response

Within hours of the Fortune piece, Dario Amodei issued a short written statement on X saying the company was "working aggressively" with the vendor and federal partners to contain the access. Anthropic announced an emergency $5 million bug bounty targeted specifically at vendor-side security findings. The Federal Trade Commission opened a preliminary inquiry. Two pilot customers paused their engagements pending review.

Anthropic security engineers spent the weekend rotating credentials at every external pipeline they could identify. Whether that has actually closed the leaking endpoint, as of publication, is an open question.


The Money Context

Two days before the Fortune story, Google announced an investment of up to $40 billion in Anthropic, anchoring the company at a $350 billion valuation. That came on top of Amazon's $25 billion add-on the week before. Anthropic's safety story is the central reason those checks got written, and the vendor leak punctures that pitch at a moment when the cap table is more dependent on it than ever.


The Bigger Picture

The Mythos leak is being read inside other frontier labs as a stress test of the entire enterprise-AI delivery model. OpenAI, Google, and Meta all rely on third-party fine-tuning, data-residency, compliance, and cloud-reseller partners to deliver frontier models to regulated enterprise customers. Every one of those vendors is a potential leak surface.

The White House's NSTM-4 memo, released just two days earlier, had framed industrial-scale model extraction as a national security threat — directed at China. The Mythos leak is the domestic version of the same problem: a frontier American model running unscaffolded in unsanctioned hands, through a domestic supply chain. That convergence will shape AI policy through the upcoming Trump–Xi summit and likely beyond.


What This Says About Safety Architecture

Anthropic spent years arguing that frontier AI could be deployed safely through tight scaffolding, vetted partners, and slow rollouts. The Mythos leak says the slow rollout was not slow enough, and the vetted partners were not vetted enough. The next 90 days will determine whether Anthropic's brand absorbs this or whether the safety story that funded its rise quietly cracks.

Either way, every other frontier lab is now watching its own vendor list with new eyes — and a lot of enterprise procurement teams just had a very uncomfortable Monday.


Subscribe to Sterling Intelligence for weekly breakdowns of what's actually happening in AI — no hype, no filler, just the signal.

— Jane Sterling

=== YOUTUBE_DESC === Days ago, Anthropic released its most capable model. Right now, a paid Discord with roughly 4,200 members has continuous unauthorized access to it — and Anthropic still has not been able to shut the door. On April 23, 2026, Fortune reported that the leak runs through a third-party fine-tuning vendor in Anthropic's enterprise pilot pipeline — not through any compromised Anthropic system. This is the supply-chain breach the entire AI industry has been quietly afraid of. In this episode, Jane Sterling breaks down what Mythos actually is, how the vendor pipeline failed, why the same vendor is in OpenAI's and Google's supply chain too, and why this story is much bigger than Anthropic. Key numbers covered: • ~4,200 Discord members with continuous unauthorized access • 9 days from Mythos preview release to public leak • $200/month Discord seat price • 91% Mythos score on SWE-Bench Pro (vs 84% for Claude Opus 4.7) • 2 million token context window • Tier 3 capability threshold reached • $5 million emergency Anthropic bug bounty (vendor-side) • $40 billion Google investment in Anthropic (announced 2 days earlier) • $350 billion Anthropic valuation post-Google deal • $25 billion Amazon add-on (week prior) We cover Project Glasswing and what made Mythos a "Tier 3" model in Anthropic's Responsible Scaling Policy, the leaked benchmark numbers and why people are paying $200/month for Discord seats, the third-party vendor that became the leak path, why OpenAI and Google have the same vendor exposure, Dario Amodei's "working aggressively" statement and the $5 million bug bounty, the FTC inquiry, the safety-theater debate inside the AI policy community, the $40B Google deal lurking behind all of this, and what this means for every other frontier lab's enterprise rollout. 
⏱ Chapters 00:00 The hook — the model is loose 00:50 What Mythos actually is, and why it matters 02:30 The Discord — 4,200 members, $200/month seats 03:30 The vendor pipeline that failed 04:50 Why OpenAI and Google have the same exposure 05:40 Anthropic's response — Dario, the bug bounty, the FTC 06:40 The safety theater debate 07:30 The $40 billion Google deal context 08:20 NSTM-4 and the policy convergence 09:00 What this says about safety architecture 🔔 Subscribe to Sterling Intelligence for weekly breakdowns of what's actually happening in AI — no hype, no filler, just the signal. https://www.youtube.com/@SterlingIntelligence — Jane Sterling, Sterling Intelligence #Mythos #Anthropic #ProjectGlasswing #DarioAmodei #AISecurity #AILeak #Cybersecurity #FrontierAI #SupplyChain #AIRollout #SWEBenchPro #ClaudeOpus #SterlingIntelligence #JaneSterling #AIWeekly #ArtificialIntelligence #TechNews2026 === TITLES_HTML ===
  • Top Pick
    Anthropic's Mythos Just Leaked to a Paid Discord (48 chars)
    Lead with the lab + the model + the punchline. Concrete actor, concrete failure, irresistible curiosity gap on "paid Discord."
  • Alternate 1
    Anthropic's Most Restricted Model Just Leaked (45 chars)
    Drops "Mythos" for the audience that doesn't know the codename yet. Best for organic search and click-through on the safety angle.
  • Alternate 2
    A Discord Bought Access to Anthropic's Mythos (45 chars)
    Subject-verb-object framing with the conflict in the verb. Best for the analyst audience that gets why the vendor channel matters.
=== KEYWORDS === Mythos, Anthropic Mythos, Project Glasswing, Anthropic leak, Mythos leak, Mythos Discord, Anthropic Discord leak, Dario Amodei, Anthropic safety, AI supply chain attack, frontier AI leak, third party vendor breach, fine tuning vendor leak, Claude Opus 4.7, SWE-Bench Pro, Tier 3 model, Responsible Scaling Policy, Google Anthropic 40 billion, Anthropic 350 billion valuation, Amazon Anthropic 25 billion, Anthropic bug bounty, FTC AI inquiry, AI red team, jailbreak Discord, Sam Altman Anthropic, AI policy 2026, AI news 2026, Sterling Intelligence, Jane Sterling, AI weekly, artificial intelligence === THUMBNAIL_HTML ===

    Jane's Appearance & Framing

    Expression. Quietly alarmed, jaw set, eyes steady. The look of someone reading a Fortune piece on a Sunday morning and realizing the story is going to ruin Monday for an entire industry. Not theatrical. Steady tension.

    Head position. Squared to camera, very slight forward lean. Chin neutral, eye line level. Conveys "this is the leak the industry has been afraid of."

    Wardrobe. Dark blazer, minimalist. No jewelry that catches light. Sterling Intelligence brand palette — black, charcoal, single muted-gold accent only.

    Eye direction. Direct to camera, locked. Alternate take: slight glance off-camera-right toward a glowing terminal overlay.

    Lighting. Single key light from upper-left at ~4500K, deep shadow on the right jaw line. Minimal fill. Subtle red rim light from behind-right to lift her off the dark background and tag the cybersecurity tone.

    Scene setup. Near-black charcoal background with a faint terminal-glyph pattern at 10% opacity behind her shoulder and a muted red (#b91c1c) accent gradient at 12% opacity in the upper-right corner. Shallow depth of field, Jane tack-sharp, background soft.

    Option 1 — Best (Codename Angle)
    MYTHOS
    LEAKED

    Position. Right third of frame, oversized condensed type, two stacked lines. "MYTHOS" full-width, "LEAKED" beneath at 110% scale.

    Font. Bebas Neue Bold or Impact, condensed all-caps, tight tracking.

    Color scheme. "MYTHOS" in pure white, "LEAKED" in muted red (#b91c1c). 3px black stroke for legibility against the dark background.

    Accent detail. Tiny gold sub-tag below: "ANTHROPIC · 9 DAYS" in Inter Bold 14px, #c8a84b. Anchors the timeline as the punchline.

    Option 2 — Numbers Angle
    4,200 USERS
    9 DAYS

    Position. Lower-left third, stacked on two lines. Close to Jane's shoulder so the eye travels face → text.

    Font. Inter Black all-caps, condensed, tight tracking.

    Color scheme. "4,200 USERS" in pure white. "9 DAYS" in muted red (#b91c1c) at 110% scale of the line above. 3px black stroke.

    Accent detail. Gold sub-tag below: "ANTHROPIC MYTHOS · DISCORD ACCESS" in Inter Bold 14px, #c8a84b gold. Backs the numbers with the conflict.

    Option 3 — Quote Angle
    "CONTINUOUS
    ACCESS"

    Position. Centered upper band, quotation marks visible. Jane's face dominant lower two-thirds.

    Font. Inter Black all-caps, tight tracking, with serif quotation marks.

    Color scheme. Quote in pure white. Quotation marks in muted gold (#c8a84b). 2px black stroke.

    Accent detail. Gold subtitle below: "— FORTUNE, ON ANTHROPIC'S MYTHOS" in Inter Bold 16px, #c8a84b gold. Best for the policy/safety audience.

    === HEYGEN_LOOK === A photorealistic headshot photo of a poised woman in her early 30s with a quietly-alarmed, measured expression, dark blazer, minimalist styling, no jewelry that catches light, head squared to camera with a very slight forward lean. Background: a near-black charcoal studio wall with a faint terminal-glyph code pattern at 10% opacity behind her shoulder and a muted red accent gradient (#b91c1c at 12% opacity) in the upper-right corner. Single hard key light from upper-left at ~4500K, minimal fill on the right at 15% intensity, deep shadow on the right jaw line, a subtle red rim light from behind-right tagging the cybersecurity tone. Direct eye contact with the camera. 3/4 shot, ultrarealistic, sharp focus, clean rendering, artifact-free, shallow depth of field — subject tack-sharp, background soft. Cinematic, austere, restrained, authoritative. === MOTION_LOWER_THIRD === name: Jane Sterling role: Cybersecurity & AI Safety org: Sterling Intelligence === MOTION_OUTRO === eyebrow: If this hit different — main: Subscribe. sub: New episodes every week. No filler. 
platform1: YouTube handle1: @SterlingIntel platform2: X / Twitter handle2: @SterlingIntel platform3: Newsletter handle3: sterling.ai === MOTION_STAT_1 === category: Discord Members With Unauthorized Access value: 4200 unit: desc1: Paid seats running Mythos through a vendor pipeline desc2: Fortune · April 23, 2026 badge: ▲ Continuous, unrevoked access === MOTION_STAT_2 === category: Time From Preview Release To Public Leak value: 9 unit: desc1: Days between Mythos rollout and Fortune disclosure desc2: Anthropic Project Glasswing preview badge: ▼ Faster than any prior frontier-model leak === MOTION_STAT_3 === category: Discord Seat Price symbol: $ value: 200 unit: desc1: Flat monthly fee for Mythos access desc2: Resold via vendor proxy badge: ▼ Lower than any official API tier === MOTION_COMPARISON_1 === benchmark: SWE-Bench Pro (leaked) model_a: Claude Opus 4.7 score_a: 84 model_b: Mythos (Project Glasswing) score_b: 91 unit: % source: Leaked benchmark numbers · April 2026 === MOTION_STAT_4 === category: Mythos Context Window value: 2 unit: M desc1: Two million tokens of context desc2: Project Glasswing preview spec badge: ▲ Largest in Anthropic lineup === MOTION_RANK_1 === context: Anthropic Responsible Scaling Policy rank: 3 category: Capability Threshold (Tier 3 reached) source: Anthropic system card · Mythos preview, April 2026 === MOTION_FUNDING_1 === category: Emergency Vendor-Side Bug Bounty symbol: $ amount: 5 scale: M desc: Aimed specifically at third-party vendor security findings badge: ▲ Largest single-incident bounty in Anthropic history === MOTION_MULTI_1 === title: Anthropic's Money Context · April 2026 val1: $40B lbl1: Google investment val2: $350B lbl2: Anthropic valuation val3: $25B lbl3: Amazon add-on (week prior) === SOURCES_HTML ===

    Primary Reporting

    Media Coverage

    Analyst & Independent

    Prior Context