=== TAG === Policy === HEADLINE === White House Names China's AI Distillation A National Security Threat === META_DESC === OSTP Director Michael Kratsios released NSTM-4 framing China's industrial scale distillation of American AI as a national security threat. Anthropic traced 24,000 fake accounts and 16 million Claude queries to DeepSeek, MiniMax, and Moonshot AI. === DATE === April 23, 2026 === AUTHOR === Jane Sterling === READ_TIME === 9-minute read === HERO_IMG === img/content.png === SCRIPT_LABEL === Video Script (9 min, clean transcript for captioning) === SCRIPT === Let me tell you what the White House just put on paper. This week, the Office of Science and Technology Policy released a memo. The title sounds dry. It is called Adversarial Distillation of American AI Models. Inside the building, they refer to it as NSTM-4. And what it says, in plain English, is that the United States government now has evidence that China is running an industrial scale operation to clone American artificial intelligence. Not steal weights. Not break into a data center. Something far more interesting and, in some ways, much harder to stop. They are typing into the chatbot. The memo was signed by Michael Kratsios. He runs the OSTP. He is the most senior science and technology official in the White House. And the words he chose are not the usual diplomatic hedging. He calls the campaigns deliberate. He calls them industrial scale. He says foreign actors are using tens of thousands of proxy accounts, with jailbreaking techniques layered on top, to siphon off the outputs of frontier AI models built by American labs. The technique has a name. It is called distillation. And if you have followed the AI industry at all in the last year, you have heard it before. Every major lab in the United States has been quietly accusing Chinese labs of doing this for months. Now the federal government has gone ON THE RECORD. What changed this week is the tone. 
Until now, the accusations came from private companies. OpenAI accused DeepSeek of running obfuscated traffic through third party routers. Anthropic accused DeepSeek and Moonshot and MiniMax. Google has been quietly tracking Chinese researchers extracting Gemini outputs at scale. The administration mostly stayed in the background. As of this week, that has changed. The federal government is no longer reading along. It is putting its name on the indictment, and the language it is using is the language of state level intelligence loss. Here is the receipt the memo leans on the hardest. Earlier this year, Anthropic published an internal investigation into a coordinated campaign run against its Claude models. The numbers are stunning. Anthropic identified roughly 24,000 fraudulent accounts engaged in what it calls extraction at scale. Those accounts generated more than 16 million exchanges with Claude. The traffic was traced to three Chinese AI labs you have probably heard of. MiniMax. Moonshot AI. And DeepSeek. MiniMax alone ran more than 13 million exchanges through Claude. Moonshot ran more than three million, focused on reasoning, tool use, and coding. DeepSeek ran the smallest volume on Anthropic's models, around 150,000 exchanges, concentrated tightly on logic and alignment. All three labs allegedly used commercial proxy services to bypass the geographic restriction Anthropic places on customers in China. They did not steal any source code. They did not break in. They simply BOUGHT API access through layers of intermediaries and ran a giant data harvesting operation through the front door. That is the case the White House is now formalizing as a national security concern. So what is distillation, in practice. Why does it matter. And why is the federal government suddenly treating this like a counterintelligence problem. Distillation is, fundamentally, a way of transferring capability from a strong model into a weaker one. 
The strong model, called the teacher, generates a very large number of carefully chosen responses. The weaker model, called the student, learns to mimic those responses. If you do it well, the student starts to approximate the teacher on the tasks the queries were chosen to surface. The result is a smaller, cheaper model that performs at a level its training budget should not have purchased. Distillation as a technique is not new. It is not illegal. American labs do it INTERNALLY all the time. The legal and political question is what happens when you do it across the wall, on a model you do not own, using accounts that explicitly violate the terms of service of the model you are extracting from. What changed is the scale. A research team using distillation on a competitor's API once or twice a week is annoying but tolerable. A state level operation running tens of thousands of accounts continuously, around the clock, is something else entirely. It is the difference between shoplifting and a heist crew. The OSTP memo lands hard on that distinction. The campaigns it describes were not curiosity driven. They were industrialized. The Kratsios memo argues, essentially, that the answer is national security harm. The reasoning has three layers. First, distilled models can match the teacher on enough benchmarks to confuse the market about who is actually leading the frontier. Second, distillation strips the safety training and the careful alignment that the teacher was trained to enforce. And third, at industrial scale, this transfers years of American research and tens of billions of dollars of compute into adversary hands at almost no cost. Kratsios told reporters that foreign entities who build on such fragile foundations should have little confidence in the integrity and reliability of the models they produce. Translation. 
The student looks like the teacher until it is asked something the teacher was carefully trained to refuse, and then it falls apart in unpredictable ways. That is a safety story, but it is also a marketing story. The administration is openly framing distillation not just as theft but as a quality and trust problem in the resulting Chinese models. Now look at the timing. The memo dropped this week. The very next day, DeepSeek announced DeepSeek V4. A new flagship model that, according to early benchmarks, gets within a handful of points of frontier American systems at a fraction of the cost. Was that timing an accident. Almost certainly not. And the Trump and Xi summit is now scheduled for next month in Beijing, less than three weeks away. A summit where AI controls and semiconductor controls are the central agenda items. The White House did not put this memo out into a vacuum. It put it out as the leading edge of a negotiating position. Now let's talk about what actually happens next. Because the most honest part of this story is that nobody has a clean answer. The memo itself is, at this stage, not a regulation. It is a directive to the federal apparatus. It tells the executive branch to share threat intelligence with frontier AI companies. It tells agencies to coordinate on technical defenses, like smarter rate limiting and detection of coordinated proxy traffic. It tells lawyers and diplomats to develop accountability options. None of those steps, on their own, stop a Chinese lab from buying API credits through a third country. Retired NSA director Paul Nakasone, one of the few public voices the administration is listening to on this, has floated three categories of response. New export controls on the compute and chips Chinese labs need to retrain a distilled student. Formal diplomatic protests through State and Commerce. And tailored technology restrictions, which is a polite way of saying targeted sanctions on the specific labs and their funding networks. 
Each of these is HARDER than it sounds. Export controls have been in place against Chinese chip access for years, and Chinese frontier labs have continued to scale anyway. Diplomatic protests against Beijing have produced almost nothing during this entire AI cycle. And targeted sanctions on labs like DeepSeek and MiniMax run straight into the fact that they are deeply embedded with Chinese state backed funds. The deeper problem is structural. The American AI labs being distilled from also need the largest commercial markets to fund their next wave of models. They want every developer in the world on their API. They cannot easily wall off their own product without slowing their own growth. And they cannot easily detect a well funded adversary running tens of thousands of stealth accounts through commercial proxies. Anthropic admitted in its own write up that it took months to map the campaign. Months in which the data was ALREADY OUT. The administration also says it wants to preserve legitimate open source AI development. But the line between legitimate research collaboration and adversarial extraction is, by design, fuzzy. A Chinese researcher fine tuning Llama 4 on her laptop is not a national security event. A coordinated campaign of tens of thousands of commercial accounts is. The space between those two cases is where policy will actually get written, and it is going to be uncomfortable. The reaction from Beijing has been predictable. Chinese officials have rejected the framing, argued that distillation is a normal part of AI development, and characterized the memo as a political maneuver to set the agenda for the upcoming summit. They are not entirely wrong about the last point. For the AI industry inside the United States, the message is the one that has been quietly true for two years. The frontier is now being defended like infrastructure. The labs are now defended like banks. 
The federal government is treating extraction of model behavior as a category of intelligence loss. So where does this leave things. The White House just told the world that the AI race is now a counterintelligence story. The named adversaries are DeepSeek, MiniMax, and Moonshot AI. The platform of attack is the chatbot itself. The next move belongs to the administration and to the labs they are about to share threat intelligence with. The move after that belongs to whoever sits across the table in Beijing next month. This is the moment the AI cold war stopped being a metaphor. Stay sharp. Jane Sterling, Sterling Intelligence. === SCRIPT_HTML === === ANNOTATED_LABEL === Annotated Script (with b-roll & cut cues) === ANNOTATED_HTML === [TALKING HEAD — hook]

Let me tell you what the White House just put on paper.

[CUT] [VOICEOVER — scene 1] [B-ROLL: stills:white-house-exterior]

This week, the Office of Science and Technology Policy released a memo. The title sounds dry. It is called Adversarial Distillation of American AI Models. Inside the building, they refer to it as NSTM-4. And what it says, in plain English, is that the United States government now has evidence that China is running an industrial scale operation to clone American artificial intelligence.

[B-ROLL: screen-capture:nstm4-memo-pdf] [B-ROLL: ai-abstract]

Not steal weights. Not break into a data center. Something far more interesting and, in some ways, much harder to stop. They are typing into the chatbot.

[/VOICEOVER] [TALKING HEAD — transition]

The memo was signed by Michael Kratsios. He runs the OSTP. He is the most senior science and technology official in the White House. And the words he chose are not the usual diplomatic hedging. He calls the campaigns deliberate. He calls them industrial scale. He says foreign actors are using tens of thousands of proxy accounts, with jailbreaking techniques layered on top, to siphon off the outputs of frontier AI models built by American labs.

[VOICEOVER — scene 1 cont.] [B-ROLL: stills:kratsios-portrait] [B-ROLL: code-terminal]

The technique has a name. It is called distillation. And if you have followed the AI industry at all in the last year, you have heard it before. Every major lab in the United States has been quietly accusing Chinese labs of doing this for months. Now the federal government has gone ON THE RECORD.

[B-ROLL: company-logo:openai] [B-ROLL: company-logo:anthropic] [B-ROLL: company-logo:google]

What changed this week is the tone. Until now, the accusations came from private companies. OpenAI accused DeepSeek of running obfuscated traffic through third party routers. Anthropic accused DeepSeek and Moonshot and MiniMax. Google has been quietly tracking Chinese researchers extracting Gemini outputs at scale. The administration mostly stayed in the background. As of this week, that has changed. The federal government is no longer reading along. It is putting its name on the indictment, and the language it is using is the language of state level intelligence loss.

[B-ROLL: finance-charts]

Here is the receipt the memo leans on the hardest. Earlier this year, Anthropic published an internal investigation into a coordinated campaign run against its Claude models. The numbers are stunning. Anthropic identified roughly 24,000 fraudulent accounts engaged in what it calls extraction at scale. Those accounts generated more than 16 million exchanges with Claude. The traffic was traced to three Chinese AI labs you have probably heard of. MiniMax. Moonshot AI. And DeepSeek.

[STAT CARD: "24,000 fraudulent accounts"] [STAT CARD: "16 million Claude exchanges"] [B-ROLL: company-logo:deepseek] [B-ROLL: company-logo:minimax] [B-ROLL: company-logo:moonshot]

MiniMax alone ran more than 13 million exchanges through Claude. Moonshot ran more than three million, focused on reasoning, tool use, and coding. DeepSeek ran the smallest volume on Anthropic's models, around 150,000 exchanges, concentrated tightly on logic and alignment. All three labs allegedly used commercial proxy services to bypass the geographic restriction Anthropic places on customers in China. They did not steal any source code. They did not break in. They simply BOUGHT API access through layers of intermediaries and ran a giant data harvesting operation through the front door.

[STAT CARD: "13 million MiniMax exchanges"] [STAT CARD: "150,000 DeepSeek exchanges"] [/VOICEOVER]

That is the case the White House is now formalizing as a national security concern.

[CUT] [TALKING HEAD — transition]

So what is distillation, in practice. Why does it matter. And why is the federal government suddenly treating this like a counterintelligence problem.

[VOICEOVER — scene 2] [B-ROLL: ai-abstract]

Distillation is, fundamentally, a way of transferring capability from a strong model into a weaker one. The strong model, called the teacher, generates a very large number of carefully chosen responses. The weaker model, called the student, learns to mimic those responses. If you do it well, the student starts to approximate the teacher on the tasks the queries were chosen to surface. The result is a smaller, cheaper model that performs at a level its training budget should not have purchased.

[B-ROLL: code-terminal]

Distillation as a technique is not new. It is not illegal. American labs do it INTERNALLY all the time. The legal and political question is what happens when you do it across the wall, on a model you do not own, using accounts that explicitly violate the terms of service of the model you are extracting from.

[B-ROLL: data-center]

What changed is the scale. A research team using distillation on a competitor's API once or twice a week is annoying but tolerable. A state level operation running tens of thousands of accounts continuously, around the clock, is something else entirely. It is the difference between shoplifting and a heist crew. The OSTP memo lands hard on that distinction. The campaigns it describes were not curiosity driven. They were industrialized.

[/VOICEOVER] [TALKING HEAD — transition]

The Kratsios memo argues, essentially, that the answer is national security harm. The reasoning has three layers. First, distilled models can match the teacher on enough benchmarks to confuse the market about who is actually leading the frontier. Second, distillation strips the safety training and the careful alignment that the teacher was trained to enforce. And third, at industrial scale, this transfers years of American research and tens of billions of dollars of compute into adversary hands at almost no cost.

[VOICEOVER — scene 2 cont.] [B-ROLL: news-studio]

Kratsios told reporters that foreign entities who build on such fragile foundations should have little confidence in the integrity and reliability of the models they produce. Translation. The student looks like the teacher until it is asked something the teacher was carefully trained to refuse, and then it falls apart in unpredictable ways. That is a safety story, but it is also a marketing story. The administration is openly framing distillation not just as theft but as a quality and trust problem in the resulting Chinese models.

[B-ROLL: screen-capture:deepseek-v4-launch]

Now look at the timing. The memo dropped this week. The very next day, DeepSeek announced DeepSeek V4. A new flagship model that, according to early benchmarks, gets within a handful of points of frontier American systems at a fraction of the cost. Was that timing an accident. Almost certainly not. And the Trump and Xi summit is now scheduled for next month in Beijing, less than three weeks away. A summit where AI controls and semiconductor controls are the central agenda items.

[B-ROLL: stills:beijing-summit]

The White House did not put this memo out into a vacuum. It put it out as the leading edge of a negotiating position.

[/VOICEOVER] [CUT] [TALKING HEAD — transition]

Now let's talk about what actually happens next. Because the most honest part of this story is that nobody has a clean answer.

[VOICEOVER — scene 3] [B-ROLL: courtroom]

The memo itself is, at this stage, not a regulation. It is a directive to the federal apparatus. It tells the executive branch to share threat intelligence with frontier AI companies. It tells agencies to coordinate on technical defenses, like smarter rate limiting and detection of coordinated proxy traffic. It tells lawyers and diplomats to develop accountability options. None of those steps, on their own, stop a Chinese lab from buying API credits through a third country.

[B-ROLL: military] [B-ROLL: stills:nakasone-portrait]

Retired NSA director Paul Nakasone, one of the few public voices the administration is listening to on this, has floated three categories of response. New export controls on the compute and chips Chinese labs need to retrain a distilled student. Formal diplomatic protests through State and Commerce. And tailored technology restrictions, which is a polite way of saying targeted sanctions on the specific labs and their funding networks.

[B-ROLL: finance-charts]

Each of these is HARDER than it sounds. Export controls have been in place against Chinese chip access for years, and Chinese frontier labs have continued to scale anyway. Diplomatic protests against Beijing have produced almost nothing during this entire AI cycle. And targeted sanctions on labs like DeepSeek and MiniMax run straight into the fact that they are deeply embedded with Chinese state backed funds.

[B-ROLL: data-center]

The deeper problem is structural. The American AI labs being distilled from also need the largest commercial markets to fund their next wave of models. They want every developer in the world on their API. They cannot easily wall off their own product without slowing their own growth. And they cannot easily detect a well funded adversary running tens of thousands of stealth accounts through commercial proxies. Anthropic admitted in its own write up that it took months to map the campaign. Months in which the data was ALREADY OUT.

[B-ROLL: code-terminal]

The administration also says it wants to preserve legitimate open source AI development. But the line between legitimate research collaboration and adversarial extraction is, by design, fuzzy. A Chinese researcher fine tuning Llama 4 on her laptop is not a national security event. A coordinated campaign of tens of thousands of commercial accounts is. The space between those two cases is where policy will actually get written, and it is going to be uncomfortable.

[B-ROLL: stills:beijing-press-room]

The reaction from Beijing has been predictable. Chinese officials have rejected the framing, argued that distillation is a normal part of AI development, and characterized the memo as a political maneuver to set the agenda for the upcoming summit. They are not entirely wrong about the last point.

[B-ROLL: ai-abstract]

For the AI industry inside the United States, the message is the one that has been quietly true for two years. The frontier is now being defended like infrastructure. The labs are now defended like banks. The federal government is treating extraction of model behavior as a category of intelligence loss.

[/VOICEOVER] [CUT] [TALKING HEAD — sign-off]

So where does this leave things. The White House just told the world that the AI race is now a counterintelligence story. The named adversaries are DeepSeek, MiniMax, and Moonshot AI. The platform of attack is the chatbot itself. The next move belongs to the administration and to the labs they are about to share threat intelligence with. The move after that belongs to whoever sits across the table in Beijing next month.

This is the moment the AI cold war stopped being a metaphor.

Stay sharp.

Jane Sterling, Sterling Intelligence.

=== ARTICLE_HTML ===

The White House just told the world that China is running an industrial-scale operation to clone American AI. Not by stealing weights. Not by breaking into data centers. By typing into the chatbot.

On April 23, 2026, the Office of Science and Technology Policy released a memorandum titled "Adversarial Distillation of American AI Models." The shorthand inside the building is NSTM-4. The memo is signed by Michael Kratsios, the OSTP Director. It accuses foreign entities — primarily in China — of running deliberate, industrial-scale campaigns to extract American frontier AI by hammering chatbot APIs with tens of thousands of fake accounts.

This piece breaks down what NSTM-4 actually says, the receipts it leans on, the technique at the center of it (distillation), and what the U.S. government can — and probably can't — do about it.


What NSTM-4 Actually Says

The memorandum directs federal agencies to share threat intelligence with U.S. AI developers about foreign distillation campaigns, partner with industry on technical defenses, and explore accountability options against foreign actors. It does not, by itself, impose any new export controls or sanctions. It is a framing document — a White House declaration that adversarial distillation is now a national-security category, not just a private-sector grievance.

The language is unusually direct. Kratsios calls the campaigns "deliberate, industrial-scale." He says foreign entities are using "tens of thousands of proxies and jailbreaking techniques in coordinated campaigns to systematically extract American breakthroughs."


The Anthropic Receipts

The memo leans hardest on a February 2026 disclosure from Anthropic. In an internal investigation, Anthropic identified approximately 24,000 fraudulent accounts running coordinated extraction campaigns against its Claude models. Those accounts collectively generated more than 16 million exchanges with Claude before being detected and blocked.

The traffic was traced to three Chinese AI labs: MiniMax, Moonshot AI, and DeepSeek. MiniMax accounted for more than 13 million exchanges. Moonshot AI for over 3 million, focused on reasoning, tool use, and coding. DeepSeek for around 150,000, concentrated on logic and alignment. All three allegedly used commercial proxy services to bypass Anthropic's geographic restrictions on Chinese customers — a relatively cheap workaround at the scale described.


What Distillation Is, And Why It's Now A Policy Problem

Distillation is the process of training a smaller "student" model to mimic the outputs of a larger, more capable "teacher" model. American labs use distillation internally all the time — it's how OpenAI, Anthropic, and Google build cheaper inference variants of their flagship models.

The legal and political question NSTM-4 raises is what happens when distillation crosses the wall: when the student is trained on a model the distilling lab doesn't own, using accounts that explicitly violate its terms of service, at industrial scale, against the wishes of a U.S. company. The OSTP memo argues that this pattern is a national-security harm because it strips safety training, transfers years of American research at almost no cost, and makes the resulting Chinese models look more capable than they are.
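For the technically curious, the teacher-student mechanic can be shown in a few lines. This is a toy sketch on synthetic data with a linear "teacher" and "student" (nothing like a frontier-scale pipeline, and not any lab's actual method): the student never sees ground-truth labels, only the teacher's softened outputs on a batch of chosen queries, yet ends up agreeing with the teacher on those queries.

```python
# Toy knowledge distillation: a "student" learns to mimic a "teacher"
# purely from the teacher's outputs. All models and numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T exposes more of the
    teacher's relative preferences between answers."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

d_in, d_out = 8, 4                          # tiny stand-ins for real sizes
W_teacher = rng.normal(size=(d_in, d_out))  # the model being extracted from

# The distiller chooses the queries (random here; real campaigns curate them).
X = rng.normal(size=(512, d_in))
teacher_probs = softmax(X @ W_teacher, T=2.0)  # harvested, softened outputs

# Train the student only on the teacher's outputs, never on true labels.
W_student = np.zeros((d_in, d_out))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student, T=2.0)
    # Gradient of cross-entropy between student and teacher distributions.
    W_student -= lr * (X.T @ (student_probs - teacher_probs)) / len(X)

# On the curated queries, the student now mostly picks the teacher's answer.
agreement = np.mean(
    (X @ W_student).argmax(axis=1) == (X @ W_teacher).argmax(axis=1)
)
print(f"student/teacher top-1 agreement: {agreement:.2f}")
```

The point of the sketch is the asymmetry the memo is built on: everything the student needed flowed through ordinary query-response traffic, which is why the front door, not the vault, is the attack surface.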


The Timing

The day after NSTM-4 dropped, DeepSeek announced DeepSeek V4 — a new flagship model that, per early benchmarks, gets within a handful of points of frontier American systems at a fraction of the cost. Whether the launch and the memo were coordinated by either side is unknown. The proximity is not lost on anyone in Washington or Beijing.

President Trump and Xi Jinping are scheduled to meet in Beijing on May 14, 2026, with AI controls and semiconductor controls expected to dominate the agenda. The OSTP memo is widely read as the leading edge of the U.S. negotiating position — a public framing the administration wanted on the table before the summit started.


What Could Actually Be Done

The memo itself is a directive, not a regulation. The harder questions are operational. Retired NSA Director Paul Nakasone, one of the few public voices the administration listens to on AI security, has floated three categories of response: new export controls on the compute and chips Chinese labs need to retrain a distilled student; formal diplomatic protests through State and Commerce; and tailored technology restrictions — a polite way of saying targeted sanctions on labs like DeepSeek and MiniMax.

Each is harder than it sounds. Export controls on Chinese chip access have been in place for years, and Chinese frontier labs have continued to scale through Singapore, the Middle East, and other workarounds. Diplomatic protests against Beijing on tech-IP have produced almost nothing during this AI cycle. And direct sanctions on Chinese AI labs run straight into the reality that those labs are deeply embedded with Chinese state-backed funds — meaning sanctions become major foreign-policy events.


The Structural Problem

The American labs being distilled from are also the labs that need the largest commercial markets to fund their next wave of models. They want every developer on their API. They cannot easily wall off their own product without slowing their own growth, and they cannot easily detect a well-funded adversary running tens of thousands of stealth accounts through commercial proxies. Anthropic admitted in its own write-up that it took months to map the February campaign — months in which the data was already out.
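Why is that detection hard but not hopeless? One reason campaigns are eventually mappable is that accounts run from a shared curriculum leave near-duplicate usage fingerprints. The sketch below is purely illustrative (synthetic data, made-up thresholds, not Anthropic's actual detection pipeline): each fake account looks unremarkable on its own, but the cluster stands out in aggregate.

```python
# Illustrative only: flagging coordinated accounts by near-duplicate
# query-topic fingerprints. Synthetic data; not any real lab's defense.
import numpy as np

rng = np.random.default_rng(1)
n_topics = 6  # e.g. coding, reasoning, alignment, tool use, chat, other

# Organic accounts: topic mixes scattered all over the simplex.
organic = rng.dirichlet(np.ones(n_topics), size=40)

# Campaign accounts: one shared curriculum (heavy on reasoning and
# alignment) plus tiny per-account noise.
base = np.array([0.05, 0.05, 0.40, 0.40, 0.05, 0.05])
campaign = np.abs(base + rng.normal(0.0, 0.005, size=(10, n_topics)))
campaign /= campaign.sum(axis=1, keepdims=True)

accounts = np.vstack([organic, campaign])

# Cosine similarity between every pair of account fingerprints.
unit = accounts / np.linalg.norm(accounts, axis=1, keepdims=True)
sim = unit @ unit.T
np.fill_diagonal(sim, 0.0)

# Flag accounts that are near-duplicates of at least five other accounts.
suspicious = (sim > 0.99).sum(axis=1) >= 5
print(f"flagged {suspicious.sum()} of {len(accounts)} accounts")
```

In this toy setup the coordinated accounts are the ones that get flagged, while the scattered organic accounts are not; the real-world version is exactly the arms race described above, since a patient adversary can randomize curricula, pacing, and proxies to pull its fingerprints back under any fixed threshold.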

The administration is also publicly committed to preserving legitimate open-source AI development. The line between a Chinese researcher fine-tuning Llama 4 on her laptop and a coordinated campaign of 24,000 commercial accounts is, by design, fuzzy. The space between those two cases is where the next two years of AI policy will actually get written.


Beijing's Response

The reaction from Beijing has been predictable. Chinese officials have rejected the framing, argued that distillation is a normal part of AI development, noted (correctly) that American labs use distillation internally, and characterized the memo as a political maneuver designed to set the agenda for the May 14 summit. On the last point, Beijing is not entirely wrong.


The Bigger Picture

For the AI industry inside the United States, NSTM-4 is the public confirmation of what has been quietly true for two years. The frontier is now being defended like infrastructure. The labs are defended like banks. The federal government is now treating extraction of model behavior as a category of intelligence loss — not a private terms-of-service dispute.

The age of an open Internet, where the cost of entry was curiosity and an API key, is closing on the AI side. What replaces it is going to look a lot more like dual-use export control than like a developer platform. After this week, that shift is no longer subtle. It is policy.


Subscribe to Sterling Intelligence for weekly breakdowns of what's actually happening in AI — no hype, no filler, just the signal.

— Jane Sterling

=== YOUTUBE_DESC === The White House just told the world that China is running an industrial-scale operation to clone American AI — not by stealing weights, but by typing into the chatbot. On April 23, 2026, OSTP Director Michael Kratsios released NSTM-4, "Adversarial Distillation of American AI Models," formalizing what U.S. labs have been quietly accusing Chinese labs of for months. In this episode, Jane Sterling breaks down NSTM-4 — what the memo says, the Anthropic receipts it leans on, what "distillation" actually is, why it's now a national-security category, and what the U.S. government can (and probably can't) do about it before the Trump-Xi summit on May 14. Key numbers covered: • 24,000 fraudulent accounts (Anthropic Feb 2026 disclosure) • 16 million Claude exchanges harvested • 13 million exchanges from MiniMax alone • 3 million+ from Moonshot AI • 150,000 from DeepSeek (focused on logic and alignment) • Three Chinese labs named in industry disclosures: DeepSeek, MiniMax, Moonshot AI • Trump–Xi summit: Beijing, May 14, 2026 • DeepSeek V4 launched the day after the OSTP memo We cover what NSTM-4 actually says, why distillation is the technique at the center of the China–U.S. AI fight, the Kratsios "fragile foundations" line, the Nakasone three-step response framework (export controls + diplomatic protests + tailored sanctions), the structural problem U.S. labs face in defending their APIs, the open-source line, Beijing's response, and what this means for AI policy heading into the summit. ⏱ Chapters 00:00 The White House just put it on paper 01:10 What NSTM-4 actually says 02:30 The Anthropic receipts: 24,000 accounts, 16M queries 03:50 What distillation actually is 05:10 Why it's now a national-security category 06:20 The summit timing nobody is talking about 07:10 What the U.S. 
can actually do 08:20 Why the labs can't easily defend themselves 09:10 The bigger picture 🔔 Subscribe to Sterling Intelligence for weekly breakdowns of what's actually happening in AI — no hype, no filler, just the signal. https://www.youtube.com/@SterlingIntelligence — Jane Sterling, Sterling Intelligence #NSTM4 #Kratsios #ChinaAI #AIPolicy #DeepSeek #MiniMax #MoonshotAI #Anthropic #AIDistillation #AINationalSecurity #TrumpXiSummit #SterlingIntelligence #JaneSterling #AIWeekly #ArtificialIntelligence #TechNews2026 === TITLES_HTML ===
  • Top Pick
    The White House Just Named China's AI Theft (43 chars)
    Concrete actor, concrete action, leaves the punchline open. Drives the click on curiosity gap, mobile-legible, no jargon.
  • Alternate 1
    Inside NSTM-4: The White House's AI Theft Memo (46 chars)
    Direct, news-style. Best for the policy-savvy audience that already follows AI export-control debates.
  • Alternate 2
    24,000 Fake Accounts. 16M Queries. One Memo. (44 chars)
    Number-first hook with the document as the punchline. Best for analyst audience that gets why the figures matter.
=== KEYWORDS === NSTM-4, Kratsios, OSTP memo, White House AI memo, Adversarial Distillation, China AI theft, AI distillation attack, DeepSeek distillation, MiniMax Claude, Moonshot AI Claude, Anthropic 24000 accounts, Anthropic distillation report, Claude API extraction, Trump Xi summit, AI export controls, China AI policy, AI national security, frontier AI defense, US China AI war, AI cold war, DeepSeek V4, Llama 4 fine tuning, Paul Nakasone AI, AI policy 2026, AI news 2026, Sterling Intelligence, Jane Sterling, AI weekly, artificial intelligence === THUMBNAIL_HTML ===

    Jane's Appearance & Framing

    Expression. Quietly alarmed, jaw set, eyes steady — the look of someone reading a national-security memo and weighing the consequences. Not surprised, not theatrical. Steady tension.

    Head position. Squared to camera, very slight forward lean. Chin neutral, eye line level. Conveys "this is the line we just crossed."

    Wardrobe. Dark blazer, minimalist. No jewelry that catches light. Sterling Intelligence brand palette — black, charcoal, single muted-gold accent only.

    Eye direction. Direct to camera, locked. Alternate take: slight glance off-camera-right toward the document overlay.

    Lighting. Single key light from upper-left at ~4600K, deep shadow on the right jaw line. Minimal fill. Subtle rim light from behind-right to lift her off the dark background.

    Scene setup. Near-black charcoal government-corridor look. Faint American-flag motif at 8% opacity in upper-right (only visible on close inspection — it tags the story as U.S. policy without screaming it). Ghosted OSTP-seal impression at 12% opacity behind her shoulder. Shallow depth of field, Jane tack-sharp, background soft.

    Option 1 — Best (Document Angle)
    NSTM-4

    Position. Right third of frame, oversized monospace, document-style. Subtitle "ADVERSARIAL DISTILLATION" in 22px small caps directly underneath.

    Font. JetBrains Mono Bold for "NSTM-4" (monospace = government doc); Inter Black for the subtitle.

    Color scheme. "NSTM-4" in pure white with a faint red (#b91c1c) underglow. Subtitle in muted gold (#c8a84b). 3px black stroke for legibility.

    Accent detail. Tiny stamped "WHITE HOUSE OSTP" header above in 11px gold caps. Reads as official document, not clickbait.

    Option 2 — Numbers Angle
    24,000 ACCOUNTS
    16M QUERIES

    Position. Lower-left third, stacked on two lines. Close to Jane's shoulder so the eye travels face → text.

    Font. Bebas Neue Bold or Impact, condensed all-caps, tight tracking.

    Color scheme. "24,000 ACCOUNTS" in pure white. "16M QUERIES" in muted red (#b91c1c) at 110% scale of the line above. 3px black stroke.

    Accent detail. Gold sub-tag below: "ANTHROPIC vs. CHINA" in Inter Bold 16px, #c8a84b gold. Backs the numbers with the conflict.

    Option 3 — Direct Quote Angle
    "INDUSTRIAL-SCALE"

    Position. Centered upper band, quotation marks visible. Jane's face dominant lower two-thirds.

    Font. Inter Black all caps, tight tracking, with serif quotation marks.

    Color scheme. Quote in pure white. Quotation marks in muted gold (#c8a84b). 2px black stroke.

    Accent detail. Gold subtitle below: "— THE WHITE HOUSE, ON CHINA'S AI THEFT" in Inter Bold 18px, #c8a84b gold. Best for the policy audience.

    === HEYGEN_LOOK === A photorealistic headshot photo of a poised woman in her early 30s with a quietly-alarmed, measured expression, dark blazer, minimalist styling, no jewelry that catches light, head squared to camera with a very slight forward lean. Background: a near-black charcoal government corridor scene with a single faint American-flag motif (#b91c1c at 8% opacity) in the upper-right corner and a ghosted OSTP seal at 12% opacity behind her shoulder. Single hard key light from upper-left at ~4600K, minimal fill on the right at 15% intensity, deep shadow on the right jaw line, subtle rim light from behind-right separating her from the dark background. Direct eye contact with the camera. 3/4 shot, ultrarealistic, sharp focus, clean rendering, artifact-free, shallow depth of field — subject tack-sharp, background soft. Cinematic, austere, authoritative, restrained. === MOTION_LOWER_THIRD === name: Jane Sterling role: Policy & National Security org: Sterling Intelligence === MOTION_OUTRO === eyebrow: If this hit different — main: Subscribe. sub: New episodes every week. No filler. 
platform1: YouTube handle1: @SterlingIntel platform2: X / Twitter handle2: @SterlingIntel platform3: Newsletter handle3: sterling.ai === MOTION_STAT_1 === category: Anthropic Distillation Disclosure value: 24000 unit: desc1: Fraudulent accounts running coordinated extraction desc2: Anthropic · February 2026 badge: ▲ Industrial-scale campaign === MOTION_STAT_2 === category: Total Claude Exchanges Harvested value: 16 unit: M desc1: Across roughly 24,000 fake accounts desc2: Anthropic · February 2026 badge: ▲ Aggregate over weeks === MOTION_MULTI_1 === title: Three Chinese Labs · Anthropic Disclosure val1: 13M lbl1: MiniMax exchanges val2: 3M+ lbl2: Moonshot AI exchanges val3: 150K lbl3: DeepSeek exchanges === MOTION_STAT_3 === category: MiniMax Share of Campaign value: 13 unit: M desc1: More than 13 million Claude exchanges desc2: MiniMax · via commercial proxies badge: ▲ Largest single-lab volume === MOTION_STAT_4 === category: DeepSeek Targeted Volume value: 150000 unit: desc1: Tightly focused on logic and alignment desc2: DeepSeek · Anthropic disclosure badge: ▲ Smallest count, highest signal density === MOTION_RANK_1 === context: NSTM-4 Named Adversaries rank: 3 category: Chinese AI Labs Cited source: OSTP Memo · April 23, 2026 === MOTION_COMPARISON_1 === benchmark: AI Theft Framing model_a: Private-Sector Allegations (2025–2026) score_a: 0 model_b: Federal Government Memo (NSTM-4) score_b: 1 unit: declarative source: Source: White House OSTP · April 23, 2026 === SOURCES_HTML ===

    Official & Primary

    Media Coverage

    Analyst & Independent

    Prior Context