Video Script (9 min, clean transcript for captioning)
On March 20, 2026, the White House released something that is going to shape American AI development for the next decade: the National Policy Framework for Artificial Intelligence.
This is not a regulation. It is not a law. It is a set of legislative recommendations — a blueprint that the Trump administration is sending to Congress and saying: this is what we want federal AI law to look like. Build it.
And what they're asking for is going to create winners, losers, and a lot of legal complexity — depending on which side of several major debates you're on.
Let me walk through what's in it, because it covers a lot of ground.
The framework has seven major policy areas. Five of them matter most right now: child safety, infrastructure and energy, intellectual property, free speech, and federal preemption of state laws. I'm going to break down each of them.
Start with the one that will generate the most legal and political conflict: federal preemption.
The White House is recommending that Congress prohibit states from regulating AI development in most cases. The logic is straightforward from an industry standpoint: if every state can write its own AI law, you end up with fifty different compliance regimes, and the result is a patchwork that drives development overseas or into the hands of whoever can afford the compliance burden — which is the big incumbents, not startups or smaller players.
The recommendation is that outside of narrow exceptions, state laws targeting AI development should be preempted by federal law. States would keep the power to enforce generally applicable laws against AI developers and users — you can still prosecute fraud, still enforce consumer protection laws. But state-specific AI regulations would not be permitted.
This is a fight that's been building for years. California, New York, Illinois, and Texas have all been developing or have already passed AI-specific state laws. The White House framework is a shot across the bow of all of them.
Move to intellectual property. This one has been a fight in the courts since AI training became mainstream. The framework makes a clear statement: the administration's position is that training AI models on copyrighted material does not violate copyright law. That's a statement that will please every major AI lab and displease every major content creator, publisher, and media organization.
Critically, the framework also acknowledges that the courts have authority here — it doesn't try to override what courts are currently deciding; it states the administration's view. That's a softer version of what industry wanted, but it's still a signal about the direction of any legislation that comes out of this process.
On child safety: the framework recommends that AI services implement real safeguards against sexual exploitation and self-harm content, that parents get tools to manage children's access and privacy, and that age assurance requirements be established. This is the area where the White House is actually asking for more regulation, not less — and it has bipartisan support.
On energy: the recommendation is to streamline permitting for AI data center construction and allow AI developers to build on-site power generation. If you've followed the AI energy story — the nuclear plants being proposed specifically to power data centers — this provision is directly relevant. The government is trying to remove regulatory barriers to the physical infrastructure that AI at scale requires.
On free speech: the framework specifically limits the federal government's ability to pressure AI providers to change their outputs for political or ideological reasons. This is a direct response to concerns that government agencies were leaning on AI companies to suppress or modify certain categories of content. The provision is written to cut in both directions — left or right administration, the principle is that the government shouldn't be able to coerce AI content moderation.
Now — some context on what this framework actually is and isn't.
It's a recommendation to Congress. Congress has to write the actual legislation. There's a long road between a White House framework and a signed law, and Congress has a history of moving slowly on technology issues while the technology moves fast.
The preemption recommendation in particular is going to face significant pushback. States are not going to surrender their regulatory authority without a fight. And the argument that a federal light-touch approach is better than state experimentation is contested — the counter-argument is that states have historically been the laboratories of regulation, and that preempting state action before federal law is robust means you get a regulatory vacuum rather than a unified framework.
But the direction of travel matters even before the law exists. When the White House says "we think training AI on copyrighted material is fine," that affects how companies negotiate with publishers and creators. When the White House says "states shouldn't regulate AI development," that affects how companies plan their compliance posture and where they lobby.
The framework is a signal about what American AI policy is going to look like. It's a significantly more industry-friendly, innovation-oriented signal than what most of the world is seeing from their governments right now.
And that gap — between the American approach and the European approach — is going to be one of the defining competitive dynamics of the next decade.
Pay attention to what happens in Congress next.
Stay sharp.
— Jane Sterling, Sterling Intelligence
Annotated Script (with b-roll & cut cues)
On March 20, 2026, the White House released something that is going to shape American AI development for the next decade: the National Policy Framework for Artificial Intelligence.
[STAT CARD: "March 20, 2026 — National Policy Framework for AI"] [B-ROLL: stills:whitehouse]This is not a regulation. It is not a law. It is a set of legislative recommendations — a blueprint that the Trump administration is sending to Congress and saying: this is what we want federal AI law to look like. Build it.
[B-ROLL: stills:capitol]And what they're asking for is going to create winners, losers, and a lot of legal complexity — depending on which side of several major debates you're on.
[CUT] [TALKING HEAD — transition]Let me walk through what's in it, because it covers a lot of ground.
[VOICEOVER — scene 1] [B-ROLL: screen-capture:executive-order]The framework has seven major policy areas. Five of them matter most right now: child safety, infrastructure and energy, intellectual property, free speech, and federal preemption of state laws. I'm going to break down each of them.
[STAT CARD: "7 policy areas"] [/VOICEOVER] [TALKING HEAD — transition]Start with the one that will generate the most legal and political conflict: federal preemption.
[VOICEOVER — scene 2] [B-ROLL: news-studio]The White House is recommending that Congress prohibit states from regulating AI development in most cases. The logic is straightforward from an industry standpoint: if every state can write its own AI law, you end up with fifty different compliance regimes, and the result is a patchwork that drives development overseas or into the hands of whoever can afford the compliance burden — which is the big incumbents, not startups or smaller players.
[STAT CARD: "50-state compliance patchwork"] [B-ROLL: finance-charts]The recommendation is that outside of narrow exceptions, state laws targeting AI development should be preempted by federal law. States would keep the power to enforce generally applicable laws against AI developers and users — you can still prosecute fraud, still enforce consumer protection laws. But state-specific AI regulations would not be permitted.
[B-ROLL: stills:capitol]This is a fight that's been building for years. California, New York, Illinois, and Texas have all been developing or have already passed AI-specific state laws. The White House framework is a shot across the bow of all of them.
[STAT CARD: "CA · NY · IL · TX"] [/VOICEOVER] [CUT] [TALKING HEAD — transition]Move to intellectual property. This one has been a fight in the courts since AI training became mainstream. The framework makes a clear statement: the administration's position is that training AI models on copyrighted material does not violate copyright law. That's a statement that will please every major AI lab and displease every major content creator, publisher, and media organization.
[VOICEOVER — scene 3] [B-ROLL: courtroom]Critically, the framework also acknowledges that the courts have authority here — it doesn't try to override what courts are currently deciding; it states the administration's view. That's a softer version of what industry wanted, but it's still a signal about the direction of any legislation that comes out of this process.
[B-ROLL: company-logo:openai] [B-ROLL: company-logo:anthropic]On child safety: the framework recommends that AI services implement real safeguards against sexual exploitation and self-harm content, that parents get tools to manage children's access and privacy, and that age assurance requirements be established. This is the area where the White House is actually asking for more regulation, not less — and it has bipartisan support.
[B-ROLL: ai-abstract]On energy: the recommendation is to streamline permitting for AI data center construction and allow AI developers to build on-site power generation. If you've followed the AI energy story — the nuclear plants being proposed specifically to power data centers — this provision is directly relevant. The government is trying to remove regulatory barriers to the physical infrastructure that AI at scale requires.
[B-ROLL: data-center] [B-ROLL: screen-capture:federal-register]On free speech: the framework specifically limits the federal government's ability to pressure AI providers to change their outputs for political or ideological reasons. This is a direct response to concerns that government agencies were leaning on AI companies to suppress or modify certain categories of content. The provision is written to cut in both directions — left or right administration, the principle is that the government shouldn't be able to coerce AI content moderation.
[/VOICEOVER] [CUT] [TALKING HEAD — transition]Now — some context on what this framework actually is and isn't.
It's a recommendation to Congress. Congress has to write the actual legislation. There's a long road between a White House framework and a signed law, and Congress has a history of moving slowly on technology issues while the technology moves fast.
[VOICEOVER — scene 4] [B-ROLL: stills:capitol]The preemption recommendation in particular is going to face significant pushback. States are not going to surrender their regulatory authority without a fight. And the argument that a federal light-touch approach is better than state experimentation is contested — the counter-argument is that states have historically been the laboratories of regulation, and that preempting state action before federal law is robust means you get a regulatory vacuum rather than a unified framework.
[B-ROLL: courtroom]But the direction of travel matters even before the law exists. When the White House says "we think training AI on copyrighted material is fine," that affects how companies negotiate with publishers and creators. When the White House says "states shouldn't regulate AI development," that affects how companies plan their compliance posture and where they lobby.
[B-ROLL: finance-charts]The framework is a signal about what American AI policy is going to look like. It's a significantly more industry-friendly, innovation-oriented signal than what most of the world is seeing from their governments right now.
[B-ROLL: ai-abstract]And that gap — between the American approach and the European approach — is going to be one of the defining competitive dynamics of the next decade.
[STAT CARD: "US framework vs EU AI Act"] [/VOICEOVER] [CUT] [TALKING HEAD — sign-off]Pay attention to what happens in Congress next.
Stay sharp. — Jane Sterling, Sterling Intelligence
The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026 — a set of legislative recommendations to Congress that will shape the future of American AI regulation, intellectual property law, child safety requirements, and the balance of power between federal and state governments.
In this video, Jane Sterling breaks down what the framework actually says, which provisions will generate the most conflict, what the AI industry wins and loses, and what comes next.
What The Framework Is
The National Policy Framework for Artificial Intelligence is not a regulation and not a law. It is a legislative blueprint — a set of recommendations from the Trump administration to Congress outlining what federal AI legislation should contain.
It was issued pursuant to President Trump's Executive Order of December 11, 2025, titled "Ensuring a National Policy Framework for Artificial Intelligence," which directed the administration to develop these recommendations.
The framework covers seven major policy areas:
1. Child safety and privacy
2. Communities and infrastructure
3. Intellectual property and creators
4. Free speech protection
5. Federal preemption of state AI laws
6. Innovation and research
7. National security applications
Federal Preemption: The Highest-Stakes Provision
The recommendation that will generate the most conflict is federal preemption of state AI laws.
The White House framework recommends that Congress prohibit states from:
- Regulating AI development in most circumstances
- Penalizing AI developers for third-party unlawful conduct involving their models
- Burdening activities involving AI that would be lawful if performed without AI
States would retain authority to:
- Enforce generally applicable laws (fraud, consumer protection, criminal law) against AI developers and users
- Exercise zoning authority over AI facilities
- Regulate their own use of AI for law enforcement and public services
The industry case for preemption is efficiency: a unified federal framework prevents a fifty-state compliance patchwork that drives up costs and disadvantages smaller companies that can't afford multi-jurisdiction compliance operations.
The case against preemption is democratic experimentation: state-level variation in AI regulation allows different jurisdictions to try different approaches and observe outcomes before locking in national policy. California, New York, Illinois, and Texas have each been developing or have already enacted AI-specific legislation. Federal preemption, if enacted, would supersede much of this work.
This provision will face significant opposition from state attorneys general, privacy advocates, and legislators from states that have invested in their own AI regulatory frameworks.
Intellectual Property: The Copyright Question
The framework addresses the most contested intellectual property question in AI: whether training AI models on copyrighted material constitutes copyright infringement.
The administration's position: training AI models on copyrighted material does not violate copyright law. This is the view that the major AI labs have argued in litigation. The framework signals that the administration agrees, and that federal legislation should reflect this.
Critically, the framework also acknowledges the judiciary's authority to assess copyright and fair-use questions. It doesn't attempt to override court decisions — it states a position and signals what the administration wants legislation to say.
The implications are significant for ongoing litigation between AI labs and publishers, news organizations, and individual creators. A legislative resolution that codifies training as fair use would end the uncertainty that currently hangs over every AI company's training pipeline.
Child Safety: Where The Administration Wants More Regulation
Not all of the framework is deregulatory. On child safety, the administration recommends that Congress require AI services to:
- Implement safeguards against content that sexualizes minors or promotes self-harm
- Provide parents with tools to manage children's privacy, screen time, and content exposure
- Establish age-assurance requirements to restrict minors from certain AI services
- Clarify that existing child privacy laws apply to AI systems
This is the section of the framework with the most bipartisan support. Both parties have been willing to impose child safety requirements on technology platforms, and AI-specific child safety legislation is likely to move faster than other parts of the framework.
Energy And Infrastructure
The framework recommends streamlining federal permitting for AI data center construction and allowing AI developers to build on-site power generation.
This provision responds to a real constraint. AI at scale requires enormous amounts of electricity. New data centers are being proposed across the United States, and the permitting process for both the facilities and the power infrastructure they require is slow. The framework is a signal that the administration wants to reduce those barriers.
The addition of on-site power generation as a permitted activity is particularly significant. AI companies building their own power infrastructure — rather than relying on the grid — need regulatory authorization to do so. The framework endorses that approach.
Free Speech And Content Moderation
The framework includes a provision limiting the federal government's ability to coerce AI providers to restrict or alter content for partisan or ideological reasons.
This is written as a structural constraint on government power, applicable regardless of which party holds the White House. The provision reflects documented concerns from multiple sides of the political spectrum about government pressure on technology companies to modify their content moderation approaches.
The framework directs Congress to provide avenues for legal redress where such government coercion occurs.
What Comes Next
The framework is a blueprint, not law. Several things need to happen before any of this becomes legally binding.
Congress must write legislation. The Republican majority in both chambers is generally aligned with the framework's innovation-oriented, light-touch approach, but individual provisions — especially preemption — will face internal Republican disagreement from states' rights advocates.
State governments will challenge preemption. California in particular has been a leader in technology regulation and is unlikely to accept federal preemption of its AI laws without litigation.
Courts are still deciding IP questions. Even if legislation addresses copyright training, the existing cases don't disappear immediately.
The gap between framework and law can be years. Congress has historically moved slowly on technology legislation while the technology moves fast.
The Global Context
The American framework's emphasis on federal unity, light-touch regulation, and innovation orientation stands in stark contrast to the European approach.
The EU AI Act, which entered into force in 2024, takes a risk-based classification approach with significant compliance requirements for high-risk AI applications. European AI regulation is more prescriptive, more restrictive, and slower to adapt to capability changes.
The competitive implications of this divergence are significant. If American companies operate under a unified, innovation-oriented federal framework while European companies operate under more restrictive regulations, the AI development advantage concentrates in the United States.
That's the bet the White House is making. Whether it produces the intended result depends on whether Congress acts quickly enough for the framework to matter while the technology is still moving.
Subscribe to Sterling Intelligence for weekly coverage of AI policy and what it means for you.
New videos every week.
— Jane Sterling
Some links may be affiliate links. We may earn a commission at no cost to you.
YouTube Description
Titles
- Top Pick: "The White House Just Rewrote The Rules For AI" (46 chars). Declarative, present-tense, zero jargon. Reads as news. Matches the hero headline on the page and works for a general news audience as well as the policy-literate viewer.
- Alternate 1: "Trump's AI Framework: What Actually Changes" (44 chars). Frames the video as an explainer for viewers who already saw headlines and want the operational takeaway. "What actually changes" promises a concrete deliverable rather than punditry.
- Alternate 2: "States Just Lost The AI Regulation Fight" (41 chars). Conflict-first hook for the policy audience. Leads with the highest-stakes provision (federal preemption) and signals that there are real winners and losers. Slightly more aggressive framing for the algorithm.
Keywords
Thumbnail Brief
Jane's Appearance & Framing
Expression. Steady, composed, eyebrows neutral. The face of a serious analyst delivering a policy update, not alarmed. Closed mouth, subtle tension at the jaw to convey weight.
Head position. Squared to camera, chin very slightly lowered for authority. Eye line level. Conveys "this is the news, listen carefully."
Wardrobe. Dark blazer, minimalist. No jewelry that catches light. Consistent with the Sterling Intelligence brand palette (black, charcoal, gold accent only).
Eye direction. Direct to camera. Alternate take: eyes cut slightly right toward the White House silhouette overlay.
Lighting. Key light from upper-left at ~4500K, soft fill on the right at 20% intensity. Deep shadow on the left jaw line for gravity. Subtle cool rim light from behind-right to lift her off the near-black background.
Scene setup. Near-black charcoal background with a faint desaturated American-flag red/blue gradient bleed from upper-right. Shallow depth of field — Jane tack-sharp. Optional ghosted White House silhouette at 12% opacity behind her right shoulder.
Text Overlay Option 1 ("NEW AI RULES")
Position. Right third of the frame, stacked on two lines — "NEW" on top, "AI RULES" below in larger scale.
Font. Inter Black all caps, tight tracking.
Color scheme. "NEW" in white, "AI RULES" in gold (#c8a84b). 3px black stroke on every character for legibility at small sizes. Faint warm outer glow on "AI RULES" to pop against the dark background.
Accent detail. Small caps header above the text: "WHITE HOUSE · MARCH 2026" in 11px muted white. Reads as credible dateline, not clickbait.
Text Overlay Option 2 ("STATES LOSE")
Position. Lower-left third, stacked on two lines — "STATES" on top, "LOSE" below at 120% scale. Close to Jane's shoulder so the eye travels face → text.
Font. Bebas Neue Bold or Impact, condensed all-caps, tight tracking.
Color scheme. "STATES" in white, "LOSE" in bright red (#dc2626). 3px black stroke throughout. Faint outer glow on "LOSE".
Accent detail. Gold sub-tag below: "FEDERAL PREEMPTION — AI" in Inter Bold 16px, #c8a84b. Clarifies that the "loss" is specifically about AI regulation authority.
Text Overlay Option 3 ("THE AI BLUEPRINT")
Position. Centered upper band, Jane's face dominant lower two-thirds of frame.
Font. Playfair Display Bold all caps, wide tracking (~100), stretched across frame width.
Color scheme. Base text in white, with "BLUEPRINT" overlaid with a transparent glassy gold (#c8a84b at 80%) to visually separate. 2px black stroke.
Accent detail. Red underline under "THE AI BLUEPRINT" at 4px. Gold subtitle below: "TRUMP · CONGRESS · 2026" in Inter Bold 18px. Positions the story as document-analysis rather than hot-take.
Sources & References
Official — White House & Federal Government
Media Coverage
- White House releases national AI policy framework, urging Congress to preempt state laws
- Trump Administration Sends AI Policy Blueprint to Congress
- White House AI framework would block state regulation and tilt copyright fight toward tech
- White House Unveils National AI Policy Framework
- States brace for fight over White House plan to preempt AI laws
- White House delivers AI policy wish list to Congress
- White House Sides With AI Labs On Copyright, Training Data