Video Script (9 min, clean transcript for captioning)
Here's a name you need to know if you don't already: Yann LeCun.
Turing Award winner. Former Chief AI Scientist at Meta. One of the people most responsible for the deep learning revolution that made modern AI possible. And for the last several years, one of the most vocal critics of the direction the AI industry is heading.
He just raised $1.03 billion in seed funding. That's not a Series A. That's not a growth round. That's a seed round — the first institutional money into a company — and it is the largest seed round in European history.
The company is called AMI Labs. Advanced Machine Intelligence. And what they're building is something that LeCun has been arguing we need for years — something he believes is fundamentally more important and more difficult than making better chatbots.
They're building world models.
Let me explain what that means, because it's the key to understanding why this matters.
Large language models — the technology behind ChatGPT, Claude, Gemini — learn by predicting the next word in a sequence. You feed them enormous amounts of text, they learn the statistical patterns in that text, and they get very good at producing text that sounds coherent and correct. That's a remarkable capability. It's also, according to LeCun, not how intelligence actually works.
His argument — and he's been making it publicly for years — is that real intelligence comes from understanding the physical world. From being able to reason about cause and effect. From building a model of how reality works and using that model to predict what happens next when you take an action. Not just predicting the next word in a sentence.
The architecture AMI Labs is building around this is called JEPA — Joint Embedding Predictive Architecture. Instead of training on text prediction, JEPA trains on the structure of sensory experience — what the world looks like, how it changes, what causes what. The goal is an AI that actually understands physical reality, not one that's learned to talk convincingly about it.
Why does this matter right now?
Because the applications that matter most in the next decade — robotics, industrial automation, healthcare — all require AI that can operate in the physical world with real reliability. And that is exactly where current large language models fall apart. They hallucinate. They don't have stable internal models of cause and effect. They can't reliably predict the physical consequences of actions. They're very good at language. They're not good at physics.
AMI Labs is targeting exactly that gap.
Now let's talk about the money, because it tells its own story.
$1.03 billion. $3.5 billion pre-money valuation. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Then you look at who else came in: NVIDIA. Temasek. Samsung. Toyota Ventures. Individual investors including Jeff Bezos, Mark Cuban, Eric Schmidt, and Tim Berne.
That is not a list of people making a speculative bet. That is a list of serious institutional capital and strategic investors who believe world models are real and that LeCun is the right person to build them. NVIDIA's investment is particularly notable — they're the infrastructure layer for essentially all of AI right now, and they just backed the company most explicitly arguing that the current dominant AI paradigm is incomplete.
The company is headquartered in Paris, with offices in New York, Montreal, and Singapore. LeCun left Meta to start this. That's not a small decision. He spent more than a decade building Facebook AI Research into one of the most respected AI research organizations in the world. He walked away from that to start something new.
The specific targets for AMI Labs — industrial, robotic, and healthcare applications — are not accidental. These are sectors where the physical world matters, where reliability matters, where hallucination is not just embarrassing but potentially catastrophic.
Here's the thing I want to sit with for a moment.
The current AI industry is having an enormous debate about scaling laws. The prevailing assumption for the last several years has been that you get better AI by making models bigger, training them on more data, and throwing more compute at the problem. That paradigm has produced remarkable results. GPT-5.4 this week, Claude Opus 4.7, Gemini Ultra — all products of that scaling approach.
LeCun has been one of the loudest voices saying this approach has a ceiling. That you can scale text prediction forever and still not get to genuine understanding of the physical world. That we need a different architecture, a different training paradigm, a different approach to what we're even trying to build.
A billion dollars suggests some very serious people think he might be right.
What happens next is genuinely uncertain. AMI Labs is early stage — this is seed money. They're years away from production deployments at scale. But the research direction, the funding, the team, and the investors all point toward a serious bet on a different future for AI.
If LeCun is right, the current wave of AI — as impressive as it is — is the preamble, not the story.
Pay attention to AMI Labs.
Stay sharp.
— Jane Sterling, Sterling Intelligence
Annotated Script (with b-roll & cut cues)
Here's a name you need to know if you don't already: Yann LeCun.
[B-ROLL: stills:lecun]Turing Award winner. Former Chief AI Scientist at Meta. One of the people most responsible for the deep learning revolution that made modern AI possible. And for the last several years, one of the most vocal critics of the direction the AI industry is heading.
[B-ROLL: finance-charts]He just raised $1.03 billion in seed funding. That's not a Series A. That's not a growth round. That's a seed round — the first institutional money into a company — and it is the largest seed round in European history.
[STAT CARD: "$1.03B seed — largest in European history"] [CUT] [TALKING HEAD — transition] [B-ROLL: company-logo:amilabs]The company is called AMI Labs. Advanced Machine Intelligence. And what they're building is something that LeCun has been arguing we need for years — something he believes is fundamentally more important and more difficult than making better chatbots.
They're building world models.
[VOICEOVER — scene 2] [B-ROLL: ai-abstract]Let me explain what that means, because it's the key to understanding why this matters.
[B-ROLL: company-logo:meta]Large language models — the technology behind ChatGPT, Claude, Gemini — learn by predicting the next word in a sequence. You feed them enormous amounts of text, they learn the statistical patterns in that text, and they get very good at producing text that sounds coherent and correct. That's a remarkable capability. It's also, according to LeCun, not how intelligence actually works.
His argument — and he's been making it publicly for years — is that real intelligence comes from understanding the physical world. From being able to reason about cause and effect. From building a model of how reality works and using that model to predict what happens next when you take an action. Not just predicting the next word in a sentence.
[B-ROLL: screen-capture:jepa-architecture-diagram]The architecture AMI Labs is building around this is called JEPA — Joint Embedding Predictive Architecture. Instead of training on text prediction, JEPA trains on the structure of sensory experience — what the world looks like, how it changes, what causes what. The goal is an AI that actually understands physical reality, not one that's learned to talk convincingly about it.
[/VOICEOVER] [TALKING HEAD — transition]Why does this matter right now?
[VOICEOVER — scene 3] [B-ROLL: robotics]Because the applications that matter most in the next decade — robotics, industrial automation, healthcare — all require AI that can operate in the physical world with real reliability. And that is exactly where current large language models fall apart. They hallucinate. They don't have stable internal models of cause and effect. They can't reliably predict the physical consequences of actions. They're very good at language. They're not good at physics.
AMI Labs is targeting exactly that gap.
[/VOICEOVER] [CUT] [TALKING HEAD — transition]Now let's talk about the money, because it tells its own story.
[VOICEOVER — scene 4] [B-ROLL: finance-charts]$1.03 billion. $3.5 billion pre-money valuation. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Then you look at who else came in: NVIDIA. Temasek. Samsung. Toyota Ventures. Individual investors including Jeff Bezos, Mark Cuban, Eric Schmidt, and Tim Berne.
[STAT CARD: "$3.5B pre-money valuation"] [B-ROLL: company-logo:nvidia]That is not a list of people making a speculative bet. That is a list of serious institutional capital and strategic investors who believe world models are real and that LeCun is the right person to build them. NVIDIA's investment is particularly notable — they're the infrastructure layer for essentially all of AI right now, and they just backed the company most explicitly arguing that the current dominant AI paradigm is incomplete.
[B-ROLL: stills:paris-office]The company is headquartered in Paris, with offices in New York, Montreal, and Singapore. LeCun left Meta to start this. That's not a small decision. He spent more than a decade building Facebook AI Research into one of the most respected AI research organizations in the world. He walked away from that to start something new.
The specific targets for AMI Labs — industrial, robotic, and healthcare applications — are not accidental. These are sectors where the physical world matters, where reliability matters, where hallucination is not just embarrassing but potentially catastrophic.
[/VOICEOVER] [CUT] [TALKING HEAD — transition]Here's the thing I want to sit with for a moment.
[VOICEOVER — scene 5] [B-ROLL: data-center]The current AI industry is having an enormous debate about scaling laws. The prevailing assumption for the last several years has been that you get better AI by making models bigger, training them on more data, and throwing more compute at the problem. That paradigm has produced remarkable results. GPT-5.4 this week, Claude Opus 4.7, Gemini Ultra — all products of that scaling approach.
[B-ROLL: stills:lecun]LeCun has been one of the loudest voices saying this approach has a ceiling. That you can scale text prediction forever and still not get to genuine understanding of the physical world. That we need a different architecture, a different training paradigm, a different approach to what we're even trying to build.
[STAT CARD: "$1,000,000,000 bet on a different paradigm"]A billion dollars suggests some very serious people think he might be right.
[/VOICEOVER] [B-ROLL: ai-abstract]What happens next is genuinely uncertain. AMI Labs is early stage — this is seed money. They're years away from production deployments at scale. But the research direction, the funding, the team, and the investors all point toward a serious bet on a different future for AI.
[CUT] [TALKING HEAD — sign-off]If LeCun is right, the current wave of AI — as impressive as it is — is the preamble, not the story.
Pay attention to AMI Labs.
Stay sharp. — Jane Sterling, Sterling Intelligence
YouTube Description
The biggest seed round in European history just landed — and the person who raised it has spent the last several years telling the AI industry it's building the wrong thing.
Yann LeCun, Turing Award winner and former Chief AI Scientist at Meta, has raised $1.03 billion for AMI Labs — Advanced Machine Intelligence — at a $3.5 billion pre-money valuation. The round was backed by NVIDIA, Jeff Bezos, Mark Cuban, Eric Schmidt, Samsung, Toyota Ventures, and a coalition of top-tier venture funds.
In this video, Jane Sterling breaks down who LeCun is, what world models actually are, why this funding round is significant far beyond its size, and what it means for the future of AI.
Who Is Yann LeCun?
If you're not already familiar with Yann LeCun, here's the essential context.
LeCun is one of three researchers credited with creating the deep learning techniques that power essentially all modern AI. He, Geoffrey Hinton, and Yoshua Bengio shared the 2018 Turing Award — the Nobel Prize of computer science — for their foundational work on neural networks.
He spent more than a decade as Chief AI Scientist at Meta, where he built Facebook AI Research (FAIR) into one of the most respected AI research organizations in the world. He has published hundreds of influential papers. His work on convolutional neural networks is the basis of virtually every modern computer vision system.
He is also one of the most publicly skeptical voices about where the AI industry is currently headed.
What Is the Problem With Current AI?
LeCun's argument, made consistently and publicly over many years, is that large language models — the architecture behind ChatGPT, Claude, Gemini, and essentially every major AI product today — have a fundamental limitation.
They are trained on text. They learn to predict the next word in a sequence. They get very good at this. And because human language encodes an enormous amount of human knowledge and reasoning, they end up appearing very capable. But appearing capable and being capable are not the same thing.
LeCun's position is that these models do not have a genuine understanding of the physical world. They cannot reliably reason about cause and effect in physical systems. They don't have stable internal models of how reality behaves. They hallucinate — confidently asserting things that are wrong — because they are fundamentally pattern-matching engines, not reasoning systems.
For applications where this limitation is acceptable — writing assistance, summarization, code generation, question answering — current LLMs are genuinely powerful tools.
For applications where it is not acceptable — robotics, industrial automation, surgical assistance, autonomous vehicles — current LLMs are structurally unsuitable, regardless of how much larger you make them or how much more data you train them on.
LeCun has been arguing for years that fixing this requires a different approach entirely.
What Are World Models?
The concept at the center of AMI Labs' work is the world model.
A world model is an AI system that learns to represent and reason about physical reality — not by predicting text, but by building an internal model of how the world works, how it changes, and what the consequences of different actions are.
Think of it this way: a child learns that if you push a cup off a table, it falls. Not because they read about gravity, but because they experienced it, modeled it, and now have a reliable internal prediction for what happens next. That's a world model. The child's brain has built a representation of physical reality that allows it to predict the consequences of actions before taking them.
Current LLMs don't have this. They may be able to tell you about gravity in text. They cannot reliably model the physical consequences of gravity in a way that would allow a robot to navigate the world safely.
AMI Labs is building AI systems grounded in sensory experience — visual, tactile, spatial — rather than text. The goal is AI that understands physical reality the way a child eventually does, not AI that has learned to talk about physical reality because it was trained on books written by people who understand it.
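To make "world model" concrete, here is a toy, purely illustrative sketch (no relation to AMI Labs' code, and far simpler than anything they would build): the model is just a transition function fitted from observed state transitions, which can then predict the consequence of a situation before it plays out. It learns that the cup falls from experience, never from text:

```python
import numpy as np

# Toy "world model": learn the transition s_{t+1} = f(s_t) for a falling
# object purely from observed transitions. Illustrative sketch only.

G, DT = 9.8, 0.05  # gravity (m/s^2), timestep (s)

def step(state):
    """True physics: state = [height, velocity]."""
    h, v = state
    return np.array([h + v * DT, v - G * DT])

# Collect "experience": many observed (state, next state) pairs.
rng = np.random.default_rng(0)
states = rng.uniform([0.5, -2.0], [3.0, 2.0], size=(500, 2))
nexts = np.array([step(s) for s in states])

# The toy dynamics are affine, so a least-squares fit of
# s_{t+1} ~ [s_t, 1] @ W recovers them exactly.
X = np.hstack([states, np.ones((len(states), 1))])  # append bias column
W, *_ = np.linalg.lstsq(X, nexts, rcond=None)

def predict(state):
    """The learned internal model: predict the next state before acting."""
    return np.append(state, 1.0) @ W

# The model now "knows" the cup falls, without ever reading about gravity.
s = np.array([1.0, 0.0])  # cup at 1 m, at rest
print(np.allclose(predict(s), step(s), atol=1e-6))  # prediction matches physics
```

Real world models face the same problem with high-dimensional sensory input and nonlinear dynamics, which is where the architecture question below comes in.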
The JEPA Architecture
The technical approach AMI Labs is building around is called JEPA — Joint Embedding Predictive Architecture.
JEPA was developed by LeCun and collaborators over several years of research at Meta FAIR before he departed. Instead of training on next-token prediction, JEPA trains a model to predict the representation of future states in an abstract embedding space — essentially learning to model what the world will look like next without needing to reconstruct every pixel.
This is technically important for several reasons. It is more efficient than reconstruction-based approaches. It focuses learning on the meaningful structure of the world rather than irrelevant surface details. And it is more naturally aligned with how biological intelligence is believed to work — predicting relevant future states rather than trying to reconstruct exact sensory inputs.
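For readers who want the shape of that objective in code, here is a deliberately tiny numerical sketch, assuming nothing about AMI Labs' actual implementation: the encoders and predictor are stand-in linear maps, whereas published JEPA systems such as I-JEPA and V-JEPA use transformer encoders, masking strategies, and an exponential-moving-average target encoder to prevent representational collapse. The point is only where the loss lives: in the embedding space, not the pixel space.

```python
import numpy as np

# Minimal sketch of a JEPA-style objective (illustrative only).
rng = np.random.default_rng(0)
D_IN, D_EMB = 64, 16

# Context encoder, target encoder, and predictor as stand-in linear maps.
enc_ctx = rng.normal(size=(D_IN, D_EMB)) / np.sqrt(D_IN)
enc_tgt = enc_ctx.copy()   # in practice: an EMA copy of the context encoder
predictor = np.eye(D_EMB)  # in practice: a learned network

def jepa_loss(x_context, x_target):
    """Predict the *representation* of the target state, not its pixels."""
    z_pred = (x_context @ enc_ctx) @ predictor  # predicted representation
    z_tgt = x_target @ enc_tgt                  # target representation
    return np.mean((z_pred - z_tgt) ** 2)       # distance in latent space

def reconstruction_loss(x_pred, x_target):
    """Contrast: pixel-level loss an autoencoder-style model minimizes."""
    return np.mean((x_pred - x_target) ** 2)

x_t = rng.normal(size=D_IN)                  # current frame (flattened)
x_t1 = x_t + 0.01 * rng.normal(size=D_IN)    # slightly changed next frame
latent, pixel = jepa_loss(x_t, x_t1), reconstruction_loss(x_t, x_t1)
# The latent loss ignores pixel-level noise the encoder discards;
# the pixel loss must account for every surface detail.
```

The design choice the sketch illustrates is exactly the efficiency claim above: gradients only have to make the abstract prediction right, never to reproduce every irrelevant pixel.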
The Investors And What They Signal
The $1.03 billion round includes names that deserve individual attention.
NVIDIA invested. NVIDIA is the hardware backbone of essentially all AI training and inference. They are not making random bets. Their investment in AMI Labs signals that they believe world model AI will require significant compute — which it will — and that LeCun's approach is credible enough to back at scale.
Bezos Expeditions invested. Jeff Bezos has made several significant personal AI investments. His participation signals confidence in LeCun personally and in the research direction.
Toyota Ventures invested. Toyota's interest is not hard to understand — autonomous vehicles and robotics are exactly the physical-world applications where world models matter most and where current LLMs are most inadequate.
Samsung invested. Samsung manufactures devices, robots, and semiconductors. Their investment signals strategic interest in physical AI that goes beyond chat interfaces.
The presence of this investor mix — infrastructure (NVIDIA), capital (Bezos), automotive/robotics (Toyota), hardware (Samsung), and top-tier venture funds — is not a coincidence. These are all parties with strategic reasons to want physical-world AI to work.
Why This Moment
The timing of AMI Labs' launch and funding is not accidental.
The AI industry is currently at an inflection point. The scaling law approach — making models bigger, training on more data — has produced extraordinary results but is showing signs of diminishing returns at the frontier. The next wave of capability improvements may require architectural innovation rather than simply more compute.
LeCun has been waiting for this moment. His critiques of LLM-based AI have been consistent for years. What has changed is that the industry is now seriously asking whether he's right — and serious institutional capital is voting accordingly.
What To Watch
AMI Labs is early stage. This is seed money. Production deployments at scale are years away.
But the direction is clear: industrial automation, robotics, and healthcare AI built on world models rather than language models. If the approach works, it could unlock AI applications that current LLMs structurally cannot handle.
Watch for research publications from AMI Labs in the next 12-18 months. Watch for JEPA benchmark results against current LLM-based approaches on physical reasoning tasks. Watch for whether Toyota and other industrial investors begin deploying AMI-based systems in production environments.
This is a long game. LeCun is playing it.
Subscribe to Sterling Intelligence for weekly AI news — what's real, what matters, what's coming.
New videos every week.
— Jane Sterling
Some links may be affiliate links. Commission earned at no cost to you. I only recommend what's worth recommending.
Titles
- Top Pick: "Yann LeCun Just Raised $1B to Prove OpenAI Wrong" (48 chars). Personality-first hook plus a direct conflict frame. Names the protagonist, the stake, and the antagonist in one line. Maximum curiosity gap.
- Alternate 1: "The $1B Seed Round That Breaks The LLM Consensus" (48 chars). Data-point-first framing for the analyst audience. "Breaks the consensus" signals contrarian analysis rather than launch coverage.
- Alternate 2: "Why NVIDIA Just Bet Against Its Own Customers" (45 chars). Reframes through NVIDIA's paradox: funding the company arguing current-gen AI is a dead end. High curiosity for business and finance viewers.
Keywords
Thumbnail Brief
Jane's Appearance & Framing
Expression. Serious-composed, slight skeptical lift at one eyebrow. The look of someone relaying a big number with conviction, not surprise. Mouth closed, jaw relaxed.
Head position. Squared to camera, chin slightly dropped for authority. Eye line dead-level. Conveys "this is the real story" rather than reaction.
Wardrobe. Dark blazer over a charcoal tee. No jewelry that catches light. Sterling Intelligence palette only — black, charcoal, single gold accent.
Eye direction. Direct to camera, locked. Alternate take: eyes cut sharply left toward the $1B overlay for a "read the receipt" feel.
Lighting. Key light from upper-left at ~4800K, soft fill on the right at 25%. Deep shadow on the right jaw for dimension. Subtle rim light behind-left to lift her off the background.
Scene setup. Near-black charcoal background with a faint blue-violet gradient upper-right (subtle Meta/Paris nod). Shallow depth of field — Jane tack-sharp, background soft. Optional ghosted JEPA diagram motif at 12% opacity behind her shoulder.
Text Overlay: Variant 1 ("LECUN / $1B")
Position. Right third of the frame, stacked. "LECUN" on top in white, "$1B" directly below in oversized gold.
Font. Inter Black for "LECUN" (authority); JetBrains Mono Bold for "$1B" (reads as data).
Color scheme. "LECUN" in pure white with 3px black stroke. "$1B" in gold (#c8a84b) at 140% scale with a faint white underglow and 3px black stroke.
Accent detail. Small caps header above: "LARGEST SEED IN EUROPE" in 11px gold. Red sub-tag below the dollar figure: "vs OPENAI" in #dc2626 Inter Bold 14px. Makes the conflict instantly readable.
Text Overlay: Variant 2 ("OPENAI IS WRONG")
Position. Lower-left third, three lines stacked tight — "OPENAI" top, "IS" middle small, "WRONG" bottom oversized.
Font. Bebas Neue Bold or Impact, condensed all-caps, tight tracking.
Color scheme. "OPENAI" in white, "IS" in muted gray (#888), "WRONG" in bright red (#dc2626) at 130% scale. 3px black stroke throughout. Faint outer glow on "WRONG".
Accent detail. Gold sub-tag below: "YANN LECUN — $1B — AMI LABS" in Inter Bold 16px #c8a84b. Backs the shock claim with the proof points.
Text Overlay: Variant 3 ("WORLD MODELS")
Position. Centered upper band across the frame, then Jane's face dominant lower two-thirds.
Font. Inter Black all caps, wide tracking (~120), stretched across full frame width.
Color scheme. "WORLD" in white, "MODELS" overlaid in transparent glassy gold (#c8a84b at 80%). 2px black stroke throughout.
Accent detail. Red underline under "MODELS" at 4px. Smaller gold subtitle below: "NOT LLMS — $1B BET" in Inter Bold 18px. Positions the story as paradigm-first rather than personality-first.
Sources & References
Official — AMI Labs & LeCun
Media Coverage
- Yann LeCun's AMI Labs Raises $1 Billion in Largest European Seed Round
- Yann LeCun leaves Meta, launches AMI Labs with record European seed
- Yann LeCun's AMI Labs raises $1.03B to bet against the LLM consensus
- Yann LeCun bets $1 billion that LLMs are the wrong future for AI
- The AI Pioneer Who's Done With Chatbots
- France's AMI Labs closes $1.03B seed led by Cathay, Greycroft, Bezos Expeditions
- NVIDIA, Bezos, Samsung, Toyota all back Yann LeCun's new AI startup