Video Script (9 min, ~1,660 words)
SCENE ONE: THE ANNOUNCEMENT
On Wednesday, April 22, 2026, Google walked onto a stage in Las Vegas and quietly rearranged the map of the AI industry.
They unveiled their eighth generation of AI chips. Not one chip. TWO chips. One built to train the biggest models on the planet. One built to serve those models to users at scale. And buried inside the press release was a customer list that almost nobody saw coming.
Anthropic. Meta. And OpenAI.
Yes. THAT OpenAI. The company that runs on Microsoft Azure, powered by Nvidia GPUs. The company that IS Nvidia's single most valuable proof point. That OpenAI just booked time on Google's chips. Not a test deployment. A real allocation of compute.
That is the story. Not the keynote. The customer list.
Let me rewind and explain what Google actually shipped, because the engineering is genuinely interesting.
For the last decade, AI chips have been general purpose. You train on the same accelerators you serve production inference on. One chip, many jobs. Nvidia built an empire on that model. Google just decided the model is broken.
The new chips SPLIT the workload. The TPU 8t is a training chip. Wire nine thousand six hundred of them together in a single superpod and you get roughly one hundred and twenty one exaflops of compute. Exaflops. In one pod.
The TPU 8i is a completely different animal. It is built for inference. For serving millions of users at the same time. It trades training throughput for memory bandwidth, latency, and price efficiency. Google claims eighty percent better price performance than Ironwood on low latency agent workloads.
This is a bet. A big one. Google is betting that the AI industry is splitting into two different machines. One machine that TRAINS. One machine that SERVES. And they are shipping silicon for both sides of that split while Nvidia is still shipping one general purpose flagship for everything.
And the customers just validated that bet in public.
Sundar Pichai announced the TPU 8 lineup at Google Cloud Next in Las Vegas. In the same keynote, Google launched the Gemini Enterprise Agent Platform, which retires the old Vertex AI branding and now claims more than eight million seats sold across twenty eight hundred enterprises.
Here is the wildest part. They also announced a partnership with, of all companies, Nvidia. Google and Nvidia agreed to co-engineer networking so Nvidia's upcoming Vera Rubin GPUs run efficiently on Google Cloud. Google is shipping chips that compete directly with Nvidia AND selling Nvidia chips to its customers on the same cloud.
Google is not trying to REPLACE Nvidia. Google is trying to make Nvidia OPTIONAL. In a market where Nvidia has had pricing power because nobody else could do the job, optional is a dangerous word.
Alphabet stock climbed more than two percent on the day. Roughly sixty billion dollars in market cap added in one session on the back of a chip announcement.
Now about those numbers.
SCENE TWO: THE NUMBERS
Pay attention, because the architecture matters.
The TPU 8t superpod connects nine thousand six hundred chips using a 3D Torus interconnect. That gives you two petabytes of shared high bandwidth memory across the pod. Three times the compute per pod compared to Ironwood, the previous generation. The single chip peak is about twelve point six petaflops of four bit floating point compute, paired with two hundred and sixteen gigabytes of HBM running at six point five terabytes per second.
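As a sanity check, the per-chip figures multiply out almost exactly to the pod-level claims. A back-of-envelope sketch, using only the numbers quoted above (none independently verified):

```python
# Back-of-envelope check: do the per-chip specs add up to the pod-level claims?
# All figures are the ones quoted in the script, not independently verified.

chips_per_pod = 9_600
pflops_per_chip = 12.6         # claimed peak FP4 petaflops per TPU 8t
hbm_gb_per_chip = 216          # claimed HBM per chip, in gigabytes

pod_exaflops = chips_per_pod * pflops_per_chip / 1_000    # PFLOPS -> EFLOPS
pod_hbm_pb = chips_per_pod * hbm_gb_per_chip / 1_000_000  # GB -> PB

print(f"Pod compute: {pod_exaflops:.1f} FP4 exaFLOPS")  # ~121
print(f"Pod HBM:     {pod_hbm_pb:.2f} PB")              # ~2.07
```

9,600 times 12.6 petaflops is 120.96 exaflops, and 9,600 times 216 gigabytes is about 2.07 petabytes, so the "roughly 121 exaflops" and "two petabytes" pod claims are internally consistent with the per-chip specs.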
Across multiple pods, Google says its infrastructure now supports more than one MILLION TPUs acting as a single cluster. That is an order of magnitude beyond anything publicly deployed today. It is the scale you would need to train a model that makes GPT-5.5 look like a warm up lap.
The TPU 8i is smaller but in some ways more interesting. Ten point one petaflops of FP4 compute. Three hundred and eighty four megabytes of on-chip SRAM, which is triple what Ironwood had. Two hundred and eighty eight gigabytes of HBM at eight point six terabytes per second. Why does that matter? Because inference on mixture-of-experts models is memory bound. More on-chip SRAM and more bandwidth mean lower latency at the same cost. Google specifically claims eighty percent better price performance than Ironwood on low latency agent workloads, and roughly double the performance per watt.
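Here is why "memory bound" matters in practice: during token-by-token decoding, every active weight must stream out of HBM once per token, so bandwidth, not FLOPS, sets the latency floor. A rough roofline-style sketch; the model size is a hypothetical illustration, and only the 8.6 TB/s bandwidth figure comes from the claims above:

```python
# Roofline-style lower bound on decode latency for a memory-bound model.
# The 40B-active-parameter model at 1 byte/param (FP8) is a HYPOTHETICAL example;
# the bandwidth is the 8.6 TB/s figure quoted for the TPU 8i above.

def min_decode_latency_ms(active_param_bytes: float, hbm_bytes_per_s: float) -> float:
    """Time to stream every active weight from HBM once -- a hard floor per token."""
    return active_param_bytes / hbm_bytes_per_s * 1e3

active_bytes = 40e9   # hypothetical MoE model: 40B active params at 1 byte each
bandwidth = 8.6e12    # 8.6 TB/s HBM bandwidth

latency_ms = min_decode_latency_ms(active_bytes, bandwidth)
tokens_per_s = 1_000 / latency_ms

print(f"Latency floor: {latency_ms:.2f} ms/token, at most {tokens_per_s:.0f} tokens/s")
```

Double the bandwidth and the floor halves; add more FLOPS and nothing changes. That is exactly the trade the 8i makes when it gives up training throughput for bandwidth and SRAM.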
The nerd detail worth mentioning. These chips are built on TSMC's two nanometer process. The TPU 8t was co designed with Broadcom, codename Sunfish. The TPU 8i was co designed with MediaTek, codename Zebrafish. Both chips run on Google's own Axion ARM CPU as the host processor.
Google is now running the entire stack. Custom accelerator. Custom CPU. Custom interconnect. Custom data centers. Custom models. Custom cloud. Nvidia does not have that. Amazon's Trainium is close but does not yet have equivalent scale. Microsoft's Maia is further behind. This is a level of vertical integration that only Google has pulled off at this magnitude.
Now the customer allocations, because this is where the story gets genuinely serious.
Anthropic is taking three point five gigawatts of TPU capacity coming online by 2027. That makes Anthropic the largest publicly disclosed TPU customer in the world. Remember, Anthropic also announced a one hundred billion dollar compute commitment to AWS over ten years just two days earlier. The math there is worth sitting with. Anthropic is now stacking gigawatt scale deals across MULTIPLE cloud providers. They are hedging against everyone, including their own investors.
Meta signed what filings describe as a multi-billion-dollar, multi-year TPU deal back in February. Training runs for Muse Spark, Meta's first proprietary closed-weight model, and for the next Llama generation will reportedly use TPU 8 capacity at scale.
And then OpenAI. OpenAI confirmed it is taking TPU allocation on Google Cloud. This is the first time OpenAI has publicly booked compute at this scale on non-Nvidia silicon. The story under the story is that OpenAI is aggressively hedging. Their leadership has said in writing that Microsoft has, quote, limited our ability to reach customers. Amazon became a major OpenAI investor in February. Now Google has silicon in their mix. OpenAI is becoming cloud-agnostic by design.
Pricing. Google does not publish a public rate card for TPUs the way Nvidia does for GPUs. TPU capacity is sold through enterprise negotiation. Early access begins in the third quarter of 2026, with general availability later in the year.
SCENE THREE: THE REAL STORY
Now the part that matters.
Here is what most of the coverage is missing. This announcement is not really about Google picking up share against Nvidia. It is about whether the most profitable moat in modern technology history just started to leak.
Nvidia's market capitalization sits near five trillion dollars. That valuation is built on a single assumption. The assumption is that if you want to train or serve a frontier AI model, you buy Nvidia. That has been true for the entire modern AI era.
That assumption just cracked in public.
When OpenAI, the marquee Nvidia customer, the company whose models run almost exclusively on Nvidia silicon, publicly books capacity on Google TPUs at gigawatt scale, the story Wall Street has been telling itself starts to fray at the edges. Nvidia is no longer the only answer. It is not even the only answer inside OpenAI.
Analyst Patrick Moorhead, who has covered this space for decades, noted that he first predicted TPUs would eat into Nvidia's position back in 2016. That prediction did not pan out for a decade. It is panning out now.
Nvidia is absolutely fine in the short term. Their Blackwell generation is sold out. Rubin, the next generation, is locked in for 2026 and most of 2027. You cannot unwind those contracts. The near term revenue picture does not change.
What changes is the slope of the curve further out. Nvidia's valuation assumes it captures an oversized share of AI infrastructure spend for at least the next five years. If Google, Amazon, and to a lesser degree Microsoft are all shipping credible custom silicon that AI labs are actively buying in gigawatt quantities, that share starts to compress. And when valuations live on the future, compression of the future curve CRUSHES the present stock price.
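To see why a flatter out-year curve hits the stock today, consider a toy discounted-cash-flow sketch. All inputs are invented round numbers for illustration; nothing here models Nvidia's actual financials:

```python
# Toy illustration: present value of 10 years of cash flows under two growth paths.
# ALL inputs are made-up round numbers -- this is NOT a model of Nvidia.

def present_value(base_cash: float, growth: float, discount: float, years: int) -> float:
    """Sum of discounted cash flows: base grows at `growth`, discounted at `discount`."""
    return sum(
        base_cash * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

pv_hot = present_value(100, growth=0.25, discount=0.10, years=10)   # share keeps expanding
pv_cool = present_value(100, growth=0.10, discount=0.10, years=10)  # share merely holds

print(f"Hot curve:  {pv_hot:.0f}")
print(f"Cool curve: {pv_cool:.0f}  ({1 - pv_cool / pv_hot:.0%} lower)")
```

Shaving the assumed growth rate from 25 percent to 10 percent, with identical near-term cash, cuts the present value roughly in half. That is the mechanism: near-term revenue unchanged, valuation crushed.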
There is a counterpoint worth stating honestly. Google's own engineers have admitted that, per chip, a top Nvidia GPU still beats a TPU on peak performance. The argument Google is making is not per chip. It is per dollar. Per watt. Per pod. Per fleet. For most customers, that is the argument that actually matters when you are buying compute by the gigawatt.
On social, Hacker News lit up the night the announcement dropped. One top thread framed it simply. Nvidia has been selling a general purpose tool. Google is selling a purpose built factory. One of those models survives a maturing market. The other has to keep inventing new markets to sustain its margins.
The regulatory layer matters too. The White House National AI Framework specifically called out compute concentration as a risk. Google is now a fully vertically integrated AI stack. Chips, cloud, models, agent platform. Anthropic, Meta, and OpenAI are all customers. Policymakers will notice.
And yet, Google is simultaneously partnering with Nvidia to run Rubin GPUs on Google Cloud. They are not trying to corner the market. They are trying to OWN the floor of the market. Whatever chip the customer wants, Google wants to sell it from a Google data center.
So where does this leave us?
Nvidia is still the undisputed king this year. Google just showed it has a real claim to the throne three years out. The AI labs are hedging aggressively across every vendor they can find, which tells you they are worried about vendor lock in at a level they are not talking about publicly. And the silent winner in all of this might actually be the customer, because for the first time in a decade there is genuine competition in the chip that runs the AI economy.
One announcement. Two chips. Three customers.
The AI hardware monopoly just ended.
Stay sharp.
Jane Sterling, Sterling Intelligence.
OpenAI just booked compute on Google chips — and that single fact may have cracked the most valuable moat in tech history. We break down the numbers, the customer list, and what it really means for Nvidia.
On April 22, 2026, Sundar Pichai unveiled Google's eighth-generation TPU lineup at Google Cloud Next '26 in Las Vegas: TPU 8t for training (co-designed with Broadcom, codename Sunfish) and TPU 8i for inference (co-designed with MediaTek, codename Zebrafish), both on TSMC's 2nm process and running on Google's own Axion ARM CPU host.
The headline specs: 9,600-chip superpods delivering ~121 FP4 exaFLOPS and 2 petabytes of shared HBM, 1M+ TPUs in a single cluster, 3x training throughput per pod vs Ironwood, 80% better inference price-performance, and ~2x perf-per-watt.
The bigger story: Anthropic is taking 3.5 gigawatts of TPU capacity. Meta signed a multi-billion-dollar multi-year deal in February. And OpenAI — historically the Microsoft/Nvidia anchor customer — just confirmed TPU allocation on Google Cloud. That is the first visible crack in the assumption that Nvidia is the only serious substrate for frontier AI.
Alphabet stock jumped ~2.1% on the announcement. Google also announced a partnership with Nvidia to deploy Vera Rubin GPUs on Google Cloud's A5X infrastructure, launched the Gemini Enterprise Agent Platform (replacing Vertex AI branding, 8M+ seats across 2,800+ enterprises), and signaled that general availability for TPU 8 lands later in 2026 with early access in Q3.
In this episode, Jane Sterling breaks down the architecture split between training and inference silicon, the full customer hedge happening across Anthropic, Meta, and OpenAI, the circular finance and compute concentration questions policymakers are starting to ask, and what the end of the Nvidia monopoly means for developers, enterprises, and investors.
⏱ Timestamps
00:00 Scene One — The Announcement
03:00 Scene Two — The Numbers
06:00 Scene Three — The Real Story
🔔 Subscribe to Sterling Intelligence for weekly AI coverage that cuts through the hype.
https://www.youtube.com/@SterlingIntelligence
No hype. No filler. Just the signal.
— Jane Sterling, Sterling Intelligence
#Google #TPU8 #Nvidia #OpenAI #Anthropic #Meta #GoogleCloud #CloudNext2026 #AIChips #AIInfrastructure #GeminiEnterprise #VeraRubin #Ironwood #AINews #SterlingIntelligence #JaneSterling #AIEconomy #BigTech
Top Pick
Google Built The Chip That Made OpenAI Switch
Leads with the most surprising fact of the announcement (OpenAI on Google silicon), frames it as a causal narrative, and creates a curiosity gap that compels the click for anyone who associates OpenAI with Nvidia GPUs.
Alternate 1
Nvidia Just Lost OpenAI. Google Is Why.
Hard, punchy, claim-first. "Lost OpenAI" is a provocation that will pull in the finance and chip-investing audience. Short enough to dominate any thumbnail layout.
Alternate 2
1 Million Chips. 2 Chips In One. Nvidia's Problem.
Numbers-forward title that leans on the 1M-chip cluster and the training/inference split. Appeals to the AI infrastructure and data-center audience specifically, where the spec sheet is the story.
Google TPU, TPU 8t, TPU 8i, Google Cloud Next 2026, Ironwood TPU, Nvidia, Vera Rubin, Blackwell, OpenAI, Anthropic, Meta, Sundar Pichai, Axion CPU, Broadcom, MediaTek, TSMC 2nm, Gemini Enterprise Agent Platform, Vertex AI, AI chips, AI accelerator, AI inference, AI training, mixture of experts, HBM, AI infrastructure, AI hypercomputer, Alphabet stock, AI news 2026, Sterling Intelligence, Jane Sterling
Jane's Appearance & Framing
Expression. Quietly knowing, half-smile held in reserve. One eyebrow lifted maybe 3mm on her right side. Mouth closed, lips flat with the faintest upward tension at one corner. Not shock. Not glee. The face you make when you've just noticed a detail buried in a press release that the headline writers missed.
Head position. Square to camera, chin level, subtle forward lean of about 5 degrees. Communicates "I've done the math, let me show you what I found." Not reactive. Authoritative.
Wardrobe. Dark structured blazer, graphite or deep navy. No jewelry. Sterling Intelligence house style. Visual gravity stays on the numbers and the face.
Eye direction. Direct to camera, locked. Alternate take: eyes flicked slightly toward the chip overlay / number stack, selling the "look at what they just did" read.
Lighting. Hard key light from upper-left, deep shadow on the right side of face. Color temp around 4500K. Warm rim light from behind on the hair to separate from the background. Subtle cyan practical glow far behind her right shoulder suggesting data center. Mood: server hall at 3am, not studio.
Scene setup. Background is near-black with a faint grid of green indicator lights at ~8% opacity far behind her (suggesting a TPU pod without being literal). Very shallow depth of field. Optional: a ghosted Google "G" mark at ~12% opacity camera-left behind her shoulder; balance with an Nvidia eye mark at the same opacity camera-right, crossed out with a thin gold strike-through. Makes the thesis readable in half a second.
Option 1 — Best (OpenAI Defection Angle)
OPENAI SWITCHED
Position. Bottom-third, left-aligned, large block stacked on two lines ("OPENAI" on top, "SWITCHED" below).
Font. Bebas Neue Bold or Impact, all caps, tight tracking, slight italic skew of 3 degrees for forward energy.
Color scheme. "OPENAI" in pure white (#ffffff) with a 3px black stroke. "SWITCHED" in gold (#c8a84b), 110% size with subtle outer glow. 3px black stroke around both blocks for legibility against any background.
Accent detail. Small red tag above the text: "TO GOOGLE" in Inter Bold, 16px, #dc2626 with 2px white stroke and a tiny arrow pointing at "OPENAI". The red contrasts the gold and signals the conflict-of-interest story the title sets up.
Option 2 — Scale Angle
1,000,000 CHIPS
Position. Centered upper-third. Oversized.
Font. JetBrains Mono Bold or a bold geometric sans (Eurostile) for the numeral to read as "spec sheet." Inter Black for "CHIPS."
Color scheme. "1,000,000" in pure white with a soft cyan glow behind it, commas in gold. "CHIPS" in muted gray underneath at 60% size. Thin gold underline at the baseline.
Accent detail. A single red word above: "ONE CLUSTER." in Inter Bold, 14px, #dc2626, all caps, with a thin red line running under it. Sells the superpod / fleet-scale angle for the infrastructure audience.
Option 3 — Nvidia Threat Angle
NVIDIA'S MOAT
Position. Centered, with a bold red strike-through running diagonally across "MOAT" at a 12 degree angle.
Font. Inter Black, all caps.
Color scheme. "NVIDIA'S" in pure white. "MOAT" in Nvidia's brand green (#76b900), with a thick red (#dc2626) strike-through. 3px black stroke on all text.
Accent detail. Below the main text: "CRACKED." in Bebas Neue, 32px, gold (#c8a84b), with a thin lightning-bolt icon replacing the period. Best for the finance / stock-investing audience where the Nvidia valuation thesis is the story.
Media Coverage
- Google Cloud launches two new AI chips to compete with Nvidia (TechCrunch · April 22, 2026)
- Google unveils chips for AI training and inference in latest shot at Nvidia (CNBC · April 22, 2026)
- Google dual tracks TPU 8 to conquer training and inference (The Register · April 22, 2026)
- Google doesn't pay the Nvidia tax. Its new TPUs explain why. (VentureBeat · April 22, 2026)
- Google launches Ironwood TPU and previews eighth-gen split into training and inference chips at TSMC 2nm (The Next Web · April 22, 2026)
- Google launches Gemini Agent Platform, eighth-generation TPUs (Computer Weekly · April 22, 2026)
- Google shares jump on new TPU 8 chips, enterprise agent platform, and partnership with Nvidia (Sherwood News · April 22, 2026)
- Google unveils 8th-gen TPUs, agent platform, and Workspace AI layer at Cloud Next '26 (The Decoder · April 22, 2026)
- Google introduces Gemini Enterprise Agent platform, new AI chips, and more at Cloud Next '26 (The Tech Portal · April 23, 2026)
- Google bets on agentic AI with AI Hypercomputer: 8th-Gen TPUs, Nvidia Rubin, Axion CPUs (WCCFTech · April 22, 2026)