
Zuck Cheats on Jensen: The Silicon Valley Herd Just Decided Nvidia is Optional

Tech Talks

Published on 25 November 2025

Meme: Patrick Star sitting alone in a server rack holding a massive Nvidia GPU, captioned "2026-2027: Where did everybody go?", representing the Nvidia AI bubble burst and customers leaving for Google chips.

The news that Meta is planning to buy Google's AI chips is the signal that the Nvidia "infinite growth" narrative is dead. Here is the breakdown of why whales like OpenAI and Meta are ditching Nvidia for Google TPUs and custom silicon, and why the AI bubble is about to leave Larry Ellison holding the bag in 2026 - 2027.

If you listened closely yesterday, you could actually hear the sound of Jensen Huang panic-scrolling Vinted to see how much his leather jacket is worth second-hand. Rumour has it he was last seen trying to trade it for a Google fleece and a "Noogler" hat.

The news that Facebook is in talks to rent and eventually buy Google's TPUs is the final nail in the coffin of the "infinite growth" narrative. Facebook is currently Nvidia's absolute biggest whale, the company actively bulldozing half the planet to build data centres the size of small countries. Now they are shopping at Google.

We are deep in late 2025 and the reality is setting in. The whales are migrating, and they aren't swimming towards Nvidia.

The "Training" vs "Inference" Trap:

Before we laugh at the bagholders, let’s get the technical boring bit out of the way because it explains why Nvidia is about to hit a brick wall.

There are two stages to this AI grift. First is Training, which is building the brain. This requires massive, raw power and is Nvidia’s H100/Blackwell territory. It costs billions. Second is Inference, which is using the brain. This is you asking ChatGPT to write a passive-aggressive email to your landlord. This requires efficiency and low cost.

For the last three years, everyone was panic-buying Nvidia cards for training. But now the models are built. We are entering the inference era. Using a £30,000 Nvidia GPU to generate a picture of a cat is like using a Ferrari to deliver a Domino's pizza. It works, but the fuel costs will bankrupt you.
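To make the Ferrari-versus-pizza-van point concrete, here is a back-of-envelope cost sketch. Every number in it is a hypothetical assumption for illustration only (hardware prices, power draw, lifetimes, and token throughput all vary wildly in practice); the point is the shape of the maths, not the figures.

```python
# Back-of-envelope inference cost comparison.
# All figures are hypothetical illustrations, not real vendor pricing.

def cost_per_million_tokens(hw_price_gbp, lifetime_hours, power_kw,
                            electricity_gbp_per_kwh, tokens_per_second):
    """Amortised hardware cost plus electricity, per million tokens served."""
    hourly_hw = hw_price_gbp / lifetime_hours          # straight-line amortisation
    hourly_power = power_kw * electricity_gbp_per_kwh  # running cost
    tokens_per_hour = tokens_per_second * 3600
    return (hourly_hw + hourly_power) / tokens_per_hour * 1_000_000

# Hypothetical "Ferrari": a £30k training-class GPU doing inference.
gpu = cost_per_million_tokens(30_000, 26_280, 1.0, 0.25, 3_000)

# Hypothetical "van": a cheaper, slower, inference-first accelerator.
asic = cost_per_million_tokens(8_000, 26_280, 0.4, 0.25, 2_000)

print(f"GPU:  £{gpu:.2f} per million tokens")
print(f"ASIC: £{asic:.2f} per million tokens")
```

Even though the assumed accelerator is slower, the amortised cost per token comes out lower, which is the whole inference-era argument in one function.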

The Great Plateau:

Let's be real. When GPT-5 dropped back in August, it was better, sure. But was it "change civilisation" better? No.

We have hit the law of diminishing returns. Throwing another 100,000 GPUs at a model isn't making it 100% smarter anymore. It is making it 5% smarter for 500% of the cost. The training gold rush is slowing down because the shovels have become too expensive for the amount of gold we are actually finding.
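The diminishing-returns claim can be sketched with a toy logarithmic scaling curve. The curve shape and constants here are illustrative assumptions, not measured scaling laws:

```python
import math

def toy_capability(gpus):
    """Toy model: capability grows with the log of compute, so each
    doubling of GPUs buys a fixed absolute gain that shrinks as a
    percentage of what you already have."""
    return 100 * math.log2(gpus)

for n in (25_000, 50_000, 100_000, 200_000):
    gain = toy_capability(n * 2) - toy_capability(n)
    pct = gain / toy_capability(n) * 100
    print(f"{n:>7,} -> {n * 2:>7,} GPUs: +{pct:.1f}% capability for +100% cost")
```

Under this toy curve, doubling from 100,000 to 200,000 GPUs buys roughly a 6% capability bump for a 100% cost bump, which is exactly the "expensive shovels, scarce gold" problem.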

The Traitors are Leaving the Ship:

Look at the guest list for Nvidia’s pity party. It is literally everyone who matters.

Facebook: They are pivoting to Google TPUs. For them to consider buying Google hardware is a massive shift. Google famously never sells its chips. This shows just how desperate Zuckerberg is to stop paying the Jensen Tax.

OpenAI: The poster child for the boom is now two-timing Microsoft. They are designing their own chips as a daily driver for inference and renting Google TPUs. They know the future is cheap inference rather than brand loyalty.

Microsoft: They are pushing their own Maia chips for 2026. Are they as fast as Nvidia’s? No. Do they need to be? Also no. They just need to run Outlook summaries without costing £4 per email. Efficiency is the new benchmark rather than raw speed.

Google: The smug winner. The release of Gemini 3 Pro and Nano Banana Pro proved a fatal point. You do not need Nvidia to be State of the Art. Google’s entire stack is home-cooked, it is cheaper, and it works better. The "Nvidia Moat" myth is officially dead.

The Bagholders: Oracle and The Ego

This leaves us with the "Greater Fools" left holding the silicon bags when the music stops in 2026.

First is xAI. Elon Musk is currently building the world's largest cluster to run Grok. Why? Because his ego demands it. Let's look at the numbers. Grok 4.1 launched just last week, claiming to beat Gemini in benchmarks. But the numbers that matter are the real shock.

Reports from earlier this year showed xAI burning through nearly $1 billion a month to build out the "Colossus" cluster, all to chase a market share that is still sitting at less than 1%. Spending $13 billion a year to generate $500 million in revenue isn't a business plan, how's this guy "world's richest man" again? Unless Elon pivots to Tesla's Dojo chips, he is just setting money on fire for measuring contests with Zuck and Altman.
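Taking the article's reported figures at face value (roughly $13 billion a year in spend against about $500 million in revenue, both rough estimates), the arithmetic is brutal:

```python
# Figures as reported in the post; treat them as rough estimates.
annual_spend_usd = 13_000_000_000   # ~ $1B+/month build-out
annual_revenue_usd = 500_000_000

burn_multiple = annual_spend_usd / annual_revenue_usd
print(f"Spending ${burn_multiple:.0f} for every $1 of revenue")  # $26

# Even a 10x revenue jump would still leave spend well above revenue.
print(f"After 10x revenue growth: {annual_spend_usd / (annual_revenue_usd * 10):.1f}x spend/revenue")
```

A 26x burn multiple means even a tenfold revenue jump leaves xAI spending over two and a half times what it earns.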

Then there is Oracle, the biggest sucker at the table. Larry Ellison is buying up every Nvidia chip he can find, building massive "superclusters", and renting them out on a "first come, first served" basis. Larry’s entire business plan is based on owning the Ferraris and hoping people pay £10/hour to rent them.

But what happens in 2026 when Google and Microsoft offer "good enough" vans for £2/hour? Who is going to rent Larry’s Ferraris? Oracle has no chips of their own. They have no GenAI models of their own. They are a glorified landlord in a market where all the tenants are buying their own houses. When the backlog clears and the hype dies, Oracle is going to be left with warehouses full of depreciating Blackwell chips that nobody can afford to turn on.
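Larry's landlord maths only works while rental rates stay high. A minimal payback sketch, where the chip cost and utilisation rate are illustrative assumptions:

```python
# Can a rented-out accelerator pay for itself before it depreciates?
# All inputs are hypothetical illustrations, not real Oracle pricing.

def payback_years(chip_cost_gbp, rate_gbp_per_hour, utilisation=0.7):
    """Years of rental income needed to recover the hardware cost,
    assuming the chip is rented out for a fraction of each year."""
    billable_hours_per_year = 24 * 365 * utilisation
    return chip_cost_gbp / (rate_gbp_per_hour * billable_hours_per_year)

chip = 30_000  # hypothetical all-in cost per accelerator

print(f"At £10/hr: {payback_years(chip, 10):.1f} years to break even")
print(f"At  £2/hr: {payback_years(chip, 2):.1f} years to break even")
```

The payback period scales inversely with the hourly rate, so a price war that drags £10/hour down to £2/hour stretches break-even fivefold, potentially past the useful life of the silicon.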

The Verdict

That recent Q3 earnings beat looks like the last hype-level win for Nvidia coming into 2026. It is a lagging indicator. It is the sound of them finally delivering the chips people panic-ordered eighteen months ago. The backlog is literally just a queue of people who are actively building an exit plan.

The smarter, more pragmatic GenAI whales are starting to refuse the Jensen tax. With the bubble hovering right behind them, Nvidia will still have business, no doubt. They will keep selling the big racks to those who need physical hardware, like nation states a couple of years behind the times that suddenly decided they need 'AI'. But the infinite-growth gold rush, once the backorders have been fulfilled, is quite over.

The customer base is shifting right before our eyes. The Smart Money is leaving. Meta, Amazon, and Google are moving to custom silicon and cheap inference. That leaves the Dumb Money. Nvidia’s future is now reliant on Larry Ellison trying to corner the rental market, and Saudi princes stacking H100s in the desert to look important.

Come 2026 - 2027, the backlog will clear, and the infinite demand will vanish as the hyperscalers switch on their own massive inference farms. More and more, it looks like Google might be the most insulated from this entire GenAI bubble. Everything they have is built on their own hardware and is fully integrated. Despite their awful track record of killing every product they touch and being generally incompetent at product management, the TPU might just be the best pivot they made in the mid-2010s.
