
The $50 trillion bubble leaks: How the Pentagon accidentally broke the AI industry

Tech Talks
🤡🎪🎈📯The Whole Circus
Published on 17 March 2026 ☕ 7 min read
[Image: Satirical US Department of War seal criticizing military use of generative AI — a cybernetic bald eagle with glowing red eyes, a circuit board shield, and binary code sun rays. Banners read "Palantir Approved", "Operationalized via Claude and OpenAI Deals", and "LLM Hallucination Liability Waived", mocking defense tech contracts.]

If you want to understand the absolute shit show currently consuming the tech world, ignore the PR spin. Millions of blissfully oblivious people are currently deleting ChatGPT, downloading Anthropic's Claude, and patting themselves on the back for supporting a pro-ethics brand. Meanwhile, Big Tech executives are sweating bullets, and the entire generative AI financial bubble is deflating in real time.

This crisis is entirely driven by corporate hubris, astronomical cash burn, and a massive capability bluff being called by a US military establishment that possesses zero technical literacy.

Here is the full, unvarnished reality of what is actually happening.

The great capabilities bluff gets called:

For the last three years, the AI industry survived on insane, futuristic hype. Companies like OpenAI and Anthropic convinced investors to part with tens of billions of dollars by promising Artificial General Intelligence was right around the corner. They marketed their language models as infallible reasoning engines.

When you spend years telling Washington your software is a borderline omniscient super-brain, the Pentagon will eventually ask to use it like one.

The build-up started earlier than people realise. OpenAI quietly dropped its ban on military and warfare use back in January 2024. By late 2024, Anthropic had happily partnered with Palantir and AWS to get Claude approved for classified military networks. By mid-2025, the Pentagon handed out massive $200 million prototype contracts to OpenAI, Anthropic, Google, and xAI.

The collision happened in early 2026. Defense Secretary Pete Hegseth and the Trump administration demanded an "any lawful use" clause. This essentially required tech companies to strip away all safety guardrails so the military could use the models for mass domestic surveillance and fully autonomous lethal weapons.

Dario Amodei refused. He had to stand in front of the world and admit that current frontier AI hallucinates, lacks critical judgement, and simply does not have the technical capabilities to handle fully autonomous weapons without putting troops at risk. He was forced to puncture his own company's hype balloon.

The blacklist and the legal sleight of hand:

Because Anthropic refused to hand over the keys to an unpredictable technology, the US government threw an unprecedented tantrum. Hegseth officially designated Anthropic a "supply chain risk" to national security. This is a label historically reserved for hostile foreign adversaries like Huawei.

President Trump boasted about firing them like dogs and threatened massive civil and criminal consequences. Hegseth went on social media to claim that absolutely no contractor doing business with the military could conduct commercial activity with Anthropic.

This caused sheer panic across the enterprise sector. However, the official legal designation delivered on 4 March was actually much weaker. It only banned Claude's direct use on Department of Defense contracts. Hegseth lied online to cause maximum commercial damage to Anthropic, knowing the actual law would not let him go that far.

Peak delusional GenAI bullshit and a horrific school bombing:

Consumers reacted by triggering a massive exodus from ChatGPT. The #QuitGPT movement exploded, and Claude shot up to become the number one free app on the Apple App Store.

The absolute peak irony of this entire spectacle is just how uninformed the general public is. These millions of users migrating to Claude for ethical reasons could literally open the app, type "Did Claude help bomb a school in Iran?" into the prompt box, and read the brutal truth for themselves.

They are completely ignoring the fact that Anthropic fought hard for those $200 million military contracts. Claude is already sitting on classified Palantir servers, powering the Maven Smart System to process intelligence data and help the military find targets.

We now know exactly what happens when trigger-happy procurement officers believe Silicon Valley marketing. GenAI is fundamentally just a next-token predictor. It is highly advanced autocorrect. It has no spatial awareness and no real-world logic. Yet the US military treated it like a flawless tactical genius.
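The "advanced autocorrect" claim can be made concrete with a toy bigram model — a deliberately crude sketch on a hypothetical ten-word corpus, orders of magnitude simpler than any real transformer, but the same basic idea: the model emits whichever token most often followed the previous one, with zero grasp of what the words actually refer to.

```python
# Toy next-token predictor (hypothetical corpus, illustration only).
# A bigram model: it only knows which word tends to follow which.
corpus = "the strike hit the target the strike hit the school".split()

# Count how often each token follows each other token.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_token(prev):
    """Greedily pick the most frequent continuation -- pure pattern matching."""
    options = counts.get(prev, {})
    return max(options, key=options.get) if options else None

# Generate a few tokens from a seed word.
out = ["the"]
for _ in range(4):
    tok = next_token(out[-1])
    if tok is None:
        break
    out.append(tok)
print(" ".join(out))  # loops on the statistically likeliest pattern
```

The output is fluent-looking word salad driven purely by frequency, which is the point: scale this up by a few billion parameters and you get prose that sounds authoritative, not a system with spatial awareness or real-world logic.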

On 28 February, the US military launched a Tomahawk missile that struck the Shajareh Tayyebeh elementary school in Minab, Iran. At least 175 people were killed, the vast majority of them young girls. While President Trump initially tried to blame Iran, US military investigators have since confirmed it was an American strike.

How was the target selected? The US military used Anthropic's Claude, paired with Palantir's systems, to generate the target coordinates for the opening phase of the war. They fed messy, outdated intelligence data into a chatbot, the AI hallucinated a connection between a civilian school and a military target, and clueless human operators pulled the trigger.

This is peak delusional GenAI bullshit. The military took a glorified text generator, gave it access to a kill chain, and wiped out 175 civilians. Anthropic got kicked out of the Pentagon over a technical liability dispute, and an incredibly gullible public somehow crowned them the pacifist kings of Silicon Valley, entirely ignoring the blood already on the algorithm's hands.

To make matters perfectly hypocritical, the US military is quietly continuing to use Claude for target analysis in these ongoing operations. The government is publicly executing the company for PR points while privately relying on their hallucinating software to drop bombs.

OpenAI's desperation and the Altman miscalculation:

While Anthropic stood its ground over liability limits, Sam Altman immediately swooped in. OpenAI signed the contract to put a custom version of ChatGPT onto the military's GenAI.mil platform, instantly serving 3 million defence personnel.

Altman was utterly blindsided by the sheer ferocity of the backlash. He genuinely thought the controversy would blow over in a standard 48-hour Twitter storm. He had absolutely no idea the consumer boycott would become this intense. But he was desperate. OpenAI is staring down a catastrophic $14 billion cash burn hole this year. To justify their insane valuations, they desperately need the bottomless pockets and the good graces of the military-industrial complex.

The PR damage for OpenAI is catastrophic, and it just got fundamentally worse. Public records recently revealed that OpenAI President Greg Brockman donated a staggering $25 million to Trump's MAGA Inc. super PAC. He is one of their largest donors. OpenAI is literally bankrolling the exact political administration that is currently threatening to destroy their rivals.

The clueless engineer revolt:

Right now, OpenAI and Google are facing a massive internal revolt. Over the weekend of 7 March, Caitlin Kalinowski, OpenAI's head of robotics, publicly resigned in protest over the rushed Pentagon deal. Nearly 900 engineers from across Big Tech have signed petitions backing Anthropic.

These developers are completely missing the irony of their own existence. They know firsthand that generative AI is a total joke when it comes to complex military targeting. Yet they are protesting the exact revenue stream required to sustain their industry. Without these massive government contracts to subsidise the insane compute costs, the venture capital dries up and their half-million-dollar salaries disappear overnight.

The real reason Big Tech is terrified:

Google, Microsoft, and AWS are loudly expressing solidarity with Anthropic. They do not care about ethics. They smell blood in the water.

If the Pentagon can legally use the "supply chain risk" label to force companies to strip away safety guardrails, Big Tech loses control over its own proprietary AI architectures. They know their models hallucinate. If the government forces them to take the training wheels off, and that AI subsequently causes another catastrophic friendly fire or civilian casualty incident, these mega-corporations will face astronomical legal and financial liability. They are hiding behind Anthropic because Anthropic is currently taking the bullets for a capability crisis that affects the entire industry.

The verdict:

This is the ultimate poetic irony of the crisis. By trying to outmaneuver each other, both companies have walked directly into a financial death trap.

Anthropic is currently choking on its own success. Millions of new #QuitGPT users are crashing their servers. Consumer traffic is a massive money-loser for AI labs because free users instantly burn compute cash. Anthropic just lost access to the largest software buyer in the world, and in exchange, they received an army of unprofitable consumers. Their upcoming Wilson Sonsini IPO, which was aiming for a $350 billion valuation, is now severely crippled because their Total Addressable Market has a massive federal ceiling placed over it.

OpenAI traded consumer trust for a $200 million government prototype contract. That money will barely cover a fraction of their annual losses, and government procurement pays notoriously slowly. Altman needed every single paid customer he could get to cut the burn rate. Instead, he triggered a mass exodus and is now turning OpenAI from a tech darling into a defence contractor, a sector that historically trades at much lower revenue multiples.

It might be an exaggeration to say the generative AI bubble has entirely burst today. The average retail investor is just as clueless as the #QuitGPT crowd, easily distracted by the shiny pro-humanity PR spin and completely ignorant of the actual hardware costs.

However, the institutional money sees the writing on the wall. The US military accidentally exposed the fatal flaw of the industry by calling the tech sector's bluff. Wall Street just watched the world's top engineers confess that the technology is highly unpredictable and fundamentally incapable of operating autonomously. If the AI requires a human babysitter for the military, it requires a human babysitter for a bank or a hospital. The Return on Investment math for AGI no longer makes any sense, and the illusion of an autonomous, trillion-dollar super-brain is completely dead.