Is AI Heading for a Subprime Moment? Maybe. But That's Not the Whole Story.

We're all living through the public AI boom. OpenAI, the poster child of this revolution, is burning through billions of dollars a year. Despite being valued higher than some countries' GDPs, it's still not profitable. Anthropic is in a similar boat. Even their downstream customers, such as Anysphere, the company behind Cursor, are struggling to maintain stable pricing, which is unusual for a product supposedly riding a tidal wave of demand.
Ed Zitron calls it the "subprime AI crisis," and whether or not you agree with the metaphor, the analogy to the 2008 housing crash makes you sit up and take notice. It implies we've all over-leveraged a shiny idea we don't fully understand, convinced it'll only go up and to the right.
So... is he right?
Honestly, kind of. But there's a deeper story. One that isn't about collapse but about misalignment, hype cycles, and the uncomfortable adolescence of a world-changing technology.
The Numbers Don't Lie (But They Don't Tell the Full Truth Either)
OpenAI is leaking money like a punctured tanker. Reportedly, $5 billion in losses this year alone, despite raising over $60 billion in private funding. What most people miss is that ChatGPT loses money on usage itself. Each query may only cost fractions of a cent, but multiply that by hundreds of millions of users, and you're running up Everest-sized compute bills every month.
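To see how "fractions of a cent" becomes billions, here's a rough back-of-envelope sketch. Every number in it is an illustrative assumption, not a figure from OpenAI:

```python
# Back-of-envelope inference cost. All numbers below are illustrative
# assumptions chosen for scale, not actual OpenAI figures.
cost_per_query = 0.005           # assume half a cent of compute per query
users = 300_000_000              # assume ~300M active users
queries_per_user_per_day = 5     # assume a handful of queries each, daily

daily_cost = cost_per_query * users * queries_per_user_per_day
annual_cost = daily_cost * 365

print(f"Daily compute bill:  ${daily_cost:,.0f}")   # ~$7,500,000
print(f"Annual compute bill: ${annual_cost:,.0f}")  # ~$2,737,500,000
```

Billions a year in inference alone, under fairly conservative assumptions, and before you count training runs, salaries, and data-center buildouts. Free users generate that cost without generating revenue.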
And these costs aren't going down. GPT-4o helped optimize things, but training, fine-tuning, inference, and hosting all remain expensive. Building general-purpose intelligence is like trying to solve a trillion-dollar equation on 2008 internet margins.
What's worse, many companies downstream of OpenAI, meaning the startups that rely on its API to power their products, are now passing those costs on to users. Think of it like a food chain: when the lion gets hungry, everything below it feels it.
That's what happened with Cursor. After raising nearly $1 billion, Anysphere had to increase prices significantly. Users revolted. Reddit lit up. Developers called the new pricing "garbage."
It's not a good look.
But before we declare an AI crash imminent, let's ask: what exactly are we measuring?
Valuation ≠ Value (And Vision ≠ Viability)
There's a peculiar phenomenon in tech where valuation becomes its own gravitational force. If you're worth $60 billion, you must be doing something revolutionary, right?
Maybe. But value creation doesn't always follow the same timeline as capital creation.
AI is being valued as if it were already Google, Apple, or Amazon. But isn't it closer to YouTube in 2006: massive engagement, unclear monetization? We haven't yet figured out the best use cases. We're still tossing spaghetti at the wall. And most of it slides off.
Right now, the primary beneficiaries of AI are the infrastructure providers. Microsoft. Nvidia. AWS. That's where the margins live. Everyone else is selling hope. So yes, a "crisis" is brewing. But it's not necessarily a bubble in the classic sense. It's more like a business model misfire: VCs and founders assumed that value capture would naturally follow impressive demos.
Spoiler: it rarely does.
Does AI Actually Work?
Here's the uncomfortable question no one wants to ask in public: is this stuff actually that useful? Yes, we know about the productivity gains, the copilots, the coding helpers, translation, and summarization. All real, all fair. But are people paying for it at scale? There's a difference between liking something and needing it.
AI is an incredible tool in the right hands, a point I recently discussed with Alistair McDermott. His framing: people need to learn how to drive AI. Most will become competent drivers, while a select few will become expert drivers (think Formula One). Where AI meets expertise is where the most valuable outputs occur. My take: there will probably be three groups: drivers, expert drivers, and people who simply consume the outputs.
Generative AI today still solves for curiosity more than necessity. It's a helpful assistant. A creative muse. A summarizer of things you weren't going to read anyway. But how many companies are making mission-critical decisions based on it every day? (More and more?)
A fair counterpoint: most users haven't yet restructured their workflows around AI because the tech is still new. But the longer that takes, the more money burns and the greater the pressure to prove commercial viability. That's the gap Zitron is warning about.
The tech is remarkable. But it's not infrastructure yet. It's a layer on top of other things. One that could be peeled off pretty fast when the bill goes up.
Here's What Almost No One's Talking About
Let's talk about alignment. Not the AI ethics kind (important, but separate). I mean business alignment. There's a fundamental disconnect between the incentives of AI companies and those of their customers.
Take OpenAI. Their core goal, explicitly stated, is to build artificial general intelligence (AGI). Not revenue. Not product-market fit. Not developer happiness. Just AGI.
This makes them deeply unusual, almost missionary in their ambition. It's like selling people roads and cars today while secretly trying to invent teleportation.
When they launch new models, they don't always prioritize stability, clarity, or cost. They optimize for pushing the frontier. For impressing people. For showing they're ahead.
That's fine if you're a research lab. It's a problem if you're powering hundreds or thousands of businesses that demand consistency. (Or should businesses just be more agile?)
This misalignment creates tension. Suddenly, your coding assistant jacks up its prices. Your AI co-writer gets weirdly slow. Your customer support bot goes down for maintenance. Why? Because OpenAI (or any other major AI player) decided it was time to unify models, reroute infrastructure, or focus on a better fine-tuning pipeline.
That's not evil. It's just difficult for anyone depending on them.
Where Does This Go?
Let's walk through a few scenarios:
1. A correction hits.
VCs get cold feet. Startups relying on LLM APIs implode or consolidate. Prices go up. Users churn. Fewer companies try to build on top of OpenAI or Anthropic. It's a mini AI winter. Not a total collapse but a pullback.
2. A killer app emerges.
It could be AI-native search. (Watch OpenAI closely here.) Or AI workflow agents. Or personal health copilots. Something finally proves sticky, scalable, and unavoidable. That creates a new revenue engine and recalibrates the hype into actual utility.
3. Open-source wins.
Cheaper, leaner models such as Mistral, LLaMA, and Phi start outperforming closed models in practical use cases. Companies migrate to open-source stacks to reduce dependency and cost. OpenAI loses its moat (see the sketch after this list for how thin it already is).
4. Hybrid models emerge.
AI becomes invisible. Not a product but a feature. It quietly powers tools you already use. Microsoft is betting on this. So is Google. That's arguably the most sustainable path, but the least exciting.
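On scenario 3: part of what makes the moat thin is that most open-model servers (vLLM, Ollama, llama.cpp, and friends) already speak an OpenAI-compatible API. A minimal sketch of what a migration can look like, assuming an Ollama server running locally with an open model pulled down; the endpoint and model name here are assumptions, not a prescribed setup:

```python
from openai import OpenAI

# Same client, same code paths: just point it at a local,
# OpenAI-compatible server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumption: Ollama's default local endpoint
    api_key="unused-locally",              # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # assumption: an open model already pulled locally
    messages=[{"role": "user", "content": "Summarize this quarter's support tickets."}],
)
print(response.choices[0].message.content)
```

If switching providers is essentially a one-line base_url change, pricing power has to come from model quality alone, and that lead keeps shrinking.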
Is AI Heading for a Subprime Moment?
Zitron is directionally right. We've massively overestimated short-term returns from AI, and we're starting to feel the pinch. Some of that is just the nature of hype cycles. Some of it is bad economics. And some of it is a sign we still don't know what this tech is best at.
However, labeling it a "subprime crisis" might be a bit overstated.
Unlike the 2008 crash, there's no toxic debt being repackaged and sold to pension funds here. What there is instead is a significant amount of speculative investment in an incredibly powerful yet immature tool. That's risky. But it's also how innovation usually works.
The internet didn't make money for years. Neither did Google, Amazon, Twitter, or Facebook in their earliest days. The difference is that those companies didn't require billions in GPU clusters to exist.
So yes, things will shake out.
But AI isn't going away. It's just going to get less shiny and more useful. Eventually. Once we stop trying to make it replace everything and start figuring out what it's actually good for.
And we'll learn how to build better tools instead of chasing the next shiny object and hoping for miracles.
Unless true AGI appears, of course. Then all bets are off.
I would love to hear your thoughts.
Are we heading toward an honest AI reckoning, or is it only upwards from here?
#AI #OpenAI #TechTrends #StartupReality #AGI #SubprimeCrisis #MachineLearning #VC #ArtificialIntelligence #GenerativeAI