AI feels cheap right now—but that may be tech’s biggest illusion.
From ChatGPT to Claude, users pay about $20 a month for tools that rank among the most expensive software ever built. Yet newly surfaced financials, reported by the WSJ ahead of potential IPOs, suggest that pricing may be unsustainably low.
The Math Isn’t Working — Yet
OpenAI is projected to spend roughly $121 billion on compute by 2028, with losses remaining steep despite rising revenue. Anthropic is smaller, but on a similar path. Crucially, inference—the cost of serving users—already eats up more than half of revenue at both firms. The more usage grows, the more costs scale.
For now, profitability isn’t the goal. Growth is.
Only a small fraction of users actually pay for these tools, meaning much of the usage is effectively subsidized as companies prioritize adoption and enterprise expansion.
Both companies are still in land-grab mode, prioritizing:
- User growth
- Enterprise adoption
- Ecosystem dominance
This is a familiar playbook: scale first, monetize later.
When Pricing Reality Kicks In
That model works—until it doesn’t.
If inference alone consumes 50%+ of revenue, and training adds tens of billions more, current pricing almost certainly does not reflect true cost. As compute costs rise and investors shift their focus toward cash flow, pricing will become the key lever.
That likely won't mean an overnight jump, but rather gradual increases through higher tiers, usage-based pricing, or tighter enterprise monetization.
Think:
- $20 → $30 → $50 for premium tiers
- Pay-per-use for heavy users
- Ads or hybrid models for free users
The trigger won’t be demand. It will be investor pressure.
AI is currently being sold like SaaS, but it’s being built like infrastructure.
At some point, those two realities will have to meet.