Reports in late 2025 paint two different pictures of AI company economics: Anthropic projecting profitability by 2028, and OpenAI reporting significant losses while growing revenue rapidly.
This has sparked debate about which approach is "better." But the honest answer is more complicated: we don't have enough information to judge, and both strategies involve significant uncertainties.
Here's what we actually know, what's uncertain, and what context is often missing from these discussions.
What's Being Reported
OpenAI
According to press reports citing Microsoft filings and other sources:
- Approximately $20 billion annual recurring revenue (November 2025)
- Significant quarterly losses reported (exact figures vary by source)
- Major infrastructure commitments planned over multiple years
- Profitability projected for 2029-2030
Anthropic
According to press reports:
- Cash burn declining as percentage of revenue (targeting 33% by 2026, 9% by 2027)
- Profitability projected for 2028
- Major enterprise deals (Google TPU partnership, enterprise deployments)
- Revenue not publicly disclosed but reportedly growing
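The "cash burn as a percentage of revenue" metric behind those targets is simple arithmetic, and making it explicit shows what the reported 33% and 9% figures actually measure. The dollar amounts below are hypothetical, chosen only to illustrate the calculation; neither company has disclosed these inputs.

```python
def burn_rate_pct(cash_burn: float, revenue: float) -> float:
    """Cash burn expressed as a percentage of revenue over the same period."""
    if revenue <= 0:
        raise ValueError("revenue must be positive")
    return 100.0 * cash_burn / revenue

# Hypothetical illustration (figures in $B, not disclosed by either company):
# burning $3B against $9B of revenue gives a burn rate of about 33%,
# the shape of the reported 2026 target.
print(round(burn_rate_pct(3.0, 9.0), 1))   # 33.3
print(round(burn_rate_pct(0.9, 10.0), 1))  # 9.0
```

Note that the same percentage target can be hit by cutting costs or by growing revenue, so a declining burn rate alone doesn't say which is happening.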
Important Caveats About This Data
- These aren't audited financials. The numbers come from press reports citing unnamed sources, from investor presentations, and from inferences drawn from partners' public filings
- Projections aren't predictions. "Profitability by 2028" is a target, not a guarantee
- We don't have full context. What's included in "losses"? How are revenues recognized? Accounting matters.
- Both companies are private. We're seeing selected information, not complete pictures
Two Different Strategies
The reported numbers suggest genuinely different approaches:
| Dimension | OpenAI (Reported) | Anthropic (Reported) |
|---|---|---|
| Growth vs. Profit | Prioritizing growth; accepting large losses | Targeting earlier profitability; tighter cost controls |
| Customer Focus | Strong consumer presence (ChatGPT) | More enterprise-focused |
| Infrastructure | Very large commitments (AWS, own data centers) | Significant but reportedly more diversified (Google TPU + Nvidia) |
| Timeline to Profit | 2029-2030 (reported) | 2028 (reported) |
The "Which Is Better?" Question
Many analyses frame this as OpenAI vs. Anthropic, implying one approach is clearly superior. But both strategies have historical precedents that succeeded—and failed.
Arguments for Aggressive Growth (OpenAI's Approach)
- Amazon precedent: Lost money for 20+ years, now worth over $1 trillion
- Network effects: ChatGPT's user base creates switching costs and data advantages
- Winner-take-most markets: If AI is winner-take-most, being biggest matters more than being profitable
- Infrastructure as moat: Massive infrastructure commitments could create barriers to entry
Arguments for Earlier Profitability (Anthropic's Approach)
- Sustainability: Companies that can self-fund have more control over their future
- Enterprise value: B2B relationships tend to be stickier than consumer
- Reduced dependency: Less reliance on continued investor enthusiasm
- Market uncertainty: If AI progress slows, profitable companies can survive on their own revenue; loss-making ones may not
What We Actually Don't Know
Which approach is right depends on factors we can't predict:
- How fast will AI capabilities improve? (Faster favors growth strategy; slower favors profitability)
- Will there be a "winner-take-most" dynamic? (Unknown)
- How long will investors fund losses? (Depends on macro conditions)
- What happens to the competitive landscape? (Open source, new entrants, etc.)
Anyone claiming to know which strategy is "correct" is overconfident.
The Compute Access Question
A separate concern often raised: AI infrastructure is concentrated in a small number of countries, potentially creating lasting inequalities in AI access.
This concern is real and worth taking seriously. But it's separate from the question of which company strategy is "better"—both Anthropic and OpenAI are building infrastructure primarily in wealthy countries.
The Deeper Question
Yuval Noah Harari argues that AI represents something fundamentally new—autonomous decision-making systems that will reshape economies and societies. If that's true, the question isn't just "which AI company succeeds?" but "what kind of AI infrastructure serves humanity?"
Neither "grow fast and figure out profitability later" nor "get profitable as soon as possible" directly addresses questions about equitable access, AI governance, or long-term societal impact. Both are business strategies, not philosophies of technology development.
What This Debate Misses
Framing this as "responsible Anthropic vs. irresponsible OpenAI" (or vice versa) misses important nuances:
- Both companies are racing to build powerful AI. Different financial strategies, same general direction.
- Neither approach guarantees good outcomes. A profitable company can still build harmful AI; a loss-making company can still produce beneficial technology.
- "Safety" and "profitability" aren't the same axis. You can be profitable and unsafe, or unprofitable and safe, or any combination.
- We're comparing marketing as much as reality. Both companies emphasize aspects that make them look good.
An Honest Assessment
| Question | What We Know | What We Don't Know |
|---|---|---|
| Is OpenAI losing money? | Reports suggest significant losses | Exact figures; whether losses are investment or inefficiency |
| Is Anthropic more "responsible"? | They have tighter cost controls (reported) | Whether financial responsibility equals AI responsibility |
| Which strategy will win? | Both have historical precedents | Almost everything; depends on unknowable future |
| Are these companies good for society? | Both have positive and negative impacts | Long-term effects; who benefits vs. who loses |
The Bottom Line
Anthropic and OpenAI appear to be pursuing different business strategies. Anthropic seems to prioritize earlier profitability; OpenAI seems to prioritize growth even at significant losses.
Which approach is "better" depends on:
- What you're optimizing for (financial returns? societal benefit? AI safety?)
- Assumptions about the future that we can't verify
- Information we don't have access to
Be skeptical of confident claims about which company is "doing it right." Both are making educated bets in a highly uncertain market. Both could succeed. Both could fail. Both could succeed financially while producing negative societal outcomes. Both could fail financially while having produced significant benefits.
The honest position is humility about what we can know from reported financial metrics about complex technology companies operating in unprecedented conditions.
A Note on Our Analysis
The original version of this article used an elaborate political framework ("salon socialism") and fabricated dialogue to present a much more confident—and one-sided—analysis than the evidence supports.
We've rewritten it to be shorter, more honest about uncertainty, and clearer about what we actually know versus what we're speculating about. We compete with both companies discussed here. That doesn't disqualify our analysis, but it should inform how you read it.