
No Moat, No Problem? Predictions on Future AI Competition

By Solon Teal
The AI moat is gone. Open source, nation-states and niche models are reshaping a market too fast for any one player to control.

A few months after ChatGPT launched in 2022, an anonymous Google memo leaked and made waves: “ChatGPT doesn’t have a moat… and neither do we.”

At first, it sounded like hyperbole, but as generative AI has evolved, the comment looks increasingly prescient: models flip-flop on benchmarks, and the constant pace of model releases, new benchmark records, open-source breakthroughs and multi-billion-dollar capital injections suggests no single AI lab can maintain an enduring monopoly.

Yet “no moat” doesn’t necessarily imply no differentiation. Historical analogies — cars, electricity, the cloud — suggest AI models will coexist, each carving out distinct advantages. Below, we’ll examine how AI models might diverge, despite sharing common architectures and chasing similar benchmarks.

The 'No Moat' Memo in Brief

The anonymous Google memo argued three core points: 

  1. Commoditization Is Inevitable: AI techniques, code and data sets proliferate quickly. The next big leap is often open-sourced or replicated within months.
  2. Same Goal, Same Tools: AI companies generally pursue similar outcomes with similar strategies.
  3. Open Source as Fast-Follower: Constant technological innovations have allowed researchers to fine-tune and improve large language models (LLMs) at a fraction of the cost — chipping away at any scale advantage AI giants might have.

Fast-forward, and the memo’s core message stands. DeepSeek-V2 recently emerged as a fully open-weight LLM, rolling out “frontier” features for free. Copycat cycles have accelerated until novel features — like long-context reasoning or deep research — are replicated within weeks.

On paper, commoditization is inevitable: all models become interchangeable, triggering a brutal price war. Investors or observers may nervously channel Sequoia’s $600B AI question and ask: “What if the memo is right, and most existing AI companies burn their money without business success?”

No single model or company is guaranteed lasting dominance. But that doesn’t mean the AI future will be monolithic. On the contrary, many forces — from funding to branding — may support a dynamic market of AI giants and open source.


AI’s True Customers: Capital & Compute

If user-facing features won’t guarantee a moat, what will? One answer is compute, capital and exclusive partnerships. Users see AI as a charming chatbot that can answer questions. Behind the curtain is a high-stakes battle for compute, energy and talent — the lifeblood of advanced AI.

  • Compute & GPU Bottlenecks: Training frontier models often requires tens of thousands of top-tier GPUs like Nvidia’s H100. Each GPU can cost tens of thousands of dollars, and that’s before factoring in the astronomical energy bill and complex infrastructure work.
  • Massive Training Costs: Companies can easily spend hundreds of millions of dollars on training, especially as the value of pre-training decreases. Inference at a massive scale is equally costly.
  • Evolving Architectures, Rising Costs: Thinking models like OpenAI’s o1 or Claude 3.7 Sonnet rely on chain-of-thought reasoning, often generating far more tokens per query — making them significantly more expensive.
  • Talent Constraints: A small pool of top ML researchers commands seven-figure salaries. That human infrastructure is as vital (and scarce) as GPUs.

The real fight isn’t just for users — it’s for the supply chain. Consider some significant alliances:

  • OpenAI + Microsoft/SoftBank → Azure integration and locked-in GPU/compute deals
  • Anthropic + Amazon → Massive AWS investment and guaranteed cloud resources
  • Google Gemini + Google Cloud → Google’s homegrown ecosystem, deep pockets and proprietary data access

In the future, these partnerships could be a temporary moat for the duration of contractual agreements, allowing labs to train bigger, better models on more favorable terms. Other AI firms might rely on nation-state backing or trillion-dollar tech alliances to remain competitive.

Sovereign AI: The Rise of Nation-State Models

Beyond corporate partnerships, another force is shaping AI differentiation: national interests. In response to DeepSeek’s launch, for example, US lawmakers have proposed various approaches to banning its use. If AI is the new industrial revolution, no major power wants to outsource its infrastructure; each wants local control over training data, alignment policy and model deployment.

Beyond China’s DeepSeek, sovereign AI initiatives are gaining momentum worldwide:

  • European Models and Regulations: Mistral’s Le Chat is comparable to OpenAI’s GPT-4o and operates natively within the EU’s AI Act and regulatory structure.
  • Middle Eastern Sovereign Funds Are Investing Billions: Google recently announced a partnership with Saudi Arabia to develop a future regional AI hub, while sovereign wealth funds are increasing their AI investments.
  • Other Areas of Exploration: Calls for model sovereignty are increasing elsewhere, with India and Korea emerging as key markets.

Differentiation via the regulatory environment is a powerful force. A “China model” might reflect more state-driven curation, while an “EU model” could emphasize data privacy and trustworthy AI. Over time, we might see multiple official or “sovereign” LLMs shaped by local politics and cultural norms.

Use Case Specialization: Features, Features, Features

While frontier LLMs aim for broad capability, a few are staking out niche reputations:

  • Anthropic’s Claude: Popular for coding assistance, and a “safer,” constitution-based alignment approach.
  • OpenAI’s GPT: The most robust ecosystem, with a well-thought-out user experience — plus new voice modes that hint at consumer-friendly ambitions.
  • Google Gemini (formerly Bard): Integrations with YouTube, Google Docs and NotebookLM, plus a strong foundation in search and personal productivity.

These niches may seem small now, but they could grow into massive industries as models advance. Jevons Paradox — wherein increased efficiency leads to increased consumption — suggests that an early specialization can rapidly become mainstream. By leveraging specialized partnerships, fine-tuned training and focused product pipelines devoted to a single domain, certain models may carve out formidable advantages in broad sectors.

AI Persona & Brand Identity: The Vibe Wars

Now, let’s talk about “vibe” — a term increasingly central to AI differentiation, and one OpenAI itself emphasized as a differentiator in its GPT-4.5 launch.

Like the auto industry, AI differentiation is often about feel. A car is “just” an engine on four wheels — but ask a devoted owner why they love their car, and they may cite the intangible “driving feel.” In AI, these intangible differences come through in writing style, tone and how the model reasons. People develop loyalties to a model’s persona.

Even with heavy prompt engineering — e.g., "Pretend you’re Grok 3.0" — one can’t replicate the “personality” baked into another model’s design. Anthropic, for example, has an extensive discussion of its model character design approach, and Grok has been explicitly marketed as “unfiltered.” User preference can hinge on these subtle differences; in the future, you might pick a more expensive AI model that “clicks” with your team’s style, just as you might pick Slack over Microsoft Teams because Slack feels more intuitive.


High-End AI: The Ultimate Differentiator

But beyond branding and everyday AI tools, a new tier might emerge: ultra-premium AI services. Some advanced systems may cost significantly more per query due to massive compute and extended inference time. This may be especially true as the industry focuses on “thinking models.”

OpenAI charges $200 per month for access to its top-tier o1-Pro model (rumored to have around 100,000 subscribers), while its experimental o3 model reportedly cost $8,000 in compute to set a new AI benchmark. In the future, this could mean:

  • A biotech firm might pay $5M for a model to evaluate 10,000 protein structures.
  • A hedge fund could license a top-tier LLM with real-time data for high-stakes market predictions.
  • A political campaign might hire an AI consultant to run near-constant messaging simulations across billions of data points.

This emerging two-tier AI market resembles how cloud services operate:

  • Consumer & SMB AI: Affordable, often ad-supported or offered at low monthly rates, good enough for general tasks.
  • Ultra-Premium AI: Incredibly capable, specialized hardware, and possibly days of “thinking time” for a single query — aimed at clients who stand to profit (or save) millions from accurate intelligence.

In that scenario, some labs may thrive on volume, while others focus on a small number of ultra-profitable, high-performance engagements.

No Perpetual Moat, But Many Growth Avenues

At face value, the “no moat” memo suggests that any advantage in AI is fleeting. Indeed, raw model performance does seem to be converging — and open-source or smaller labs can catch up faster than ever.


Yet from brand identity to sovereign AI, from high-end analytical services to specialized niche models, there’s ample room for divergence, especially given a clear market paradox:

  • Core LLM Tech Commoditizes Fast: Breakthroughs spread or get replicated, meaning no single advantage lasts long.
  • Countervailing Forces Spur Fragmentation: Sovereign AI (geopolitical), specialized use cases (enterprise, coding, research), brand persona (the “vibe wars”) and capital alliances (compute + talent) all create distinct pockets in the market.

A hybrid outcome may be likely:

  • A few frontier providers, each with mega-partnerships, offering both budget-friendly and premium-luxury tiers.
  • Multiple national or regional AI models, each shaped by local regulations and cultural priorities.
  • A sprawling ecosystem of specialized or open-source solutions that threaten to chip away at incumbents — unless incumbents acquire them first.

Ultimately, “no moat” doesn’t mean “no business.” It means no single moat can insulate a company forever. AI businesses will likely mix and match multiple mini-moats — capital deals, specialized data, brand persona, premium offerings — to carve out profitable niches. In other words, the future of AI may be less about a single champion and more about a vast, evolving ecosystem where each model finds its angle — and tries to hold it just long enough to thrive.

About the Author
Solon Teal

Solon Teal is a product operations executive with a dynamic career spanning venture capital, startup innovation and design. He's a seasoned operator, serial entrepreneur, consultant on digital well-being for teenagers and an AI researcher, focusing on tool metacognition and practical theory. Teal began his career at Google, working cross-functionally and across verticals, and has worked with companies from inception to growth stage. He holds an M.B.A. and M.S. in design innovation and strategy from the Northwestern University Kellogg School of Management and a B.A. in history and government from Claremont McKenna College.

Main image: bennymarty on Adobe Stock