Over $600 billion has poured into this latest AI bull run, with another $500 billion earmarked for OpenAI’s “Stargate” initiative — one of the largest infrastructure build-outs ever proposed. But is there a competitive moat for anyone?
Previously, I examined whether frontier models possess sustainable competitive advantages. Since then, performance gaps have narrowed even as investment has surged: top models now trade places on benchmarks, with DeepMind’s latest offerings leapfrogging OpenAI and Anthropic in specific tests.
Yet benchmarks miss the jagged reality of model intelligence, where Olympiad-level math coexists with basic reasoning failures. While model weights may be commoditized, deployment context and product experience — memory, multimodality, trust and UX — remain defensible terrain.
This analysis revisits the “moat” question through a structured look at three major players: OpenAI, Anthropic and Google DeepMind. I’ll review their Q2 2025 developments, extract strategic signals and assess whether any genuine moats are emerging.
OpenAI: Widening the Adoption Gap Through Vertical & Product Focus
OpenAI's Q2 strategy centered on three key areas of expansion and consolidation:
Increasing Model Use Cases and Tool Diversity
OpenAI launched its new o3 "thinking" models alongside image generation capabilities in GPT-4o. Image generation in ChatGPT drove millions of new signups within days, prompting temporary rate limits. CEO Sam Altman joked that while it was fun to see people loving the bot's image generation capabilities, "our GPUs are melting."
it's super fun seeing people love images in chatgpt.

but our GPUs are melting.

we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won't be long!

chatgpt free tier will get 3 generations per day soon.

— Sam Altman (@sama) March 27, 2025
The company also introduced GPT-4.1, optimized for agentic workflows and long-context prompts, along with smaller models like o4-mini for greater efficiency, and enhanced its coding tools through Codex. OpenAI also announced acquisitions of the coding application Windsurf and the hardware startup io, expanding its ecosystem reach.
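To ground what "optimized for agentic workflows and long-context prompts" means in practice, here is a minimal sketch of calling both models through OpenAI's Python SDK. The model names ("gpt-4.1", "o4-mini") reflect OpenAI's public API naming at the time of writing; the prompts and surrounding logic are illustrative assumptions, not OpenAI's own examples.

```python
# Minimal sketch: a long-context GPT-4.1 call next to a cheaper o4-mini call,
# via OpenAI's Python SDK (`pip install openai`). Prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# GPT-4.1 targets agentic workflows and very long prompts (e.g., a whole codebase).
full = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Summarize the build steps described in this README: ..."},
    ],
)
print(full.choices[0].message.content)

# o4-mini trades peak capability for lower cost and latency on routine tasks.
small = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Turn that summary into a short checklist."}],
)
print(small.choices[0].message.content)
```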
Expanding Model Customization Capabilities
Memory functionality and personality customization emerged as significant strategic investments. OpenAI expanded its memory feature to reference past conversations and enhance collaborative workflows, enabling ChatGPT to provide increasingly relevant, context-aware answers over time. However, model personality tuning faced challenges when a May update made models overly agreeable, forcing a rollback and retraining of the model's response behaviors.
Investing in Vertical Markets
Enterprise expansion accelerated with targeted launches of ChatGPT for Education and OpenAI for Government. The education vertical gained traction with institutions like Mount Sinai, which adopted bundled ChatGPT offerings with privacy assurances and custom GPT configurations. The government launch builds on the Jan. 2025 ChatGPT Gov release, a secure, compliance-ready version of ChatGPT for US government agencies, and is now supported by a $200 million Department of Defense contract.
OpenAI's Strategic Positioning
OpenAI is building competitive advantages in three critical areas:
Hyper-Growth and User Lock-In
With 500-800 million weekly active users on ChatGPT, OpenAI is leveraging its scale as a competitive moat. As the performance gap between leading models narrows, first-touch experiences become increasingly important for user retention, particularly when competing platforms lack features like conversational memory (or the ability to port over conversational histories from OpenAI).
Focus on Creativity and Enabling Science
Sam Altman has consistently emphasized science and creativity as key differentiators. Throughout Q2 interviews, he positioned "thinking models" as catalysts for reasoning and scientific discovery, framing new AI-driven discoveries as a signal of “AGI.”
Industry rumors suggest the development of a $20,000/month PhD-level research agent, indicating continued investment in research tooling and data partnerships, especially as Meta acquires a major stake in Scale AI and intensifies its focus on foundational infrastructure control.
Systemic and Cohesive Execution
Of all the leading AI companies, OpenAI has maintained the most consistent product and partnership velocity. Even as competitors close in on benchmark performance, OpenAI’s advantage increasingly appears to be execution: fast iteration, a tight consumer UX and a clear bet on vertical distribution (government, education, enterprise) and specialized tools and hardware (Windsurf/io) as hedges against commoditization.
Moat Verdict: If model performance commoditizes, users may resist switching from their initial platform choice. Even without performance parity, OpenAI's comprehensive strategy may make switching unnecessary or costly: walking away could mean losing years of conversational memory and customized workflows. OpenAI remains well-positioned to maintain a sustained competitive advantage.
Anthropic: Deepening Trust-First Strategy Through Enterprise Scale and Model Transparency
Here’s a quick recap of significant Q2 Anthropic events organized into three general categories:
Claude 4 Models Push Frontier Reasoning and Tool Integration
Anthropic released the Claude 4 model family in May, introducing Claude Opus 4 and Claude Sonnet 4, each featuring extended reasoning capabilities. Early benchmarks positioned Opus 4 among top-performing models for coding, summarization and long-context understanding. Anthropic also expanded Claude's built-in memory functionality and released developer-focused features.
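To illustrate what "extended reasoning" looks like at the API level, here is a minimal sketch using Anthropic's Python SDK with extended thinking enabled. The model identifier and token budgets are assumptions (check Anthropic's current model list), and the prompt is purely illustrative.

```python
# Minimal sketch: calling Claude Opus 4 with extended thinking enabled via
# Anthropic's Python SDK (`pip install anthropic`). Model ID and token budgets
# are assumptions; verify them against Anthropic's documentation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-20250514",  # assumed identifier for Claude Opus 4
    max_tokens=2048,                 # must exceed the thinking budget below
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Plan a safe rollout for a database schema migration."}],
)

# The response interleaves "thinking" blocks with the final "text" blocks.
for block in message.content:
    if block.type == "text":
        print(block.text)
```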
Enterprise Momentum and Specialized Verticals Drive Growth
Claude saw significant traction in B2B applications, with Anthropic's annualized revenue reportedly reaching $3 billion, driven primarily by productivity and developer use cases. Claude integrations into enterprise platforms such as Slack and Zoom continued to expand, while Amazon Bedrock scaled Claude's distribution. Anthropic also launched Claude Gov, a secure, fine-tuned model for US intelligence agencies, marking a strategic entry into high-trust, regulated market segments.
Research Leadership and Ecosystem Standards
Anthropic maintained strong research leadership with several significant publications. One paper examined the tendency of reasoning models to exhibit inconsistency between internal processing and external outputs, building on Q1's research into interpretability. Another explored efforts to design Claude’s conversational character and behavior. Simultaneously, the Model Context Protocol (MCP) proposed by Anthropic gained widespread adoption from Google, OpenAI and Microsoft as a universal standard for integrating large language models into third-party applications.
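To make the MCP point concrete, here is a minimal sketch of an MCP server, assuming the official Python SDK (pip install mcp) and its FastMCP helper; the word-count tool is a made-up example, not something from Anthropic's documentation.

```python
# Minimal sketch of a Model Context Protocol (MCP) server using the official
# Python SDK's FastMCP helper. The word_count tool is illustrative only.
from mcp.server.fastmcp import FastMCP

server = FastMCP("docs-helper")

@server.tool()
def word_count(text: str) -> int:
    """Count words in a passage so an MCP-aware client can cite exact figures."""
    return len(text.split())

if __name__ == "__main__":
    # Speaks MCP over stdio by default, so clients such as Claude Desktop or
    # IDE integrations can launch and attach to it.
    server.run()
```

The appeal of the standard is that this same small server can, in principle, be attached to Claude, ChatGPT or Gemini-based clients without bespoke integration code.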
Strategic Commentary: Where Anthropic Stands Apart
Anthropic is building competitive advantages in three key areas:
Safety and Transparency as Durable Differentiators
Anthropic's core mission remains the responsible development and deployment of AI. This commitment manifests through Claude Gov's security features, ongoing interpretability research and market positioning as a more transparent and controllable AI alternative. With growing concerns about AI risks and OpenAI's recent controversies, Anthropic positions itself as having a more cautious, trust-focused approach to resonate with security-conscious organizations.
Enterprise-First, Consumer-Light Strategy
Anthropic's go-to-market approach contrasts sharply with OpenAI's mass-adoption model. Claude remains less widely known and lacks some consumer-optimized features, such as persistent memory, found in ChatGPT. However, through robust integrations, competitive pricing and developer-centric features like Claude Code, Anthropic positions Claude as a preferred AI solution for developers. Software engineers represent one of Claude's heaviest user segments, with pricing plans specifically designed for this market.
Ecosystem Influence Through Research Excellence
Anthropic has emerged as the leading frontier lab in open research, much like OpenAI's position before the release of ChatGPT. MCP could become a foundational protocol for AI applications, equivalent to HTTP's role in web development, according to the team behind it.
Research into model personality and behavior design may prove crucial as more companies pursue always-on AI assistants. This research leadership enables Anthropic to attract top talent despite its smaller scale compared to competitors.
Moat Verdict: Anthropic's investments in research, transparency and responsible AI create a differentiated environment for research talent and establish precise brand positioning for users and enterprises. Choosing Anthropic often represents a deliberate decision rather than a default choice, which may foster stronger long-term loyalty.
Google DeepMind: Leveraging Ecosystem Scale and Infrastructure
Google DeepMind's Q2 strategy emphasized three core initiatives:
Establishing a New State of the Art, Led by Gemini
Ever since ChatGPT launched, Google has seemed a step behind. Google DeepMind's Gemini 2.5 family, launched in Q2, is its strongest set of models to date and now rivals Claude and ChatGPT, consistently topping performance leaderboards. While leaderboards conceal the jagged frontier of intelligence among models, this was a significant success for Google, demonstrating that it, too, can lead in AI model performance.
AI Everywhere: From Workspace to Android to Open-Source Gemma
Google aggressively expanded AI integration across its product ecosystem. Gemini integration accelerated across Google Workspace tools such as Meet, Drive and Docs, while Google announced on-device AI support for high-end Android phones at its annual I/O 2025 developer conference, along with dozens of other updates.
On the open-source front, Google fully released Gemma 3n in June — a multimodal model with fewer than 10 billion parameters, designed for mobile efficiency.
📣 Gemma 3n is fully released for developers to build with multimodal capabilities, fine-tune models for on-device apps, and access improved text support for 140 languages and multimodal understanding for 35 languages ↓https://t.co/qOtnaAaESG
— Google for Developers (@googledevs) June 30, 2025
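Part of Gemma's appeal is that developers can run it locally. The sketch below uses Hugging Face Transformers with a small text-only Gemma 3 variant for simplicity; the mobile-focused Gemma 3n checkpoints are published under similar identifiers. The model ID is an assumption (confirm it on the Hugging Face Hub), and downloading the weights requires accepting Google's license.

```python
# Minimal sketch: running a small open-weights Gemma model locally with
# Hugging Face Transformers (`pip install transformers accelerate`).
# The model ID is an assumption; confirm it on huggingface.co/google.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # small instruction-tuned variant; license acceptance required
    device_map="auto",
)

result = generator(
    "Explain in two sentences why on-device models matter for mobile apps.",
    max_new_tokens=80,
)
print(result[0]["generated_text"])
```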
Gemini now serves 400 million users, while the open-source Gemma ecosystem has generated over 160 million model downloads. Though still trailing OpenAI's user base, Google leverages its massive existing platform scale.
Veo 3 Advances AI Video Generation and Sets New Standards
Google unveiled its latest video generation tool, Veo 3, at I/O 2025. Unlike earlier text-to-video models, Veo 3 generates photorealistic video content with synchronized audio, including dialogue, sound effects and ambient noise, entirely from text prompts. The realism of Veo 3's outputs has generated both excitement and concern, with some AI-generated videos going viral among viewers who did not realize they were synthetic.
Google DeepMind's Strategic Advantages
Google DeepMind is positioning itself for long-term success in two key areas:
Ecosystem Lock-in Through Unprecedented Reach
Google's strategy centers on broad AI integration, making Gemini a default feature across its massive product ecosystem. From Search (now delivering AI-powered answers to 1.5 billion monthly users) to Android and Workspace, Google leverages its platform scale to make AI interaction a seamless, everyday experience. This widespread distribution could create a self-reinforcing moat, even if users engage with Google's models more passively than with dedicated applications like ChatGPT.
Data and Infrastructure Advantage
Google's unique assets, including YouTube's vast video library and decades of indexed web data, enable training on exceptionally diverse multimodal datasets. This translates into technical advantages, exemplified by Veo 3's leap ahead of video generation competitors. Furthermore, Google's substantial AI infrastructure investments, including custom TPUs and global data centers that even OpenAI may begin using, represent a durable foundation for scaling AI and serving future large-scale world-model systems.
Moat Verdict: Google's primary challenge remains product innovation across its complex ecosystem and developing startup momentum comparable to Anthropic and OpenAI. However, Veo 3 demonstrated that Google's decades-long investments in data and infrastructure can yield significant model breakthroughs with enough focus. If Google DeepMind can consistently lead model innovation, Google's extensive platform presence may create irresistible market gravity.
Final Thoughts: Understanding the Three-Way Dynamic
Here are three final ways to frame this dynamic:
Google DeepMind: The Company With the Most to Lose
Backed by Alphabet's proprietary data assets, massive existing user bases and extensive integration opportunities, Google DeepMind leaves AI as "Google's game to lose" — particularly if AI represents an extension of the existing internet paradigm. However, if AI creates entirely new online experiences where the operating system becomes the prompt interface or the always-on assistant, both OpenAI and Anthropic have substantial opportunities to capture disproportionate value.
Open Source as Strategic Leverage
Meta is already deploying open source as a competitive weapon against proprietary frontier models, and open source may become an increasingly important strategic tool. Alphabet's open-source Gemma received recent updates, while OpenAI plans to release an open-weights model later this year. This suggests that open-source offerings may increasingly serve as marketing and growth tactics rather than existential competitive threats.
we are going to take a little more time with our open-weights model, i.e. expect it later this summer but not june.

our research team did something unexpected and quite amazing and we think it will be very very worth the wait, but needs a bit longer.

— Sam Altman (@sama) June 10, 2025
Market Size and Opportunity
Fewer than one billion people currently use large language models, yet significant technological and social changes are already occurring. We remain in the early stages of AI adoption. As user behavior matures and model capabilities converge, the structure of organizational competitive advantages will continue evolving, often faster than anticipated.
Rather than competing directly on model performance alone, each company is carving distinct strategic positions that may prove complementary rather than zero-sum.