The past few weeks have provided a remarkable natural experiment in AI development dynamics. OpenAI releases what appears to be a breakthrough technology, and Google promptly demonstrates superior capabilities:
OpenAI's Sora demonstrated remarkable text-to-video generation, only to be outclassed by Google's Veo 2, which produced notably higher-quality output
OpenAI's o1 introduced novel "thinking" capabilities, followed within weeks by Google's Gemini 2.0 Flash Thinking implementing similar functionality
Gemini 2.0 has now surpassed both GPT-4 and Claude Sonnet across a broad range of benchmarks
This pattern reveals something fundamental about the nature of competitive advantage in artificial intelligence. To understand why Google's dominance was inevitable, we need to examine a broader principle: the myth of algorithmic moats.
Algorithmic Moats
It is often said that part of Silicon Valley's success stems from the unenforceability of non-compete clauses under California law. This allowed trade secrets to proliferate rapidly through the Bay Area, creating more efficient competitive dynamics and letting engineers learn from one another rather than confining hard-won knowledge to a single firm.
However, I rarely see this line of argument applied to business moats. If knowledge diffuses this freely, it implies that algorithms alone cannot provide a durable moat: employees can leave one company and carry its accumulated know-how to a competitor, allowing the competitor to catch up.
Consider a thought experiment: You discover a revolutionary new algorithm. How long can you maintain that advantage? In a world of mobile talent and reverse engineering, the half-life of algorithmic secrets approaches zero as their value approaches infinity.
This creates what we might call the algorithm diffusion principle: Any sufficiently valuable algorithm will spread through the industry at a rate proportional to its perceived importance. Silicon Valley's prohibition on non-compete clauses accelerates this process, creating an upper bound on how long any single player can maintain algorithmic superiority.
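To make the principle concrete, here is a back-of-the-envelope sketch; the exponential-decay form and the constant k are assumptions chosen for illustration, not measured diffusion rates:

```python
import math

def advantage_remaining(t_months: float, value: float, k: float = 0.05) -> float:
    """Fraction of an algorithmic edge remaining after t months,
    assuming it leaks at a rate proportional to its perceived value."""
    return math.exp(-k * value * t_months)

def half_life_months(value: float, k: float = 0.05) -> float:
    """Time until half of the edge has diffused to competitors."""
    return math.log(2) / (k * value)

# A modest trick (value=1) persists for a while; a blockbuster (value=20) barely does.
for v in (1, 5, 20):
    print(f"value={v:>2}: half-life ≈ {half_life_months(v):.1f} months")
```

Under this toy model, the half-life of an edge is inversely proportional to its value: exactly the dynamic the thought experiment describes.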
Hence, algorithms only provide moats insofar as they facilitate the construction of another type of moat. When we talk about algorithmic moats, we're really discussing two separate concepts: the technical implementation details that can be replicated, and the emergent properties that arise from being first to market with those implementations.
Consider Google's own history with PageRank. While revolutionary for its time, the core insight – that incoming links could be weighted by the importance of their source – was relatively straightforward to replicate once published. What made Google dominant wasn't PageRank itself, but rather the virtuous cycle it enabled: better search results → more users → more data → even better search results. The algorithm was merely the catalyst for building a data moat.
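As a reminder of how compact the "revolutionary" part actually was, here is a minimal textbook sketch of PageRank as power iteration over the link graph (an illustration, not Google's production implementation):

```python
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Minimal PageRank via power iteration.
    adj[i, j] = 1 means page i links to page j."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    # Each page splits its vote across outgoing links; dangling pages
    # (no outlinks) spread their vote uniformly.
    transition = np.where(out_degree > 0, adj / np.maximum(out_degree, 1), 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition @ rank
    return rank

# Toy web: pages 0 and 1 both link to page 2; page 2 links back to page 0.
links = np.array([[0, 0, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
print(pagerank(links))  # page 2 accumulates the most rank
```

A competitor could write this in an afternoon; what they could not write was the user base and data flywheel it had already set in motion.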
This pattern repeats across the technology landscape. Spotify's recommendation algorithms, while sophisticated, aren't what prevent users from switching to Apple Music or YouTube Music. Instead, it's the years of accumulated listening history, carefully curated playlists, and social sharing features that create switching costs. The algorithms enable these benefits, but they aren't the moat themselves.
Moats on the Path to AGI
The implications of the lack of direct algorithmic moats become clear when we consider AGI development as a function of three primary variables:
Algorithmic innovation (A)
Computational resources (C)
Training data quality and quantity (D)
We might express AGI capability as: AGI_capability = A * f(C,D)
Where f(C,D) represents the effective utilization of compute and data. The algorithm diffusion principle suggests that A will quickly equilibrate across major players. Therefore, the decisive factor becomes f(C,D).
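A toy version of this model makes the argument concrete; the function toy_capability and the power-law form of f(C, D), including the exponents alpha and beta, are illustrative assumptions rather than empirical estimates:

```python
def toy_capability(a: float, compute: float, data: float,
                   alpha: float = 0.5, beta: float = 0.5) -> float:
    """Toy model: capability = A * f(C, D), with f(C, D) = C**alpha * D**beta
    as an assumed functional form (the exponents are arbitrary)."""
    return a * compute**alpha * data**beta

# If algorithm diffusion equilibrates A across labs (both ~1.0),
# the player with more compute and data wins on f(C, D) alone.
startup = toy_capability(a=1.0, compute=1.0, data=1.0)
incumbent = toy_capability(a=1.0, compute=10.0, data=10.0)
print(f"capability ratio ≈ {incumbent / startup:.0f}x")  # 10x under these assumptions
```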
This is where Google's position becomes overwhelming. Consider their structural advantages:
Data Supremacy:
Google Search: The world's most comprehensive map of human knowledge and intent
YouTube: The largest repository of human audio-visual communication
Google Books/Scholar: A near-complete corpus of formal human knowledge
Android/Gmail: Vast behavioral and communication datasets
Compute Dominance:
Custom TPU architecture optimized for AI workloads
Vertical integration from silicon to software
World-class data center infrastructure
Decades of distributed systems optimization
These advantages compound non-linearly. Having twice the data and twice the compute doesn't yield four times the capability – it might yield eight times or more due to emergent properties in large-scale systems.
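The arithmetic behind that claim, under the purely illustrative assumption that capability compounds as (C·D)^1.5 rather than linearly in the product:

```python
def compound_capability(compute: float, data: float, gamma: float = 1.5) -> float:
    """Assumed superlinear compounding: capability ~ (C * D)**gamma."""
    return (compute * data) ** gamma

baseline = compound_capability(1, 1)
doubled = compound_capability(2, 2)
print(doubled / baseline)  # (2 * 2)**1.5 = 8.0: twice the inputs, eight times the capability
```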
The recent pattern of Google rapidly matching and exceeding OpenAI's innovations perfectly illustrates this dynamic. When OpenAI develops a new technique, Google can quickly replicate it (algorithm diffusion) and then apply it with vastly superior resources, achieving better results almost immediately.
This creates what game theorists would call a dominant strategy for Google: wait for algorithmic innovations, replicate them with superior resources, and achieve better results than the original inventors. The math becomes almost deterministic.
One might object that breakthrough algorithms could create discontinuous advantages that trump resource differences. However, the observed scaling laws in neural networks suggest otherwise. The smooth power-law relationships we've seen indicate that resource advantages compound predictably rather than being disrupted by algorithmic breakthroughs.
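A quick sketch of what smooth power-law scaling implies; the coefficients here are invented for illustration, not fitted values from any scaling-law study:

```python
# Loss as a smooth power law in compute: L(C) = a * C**(-b).
# These coefficients are invented for illustration, not fitted values.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    return a * compute ** (-b)

# Each 10x jump in compute buys a predictable, smooth improvement...
for c in (1e21, 1e22, 1e23):
    print(f"C = {c:.0e}: loss ≈ {loss(c):.3f}")
# ...leaving no discontinuity for a clever algorithm alone to exploit:
# a rival with more compute simply sits further down the same curve.
```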
In retrospect, the tech industry's focus on OpenAI and other startups represents a failure to reason from first principles. In a world where algorithmic innovations cannot be contained, the player with overwhelming advantages in compute and data will inevitably emerge victorious. Google's position isn't just strong – it's strategically dominant in a game-theoretic sense.
The algorithm diffusion principle suggests a surprising corollary: the most effective strategy for other players may not be to compete directly with Google, but to focus on specialized domains where Google's general-purpose advantages matter less. This could, ironically, lead to a more specialized and diverse AI ecosystem than many currently predict.