OpenAI turns to Google's AI chips to power its products, The Information reports

Major Shift in OpenAI's Technology Strategy

OpenAI, the company behind leading artificial intelligence tools such as ChatGPT, has started renting Google's AI chips to power its growing product lineup, a significant shift in its approach to computing infrastructure. It is the first substantial instance of OpenAI using non-Nvidia chips for its AI workloads, signaling a move away from its longstanding dependence on Nvidia's graphics processing units (GPUs) and on the data centers of its backer Microsoft[1].

Strategic Collaboration: OpenAI and Google

Until early 2025, OpenAI relied almost exclusively on Microsoft to supply its data center infrastructure. However, as demand for AI-powered applications like ChatGPT has soared, OpenAI encountered capacity constraints that delayed some of its product launches. In response, OpenAI signed a new deal with Google Cloud, tapping into Google's in-house tensor processing units (TPUs) and cloud services to expand its computing capacity for both training and deploying advanced AI models[3][4].

Expanding Cloud Partnerships and Technology Diversification

  • OpenAI has traditionally been Nvidia's largest customer for GPUs, but escalating demand has pushed it to diversify its sources of compute.
  • By adopting Google’s TPUs, OpenAI seeks to lower the cost of inference—the stage at which an already-trained AI model produces predictions or other outputs for new inputs[1].
  • Google’s TPUs, previously reserved for its internal projects, are now being offered to select external customers, with tech giants like Apple and AI startup competitors such as Anthropic also utilizing these chips[1].
  • The collaboration follows a string of major infrastructure deals by OpenAI in 2025, including partnerships with CoreWeave, Oracle, and SoftBank as part of its $500 billion Stargate initiative[3][4].
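The training/inference split mentioned in the bullets above can be illustrated with a deliberately tiny sketch (purely hypothetical; it reflects none of OpenAI's or Google's actual systems). Training fits a model's parameters once, at heavy compute cost; inference reuses those parameters for every new request, which is why per-request inference cost dominates at ChatGPT's scale:

```python
# Toy illustration of training vs. inference (hypothetical example).

def train(examples):
    """'Training': fit a one-parameter model y = w * x by least squares.
    In real AI systems this is the compute-heavy, done-once phase."""
    w = sum(x * y for x, y in examples) / sum(x * x for x, _ in examples)
    return w

def infer(w, x):
    """'Inference': apply the already-trained parameter to a new input.
    Cheap per call, but repeated for every user request."""
    return w * x

weights = train([(1, 2), (2, 4), (3, 6)])  # learns w = 2.0
print(infer(weights, 10))                  # applies the trained model
```

At production scale the inference side is run billions of times, so even small per-call hardware savings—the kind OpenAI reportedly hopes TPUs provide—compound quickly.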

Competitors Collaborate for Growth

This deal highlights an unusual partnership between two AI industry rivals. While OpenAI and Google are both developing foundation models for search and other applications, OpenAI’s decision to use Google’s hardware underscores the unprecedented compute needs of large-scale AI projects. The adoption of Google’s chip technology could also set a precedent for other AI companies seeking alternatives to Nvidia’s dominant GPU platform[1][5].

Limitations and Industry Impact

Although OpenAI now rents some of Google's TPUs via Google Cloud, reports indicate that Google is not providing its most advanced TPU hardware to its competitor. Even so, the arrangement is regarded as a substantial victory for Google in the ongoing cloud and AI chip competition, challenging Microsoft’s exclusive relationship with OpenAI and offering a potentially more cost-effective alternative to Nvidia[1][5].

The Road Ahead

OpenAI’s integration of Google’s AI chips exemplifies the escalating demand for cutting-edge computing power among AI leaders and the evolving landscape of cloud partnerships. As OpenAI continues to diversify its technology stack, the industry may see further collaborations—and competitive shifts—among the sector’s top players[1][3].
