OpenAI Reportedly Partnering with Google Cloud for Computational Needs

OpenAI, the company behind the popular AI chatbot ChatGPT, is reportedly planning to leverage Google Cloud's services to meet its escalating demands for computing capacity. This strategic move, finalized in May after several months of discussions, signifies a notable collaboration between two major competitors in the rapidly evolving artificial intelligence landscape. The primary driver for this partnership is OpenAI's growing need for "compute" – the industry term for the computing power required both for training large language models (LLMs) and for inference, the process of running a trained model to generate responses for users.
This development underscores the immense computational requirements for advancing and deploying AI technologies, which are reshaping competitive dynamics within the sector. For OpenAI, this partnership is part of a broader strategy to diversify its compute sources beyond its principal backer, Microsoft. This strategy also includes ambitious projects like the high-profile Stargate data center. The collaboration is seen as a significant win for Google's cloud division, which will provide supplementary computing capacity to OpenAI's existing infrastructure, aiding in both the training and operation of its sophisticated AI models.
The decision comes at a time when OpenAI's ChatGPT poses a substantial challenge to Google's long-standing dominance in the search engine market. Despite this competitive backdrop, Google executives have suggested that the AI race may not be a winner-take-all scenario. Since the launch of ChatGPT in late 2022, OpenAI has faced continuous pressure to scale its computing resources. The company's financial success is evident, with an annualized revenue run rate surging to $10 billion as of June 2025, positioning it to meet its full-year targets amidst the booming adoption of AI.
OpenAI has been actively seeking to bolster its infrastructure. Earlier this year, it partnered with SoftBank and Oracle on the $500 billion Stargate infrastructure program and secured deals worth billions with CoreWeave for additional compute. Furthermore, OpenAI is reportedly on track to finalize the design of its first in-house chip this year, a move aimed at reducing its reliance on external hardware providers. The partnership with Google represents another step in lessening its dependency on Microsoft's Azure cloud service, which had been its exclusive data center infrastructure provider until January. Sources indicate that previous attempts to form an arrangement with Google were hindered by OpenAI's existing contractual obligations with Microsoft. Concurrently, Microsoft and OpenAI are in negotiations to revise the terms of their multibillion-dollar investment, including Microsoft's future equity stake in the AI firm.
For Google, this deal aligns with its strategy of expanding the external availability of its proprietary Tensor Processing Units (TPUs), which were historically reserved for internal use. This initiative has already attracted high-profile clients such as Apple, as well as AI startups like Anthropic and Safe Superintelligence, both of which are competitors to OpenAI and were founded by former OpenAI leaders. Google Cloud, which accounted for 12% of Alphabet's 2024 revenue with $43 billion in sales, aims to position itself as a neutral provider of computing resources. This strategy is intended to outmaneuver rivals like Amazon and Microsoft in becoming the preferred cloud provider for the growing number of AI startups with significant infrastructure needs.
Alphabet, Google's parent company, faces market pressure to demonstrate financial returns on its substantial AI-related capital expenditures, projected to reach $75 billion this year. This is balanced against the need to protect its core business from competing AI offerings and to navigate antitrust scrutiny. Google's DeepMind AI unit also directly competes with OpenAI and Anthropic in developing advanced AI models and integrating them into consumer applications. Selling computing power to rivals might seem counterintuitive, since it diverts Google's own chip supply and strengthens competitors, but it also generates revenue from capacity-constrained rivals with urgent infrastructure needs. Notably, Google itself reported insufficient capacity to meet its cloud customers' demands in the last quarter, as stated by CFO Anat Ashkenazi. The OpenAI deal will further complicate how Alphabet CEO Sundar Pichai allocates capacity between Google's enterprise and consumer business segments.
Despite ChatGPT's significant lead in monthly users over Google's competing chatbots, and analysts' predictions of a potential reduction in Google's search market share, Pichai has expressed confidence that OpenAI will not displace Google's business dominance. The collaboration also comes as OpenAI acknowledges its pressing need for more compute power, highlighted by CEO Sam Altman's comment about GPUs "melting" after the release of the image generation feature powered by GPT-4o, and by a recent widespread outage of ChatGPT servers. Under the deal, OpenAI could shift some of the server load for its models, including ChatGPT and Sora, onto Google's infrastructure. While the financial specifics of the OpenAI-Google Cloud deal remain undisclosed, it is presumed to be substantial. The partnership is particularly intriguing given the fierce competition between the two tech giants, who are constantly releasing new AI models, expanding chatbot functionalities, and vying for market share through strategies that include API pricing adjustments.