In a pivotal move for the AI industry, OpenAI is collaborating with Broadcom to design and deploy 10 gigawatts of custom AI accelerators, signaling a strategic shift towards in-house hardware to meet immense demand and drive unprecedented performance in large language models.
The global race to secure vast computing power for artificial intelligence has entered a new, intensified phase. OpenAI, the company behind ChatGPT, has announced a landmark partnership with semiconductor giant Broadcom to develop its own custom AI accelerators. The collaboration isn’t just about reducing reliance on existing suppliers such as Nvidia and AMD; it’s a strategic maneuver to embed the insights of frontier models directly into silicon, with the aim of reshaping the entire AI hardware landscape.
A Strategic Shift Towards In-House Hardware
Historically, OpenAI has depended on external suppliers to power its vast infrastructure. However, the surging demand for computing resources, driven by more than 800 million weekly users across ChatGPT and its enterprise products, necessitates a change. This new partnership with Broadcom marks a fundamental shift in strategy, with OpenAI taking on the critical role of chip design, while Broadcom manages production and hardware integration.
The ambition is grand: to design and deploy 10 gigawatts (GW) of custom AI accelerators. To put this into perspective, 1 GW is enough energy to power approximately 700,000 homes in the US, meaning this initiative could power the equivalent of roughly 7 million US households, or five times the electricity produced by the Hoover Dam. These specialized processors are engineered to handle the complex mathematical operations fundamental to AI models, distributing compute power across OpenAI’s own facilities and partner data centers globally. The first systems are slated for deployment in the latter half of 2026, with full completion anticipated by the end of 2029.
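The scale comparisons above reduce to simple arithmetic; a quick back-of-the-envelope sketch, using the article's approximate figures (~700,000 US homes per GW, and a roughly 2 GW nameplate capacity for the Hoover Dam):

```python
# Back-of-the-envelope check of the scale figures cited above.
# Assumptions (approximate, from the article): 1 GW powers ~700,000 US homes;
# the Hoover Dam's nameplate capacity is roughly 2 GW.

DEPLOYMENT_GW = 10
HOMES_PER_GW = 700_000
HOOVER_DAM_GW = 2  # approximate nameplate capacity

homes_powered = DEPLOYMENT_GW * HOMES_PER_GW
hoover_dam_multiple = DEPLOYMENT_GW / HOOVER_DAM_GW

print(f"Household equivalent: {homes_powered:,} homes")    # 7,000,000 homes
print(f"Hoover Dam multiple: {hoover_dam_multiple:.0f}x")  # 5x
```

Note that 10 GW at 700,000 homes per GW works out to about 7 million households, and these are capacity comparisons, not annual energy output.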
Embedding Intelligence Directly into Silicon
The core philosophy behind this move is clear: achieve superior performance by integrating insights gained from building advanced models directly into the hardware. Sam Altman, OpenAI’s co-founder and CEO, emphasized the collaboration’s strategic importance. “Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” Altman stated. He added that developing their own accelerators contributes to a broader ecosystem crucial for pushing the frontier of AI.
Greg Brockman, President of OpenAI, further elaborated on the technical advantages. “By building our own chip, we can embed what we’ve learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence,” he explained. Interestingly, OpenAI leveraged its own AI models to optimize the chip design process, reportedly achieving “massive area reductions” that surpassed traditional human engineering methods.
Broadcom’s Role and the Ethernet Advantage
For Broadcom, this partnership reinforces its growing dominance in the AI ecosystem. The new rack systems will notably utilize Ethernet networking, a widely adopted standard for connecting computers, in contrast to alternatives like InfiniBand, often favored in high-performance computing clusters. Hock Tan, Broadcom’s President and CEO, views this agreement as a validation of the Ethernet approach. “OpenAI has been a leader in the AI revolution since the ChatGPT moment – and we are thrilled to co-develop and deploy 10 GW of next-generation accelerators and network systems to pave the way for the future of AI,” Tan remarked.
The systems will integrate Broadcom’s comprehensive suite of connectivity technology, including Ethernet switches and optical links for high-speed data transfer between racks. Charlie Kawwas, President of Broadcom’s Semiconductor Solutions Group, highlighted the synergy: “Custom accelerators combine remarkably well with standards-based Ethernet scale-up and scale-out networking solutions to provide cost and performance optimised next generation AI infrastructure.” He added, “The racks include Broadcom’s end-to-end portfolio of Ethernet, PCIe and optical connectivity solutions, reaffirming our AI infrastructure portfolio leadership.” The companies have already finalized co-development and supply agreements, with a term sheet now in place for system deployment into production.
The Bigger Picture: The AI Chip Race and Future Implications
This collaboration builds on an 18-month foundation of quiet development between the two companies. It also aligns with OpenAI’s recent flurry of infrastructure commitments, totaling approximately 33 GW of compute capacity through deals with Nvidia, AMD, Oracle, and now Broadcom. Currently, OpenAI operates on just over 2 GW of capacity, underscoring the monumental scale of these new initiatives.
The move by OpenAI mirrors similar efforts by other cloud-computing giants such as Alphabet-owned Google and Amazon, which are also developing custom chips to reduce dependence on costly and supply-constrained Nvidia processors. However, analysts caution that designing custom chips is not without its challenges; similar efforts by Microsoft and Meta have faced delays or struggled to match Nvidia’s performance, as reported by Reuters. Despite this, Broadcom’s stock has soared, reflecting investor confidence in its role in the generative AI boom, with its valuation reportedly surpassing $1.5 trillion.
While the financial terms of the deal were not disclosed, analysts such as Gil Luria of D.A. Davidson have raised concerns about an “AI bubble,” noting OpenAI’s vast commitments (approaching $1 trillion) relative to its current revenue ($15 billion). These deals often involve “circular financing,” in which partners both invest in OpenAI and supply it with technology, a dynamic highlighted by AP News.
Regardless of the financial intricacies, the strategic imperative for OpenAI remains clear: control its destiny. As Hock Tan succinctly put it, “If you do your own chips, you control your destiny.” Sam Altman remains optimistic about the future, envisioning that the “gigantic amount of computing infrastructure” made possible by this partnership will enable “very high-quality intelligence delivered very fast and at a very low price,” which the world will rapidly absorb and utilize for incredible new applications.