(Image source: REUTERS)
OpenAI has added another heavyweight to its trillion-dollar AI supply chain. The company announced late Sunday a strategic partnership with Broadcom to co-develop and deploy a 10-gigawatt-scale AI accelerator cluster designed by OpenAI, marking its latest move to secure long-term computing capacity.
Under the agreement, OpenAI will work with Broadcom to roll out racks of AI accelerator chips and networking systems beginning in the second half of 2026, with full deployment expected by the end of 2029. The company’s in-house AI chips, built on the Arm architecture in collaboration with Arm and Oracle, are part of OpenAI’s broader effort to diversify beyond Nvidia’s GPUs.
The move follows a string of mega-deals between OpenAI and major chipmakers. Together with previous partnerships with Nvidia and AMD, OpenAI’s total planned AI accelerator capacity now exceeds 26 gigawatts, forming what analysts call a “circular trading” ecosystem worth more than $1 trillion—a closed loop of hardware, software, and cloud contracts among the world’s most powerful AI players.
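As a quick sanity check on the 26-gigawatt figure, the sketch below adds up the three deals using their widely reported sizes; the individual numbers are assumptions drawn from press coverage rather than from a single OpenAI disclosure.

```python
# Back-of-the-envelope tally of OpenAI's announced accelerator partnerships.
# Deal sizes are the widely reported figures (assumptions, not official totals).
deals_gw = {
    "Nvidia": 10,    # reported scale of the Nvidia partnership
    "AMD": 6,        # reported scale of the AMD partnership
    "Broadcom": 10,  # scale announced in this deal
}

total_gw = sum(deals_gw.values())
print(f"Planned accelerator capacity: about {total_gw} GW")  # about 26 GW
```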
Meanwhile, Nvidia is tightening its grip on the AI infrastructure race. In the early hours of October 14, it announced collaborations with Meta and Oracle to upgrade their AI data center networks using its Spectrum-X Ethernet switch technology. Oracle will also build a giga-scale AI factory powered by Nvidia’s new Vera Rubin architecture, connected through Spectrum-X.
Adding to the momentum, Hong Kong-listed chipmaker Innoscience (02577.HK) revealed on Monday that Nvidia will back its all-GaN power solution for 800V DC power architectures—a breakthrough expected to boost energy efficiency and power density in next-generation AI data centers while cutting carbon emissions.
“Trillion-parameter models are transforming data centers into gigawatt-scale AI factories,” said Jensen Huang, Nvidia’s founder and CEO. “Spectrum-X is not just faster Ethernet—it’s the nervous system of the AI factory, enabling hyperscalers to connect millions of GPUs into one giant computer.”
The announcements sparked a rally in AI-related stocks. On Friday, Broadcom surged nearly 10%, while Nvidia rose 2.82% and Amazon gained 1.71% on Wall Street. In Hong Kong trading on Monday, Innoscience jumped over 16% to HK$89.90, with turnover reaching HK$174 million.
OpenAI Officially Announces In-House AI Chip: Broadcom to Build the Core ASIC, Designed with Help from GPT Models
On Monday evening, OpenAI released a 28-minute podcast detailing the progress of its collaboration with Broadcom.
Charlie Kawwas, President of Broadcom’s Semiconductor Solutions Group (second from left); Broadcom President and CEO Hock Tan (third from left); OpenAI Co-founder and CEO Sam Altman (second from right); OpenAI Co-founder and President Greg Brockman (first from right)
Sam Altman announced that OpenAI has entered a strategic partnership with Broadcom to develop and deploy a 10-gigawatt (GW) custom AI chip system designed by OpenAI.
Under the deal, OpenAI will lead chip and system design, while Broadcom will collaborate on development and deployment. The two companies plan to jointly engineer AI infrastructure that integrates Broadcom’s chips and Ethernet solutions, enabling both vertical and horizontal scaling across data centers.
To put the scale in perspective: one gigawatt equals one million kilowatts, roughly the output of a single large nuclear reactor and enough to power roughly 100,000 households at peak demand. Ten gigawatts, meanwhile, is comparable to the peak electricity demand of New York City.
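As a rough check on those comparisons, the sketch below converts the 10-gigawatt commitment into kilowatts and a household equivalent; the per-household peak draw is an illustrative assumption, not a figure from the announcement.

```python
# Rough unit conversion for the 10 GW figure (illustrative assumptions only).
gigawatts = 10
kilowatts = gigawatts * 1_000_000        # 1 GW = 1,000,000 kW

# Assume roughly 10 kW of peak demand per household (illustrative; average
# household draw is much lower, so published equivalents vary widely).
peak_kw_per_household = 10
households_at_peak = kilowatts / peak_kw_per_household

print(f"{gigawatts} GW = {kilowatts:,} kW, about {households_at_peak:,.0f} households at peak")
```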
Altman revealed that OpenAI and Broadcom have been co-designing the new custom chip for the past 18 months and recently began building a complete bespoke system leveraging OpenAI’s own models. “You can look at AI infrastructure construction from many perspectives,” Altman said. “This may well be the largest collaborative industrial project in human history.”
“Once we understood the scale of capability and reasoning power the world would need, we started to wonder if we could build a chip dedicated entirely to that workload,” he added. “Broadcom is clearly the best partner in the world for this.”
Broadcom President and CEO Hock Tan said OpenAI is rethinking technology from “the transistor level all the way up to what happens when someone asks ChatGPT a question.” Optimizing the entire stack, he said, will unlock “tremendous efficiency gains—leading to better performance, faster models, and more affordable AI.”
“If you manufacture your own chips, you control your own destiny,” Tan said.
OpenAI clarified that the Broadcom partnership does not include any investment or equity components, distinguishing it from OpenAI’s previous deals with Nvidia and AMD. Neither company disclosed financial terms, and an OpenAI spokesperson declined to comment on how the chips will be financed.
Greg Brockman, OpenAI’s co-founder and president, said even a 10-gigawatt system won’t be enough to power the company’s long-term vision for Artificial General Intelligence (AGI). “Compared to what we need to achieve,” Brockman said, “this is just a drop in the bucket.”
Charlie Kawwas, president of Broadcom’s Semiconductor Solutions Group, said the scale of what’s being built is unprecedented. “Take railroads—it took about a century for them to become critical infrastructure. The internet took about 30 years. But I believe this won’t take even five.”
Mandeep Singh, an analyst at Bloomberg Intelligence, noted that OpenAI appears to be following Google’s playbook: Google uses Broadcom technology to build its TPU chips more efficiently. “Given Google’s success, OpenAI’s choice of Broadcom over suppliers like Marvell makes strategic sense,” Singh said.
“In the late 1990s, circular trading usually centered on advertising and cross-selling among startups, with companies buying services from each other to inflate perceived growth,” said Paulo Carvao, Senior Fellow at Harvard Kennedy School. “Today’s AI companies have tangible products and customers, but their spending is still outpacing their profits.”
Stacy Rasgon, an analyst at Bernstein Research, commented this week that OpenAI CEO Sam Altman has the power either to crash the global economy for a decade or to lead us to the promised land, and that at this point no one knows which outcome will prevail.
Jensen Huang: The U.S.-China Chip Gap Is Only a Few Nanoseconds
Innoscience, a gallium nitride (GaN) power semiconductor supplier based in Suzhou, China, has drawn significant attention after announcing a new collaboration with Nvidia.
Innoscience on Tuesday said it will work with Nvidia to jointly support an 800-volt direct current (VDC) power architecture, a critical step in enabling Nvidia’s next-generation GPU roadmap. The partnership makes Innoscience the only Chinese chip company selected as a power partner in Nvidia’s latest supply chain expansion.
Founded in 2017, Innoscience specializes in the research, development, and manufacturing of third-generation GaN semiconductors. Its products span discrete devices, integrated circuits, wafers, and modules. By the end of 2023, the company held the world’s largest market share in GaN power discrete devices—42.4%—according to industry data.
On the right is Luo Weiwei, founder and chairwoman of Innoscience
Innoscience’s founder, Luo Weiwei, often referred to as China’s “Queen of Chips,” spent 15 years at NASA, where she rose from project manager to chief scientist. Inspired by the rapid progress of GaN technologies after Navitas Semiconductor launched the world’s first GaN power chips in 2014, Luo returned to China to establish Innoscience and pursue technological self-reliance in advanced semiconductors.
From its inception, Innoscience adopted an IDM (Integrated Device Manufacturer) model, covering design, manufacturing, and sales across the entire value chain. In November 2017, the company built China’s first 8-inch silicon-based GaN epitaxy and chip production line. By 2020, the yield rate of its 8-inch chip products had reached 92%.
As the first company worldwide to achieve mass production of 8-inch silicon-based GaN wafers, Innoscience remains the only manufacturer capable of supplying a full voltage spectrum of such products at industrial scale. Its technology enables 80% more dies per wafer and reduces per-chip costs by 30% compared to 6-inch GaN wafers—making it a global leader in cost efficiency and scalability.
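The claim of roughly 80% more dies per wafer is consistent with simple wafer-area arithmetic. The sketch below ignores edge exclusion and yield, and the die size is an arbitrary illustrative value, so it is an approximation rather than Innoscience’s own calculation.

```python
import math

# Approximate die count from wafer area alone (ignores edge exclusion and yield).
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> float:
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    return wafer_area_mm2 / die_area_mm2

die_area_mm2 = 4.0                              # illustrative die size in mm^2
six_inch = dies_per_wafer(150, die_area_mm2)    # 6-inch wafer is about 150 mm
eight_inch = dies_per_wafer(200, die_area_mm2)  # 8-inch wafer is about 200 mm

print(f"8-inch vs 6-inch die count: +{(eight_inch / six_inch - 1) * 100:.0f}%")
# prints roughly +78%, in line with the ~80% figure cited above
```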
Innoscience went public on the Hong Kong Stock Exchange on December 30, 2024, raising HK$1.45 billion in its IPO. Its market capitalization soon approached HK$30 billion, fueled in part by growing speculation about its role in Nvidia’s supply chain.
In May 2025, Nvidia announced that the explosive growth of AI workloads is driving power demand in data centers to unprecedented levels. Traditional 54V rack systems can no longer support the emerging megawatt-scale AI factories. Starting in 2027, Nvidia plans to transition its data centers to 800V HVDC power infrastructure, capable of powering 1-megawatt IT racks and beyond.
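The case for moving from 54V racks to 800V HVDC is easiest to see with basic power arithmetic: for the same rack power, a higher bus voltage means far less current, and therefore lighter busbars and lower resistive losses. The numbers below are illustrative, not Nvidia’s specifications.

```python
# Current needed to deliver 1 MW to a rack at different bus voltages (I = P / V).
rack_power_w = 1_000_000   # 1 MW target rack power

for bus_voltage_v in (54, 800):
    current_a = rack_power_w / bus_voltage_v
    print(f"{bus_voltage_v:>3} V bus -> {current_a:,.0f} A per rack")

# 54 V needs roughly 18,500 A; 800 V needs about 1,250 A.
# Since resistive loss scales with the square of current, the higher-voltage
# bus also cuts distribution losses sharply for the same conductors.
```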
To accelerate this shift, Nvidia has begun collaborating with major players across the data center electrical ecosystem—including Innoscience, which will supply all-GaN power solutions for Nvidia’s next-generation high-voltage systems.
Innoscience’s shares have rallied sharply this year, climbing from around HK$40 to a peak of HK$106, lifting its market capitalization above HK$70 billion.
In October, the company announced a proposed share placement to raise HK$1.56 billion, issuing 20.7 million new H shares — about 3.9% of its enlarged H-share capital and 2.3% of its total enlarged issued capital — at HK$75.58 per share. CITIC Securities is acting as the sole global coordinator, with CITIC Securities and Haitong International serving as joint placement agents.
According to the company, about 31% of the proceeds will go toward capacity expansion and product upgrades, 24% toward repaying interest-bearing debt, and 45% toward working capital and general corporate purposes.
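For readers checking the figures, the placement terms reported above are internally consistent; the quick arithmetic below simply multiplies the reported share count by the placement price and confirms the stated use-of-proceeds percentages sum to 100%.

```python
# Quick arithmetic check on the reported placement terms.
new_shares = 20_700_000      # new H shares to be issued
price_hkd = 75.58            # placement price per share
gross_hkd = new_shares * price_hkd
print(f"Gross proceeds: about HK${gross_hkd / 1e9:.2f} billion")  # about HK$1.56 billion

use_of_proceeds_pct = {"capacity and upgrades": 31, "debt repayment": 24, "working capital": 45}
print(f"Use of proceeds total: {sum(use_of_proceeds_pct.values())}%")  # 100%
```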
Innoscience has now unveiled the world’s first full-link 800V DC GaN power solution, becoming an Nvidia partner and offering the only comprehensive GaN portfolio that spans the full conversion chain from 800V down to 0.6V.
Data from Frost & Sullivan shows that the GaN power semiconductor market is growing rapidly, with the global market projected to reach RMB 50.1 billion by 2028, about 10.1% of the overall power semiconductor market. GaN devices offer high switching frequency, high breakdown voltage, and high electron mobility, making them increasingly attractive for electric vehicles, data centers, and photovoltaic power systems.
At this year’s Nvidia GTC conference, CEO Jensen Huang introduced the concept of the “AI factory” — a new type of productivity infrastructure integrating AI development with industrial processes. The AI factory encompasses the entire AI workflow, from data collection and training to fine-tuning and large-scale inference, converting energy and computing power into the mass production of AI tokens. Huang described it as the third major social infrastructure after electricity and the internet.
Nvidia has also announced the creation of an 800V High Voltage Direct Current (HVDC) power supply alliance, aimed at developing next-generation AI data centers capable of supporting 1 megawatt (MW) of power per rack by 2027.
Recently, Huang praised China’s chip industry as “vibrant, entrepreneurial, and deeply competitive,” noting its abundant engineering talent and manufacturing potential. “China is only a few nanoseconds behind the United States, so we must compete,” he said. He added that the US should allow its technology sector to compete globally, including in China, to extend American influence through innovation. Having foreign companies invest, compete, and collaborate in China, he argued, is in China’s interest, and Chinese companies themselves want to go beyond China and compete globally.