NEWS  /  Analysis

Microsoft's First AI Superfactory Comes Online as Datacenter Buildout Ramps Up

By  LiDan  Nov 13, 2025, 4:34 a.m. ET

The two-story Fairwater AI datacenters are directly connected to each other – and eventually to others under construction throughout the U.S. The new site uses Nvidia's GB200 NVL72 rack-scale systems that can scale to hundreds of thousands of Nvidia Blackwell GPUs.

AsianFin -- Microsoft Corp. is launching a new class of datacenter as the software giant ramps up its artificial intelligence (AI) infrastructure buildout.

Credit: Microsoft

Microsoft on Wednesday announced that its first AI superfactory has come online to accelerate AI breakthroughs and train new models at a scale that was previously impossible. The superfactory is delivered as a set of two-story structures in Atlanta, Georgia, the second site in Microsoft's Fairwater family. The new purpose-built Fairwater Azure AI datacenter is connected to the company's first Fairwater site in Wisconsin.

The new superfactory shares the same architecture and design as the Wisconsin datacenter Microsoft announced in September, which the company described as the world's most powerful AI datacenter. But these aren't simply isolated buildings densely packed with sophisticated silicon and cooled with techniques that use almost zero water.

These Fairwater AI datacenters are directly connected to each other – and eventually to others under construction throughout the U.S. – with a new type of dedicated network allowing data to flow between them extremely quickly. This enables Fairwater sites located in different states to work together as an AI superfactory to train new generations of AI models far more quickly, accomplishing jobs in just weeks instead of several months.

Microsoft said the network will connect multiple sites with hundreds of thousands of the most advanced graphics processing units (GPUs) running AI workloads, exabytes of storage and millions of CPU cores for operational compute tasks.

The new Fairwater AI datacenters have a unique design that differentiates them from previous generations. They feature a new chip and rack architecture that delivers the highest throughput per rack of any cloud platform available today, built on Nvidia's GB200 NVL72 rack-scale systems that can scale to hundreds of thousands of Nvidia Blackwell GPUs. The two-story design allows for greater GPU density, and intelligent networking enables fast communication among GPUs. The sites also feature advanced liquid cooling that consumes almost zero water in operation, along with a new dedicated network linking them to AI compute clusters at other sites.

“This is about building a distributed network that can act as a virtual supercomputer for tackling the world’s biggest challenges in ways that you just could not do in a single facility,” said Alistair Speirs, Microsoft general manager focusing on Azure infrastructure.

 “A traditional datacenter is designed to run millions of separate applications for multiple customers,” Speirs added. “The reason we call this an AI superfactory is it’s running one complex job across millions of pieces of hardware. And it’s not just a single site training an AI model, it’s a network of sites supporting that one job.”

Microsoft didn’t specify the cost of the new Fairwater superfactory, but the company disclosed aggressive AI spending in late October when it released financial results for its first fiscal quarter, which ended September 30.

Microsoft posted a steeper climb in spending than Wall Street anticipated, fueling anxieties about the high cost of providing AI infrastructure. Capital expenditure, or capex, including finance leases – an indicator of datacenter spending – set a record of $34.9 billion in the September quarter, while analysts had projected $30.06 billion. Capex was up about $10 billion, or more than 60%, from the previous record set the quarter before, and surged 74.5% from a year earlier.

Microsoft CEO Satya Nadella said the company will continue to increase its investments in AI across both capital and talent, as its planet-scale cloud and AI factory, together with Copilots across high-value domains, drives broad diffusion and real-world impact.

Microsoft CFO Amy Hood said on an earnings call that capex will increase again in the current fiscal quarter. She told analysts that Microsoft has yet to resolve a computing capacity crunch despite massive spending on AI datacenters. Demand for Azure services is “significantly ahead of the capacity we have available,” she said. “I thought we were going to catch up,” Hood added. “We are not.”