Microsoft Accelerates Data Center Build to Support ChatGPT
Waterworks will pump Columbia River water to a 750K SF hub in Washington.
Microsoft, which has surged to the front of the artificial intelligence race with its $10B investment in OpenAI, the creator of ChatGPT, is now taking the lead in building the new and expanded data center infrastructure the technology will require.
The Redmond, WA-based tech giant disclosed this week that it is planning a new 750K SF hyperscale data center campus in Quincy, WA, about 20 miles from its core Washington data center hub.
Known as the Malaga project, the 102.5-acre campus will encompass three 250K SF server farms on a site Microsoft acquired for $9.2M last April. Construction begins in June, with delivery of the first building in 18 months; the campus is expected to be completed by 2027.
The powerful microprocessors that run artificial intelligence workloads throw off so much heat that the Quincy complex will require a cooling system that pumps water directly from the Columbia River to the data centers.
Microsoft is planning to build a $2.3M waterworks installation to pump river water to the campus to cool the advanced microprocessors; the company estimates that each of the three buildings will require at least 121,000 gallons of water per day.
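At that rate, the campus-wide draw would be at least 3 x 121,000 = 363,000 gallons per day once all three buildings are online, a back-of-the-envelope extrapolation from the company's per-building estimate.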
Earlier this month, Microsoft announced a $176M expansion of its data center hub in San Antonio. The cloud computing giant also paid $42M for a 30-acre industrial site in Hoffman Estates, a Chicago suburb, for data center development.
The data center build-out of recent years, undertaken to support the explosion of cloud computing, video streaming and 5G networks, will not be sufficient for the next-level digital transformation that has begun in earnest with the widespread adoption of artificial intelligence.
In fact, AI will require a different cloud-computing framework for its digital infrastructure, one that will redefine current data center networks in terms of where certain clusters are located and what specific functions those facilities perform.
The data-crunching needs of AI platforms are so vast that OpenAI, the creator of ChatGPT, which it introduced in November, would not be able to keep the brainy word-meister running without hitching a ride on Microsoft's soon-to-be-upgraded Azure cloud platform.
The microprocessing "brain" of an artificial intelligence platform, in this case the data center infrastructure that will support this digital transformation, will be organized like a human brain into two hemispheres, or lobes. And yes, one lobe will need to be much stronger than the other.
One hemisphere of AI digital infrastructure will serve what is being called the "training" lobe: the computational firepower needed to crunch as many as 300B data points into the word salad that ChatGPT serves up. The training lobe ingests data points and reorganizes them into a model, an iterative process in which the digital entity keeps refining its "understanding," in effect teaching itself to absorb a universe of information and to communicate the essence of that knowledge in precise human syntax.
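To make that loop concrete, here is a minimal sketch in Python, assuming a toy linear model with made-up data and dimensions; it illustrates the ingest-and-refine cycle the paragraph describes, not anything resembling OpenAI's actual pipeline:

```python
import numpy as np

# Toy "training lobe": repeatedly ingest data, measure the model's error,
# and nudge its parameters. Data, sizes and the linear model are all
# illustrative stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                      # stand-in for ingested data points
true_w = rng.normal(size=8)
y = X @ true_w + rng.normal(scale=0.1, size=1000)   # stand-in targets

w = np.zeros(8)             # model parameters, refined iteration by iteration
lr = 0.01                   # learning rate
for step in range(500):     # the iterative refinement loop
    pred = X @ w                       # forward pass: the model's current "understanding"
    grad = X.T @ (pred - y) / len(y)   # how wrong the model is, and in which direction
    w -= lr * grad                     # adjust parameters to shrink the error
```

Every pass through that loop touches the entire dataset, which is why a training lobe scaled up to hundreds of billions of data points demands so much raw compute.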
The training lobe requires massive computing power and the most advanced GPU semiconductors, but relatively little of the connectivity that is now the imperative at data center clusters supporting cloud computing services and 5G networks.
The infrastructure focused on "training" each AI platform will have a voracious appetite for power, mandating that data centers be sited near gigawatts of renewable energy and fitted with new liquid-based cooling systems, as well as redesigned backup power and generator systems, among other new design features.
The other hemisphere of an AI platform's brain, the digital infrastructure for higher functions, is known as the "inference" mode: it supports the interactive "generative" platforms that field your queries, tap into the modeled database and respond in convincing human syntax seconds after you enter your questions or instructions.
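Inference, by contrast, is a single cheap forward pass through parameters that are already frozen. A minimal sketch under the same toy assumptions (here "w_trained" is a placeholder for the output of a training run like the one above):

```python
import numpy as np

# Toy "inference lobe": apply frozen, pre-trained parameters to a new query.
# No gradients and no parameter updates, just one forward pass per request.
rng = np.random.default_rng(1)
w_trained = rng.normal(size=8)      # placeholder for trained model parameters

def respond(query_vector, w):
    return query_vector @ w         # the frozen model answers the query

print(respond(rng.normal(size=8), w_trained))
```

That asymmetry is the point of the two-lobe design: training grinds through the whole dataset over and over, while each user response is a single pass that must come back in seconds.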