Delivering on the promise of advanced AI for our customers requires supercomputing infrastructure, services, and expertise to address the exponentially increasing size and complexity of the latest models. At Microsoft, we are meeting this challenge by applying a decade of experience in supercomputing and supporting the largest AI training workloads to create AI infrastructure capable of massive performance at scale. The Microsoft Azure cloud, and specifically our graphics processing unit (GPU) accelerated virtual machines (VMs), provide the foundation for many generative AI advancements from both Microsoft and our customers.
“Co-designing supercomputers with Azure has been crucial for scaling our demanding AI training needs, making our research and alignment work on systems like ChatGPT possible.”—Greg Brockman, President and Co-Founder of OpenAI.
Azure’s most powerful and massively scalable AI virtual machine series
Today, Microsoft is introducing the ND H100 v5 VM, available on demand in sizes ranging from eight to thousands of NVIDIA H100 GPUs interconnected by NVIDIA Quantum-2 InfiniBand networking. Customers will see significantly faster performance for AI models over our last-generation ND A100 v4 VMs, thanks to innovative technologies such as:
- 8x NVIDIA H100 Tensor Core GPUs interconnected via next-gen NVSwitch and NVLink 4.0
- 400 Gb/s NVIDIA Quantum-2 CX7 InfiniBand per GPU, with 3.2 Tb/s per VM in a non-blocking fat-tree network
- NVSwitch and NVLink 4.0 with 3.6 TB/s bisectional bandwidth between the 8 local GPUs within each VM
- 4th Gen Intel Xeon Scalable processors, codenamed Sapphire Rapids
- PCIe Gen5 host-to-GPU interconnect with 64 GB/s bandwidth per GPU
- 16 channels of 4800 MHz DDR5 DIMMs
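The per-VM fabric figure in this list follows directly from the per-GPU numbers. As a quick back-of-the-envelope check, here is a minimal sketch in plain Python (no Azure dependencies; the constants simply restate the specifications above, and the aggregate PCIe number is a straightforward multiplication rather than an official platform figure):

```python
# Back-of-the-envelope check of the per-VM interconnect figures quoted above.
GPUS_PER_VM = 8

# Scale-out fabric: one 400 Gb/s Quantum-2 CX7 InfiniBand link per GPU.
IB_PER_GPU_GBPS = 400  # gigabits per second
ib_per_vm_tbps = GPUS_PER_VM * IB_PER_GPU_GBPS / 1000
print(f"InfiniBand per VM: {ib_per_vm_tbps:.1f} Tb/s")  # -> 3.2 Tb/s, as listed

# Host-to-device path: PCIe Gen5 at 64 GB/s per GPU.
# The total below is a simple sum, not an official aggregate specification.
PCIE_PER_GPU_GBYTES = 64  # gigabytes per second
print(f"Aggregate host-to-GPU bandwidth: {GPUS_PER_VM * PCIE_PER_GPU_GBYTES} GB/s")  # -> 512 GB/s
```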
Delivering exascale AI supercomputers to the cloud
Generative AI applications are evolving rapidly and adding unique value across nearly every industry. From reinventing search with the new AI-powered Microsoft Bing and Edge to AI-powered assistance in Microsoft Dynamics 365, AI is quickly becoming a pervasive component of software and of how we interact with it, and our AI infrastructure will be there to pave the way. With our experience delivering multiple-ExaOP supercomputers to Azure customers around the world, customers can trust that they can achieve true supercomputer performance with our infrastructure. For Microsoft and organizations like Inflection, NVIDIA, and OpenAI that have committed to large-scale deployments, this offering will enable a new class of large-scale AI models.
“Our focus on conversational AI requires us to develop and train some of the most complex large language models. Azure’s AI infrastructure provides us with the performance we need to process these models efficiently and reliably at a huge scale. We’re thrilled about the new VMs on Azure and the increased performance they will bring to our AI development efforts.”—Mustafa Suleyman, CEO, Inflection.
AI at scale is built into Azure’s DNA. Our initial investments in large language model research, like Turing, and engineering milestones such as building the first AI supercomputer in the cloud prepared us for the moment when generative artificial intelligence became possible. Azure services like Azure Machine Learning make our AI supercomputer accessible to customers for model training, and Azure OpenAI Service enables customers to tap into the power of large-scale generative AI models. Scale has always been our north star in optimizing Azure for AI. We are now bringing supercomputing capabilities to startups and companies of all sizes, without requiring the capital for massive physical hardware or software investments.
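To make the customer-facing side of this concrete, here is a minimal, hedged sketch of querying a model deployed in Azure OpenAI Service using the openai Python SDK (v1.x style). The endpoint, API version, and deployment name are placeholders for values from your own Azure OpenAI resource, not details taken from this announcement:

```python
# Minimal sketch: calling a model deployed in Azure OpenAI Service.
# Endpoint, API version, and deployment name below are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # example API version; use the one your resource supports
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment created in your Azure OpenAI resource
    messages=[
        {"role": "user", "content": "Summarize the ND H100 v5 announcement in one sentence."}
    ],
)
print(response.choices[0].message.content)
```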
“NVIDIA and Microsoft Azure have collaborated through multiple generations of products to bring leading AI innovations to enterprises around the world. The NDv5 H100 virtual machines will help power a new era of generative AI applications and services.”—Ian Buck, Vice President of hyperscale and high-performance computing at NVIDIA.
Today we are announcing that ND H100 v5 is available for preview and will become a standard offering in the Azure portfolio, allowing anyone to unlock the potential of AI at scale in the cloud. Sign up to request access to the new VMs.
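Once access is granted, one way to confirm that an ND H100 v5 size is visible to your subscription in a given region is through the azure-mgmt-compute Python SDK. The region and the Standard_ND96isr_H100_v5 size name below are assumptions for illustration; consult the Azure documentation for the authoritative size names and preview regions:

```python
# Sketch: list the VM sizes visible in a region and look for H100-class sizes.
# The region and the size-name pattern are assumptions for illustration only.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

region = "eastus"  # assumed region; preview availability varies
for size in client.virtual_machine_sizes.list(location=region):
    if "H100" in size.name:  # e.g. Standard_ND96isr_H100_v5 (assumed name)
        print(size.name, size.number_of_cores, "vCPUs,", size.memory_in_mb, "MB memory")
```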