Microsoft is the first to deploy AMD's MI200 series accelerator cards: up to five times faster than NVIDIA's A100
In the data-center and AI market, NVIDIA's GPUs hold a clear lead, but AMD is catching up step by step. Microsoft recently announced that it is the first to purchase AMD's MI200 series accelerator cards for large-scale AI training in the cloud.
At Build 2022, Microsoft CTO Kevin Scott announced that Azure will be the first public cloud service to deploy AMD's flagship MI200 series GPUs for large-scale AI acceleration.
When it launched the MI200 series in November, AMD claimed the parts were up to five times faster than NVIDIA's A100, particularly in FP64 compute.
The MI200 series moves to the new CDNA2 compute architecture on an upgraded 6nm FinFET process, packing 58 billion transistors and using 2.5D EFB bridging technology to deliver the industry's first multi-chip module (MCM) GPU, with two compute dies in one package.
The new family comes in two models. The Instinct MI250X integrates 220 compute units with 14,080 stream processors running at up to 1.7GHz, plus 880 second-generation matrix cores. Peak performance: FP16 half precision 383 TFLOPS; FP32 single precision/FP64 double precision vector 47.9 TFLOPS; FP32/FP64 matrix 95.7 TFLOPS; INT4/INT8/BF16 383 TOPS/TFLOPS.
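The headline numbers are internally consistent, which is a quick sanity check worth doing. A minimal sketch, assuming each stream processor retires one FMA (2 FLOPs) per cycle and that matrix cores double vector throughput while packed FP16 runs at 8x:

```python
# Sanity-check the MI250X peak-throughput figures quoted above.
# Assumptions (not from the article): 2 FLOPs per stream processor per
# cycle (one fused multiply-add), matrix cores at 2x vector rate,
# packed FP16 at 8x vector rate.

STREAM_PROCESSORS = 14_080
BOOST_CLOCK_HZ = 1.7e9
FLOPS_PER_SP_PER_CYCLE = 2  # one FMA = multiply + add

vector_tflops = STREAM_PROCESSORS * BOOST_CLOCK_HZ * FLOPS_PER_SP_PER_CYCLE / 1e12
matrix_tflops = vector_tflops * 2
fp16_tflops = vector_tflops * 8

print(f"vector: {vector_tflops:.1f} TFLOPS")  # ≈ 47.9
print(f"matrix: {matrix_tflops:.1f} TFLOPS")  # ≈ 95.7
print(f"FP16:   {fp16_tflops:.1f} TFLOPS")    # ≈ 383.0
```

All three derived values line up with AMD's quoted 47.9, 95.7, and 383 TFLOPS figures.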
For memory, it carries 128GB of HBM2e on an 8192-bit bus clocked at 1.6GHz, for a peak bandwidth of 3276.8 GB/s, with full-chip ECC support.
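The bandwidth figure follows directly from the bus width and clock. A short check, assuming (as is standard for HBM2e) a double-data-rate interface, so a 1.6GHz memory clock gives 3.2 Gbps per pin:

```python
# Reproduce the quoted 3276.8 GB/s peak HBM2e bandwidth.
# Assumption (not stated in the article): HBM2e transfers data on both
# clock edges (DDR), so 1.6 GHz -> 3.2 Gbps per pin.

BUS_WIDTH_BITS = 8192
MEM_CLOCK_GHZ = 1.6
DATA_RATE_GBPS = MEM_CLOCK_GHZ * 2  # double data rate

bandwidth_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes
print(bandwidth_gb_s)  # 3276.8
```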
The OAM module supports PCIe 4.0 x16 and is passively cooled (relying on system airflow). Typical power consumption is 500W, with a 560W peak.
The Instinct MI250 is trimmed to 208 compute units and 13,312 stream processors, so all performance figures drop by about 5.5%; all other specifications are unchanged.
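The roughly 5.5% figure is just the ratio of enabled hardware, which can be verified in a couple of lines:

```python
# The MI250's performance drop relative to the MI250X matches the
# hardware cut: compute-unit and stream-processor ratios are identical.

cu_ratio = 208 / 220
sp_ratio = 13_312 / 14_080

drop_pct = (1 - cu_ratio) * 100
print(f"{drop_pct:.2f}%")  # 5.45%, i.e. "about 5.5%"
```

The two ratios are equal because each CDNA2 compute unit contains 64 stream processors (14,080 / 220 = 13,312 / 208 = 64).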