X12 GPU with PCI-E

High Performance and Flexibility for AI/ML and HPC Applications

High performance AI/ML and HPC-optimized solution

Optimized for graphics and rendering applications

Double the CPU to GPU throughput with PCI-E 4.0

Dual socket Intel® Xeon® Scalable processors up to 270W

NVIDIA GPUs supported

NVIDIA certified system

4U 10-GPU

Flexible Root Configuration, PCI-E GPU System

High-density systems for double-width, full-length PCI-E GPUs:

  •  1U: supports up to four PCI-E GPUs
  •  2U: supports up to six PCI-E GPUs
  •  4U: supports up to ten PCI-E GPUs

NVMe support for lower latency and higher throughput.

New level of compute performance with Intel Xeon Scalable processors.
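
As a rough way to see the PCI-E 4.0 CPU-to-GPU throughput claim in practice, the sketch below times a pinned host-to-device copy using the standard CUDA runtime API. It is only an illustration, not a Supermicro utility: the 256 MiB buffer size, device index 0, and iteration count are arbitrary choices.

// Minimal sketch: measures host-to-GPU copy bandwidth, the path that
// benefits from PCI-E 4.0. Buffer size and device index are arbitrary.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;          // 256 MiB test buffer
    void *host = nullptr, *dev = nullptr;

    cudaSetDevice(0);
    cudaMallocHost(&host, bytes);               // pinned memory for full DMA speed
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < 10; ++i)
        cudaMemcpyAsync(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gbps = (10.0 * bytes / 1e9) / (ms / 1e3);
    printf("Host-to-device bandwidth: %.1f GB/s\n", gbps);

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}

Built with nvcc, the printed figure should roughly double when moving from a PCI-E 3.0 host to a PCI-E 4.0 host with an x16 link, all else being equal.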

Key Applications
  • AI/ML
  • Deep Learning Training and Inference
  • High-performance Computing (HPC)
  • Rendering Platform for High-end Professional Graphics
  • Best-in-Class VDI Infrastructure Platform

X12 GPU with HGX

High Performance and Flexibility for AI/ML and HPC Applications

Dense and scalable multi-GPU powerhouse supporting the latest HGX A100 8-GPU (SXM4) baseboard

Next generation of NVIDIA NVLink™, with double the GPU-to-GPU direct bandwidth, almost 10X higher than PCI-E 4.0

New NVIDIA NVSwitch that is 2X faster than the previous generation

Networking up to 200G, GPUDirect RDMA and GPUDirect Storage

AIOM slot (OCP 3.0 compliant) support

NVIDIA certified system

4U HGX A100 8-GPU

Integrated Performance

Maximum Acceleration X12 GPU System

With Supermicro’s advanced architecture and thermal design, including liquid cooling and custom heatsinks, our 4U GPU system drives NVIDIA’s latest HGX A100 8-GPU baseboard and can deliver up to 6x AI training performance and 7x inference workload capacity, with the highest GPU density, in a flexible 4U form factor.

Supermicro’s unique AIOM slots (OCP 3.0 compliant) and numerous PCI-E 4.0 slots on these systems enhance multi-GPU communication and high-speed data flow between systems at scale.
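
The multi-GPU communication path can be exercised directly from application code. The sketch below is a minimal example using the standard CUDA peer-to-peer API to copy a buffer from GPU 0 to GPU 1 without staging through host memory; it assumes at least two peer-capable GPUs in the node, and the 64 MiB payload size is an arbitrary choice for illustration.

// Minimal sketch, assuming at least two peer-capable GPUs in the node:
// enables direct GPU-to-GPU copies, the traffic pattern that NVLink/NVSwitch
// (or peered PCI-E root complexes) accelerates.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    if (!canAccess01 || !canAccess10) {
        printf("GPUs 0 and 1 are not peer-capable on this topology\n");
        return 1;
    }

    const size_t bytes = 64ull << 20;           // 64 MiB payload for illustration
    void *buf0 = nullptr, *buf1 = nullptr;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);           // allow GPU 0 to address GPU 1
    cudaMalloc(&buf0, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);
    cudaMalloc(&buf1, bytes);

    // Direct device-to-device copy; no staging through host memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();
    printf("Copied %zu bytes GPU0 -> GPU1 peer-to-peer\n", bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}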

The X12 GPU systems feature the latest technology stacks such as 200G networking, NVIDIA NVLink and NVSwitch, 1:1 GPUDirect RDMA, GPUDirect Storage, and NVMe-oF on InfiniBand.
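
For GPUDirect Storage specifically, reads and writes go through NVIDIA's cuFile API so data can move from NVMe directly into GPU memory. The following is a minimal sketch of that read path under stated assumptions: the file path /data/sample.bin is a placeholder, error handling is trimmed for brevity, and the system is assumed to have the GPUDirect Storage driver stack and cufile library installed (typically linked with -lcufile).

// Minimal sketch of a GPUDirect Storage read via the cuFile API, so data moves
// from NVMe into GPU memory without a bounce buffer in host RAM. File name and
// read size are placeholders.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>
#include <cufile.h>

int main() {
    const size_t bytes = 16ull << 20;                       // 16 MiB illustration read
    cuFileDriverOpen();                                     // bring up the GDS driver

    int fd = open("/data/sample.bin", O_RDONLY | O_DIRECT); // O_DIRECT is required by GDS
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    void *devBuf = nullptr;
    cudaMalloc(&devBuf, bytes);
    cuFileBufRegister(devBuf, bytes, 0);                    // pin the GPU buffer for DMA

    // Read straight from storage into GPU memory (file offset 0, buffer offset 0).
    ssize_t got = cuFileRead(handle, devBuf, bytes, 0, 0);
    printf("cuFileRead returned %zd bytes\n", got);

    cuFileBufDeregister(devBuf);
    cudaFree(devBuf);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}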

Key Applications
  • AI/ML
  • Deep Learning Training and Inference
  • High-performance Computing (HPC)
  • Building Block for Scalable AI Infrastructure
