AMD Instinct MI210 GPU Server

HPC-Ready GPU Power for Demanding AI, Scientific & Engineering Workloads — Built for Density, Speed & Performance

Bare Metal Server

AMD Instinct MI210 GPU Server Price

Don't see what you're looking for?

🚀 Core Specifications

  • Architecture: AMD CDNA 2 (Aldebaran, 6 nm).
  • Compute Units / Stream Processors: 104 CUs, 6,656 stream processors.
  • Memory: 64 GB HBM2e on 4096‑bit bus (1.6 TB/s) with ECC.
  • Clock Speed: 1.0 GHz base → 1.7 GHz boost.
  • Performance:
    • FP64: 22.6 TFLOPS
    • FP32: 22.6 TFLOPS
    • FP16/BF16/INT8: 181 TFLOPS/181 TOPS
  • Form Factor: Dual‑slot, PCIe 4.0 ×16, 300 W TDP, passive cooling.
  • Infinity Fabric I/O: Up to three Infinity Fabric links at 100 GB/s each, delivering up to 300 GB/s of aggregate peer-to-peer bandwidth in 2- and 4-GPU hives.
  • Reliability & Software: ECC-protected memory, PCIe 4.0 host fabric, and ROCm 5 platform support.
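The headline figures above are internally consistent and can be sanity-checked from the architecture numbers. A minimal sketch in Python, assuming the standard CDNA 2 vector pipeline (64 FP64 lanes per CU, 2 FLOPs per FMA) and an effective HBM2e data rate of 3.2 GT/s per pin (neither assumption is stated on this page):

```python
# Sanity-check MI210 headline specs from the architecture figures.
# Assumptions (not stated above): 64 FP64 lanes per CU, 2 FLOPs per FMA,
# and a 3.2 GT/s effective HBM2e data rate per pin.

CUS = 104            # compute units
BOOST_HZ = 1.7e9     # peak engine clock (1.7 GHz boost)
BUS_BITS = 4096      # HBM2e memory bus width
DATA_RATE = 3.2e9    # effective transfers/s per pin (assumed)

# Vector FP64/FP32: 64 lanes per CU, 2 FLOPs per FMA cycle
fp64_tflops = CUS * 64 * 2 * BOOST_HZ / 1e12
# Packed FP16/BF16 runs at 8x the FP64 vector rate on CDNA 2
fp16_tflops = fp64_tflops * 8

# Memory bandwidth: bus width in bytes * effective data rate
bandwidth_tbs = BUS_BITS / 8 * DATA_RATE / 1e12

print(f"FP64/FP32: {fp64_tflops:.1f} TFLOPS")   # ~22.6
print(f"FP16/BF16: {fp16_tflops:.1f} TFLOPS")   # ~181.0
print(f"Bandwidth: {bandwidth_tbs:.2f} TB/s")   # ~1.64
```

The computed 22.6 TFLOPS, 181 TFLOPS, and ~1.64 TB/s line up with the spec list above.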

⚡ Why Choose Instinct MI210?

HPC & AI leader in FP64

Delivers roughly 2.3× higher vector FP64 throughput per watt versus the NVIDIA A100 PCIe.
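That ratio follows directly from the datasheet numbers. A quick hedged check, assuming the A100 80 GB PCIe figures of 9.7 TFLOPS vector FP64 at a 300 W TDP (those A100 figures are not stated on this page):

```python
# Hedged sanity check of the FP64-per-watt claim.
# MI210 figures come from the spec list above; the A100 PCIe figures
# (9.7 TFLOPS vector FP64, 300 W TDP) are assumed from NVIDIA's datasheet.

mi210_fp64_per_watt = 22.6 / 300   # TFLOPS per watt
a100_fp64_per_watt = 9.7 / 300

ratio = mi210_fp64_per_watt / a100_fp64_per_watt
print(f"{ratio:.1f}x")  # ~2.3x
```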

Massive HBM & I/O density

64 GB of HBM2e and 1.6 TB/s of memory bandwidth in a single PCIe card, a combination few mainstream servers can match.

Scale with peer links

Build high-bandwidth GPU hives over Infinity Fabric bridges, with up to 300 GB/s of aggregate GPU-to-GPU bandwidth.

Energy-efficient dual-slot design

A 300 W TDP with passive cooling makes it ideal for dense rack deployments.

Open compute stack

Works seamlessly with ROCm, optimized framework containers, and AMD Infinity Hub tooling.

🎯 Ideal Use Cases

🔬 Scientific & Engineering HPC

Ideal for FP64-heavy workloads such as CFD, molecular dynamics, and climate modeling: fast, efficient, and dense.

🤖 AI Training & Mixed‑Precision Compute

Supports FP32, FP16, BF16, and INT8; great for mid-sized transformer fine-tuning, language models, and reinforcement learning.

💾 Large In-Memory Compute & Simulations

Ideal for database caching, graph analytics, and processing large datasets in memory.

🧪 GPU Hive Acceleration

Up to 4 MI210s peer-linked for high-bandwidth multi-GPU training and compute bursts.

☁️ Private Cloud & Cloud Repatriation

Replace cloud GPU nodes with powerful, cost-effective bare metal featuring ECC and open-stack software.

Price    CPU          RAM      Storage    Traffic     Location/Setup
0.00     1 vCore      1 GB     20 GB      500 GB      NL
0.00     2 vCores     2 GB     40 GB      500 GB      NL
0.00     4 vCores     4 GB     80 GB      500 GB      NL
0.00     8 vCores     8 GB     160 GB     1,000 GB    NL
0.00     16 vCores    16 GB    320 GB     1,000 GB    NL

Deep Dive & FAQs

Can I run multiple MI210s in one server?

Typically 2–4 in PCIe racks, with peer linking for high-bandwidth compute clusters.

Is GPU-to-GPU communication faster than PCIe?

Yes; dedicated Infinity Fabric peer links deliver up to 300 GB/s between GPUs, far exceeding PCIe capabilities.

Is the software ecosystem production-ready?

Absolutely; ROCm 5 and AMD Infinity Hub offer optimized frameworks, container images, and HPC tooling.

Is 64 GB of HBM2e enough for demanding workloads?

Yes; it handles large models, graph datasets, and full double-precision workflows without compromising throughput.

Does the MI210 need special cooling?

No; the dual-slot design and 300 W TDP work ideally in airflow-optimized rack servers designed for HPC.

Not sure exactly what you need?
No problem! Our talented engineers are here to help!

We will consult, architect, migrate, manage and do whatever it takes to help your business grow and succeed.

Get in touch today!
