NVIDIA A30: Versatile AI & HPC Power with 24 GB HBM2, 933 GB/s Bandwidth & MIG Partitioning, Ideal for Efficient Multi-Tenant & Scalable Workloads
AI + HPC Hybrid Workhorse
Ideal for mainstream model training (BERT, transfer learning) and double-precision HPC workloads.
Industry-Leading Efficiency
Achieves up to 10x higher performance than the T4 on TF32 models, with a further 2x boost from mixed precision.
Secure & Scalable Virtualization
Leverage MIG to securely partition one physical GPU into up to four isolated instances, each with its own memory and compute, perfect for multi-tenant or varied application demands.
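Once an A30 is partitioned, each MIG slice appears as its own device with a `MIG-…` UUID in `nvidia-smi -L`, and a workload can be pinned to a slice via `CUDA_VISIBLE_DEVICES`. The sketch below parses such output to enumerate slices; the sample output and UUIDs are illustrative placeholders, not real device IDs.

```python
import re

# Illustrative `nvidia-smi -L` output for an A30 split into four 1g.6gb
# MIG instances. UUIDs are placeholders, not real device identifiers.
SAMPLE_SMI_L = """\
GPU 0: NVIDIA A30 (UUID: GPU-11111111-2222-3333-4444-555555555555)
  MIG 1g.6gb      Device  0: (UUID: MIG-aaaa0000-0000-0000-0000-000000000000)
  MIG 1g.6gb      Device  1: (UUID: MIG-aaaa0000-0000-0000-0000-000000000001)
  MIG 1g.6gb      Device  2: (UUID: MIG-aaaa0000-0000-0000-0000-000000000002)
  MIG 1g.6gb      Device  3: (UUID: MIG-aaaa0000-0000-0000-0000-000000000003)
"""

def list_mig_slices(smi_output: str) -> list[str]:
    """Return the MIG device UUIDs found in `nvidia-smi -L` output.

    Each UUID can be passed to a container or process through
    CUDA_VISIBLE_DEVICES to confine that workload to one slice.
    """
    return re.findall(r"UUID:\s*(MIG-[0-9a-f-]+)\)", smi_output)

slices = list_mig_slices(SAMPLE_SMI_L)
print(len(slices))  # 4 isolated slices, one per tenant or workload
```

Assigning one UUID per tenant gives each workload hard memory and fault isolation without sharing CUDA contexts.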
Energy Savvy Deployment
With 165 W TDP and passive cooling, it's perfect for efficient, dense rack configurations.
🧠 AI Training & Finetuning
Speed up BERT-Large pre-training and fine-tuning with TF32 acceleration.
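TF32 speeds up training because it keeps float32's 8-bit exponent (so the dynamic range of FP32) while carrying only 10 explicit mantissa bits instead of 23. This didactic sketch, which truncates a float32 bit pattern rather than reproducing the hardware's actual rounding, shows the precision trade-off:

```python
import struct

def to_tf32(x: float) -> float:
    """Reduce a value to TF32-like precision (10 explicit mantissa bits
    instead of float32's 23) by zeroing the 13 low mantissa bits.
    A didactic sketch of the format, not how the hardware rounds."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)  # drop the 13 low-order mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(to_tf32(1.0))  # exactly representable, unchanged: 1.0
print(to_tf32(0.1))  # loses low-order bits relative to float32
```

For most training workloads this small precision loss is invisible in final accuracy, which is why TF32 can be enabled transparently for FP32 models.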
🤖 Large-Scale Inference
Deploy multi-model inference sets using MIG or NVLink clusters.
📌 HPC & Simulation Workloads
Support FP64 workflows, CFD, weather modeling, scientific computing.
📊 Data Analytics & ETL Pipelines
Boost Spark or RAPIDS-enabled data processing with accelerated GPU cores.
☁️ Cloud Repatriation & Multi-Tenant AI
Replace pay-as-you-go cloud GPU instances with dedicated hardware, maximize utilization through partitioning, and cut costs.
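Whether repatriation pays off depends on utilization: dedicated hardware is a fixed monthly cost, while on-demand cloud billing scales with hours used. The sketch below makes that break-even explicit; all prices are hypothetical placeholders, not quotes from this provider or any cloud.

```python
# Back-of-envelope cloud-vs-dedicated cost comparison.
# All rates are hypothetical; substitute your actual quotes.
CLOUD_GPU_HOURLY_EUR = 1.10      # assumed on-demand cloud GPU rate
DEDICATED_MONTHLY_EUR = 450.00   # assumed dedicated GPU server rate
HOURS_PER_MONTH = 730

def monthly_savings(utilization: float) -> float:
    """EUR saved per month by moving off the cloud.

    `utilization` is the fraction of the month the cloud instance
    would have been running (1.0 = always on).
    """
    cloud_cost = CLOUD_GPU_HOURLY_EUR * HOURS_PER_MONTH * utilization
    return cloud_cost - DEDICATED_MONTHLY_EUR

# Always-on workload: 1.10 * 730 = 803.00 cloud vs 450.00 dedicated.
print(round(monthly_savings(1.0), 2))  # 353.0
```

Under these assumed rates, an always-on workload saves money on dedicated hardware, while a workload running under roughly 56% of the month would not; MIG partitioning pushes the break-even lower by letting several tenants share one card's fixed cost.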
No Setup Fee
| vCores | RAM | Storage | Traffic | Location | Price/mo. | |
|---|---|---|---|---|---|---|
| 1 vCore | 1 GB | 20 GB | 500 GB | NL | €5 | Order Now |
| 2 vCores | 2 GB | 40 GB | 500 GB | NL | €10 | Order Now |
| 4 vCores | 4 GB | 80 GB | 500 GB | NL | €20 | Order Now |
| 8 vCores | 8 GB | 160 GB | 1000 GB | NL | €40 | Order Now |
| 16 vCores | 16 GB | 320 GB | 1000 GB | NL | €80 | Order Now |
Deep Dive & FAQs
Q: How many MIG instances does the A30 support?
A: Up to 4 isolated A30 GPU slices, each with dedicated memory and cores, great for concurrent workloads.

Q: Can multiple A30s be combined?
A: Yes, you can use NVLink-bridged pairs for combined GPU memory and faster interconnects.

Q: Is it suitable for dense rack deployments?
A: Absolutely, the passive cooling architecture supports high-density deployment in data center environments.

Q: Can one card serve multiple tenants?
A: Yes, ideal for DevOps, multi-model serving, or isolated inference deployments within one GPU.

Q: What software is supported?
A: Verified support for CUDA 11+, TensorRT, cuDNN, RAPIDS, HPC libraries, MLPerf benchmarks, and enterprise frameworks.
Not sure exactly what you need?
No problem! Our talented engineers are here to help!
We will consult, architect, migrate, manage, and do whatever it takes to help your business grow and succeed.