GH200 vs MI300X


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

MI300X has 96 GB more VRAM (192 GB vs 96 GB), making it better suited for large models and long context windows. For compute-bound workloads such as training, it delivers roughly 1.3× the FP16 throughput (1307 vs 990 TFLOPS). At $1.99/hr versus $2.29/hr, it is also the more cost-efficient choice for inference, and it supports a broader range of models from this catalog (659 vs 637).

Specifications

                      GH200        MI300X
VRAM                  96 GB        192 GB
VRAM Type             HBM3         HBM3
Memory Bandwidth      4.0 TB/s     5.3 TB/s
FP16 Performance      990 TFLOPS   1307 TFLOPS
Manufacturer          NVIDIA       AMD
FP8 Support           Yes          Yes
FP4 Support           No           No

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

                            GH200     MI300X
$/hr (cheapest)             $2.29     $1.99 ✓ best
$/TFLOP (compute value)     $0.0023   $0.0015 ✓ best
$/GB VRAM (memory value)    $0.0239   $0.0104 ✓ best
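The value metrics above are simple ratios of the cheapest hourly price to the spec figures. A minimal sketch of that computation, using the numbers from the tables (the dict layout here is illustrative, not any real API):

```python
# Price/performance metrics: $/TFLOP-hour and $/GB-hour.
# Spec and price values are taken from the comparison tables above.
gpus = {
    "GH200":  {"price_hr": 2.29, "fp16_tflops": 990,  "vram_gb": 96},
    "MI300X": {"price_hr": 1.99, "fp16_tflops": 1307, "vram_gb": 192},
}

for name, g in gpus.items():
    per_tflop = g["price_hr"] / g["fp16_tflops"]  # compute value
    per_gb = g["price_hr"] / g["vram_gb"]         # memory value
    print(f"{name}: ${per_tflop:.4f}/TFLOP, ${per_gb:.4f}/GB")
```

Lower is better on both metrics; MI300X wins both because it is cheaper per hour while offering more TFLOPS and more VRAM.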

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

GH200

Provider   On-demand   Spot
Lambda     $2.29/hr    —

MI300X

Provider   On-demand   Spot
RunPod     $1.99/hr    $1.49/hr

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

GH200 (637 models)

MI300X (659 models)
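The catalog's exact fitting logic isn't documented here, but a common rule of thumb is that a model's weights need roughly (parameter count × bytes per weight × a small overhead factor) of VRAM. A hedged sketch of such a check, with assumed overhead and precision sizes:

```python
# Rough "does this model fit in VRAM?" check.
# Assumption: needed VRAM ≈ params × bytes-per-weight × 1.2 overhead
# (for activations/KV cache); the catalog's real rule may differ.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "int4": 0.5}

def fits(params_b: float, precision: str, vram_gb: float,
         overhead: float = 1.2) -> bool:
    """Return True if a params_b-billion-parameter model at the given
    precision fits in vram_gb gigabytes under the assumed rule."""
    needed_gb = params_b * BYTES_PER_PARAM[precision] * overhead
    return needed_gb <= vram_gb

# A 70B model in fp16 needs ~168 GB under this rule:
print(fits(70, "fp16", 96))   # GH200 (96 GB)  -> False
print(fits(70, "fp16", 192))  # MI300X (192 GB) -> True
```

This is why MI300X's larger VRAM translates into more compatible models at higher precisions, while quantized (fp8/int4) variants fit on both.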


Pricing data refreshed hourly · Last updated April 11, 2026