A100 80GB vs H200 SXM


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

H200 SXM has 61 GB more VRAM, making it better suited for large models and long context windows. For compute-bound workloads like training, H200 SXM delivers 3.2× higher FP16 throughput. At $1.39/hr vs $2.35/hr, A100 80GB is the more cost-efficient choice for inference. H200 SXM supports a broader range of models (657 vs 636 from this catalog), giving more flexibility.
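
The verdict figures follow directly from the spec and pricing tables below. A minimal sketch of the arithmetic, using only the numbers listed on this page (nothing here is independently measured):

```python
# Verdict arithmetic, with all inputs copied from the tables on this page.
a100 = {"vram_gb": 80, "fp16_tflops": 312, "usd_per_hr": 1.39}
h200 = {"vram_gb": 141, "fp16_tflops": 990, "usd_per_hr": 2.35}

vram_delta = h200["vram_gb"] - a100["vram_gb"]          # 61 GB more VRAM on the H200
fp16_ratio = h200["fp16_tflops"] / a100["fp16_tflops"]  # ~3.2x higher FP16 throughput

print(f"H200 SXM has {vram_delta} GB more VRAM")
print(f"H200 SXM delivers {fp16_ratio:.1f}x the FP16 throughput")
```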

Specifications

| | A100 80GB | H200 SXM |
|---|---|---|
| VRAM | 80 GB | 141 GB |
| VRAM Type | HBM2e | HBM3e |
| Memory Bandwidth | 2.0 TB/s | 4.8 TB/s |
| FP16 Performance | 312 TFLOPS | 990 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | No | Yes |
| FP4 Support | No | No |

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

| | A100 80GB | H200 SXM |
|---|---|---|
| $/hr (cheapest) | $1.39 ✓ best | $2.35 |
| $/TFLOP (compute value) | $0.0045 | $0.0024 ✓ best |
| $/GB VRAM (memory value) | $0.0174 | $0.0167 ✓ best |
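
The value metrics above are simple ratios of the cheapest hourly rate to the FP16 throughput and VRAM capacity. A short sketch of that calculation, using the rates and specs from this page:

```python
# Price/performance metrics: hourly cost per FP16 TFLOP and per GB of VRAM,
# based on the cheapest single-GPU on-demand rates listed on this page.
gpus = {
    "A100 80GB": {"usd_per_hr": 1.39, "fp16_tflops": 312, "vram_gb": 80},
    "H200 SXM":  {"usd_per_hr": 2.35, "fp16_tflops": 990, "vram_gb": 141},
}

for name, g in gpus.items():
    usd_per_tflop = g["usd_per_hr"] / g["fp16_tflops"]  # compute value (lower is better)
    usd_per_gb = g["usd_per_hr"] / g["vram_gb"]         # memory value (lower is better)
    print(f"{name}: ${usd_per_tflop:.4f}/TFLOP, ${usd_per_gb:.4f}/GB")
```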

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

A100 80GB

| Provider | On-demand | Spot |
|---|---|---|
| RunPod | $1.39/hr | $0.82/hr |
| Google Cloud | $1.85/hr | $1.47/hr |
| Microsoft Azure | $3.67/hr | $0.40/hr |

H200 SXM

| Provider | On-demand | Spot |
|---|---|---|
| Vast.ai | $2.35/hr | n/a |
| Nebius | $3.50/hr | $1.45/hr |
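
The spot columns imply large but provider-specific discounts. A small sketch of that discount arithmetic, using the rates listed above (spot prices fluctuate, so treat these as a snapshot):

```python
# Spot discount relative to on-demand, per provider, from the tables above.
rates = {
    "RunPod (A100 80GB)":          (1.39, 0.82),
    "Google Cloud (A100 80GB)":    (1.85, 1.47),
    "Microsoft Azure (A100 80GB)": (3.67, 0.40),
    "Nebius (H200 SXM)":           (3.50, 1.45),
}

for provider, (on_demand, spot) in rates.items():
    discount = 1 - spot / on_demand
    print(f"{provider}: spot is {discount:.0%} below on-demand")
```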

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

A100 80GB (636 models)

H200 SXM (657 models)
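
Whether a model "fits" comes down to its weight footprint at a given precision versus the GPU's VRAM. A hedged sketch of that check follows; the bytes-per-parameter values and the 20% overhead factor are illustrative assumptions, not the catalog's actual sizing rules:

```python
# Rough VRAM fit check. The overhead factor and per-precision byte counts are
# assumptions for illustration; real headroom depends on KV cache, batch size,
# and context length.
BYTES_PER_PARAM = {"fp16": 2, "fp8": 1, "int4": 0.5}  # weights only
OVERHEAD = 1.2  # allowance for KV cache, activations, CUDA context

def fits(params_billions: float, precision: str, vram_gb: float) -> bool:
    """Return True if the model's weights plus overhead fit in VRAM."""
    weight_gb = params_billions * BYTES_PER_PARAM[precision]
    return weight_gb * OVERHEAD <= vram_gb

# Example: a 70B model in FP16 needs ~140 GB for weights alone.
print(fits(70, "fp16", 80))   # A100 80GB: False
print(fits(70, "fp16", 141))  # H200 SXM: False with this overhead assumption
print(fits(70, "fp8", 141))   # H200 SXM in FP8 (which it supports): True
```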


Pricing data refreshed hourly · Last updated April 11, 2026