H100 NVL vs H200 SXM


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

H200 SXM has 47 GB more VRAM (141 GB vs 94 GB), making it better suited for large models and long context windows. For compute-bound workloads such as training, H200 SXM delivers roughly 1.2× higher FP16 throughput. At $1.80/hr versus $2.35/hr, H100 NVL is the more cost-efficient choice for inference. H200 SXM also fits a broader range of models from this catalog (657 vs 637), giving it more deployment flexibility.
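
To make the VRAM gap concrete, here is a rough back-of-the-envelope sketch of how the extra 47 GB translates into context-length headroom. The model shape used (70B parameters served in FP8, 80 layers, 8 KV heads, head dim 128, FP16 KV cache) is a hypothetical example, not taken from this catalog.

```python
# Rough single-GPU estimate: weight memory plus per-token KV cache.
# All model-shape numbers below are hypothetical, not from the catalog.

def weights_gb(params_b: float, bytes_per_param: float) -> float:
    """Weight memory in GB for params_b billion parameters."""
    return params_b * 1e9 * bytes_per_param / 1e9

def kv_cache_gb_per_token(layers: int, kv_heads: int, head_dim: int,
                          bytes_per_elem: float) -> float:
    """KV cache per token: 2 (K and V) * layers * kv_heads * head_dim."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem / 1e9

# Hypothetical 70B model served in FP8 (1 byte/param) with an FP16 KV cache.
weights = weights_gb(70, 1.0)                          # ~70 GB of weights
kv_per_tok = kv_cache_gb_per_token(80, 8, 128, 2.0)    # ~0.00033 GB per token

for vram, gpu in [(94, "H100 NVL"), (141, "H200 SXM")]:
    headroom = vram - weights
    max_ctx = int(headroom / kv_per_tok)
    print(f"{gpu}: {headroom:.0f} GB left after weights -> ~{max_ctx:,} tokens of KV cache")
```

Under these assumptions the H200 SXM holds roughly three times the single-sequence context of the H100 NVL for the same model.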

Specifications

| Spec | H100 NVL | H200 SXM |
|---|---|---|
| VRAM | 94 GB | 141 GB |
| VRAM Type | HBM3 | HBM3e |
| Memory Bandwidth | 3.9 TB/s | 4.8 TB/s |
| FP16 Performance | 835 TFLOPS | 990 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | Yes | Yes |
| FP4 Support | No | No |

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

| Metric | H100 NVL | H200 SXM |
|---|---|---|
| $/hr (cheapest) | $1.80 ✓ best | $2.35 |
| $/TFLOP (compute value) | $0.0022 ✓ best | $0.0024 |
| $/GB VRAM (memory value) | $0.0191 | $0.0167 ✓ best |
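
The two value metrics can be reproduced directly from the spec table and the cheapest on-demand rate: $/TFLOP is the hourly price divided by FP16 TFLOPS, and $/GB is the hourly price divided by VRAM. A minimal sketch:

```python
# Reproduce the price/performance figures above from the spec table
# and the cheapest on-demand price per hour.
gpus = {
    "H100 NVL": {"price_hr": 1.80, "fp16_tflops": 835, "vram_gb": 94},
    "H200 SXM": {"price_hr": 2.35, "fp16_tflops": 990, "vram_gb": 141},
}

for name, g in gpus.items():
    per_tflop = g["price_hr"] / g["fp16_tflops"]   # compute value
    per_gb = g["price_hr"] / g["vram_gb"]          # memory value
    print(f"{name}: ${per_tflop:.4f}/TFLOP-hr, ${per_gb:.4f}/GB-hr")
```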

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

H100 NVL

| Provider | On-demand | Spot |
|---|---|---|
| Vast.ai | $1.80/hr | |
| RunPod | $2.39/hr | $1.25/hr |
| Nebius | $2.95/hr | $1.25/hr |
| Lambda | $3.29/hr | |
| Google Cloud | $4.20/hr | $1.14/hr |
| Amazon Web Services | $6.88/hr | $2.81/hr |
| Microsoft Azure | $6.98/hr | $6.98/hr |

H200 SXM

| Provider | On-demand | Spot |
|---|---|---|
| Vast.ai | $2.35/hr | |
| Nebius | $3.50/hr | $1.45/hr |

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

H100 NVL (637 models)

H200 SXM (657 models)
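
The counts above come from checking each catalog model against the GPU's VRAM at its required precision. A minimal sketch of how such a fit check might work, assuming weight memory dominates and a 10% allowance for activations and runtime overhead (both assumptions, not details from this page):

```python
# Hypothetical "does this model fit?" check: weights at the required precision
# plus a 10% overhead allowance must fit within the GPU's VRAM.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "int8": 1.0, "int4": 0.5}
OVERHEAD = 1.10  # assumed 10% headroom for activations/framework overhead

def fits(params_b: float, precision: str, vram_gb: float) -> bool:
    """True if a params_b-billion-parameter model fits in vram_gb at `precision`."""
    need_gb = params_b * 1e9 * BYTES_PER_PARAM[precision] * OVERHEAD / 1e9
    return need_gb <= vram_gb

# Example: a hypothetical 120B-parameter model at FP8 needs ~132 GB,
# so it fits the H200 SXM's 141 GB but not the H100 NVL's 94 GB.
print(fits(120, "fp8", 94), fits(120, "fp8", 141))   # False True
```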


Pricing data refreshed hourly · Last updated April 11, 2026