Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.
Verdict
H200 SXM offers 47 GB more VRAM (141 GB vs 94 GB), making it better suited for large models and long context windows. For compute-bound workloads such as training, the H200 SXM delivers roughly 1.2× higher FP16 throughput (990 vs 835 TFLOPS). At $1.80/hr versus $2.35/hr, the H100 NVL is the more cost-efficient choice for inference. The H200 SXM also supports a broader range of models from this catalog (657 vs 637), giving more flexibility.
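As a quick sketch, the headline figures above follow directly from the specification values in the tables below (variable names here are illustrative only):

```python
# Derivations of the verdict figures from the specification values below.
h200_vram_gb, h100_vram_gb = 141, 94
h200_fp16_tflops, h100_fp16_tflops = 990, 835

print(h200_vram_gb - h100_vram_gb)                    # 47 GB extra VRAM on the H200 SXM
print(round(h200_fp16_tflops / h100_fp16_tflops, 2))  # ~1.19x FP16 throughput advantage
```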
Specifications
| | H100 NVL | H200 SXM |
|---|---|---|
| VRAM | 94 GB | 141 GB |
| VRAM Type | HBM3 | HBM3e |
| Memory Bandwidth | 3.9 TB/s | 4.8 TB/s |
| FP16 Performance | 835 TFLOPS | 990 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | Yes | Yes |
| FP4 Support | No | No |
Price / Performance
Based on the cheapest single-GPU on-demand pricing. A lower $/TFLOP means better compute value; a lower $/GB means better memory value. A worked calculation follows the table.
| | H100 NVL | H200 SXM |
|---|---|---|
| $/hr (cheapest) | $1.80 ✓ best | $2.35 |
| $/TFLOP (compute value) | $0.0022 ✓ best | $0.0024 |
| $/GB VRAM (memory value) | $0.0191 | $0.0167 ✓ best |
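A minimal sketch of how these value metrics are computed, assuming the cheapest on-demand hourly rates and the FP16 figures from the specifications table:

```python
# Cheapest on-demand $/hr divided by throughput and by VRAM gives the value metrics.
gpus = {
    "H100 NVL": {"usd_per_hr": 1.80, "fp16_tflops": 835, "vram_gb": 94},
    "H200 SXM": {"usd_per_hr": 2.35, "fp16_tflops": 990, "vram_gb": 141},
}

for name, g in gpus.items():
    usd_per_tflop = g["usd_per_hr"] / g["fp16_tflops"]  # compute value: lower is better
    usd_per_gb = g["usd_per_hr"] / g["vram_gb"]         # memory value: lower is better
    print(f"{name}: ${usd_per_tflop:.4f}/TFLOP, ${usd_per_gb:.4f}/GB")
```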
Cloud Pricing
Cheapest on-demand price per provider (single GPU).
Model Compatibility
Models from the catalog that fit within each GPU's VRAM, grouped by required precision; a rough fit check is sketched below the counts.
H100 NVL (637 models)
H200 SXM (657 models)
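The catalog's exact fit logic is not shown on this page; the sketch below assumes a common rule of thumb, weights-only memory (parameter count × bytes per element) plus a fixed safety margin for KV cache and framework overhead. The 20% margin and the function name are illustrative, not the catalog's actual rule:

```python
# Rough single-GPU fit check: weights at the required precision plus a safety
# margin for KV cache, activations, and framework overhead must fit in VRAM.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "bf16": 2, "fp8": 1, "int4": 0.5}

def fits_on_gpu(params_billion: float, precision: str, vram_gb: float,
                overhead: float = 1.2) -> bool:
    """Return True if the model's weights (plus a ~20% margin) fit in VRAM."""
    weight_gb = params_billion * BYTES_PER_PARAM[precision]  # 1B params * N bytes ≈ N GB
    return weight_gb * overhead <= vram_gb

# Example: a 70B-parameter model needs ~140 GB of weights in FP16 but ~70 GB in FP8.
print(fits_on_gpu(70, "fp16", 141))  # False: ~140 GB weights + margin exceeds 141 GB
print(fits_on_gpu(70, "fp8", 94))    # True: ~70 GB weights + margin fits in 94 GB
```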
Pricing data refreshed hourly · Last updated April 11, 2026