Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.
Verdict
B100 has 96 GB more VRAM (192 GB vs 96 GB), making it better suited to large models and long context windows. For compute-bound workloads such as training, B100 delivers roughly 1.8× the FP16 throughput (1750 vs 990 TFLOPS). B100 also fits a broader range of models from this catalog (659 vs 637), giving more deployment flexibility.
Specifications
| Spec | B100 | GH200 |
|---|---|---|
| VRAM | 192 GB | 96 GB |
| VRAM Type | HBM3e | HBM3 |
| Memory Bandwidth | 8.0 TB/s | 4.0 TB/s |
| FP16 Performance | 1750 TFLOPS | 990 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | Yes | Yes |
| FP4 Support | Yes | No |
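The verdict's headline numbers follow directly from these specs. A minimal sketch, with the values hard-coded from the table above:

```python
# Spec values copied from the comparison table (not queried from any API).
b100 = {"vram_gb": 192, "fp16_tflops": 1750, "bw_tb_s": 8.0}
gh200 = {"vram_gb": 96, "fp16_tflops": 990, "bw_tb_s": 4.0}

# VRAM advantage: 192 - 96 = 96 GB more on the B100.
vram_advantage = b100["vram_gb"] - gh200["vram_gb"]

# FP16 throughput ratio: 1750 / 990 ≈ 1.77, reported as 1.8x.
fp16_ratio = b100["fp16_tflops"] / gh200["fp16_tflops"]

print(f"VRAM advantage: {vram_advantage} GB")        # 96 GB
print(f"FP16 throughput ratio: {fp16_ratio:.1f}x")   # 1.8x
```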
Price / Performance
Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.
| Metric | B100 | GH200 |
|---|---|---|
| $/hr (cheapest) | — | $2.29 |
| $/TFLOP (compute value) | — | $0.0023 |
| $/GB VRAM (memory value) | — | $0.0239 |
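The GH200 figures above are derived by dividing its cheapest on-demand hourly rate by its FP16 throughput and VRAM capacity; B100 shows "—" because no cloud pricing is available. A minimal sketch of that arithmetic, using the tabled values:

```python
# Derive GH200 price/performance from its cheapest on-demand rate.
price_per_hr = 2.29   # $/hr, Lambda on-demand (from the pricing table below)
fp16_tflops = 990     # from the spec table
vram_gb = 96          # from the spec table

per_tflop = price_per_hr / fp16_tflops   # compute value: $/TFLOP per hour
per_gb = price_per_hr / vram_gb          # memory value:  $/GB VRAM per hour

print(f"$/TFLOP: {per_tflop:.4f}")   # 0.0023
print(f"$/GB:    {per_gb:.4f}")      # 0.0239
```

Lower is better on both metrics; they normalize hourly cost against what you are actually renting the GPU for (compute or memory capacity).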
Cloud Pricing
Cheapest on-demand price per provider (single GPU).
B100
No cloud pricing available.
GH200
| Provider | On-demand | Spot |
|---|---|---|
| Lambda | $2.29/hr | — |
Model Compatibility
Models from the catalog that fit on each GPU, grouped by required precision.
B100 (659 models)
GH200 (637 models)
Pricing data refreshed hourly · Last updated April 11, 2026