Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.
## Verdict
For compute-bound workloads such as training, the P100 delivers about 1.2× the FP16 throughput of the A16 (21 vs. 18 TFLOPS).
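The 1.2× figure is just the ratio of the FP16 numbers in the specifications table; a minimal check:

```python
# Throughput ratio from the spec table's FP16 figures.
a16_fp16 = 18.0   # A16 FP16 TFLOPS
p100_fp16 = 21.0  # P100 FP16 TFLOPS

ratio = p100_fp16 / a16_fp16
print(round(ratio, 1))  # 1.2
```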
## Specifications
| Specification | A16 | P100 |
|---|---|---|
| VRAM | 16 GB | 16 GB |
| VRAM Type | GDDR6 | HBM2 |
| Memory Bandwidth | 0.4 TB/s | 0.7 TB/s |
| FP16 Performance | 18 TFLOPS | 21 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | No | No |
| FP4 Support | No | No |
## Price / Performance
Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.
| Metric | A16 | P100 |
|---|---|---|
| $/hr (cheapest) | — | $1.46 |
| $/TFLOP (compute value) | — | $0.0689 |
| $/GB VRAM (memory value) | — | $0.0912 |
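The value metrics above are simple quotients of the cheapest on-demand price and the spec-sheet figures. A minimal sketch reproducing the P100 column (it uses the unrounded 21.2 TFLOPS FP16 peak for the P100; the spec table rounds this to 21):

```python
# Reproduce the P100 value metrics from its price and specs.
price_per_hr = 1.46   # cheapest on-demand $/hr (Google Cloud)
fp16_tflops = 21.2    # P100 FP16 peak; the spec table shows the rounded 21
vram_gb = 16

usd_per_tflop = price_per_hr / fp16_tflops
usd_per_gb = price_per_hr / vram_gb

print(f"${usd_per_tflop:.4f}/TFLOP")  # $0.0689/TFLOP
print(f"${usd_per_gb:.4f}/GB")        # $0.0912/GB
```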
## Cloud Pricing
Cheapest on-demand price per provider (single GPU).
### A16
No cloud pricing available.
### P100
| Provider | On-demand | Spot | Rent |
|---|---|---|---|
| Google Cloud | $1.46/hr | $0.14/hr | — |
## Model Compatibility
Models from the catalog that fit on each GPU, grouped by required precision.
### A16 (502 models)
### P100 (502 models)
Pricing data refreshed hourly · Last updated April 11, 2026