A10G vs V100


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

The A10G has 8 GB more VRAM (24 GB vs 16 GB), making it better suited for large models and long context windows. For compute-bound workloads such as training, the A10G delivers roughly 1.3× higher FP16 throughput (36 vs 28 TFLOPS). At $0.18/hr versus $1.01/hr, the V100 is the more cost-efficient choice for inference. The A10G supports a broader range of models from this catalog (566 vs 502), giving more flexibility.

Specifications

| | A10G | V100 |
|---|---|---|
| VRAM | 24 GB | 16 GB |
| VRAM Type | GDDR6 | HBM2 |
| Memory Bandwidth | 0.6 TB/s | 0.9 TB/s |
| FP16 Performance | 36 TFLOPS | 28 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | No | No |
| FP4 Support | No | No |

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

| | A10G | V100 |
|---|---|---|
| $/hr (cheapest) | $1.01 | $0.18 ✓ best |
| $/TFLOP (compute value) | $0.0283 | $0.0064 ✓ best |
| $/GB VRAM (memory value) | $0.0419 | $0.0112 ✓ best |
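The value metrics above follow directly from price and specs. A minimal sketch, using the figures from these tables (small rounding differences versus the displayed values are expected):

```python
def value_metrics(price_hr: float, fp16_tflops: float, vram_gb: float):
    """Return ($/TFLOP, $/GB VRAM) from cheapest on-demand pricing."""
    return price_hr / fp16_tflops, price_hr / vram_gb

# Specs and cheapest on-demand prices taken from the tables on this page.
a10g = value_metrics(1.01, 36, 24)
v100 = value_metrics(0.18, 28, 16)
```

Lower is better on both metrics, so the V100 wins on compute value and memory value here purely because its cheapest on-demand price is so much lower.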

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

A10G

| Provider | On-demand | Spot |
|---|---|---|
| Amazon Web Services | $1.01/hr | $0.36/hr |

V100

| Provider | On-demand | Spot |
|---|---|---|
| Vast.ai | $0.18/hr | — |
| Google Cloud | $2.48/hr | $0.54/hr |
| Microsoft Azure | $3.06/hr | $0.34/hr |
| Amazon Web Services | $3.06/hr | $0.36/hr |

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

A10G (566 models)

V100 (502 models)
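The larger model count for the A10G follows from its extra VRAM: a model only "fits" if its weights (plus working memory) fit on the card. A hypothetical back-of-envelope check, not the catalog's actual fitting rule (the overhead factor is an assumption):

```python
def fits_in_vram(params_billions: float, bytes_per_param: float,
                 vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough fit test: weight size in GB, padded by ~20% (assumed)
    for activations / KV cache, must be at most the card's VRAM."""
    needed_gb = params_billions * bytes_per_param * overhead
    return needed_gb <= vram_gb

# Example: a 7B-parameter model at FP16 (2 bytes per parameter)
# needs ~16.8 GB by this estimate, so it fits the 24 GB A10G
# but not the 16 GB V100.
fits_in_vram(7, 2, 24)  # A10G
fits_in_vram(7, 2, 16)  # V100
```

Quantized precisions (e.g. 1 byte per parameter at INT8) shrink the footprint, which is why grouping by required precision changes which models each GPU can host.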


Pricing data refreshed hourly · Last updated April 11, 2026