A100 40GB vs A10G


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

The A100 40GB has 16 GB more VRAM, making it better suited to large models and long context windows. For compute-bound workloads such as training, it delivers roughly 8.8× the FP16 throughput of the A10G. It also supports a broader range of models from this catalog (612 vs. 566), giving more flexibility.
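The verdict's figures follow directly from the spec table below. A quick sanity check (note the ratio of the rounded table figures comes out just under the quoted 8.8×, which is presumably computed from the A10G's unrounded throughput):

```python
# Sanity-check the verdict's numbers from the spec-table values.
a100_vram_gb, a10g_vram_gb = 40, 24
a100_fp16_tflops, a10g_fp16_tflops = 312, 36  # table figures, A10G rounded

vram_advantage = a100_vram_gb - a10g_vram_gb          # → 16 GB more VRAM
fp16_ratio = a100_fp16_tflops / a10g_fp16_tflops      # → ~8.7× from rounded specs

print(f"{vram_advantage} GB more VRAM, {fp16_ratio:.1f}x FP16 throughput")
```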

Specifications

                      A100 40GB     A10G
VRAM                  40 GB         24 GB
VRAM Type             HBM2          GDDR6
Memory Bandwidth      1.6 TB/s      0.6 TB/s
FP16 Performance      312 TFLOPS    36 TFLOPS
Manufacturer          NVIDIA        NVIDIA
FP8 Support           No            No
FP4 Support           No            No

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

                           A100 40GB         A10G
$/hr (cheapest)            $0.93 ✓ best      $1.01
$/TFLOP (compute value)    $0.0030 ✓ best    $0.0283
$/GB VRAM (memory value)   $0.0232 ✓ best    $0.0419
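The value metrics above are simple divisions of the cheapest hourly rate by the spec-table figures. A minimal sketch, using the prices and specs shown on this page (small differences in the last decimal place can arise from the table's rounded inputs):

```python
# Compute $/TFLOP and $/GB-VRAM value metrics from hourly price and specs.
def value_metrics(price_per_hr: float, fp16_tflops: float, vram_gb: float) -> dict:
    return {
        "usd_per_tflop": price_per_hr / fp16_tflops,
        "usd_per_gb": price_per_hr / vram_gb,
    }

a100 = value_metrics(0.93, 312, 40)  # cheapest A100 40GB on-demand rate
a10g = value_metrics(1.01, 36, 24)   # cheapest A10G on-demand rate

print(f"A100: ${a100['usd_per_tflop']:.4f}/TFLOP, ${a100['usd_per_gb']:.4f}/GB")
print(f"A10G: ${a10g['usd_per_tflop']:.4f}/TFLOP, ${a10g['usd_per_gb']:.4f}/GB")
```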

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

A100 40GB

Provider        On-demand    Spot
Vast.ai         $0.93/hr     —
Google Cloud    $1.61/hr     $1.17/hr
Lambda          $1.99/hr     —

A10G

Provider               On-demand    Spot
Amazon Web Services    $1.01/hr     $0.36/hr

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

A100 40GB (612 models)

A10G (566 models)
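"Fit" here means the model's weights, plus some working overhead, stay within the card's VRAM at the required precision. A rough sketch of such a check; the 20% overhead factor and the 13B example model are illustrative assumptions, not this catalog's exact method:

```python
# Rough check: do a model's weights fit in a GPU's VRAM at a given precision?
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def fits_in_vram(params_billions: float, precision: str, vram_gb: float,
                 overhead: float = 1.2) -> bool:
    """overhead=1.2 reserves ~20% for activations/KV cache (assumption)."""
    weights_gb = params_billions * BYTES_PER_PARAM[precision]  # 1B params ≈ 1 GB per byte/param
    return weights_gb * overhead <= vram_gb

# Example: a 13B model in fp16 needs ~26 GB for weights alone.
print(fits_in_vram(13, "fp16", 40))  # → True  (fits on A100 40GB)
print(fits_in_vram(13, "fp16", 24))  # → False (exceeds A10G's 24 GB)
```

This is why the A100's extra 16 GB translates into a longer list of compatible models: the gap covers exactly the mid-size fp16 models that overflow 24 GB.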


Pricing data refreshed hourly · Last updated April 11, 2026