A100 80GB vs L40


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

The A100 80GB has 32 GB more VRAM (80 GB vs 48 GB), making it better suited to large models and long context windows. For compute-bound workloads such as training, the A100 80GB delivers roughly 1.7× the FP16 throughput (312 vs 181 TFLOPS). At $0.99/hr versus $1.39/hr, the L40 is the cheaper hourly option for inference workloads that fit within its 48 GB. The A100 80GB also fits more of this catalog (636 vs 624 models), giving it more flexibility.

Specifications

| Specification | A100 80GB | L40 |
|---|---|---|
| VRAM | 80 GB | 48 GB |
| VRAM Type | HBM2e | GDDR6 |
| Memory Bandwidth | 2.0 TB/s | 0.9 TB/s |
| FP16 Performance | 312 TFLOPS | 181 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | No | Yes |
| FP4 Support | No | No |
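
The bandwidth and throughput rows can be read together via the roofline model's ridge point (peak FLOPs divided by peak bytes/s): the arithmetic intensity a kernel needs before compute, rather than memory, becomes the bottleneck. A minimal sketch in Python; the roofline framing is our addition, not a figure from this page:

```python
# Roofline ridge point from the spec table: the arithmetic intensity
# (FLOPs per byte of memory traffic) at which a kernel shifts from
# memory-bandwidth-bound to compute-bound.
gpus = {
    "A100 80GB": {"fp16_tflops": 312, "bandwidth_tbs": 2.0},
    "L40": {"fp16_tflops": 181, "bandwidth_tbs": 0.9},
}

for name, spec in gpus.items():
    # TFLOPS divided by TB/s reduces to FLOPs per byte.
    ridge = spec["fp16_tflops"] / spec["bandwidth_tbs"]
    print(f"{name}: ridge point ~ {ridge:.0f} FLOPs/byte")
```

This works out to roughly 156 FLOPs/byte for the A100 80GB and 201 for the L40: the A100's HBM2e bandwidth gives it a relative edge on memory-bound work such as low-batch LLM inference, on top of its higher peak throughput.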

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

| Metric | A100 80GB | L40 |
|---|---|---|
| $/hr (cheapest) | $1.39 | $0.99 ✓ best |
| $/TFLOP (compute value) | $0.0045 ✓ best | $0.0055 |
| $/GB VRAM (memory value) | $0.0174 ✓ best | $0.0206 |
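
Both value metrics follow directly from the cheapest hourly price and the spec table. A short Python check that reproduces the table's numbers:

```python
# Recompute the price/performance metrics from the cheapest on-demand
# price, FP16 throughput, and VRAM. Lower is better for both metrics.
gpus = {
    "A100 80GB": {"price_hr": 1.39, "fp16_tflops": 312, "vram_gb": 80},
    "L40": {"price_hr": 0.99, "fp16_tflops": 181, "vram_gb": 48},
}

for name, g in gpus.items():
    per_tflop = g["price_hr"] / g["fp16_tflops"]  # $/hr per FP16 TFLOP
    per_gb = g["price_hr"] / g["vram_gb"]         # $/hr per GB of VRAM
    print(f"{name}: ${per_tflop:.4f}/TFLOP, ${per_gb:.4f}/GB VRAM")
```

This is why the verdict can favor both GPUs at once: the L40 wins on raw hourly price, while the A100 80GB wins on what each dollar buys in compute and memory.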

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

A100 80GB

| Provider | On-demand | Spot |
|---|---|---|
| RunPod | $1.39/hr | $0.82/hr |
| Google Cloud | $1.85/hr | $1.47/hr |
| Microsoft Azure | $3.67/hr | $0.40/hr |

L40

| Provider | On-demand | Spot |
|---|---|---|
| RunPod | $0.99/hr | $0.50/hr |
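
A quick way to read the spot columns is as a percentage discount off on-demand; spot capacity is preemptible, so a larger discount trades against interruption risk. A small sketch using the prices above:

```python
# Spot discount relative to on-demand, from the pricing tables above.
prices = {  # (on_demand, spot) in $/hr
    ("A100 80GB", "RunPod"): (1.39, 0.82),
    ("A100 80GB", "Google Cloud"): (1.85, 1.47),
    ("A100 80GB", "Microsoft Azure"): (3.67, 0.40),
    ("L40", "RunPod"): (0.99, 0.50),
}

for (gpu, provider), (on_demand, spot) in prices.items():
    discount = 1 - spot / on_demand
    print(f"{gpu} on {provider}: spot is {discount:.0%} below on-demand")
```

The spread is wide: about 21% on Google Cloud versus roughly 89% on Azure for the A100 80GB, so the better deal depends on how tolerant the workload is of preemption.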

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

A100 80GB (636 models)

L40 (624 models)
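
The page does not publish the exact fit rule behind these counts, but a common rule of thumb is that weight memory (parameter count times bytes per parameter at the required precision) plus a margin for KV cache and activations must stay within VRAM. A hypothetical sketch under that assumption; the 20% overhead margin is our guess, not this catalog's rule:

```python
# Hypothetical fit check: assume a model "fits" when its weights plus a
# ~20% margin for KV cache and activations (an assumed overhead, not the
# catalog's published rule) stay within VRAM.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def fits(params_billion: float, precision: str, vram_gb: float,
         overhead: float = 1.2) -> bool:
    weight_gb = params_billion * BYTES_PER_PARAM[precision]
    return weight_gb * overhead <= vram_gb

# Example: a 30B-parameter model at FP16 needs ~60 GB of weights
# (~72 GB with the margin), so it fits on the A100 80GB but not
# in the L40's 48 GB.
for name, vram in [("A100 80GB", 80), ("L40", 48)]:
    print(name, fits(30, "fp16", vram))
```

Under a rule like this, the 12-model gap between the counts corresponds to models whose memory needs at their required precision land between the L40's 48 GB and the A100's 80 GB.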


Pricing data refreshed hourly · Last updated April 11, 2026