A40 vs H100 SXM


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

The H100 SXM has 32 GB more VRAM, making it better suited for large models and long context windows. For compute-bound workloads such as training, the H100 SXM delivers 6.6× higher FP16 throughput (990 vs 150 TFLOPS). At $0.44/hr vs $1.80/hr, the A40 is the more cost-efficient choice for inference. The H100 SXM also supports a broader range of models (636 vs 624 from this catalog), giving more flexibility.

Specifications

| Spec | A40 | H100 SXM |
| --- | --- | --- |
| VRAM | 48 GB | 80 GB |
| VRAM Type | GDDR6 | HBM3 |
| Memory Bandwidth | 0.7 TB/s | 3.4 TB/s |
| FP16 Performance | 150 TFLOPS | 990 TFLOPS |
| Manufacturer | NVIDIA | NVIDIA |
| FP8 Support | No | Yes |
| FP4 Support | No | No |

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

| Metric | A40 | H100 SXM |
| --- | --- | --- |
| $/hr (cheapest) | $0.44 ✓ best | $1.80 |
| $/TFLOP (compute value) | $0.0029 | $0.0018 ✓ best |
| $/GB VRAM (memory value) | $0.0092 ✓ best | $0.0225 |
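The value metrics above follow directly from the spec and pricing tables; a minimal sketch that reproduces them (numbers taken from this page):

```python
# Derive $/TFLOP and $/GB VRAM from cheapest on-demand price and specs.
gpus = {
    "A40":      {"price_hr": 0.44, "fp16_tflops": 150, "vram_gb": 48},
    "H100 SXM": {"price_hr": 1.80, "fp16_tflops": 990, "vram_gb": 80},
}

for name, g in gpus.items():
    per_tflop = g["price_hr"] / g["fp16_tflops"]  # compute value: lower is better
    per_gb = g["price_hr"] / g["vram_gb"]         # memory value: lower is better
    print(f"{name}: ${per_tflop:.4f}/TFLOP, ${per_gb:.4f}/GB")
```

Note that the two metrics disagree here: the H100 SXM wins on $/TFLOP while the A40 wins on $/GB, which is why the verdict splits by workload.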

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

A40

| Provider | On-demand | Spot |
| --- | --- | --- |
| RunPod | $0.44/hr | $0.20/hr |

H100 SXM

| Provider | On-demand | Spot |
| --- | --- | --- |
| Vast.ai | $1.80/hr | — |
| RunPod | $2.39/hr | $1.25/hr |
| Nebius | $2.95/hr | $1.25/hr |
| Lambda | $3.29/hr | — |
| Google Cloud | $4.20/hr | $1.14/hr |
| Amazon Web Services | $6.88/hr | $2.81/hr |
| Microsoft Azure | $6.98/hr | $6.98/hr |

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

A40 (624 models)

H100 SXM (636 models)
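The fit check behind these counts can be approximated by comparing a model's weight footprint to GPU VRAM. A rough sketch, assuming bytes-per-parameter by precision and a hypothetical 1.2× overhead factor for KV cache and activations (not the catalog's actual rule):

```python
# Rough VRAM-fit estimate: weights (params * bytes/param) plus overhead vs VRAM.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1, "int4": 0.5}

def fits(params_b: float, precision: str, vram_gb: float,
         overhead: float = 1.2) -> bool:
    """params_b is the parameter count in billions; 1e9 params * bytes ≈ GB."""
    weights_gb = params_b * BYTES_PER_PARAM[precision]
    return weights_gb * overhead <= vram_gb

# A 70B model at FP16 (~140 GB of weights) fits neither GPU;
# a 13B model at FP16 (~26 GB) fits both the A40 (48 GB) and the H100 (80 GB).
print(fits(70, "fp16", 48), fits(13, "fp16", 48), fits(13, "fp16", 80))
```

The A40's lack of FP8 support (see the spec table) is one reason the H100 SXM's compatible-model count is higher: FP8-only quantizations simply have no valid precision on the A40.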


Pricing data refreshed hourly · Last updated April 11, 2026