L4 vs V100 32GB


Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.

Verdict

The V100 32GB has 8 GB more VRAM, making it better suited to large models and long context windows. For compute-bound workloads such as training, the L4 delivers roughly 4.3× higher FP16 throughput. The V100 32GB also fits a broader range of models from this catalog (577 vs 566), giving it more flexibility.
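
A minimal sketch (Python, with hypothetical variable names) of how those two headline figures follow from the Specifications table below:

```python
# Figures taken from the Specifications table: VRAM in GB, FP16 throughput in TFLOPS.
l4 = {"vram_gb": 24, "fp16_tflops": 121}
v100_32gb = {"vram_gb": 32, "fp16_tflops": 28}

vram_gap_gb = v100_32gb["vram_gb"] - l4["vram_gb"]         # 8 GB in favor of the V100 32GB
fp16_ratio = l4["fp16_tflops"] / v100_32gb["fp16_tflops"]  # ~4.3x in favor of the L4

print(f"V100 32GB VRAM advantage: {vram_gap_gb} GB")
print(f"L4 FP16 throughput advantage: {fp16_ratio:.1f}x")
```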

Specifications

                     L4           V100 32GB
VRAM                 24 GB        32 GB
VRAM Type            GDDR6        HBM2
Memory Bandwidth     0.3 TB/s     0.9 TB/s
FP16 Performance     121 TFLOPS   28 TFLOPS
Manufacturer         NVIDIA       NVIDIA
FP8 Support          Yes          No
FP4 Support          No           No

Price / Performance

Based on cheapest single-GPU on-demand pricing. Lower $/TFLOP = better compute value; lower $/GB = better memory value.

                             L4         V100 32GB
$/hr (cheapest)              $0.39      N/A
$/TFLOP (compute value)      $0.0032    N/A
$/GB VRAM (memory value)     $0.0163    N/A
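
A minimal sketch of how these value metrics are derived, assuming the cheapest L4 on-demand rate from the Cloud Pricing section below and the spec-table figures above (the V100 32GB has no listed price, so its metrics are shown as N/A):

```python
# Cheapest single-GPU on-demand rate for the L4 (RunPod) and its spec-table figures.
l4_price_per_hr = 0.39   # $/hr
l4_fp16_tflops = 121
l4_vram_gb = 24

price_per_tflop = l4_price_per_hr / l4_fp16_tflops  # ~$0.0032 per TFLOP per hour
price_per_gb = l4_price_per_hr / l4_vram_gb         # ~$0.0163 per GB of VRAM per hour

print(f"$/TFLOP (compute value): ${price_per_tflop:.5f}")
print(f"$/GB VRAM (memory value): ${price_per_gb:.5f}")
```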

Cloud Pricing

Cheapest on-demand price per provider (single GPU).

L4

Provider                On-demand   Spot
RunPod                  $0.39/hr    $0.22/hr
Google Cloud            $0.56/hr    $0.16/hr
Amazon Web Services     $0.80/hr    $0.13/hr

V100 32GB

No cloud pricing available.

Model Compatibility

Models from the catalog that fit on each GPU, grouped by required precision.

L4 (566 models)

V100 32GB (577 models)
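
As a rough illustration of what "fit" means here, the sketch below estimates a model's weight footprint from its parameter count and required precision and compares it with GPU VRAM. This is a hypothetical heuristic, not the catalog's actual method; it ignores activations, KV cache, and framework overhead:

```python
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def fits_on_gpu(params_billions: float, precision: str, vram_gb: float,
                overhead: float = 1.2) -> bool:
    """Rough fit check: weight size (params x bytes/param) times a flat overhead
    factor must stay within VRAM. Real requirements are usually higher."""
    weight_gb = params_billions * BYTES_PER_PARAM[precision]  # 1B params at fp16 ~= 2 GB
    return weight_gb * overhead <= vram_gb

# A 13B-parameter model at fp16 needs ~26 GB for weights alone (~31 GB with overhead):
print(fits_on_gpu(13, "fp16", vram_gb=24))  # False: exceeds the L4's 24 GB
print(fits_on_gpu(13, "fp16", vram_gb=32))  # True: fits within the V100 32GB's 32 GB
```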


Pricing data refreshed hourly · Last updated April 11, 2026