Side-by-side GPU comparison: specs, memory, compute performance, and live cloud pricing.
Verdict
The L4 has 6 GB more usable VRAM (24 GB of dedicated GDDR6 versus roughly 18 GB of the M4's 24 GB unified memory that is addressable by the GPU, since the rest is shared with the system), making it better suited for large models and long context windows. For compute-bound workloads such as training, the L4 delivers about 14.4× the FP16 throughput. The L4 also supports a broader range of models from this catalog (566 vs 510), giving more flexibility.
Specifications
| | L4 | M4 (24 GB) |
|---|---|---|
| VRAM | 24 GB | 18 GB |
| VRAM Type | GDDR6 | LPDDR5X |
| Memory Bandwidth | 0.3 TB/s | 0.1 TB/s |
| FP16 Performance | 121 TFLOPS | 8 TFLOPS |
| Manufacturer | NVIDIA | Apple |
| FP8 Support | Yes | No |
| FP4 Support | No | No |
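As a sanity check, the headline ratios in the verdict can be recomputed from the table above. The snippet below is a minimal sketch using the rounded spec values shown here, so the FP16 ratio lands near 15× rather than the quoted 14.4×, which presumably comes from unrounded figures.

```python
# Recompute the verdict ratios from the (rounded) spec values above.
l4 = {"vram_gb": 24, "bandwidth_tb_s": 0.3, "fp16_tflops": 121}
m4 = {"vram_gb": 18, "bandwidth_tb_s": 0.1, "fp16_tflops": 8}

vram_advantage_gb = l4["vram_gb"] - m4["vram_gb"]              # 6 GB
fp16_ratio = l4["fp16_tflops"] / m4["fp16_tflops"]             # ~15x on rounded values
bandwidth_ratio = l4["bandwidth_tb_s"] / m4["bandwidth_tb_s"]  # ~3x

print(f"VRAM advantage: {vram_advantage_gb} GB")
print(f"FP16 throughput ratio: {fp16_ratio:.1f}x")
print(f"Memory bandwidth ratio: {bandwidth_ratio:.1f}x")
```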
Price / Performance
Based on the cheapest single-GPU on-demand price. Lower $/TFLOP means better compute value; lower $/GB of VRAM means better memory value (a worked sketch follows the table).
| | L4 | M4 (24 GB) |
|---|---|---|
| $/hr (cheapest) | $0.39 | — |
| $/TFLOP (compute value) | $0.0032 | — |
| $/GB VRAM (memory value) | $0.0163 | — |
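The value metrics above are simple ratios of the hourly price to the spec values. The sketch below is a minimal illustration assuming the $0.39/hr L4 price and the FP16 and VRAM figures from the specifications table; the M4 column is blank because no on-demand cloud price is listed for it.

```python
# Minimal sketch: derive $/TFLOP and $/GB VRAM from an hourly price.
def value_metrics(price_per_hour: float, fp16_tflops: float, vram_gb: float) -> tuple[float, float]:
    """Return ($/TFLOP, $/GB of VRAM) for a GPU at a given hourly price."""
    return price_per_hour / fp16_tflops, price_per_hour / vram_gb

per_tflop, per_gb = value_metrics(price_per_hour=0.39, fp16_tflops=121, vram_gb=24)
print(f"L4: ${per_tflop:.4f}/TFLOP, ${per_gb:.4f}/GB VRAM")  # ~$0.0032 and ~$0.0163
```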
Cloud Pricing
Cheapest on-demand price per provider (single GPU).
Model Compatibility
Models from the catalog that fit on each GPU, grouped by required precision.
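The catalog's exact fit rule isn't shown on this page; a common first-order check, sketched below with assumed overhead figures, is whether the model's weights at the required precision plus some working overhead fit within the GPU's usable VRAM.

```python
# Hypothetical first-order fit check (the catalog's exact rule isn't shown here):
# a model "fits" if its weight footprint at the required precision, plus ~20%
# overhead for activations and KV cache, stays within usable VRAM.
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "int4": 0.5}

def fits(params_billion: float, precision: str, vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: (weights * overhead) vs. available VRAM, in GB."""
    weight_gb = params_billion * BYTES_PER_PARAM[precision]
    return weight_gb * overhead <= vram_gb

# A 7B model at FP16 (~16.8 GB with overhead) fits both the 24 GB L4 and the 18 GB M4;
# a 13B model at FP16 (~31.2 GB) fits neither and would need a lower precision.
print(fits(7, "fp16", 24), fits(7, "fp16", 18), fits(13, "fp16", 24))  # True True False
```

Precision support also matters: per the specifications table, the L4 supports FP8 while the M4 does not, which affects which precision groups a given model can fall into on each GPU.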
L4 (566 models)
M4 (24 GB) (510 models)
Pricing data refreshed hourly · Last updated April 11, 2026