
Independent GPU cloud pricing, updated daily. 9 providers, 167+ GPUs compared. Free, no signup.

9 GPU Providers · 167 GPU Models · 305 LLM Models · Daily Price Updates

H100 SXM 80GB $1.79/hr (CUDO Compute)

Popular GPUs

All GPUs →
GPU Model       | VRAM  | Providers | From
H100 SXM 80GB   | 80GB  | 7         | $1.79/hr
H200 SXM 141GB  | 141GB | 7         | $2.30/hr
Blackwell B200  | 192GB | 4         | $4.69/hr
L4              | 24GB  | 3         | $0.17/hr
RTX A5000       | 24GB  | 3         | $0.27/hr
A100 80GB       | 80GB  | 2         | $1.29/hr
A100 PCIe 80GB  | 80GB  | 2         | $1.35/hr
A40             | 48GB  | 2         | $0.39/hr
Data verified daily · 9 GPU cloud providers · 167 GPU models · Independent pricing

Why Compare GPU Cloud Pricing?

GPU cloud pricing changes daily. Spot prices fluctuate hourly. The same GPU can vary by 2–3x between providers depending on availability and billing type. As of March 2026, nodepedia tracks real pricing across 9 GPU cloud providers and 167 GPU models so you can find the cheapest option for your workload — try the Cost Calculator to estimate your spend.
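Once prices live in one place, the comparison itself is simple. A minimal sketch with a few illustrative quotes for one GPU (only the CUDO Compute rate comes from the table above; the other provider names and prices are made up for the example):

```python
# Illustrative spot quotes for an H100 SXM 80GB; real prices change daily.
h100_prices = {
    "CUDO Compute": 1.79,  # rate from the table above
    "Provider B": 2.49,    # hypothetical
    "Provider C": 3.90,    # hypothetical
}

cheapest = min(h100_prices, key=h100_prices.get)
spread = max(h100_prices.values()) / min(h100_prices.values())
print(f"cheapest: {cheapest} at ${h100_prices[cheapest]:.2f}/hr ({spread:.1f}x spread)")
```

With these sample numbers the spread is about 2.2x, in line with the 2–3x variation described above.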


How Data Is Collected

An AI agent extracts pricing from provider websites daily. No data is self-reported by providers. Every price is pulled directly from the source, validated against historical patterns, and flagged if it looks anomalous. You get the same prices you'd see if you visited each provider yourself — just all in one place.
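The validation step could be sketched as a rolling-median check. This is an illustrative assumption about how anomaly flagging might work, not nodepedia's actual pipeline; the 50% threshold and function names are invented for the example:

```python
from statistics import median

def is_anomalous(new_price: float, history: list[float],
                 threshold: float = 0.5, min_history: int = 5) -> bool:
    """Flag a scraped price if it deviates from the median of recent
    observations by more than `threshold` (50% by default)."""
    if len(history) < min_history:
        return False  # not enough history to judge
    baseline = median(history)
    return abs(new_price - baseline) / baseline > threshold

# A price that halves overnight gets flagged for review:
history = [1.79, 1.79, 1.85, 1.79, 1.80, 1.79]
print(is_anomalous(0.80, history))  # → True
print(is_anomalous(1.75, history))  # → False
```

A flagged price would be held for review rather than published, so a scraping glitch never shows up as a fake bargain.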

Who Uses nodepedia?

ML engineers comparing cloud options for training runs. Startups evaluating which GPU provider fits their budget. Researchers who need a specific GPU and want to find the lowest price. Anyone renting cloud GPUs who wants to stop overpaying by checking one site instead of a dozen.

Frequently Asked Questions

What is the cheapest GPU cloud provider?
Pricing changes daily and depends on the GPU model, billing type (spot vs on-demand), and availability. There is no single cheapest provider — it varies by workload. Use the Cost Calculator to compare current pricing across all 9 providers tracked by nodepedia.
How much does it cost to rent an H100 GPU?
H100 pricing varies significantly by provider and billing type. Spot instances are cheaper but can be interrupted, while on-demand instances guarantee availability at a premium. Check the H100 pricing page for current rates across all tracked providers.
What GPU do I need to run an LLM locally?
It depends on the model size, quantization level, and whether you need training or inference. A 7B parameter model at Q4 quantization fits on a 6 GB GPU, while a 70B model may need 40+ GB of VRAM. Use the Workload Recommender to match your model to compatible GPUs.
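The rule of thumb above can be written as a quick back-of-the-envelope calculation (the 20% overhead factor for KV cache and activations is an illustrative assumption; real usage depends on context length and runtime):

```python
def vram_needed_gb(params_billion: float, bits_per_weight: int,
                   overhead: float = 1.2) -> float:
    """Rough inference VRAM estimate: weight memory plus ~20% overhead
    for KV cache and activations. Back-of-the-envelope only."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ≈ 1 GB
    return weight_gb * overhead

print(f"{vram_needed_gb(7, 4):.1f} GB")   # 7B @ Q4 → ~4.2 GB, fits a 6 GB GPU
print(f"{vram_needed_gb(70, 4):.1f} GB")  # 70B @ Q4 → ~42 GB, needs 40+ GB
```

The two sample calls reproduce the figures in the answer: a 7B model at Q4 fits comfortably in 6 GB, while a 70B model at the same quantization needs 40+ GB.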
How does nodepedia collect pricing data?
An AI agent visits provider pricing pages daily and extracts current rates automatically. Prices are not self-reported by providers. Every data point is pulled directly from the source, validated against historical patterns, and flagged if anomalous.
Can I compare GPU cloud providers side by side?
Yes. nodepedia offers head-to-head comparisons for every provider pair, covering pricing, GPU availability, and billing options. You can also build custom comparisons with the Comparison Builder tool.
What is the difference between spot and on-demand GPU pricing?
Spot instances use spare GPU capacity at a discount but can be interrupted when demand rises. On-demand instances guarantee availability at a higher price. Spot pricing can be 50–80% cheaper, making it ideal for fault-tolerant workloads like training with checkpoints. The Cost Calculator shows both pricing types for easy comparison.
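To make the trade-off concrete, here is a toy cost comparison for a checkpointed training run. The hourly rates and the 10% re-compute penalty for spot interruptions are illustrative assumptions, not live nodepedia data:

```python
def run_cost(hours: float, rate_per_hr: float,
             interruption_overhead: float = 0.0) -> float:
    """Total cost of a run; interruption_overhead models extra time
    spent re-computing from the last checkpoint after preemptions."""
    return hours * (1 + interruption_overhead) * rate_per_hr

hours = 100
on_demand = run_cost(hours, rate_per_hr=3.50)                         # guaranteed capacity
spot = run_cost(hours, rate_per_hr=1.20, interruption_overhead=0.10)  # ~65% discount
print(f"on-demand: ${on_demand:.2f}, spot: ${spot:.2f}")
```

Even after paying a 10% re-compute penalty, the spot run costs a fraction of the on-demand run, which is why spot is attractive for fault-tolerant workloads.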
Is nodepedia free to use?
Yes. All pricing data, tools, and guides on nodepedia are completely free with no signup required. All data is independently collected — rankings and pricing are never influenced by providers.