Pre-launch status

Publishing begins this quarter.

The Compute Price Index launches with full data, full methodology, and full source transparency. Subscribers receive the inaugural issue the morning it publishes. Until then, this page shows what the Index measures, the panel structure, and the specific providers covered in each tier.

Publication cadence: weekly, every Monday morning
Hardware tracked: 5 GPUs (H100 SXM · H100 PCIe · H200 · B200 · A100)
Provider panel: 19 providers across 5 tiers
Inaugural issue: delivered by email; subscribe to receive the date

What the Index measures.

The Compute Price Index measures the price of AI compute as it is actually offered to a typical buyer in the public market. The Index has two methodologically distinct sections.

The direct rental section measures the on-demand hourly rate published by sixteen major GPU cloud providers across four tiers — hyperscalers, specialized neoclouds, aggregators, and cost-positioned neoclouds. These rates are observed directly on each provider's public pricing page and normalized to per-GPU-per-hour for comparability. This is the measured benchmark. It is the rate against which most other tiers — reserved contracts, committed-use agreements, enterprise deals — are priced.
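The normalization step is simple arithmetic: a published instance-hour price divides across the GPUs in the instance. A minimal sketch of that step — the instance price below is illustrative, not an observed rate:

```python
# Hypothetical sketch of the per-GPU-per-hour normalization step.
# The instance price is illustrative, not an observed provider rate.

def per_gpu_hour(instance_hourly_rate: float, gpus_per_instance: int) -> float:
    """Normalize a published instance-hour price to a per-GPU-hour rate."""
    return instance_hourly_rate / gpus_per_instance

# Example: an 8-GPU H100 instance listed at $98.32/hour (illustrative)
rate = per_gpu_hour(98.32, 8)
print(f"${rate:.2f} per GPU-hour")  # $12.29 per GPU-hour
```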

The inference-implied section covers three major token-priced inference providers. These platforms charge per million tokens rather than per GPU-hour, so the Index publishes a derived per-GPU-hour equivalent using a transparent formula and a stated throughput assumption. This section is labeled separately and never conflated with the measured rates. It exists because token-priced inference is an economically significant slice of the market, and ignoring it would give an incomplete picture of how AI compute is actually priced today.

Why nineteen providers

The panel is sized to be operationally sustainable as a manual weekly collection while still being statistically meaningful and structurally representative. Nineteen is large enough that the median is stable to single-provider movements, and small enough that every data point can be sourced, verified, and quality-checked every week. The panel is reviewed quarterly and updated as the market evolves. The full methodology document describes the procedure in complete detail.
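Median robustness is easy to demonstrate. In the sketch below (illustrative rates, not real panel data, assuming a 16-provider measured panel), one provider quadrupling its price leaves the median unchanged, while the mean moves visibly:

```python
import statistics

# Illustrative 16-provider panel of per-GPU-hour rates (not real data).
rates = [2.10, 2.25, 2.40, 2.49, 2.65, 2.79, 2.99, 3.10,
         3.29, 3.50, 3.79, 3.99, 4.25, 4.76, 5.12, 5.50]

baseline = statistics.median(rates)  # mean of the two middle values

# One provider quadrupling its price leaves the median untouched,
# because the outlier stays on the same side of the midpoint.
spiked = rates[:-1] + [20.00]
assert statistics.median(spiked) == baseline

# By contrast, the mean of the panel shifts by roughly $0.91 on the same move.
print(round(statistics.mean(spiked) - statistics.mean(rates), 2))
```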

What the Index does not include

A growing share of GPU compute is sold through enterprise contracts where pricing is confidential. Major operators including Nscale, IREN, IBM Cloud's enterprise tier, and Fluidstack are economically significant — sometimes operating at hyperscaler scale — but their pricing is not measurable through public sources. Rather than misrepresent confidential quotes as observable rates, the Index acknowledges these operators in editorial coverage but excludes them from the measured calculation. A list of acknowledged non-public providers is published below.

How the Index will be used

For buyers, the Index provides a defensible market reference point for procurement discussions — the basis against which to negotiate reserved contracts, evaluate quotes, and benchmark renewals. For builders, it tracks the input cost that sits underneath every AI business model. For investors, it measures a structural variable that moves the unit economics of every company in the AI value chain. For anyone operating in this market, it answers a question that, until now, has been surprisingly hard to answer with rigor: what did AI compute actually cost this week?

The panel.

19 providers · 5 tiers · Reviewed quarterly
Tier 1 · Measured
Hyperscale
4 providers
AWS · Microsoft Azure · Google Cloud · Oracle Cloud Infrastructure

The four largest cloud platforms. Hyperscalers represent the upper bound of the public-rate market. Their on-demand rates are typically the highest in the panel, reflecting full-stack platform pricing rather than pure GPU rental.

Tier 2 · Measured
Specialized neoclouds
5 providers
CoreWeave · Lambda Labs · Nebius · Crusoe · Hyperstack

Purpose-built AI infrastructure providers. Specialized neoclouds were built around AI workloads and typically offer lower on-demand rates than hyperscalers because they operate with narrower scope and lower platform overhead.

Tier 3 · Measured
Aggregator and marketplace
4 providers
RunPod · Vast.ai · Paperspace · TensorDock

Capacity aggregators and peer marketplaces. Aggregators source capacity from a network of operators rather than running primary infrastructure. Their published rates often establish the lower bound of the publicly quoted market.

Tier 4 · Measured
Cost-positioned neoclouds
3 providers
GMI Cloud · Cudo Compute · Spheron

Specialized providers competing primarily on price. They operate infrastructure similar to Tier 2 providers but position aggressively on headline rates, and they typically offer the most competitive on-demand rates among non-marketplace providers.

Tier 5 · Derived
Inference-implied
3 providers
Together AI Fireworks AI Replicate

Token-priced inference providers. These platforms charge per million tokens rather than per GPU-hour. The Index publishes a derived per-GPU-hour equivalent using a disclosed throughput assumption (Llama 3.1 70B at 1,000 tokens per second on H100 SXM). Methodology and assumptions are stated in every issue. This section is methodologically distinct from the measured tiers above and is never conflated with them.
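The conversion behind the derived equivalent is straightforward once the throughput is fixed. A minimal sketch, using the stated assumption of 1,000 tokens per second and an illustrative per-million-token price (the $0.90 figure is not an observed rate):

```python
# Sketch of the inference-implied derivation described above.
# The per-million-token price is illustrative; the default throughput
# (1,000 tokens/second on H100 SXM) is the assumption stated by the Index.

def implied_gpu_hour_rate(price_per_million_tokens: float,
                          tokens_per_second: float = 1_000.0) -> float:
    """Convert a per-million-token price into a per-GPU-hour equivalent."""
    tokens_per_hour = tokens_per_second * 3600       # tokens one GPU serves per hour
    millions_per_hour = tokens_per_hour / 1_000_000  # 3.6 M tokens/hour at 1,000 tok/s
    return price_per_million_tokens * millions_per_hour

# Illustrative: a provider charging $0.90 per million tokens
print(f"${implied_gpu_hour_rate(0.90):.2f} per GPU-hour equivalent")  # $3.24 per GPU-hour equivalent
```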

Acknowledged · Not measured

Major providers we cover in editorial, not in the Index.

These operators are economically significant in the AI compute market but do not publish on-demand rates suitable for inclusion in a measurement-based Index. We cover them in our editorial analysis using publicly disclosed contract sizes, deployed capacity, and reported revenue — not by treating confidential quotes as observable rates.

Nscale
Why significant: Disclosed Microsoft contract for 200,000 GB300 GPUs. Sovereign AI infrastructure buildout across the UK and Norway.
Why not measured: Enterprise contracts only; no public on-demand rates.

IREN
Why significant: Disclosed 140,000-GPU deployment target by end of 2026. Microsoft contract reportedly valued at $9.7 billion.
Why not measured: Long-term contracts; AI Cloud public availability still ramping.

IBM Cloud
Why significant: Major hyperscaler; began H100 offerings in late 2024 with a substantial enterprise customer base.
Why not measured: Limited public on-demand H100 pricing; primarily enterprise/contract.

Fluidstack
Why significant: Listed in major neocloud market rankings. Active partnerships with hyperscalers for capacity.
Why not measured: Enterprise/contract pricing focus.

Get the Index in your inbox.

Subscribers receive the inaugural Compute Price Index the morning it publishes, and every weekly update thereafter, alongside the original analysis that explains what the numbers mean. Free.