Last week, while preparing this publication for launch, we asked a question we already knew the answer to: what does an NVIDIA H100 SXM cost to rent on Lambda Labs right now? We put the question to the major AI search engines and to the leading comparison blogs. The answers came back: $2.40 per hour, $2.99 per hour, and references to "around $2-3 per hour." Lambda's own pricing page that morning showed $3.99 per hour at the eight-GPU bundle rate, rising to $4.29 per hour for a single-GPU instance.
Every published number we found was wrong. The $2.99 figure sat twenty-five percent below the posted rate; the $2.40 figure, forty percent below it, which means the real bill would land two-thirds higher than the quote. None were obviously wrong: each came from a real-looking source with a clean URL and a recent timestamp. A reasonable person, asking a reasonable question, would have walked away with a number that, if used to negotiate a real procurement contract, would have cost them or earned them tens or hundreds of thousands of dollars depending on which side of the trade they were on.
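The size of each miss depends on which number you treat as the base, so it is worth making the arithmetic explicit. A quick check in Python, using only the figures above:

```python
posted = 3.99          # Lambda's posted eight-GPU bundle rate, USD per GPU-hour
quoted = [2.40, 2.99]  # the rates we found in search results and blogs

for q in quoted:
    below = (posted - q) / posted  # how far the quote sits below the real rate
    above = (posted - q) / q       # how far the real bill lands above the quote
    print(f"${q:.2f}/hr: {below:.0%} below the posted rate; "
          f"actual cost {above:.0%} above the quote")
```

Run it and the $2.40 quote comes out forty percent below the posted rate, with the actual cost sixty-six percent above the quote; the $2.99 quote is twenty-five percent low, with a real bill a third higher than expected.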
[Chart: numbers cited across the public ecosystem versus the actual rate on Lambda's pricing page.]
This is the market for AI compute pricing in 2026. It is not that the data is hidden — Lambda publishes their rates openly on their own website. It is that the data is buried under a sediment of stale aggregator blogs, SEO-optimized comparison content, and AI-generated summaries trained on those same stale sources, all citing each other in a closed loop that drifts further from reality each quarter.
The buyers of this compute are not casual customers. They are companies spending millions of dollars per year, sometimes per month, on infrastructure that powers their entire business. They negotiate contracts, build models, and tell their boards what AI is going to cost them based on numbers that are, as often as not, wrong.
This is the gap. This is why we are here.
What we are building.
Sorso View is a publication for the people who move the AI compute economy. The flagship product is the Compute Price Index, a weekly benchmark covering nineteen major GPU cloud providers across five structurally distinct tiers: hyperscale, specialized neoclouds, aggregators and marketplaces, cost-positioned operators, and inference-implied sources. Every Monday morning, we publish the median, range, and week-over-week movement for each tracked hardware configuration, alongside the analysis that explains what the numbers actually mean.
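For readers who want the mechanics behind those three statistics, they reduce to a few lines of arithmetic. A minimal sketch, where the provider names and prices are invented placeholders rather than Index data:

```python
from statistics import median

# Hypothetical on-demand quotes in USD per GPU-hour; placeholders, not Index data.
last_week = {"provider_a": 4.10, "provider_b": 2.95, "provider_c": 3.50}
this_week = {"provider_a": 3.99, "provider_b": 2.85, "provider_c": 3.29}

def weekly_stats(prices: dict[str, float]) -> dict[str, float]:
    quotes = sorted(prices.values())
    return {"median": median(quotes), "low": quotes[0], "high": quotes[-1]}

prev, curr = weekly_stats(last_week), weekly_stats(this_week)
wow_move = (curr["median"] - prev["median"]) / prev["median"]

print(f"median ${curr['median']:.2f}/hr, "
      f"range ${curr['low']:.2f}-${curr['high']:.2f}, "
      f"week-over-week {wow_move:+.1%}")
```

The real panel adds normalization rules and a fixed hardware taxonomy on top of this, but the published numbers are exactly these three quantities, computed the same way every week.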
The Index is not the first attempt to measure GPU pricing. We are aware of the institutional indices being sold to hedge funds via Bloomberg Terminal. We are aware of the comparison sites that aggregate provider rates as lead-generation funnels. We are not building any of those things.
We are building the trade publication for AI compute economics. The audience we serve is not the quantitative analyst at Citadel building a hedging strategy. It is the CFO of a Series B startup spending two million dollars a year on compute. It is the infrastructure VP at a mid-size enterprise planning the AI buildout. It is the procurement leader who has to defend a contract decision to a board next quarter. It is the early-stage investor trying to underwrite an AI infrastructure thesis.
These people do not have Bloomberg terminals. They have inboxes. They have Monday mornings. They have decisions to make and not enough hours to make them well. They need a trusted source that translates the noise of the market into a signal they can act on.
Most of the AI compute market does not transact in basis points. It transacts in spreadsheet rows. The Index is built for the people who fill those spreadsheets.
The four commitments.
A benchmark is a promise. The promise is that the number we publish this week was measured the same way as the number we publish next week, and can be verified by anyone willing to check the sources. We have published the full methodology document on this site. The summary version is four commitments, each of which we will be measured against in every weekly issue.
i. Sourcing
Every data point we publish is traceable to a public source with a recorded URL and timestamp. No anonymous tips. No private feeds. No unsourced estimates. If we cannot cite it, we do not publish it. (For the shape of a single sourced record, see the sketch after this list.)
ii. Consistency
The same provider panel, the same hardware definitions, the same normalization rules, every week. When the methodology changes, we disclose the change in the issue it takes effect, and we explain the reason.
iii. Distinction
Direct rental rates are measured. Inference-implied rates are derived. These are labeled separately and never conflated. Readers always know which they are reading.
iv. Correction
When we are wrong, we correct on the record. Every correction is documented in the issue containing it, with a plain-language explanation of what was wrong, what is right, and how the error occurred.
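To make the Sourcing and Distinction commitments concrete, here is a minimal sketch of the shape of a single observation as described above: one price, one public URL, one capture timestamp, and an explicit measured-versus-derived flag. The field names, the provider, and the values are ours for illustration only; this is not the Index's internal schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Literal

@dataclass(frozen=True)
class PriceObservation:
    provider: str                 # member of the provider panel
    sku: str                      # normalized hardware configuration
    usd_per_gpu_hour: float       # on-demand, publicly quoted rate
    source_url: str               # public page the rate was read from
    observed_at: datetime         # capture timestamp, recorded in UTC
    basis: Literal["measured", "derived"]  # direct rental vs inference-implied

# Hypothetical example; the URL and figures are placeholders, not Index data.
obs = PriceObservation(
    provider="example-cloud",
    sku="H100-SXM-80GB x1",
    usd_per_gpu_hour=3.99,
    source_url="https://example.com/pricing",
    observed_at=datetime(2026, 4, 27, 9, 0, tzinfo=timezone.utc),
    basis="measured",
)
```

A record missing any of these fields would fail the Sourcing commitment; a record without the basis flag would fail Distinction. That is the whole point of writing the shape down.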
What we will not do.
A responsible publication declares its scope. The Compute Price Index does not measure private contract pricing, because that pricing is confidential and customer-specific. We do not measure realized enterprise rates, because they are derivable from public financials but not directly observable in real time. We do not measure committed-use or reserved pricing, because the terms vary too much across buyers to support a stable median.
A growing share of GPU compute is sold through enterprise contracts where pricing is opaque. Major operators including Nscale, IREN, IBM Cloud's enterprise tier, and Fluidstack are economically significant — sometimes operating at hyperscaler scale — but their pricing is not measurable through public sources. We acknowledge them in editorial coverage. We do not pretend to measure them.
The Index measures what most of the market actually transacts at: the on-demand, publicly quoted, single-instance rate. That rate is the reference point against which every other tier is priced. It is the most transparent and continuously measurable signal in this market. It is what we track.
Why now.
The AI compute market in 2026 is in a moment that does not happen often. Capital expenditures across the sector are projected to exceed four trillion dollars between now and 2030. The hardware refresh cycle from Hopper to Blackwell to Rubin is compressing into eighteen-month intervals. Hyperscaler pricing is converging downward as competition intensifies. Specialized neoclouds are raising debt against GPU collateral in a way the industry has not seen since the data center build-out of the late 1990s. The numbers move every week and most of the people transacting at scale do not know what the numbers were last week.
This is the period during which a benchmark gets built. A benchmark is not announced; it is established. It is established by being right, week after week, until the people who matter start citing it without thinking. Five years from now, a CFO will say "according to the Compute Price Index, H100 rates moved twelve percent this quarter," and no one in the room will ask which Compute Price Index they mean. That is what we are building toward.
The work between now and then is straightforward. We publish every Monday. We measure with rigor. We name what we see. We correct ourselves openly when we are wrong. We build the historical record one week at a time. The compounding starts on the first issue and never stops.
A benchmark is not a product you ship. It is a promise you keep, week after week, until the people who matter start citing it without thinking.
What happens next.
The inaugural Compute Price Index issue publishes in the coming weeks. Between now and then, we are completing the first full cycle of data collection across the nineteen-provider panel, baselining the methodology, and finalizing the analysis framework that will accompany every weekly issue. Subscribers who join during this pre-launch window receive the first issue the morning it publishes, alongside notes from the editorial team on what we learned during the build.
In the meantime, we will publish a small number of editorial pieces explaining the structure of this market — how rate cards actually work, why aggregator pricing data is unreliable, how to read a neocloud's financials, what reserved contract math looks like in practice. These articles do not require Index data; they are the foundational reading for anyone who wants to understand what we will be measuring once we begin to measure it.
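As a preview of that last topic, the core of reserved contract math is a utilization break-even: a committed rate beats on-demand only if the hardware actually runs enough of the time. A toy version follows; every figure is invented for illustration and describes no real provider.

```python
# All figures are hypothetical, chosen only to show the shape of the math.
on_demand = 3.99        # USD per GPU-hour, pay-as-you-go
reserved = 2.79         # USD per GPU-hour, 1-year commitment, billed 24/7
hours_per_year = 8760

committed_cost = reserved * hours_per_year  # paid whether or not GPUs run

# Break-even utilization: the fraction of hours you must actually use
# before the commitment is cheaper than renting on demand as needed.
break_even = reserved / on_demand
print(f"break-even utilization: {break_even:.0%}")  # ~70% with these inputs

# At 50% utilization the commitment loses; at 90% it wins.
for utilization in (0.5, 0.9):
    on_demand_cost = on_demand * hours_per_year * utilization
    print(f"{utilization:.0%} utilized: on-demand ${on_demand_cost:,.0f} "
          f"vs committed ${committed_cost:,.0f}")
```

The editorial pieces will work through the real-world complications, including ramp schedules, partial commitments, and hardware refresh risk, that this toy version leaves out.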
If you have made it this far, you are the audience. Subscribe below. We will see you on Monday morning.
— Sorso View Editorial
Toronto · April 27, 2026