Four companies — Amazon, Alphabet, Meta, and Microsoft — have collectively committed to spending roughly six hundred billion dollars on artificial intelligence infrastructure in 2026 alone. That number rivals the entire annual gross domestic product of Sweden. Add Oracle and the latest round of guidance revisions, and the figure approaches seven hundred billion. Add the spending of OpenAI, Anthropic, xAI, and the specialized neoclouds — CoreWeave, Nebius, IREN, Lambda, Crusoe — and the cumulative commitment over the next three to five years stretches past four trillion dollars.
These are not projections of what AI infrastructure spending might become. These are commitments already made, capital expenditure budgets already approved, debt already issued, contracts already signed. The build is not a forecast. It is a present-tense fact, occurring in real time across data centers, power plants, and semiconductor fabrication lines distributed across four continents.
The question worth asking — the question that does not get asked often enough in the breathless coverage of every new training cluster announcement — is what, exactly, this money is being bet on. Not in the technical sense. The technical answer is well-documented. What is being bet on, in the deeper sense, is a particular theory about how artificial intelligence will reshape economic activity, and a particular timeline for that reshaping to produce returns commensurate with the capital being deployed. Both the theory and the timeline could be wrong.
[Table: reported and projected capex commitments for 2026, sourced from public earnings guidance and analyst consensus.]
The CreditSights research note from late 2025 captured the scale most clearly. The combined capital expenditure of the top five hyperscalers in 2026 represents approximately 2.2% of US gross domestic product — a single line item, on a small group of corporate balance sheets, equivalent to one out of every forty-five dollars of American economic output. Roughly seventy-five percent of that, by industry consensus, is dedicated to AI-specific infrastructure rather than traditional cloud services. Four hundred and fifty billion dollars, in one year, on chips and racks and cooling systems and electrical substations, deployed under the assumption that the demand will be there to consume what gets built.
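The one-in-forty-five framing is easy to verify with back-of-envelope arithmetic. A minimal sketch, using assumed round figures (roughly $27 trillion of US GDP and $600 billion of top-five capex — both illustrative inputs, not numbers taken from the CreditSights note):

```python
# Back-of-envelope check of the GDP-share framing.
# Both inputs below are assumed round numbers for illustration only.
us_gdp_bn = 27_000      # assumed US GDP, in $billions
top5_capex_bn = 600     # assumed top-five hyperscaler 2026 capex, in $billions

share = top5_capex_bn / us_gdp_bn
print(f"{share:.1%}")                    # share of GDP -> 2.2%
print(f"1 in {1 / share:.0f} dollars")   # one out of every ~45 dollars
print(f"${0.75 * top5_capex_bn:.0f}B")   # ~75% AI-specific slice -> $450B
```

The point of the sketch is only that the three headline numbers — 2.2% of GDP, one dollar in forty-five, and $450 billion of AI-specific spend — are mutually consistent descriptions of the same capex figure.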
What they are actually buying.
The capital is not flowing where most people assume. The image in the public mind — of vast warehouses filled with humming servers training ever-larger language models — is partially correct and increasingly out of date. Inference workloads now account for an estimated sixty to seventy percent of total AI compute demand at the major hyperscalers, up from roughly forty percent in 2024. The build is shifting from training infrastructure to deployment infrastructure. From research-grade compute to commercial-grade compute. From the question of how to build the models to the question of how to serve them, at scale, to the billions of people now expected to use them.
This is a structural shift that matters. Training infrastructure can sit on a single campus, optimized for raw computational throughput, fed by whatever electrical grid happens to be nearby. Inference infrastructure has to be distributed close to users, has to handle latency-sensitive requests at scale, has to be redundant across geographic regions, and has to operate continuously rather than in scheduled training runs. The capital requirements are different. The geographic footprint is different. The political and regulatory exposures are different. And the demand assumptions underlying the build are entirely different.
Training capex is a bet on the size of the addressable market for foundation models — a small number of laboratories, governments, and enterprises willing to spend billions on the most capable systems. Inference capex is a bet on something much larger and much more uncertain: that AI usage will become as pervasive as web search, that it will become embedded in every software product, every business workflow, every consumer interaction, and that the resulting compute demand will absorb the infrastructure being built fast enough to justify the build. The first bet has roughly a hundred customers. The second bet has roughly eight billion.
The hyperscalers are not actually building for the world that exists now. They are building for the world they need to materialize within the next thirty-six months in order for the capital to make sense. This is the real wager. Not whether AI is real — it is — but whether AI adoption can scale fast enough, monetize deeply enough, and embed itself widely enough to absorb a build whose first phase alone exceeds the cumulative inflation-adjusted cost of the Apollo program.
The financing structure that nobody is discussing.
The hyperscalers have, until recently, funded their capital expenditure primarily from operating cash flow. This is no longer the case. According to analysis from CreditSights, aggregate 2026 capex for the top five — even after accounting for buybacks and dividends — now exceeds projected free cash flow. The funding gap is being closed through external debt markets. Big tech companies issued one hundred billion dollars of bonds in the first quarter of 2026 alone to finance AI capital expenditure. Morgan Stanley and JPMorgan project the technology sector may need to issue an additional one and a half trillion dollars in new debt over the coming years to complete the announced infrastructure plans.
This is a new financial structure for an industry that has historically operated with conservative balance sheets. Hyperscaler liabilities-to-assets ratios remain healthier than the broader S&P 500, but the trajectory is unmistakable. Investor sentiment is beginning to register the shift. The five-year credit default swap spread on Oracle has more than tripled since September of last year. Trading volumes on hyperscaler bonds have surged. The market is repricing the perceived risk of the AI infrastructure buildout, not because the operating businesses are deteriorating — they are not — but because the magnitude and concentration of the capital commitments are unprecedented for these companies, and because some commitments depend heavily on a single customer (Oracle's exposure to OpenAI being the most-discussed example), a concentration risk that is difficult to model.
Goldman Sachs has flagged that AI assets typically depreciate at a rate of roughly twenty percent per year, which implies the hyperscalers are facing an annual depreciation expense exceeding four hundred billion dollars at the current build rate. This is greater than the combined operating profit of the top hyperscalers in 2025. The math works only if the revenue from AI services scales fast enough to absorb the depreciation while the infrastructure is still being deployed. If the revenue arrives more slowly than projected, the depreciation continues to compound regardless. The infrastructure does not get less expensive while the demand catches up.
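The compounding dynamic is easy to see in a toy model. A minimal sketch, assuming straight-line depreciation over five years (the roughly twenty-percent-per-year rate Goldman flags) and a hypothetical build ramp — the capex figures below are illustrative round numbers, not the actual hyperscaler budgets:

```python
# Toy model of compounding depreciation under a multi-year build ramp.
# Assumes straight-line depreciation over a 5-year useful life (~20%/year).

def annual_depreciation(capex_by_year, life_years=5):
    """Each year's capex adds capex / life_years of annual expense
    for the following life_years years; expenses stack."""
    horizon = len(capex_by_year)
    expense = [0.0] * horizon
    for start, capex in enumerate(capex_by_year):
        for year in range(start, min(start + life_years, horizon)):
            expense[year] += capex / life_years
    return expense

# Hypothetical five-year ramp, in $billions (illustrative only).
capex = [300, 450, 600, 650, 700]
print(annual_depreciation(capex))
# -> [60.0, 150.0, 270.0, 400.0, 540.0]
```

Even with these made-up inputs, the fourth year of the ramp already carries a $400 billion annual expense, and the expense keeps climbing as long as the build rate holds — which is the sense in which depreciation compounds regardless of when the revenue arrives.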
What happens if the bet is wrong.
There are roughly three scenarios for how this resolves. The first is the scenario that all of the public capex announcements are tacitly betting on: AI adoption scales rapidly, inference demand absorbs the infrastructure being built, hyperscaler revenue from AI services grows fast enough to validate the depreciation expense, and the buildout produces returns commensurate with the capital deployed. In this scenario, the hyperscalers consolidate their position as essential infrastructure providers for an economy increasingly mediated by AI, the debt is refinanced or amortized through cash flow, and the build pays for itself over a five-to-seven-year horizon. This scenario is plausible. It is also the scenario that requires the most things to go right simultaneously.
The second scenario is overcapacity. AI adoption proceeds, but more slowly than the build assumes. The infrastructure gets built, but the demand to fill it does not arrive at the projected pace. Capacity utilization stays below profitable thresholds for longer than the financing structure can comfortably tolerate. Hyperscaler operating margins compress under depreciation expense not yet matched by revenue. The least-disciplined participants — the neoclouds operating on debt collateralized by GPU inventory, the secondary players who entered the market late — are forced into restructuring, asset sales, or outright bankruptcy. The infrastructure does not disappear; it gets absorbed at distressed valuations by stronger players. The economic damage is concentrated in the AI infrastructure value chain rather than diffused across the broader economy. This is the scenario that resembles the late-1990s telecommunications buildout: massive overcapacity, painful corrections in the affected sector, but eventual absorption as demand catches up over a five-to-ten-year period. It is, historically, the most common outcome of investment cycles of this magnitude.
The third scenario is more difficult to talk about clearly because the discourse around AI has become so polarized. In the third scenario, the foundational thesis itself is partially wrong. Not that AI is fake or useless, but that the specific economic theory underlying the current buildout — that AI will become embedded in essentially every software product, every business workflow, every consumer interaction, generating compute demand commensurate with that ubiquity — proves to be a less profitable kind of true than the build assumes. AI tools become widely used but not in ways that justify the per-user compute cost the build implies. Adoption broadens but monetization narrows. The technology delivers on its promises in some domains (coding, research, creative work) and falls meaningfully short in others (customer service, autonomous agents, enterprise workflow automation). The result is a real technology, real adoption, and a real economic impact — but a profit pool meaningfully smaller than the capital commitments assume. In this scenario, the correction is broader and longer than scenario two, because the asset base does not recover its value even on a five-to-ten-year horizon. Some of the infrastructure gets retrofitted for other purposes. Some of it gets written down. The economic damage extends beyond the immediate AI infrastructure sector into the financing chains that supported it, the equity holders who underwrote it, and the macroeconomic assumptions that justified the capital intensity in the first place.
The probability distribution across these three scenarios is genuinely uncertain. Confident assertions in either direction — that the buildout will obviously pay off, or that it is obviously a bubble — should be treated with skepticism by serious observers. The honest position is that scenario one requires several improbable things to go right; scenario two is the historically typical outcome for capital cycles of this scale; and scenario three is unusual but not unprecedented, with the dot-com bust and the Japanese real estate bubble offering imperfect but instructive parallels.
What it means for the rest of us.
If you do not work at a hyperscaler, do not invest in AI infrastructure equities, and do not run a company that buys AI compute at scale, the four trillion dollar bet may seem like a distant Wall Street story. It is not. The capital being deployed is altering the structure of the broader economy in ways that are already affecting people who have never thought about a GPU.
The first-order effects are obvious. Electrical infrastructure investment has accelerated in jurisdictions hosting hyperscaler data center buildouts, with consequent effects on local power prices, grid reliability, and public utility politics. Real estate markets near major data center clusters are absorbing pricing pressure. Construction labor markets, semiconductor fabrication labor markets, and high-skill engineering labor markets are being reshaped by demand from a small number of very large buyers. These effects are real and measurable in 2026. They are also small compared to what comes next.
The second-order effects are harder to model. If the buildout proceeds as scenario one assumes, the resulting AI infrastructure becomes the foundational layer for an enormous expansion of economic activity, with consequences for employment, productivity, public services, and the geographic distribution of economic opportunity that will play out over decades. If the buildout proceeds as scenario two assumes, a meaningful share of US corporate investment for several years will have been directed into infrastructure that does not produce expected returns, with macroeconomic consequences that include slower productivity growth, pressure on equity valuations across the broader market, and potential financial stress that propagates through credit markets. If scenario three plays out, the consequences are larger and more difficult to bound.
The thing worth understanding, regardless of which scenario emerges, is that the four trillion dollars is not theoretical. It is being spent. The infrastructure is being built. The bet is being placed. The outcome is uncertain, but the wager is not. Every major economic actor — public companies, private investors, sovereign wealth funds, governments — is now positioned, deliberately or by default, on one side or another of this trade. The choices being made in 2026 about how to deploy capital, allocate labor, train people, and structure businesses are choices whose validity depends substantially on which of the three scenarios materializes over the next thirty-six to seventy-two months.
This is what makes the AI era a civilizational moment rather than merely a technology cycle. The capital commitments are large enough, and concentrated enough, that the outcome will reshape the macroeconomic trajectory of the United States and the global economy regardless of which scenario emerges. We are all, in some meaningful sense, holding pieces of this trade — through our retirement accounts, through our employers, through the public goods our governments fund with revenue that depends on tax receipts from a small number of very large companies, through the labor markets we participate in, through the supply chains we are embedded in.
The honest framing, as we move through the next several years, is that the buildout is happening, the bet has been placed, and the rest of the economy is now along for the ride. What we owe ourselves — as readers, as participants, as citizens of an economy reshaping itself in real time — is a clear-eyed view of what is actually being bet, what could go right, and what could go wrong. Without that clarity, we cannot make sensible choices about our own positions within the trade. With it, we can at least see the shape of the moment we are living through, and decide, with fuller information, how we want to be positioned within it.
Sorso View exists to help with that. The Compute Price Index is the data spine — the weekly record of what the AI compute economy actually costs, who is charging what, and how the market is moving. The editorial layer, of which this essay is part, is the broader frame: the economic structure, the financial mechanics, the civilizational implications of a technology revolution being built at unprecedented capital scale. We will not tell you which of the three scenarios will materialize. No one knows. We will tell you what we are seeing, week by week, as the data accumulates, and we will help you make sense of it as the picture clarifies.
The four trillion dollars is on the table. The bet is in motion. The next several years will tell us whether it pays.
— Sorso View Editorial
Toronto · April 30, 2026
Capex figures sourced from CreditSights, Goldman Sachs Research, Futurum Group, and public earnings guidance from Amazon, Alphabet, Meta, Microsoft, and Oracle for 2026.