
The Infinity Economy

Can AI and decentralized systems make scarcity obsolete, or does the physical world have something to say about that?

Vedang Vatsa·March 11, 2026·9 min read

Every generation of economists has worked inside one foundational assumption: resources are limited, wants are not, and the job of economics is to allocate the gap. Adam Smith built on it. David Ricardo formalized it. Karl Marx inverted the politics but kept the structure. Even the most radical twentieth-century thinkers, from Keynes to Hayek, accepted scarcity as the organizing condition of economic life.

A preprint by Pitshou Moleka, published through Preprints.org in June 2025, argues that this assumption is reaching obsolescence. The convergence of artificial intelligence, autonomous production, decentralized energy, and near-zero-cost information replication, he contends, signals the emergence of what he calls the Infinity Economy: a system where wealth creation and distribution are no longer constrained by material limits. The paper proposes a post-monetary, AI-governed framework for organizing economies beyond scarcity.

The ambition is genuinely interesting. The question is whether the evidence supports it, and whether the intellectual scaffolding holds up under the weight of the physical world it claims to transcend.

Where Scarcity Has Weakened

The strongest version of Moleka's argument draws on sectors where scarcity logic has genuinely eroded.

The Scarcity Spectrum

Where different resources sit on the spectrum from abundant to scarce:

  • Digital information: near-zero marginal cost
  • Cognitive labor (AI): falling rapidly
  • Solar energy (levelized): $0.03/kWh, down 90% since 2010
  • Food production: gains offset by climate + population
  • Fresh water: 2.2B lack safe access (UN 2024)
  • Rare earth elements: neodymium demand +70% by 2030
  • Arable land: net loss annually

Digital information is the clearest case. Once a piece of software, a song, or a document is created, the cost of reproducing it one more time is effectively zero. Jeremy Rifkin documented this trend in 2014, arguing that the near-zero marginal cost of digital goods was disrupting media, publishing, and software. He was broadly right. The economics of a Spotify stream or a Wikipedia article do not follow the logic of oil extraction. The good does not deplete when consumed.
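The arithmetic behind this claim is simple enough to make explicit. A minimal sketch, with illustrative figures for fixed and marginal cost (the $2M and $0.0001 are assumptions, not numbers from the essay or from Rifkin), shows how the average cost of a digital good collapses toward its near-zero marginal cost as copies scale:

```python
# A minimal sketch of the zero-marginal-cost logic described above.
# The $2M fixed cost and $0.0001 per-copy cost are illustrative assumptions.

def average_cost(fixed_cost: float, marginal_cost: float, copies: int) -> float:
    """Fixed cost amortized over all copies, plus the per-copy cost."""
    return fixed_cost / copies + marginal_cost

for copies in (1_000, 1_000_000, 1_000_000_000):
    cost = average_cost(fixed_cost=2e6, marginal_cost=1e-4, copies=copies)
    print(f"{copies:>13,} copies -> ${cost:,.4f} per copy")
```

At a billion copies the average cost is indistinguishable from the marginal cost, which is the precise sense in which scarcity pricing loses its grip.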

  • $0.00: marginal cost of a digital copy (Rifkin, 2014)
  • 90%: drop in solar energy cost since 2010
  • ~$0: cost of an additional Wikipedia page view
  • 2-3x: annual drop in LLM inference cost (industry data)

Cognitive labor is following a similar, if less complete, path. Large language models now perform legal research, medical triage, financial analysis, and software debugging at costs that fall with each model generation. Brynjolfsson and McAfee described this as the second machine age, where machines move from replacing muscle to replacing cognition. If the cost of generating a competent first draft of a legal brief falls by 90% (and it has), then a meaningful portion of what lawyers bill for has moved closer to zero marginal cost.

Additive manufacturing has made localized, on-demand production feasible for certain categories of goods, from prosthetics to aerospace components. Chris Anderson argued over a decade ago that this would democratize manufacturing. The democratization has been slower than predicted, but the direction is observable.

These are genuine shifts, and they deserve serious economic analysis. Moleka is right that the mainstream economics profession has not fully reckoned with what happens when the principal vectors of value (information and cognition) become non-rival and near-infinitely replicable.

Where the Argument Overreaches

The difficulty begins when the paper extends the logic of digital abundance to the entire economy. Information can be copied at zero marginal cost. Lithium cannot. Fresh water cannot. Arable land cannot.

Data Center Electricity Consumption

Global TWh/year (IEA base case):

2020: 240
2022: 340
2024: 415
2026*: 590
2028*: 760
2030*: 945

Source: IEA Electricity 2025 report. *Projected. US and China account for nearly 80% of growth. At 945 TWh, data centers would consume ~3% of global electricity.

The IEA reported that global data center electricity consumption reached approximately 415 terawatt-hours in 2024, roughly 1.5% of total global electricity use. Their base-case projection puts this at 945 TWh by 2030, growing at approximately 15% per year as AI workloads scale. The United States and China account for nearly 80% of this growth. These "dematerialized" AI systems run on semiconductor chips manufactured from rare earth elements, cooled by water in facilities that consumed 66 billion liters in the United States alone in 2023, and powered by electricity generated overwhelmingly from fossil fuels.
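As a sanity check, the implied growth rate can be computed directly from the two endpoints the IEA gives. Note that the global-generation figure at the end is back-derived from the ~3% share quoted in the chart caption, not an independent number:

```python
# Back-of-envelope check on the IEA trajectory quoted above.

start_twh, end_twh = 415.0, 945.0   # 2024 actual, 2030 base-case projection
years = 2030 - 2024
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")   # ~14.7%, i.e. roughly 15%/year

# The ~3% share in the caption back-implies global generation by 2030:
print(f"Implied global generation: {end_twh / 0.03:,.0f} TWh")  # ~31,500 TWh
```

The ~15% annual growth figure in the text is consistent with the endpoints, so the projection stands or falls on the IEA's demand assumptions, not on the arithmetic.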

The paper references "decentralized quantum energy" as a pathway to eliminating traditional energy constraints. As of mid-2026, commercial quantum energy generation does not exist. Fusion energy, which is closer to realization, remains at least a decade from commercial deployment according to the most optimistic estimates from the IAEA. The gap between what is theoretically possible and what is deployable at scale is not a minor detail. It is the central question.

Material Constraints on Digital Abundance

Physical inputs the "dematerialized" economy depends on

Resource | Demand trend | Supply constraint | Risk
Rare earths (neodymium) | +70% by 2030 | China controls 60% of mining, 90% of processing | Critical
Lithium | +300% by 2030 | Surplus in 2023-24, deficit expected by 2026 | High
Water (data centers) | 66B liters/yr (US alone, 2023) | Competing with agriculture, drinking water | High
Copper | +25% by 2030 | Grade decline at existing mines | Medium
Semiconductors | +15% CAGR | Concentrated in Taiwan (TSMC) | Critical

Sources: IEA Global Critical Minerals Outlook 2025, S&P Global Commodity Insights, IEA Electricity 2025.

In 2025, China tightened export controls on rare earth elements essential for electric vehicle motors and AI chip components. Global demand for neodymium alone is projected to increase by over 70% by 2030. The lithium market, after a surplus in 2023-2024, is expected to face deficits by 2026 according to the IEA. Semiconductor fabrication remains concentrated in Taiwan, with TSMC producing over 90% of the world's most advanced chips. The very technologies that the Infinity Economy positions as pathways to post-scarcity are themselves constrained by material scarcities that show no sign of disappearing.

Nicholas Georgescu-Roegen's foundational work on entropy in economics, published in 1971, established something that remains true regardless of how sophisticated AI becomes: economic processes transform low-entropy resources into high-entropy waste. This is not a feature of capitalism or socialism. It is a feature of physics. No amount of algorithmic optimization changes the second law of thermodynamics.

The Cost of Building "Abundance"

One of the less examined ironies of the AI abundance thesis is how expensive the infrastructure of abundance itself has become.

Frontier Model Training Costs

Estimated cost per training run (USD):

GPT-3 (2020): $12M
GPT-4 (2023): $100M
Gemini Ultra (2023): $191M
Llama 3.1 405B (2025): $170M
Next-gen frontier* (2026-27): $1B+

Sources: Galileo AI, Local AI Master, Time, About Chromebooks. *Industry consensus projection. Costs growing 2-3x per year. Compute/hardware is 47-67% of total cost.

The cost of training frontier AI models has escalated from roughly $12 million for GPT-3 in 2020 to an estimated $100-191 million for GPT-4 and Gemini Ultra in 2023. Industry consensus projects that the next generation of frontier models may cost upward of $1 billion for a single training run. This cost grows at 2-3x per year. Compute and hardware represent 47-67% of total training cost, with the remainder split between R&D talent and data infrastructure.
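The 2023 baseline and the 2-3x annual growth rate can be checked against each other with a short compounding sketch; the $191M starting point is the Gemini Ultra estimate cited above:

```python
# Compounding the cited 2023 baseline at the cited 2-3x annual rate
# to see whether the $1B+ projection for 2026-27 is internally consistent.

base_cost = 191e6  # Gemini Ultra training-run estimate, 2023 (cited above)

for growth in (2.0, 3.0):
    cost = base_cost
    print(f"--- assuming {growth:.0f}x per year ---")
    for year in range(2024, 2028):
        cost *= growth
        print(f"{year}: ${cost / 1e9:.2f}B")
```

Even at the slow end of the range, a frontier run crosses $1 billion by 2026, so the projection follows mechanically from the growth rate rather than requiring any further assumption.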

  • $1B+: projected cost of next-gen training run (industry consensus, 2026)
  • 47-67%: hardware share of training cost (research estimates)
  • $325B+: combined Big Tech AI capex, 2025 (Visual Capitalist)
  • 6: companies that can afford frontier training

The capital requirements for frontier AI create a structural paradox. The technology that promises to make cognitive labor abundant is itself accessible only to a handful of companies with the resources to build it. In 2025, the combined AI-related capital expenditure of the six largest technology companies (Amazon, Apple, Alphabet, Microsoft, Meta, Nvidia) exceeded $325 billion. The barrier to entry for competing at the frontier is rising, not falling.

This is not necessarily a permanent condition. Open-weight models (Llama, Mistral, DeepSeek) have demonstrated that smaller organizations can build competitive models at lower cost. Inference costs are falling rapidly, roughly 2-3x per year. The question is whether the concentration at the training level translates into lasting market power, or whether competition at the deployment level prevents it.

The Distribution Problem That Abundance Does Not Solve

Moleka's paper acknowledges, relatively briefly, that abundance does not automatically resolve issues of power and distribution. This may be the most important concession in the paper, and it deserved more space.

Platform Concentration

Market cap ($T) of companies controlling AI infrastructure, March 2026:

Nvidia: $4.4T
Apple: $3.8T
Alphabet: $3.5T
Microsoft: $2.8T
Amazon: $2.2T
Meta: $1.8T

Combined revenue: $2.15T (2025). 53% of S&P 500 returns from these firms.

Sources: Visual Capitalist, Forbes, The Street. Market caps approximate as of March 2026.

The digital economy has already demonstrated what happens when a resource becomes abundant while the infrastructure for producing and distributing it remains concentrated. Shoshana Zuboff documented in The Age of Surveillance Capitalism how the near-infinite scalability of data collection did not produce broadly shared prosperity. It produced platform monopolies. Google, Meta, Amazon, and a handful of other companies captured the economic value of digital abundance precisely because they controlled the infrastructure through which that abundance flowed.

As of March 2026, six technology companies command a combined market capitalization exceeding $18 trillion. Their combined revenue in 2025 was approximately $2.15 trillion. These six firms drove roughly 53% of the S&P 500's total return in 2025. This is not the distribution pattern of an economy where abundance has been broadly shared.
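The headline figure is just the sum of the six valuations in the chart above, worth making explicit:

```python
# Summing the approximate March 2026 market caps from the chart above.
market_caps_usd_t = {
    "Nvidia": 4.4, "Apple": 3.8, "Alphabet": 3.5,
    "Microsoft": 2.8, "Amazon": 2.2, "Meta": 1.8,
}
print(f"Combined market cap: ${sum(market_caps_usd_t.values()):.1f}T")  # $18.5T
```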

There is no reason to assume that a future economy of abundant AI-generated goods and services would distribute itself differently without deliberate institutional design. If anything, the capital requirements for frontier AI development suggest that concentration could intensify. Nick Srnicek's analysis of platform capitalism showed that network effects and data advantages create winner-take-all dynamics that become harder to reverse over time.

The paper proposes DAOs and blockchain-based governance as alternatives to centralized control. The empirical record of DAOs has not been encouraging on the governance front. The largest DAO experiments, including those in the Ethereum ecosystem, have struggled with low participation rates, plutocratic voting structures where influence scales with token holdings, and vulnerability to exploits. These may be solvable engineering problems. In practice, they have persisted through multiple generations of protocol design.

The Empirical Reality Check

The strongest argument against post-scarcity frameworks is not theoretical. It is empirical.

Global scarcity in 2024, by the numbers:

  • 673M: people undernourished in 2024 (FAO SOFI 2025)
  • 2.2B: without safe drinking water (UN Water 2025)
  • 2.6B: cannot afford a healthy diet (FAO/UNICEF 2025)
  • 8.2%: global undernourishment rate (FAO SOFI 2025)
  • 512M: projected hungry by 2030 (WHO/FAO projection)
  • -7%: renewable water per capita over the past decade (UN AQUASTAT)

In 2024, 673 million people were undernourished according to the FAO (down from 735 million in 2022, but still above pre-pandemic levels). Approximately 2.2 billion people lacked access to safely managed drinking water. Some 2.6 billion people could not afford a healthy diet. Renewable water availability per person declined by 7% over the past decade. Global rare earth supply chains are tightening under geopolitical pressure. AI data centers are straining electrical grids and water supplies in regions where they cluster.

Current projections estimate that approximately 512 million people may still be chronically undernourished by 2030, with nearly 60% of them in Africa.

An economics that theorizes abundance while 8.2% of the world's population cannot feed itself has skipped a step. The step it has skipped is the most important one.

What the Paper Gets Right

The most valuable contribution of Moleka's framework is not the specific claims about quantum energy or post-monetary exchange, which remain speculative. It is the broader point that economics as a discipline has not adequately theorized abundance.

The profession's modeling tools, from supply-demand curves to general equilibrium models, are built on assumptions of rival, excludable goods. When the primary output of an economy shifts toward non-rival, non-excludable goods, those tools become less useful. This is a legitimate intellectual gap.

W. Brian Arthur's complexity economics provides a more productive foundation than classical equilibrium models for thinking about economies where value emerges from network effects and positive feedback loops rather than from the allocation of scarce inputs. Arthur's work at the Santa Fe Institute has shown that economies with increasing returns behave fundamentally differently from the diminishing-returns economies that classical theory describes well. Moleka draws on Arthur's work, and the connection is well placed.

The paper's engagement with post-labor identity also deserves attention. If cognitive automation continues on its current trajectory, and the Stanford Digital Economy Lab's data on early-career job displacement suggests it may, then large populations of educated workers could face a genuine crisis of economic purpose within a decade. The meaning of "contribution" in a society where most productive tasks can be automated is a question that neither economics nor philosophy has answered adequately. This is an ethical challenge of the next twenty years, regardless of whether anything resembling the Infinity Economy materializes.

The Dual Economy Framework

Abundance dynamics:
  • Digital information (zero marginal cost)
  • Cognitive labor (LLM-generated)
  • Software development
  • Media, music, text, images
  • Financial modeling and analysis

Scarcity dynamics:
  • Rare earth elements
  • Fresh water, arable land
  • Energy (generation + transmission)
  • Semiconductor fabrication
  • Physical logistics and housing

The most productive path forward may not require choosing between scarcity economics and abundance economics. Modern economies operate on both logics simultaneously.

Software follows abundance dynamics. Lithium follows scarcity dynamics. Food production falls somewhere between, with productivity gains offset by population growth, climate change, and soil degradation. Solar energy costs have fallen 90% since 2010 according to IRENA, but the minerals required for solar panels remain subject to extraction constraints and geopolitical competition.

An economics adequate to the twenty-first century would need to model these different regimes within a single framework, which neither classical economics nor post-scarcity theory currently does well. The key variables, illustrated in a short sketch after this list, are:

What determines which logic applies

Rivalness: Can the good be consumed without reducing what others receive? Digital goods are non-rival. Physical goods are rival.

Marginal cost: How much does one additional unit cost to produce? For software, near zero. For rare earth extraction, rising.

Infrastructure concentration: Who controls the production and distribution channels? Concentrated infrastructure converts abundance into monopoly rents.

Thermodynamic cost: What energy and material inputs does production require? This is the ultimate constraint that no amount of software can eliminate.
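None of this appears in Moleka's paper, but as a way of making the framework concrete, here is a minimal sketch of how the four variables might classify a good into one regime or the other. The thresholds, the scoring rule, and the example values are all illustrative assumptions:

```python
# A minimal sketch, under assumed thresholds, of the dual-economy
# classification described above. Values and cutoffs are illustrative.

from dataclasses import dataclass

@dataclass
class Good:
    name: str
    rival: bool                 # does one person's use reduce another's?
    marginal_cost: float        # USD per additional unit
    infra_concentration: float  # 0 (open) .. 1 (single gatekeeper)
    energy_kwh_per_unit: float  # proxy for the thermodynamic cost

def regime(g: Good) -> str:
    """Classify which logic dominates under the illustrative thresholds."""
    if not g.rival and g.marginal_cost < 0.01 and g.energy_kwh_per_unit < 0.01:
        # Near-free to replicate; concentrated infrastructure can still
        # convert this abundance into monopoly rents.
        if g.infra_concentration > 0.5:
            return "abundance (rent-capture risk)"
        return "abundance"
    if g.rival and g.marginal_cost > 1.0:
        return "scarcity"
    return "mixed"

examples = [
    Good("software copy", rival=False, marginal_cost=0.0001,
         infra_concentration=0.7, energy_kwh_per_unit=0.001),
    Good("lithium (kg)", rival=True, marginal_cost=8.0,
         infra_concentration=0.6, energy_kwh_per_unit=15.0),
    Good("grain (kg)", rival=True, marginal_cost=0.3,
         infra_concentration=0.3, energy_kwh_per_unit=1.0),
]
for g in examples:
    print(f"{g.name:>14}: {regime(g)}")
```

Software lands in the abundance regime with a rent-capture warning, lithium in the scarcity regime, and grain in between, which mirrors the essay's own examples.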

Moleka's paper, which he describes as a "conceptual scaffold," is most useful as exactly that. It identifies a genuine blind spot in economic theory. The risk is in mistaking the scaffold for the building. The building, if it can be built at all, requires solving problems of energy, materials, governance, and distribution that theory alone does not touch.

The physics does not care about the theory. And for now, the physics is winning.

Key Takeaway

The Infinity Economy describes economic systems where the marginal cost of production approaches zero for digital goods: software, media, educational content, AI-generated assets. When reproduction is free, scarcity-based pricing models break down, and the economics of abundance require new frameworks for value capture, creator compensation, and intellectual property. The transition is already underway in software (open source), media (streaming platforms with near-zero marginal distribution cost), and AI-generated content (text, images, code produced at token-level cost). The structural challenges include sustaining incentives for creation when copies are free, preventing monopolistic capture of abundance by platform intermediaries, and developing licensing and attribution frameworks for AI-generated derivatives.