
The Scarcity Assumption
Modern economics rests on a foundational premise. Resources are finite. Demand exceeds supply. Markets exist to allocate scarce goods efficiently. Every major institution, from central banks to corporate boards to labor unions, operates within this framework. Pricing, wages, interest rates, and profit margins all derive their logic from the assumption that producing another unit of something costs something.
That premise is under pressure.
The convergence of artificial intelligence, renewable energy, and open-source infrastructure is driving the marginal cost of several categories of goods and services toward levels that challenge traditional pricing models. This is not a prediction about a distant future. It is a measurable, accelerating trend visible in 2026 market data.
The concept itself is not new. Jeremy Rifkin articulated the "zero marginal cost society" thesis in 2014, arguing that the Internet of Things, renewable energy, and collaborative commons would gradually erode the cost of production. A decade later, AI has added a dimension Rifkin did not fully anticipate. It is not just the distribution of information that is becoming cheaper. The production of intelligence itself, the most valuable input in a knowledge economy, is experiencing deflationary pressure at a rate that outpaces historical precedent.
Three Vectors of Deflation
The post-scarcity thesis gains traction not from any single technology but from the simultaneous convergence of three independent deflationary vectors. Each alone would reshape its respective sector. Together, they create compounding effects that may alter the structure of entire economies.
1. The Cost of Intelligence
For most of industrial history, intelligence was expensive. It was embodied in people, required years of education, and commanded wages proportional to its scarcity. A legal opinion, a medical diagnosis, a financial analysis, a software architecture: these outputs required highly trained human minds, and the supply of those minds was inherently limited.
Generative AI has introduced an asymmetry into this equation. A model that costs billions to train can produce its millionth output at a fraction of the cost of its first. Between 2024 and 2025, the cost of AI inference dropped by over 75% for comparable tasks, according to Contrary Research. The trajectory resembles Moore's Law but for cognitive output rather than transistor density.
The marginal cost of producing a legal brief, a market analysis, a code module, or a technical summary is approaching the cost of the electricity required to run the inference. The structural implications for any profession built on selling cognitive labor are significant.
This does not mean intelligence is free. The distinction matters. Training frontier models requires billions in capital expenditure. Running them at scale requires data centers, energy, and specialized hardware. AI companies report gross margins of 50 to 60%, far below the 80 to 90% margins of traditional software businesses. The marginal cost is low, not zero.
But the economic effect is similar. When a task that previously required a $200-per-hour specialist can be completed by a model for a few cents, the market price of that output faces relentless downward pressure. This is not replacing the specialist. It is re-pricing the output.
McKinsey estimates that generative AI may automate 60 to 70% of current work activities. Studies from Wharton and MIT report productivity gains of 15 to 50% for knowledge workers using AI tools. The cost of routine cognitive labor (data summarization, document drafting, basic code generation, administrative policy writing) has fallen so sharply that some researchers describe it as a structural "wage cut" for specific task categories.
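The re-pricing described above can be made concrete with a toy calculation. The $200-per-hour rate comes from the text; the task duration and the per-output inference cost are illustrative assumptions, not reported figures.

```python
# Toy re-pricing arithmetic for a single cognitive output.
# Only the $200/hour rate comes from the text; the task duration
# and per-output inference cost are illustrative assumptions.

specialist_rate = 200.0   # USD per hour (rate cited above)
hours_per_task = 2.0      # assumed time for one brief or analysis
inference_cost = 0.05     # assumed "few cents" per model output, USD

human_cost = specialist_rate * hours_per_task   # 400.0 USD
ratio = human_cost / inference_cost             # 8,000x compression

print(f"Human cost per output: ${human_cost:.2f}")
print(f"Model cost per output: ${inference_cost:.2f}")
print(f"Cost compression:      {ratio:,.0f}x")
```

Even if the assumed inference cost is off by an order of magnitude in either direction, the compression ratio remains in the thousands, which is the structural point.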
2. The Cost of Energy
The second vector operates on a slower timeline but carries broader implications. Solar photovoltaic technology has achieved a 97% decline in levelized cost of electricity (LCOE) since 2010, according to IRENA. By 2024, solar was 41% cheaper on average than the lowest-cost fossil fuel alternative globally.
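The cumulative 97% figure implies a steep compound rate. A quick back-of-envelope check, under the simplifying assumption of a constant annual rate of decline (our assumption; IRENA reports only the cumulative figure):

```python
# Back-of-envelope: the compound annual decline implied by a 97%
# drop in solar LCOE between 2010 and 2024, assuming a constant
# year-over-year rate (a simplification for illustration).

total_decline = 0.97    # cumulative decline since 2010 (IRENA figure)
years = 2024 - 2010     # 14 years

remaining = 1.0 - total_decline            # 3% of the 2010 cost remains
annual_factor = remaining ** (1 / years)   # year-over-year cost multiplier
annual_decline = 1.0 - annual_factor       # roughly 22% per year

print(f"Implied annual decline: {annual_decline:.1%}")
```

A sustained decline of roughly a fifth of the cost base every year, for fourteen years, is the kind of trajectory usually associated with semiconductors, not energy infrastructure.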
The dynamics here are well understood. China's massive investment in clean technology manufacturing has created an overcapacity of solar modules, driving global prices to record lows. N-type cells and bifacial panels have become industry standard, increasing energy yield per square meter. Battery energy storage system costs fell 27% between 2024 and 2025 (BloombergNEF), making solar-plus-storage increasingly viable for baseload power.
Global cumulative installed solar PV capacity surpassed 2,260 gigawatts by the end of 2024 (IEA PVPS). Renewables, led by solar and wind, account for the vast majority of new electricity capacity additions worldwide. The direction is not in question. The timeline is.
Energy is the substrate of all economic activity. When the cost of energy declines structurally, the cost floor of everything else declines with it. Manufacturing, transportation, computation, agriculture, and water desalination all have energy as a significant input cost. The post-scarcity thesis for energy does not require energy to become literally free. It requires energy to become cheap enough that it is no longer a meaningful constraint on production.
This is already happening in some geographies. In parts of Texas, solar overproduction has driven wholesale electricity prices below zero during peak generation hours. In Chile, Germany, and Australia, similar patterns have emerged. The economics of curtailment, where energy is intentionally wasted because the grid cannot absorb it, point to a system where the problem is shifting from scarcity to management of surplus.
3. The Cost of Digital Goods
The third vector is the oldest and most established. The marginal cost of distributing a digital good (a song, a document, a software package, a video) has been near zero since the broadband era. This is why the music industry collapsed and rebuilt, why journalism is in perpetual crisis, and why open-source software dominates infrastructure.
What AI adds to this existing trend is the production side. Previously, creating a high-quality digital good still required expensive human labor. A software engineer, a designer, a writer, a musician. AI compresses the production cost alongside the distribution cost. When both production and distribution trend toward zero, the entire value chain of digital goods faces restructuring.
The post-scarcity thesis is strongest where all three vectors converge. A knowledge product, powered by cheap intelligence, running on cheap energy, distributed at near-zero cost. In that intersection, the traditional cost structure of an industry can collapse in years, not decades.
What Post-Scarcity Does Not Mean
The term "post-scarcity" invites misunderstanding. At its extreme, it implies a world where everything is free and abundant. That world does not exist, and it may never exist. The useful version of the concept is more precise.
Post-scarcity in specific categories. Digital goods, cognitive labor, and energy are trending toward abundance. Physical goods, land, rare minerals, clean water, and human attention remain scarce. The economy is not uniformly approaching post-scarcity. It is bifurcating into categories of abundance and categories of persistent scarcity.
New forms of scarcity emerge. As AI makes cognitive output abundant, the scarce resource shifts. In 2026, the most valuable inputs are proprietary data (the context that general models lack), physical execution capability (robotics, manufacturing, logistics), and human judgment in novel situations. Scarcity does not disappear. It migrates.
Infrastructure concentrates. Even if the marginal cost of an AI query trends toward zero, the fixed cost of building the infrastructure to run that query is enormous. Data centers, GPU clusters, energy contracts, and training pipelines require billions in capital. The economics of post-scarcity at the consumer level may coexist with extreme concentration of infrastructure at the producer level. A few entities may control the "utilities" of abundance.
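The fixed-versus-marginal tension above has a simple shape: the average cost of a query is the fixed cost amortized over volume plus the marginal cost, so it approaches the marginal cost only at enormous scale. Both figures below are illustrative assumptions; the point is the curve, not the numbers.

```python
# Average cost per query = fixed cost amortized over volume
# plus marginal cost. Both figures are illustrative assumptions.

fixed_cost = 2e9        # assumed training + infrastructure capex, USD
marginal_cost = 0.001   # assumed per-query inference cost, USD

for queries in (1e6, 1e9, 1e12):
    avg_cost = fixed_cost / queries + marginal_cost
    print(f"{queries:>16,.0f} queries -> ${avg_cost:,.4f} per query")
```

Only at trillions of queries does the average cost approach the marginal cost, which is why near-free consumer pricing and extreme producer-side concentration can coexist: only a handful of entities can amortize the fixed cost over enough volume.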
Distribution is political, not automatic. The most persistent critique of post-scarcity optimism is that abundance does not automatically translate to access. The United States produces enough food to feed its population several times over, yet food insecurity persists. Technology makes abundance possible. Institutions determine whether abundance is shared.
The gap between technological abundance and equitable distribution is not a technical problem. It is a governance problem. Every prior wave of cost reduction, from the printing press to the internet, generated immense surplus value. The question was never whether the value would exist. It was who would capture it.
The Macroeconomic Paradox
Central banks, the institutions most responsible for managing modern economies, are built for scarcity. Their primary tools (interest rates, money supply adjustments, inflation targeting) assume an economy where demand can outstrip supply, where wages can drive prices upward, and where the normal state of affairs involves managing cycles of growth and contraction within a framework of finite resources.
AI-driven deflation presents a category problem for these institutions. When the cost of cognitive labor drops by 75% in a year, that is deflationary. But it is not the demand-deficient deflation that central banks fear (recession, unemployment spirals). It is "good deflation," the kind that improves living standards by making goods and services more affordable.
The problem is that existing monetary policy frameworks do not clearly distinguish between these two types. A sustained period of falling prices, regardless of cause, can trigger policy responses designed for recession (lower interest rates, quantitative easing) that may be inappropriate for a technology-driven cost reduction.
Japan's experience offers a partial case study. Three decades of deflationary pressure, driven in part by demographic decline and efficiency gains, challenged the Bank of Japan's ability to stimulate growth using conventional tools. The AI era may generalize this dynamic globally, not from demographics, but from technology. Economists are beginning to frame this as a structural shift that requires new macroeconomic thinking, not just new policy tools.
The Wage-Consumption Question
Classical economics assumes a feedback loop. Workers produce goods. They earn wages. They spend wages on goods. Their spending creates demand. That demand creates more production, more jobs, more wages. This loop has sustained market economies for two centuries.
AI introduces a potential break in this loop. If machines perform a growing share of productive labor, the mechanism for distributing purchasing power (wages) weakens. Output may continue to grow, but income may not keep pace. The result is a demand problem that looks different from a traditional recession.
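The mechanism can be sketched as a toy simulation: output grows while the wage share of output drifts downward as automation expands. Every parameter below is an assumption chosen to illustrate the dynamic, not a forecast.

```python
# Toy simulation of the wage-consumption loop: output grows while
# the wage share declines as automation expands. All parameters
# are illustrative assumptions, not forecasts.

output = 100.0       # index of total production
wage_share = 0.60    # fraction of output paid out as wages
growth = 0.03        # assumed annual output growth
share_drift = 0.01   # assumed annual fall in the wage share

for year in range(21):
    wages = output * wage_share   # purchasing power distributed via labor
    gap = output - wages          # output not backed by wage-funded demand
    if year % 5 == 0:
        print(f"year {year:2d}: output={output:6.1f} "
              f"wages={wages:6.1f} gap={gap:6.1f}")
    output *= 1 + growth
    wage_share = max(wage_share - share_drift, 0.0)
```

In this sketch the gap between output and wage-funded demand nearly triples over twenty years even though output itself grows throughout, which is the sense in which the problem differs from a traditional recession: production is fine, but the distribution channel weakens.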
This is not a new concern. Keynes anticipated it in 1930 with his essay "Economic Possibilities for our Grandchildren," predicting that by 2030, the "economic problem" of scarcity would be largely solved, replaced by the challenge of how to use the resulting leisure. He did not foresee that the transition period would be the most disruptive part.
Several proposed mechanisms exist for addressing this structural gap.
Universal Basic Income (UBI). Direct cash transfers to all citizens, funded by taxation of productivity gains. Finland, Kenya, and several U.S. municipalities have conducted trials with mixed but generally positive results on well-being, though scalability and fiscal sustainability remain debated.
Universal Basic Compute. A more recent proposal that suggests distributing access to AI compute as a public utility, rather than distributing cash. The logic is that if AI is the new means of production, access to that production capacity is more valuable than a fixed income transfer. Sam Altman and others in the AI industry have proposed variations of this concept.
Stakeholder models. Restructuring corporate governance to distribute productivity gains more broadly, through profit-sharing, equity ownership, or cooperative structures, rather than concentrating them in capital returns to shareholders.
Shortened work weeks. Using productivity gains to reduce labor requirements rather than reduce headcount, distributing the same output across fewer working hours per person.
The post-scarcity challenge is not "how do we produce enough?" The industrial revolution answered that question. The challenge is "how do we distribute the value of abundance when the traditional distribution mechanism, wages for labor, no longer scales with output?"
Abundance Infrastructure
If the production of intelligence, energy, and digital goods is trending toward abundance, a new category of strategic investment emerges. Rather than investing in the goods themselves (which are becoming commodities), the opportunity may lie in the infrastructure that enables and manages abundance.
Energy storage and grid management. As solar overproduction creates periodic surplus, the value migrates from generation to storage and distribution. The ability to store energy cheaply and distribute it efficiently becomes the scarce capability in an energy-abundant world.
AI orchestration and safety. As baseline intelligence becomes commoditized, the value migrates to orchestration (managing multiple AI systems to achieve complex goals), safety (ensuring AI systems operate within acceptable parameters), and domain-specific context (the proprietary data and workflows that general models lack).
Physical execution. Robotics, advanced manufacturing, and logistics remain tied to physical geography, material science, and engineering constraints that software cannot bypass. The "latency bound" of physical execution, the time it takes to move atoms rather than bits, creates a category of persistent scarcity even in an otherwise abundant economy.
Verification and trust. In a world where AI can generate infinite content, code, and analysis, the ability to verify authenticity, accuracy, and provenance becomes increasingly valuable. Cryptographic provenance, human attestation frameworks, and audit systems represent a growth category built on the need to manage abundance rather than create it.
The Bifurcated Economy
The most likely near-term outcome is not a post-scarcity utopia or a dystopian collapse, but a bifurcated economy. Certain sectors, primarily digital goods, cognitive services, and energy, may experience deflation so persistent that their pricing models fundamentally change. Other sectors, primarily physical goods, real estate, healthcare delivery, and anything requiring human presence or rare materials, may continue to operate under scarcity dynamics.
This bifurcation creates unusual dynamics. A person might access world-class AI tutoring for free while being unable to afford housing. A company might generate sophisticated market analysis at near-zero cost while facing rising costs for raw materials and logistics. The "post-scarcity" label applies unevenly, and the economic stress falls disproportionately on those whose livelihoods depend on the sectors experiencing deflation.
The transition to partial post-scarcity is asymmetric. Capital flows toward abundance infrastructure (energy storage, AI compute, verification). Labor value concentrates in execution, judgment, and relationship management. The sectors experiencing deflation shed traditional jobs. The sectors experiencing persistent scarcity face cost inflation from demand pressure. Managing this asymmetry may define economic policy for the next generation.
What History Suggests
Every major deflationary wave in history (the printing press, the steam engine, electrification, the internet) followed a similar pattern. An initial period of disruption and displacement. A middle period of institutional adaptation. A long-term expansion of total wealth and well-being.
The printing press destroyed the economics of manuscript copying but created publishing, journalism, and mass literacy. The steam engine displaced artisanal manufacturing but created the industrial middle class. The internet collapsed the economics of physical media distribution but created the digital economy.
In each case, the transition was painful, uneven, and politically contested. In each case, the new equilibrium generated more total value than the old one. In each case, the key variable was not the technology itself but the institutional response, the laws, regulations, social contracts, and governance structures that determined how the new abundance was distributed.
The post-scarcity technology thesis is not a claim that scarcity is ending everywhere. It is an observation that specific, measurable cost functions in intelligence, energy, and digital goods are declining fast enough to challenge the institutional frameworks designed around their scarcity. The question is not whether these trends continue. The evidence suggests they almost certainly do. The question is whether the institutions responsible for managing economic life can adapt at a pace that matches the technology.
Post-scarcity technology is not a utopian endpoint. It is a measurable, accelerating trend in three specific domains: the cost of intelligence (AI inference down 75%+ in one year), the cost of energy (solar LCOE down 97% since 2010), and the cost of digital goods (trending toward zero for both production and distribution). The economic challenge is not production. It is distribution. When the marginal cost of the most valuable goods in a knowledge economy approaches zero, the institutions built to price, regulate, and tax those goods face structural pressure. The practical questions (how to fund public services when the tax base shifts from labor to capital, how to distribute purchasing power when wages decouple from output, how to govern AI systems that operate at scales beyond human oversight) may define the next two decades of economic policy. The technology creates abundance. The governance determines who benefits.