veda.ng

The AI Economy

By Vedang Vatsa · Published: March 7, 2026

Every major technology shift produces the same two reactions at the same time. One group insists that this time is different, that the disruption is total. Another insists it is not, that the economy always adapts. Both are usually partly right and partly wrong. The interesting question is never whether AI changes work. It already has. The interesting question is how the gains and losses distribute, and whether societies can shape that distribution or only react to it after the fact.

The Numbers on Disruption

The IMF's January 2024 staff note estimated that nearly 40% of global employment is exposed to AI, rising to 60% in advanced economies. Goldman Sachs projected that up to 300 million full-time jobs across the United States and Europe may be affected by generative AI, while also suggesting the technology could add roughly $7 trillion to global GDP over a decade. The World Economic Forum's 2025 Future of Jobs Report, surveying over 1,000 employers across 55 economies, projected 170 million new jobs created by 2030 against 92 million displaced, a net gain of 78 million.

That net number sounds reassuring. It is less reassuring when you look at who gains and who loses. The fastest-growing roles are in AI development, cybersecurity, and sustainability. The fastest-declining roles are clerical and administrative. If you are a 23-year-old entering a back-office career in 2026, the aggregate net positive is cold comfort.
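The gap between the net figure and the lived experience can be made concrete with the WEF's own numbers. A back-of-envelope sketch, using only the projections cited above:

```python
# WEF 2025 Future of Jobs projections, 2025-2030 (millions of jobs)
created = 170
displaced = 92

net = created - displaced    # the reassuring headline number
churn = created + displaced  # workers whose roles change either way

print(f"Net new jobs:    {net}M")    # 78M
print(f"Gross job churn: {churn}M")  # 262M
# The net gain is less than a third of the gross churn: for every
# job the aggregate adds, more than three workers sit in a role
# that is either being created or being destroyed.
```

The net number and the churn number describe the same forecast, but they imply very different politics.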

What the Payroll Data Actually Shows

Perhaps the most telling evidence comes from Stanford's Digital Economy Lab. Economists Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen used high-frequency payroll data from ADP covering millions of American workers and found that since generative AI went mainstream in late 2022, early-career workers aged 22 to 25 in the most AI-exposed occupations experienced a 13% relative decline in employment. After controlling for firm-level hiring patterns, the figure rose to 16%. Employment for older, more experienced workers in the same fields remained stable or grew.

This pattern held even after excluding the tech sector, remote jobs, and computer-related occupations. It was not explained by pandemic-era overhiring corrections. A February 2026 update showed the gap had widened further with no sign of reversal. ADP's own research team confirmed the trend independently.
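It is worth being precise about what a "relative decline" means here: employment for young workers in exposed occupations fell relative to a comparison group, not necessarily in absolute terms. A toy illustration, with invented index values rather than the paper's actual data:

```python
# Hypothetical employment indices, normalized to 100 in late 2022.
# These values are invented to illustrate how a "relative decline"
# statistic works, not taken from the Brynjolfsson-Chandar-Chen paper.
young_exposed_2022, young_exposed_2025 = 100, 90
older_exposed_2022, older_exposed_2025 = 100, 103

young_growth = young_exposed_2025 / young_exposed_2022 - 1  # -10%
older_growth = older_exposed_2025 / older_exposed_2022 - 1  # +3%

relative_decline = young_growth - older_growth              # -13%
print(f"Relative decline: {relative_decline:.0%}")
```

A 13% relative decline is consistent with young workers merely stalling while their older colleagues grow, which is part of why the pattern can hide inside healthy-looking aggregate employment.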

This is a specific, measurable finding and it points to something that aggregate employment numbers can easily hide. The entry ramp to professional careers may be narrowing quietly, even while total employment looks healthy. It is worth thinking about what this means over a decade. If a generation of workers cannot get the early-career experience that leads to mid-career expertise, the long-run effect on human capital could be larger than the short-run job losses suggest.

Corporate Candor

The corporate sector has been surprisingly open about its intentions. Anthropic's CEO Dario Amodei said that nearly half of entry-level white-collar jobs in tech, finance, law, and consulting could be replaced or eliminated. Ford's CEO warned of similar proportions across white-collar roles. Salesforce eliminated 4,000 customer support positions citing agentic AI efficiency. Duolingo announced it would stop using human contractors for tasks AI could handle.

These are not predictions from futurists. They are decisions being made in real time by some of the largest employers on the planet. When the people writing the checks say the workforce is shrinking, the credibility bar is different from when a think tank says it might.

The Productivity Puzzle

Here is where the picture gets genuinely complicated. A March 2026 Goldman Sachs note found no meaningful relationship between AI adoption and economy-wide productivity at the aggregate level. But firms that measured AI impact on specific tasks reported a median gain of around 30%.

This gap recalls the Solow Paradox of the late 1980s, when Robert Solow observed that computers were everywhere except in the productivity statistics. It took over a decade for IT investments to show up in aggregate productivity data, because adoption, organizational redesign, and complementary investments take time.

Daron Acemoglu of MIT has argued that generative AI may produce a modest GDP increase of only 1.1 to 1.6% over the next decade. He estimates that only about 4.6% of tasks can be meaningfully impacted in the near term. Goldman Sachs has responded that Acemoglu's assumptions are based on current capabilities, which are advancing at a pace that makes static projections risky.
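The arithmetic behind a bound like Acemoglu's is close to Hulten's theorem: the aggregate gain is roughly the share of tasks affected times the average cost saving on those tasks. Running that backwards from the figures above, in a rough sketch that ignores capital deepening and second-order effects:

```python
# Acemoglu-style back-of-envelope: aggregate gain ≈ task share × per-task saving.
task_share = 0.046                          # share of tasks meaningfully impacted
gdp_gain_low, gdp_gain_high = 0.011, 0.016  # projected decade GDP gain range

# Implied average cost saving per affected task to hit those GDP numbers
implied_saving_low = gdp_gain_low / task_share
implied_saving_high = gdp_gain_high / task_share

print(f"Implied per-task saving: {implied_saving_low:.0%} to {implied_saving_high:.0%}")
# Roughly 24% to 35%: even the "modest" projection assumes AI cuts
# the cost of every affected task by a quarter to a third.
```

Seen this way, the disagreement is less about the economics than about the inputs: how many tasks, and how much saving per task.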

Both arguments have merit. Acemoglu is almost certainly right that naive extrapolation from demo capabilities to economy-wide adoption is a mistake. Goldman Sachs is right that assuming today's limitations persist for a decade is also a mistake. The truth is probably faster than Acemoglu expects and slower than Goldman Sachs hopes, with enormous variance across sectors and geographies. The honest answer is that no one knows the speed, and anyone claiming certainty in either direction is selling something.

The Concentration Problem

This is the part of the AI economy discussion that deserves more attention than it gets. The AI supply chain is already highly concentrated. Nvidia manufactures most of the chips. Amazon, Google, and Microsoft dominate the cloud infrastructure needed to train and run models. These same companies are among the leading developers of frontier AI systems. A paper by Tejas Narechania and Ganesh Sitaraman documented market power at every layer of the AI stack, from hardware to cloud to models to applications.

This structure creates a feedback loop. The companies with the most data and compute build the best models. The best models attract the most users. The most users generate the most data. The cycle repeats. In principle, open-source models and new entrants can break this loop. In practice, the capital requirements for frontier model training are so large that meaningful competition may require either substantial venture capital tolerance for losses or direct government funding.

Whether AI becomes a broadly shared productivity tool or a mechanism for extracting rents depends significantly on whether this concentration deepens or loosens over the next few years. If the differentiation between frontier models narrows, as Raghuram Rajan argued in Project Syndicate, competition may keep prices low and spread benefits widely. If a few platforms achieve lock-in, the opposite could follow.

Who Gets Hurt, and How

An April 2025 IMF working paper found that unlike previous automation waves, which hit middle-skilled workers hardest, AI displacement risks extend to higher-wage earners. But those same workers' tasks tend to be highly complementary with AI, meaning they can use the technology to become more productive rather than be replaced. The net effect may be a modest narrowing of wage inequality paired with a substantial widening of wealth inequality, since capital owners capture a disproportionate share of AI-generated returns.

There is also a gender dimension. OECD analysis shows that in high-income countries, jobs most vulnerable to AI task automation make up 9.6% of female employment, three times the 3.2% share for male jobs. In the United States specifically, 79% of employed women work in jobs at high risk of automation compared to 58% of men. Whether the AI transition deepens or narrows existing gender gaps depends heavily on whether reskilling programs reach the populations that need them.


Policy Responses and Their Limits

Universal basic income has moved from thought experiment to active testing. The Stanford Basic Income Lab counts over 160 UBI pilots across four decades. Sam Altman's OpenResearch pilot, providing $1,000 per month for three years, found only a 2% reduction in work, about 15 minutes less per day. The Stockton, California pilot found that recipients actually increased full-time employment relative to non-recipients. Finland's experiment increased trust in government. Canada's 1970s experiment measured an 8.5% reduction in hospitalizations.

These results challenge the intuition that cash transfers destroy work incentives. But the fiscal math remains difficult. U.S. federal revenue stood at approximately $4.9 trillion in 2024 against a GDP of about $29 trillion. Even a modest UBI program targeting displaced workers at subsistence levels could rival the cost of today's largest federal programs. Funding through automation taxes is theoretically possible but politically constrained.
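The fiscal constraint is easy to see with round numbers. A sketch using the revenue and GDP figures above, plus an assumed adult population of roughly 260 million; the benefit level matches the OpenResearch pilot, and the coverage scenarios are illustrative:

```python
# Back-of-envelope UBI cost against 2024 U.S. fiscal figures.
federal_revenue = 4.9e12  # approx. 2024 federal revenue, dollars
gdp = 29e12               # approx. 2024 GDP, dollars
adults = 260e6            # rough U.S. adult population (assumption)
monthly_benefit = 1000    # same level as the OpenResearch pilot

annual_cost = adults * monthly_benefit * 12
print(f"Universal cost:   ${annual_cost / 1e12:.1f}T per year")
print(f"Share of revenue: {annual_cost / federal_revenue:.0%}")
print(f"Share of GDP:     {annual_cost / gdp:.1%}")
# A fully universal benefit would consume well over half of current
# federal revenue; even covering only the most exposed tenth of
# adults would run on the order of $300 billion a year.
```

The numbers explain why every serious proposal ends up means-tested, phased, or paired with a new revenue source.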

Reskilling is the other policy pillar, but the WEF data suggest the gap is large: 63% of employers cite skills gaps as their primary barrier, six in ten workers may require training before 2027, and only about half currently have adequate access to it. Workers with AI skills earn approximately 25% more on average than those without.

What I Think Actually Matters

Most of the AI economy debate focuses on speed of displacement and size of GDP impact. These are important questions, but they may not be the most important ones.

The question that matters most is whether the economic gains from AI flow broadly or concentrate narrowly. Every previous general-purpose technology, from electricity to the internet, eventually created broad prosperity, but the "eventually" ranged from one generation to three, and the transition periods were marked by genuine suffering for specific communities. The difference with AI may be that the transition affects cognitive work, which is the category where most of the economic value in advanced economies currently sits. The communities affected are not regional manufacturing towns. They are entry-level professionals in every major city, which changes the political dynamics entirely.

The second question that matters is whether the entry ramp to skilled careers survives. If AI removes the tasks that junior lawyers, junior analysts, and junior developers used to learn on, the pipeline of experienced professionals could thin over time. This is a slow-moving problem that looks invisible in quarterly data but could reshape entire professions over a decade.

The third question is structural. Automated telephone exchanges were technically possible in the 1920s, yet the last human telephone operator in the United States was not replaced until the 1980s. Adoption lags matter. Organizations are slow, regulation is slower, and humans renegotiate their relationship with new tools over decades, not quarters. AI may be genuinely different in its speed of deployment because it requires no physical infrastructure, only software updates. Or it may hit the same organizational friction that every previous technology hit. Probably both, in different sectors, at different speeds.

The honest position is that AI's economic impact is real, measurable in specific labor markets, and concentrated in ways that aggregate statistics can obscure. It is not yet catastrophic, and it may never be, if the right institutional choices are made. Those choices involve competition policy, educational investment, fiscal design, and social insurance. They are not technical problems. They are political ones. And the window for making them well, rather than reactively, is open now but may not stay open indefinitely.