The internet has an asymmetry problem. The cost of producing a convincing falsehood — a deepfake video, a synthetic news article, a cloned voice — has dropped to near zero. The cost of verifying whether a piece of content is real has not dropped at all. This asymmetry is the defining structural problem of the information environment, and it is getting worse every month.
The Production Cost Problem
In 2020, creating a convincing deepfake video required specialized hardware, technical expertise, and hours of footage for training. By 2025, generating a synthetic video of a person saying words they never said requires a consumer laptop, a free open-source model, and a 30-second voice sample. The barrier to entry has collapsed.
The data reflects this. Deepfake content volume is growing at approximately 900% year-over-year. The number of deepfake files shared online rose from roughly 500,000 in 2023 to an estimated 8 million by 2025. In Q1 2025 alone, 179 verified deepfake incidents were reported — exceeding the total for all of 2024 by 19%. By late 2025, deepfake attacks were occurring at a rate of at least seven per day.
The distribution by media type: video deepfakes account for approximately 46% of synthetic content, images for 32%, and audio for 22%. Audio deepfakes — the cheapest and easiest to produce — are the fastest-growing category, because they require the least source material and are the hardest for humans to evaluate without visual context.
The Detection Failure
Human ability to identify deepfakes is poor and declining as generation quality improves.
A 2025 controlled study found that only 0.1% of participants could correctly distinguish between real and synthetic media across formats. For high-quality video deepfakes, human detection accuracy drops to 24.5%. For images, it hovers around 62%. For audio, participants reported 73% confidence in their judgments — but their actual accuracy was significantly lower.
This is the core of the problem. Humans are bad at detecting synthetic content and simultaneously overconfident in their ability to do so. A person who is 73% confident they can tell real from fake audio is a person who will share synthetic audio believing it to be genuine. The overconfidence is more dangerous than the inability.
Technological detection is not solving the problem at scale. The deepfake detection market is projected to reach $15.7 billion by 2026, growing at 28-42% annually. But when detection models move from controlled lab environments (where they perform well) to real-world conditions — with video compression, variable lighting, low resolution, and cross-platform re-encoding — their effectiveness drops by 45-50%.
Gartner projects that by 2026, 30% of enterprises will no longer consider standalone identity verification and authentication solutions to be reliable in isolation, specifically because of deepfake-driven threats. The detection layer is losing the arms race against the generation layer.
The Economic Damage
The financial cost is concrete and growing.
Deepfake-driven fraud attempts spiked 3,000% globally between 2022 and 2024, with North America alone seeing a 1,700% increase. In the first half of 2025, deepfake-related scams cost Americans approximately $547 million. Deepfakes now contribute to roughly 40% of all biometric fraud attempts worldwide. Nearly half of enterprises (49%) reported experiencing audio or video deepfake fraud by 2024.
The corporate vulnerability is direct. A fake earnings call audio from a CEO can move stock prices within minutes. A synthetic video of a CFO authorizing a wire transfer can drain accounts before anyone realizes the instruction was synthetic. The World Economic Forum has identified AI-generated disinformation as one of the top global risks, with estimated costs in the tens of billions annually when accounting for market manipulation, consumer fraud, and institutional damage.
Deepfakes create damage even when no deepfake exists. Once the public accepts that any audio or video could be synthetic, real evidence becomes dismissable. A politician caught on camera can claim the footage is AI-generated. A whistleblower's leaked recording can be dismissed as fabricated. This is the "liar's dividend" — the ability to cast doubt on genuine evidence by pointing to the existence of synthetic media. The damage to institutional trust is the same whether a specific deepfake is deployed or not.
The Political Dimension
The 2024 election cycle was the first to contend with deepfakes at scale. AI-generated robocalls impersonating President Biden discouraged voters from participating in the New Hampshire primary. Synthetic images of candidates circulated across social media platforms. Voice clones were used in targeted disinformation campaigns.
The worst-case scenario many anticipated, mass-scale AI-generated disinformation swaying election outcomes, did not materialize in 2024. But the infrastructure for future deployment was clearly demonstrated. The tools are available. The cost is low. The detection capacity is insufficient. The Brennan Center for Justice documented that the primary long-term threat is not any single deepfake but the cumulative erosion of trust in democratic institutions, media, and the electoral process itself.
Brookings Institution research added a nuance: casting real scandals as "misinformation" can sometimes increase voter support for the accused politician. The information environment is not just polluted with synthetic content — it has become a space where the concept of verification itself is weaponized.
The Infrastructure Response
The response to the internet of lies cannot be content-level fact-checking alone. Manual fact-checking takes hours per claim. AI generates thousands of synthetic artifacts per second. The ratio does not work. The response must be infrastructure-level.
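A back-of-the-envelope calculation makes the mismatch concrete. The rates below are illustrative assumptions, not figures reported in this piece:

```python
# Back-of-the-envelope throughput comparison between manual fact-checking
# and automated generation. Both rates are illustrative assumptions.
SECONDS_PER_FACT_CHECK = 2 * 3600   # one manual check taking ~2 hours
ARTIFACTS_PER_SECOND = 1_000        # assumed aggregate generation rate

backlog_growth = ARTIFACTS_PER_SECOND * SECONDS_PER_FACT_CHECK
print(f"While one claim is checked, {backlog_growth:,} new artifacts appear.")
# While one claim is checked, 7,200,000 new artifacts appear.
```

Whatever the exact numbers, the shape of the problem is the same: verification that scales linearly with human hours cannot keep pace with generation that scales with compute.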
Three systems are under construction.
1. Content Provenance (C2PA)
The Coalition for Content Provenance and Authenticity (C2PA) — backed by Adobe, Microsoft, Google, the BBC, Intel, and Arm — is building a technical standard for cryptographic content provenance. A photograph or video signed with C2PA metadata carries verifiable proof of its origin, creation date, capture device, and modification history. The chain of provenance is cryptographic: any alteration to the file invalidates the signature.
This does not prevent deepfakes from being created. It allows consumers and platforms to distinguish content with verified provenance from content without it. A news photograph signed by a Reuters camera with C2PA metadata is distinguishable from an AI-generated image with no provenance chain. The distinction is mathematical, not editorial.
A C2PA-enabled camera or software tool creates a signed "manifest" that is embedded in the media file. The manifest contains: (1) the identity of the device or software that created the content, (2) a timestamp, (3) a record of any edits made, and (4) a cryptographic signature linking all elements. When the file is shared, any recipient can verify the manifest against the signer's public key. Tampering with the content invalidates the signature. This is the media equivalent of HTTPS — not a guarantee of truth, but a guarantee of origin.
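A minimal sketch of that signing-and-verification flow in Python, using Ed25519 signatures from the `cryptography` package. This is not the real C2PA wire format (the standard embeds COSE-signed JUMBF manifests inside the asset); the field names and helper functions here are illustrative:

```python
# Toy signed-manifest scheme in the spirit of C2PA: provenance claims are
# bound to the file's hash and covered by a signature, so tampering with
# either the media or the claims invalidates verification.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def create_manifest(media_bytes: bytes, device_id: str, timestamp: str,
                    edits: list[str], key: Ed25519PrivateKey) -> dict:
    """Build and sign a manifest binding provenance claims to the content hash."""
    claims = {
        "device": device_id,
        "timestamp": timestamp,
        "edits": edits,
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "signature": key.sign(payload).hex()}

def verify_manifest(media_bytes: bytes, manifest: dict, public_key) -> bool:
    """Recompute the hash and check the signature; any alteration fails."""
    claims = manifest["claims"]
    if claims["content_hash"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media bytes were altered after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # claims were altered, or signed by a different key

# Usage: sign at capture time, verify at consumption time.
key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest = create_manifest(photo, "camera-001", "2025-06-01T12:00:00Z", [], key)
assert verify_manifest(photo, manifest, key.public_key())
assert not verify_manifest(photo + b"tamper", manifest, key.public_key())
```

The design point worth noting: the signature covers the claims, and the claims include the content hash, so neither the file nor its provenance record can be edited independently without breaking the chain.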
2. Decentralized Identifiers (DIDs)
Decentralized Identifiers allow publishers to cryptographically sign their content with an identity they own — not one controlled by a platform. A journalist, news organization, or institution can attach a digital signature to every article, video, or broadcast. Anyone who consumes the content can verify the signature against the publisher's public key without contacting a central authority.
This creates an unforgeable link between creator and content. A fake article attributed to The New York Times can be instantly exposed by checking whether the content carries the Times' valid digital signature. The verification is mathematical and takes milliseconds.
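A sketch of that check, under the same illustrative assumptions as the previous example. Real DID methods such as did:web publish the key in a DID document fetched from a domain the publisher controls; the in-memory registry below is a stand-in for that resolution step:

```python
# Toy publisher-signature scheme: the publisher signs content with a key it
# owns, and readers verify attribution by resolving the publisher's DID.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

# Hypothetical registry standing in for DID resolution (e.g. did:web
# fetches the DID document over HTTPS from the publisher's own domain).
DID_REGISTRY: dict[str, Ed25519PublicKey] = {}

def resolve_did(did: str) -> Ed25519PublicKey:
    """Return the verification key listed in the publisher's DID document."""
    return DID_REGISTRY[did]

def publish(article: bytes, did: str, key: Ed25519PrivateKey) -> dict:
    """Publisher attaches a detached signature to every piece of content."""
    return {"did": did, "content": article, "signature": key.sign(article)}

def verify(signed: dict) -> bool:
    """Anyone can check attribution without contacting a central authority."""
    try:
        resolve_did(signed["did"]).verify(signed["signature"], signed["content"])
        return True
    except (InvalidSignature, KeyError):
        return False

# A forged article attributed to the publisher fails instantly.
key = Ed25519PrivateKey.generate()
DID_REGISTRY["did:web:example-news.org"] = key.public_key()
real = publish(b"Tonight's lead story...", "did:web:example-news.org", key)
fake = dict(real, content=b"Fabricated story...")
assert verify(real) and not verify(fake)
```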
3. AI-Mediated Verification
The only system that can verify content at the speed and scale of AI generation is another AI. Personal AI agents, the same general-purpose systems described in "The Text Field is the New Dashboard," can perform automated provenance checks, cross-reference claims against verified databases, and flag content that lacks cryptographic signatures or contains manipulation artifacts. These agents operate as verification intermediaries, sitting between the user and the information stream.
The pattern is identical to email spam filtering. Humans cannot manually evaluate every email for spam. Automated filters, trained on patterns and continuously updated, handle the volume. Content verification at internet scale requires the same architecture: automated systems performing continuous, real-time assessment, with human oversight for edge cases and policy decisions.
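A sketch of what that architecture might look like on the agent side. The ContentItem fields, labels, and threshold are illustrative assumptions, not any real product's API:

```python
# Agent-side verification pipeline mirroring the spam-filter architecture:
# cheap automated checks run on every item; only ambiguous cases escalate
# to a human. Checks, thresholds, and field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    media: bytes
    has_valid_manifest: bool       # result of a C2PA-style provenance check
    has_valid_publisher_sig: bool  # result of a DID-style attribution check
    artifact_score: float          # 0..1 from a manipulation-detector model

@dataclass
class Verdict:
    label: str                     # "verified" | "unverified" | "needs_review"
    reasons: list[str] = field(default_factory=list)

def assess(item: ContentItem, artifact_threshold: float = 0.8) -> Verdict:
    # Cheapest, strongest signal first: cryptographic credentials.
    if item.has_valid_manifest or item.has_valid_publisher_sig:
        return Verdict("verified", ["valid cryptographic credentials"])
    # No credentials: fall back to statistical detection, which is weaker
    # in the wild (the 45-50% degradation noted above), so hedge the label.
    if item.artifact_score >= artifact_threshold:
        return Verdict("unverified", ["no credentials", "manipulation artifacts"])
    return Verdict("needs_review", ["no credentials", "no strong artifacts"])

# Absence of credentials becomes a signal rather than a silent default.
print(assess(ContentItem(b"...", False, False, 0.92)).label)  # unverified
```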
What Changes and What Does Not
The internet of lies is not a temporary crisis caused by a specific technology. It is a structural consequence of the gap between production cost and verification cost. As long as creating synthetic content is cheaper than verifying it, the volume of false content will exceed the capacity to check it.
The infrastructure responses — C2PA, DIDs, AI verification — do not eliminate falsehood. They shift the default. In the current environment, all content arrives with equal credibility. In the provenance-enabled environment, content arrives with either verified credentials or without them. The absence of credentials becomes a signal. Over time, platforms, browsers, and AI agents can surface this distinction, allowing users to calibrate their trust accordingly.
The transition will not be fast. Camera manufacturers need to embed C2PA signing. Social media platforms need to preserve provenance metadata instead of stripping it during upload (as most currently do). Browser vendors need to surface verification indicators. And users need to learn what a provenance badge means, the same way they learned what a padlock icon in the address bar means.
The alternative is an information environment where nothing verifiable exists, where every image, every recording, every document is presumed synthetic until proven otherwise. That endpoint is not theoretical. It is the trajectory the data describes if no infrastructure intervention occurs.
The internet of lies is a structural problem with quantifiable dimensions: deepfake content growing 900% annually, 8 million synthetic files in circulation, 0.1% human detection accuracy, $547 million in US fraud losses in H1 2025 alone. The asymmetry between production cost (approaching zero) and verification cost (unchanged) is the root cause. Content-level fact-checking cannot scale. Infrastructure-level responses — C2PA for cryptographic provenance (backed by Adobe, Microsoft, Google, BBC), decentralized identifiers for unforgeable publisher signatures, and AI-mediated verification for real-time assessment — represent the only technically viable path. The goal is not to eliminate lies but to make truth verifiable at speed and scale.