explainer · 9 min read

What Is C2PA, and Why It Will Quietly End AI Detection

Cryptographic content provenance is rolling out across Adobe, OpenAI, Sony, and the BBC. Here's how it works, who's signing what, and why detection tools should embrace it instead of fight it.

Every AI-detection company has a five-year-horizon problem they don't talk about. It's called C2PA — the Coalition for Content Provenance and Authenticity — and it will quietly make most current detection work obsolete. Here's why that's good news, even for a detection tool.

What C2PA actually is

C2PA is an open technical standard for embedding tamper-evident cryptographic signatures into media files. When a camera, AI generator, or editing tool produces an image, video, or audio file, it can sign that file with a private key. Anyone can verify the signature with the corresponding public key — and any subsequent edit either invalidates the signature or appends a new signed entry to a chain of custody.

The signature carries metadata: who created the file, with what tool, when, and a manifest describing the operations performed. It's the digital equivalent of a chain-of-custody form, but mathematical instead of bureaucratic.
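The chain-of-custody idea can be sketched in a few lines. This is a conceptual illustration only: real C2PA manifests are COSE-signed structures embedded in JUMBF boxes with X.509 certificates, not a bare hash chain. Here each manifest entry commits to the previous one, so any tampering breaks verification from that point on:

```python
import hashlib
import json

def make_entry(prev_hash, metadata):
    """Append a manifest entry bound to the previous one.
    (Sketch; real C2PA signs entries with certificates, not bare hashes.)"""
    body = json.dumps({"prev": prev_hash, "meta": metadata}, sort_keys=True)
    return {"body": body, "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(entries):
    """Walk the chain: each entry must hash correctly and point at its parent."""
    prev = None
    for e in entries:
        body = json.loads(e["body"])
        if body["prev"] != prev:
            return False  # broken link in the chain of custody
        if hashlib.sha256(e["body"].encode()).hexdigest() != e["hash"]:
            return False  # entry was edited after being recorded
        prev = e["hash"]
    return True

capture = make_entry(None, {"tool": "camera", "action": "capture"})
crop = make_entry(capture["hash"], {"tool": "editor", "action": "crop"})
assert verify_chain([capture, crop])

# Rewriting history invalidates the chain from the tampered entry onward.
capture["body"] = capture["body"].replace("capture", "generate")
assert not verify_chain([capture, crop])
```

The property that matters is the same one C2PA provides: you can't quietly rewrite an earlier step without the verification step noticing.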

Who's signing what

  • OpenAI signs DALL-E 3 and GPT-Image-1 outputs with C2PA "Content Credentials."
  • Adobe embeds Content Credentials in Firefly, Photoshop, and Lightroom exports.
  • Microsoft ships Content Credentials in Copilot and Designer.
  • Sony bakes signing into select Alpha-line cameras — journalists can prove a photo came from a specific physical camera at a specific moment.
  • Leica shipped the first consumer C2PA-signing camera (M11-P) and is rolling it across the line.
  • BBC, AP, AFP, Reuters are running newsroom-side pipelines that sign and verify content.
  • Truepic, Numbers Protocol, Verify Media provide C2PA infrastructure for enterprise.

Why this beats every detector

Detection is statistical inference: you observe artifacts and infer a cause. The error rate is bounded below by how well the generator can hide its artifacts — and that floor rises with every model release.

C2PA is cryptographic verification: you check a signature against a public key. There's no statistical error rate. The signature either verifies or it doesn't.

For content that carries a valid C2PA signature, no forensic detector can compete. The signature wins. End of story.
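The binary nature of that check is worth seeing concretely. The sketch below uses a symmetric HMAC as a stand-in for C2PA's public-key signatures (real manifests use X.509 certificates and COSE); the point it illustrates is that verification has no confidence score — the check passes or it doesn't:

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # HMAC stands in for a real public-key signature here.
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, sig: bytes) -> bool:
    # Constant-time comparison; returns a plain yes/no, never a probability.
    return hmac.compare_digest(sign(key, payload), sig)

key = b"issuer-key"
image = b"...image bytes..."
sig = sign(key, image)

assert verify(key, image, sig)            # untouched file: valid
assert not verify(key, image + b"x", sig) # any edit at all: invalid
```

Contrast that with a forensic detector, which can only ever return something like `0.94 probability AI-generated`.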

The catch — and the gap our tool fills

C2PA only works when content is signed. Most images on the open internet today aren't. The transition will take years, possibly a decade. During the gap, you still need forensic detection for:

  • Older AI content created before C2PA adoption.
  • AI tools that haven't adopted C2PA yet (most of the fast-moving open-source ecosystem).
  • Content that's been processed through pipelines that strip metadata (most social platforms still do this).
  • Content where the signature has been tampered with — the signature fails, but you still want a verdict.

The right architecture for an AI-detection product in 2026 is provenance-first, forensics-second. Read C2PA before doing anything else. If a valid signature is present, you're done — surface the signer, the timestamp, the edit history, and call it. If no signature, fall back to the forensic ensemble.
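That routing logic is simple enough to sketch end to end. Everything here is a hypothetical stand-in — `read_c2pa` and `run_forensic_ensemble` are stubbed so the sketch runs; in a real product you'd swap in an actual C2PA reader and your forensic models:

```python
def read_c2pa(data: bytes):
    """Stub C2PA reader: pretend files starting with b'SIGNED' carry a
    valid manifest. A real implementation would parse and verify the
    embedded Content Credentials."""
    if data.startswith(b"SIGNED"):
        return {"valid": True, "signer": "Example Camera Co.",
                "timestamp": "2026-01-01T00:00:00Z", "edits": []}
    return None

def run_forensic_ensemble(data: bytes) -> float:
    """Stub forensic ensemble: a real one returns a probability the
    content is AI-generated."""
    return 0.5

def analyze(data: bytes) -> dict:
    """Provenance-first, forensics-second."""
    manifest = read_c2pa(data)
    if manifest and manifest["valid"]:
        # Valid signature: surface signer, timestamp, edit history. Done.
        return {"verdict": "signed", "signer": manifest["signer"],
                "timestamp": manifest["timestamp"], "history": manifest["edits"]}
    # Unsigned (or tampered) content falls back to statistical detection.
    return {"verdict": "forensic", "score": run_forensic_ensemble(data)}

assert analyze(b"SIGNED...")["verdict"] == "signed"
assert analyze(b"raw pixels")["verdict"] == "forensic"
```

The key design choice is that the forensic path is the fallback, not the default: the ensemble only runs when cryptography has nothing to say.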

What this means for builders

If you're building in this space, two practical takeaways:

  1. Adopt C2PA on day one. The c2pa-js library reads and validates Content Credentials in the browser; the Rust crate (c2pa-rs) can both read and sign them. Both are open source under permissive licenses.
  2. Sell the workflow, not the model. Detection accuracy will degrade. Provenance verification doesn't. The product that survives is the one that gives buyers a clear, auditable workflow — verify signature, fall back to forensics, log the chain of evidence.
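The "log the chain of evidence" step in the workflow above can be as simple as an append-only record of what was checked and why. A minimal sketch, with illustrative field names that aren't part of any standard:

```python
import hashlib
import time

def log_verdict(log: list, file_bytes: bytes, verdict: dict) -> None:
    """Append an auditable record: content hash, timestamp, and the
    decision path taken. (Illustrative schema, not a standard.)"""
    log.append({
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "checked_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "verdict": verdict,
    })

audit_log: list = []
log_verdict(audit_log, b"raw pixels",
            {"path": "forensic", "score": 0.93})
log_verdict(audit_log, b"signed image",
            {"path": "signature", "signer": "Example Camera Co."})
```

Hashing the input means a buyer can later prove which exact file each verdict applied to, which is what makes the workflow auditable rather than just repeatable.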

The big-picture bet

In 2026 the question is "is this AI?" In 2030 the question will be "is this signed, and by whom?" Tools that frame themselves around the second question will outlast the ones obsessed with the first.

That's the bet behind this site: detection now, provenance always.