– by an anonymous author, November 2023
The Content Authenticity Initiative recently announced its first major deliverables: the Leica M11-P and updates to Sony cameras. The claim is that these cameras will usher in the end of misinformation online, finally letting us tell the difference between “real” images and fake ones.
The recent executive order from President Biden also includes numerous provisions regarding the development of content provenance systems as a means of protecting against misleading, AI-generated content.
Fake content – especially imagery, video, and audio – is everywhere online. The idea that we might be able to identify which content is real is very attractive.
The Leica camera uses a technology developed by the Coalition for Content Provenance and Authenticity (C2PA). Their specification seeks to build a system that establishes provenance for digital content.
Provenance is a concept from art history. Ownership records and supplementary documentation are used to trace an artwork back to the original artist, reassuring art buyers that their purchase is genuine.
C2PA aims to provide the same sort of documentary record for digital content. Evidence of provenance is attached to content using digital signatures. These digital signatures will identify who created the content and what they used to create it.
The digital provenance records created by C2PA identify the person who might have taken a photo. That could allow for establishing trust, but it is clearly difficult to find trustworthy digital credentials for every person who might take a photo, capture a video, or record audio.
Content provenance will not be based on trusting people, but on trusting the companies that make devices and software.
For a photo, C2PA cameras will include a special chip. This chip will hold a secret key that is authorized by its manufacturer. When the camera captures an image, this chip will be used to attach a digital signature to the image metadata. Anyone that receives that image can validate the signed metadata and know that it was captured using a camera made by that manufacturer.
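The capture-time flow described above can be sketched in a few lines. This is a hypothetical illustration only: real C2PA devices use asymmetric COSE signatures backed by manufacturer-issued certificates, and the key never leaves the secure chip. Here a symmetric HMAC stands in for the device signature purely to show the sign-then-verify pattern; all names are invented.

```python
import hashlib
import hmac
import json

# Assumption: a per-device secret provisioned by the manufacturer. In a real
# camera this would be an asymmetric private key held in a secure chip.
DEVICE_KEY = b"secret-key-burned-into-the-camera-chip"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind the metadata to the image contents with a signature."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(DEVICE_KEY, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_capture(image_bytes: bytes, claim: dict) -> bool:
    """A recipient checks that the image matches the signed metadata."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    if unsigned.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was altered after signing
    serialized = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

image = b"...raw sensor data..."
claim = sign_capture(image, {"camera": "ExampleCam", "iso": 200})
print(verify_capture(image, claim))                # True
print(verify_capture(image + b"tampered", claim))  # False
```

Note that verification succeeds only because the verifier trusts the key, and the key is trustworthy only if the device has not been modified – which is exactly the constraint discussed next.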
For this digital signature to mean anything, you can’t modify your camera, except in ways that the manufacturer authorizes.
If the only consequence of modifying your camera is that you can no longer claim a captured image is authentic, the number of people affected is likely very small. It would be worse if manufacturers took steps to prevent modification entirely – a very valid concern, but not the most pressing one.
The major concern comes from how C2PA handles editing software. The bulk of C2PA deals with how provenance is maintained during editing. After all, very few images are displayed online without some amount of editing, even if it is just resizing or adjusting contrast and color.
In C2PA, editing software also attaches a signature. That signature includes a reference to any original content used in producing the final product, so that provenance can be traced.
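One way to picture this chaining: each editing step produces a manifest that records the hash of the asset it produced plus references to the manifests of the content it was derived from. The sketch below uses invented field names and simplified structure – the actual C2PA manifest format is a signed CBOR/JUMBF structure with named "ingredient" assertions – but the back-pointer idea is the same.

```python
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    """Stable hash of a manifest, used to reference it from later steps."""
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

def make_manifest(asset: bytes, action: str, ingredients=()) -> dict:
    """Record what was produced, how, and which prior manifests fed into it.
    Field names here are illustrative, not the real C2PA schema."""
    return {
        "action": action,
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "ingredients": [manifest_digest(m) for m in ingredients],
    }

original = b"raw capture"
capture_manifest = make_manifest(original, "created")

edited = original + b" (resized)"
edit_manifest = make_manifest(edited, "resized", ingredients=[capture_manifest])

# A verifier can walk from the final image back to the original capture:
print(edit_manifest["ingredients"][0] == manifest_digest(capture_manifest))  # True
```

In the real system each manifest would also carry a signature from the editing software, which is where the trust requirements discussed below come in.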
Again, if signatures from editing software are to be authorized by the maker of that software, then the software will need to be free of unauthorized modifications.
Cloud services for content editing will easily be able to produce trustworthy signatures, because your control over that software is limited. Editing software that runs on your computer will need some way to protect the secret key from you. Otherwise, you might be able to use that key to generate false claims that are attributed to the software manufacturer.
The name for technology that prevents you from accessing information on your own computer is DRM.
Digital provenance can only enable verification if the entire content production process is controlled. Its trustworthiness relies on centralizing trust in the manufacturers of capture devices and editing software.
By its nature, tracing the provenance of content that has been edited will need to be manual. There is no way to provide strong and automated security assurances when original content is edited. This only increases the extent to which the system relies on controlling the systems involved in the editing process.
Very little independent research has been done to examine whether provenance systems help counter misinformation. In effect, C2PA speculates that provenance is useful.
It might be that provenance will help where authenticity of content is in dispute. In regular use, provenance will only be effective at countering misinformation if people can expect most content to have provenance metadata. Those looking to promote misinformation only have to avoid or remove C2PA metadata and – because having no metadata is normal – no one will notice.
If C2PA is to be effective, all content production will need to be brought under tighter control. That means DRM for software you run on your computer or more reliance on cloud services. That means eliminating choice in how content is captured and edited. For a system with questionable upside, that doesn’t seem like a good trade.