Nic’s Orb
#10

I want to go back to a comment you made about proof of personhood. In brief, you said that you are very bullish on it because it “solves the epistemic uncertainty that derives from the rise of cheap AI content[…]. The way we will achieve this is not by proving that some content is AI generated (this is impossible), but rather by committing to all content that we want to later prove is authentic. This function will be incorporated into your devices, so you will simply have the option to sign a photo with your biometrics when you take the pic, and to insert a hash of the photo on chain”. Can you elaborate on that? And can you address the obvious limitation where someone could use someone else’s key (by force, payment, or rental), possibly on a large scale, to publish content?

Nic's Response

We are in the post-truth era. But really, we’re in the pre-truth era.

So at this point we know we can’t solve the epistemic uncertainty introduced by AI by using “deepfake detectors”. The only way is via negativa, aka committing to everything that is real, and assuming that everything not committed to is fake. Already, social media is an epistemic warzone: we now have plenty of evidence of cheap fakes tricking people, but also of genuine information being discredited because people plausibly allege that it’s AI. So we’re in the very worst part of the cycle right now, where people haven’t fully recalibrated their credulity yet (because they are still operating under the assumption that fakes are costly to produce), and so AI content can arbitrage that gap. Soon enough, people will disbelieve all content online and assume it’s a priori fake. This isn’t good either, as no one will be able to convince anyone of anything. At that point, hopefully, we will begin to create additional layers of verification to address content authenticity. I’ll give a few examples of different media types and how I expect blockchains to assist in bringing about the era of verifiable content.

Personal content

So one problem people face these days is accusations that you did or didn’t do something, said something untoward, etc. If you are a high-profile person whose value depends on your reputation, this matters a lot. Currently, the way to prove that you didn’t do something, if accused of a crime, for instance, is to prove that you were somewhere else at the time, e.g. with surveillance footage. But with AI, the cost of creating arbitrary videos or audio of some third party is trivial, especially if there’s enough training data (which, for anyone that appears on podcasts, there is). This creates new problems. One product I expect to emerge is a personal verifier: a device that you wear that records video/audio 24/7, plus location data. This dataset isn’t published anywhere, but hashed (every 10 minutes, say), with the hashes registered on a blockchain. If someone later claims you did or said something on a given date, you can prove to them after the fact precisely what you did, and when, by selectively revealing a portion of the video or the transcript (plus location data). The first versions of this are the AI pendants like Tab or Rewind (https://medium.com/dare-to-be-better/ai-wearables-the-next-big-thing-a84ad82e4132). I actually preordered the Tab because I was so interested in the concept. I don’t know if they have plans to put the hashed transcripts onchain, but I’m going to ask them to. It means I’m effectively “committing” to everything I do and say, but I have the free choice to reveal it or not. It still puts all the power in the hands of the person. I do think this is a huge, immediately addressable use case.
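To make the commit-and-reveal pattern concrete, here’s a minimal sketch in Python. Everything here is illustrative, not any real product’s API: `publish_commitment` stands in for whatever on-chain write the device would actually perform, and the chunking and metadata format are assumptions.

```python
import hashlib
import time

def hash_chunk(chunk: bytes, location: str, timestamp: float) -> str:
    """Bind the recorded bytes to a time and place in a single digest."""
    payload = chunk + location.encode() + repr(timestamp).encode()
    return hashlib.sha256(payload).hexdigest()

def publish_commitment(digest: str, public_log: list) -> None:
    # Placeholder: a real device would post this digest in a blockchain tx.
    public_log.append({"digest": digest, "committed_at": time.time()})

def verify_reveal(chunk: bytes, location: str, timestamp: float,
                  public_log: list) -> bool:
    """Anyone can later check a selectively revealed segment against the log."""
    return any(entry["digest"] == hash_chunk(chunk, location, timestamp)
               for entry in public_log)

# The wearer commits to every 10-minute segment as it's recorded...
public_log = []
segment = b"...ten minutes of audio/video bytes..."
publish_commitment(hash_chunk(segment, "42.36,-71.06", 1700000000.0), public_log)

# ...and later reveals only the segments they choose to, which anyone can verify.
assert verify_reveal(segment, "42.36,-71.06", 1700000000.0, public_log)
```

The key property is that committing is cheap and unconditional, while revealing stays entirely at the wearer’s discretion.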

Text content in the press

Something I’ve been thinking about for a long time is simply the need for versioning with media enterprises (see https://twitter.com/nic__carter/status/1714664977414369613 and https://www.coindesk.com/tech/2020/07/13/version-control-can-help-the-media-win-back-reader-trust/). This isn’t an AI problem, it’s an internet problem. Media companies are now used to the ability to change their article content post-publication, to fix typos or completely revise stories in response to new events. But this introduces a new, bad paradigm of news. They are no longer required to “commit” to the articles that they write, allowing them to be much sloppier. And the practice of stealth editing (as we saw infamously with the NYT this week regarding the hospital attack in Gaza) infuriates readers and undermines trust. The obvious solution is to hash article text and upload it onchain when you publish. When you change something, simply upload a new hash. Thus, readers can get a sense of changes to articles as they occur.
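A minimal sketch of what that commitment scheme could look like, with the ledger modeled as a plain append-only list (all names here are illustrative):

```python
import hashlib
import time

def commit_version(ledger: list, url: str, body: str) -> str:
    """Publish a hash of the article text; repeat on every edit."""
    digest = hashlib.sha256(body.encode()).hexdigest()
    ledger.append({"url": url, "digest": digest, "at": time.time()})
    return digest

def edit_history(ledger: list, url: str) -> list:
    """Readers can see when an article changed, even if the publisher is silent."""
    return [entry for entry in ledger if entry["url"] == url]

ledger = []
commit_version(ledger, "example.com/hospital-story", "Original headline and text.")
commit_version(ledger, "example.com/hospital-story", "Quietly revised headline and text.")
print(len(edit_history(ledger, "example.com/hospital-story")))  # 2 entries: the stealth edit is visible
```

Note the hashes reveal that an edit happened and when, not what changed; a publisher competing on trust could also publish diffs alongside.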

Blockchains specifically fix this, and not centralized DBs, because we’re dealing with trust. Many have pointed me to Wikipedia’s own version control product and said it fixes this, but it doesn’t, because the whole problem is we don’t trust the NYT, or whomever, to faithfully administer a DB governing their own edits. We need the data to be on a public ledger for public scrutiny. This is such an easy win, but no legacy publisher will adopt it because it makes their lives more difficult and forces them to return to a pre-internet model of journalism. I expect newer players that are trying to compete on trust will be the ones that adopt this. One company that does this is called Wordproof (https://wordproof.com/).

Image content

The next thing is proving that images and videos aren’t fake or AI-derived. You can’t know this from looking at the image itself, especially as AI gets more sophisticated. As I said before, the only way is via negativa, by committing to everything that is true, and assuming everything else is fake. One way to do this would be to embed HSMs in smartphones and have the phone sign each photo. (And since you logged in with biometrics, the phone assumes it’s you taking the picture. Optionally, you could also re-authenticate for each photo or video.) Optionally uploading a hash on chain with location and identity metadata would allow you to tie a specific photo or video to a specific person, device, and time. That’s quite powerful, and while not perfect (people plausibly allege that HSMs are not bulletproof), it’s still a long way towards verifiable content. I do think this has to happen at the point of origin though. It doesn’t really work to have someone take an existing image and sign it and commit to it on chain. One of our portcos actually does this – Attestiv (https://attestiv.com/technology/#hfaq-post-3523) – and they’ve found traction in the insurance space (because insurers want to know that an image of, say, a fallen tree on a car, is recent and created by you). They use an app interface, so they issue a “challenge” to create the image, and then the user responds to that challenge and creates the image. So they know that it didn’t exist before a specific time.
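A minimal sketch of point-of-origin signing follows. A software Ed25519 key (via the `cryptography` library) stands in for the phone’s HSM here; on a real device the private key would never leave the secure element, and signing would be gated behind the biometric check:

```python
import hashlib
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-in for the key pair provisioned inside the phone's secure hardware.
device_key = ed25519.Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def sign_capture(photo: bytes, location: str) -> dict:
    """Sign the photo hash together with time and location metadata."""
    record = "|".join([hashlib.sha256(photo).hexdigest(), location, str(time.time())])
    return {"record": record, "sig": device_key.sign(record.encode())}

def verify_capture(attestation: dict, pub: ed25519.Ed25519PublicKey) -> bool:
    """Anyone holding the device's public key can check the attestation."""
    try:
        pub.verify(attestation["sig"], attestation["record"].encode())
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor bytes straight off the camera..."
attestation = sign_capture(photo, "40.71,-74.01")
assert verify_capture(attestation, device_pub)
```

The `record` string (or its hash) is what would optionally go on chain, anchoring the photo to a device and a point in time.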

The problem is manipulating an image, because people like to edit their images and video naturally. This is where ZK proofs come in, I think. This is still new terrain, but I think it’s possible to issue a proof that you created some image data, and applied a transformation to it (say, cropping, or changing levels or contrast, or compressed it, or changed the format from HEIC or RAW to JPEG), which corresponds to the final image file. This seems like a very hard technical problem, but not an impossible one.
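The relation such a proof would establish can be shown naively. In this sketch (using Pillow, with no actual ZK machinery) a verifier who holds the original simply re-runs the declared crop; the ZK version would establish the same equation without ever revealing the original:

```python
import hashlib
from PIL import Image

def digest(img: Image.Image) -> str:
    return hashlib.sha256(img.tobytes()).hexdigest()

# At capture time, the device commits to the original's hash.
original = Image.new("RGB", (200, 200), color=(30, 60, 90))
committed = digest(original)

# The publisher applies a declared transformation (here, a crop) and
# publishes the result along with the transformation parameters.
crop_box = (10, 10, 110, 110)
published = original.crop(crop_box)

# Statement to prove: "published = crop(original, crop_box), where
# hash(original) == committed." Checked directly here; a ZK proof would
# convince a verifier of this without showing them the original.
assert digest(original) == committed
assert digest(original.crop(crop_box)) == digest(published)
```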

One other problem is taking an image of another image. Let’s say you wanted to “prove” you had just witnessed a rocket strike a building, so you took a picture with your smartphone of your laptop screen displaying such an image, and represented it as your own. That’s solved with “liveness” challenges, in my view. Similar to how KYC firms require you to actually prove that you have a 3D face, rather than just uploading a static pic, your phone could require you to waggle it around a bit so it knows it’s taking a picture of a dynamic, 3D scene, rather than just a flat image.

Another problem is ensuring personhood. What we’re really proving here is that a specific device created an image at a specific time and location. That actually doesn’t guarantee that a human created the image, just that an image was created on the device. This is why biometrics are so important, in my view. A lot of hardware is going in this direction. You now even have smart guns that won’t fire unless the registered owner authenticates with a fingerprint. I think most cameras will eventually add some biometric verification too. Smartphones are the obvious place to start here since they already have it embedded.

Regarding the limitation that you mention, you could always adopt a protocol where you have to sign the image content twice over a 4-hour window or something, such that a point-in-time credential isn’t sufficient. If the device is lost and a malicious third party uses it to create content, the requirement that you re-authenticate with biometrics for each image presumably takes care of that.
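Such a rule is easy to state as verifier-side logic. A minimal sketch (signature verification itself is elided; `attestations` is assumed to hold already-verified, timestamped records):

```python
import time

MIN_GAP_SECONDS = 4 * 3600  # the "twice over 4 hours" window

def passes_two_phase_rule(attestations: list, content_digest: str) -> bool:
    """Accept content only if the same digest was attested to at two
    moments separated by at least the minimum window."""
    times = sorted(a["at"] for a in attestations if a["digest"] == content_digest)
    return len(times) >= 2 and (times[-1] - times[0]) >= MIN_GAP_SECONDS

now = time.time()
attestations = [
    {"digest": "abc123", "at": now},
    {"digest": "abc123", "at": now + 5 * 3600},  # re-signed 5 hours later
]
assert passes_two_phase_rule(attestations, "abc123")
assert not passes_two_phase_rule(attestations[:1], "abc123")  # one signature isn't enough
```

A coerced, bought, or rented key has to stay compromised for the whole window, which raises the cost of the large-scale attack the question describes.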

Summary

I think this is going to happen. There’s no real limitation, software- or hardware-wise; it’s just a question of desire. If I had to make a guess, I believe Apple will be one of the first to react and will create something like what I’m envisioning (optional image-metadata blockchain upload) within 12 months. The problem is simply too urgent. Eventually, I think people will look back on our current era of unsigned content and find it barbaric. So ironically, AI is going to push us through a primitive era of truth generation (based on the cost of creating fake content) into a more credible era, as we all start to actually sign and take responsibility for the content we create.