Time-stamp: <2020-11-16 15h49 UTC>
There are many protocols and systems which seek to solve the problem of some agents determining, perhaps with the certainty afforded by mathematical proof, that someone in particular said something. But upon reflection, this seems to be only half of the picture, and it occurred to me just recently to try to imagine what the other half might be. The title of this document is an attempt at summarising it.
In the real world (TM), after we share something with others we might wish to somehow withdraw that information. There are many reasons for this: sometimes we simply change our minds, sometimes better facts come to light, and sometimes we just don't want people to know any more. Of course, in the age of smartphones and mass surveillance nothing can ever really be withdrawn, but there is a rather reasonable compromise to be had.
If someone claims that someone else said something, but that person denies it and there's no /proof/ (in a stronger sense to be defined shortly) that they said it, then we have no reason to believe they did. That is to say, the reasonable compromise on withdrawing information is a system wherein refusing to acknowledge that something was said is indistinguishable from not having said it in the first place.
Of course, all of this talk about people saying things was just motivation; to really clarify the problem and identify solutions we should abstract away from tricky concepts like ``people'' and ``words''. Let us re-seat the problem in the context of agents which communicate and author information, together with a protocol for proving authorship of that information.
The first term we should clarify in this context is the notion of proof. The state of the art suggests the concession that a proof of authorship need only be statistical in nature: as long as one agent can prove to another that, with overwhelming probability, it is the author of a piece of information, we will consider the other agent to be completely convinced.
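To put a number on ``overwhelming probability'', here is a small illustrative calculation of my own (it assumes, as in typical challenge-response protocols of this kind, that a prover who does not actually know the secret can survive each independent round with probability at most 1/2):

    # Residual probability that a cheating prover survives every round of a
    # challenge-response proof with a per-round cheating bound of 1/2.
    def cheat_probability(rounds: int) -> float:
        return 2.0 ** -rounds

    for k in (10, 20, 40):
        print(f"{k:>2} rounds: {cheat_probability(k):.3e}")
    # 10 rounds: 9.766e-04
    # 20 rounds: 9.537e-07
    # 40 rounds: 9.095e-13 -- negligible for any practical purpose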
With this in mind we may then elaborate our `reasonable compromise' criterion from above into the following set of requirements:
For all agents A, B, C and information Z:
- if A authored Z, then A can prove as much to B, to B's complete conviction in the statistical sense above;
- if A did not author Z, then no agent can convince B that A did;
- nothing B learns in being convinced allows B in turn to convince C that A authored Z -- indeed, any agent can trivially ``forge'' a record in which any other agent appears to prove authorship of Z.
It's that last part that really ties everything together: without the ability to trivially ``forge'' statements on the part of another agent, the plausible deniability mechanism does not hold.
If this sounds like an interesting or useful thing, then you might be interested in Zero Knowledge Proofs (ZKPs). The Wikipedia article is rather well written and illuminating. It also points to the paper ``An Improved Protocol for Demonstrating Possession of Discrete Logarithms and Some Generalizations'' by Chaum, Evertse, and van de Graaf, which demonstrates a pleasingly understandable approach to implementing the abstract model above; a rough sketch in that spirit follows below. In summary, this problem is solvable -- but is it usefully solvable?
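The sketch below is my own simplification in the flavour of such discrete-logarithm protocols, not the construction from that paper; the parameters p and g and the round count are illustrative toys rather than vetted choices. The prover convinces a live verifier that it knows x with y = g^x (mod p), while the forge() function shows why a written transcript of the exchange convinces no third party: anyone can fabricate one for anyone's public key.

    import secrets

    p = 2**127 - 1   # a Mersenne prime; fine for illustration, not for deployment
    g = 3            # illustrative base

    def keygen():
        """The author's secret x and public value y = g^x mod p."""
        x = secrets.randbelow(p - 1)
        return x, pow(g, x, p)

    def prove(x, y, rounds=40):
        """Honest interactive proof; the verifier is simulated inline and picks
        each challenge bit only after seeing the corresponding commitment."""
        for _ in range(rounds):
            r = secrets.randbelow(p - 1)
            a = pow(g, r, p)                         # prover's commitment
            b = secrets.randbelow(2)                 # verifier's random challenge
            z = r if b == 0 else (r + x) % (p - 1)   # prover's response
            if pow(g, z, p) != (a if b == 0 else (a * y) % p):
                return False
        return True

    def forge(y, rounds=40):
        """Fabricate accepting transcripts (a, b, z) for ANY public key y,
        without knowing x, by choosing each challenge before its commitment."""
        transcript = []
        for _ in range(rounds):
            b = secrets.randbelow(2)
            z = secrets.randbelow(p - 1)
            a = pow(g, z, p) if b == 0 else (pow(g, z, p) * pow(y, p - 2, p)) % p
            transcript.append((a, b, z))
        return transcript

    def check_transcript(y, transcript):
        """The verification equations pass whether the transcript came from a
        real interaction or from forge()."""
        return all(pow(g, z, p) == (a if b == 0 else (a * y) % p)
                   for a, b, z in transcript)

    x, y = keygen()
    print(prove(x, y))                    # True: the real author convinces a live verifier
    print(check_transcript(y, forge(y)))  # True: but anyone can fabricate a `proof'

Soundness comes from the fact that a prover able to answer both challenges for the same commitment could compute x itself (x = z1 - z0 mod (p-1)), so each round halves a cheater's chances; deniability comes from the fact that choosing the challenge first makes forgery trivial, so only a participant who chose the challenges live has any reason to be convinced.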
Of course, in such a system we could refuse to prove authorship to people, and no-one would thereafter be able to tell with mathematical certainty whether we had in fact authored something in particular. But ... if we had originally proved authorship to many people and they all then claimed, with complete conviction and in their own words, that we had authored this information ... well, unfortunately the persuasiveness of mass collusion is not modelled well in this system. That is to say, unless the air is rife with unverifiable and contradictory claims which propagate en masse, rescinding authorship of one piece of information in isolation may always be cast as `suspicious' -- and unfortunately that would appear to be the real standard of truth.