On 21 December, gemini://thurk.org/ had the public key 5dDrQtsEsCyzK7/ZrSFr8unk8OXfgVBhzW0kup9fTaI=. Today, it is OqGA8UkjoW+oDCX5e3NdfH8q7wBCAlkoyBv/02BcG24=. The Lupa crawler protested. (Same thing for gemini://mozz.us.)

What should we do when a public key changes? Reject it? Accept it if the certificate is signed by a known CA? Ask this mailing list?

The security part of the current specification is quite vague. It says: "If the certificate is not the one previously received, but the previous certificate's expiry date has not passed, the user is shown a warning, analogous to the one web browser users are shown when receiving a certificate without a signature chain leading to a trusted CA." So, always accept and just log a warning, thus defeating all security? (Note that this requires storing the entire certificate, not just the public key, which means renewals by Let's Encrypt would break TOFU.)
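If a client pinned only the public key rather than the whole certificate, a renewal that keeps the same key pair would not trip the TOFU check (note that Let's Encrypt renewals generate a fresh key unless the server operator asks for key reuse). A minimal sketch of computing such a pin, assuming Python's standard ssl module and the pyca/cryptography package; the pin format here (base64 of the SHA-256 of the SubjectPublicKeyInfo) is one common convention, not something the Gemini spec mandates:

    import base64
    import hashlib
    import socket
    import ssl

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def spki_pin(host: str, port: int = 1965) -> str:
        """Fetch host's certificate and return a pin over its public key
        only, so a renewed certificate that keeps the same key pair
        still matches the stored pin."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # TOFU: pin, don't chain-validate
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

A client could store this pin on first contact and compare it on later visits, warning only when the key itself changes.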
Hello,

I have written about TOFU before; perhaps it can help? gemini://makeworld.gq/gemlog/2020-07-03-tofu-rec.gmi

This part of TOFU is not perfect and never will be. For user-facing clients like browsers, it comes down to showing users a warning. Crawlers like Lupa, I think, should ignore cert changes entirely. There's no way for a crawler to decide whether a change is legitimate or not, and a crawler doesn't need much privacy, as it's going to access everything anyway.

makeworld
I've said this before, and I feel more and more confident in this position: expiry dates in general, and CA-issued certs in particular, really do not mix well with TOFU. When a cert expires, a window for MitM attacks is opened. When this happens every 30-60 days, it becomes quite ridiculous. An SSH host key has no expiration date; neither should certificates in geminispace (or at the very least we shouldn't care about it).

I go even further and claim that neither Common Name nor Subject Alternative Names matter either. With a self-signed certificate these are as easily forged as any other field. I know a lot of people disagree with me here, but I have yet to see an argument that can convince me that CN, SAN, not-valid-before, or not-valid-after have any bearing on the security of the certificate, or give me as a user any information that helps me make a safe decision. All of these fields are crucial in a CA validation scheme, but only add a false sense of security in TOFU.

As for the specific question: a crawler has no way to make useful decisions about the security of the certificate. It should just not try.

Cheers,
ew0k

(Also: Ho ho ho! Merry Christmas, everyone!)
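To make the forgeability point concrete, here is a sketch, using the pyca/cryptography package, of minting a self-signed certificate that claims an arbitrary hostname and a ten-year validity; the hostname (borrowed from the thread) and the dates are purely illustrative. Under TOFU there is nothing that distinguishes these fields from an attacker's:

    from datetime import datetime, timedelta, timezone

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    # Anyone can mint a certificate claiming any name and any validity.
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "thurk.org")])
    now = datetime.now(timezone.utc)
    forged = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=3650))  # "valid" for a decade
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("thurk.org")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )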
> I know a lot of people disagree with me here, but I have yet to see an argument that can convince me that CN, SAN, not-valid-before or not-valid-after have any bearing on the security of the certificate or give me as a user any information that helps me make a safe decision. All of these fields are crucial in a CA validation scheme, but only add a false sense of security in TOFU.

When making self-signed certs for my domains, I make sure that all fields are correct and set the expiration date years into the future. If any of the fields don't match, I want that to be seen as an indication that something isn't right. This might be good enough to prevent a sloppy attempt at interception. Some admins are not as careful about their certs, but that's on them. As long as we're using X.509, the browser should inform the user of any discrepancies.

As for renewal of CA-signed certs: when receiving a new cert from the same CA, if the old cert is within 30 days of expiration, the browser could either silently accept it or show the user a notice along the lines of: "The old certificate is about to expire in 20 days; renewal in this time frame is expected. Do you want to continue?"

For crawlers like Lupa, sure, there's no need for any kind of cert validation, except for generating TLS-related statistics, which would be quite interesting.
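A sketch of that renewal heuristic in Python; the 30-day window is the one proposed above, while the field names and the three outcomes are illustrative choices, not anything from the spec:

    from datetime import datetime, timedelta, timezone

    RENEWAL_WINDOW = timedelta(days=30)  # window suggested above

    def tofu_decision(stored, presented, now=None):
        """Classify a presented certificate against the stored one.

        `stored` and `presented` are simple records with `fingerprint`,
        `issuer`, and `not_after` fields (names are illustrative).
        Returns "accept", "prompt", or "warn".
        """
        now = now or datetime.now(timezone.utc)
        if presented.fingerprint == stored.fingerprint:
            return "accept"  # same cert as before, nothing to decide
        if (presented.issuer == stored.issuer
                and stored.not_after - now <= RENEWAL_WINDOW):
            # New cert from the same issuer while the old one is about
            # to expire (or just has): renewal here is expected.
            return "prompt"  # or silently accept, per the text above
        return "warn"  # unexpected change: the classic TOFU alarm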
---