Hey List!

While planning out the implementation of TOFU for Kristall, I noticed something weird. Quoting the spec:

----------------------------------------------------------------------
TOFU stands for "Trust On First Use" and is a public-key security model
similar to that used by OpenSSH. The first time a Gemini client
connects to a server, it accepts whatever certificate it is presented.
That certificate's fingerprint and expiry date are saved in a persistent
database (like the .known_hosts file for SSH), associated with the
server's hostname. On all subsequent connections to that hostname, the
received certificate's fingerprint is computed and compared to the one
in the database. If the certificate is not the one previously received,
but the previous certificate's expiry date has not passed, the user is
shown a warning, analogous to the one web browser users are shown when
receiving a certificate without a signature chain leading to a trusted CA.
----------------------------------------------------------------------

This means that trust in a server is lost as soon as a certificate expires, and a subsequent renewal of the certificate with the same private key is treated the same as visiting the host for the first time. But when I refresh my server certificate before it expires (which is recommended), we end up in a situation where the client no longer trusts the server (the same situation as when an attacker is doing a MITM attack), which I think is not a good situation.

My proposal for server certificates is the following: an endpoint stores the public key of the server's certificate as well as the host name. As long as this host continues to use the same identity (pub+privkey), it will be trusted. Certificates that aren't refreshed will produce an error in the client; a different pubkey being presented will error "harder" (as in: this is a possible MITM attack).

The same behaviour could be used for client certificates as well, allowing the renewal of client certificates with the same private key (which would solve the "how do I have persistent identities" question).

What do you guys think?

Regards
- xq
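[To make the proposal concrete, here is a minimal client-side sketch in Go, keyed on the server's public key rather than the whole certificate. The names (knownHosts, checkServerKey) and the in-memory map standing in for the persistent database are illustrative assumptions, not code from Kristall or any other client.]

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "crypto/x509"
        "encoding/hex"
        "errors"
        "fmt"
        "time"
    )

    // knownHosts stands in for the persistent database: hostname -> hex SHA-256
    // of the pinned SubjectPublicKeyInfo.
    var knownHosts = map[string]string{}

    func checkServerKey(host string, cert *x509.Certificate) error {
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fp := hex.EncodeToString(sum[:])

        pinned, seen := knownHosts[host]
        if !seen {
            knownHosts[host] = fp // first use: trust and remember the key
            return nil
        }
        if pinned != fp {
            // The host's identity changed: this is the "error harder" case.
            return fmt.Errorf("public key for %s changed (possible MITM)", host)
        }
        if time.Now().After(cert.NotAfter) {
            // Same key, but the cert was never refreshed: a softer error.
            return errors.New("pinned key unchanged, but the certificate has expired")
        }
        return nil
    }

    func main() {
        // Verification is done by checkServerKey, not by the CA machinery.
        conf := &tls.Config{InsecureSkipVerify: true}
        conn, err := tls.Dial("tcp", "example.org:1965", conf)
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println(checkServerKey("example.org", conn.ConnectionState().PeerCertificates[0]))
    }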
It was thus said that the Great Felix Queißner once stated:

> My proposal for server certificates is the following: an endpoint stores
> the public key of the server's certificate as well as the host name. As
> long as this host continues to use the same identity (pub+privkey), it
> will be trusted. Certificates that aren't refreshed will produce an error
> in the client; a different pubkey being presented will error "harder"
> (as in: this is a possible MITM attack).
>
> The same behaviour could be used for client certificates as well,
> allowing the renewal of client certificates with the same private key
> (which would solve the "how do I have persistent identities" question).
>
> What do you guys think?

I think that's reasonable, as the client certificate for gemini.conman.org is set to expire in a few days (I run my own CA as an experiment).

-spc
On Sat, Jun 13, 2020 at 12:41:44AM +0200, Felix Queißner wrote:

> My proposal for server certificates is the following: an endpoint stores
> the public key of the server's certificate as well as the host name. As
> long as this host continues to use the same identity (pub+privkey), it
> will be trusted. Certificates that aren't refreshed will produce an error
> in the client; a different pubkey being presented will error "harder"
> (as in: this is a possible MITM attack).

I'm not sure this makes sense: if you expect me to believe that you will keep your private key secure from compromise for N days/months/years, then why not just send me a certificate which doesn't expire for N days/months/years?

But I could be jumping the gun on that. I haven't had any coffee yet, and there are a host of other reasons that certificates expire beyond concerns about key compromise. Some of them don't really transfer from CA-land to self-signed TOFU land. But some may... in which case some kind of "long keys, short certs" model might indeed make sense.

Cheers,
Solderpunk
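[For what it's worth, the "long keys, short certs" model is easy to automate: a cron job can re-issue a short-lived self-signed certificate from the same long-lived key. A rough Go sketch, assuming an ECDSA key and a 90-day lifetime purely for illustration.]

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // A real renewal script would load the existing long-lived key from disk;
        // one is generated here only to keep the example self-contained.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().Unix()),
            Subject:      pkix.Name{CommonName: "example.org"},
            DNSNames:     []string{"example.org"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(90 * 24 * time.Hour), // short-lived cert, long-lived key
        }

        // Self-signed: the template is its own parent, signed by the same key as last time.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }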
So I've realized that this will break a lot of workflows, I think. Many sites in Gemini-space just use their Let's Encrypt cert for their site, but the Let's Encrypt tool, certbot, generates a new private key when renewing[1]. As far as I can tell, this is standard practice (certbot or no), and so I don't think that storing only the public key for TOFU is a good idea.

1: https://github.com/certbot/certbot/issues/231

However, this doesn't solve the issue Felix presented:

> This means that trust in a server is lost as soon as a certificate expires,
> and a subsequent renewal of the certificate with the same private key is
> treated the same as visiting the host for the first time. But when I refresh
> my server certificate before it expires (which is recommended), we end up in
> a situation where the client no longer trusts the server (the same situation
> as when an attacker is doing a MITM attack), which I think is not a good
> situation.

I'm not sure what to do about this. Both options seem bad, and both will cause breakage. It seems that there is no good way to do TOFU with certs, unless you want to try and control how servers use certs, like specifying that keypairs should not change or something.

Solderpunk, I'd appreciate it if we could work towards some general solution for this, and official recommendations for how to handle TOFU and cert renewal.

makeworld
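[For contrast, this is roughly what the spec text quoted earlier pins: a fingerprint of the whole certificate together with its expiry date, so a renewed cert is a new identity whether or not the key changed. A small Go sketch with an illustrative tofuEntry type, not taken from any real client.]

    package tofu

    import (
        "crypto/sha256"
        "crypto/x509"
        "time"
    )

    // tofuEntry is an illustrative record of what the spec text pins: a fingerprint
    // of the whole certificate plus its expiry date. When the cert changes but the
    // old entry has expired, the change is treated as routine rather than alarming.
    type tofuEntry struct {
        Fingerprint [32]byte  // SHA-256 over cert.Raw, the full DER certificate
        Expiry      time.Time // cert.NotAfter
    }

    func entryFor(cert *x509.Certificate) tofuEntry {
        return tofuEntry{Fingerprint: sha256.Sum256(cert.Raw), Expiry: cert.NotAfter}
    }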
On Sun, Jun 14, 2020 at 11:09:05PM +0000, colecmac at protonmail.com wrote:

> Solderpunk, I'd appreciate it if we could work towards some general solution for this,
> and official recommendations for how to handle TOFU and cert renewal.

I would *love* to! And I have loads of ideas on this front. I've just never had the time to write anything substantial on them because there is always some more urgent matter popping up, like surprise auto-cookies or people wanting to add upload capabilities. If things ever settle down (tonight I will make official the spec changes I recently asked for feedback on and then freeze the thing again, perhaps that will help) we can tackle this.

> I'm not sure what to do about this. Both options seem bad, and both will cause
> breakage. It seems that there is no good way to do TOFU with certs, unless
> you want to try and control how servers use certs, like specifying that keypairs
> should not change or something.

I don't think that keeping the same keypair forever is a good idea, but I *do* think that "controlling how servers use certs" is. Without CAs in the picture it's trivial to automate cert changes, which makes this easy. I also think that pushing Gemini servers to use the smallest certs they can (i.e. not RSA) is a good idea to reduce TLS overhead, which is another reason for people to take control of their own certificate generation.

Quick sketches:

- Servers can sign their new cert with their previous private key. Then TOFU clients which accepted the previous cert can validate the changeover - and then immediately stop trusting the previous cert so that anybody who stole the private key can't sign their own new cert. Basically, when you accept a new cert you also grant it one-use-only, very-limited-scope CA powers.

- Servers can generate their new self-signed cert N months in advance and, for those N months, advertise the hash of the new cert at a well-known endpoint, access to which is secured by the current cert. TOFU clients can notice when an accepted cert is close to expiry and pre-fetch the future fingerprint.

- Servers can use DANE (RFC 6698) to advertise their self-signed cert over DNS, and TOFU clients can check this when receiving an unrecognised cert. LOTS of details to discuss here re: DNS security.

- We could build Perspectives/Convergence style "notary" servers that TOFU clients can consult when receiving an unrecognised cert. This was an idea that was developed before its time, IMHO. Today there is no reason that achieving broad network perspective requires trusted third parties and an effective "shadow infrastructure" alongside CAs. Just run your own certificate observatory on a dirt cheap VPS. Share it with friends, who share theirs with you. Pubnixes can run them for their users. Unlike some of the other ideas, this works just as nicely with CA-signed certs (like those from Let's Encrypt) as self-signed certs.

Cheers,
Solderpunk
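[As a rough illustration of the first sketch above (the new cert signed with the previous key), a TOFU client could check the changeover along these lines; the acceptRollover name is made up for the example.]

    package tofu

    import (
        "crypto/x509"
        "fmt"
    )

    // acceptRollover returns nil if newCert carries a valid signature made with the
    // key of prevCert (the certificate we pinned earlier). On success the caller
    // would re-pin newCert and stop trusting prevCert, as described above.
    func acceptRollover(prevCert, newCert *x509.Certificate) error {
        // CheckSignature verifies with prevCert's public key without requiring CA
        // basic constraints, which a plain self-signed Gemini cert usually lacks.
        err := prevCert.CheckSignature(newCert.SignatureAlgorithm, newCert.RawTBSCertificate, newCert.Signature)
        if err != nil {
            return fmt.Errorf("new certificate is not signed by the previously pinned key: %w", err)
        }
        return nil
    }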
> I would love to! And I have loads of ideas on this front.

Happy to hear it! Let's do this :)

> I do think that "controlling how servers use certs" is [a good idea]

This is probably the only way forward, but unfortunately it complicates things. It makes Gemini less simple, because people can't stick to what they know about certs, or just use the existing one they have for their domain. I guess we just have to try and get servers to support this and abstract it away.

> - Servers can sign their new cert with their previous private key.
> Then TOFU clients which accepted the previous cert can validate the
> changeover - and then immediately stop trusting the previous cert so
> that anybody who stole the private key can't sign their own new cert.
> Basically, when you accept a new cert you also grant it one-use-only,
> very-limited-scope CA powers.

BLoCkcHaiN style, nice ;)

This does mean that servers would have to serve up an ever-growing certificate chain, though? I think? Because otherwise how can a client verify that it was signed? I guess the servers only need to serve up two certs, the previous and current, but if I boot up my client after a year, then how does it know whether it has just missed some certs in between, or if there's a MITM attack?

> - Servers can generate their new self-signed cert N months in advance
> and, for those N months, advertise the hash of the new cert at a
> well-known endpoint, access to which is secured by the current cert.
> TOFU clients can notice when an accepted cert is close to expiry and
> pre-fetch the future fingerprint.

The problem is still: what if I miss a cert? Like if my client got cert 1 and the hash of cert 2, but by the time I come back online, that site is serving cert 3 and I don't know whether that's one I should trust or not.

> - Servers can use DANE (RFC 6698) to advertise their self-signed cert
> over DNS, and TOFU clients can check this when receiving an unrecognised
> cert. LOTS of details to discuss here re: DNS security.

DANE seems cool, I want to look into it more. But it will complicate things, and then there's DNSSEC, etc etc. I'm guessing it should be avoided for now.

> - We could build Perspectives/Convergence style "notary" servers that
> TOFU clients can consult when receiving an unrecognised cert.
> This was an idea that was developed before its time, IMHO. Today there
> is no reason that achieving broad network perspective requires trusted
> third parties and an effective "shadow infrastructure" alongside CAs.
> Just run your own certificate observatory on a dirt cheap VPS. Share it
> with friends, who share theirs with you. Pubnixes can run them for
> their users. Unlike some of the other ideas, this works just as nicely
> with CA-signed certs (like those from Let's Encrypt) as self-signed
> certs.

This seems cool, and I want to learn more. How is conflict resolution handled? Doesn't this need bootstrapping? It could be a good solution, but still will complicate the protocol a lot.

I feel somewhat unsure about the problems I raised here btw, please correct me if I've made any mistakes.

makeworld
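[On makeworld's point above about servers only needing to serve up two certs: in Go's crypto/tls, for example, a server could simply append the previous certificate to the chain it presents, so a client that still pins the old cert has something to compare against during a rollover. A sketch, with made-up PEM file names.]

    package main

    import (
        "crypto/tls"
        "encoding/pem"
        "log"
        "os"
    )

    func main() {
        current, err := tls.LoadX509KeyPair("current-cert.pem", "current-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        prevPEM, err := os.ReadFile("previous-cert.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(prevPEM)
        if block == nil {
            log.Fatal("previous-cert.pem: no PEM data found")
        }
        // Present the previous certificate after the current leaf, so a TOFU client
        // that still pins the old cert has something to compare against.
        current.Certificate = append(current.Certificate, block.Bytes)

        conf := &tls.Config{Certificates: []tls.Certificate{current}, MinVersion: tls.VersionTLS12}
        ln, err := tls.Listen("tcp", ":1965", conf)
        if err != nil {
            log.Fatal(err)
        }
        defer ln.Close()
        // ... accept and handle Gemini requests as usual ...
    }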
Sorry for letting this thread sit for a while!

On Mon, Jun 15, 2020 at 06:53:49PM +0000, colecmac at protonmail.com wrote:

> > I do think that "controlling how servers use certs" is [a good idea]
>
> This is probably the only way forward, but unfortunately it complicates things.
> It makes Gemini less simple, because people can't stick to what they
> know about certs, or just use the existing one they have for their domain.
> I guess we just have to try and get servers to support this and abstract it
> away.

Well, "different" and "less simple" aren't always the same. The automated cron-based approach of Let's Encrypt is *very* different to what people were used to before it came along, but uptake was swift - okay, in part because it was free, but also in part because it was actually easier. I think that anything which can be implemented as a cron job is feasible for widespread adoption. A cron job which does not communicate with any external machines is arguably even simpler than one which does.

> BLoCkcHaiN style, nice ;)
>
> This does mean that servers would have to serve up an ever-growing certificate
> chain, though? I think? Because otherwise how can a client verify that it was signed?

I hadn't imagined an ever-growing chain, that would soon add up to some pretty hefty overhead. I imagined just two or maybe three at most.

> I guess the servers only need to serve up two certs, the previous and current, but
> if I boot up my client after a year, then how does it know whether it has just missed
> some certs in between, or if there's a MITM attack?

It wouldn't!

Let me be clear: TOFU is a very simple security model. It's totally decentralised, totally decommercialised, involves no third parties beyond the server and client, and you can deploy it even on weirdo off-grid wifi meshnets that have no connection to the Real Internet whatsoever. It should go without saying that something like this is not going to give you 100% unconditional authentication of remote identities under all circumstances.

That doesn't mean it's rubbish. The CA model doesn't give you 100% unconditional authentication either (and it certainly doesn't look simple once you add in things like OCSP and CT to try and get it closer to that goal). In terms of its ability to protect everyday people from their greatest realistic privacy threats (things like passive, automated bulk surveillance by their ISP) compared to its implementation complexity, I think TOFU can be very worth using. But you do need to have realistic expectations: really strengthening it up to the point where it can address active, targeted attacks will necessarily involve adding more complexity, and this is the spirit in which I brought up all the ideas in this thread.

The role of TOFU-based TLS in Gemini is not to offer something equivalent to TLS on the web, so we can all comfortably send around our credit card numbers and make bank transfers in Geminispace even though criminals are actively trying to intercept us. It's to fix the glaring defect of Gopher whereby nobody would blame you for being reluctant to use Gopher to consume:

- serious political activism
- information about a locally-banned religion
- erotic literature
- health advice for stigmatised conditions
- counselling resources for abuse victims

because it would be trivial for your ISP to sell that information to marketing agencies or report you to people who will haul you off to the gulag (because if they don't, they're at risk of getting hauled off). Or because if you're using the open wifi network at a cafe or a public library or an airport, all the other patrons on that network will be able to see what you're reading. I believe that people who want/need to read the above should be able to read the above with some degree of protection, and Gopher lets them down on that point. I honestly think this keeps a lot of people who are fed up with all the web's problems from migrating into Gopherspace. At the same time, I believe that fixing this shouldn't require complicated and expensive private infrastructure: yes, Let's Encrypt is free for the end user and I'm a big fan, but it costs millions of dollars each year to run it, most of which comes from corporate sponsors and, ironically, some of their biggest sponsors are companies like Google and Facebook that make the money they donate by doing things that aren't good for privacy!

From this perspective, TOFU provides "good enough" security at a "cheap enough" price that I feel like it should be treated as a first class option in Geminispace, and that it's a viable option for a lot of (but maybe not all) Gemini servers. It's enough to stop ISPs and sleazy hotspot providers doing automated MITM attacks on all Gemini traffic, which they could do if we just accepted whatever certificate came down the line without any checks whatsoever - all it takes is one customer with a TOFU client on a machine which routinely moves between networks (say work and home, or home and the library, whatever) to reveal that this is happening and raise the alarm.

Thinking about comparatively simple extensions on top of basic TOFU which can add a little extra security is absolutely worth doing, and I encourage it, and that's the spirit in which I've proposed a lot of these things, like signing new certs with old keys, or pre-announcement of cert roll-overs. But I think it makes more sense to ask of these simple additional layers "do they add protection against some feasible attack on vanilla TOFU?" and not "are there still some scenarios in which this is vulnerable?", because the answer to the latter will always be "yes".

For the record, I would not recommend using Gemini for serious life and death stuff, unless perhaps you're in a situation where you can meet everybody involved face-to-face and confirm certificate fingerprints in an offline way.

> > - Servers can generate their new self-signed cert N months in advance
> > and, for those N months, advertise the hash of the new cert at a
> > well-known endpoint, access to which is secured by the current cert.
> > TOFU clients can notice when an accepted cert is close to expiry and
> > pre-fetch the future fingerprint.
>
> The problem is still: what if I miss a cert? Like if my client got cert 1 and
> the hash of cert 2, but by the time I come back online, that site is serving cert 3
> and I don't know whether that's one I should trust or not.

Same response as above, I guess. Both of these approaches work best for sites that you visit "regularly", where "regular" is relative to certificate lifetime. If you're only going to check in with somewhere once every few years and have no contact with the people involved in between, it's very hard to maintain trust without involving third parties.

> DANE seems cool, I want to look into it more. But it will complicate things, and then
> there's DNSSEC, etc etc. I'm guessing it should be avoided for now.

I was surprised at how many people in #gemini said the other day that they had DNSSEC working for their domains! But, yes, this is perhaps the trickiest add-on discussed, because automating it would require hooking into an API for updating DNS records, and there are many of those in use, so writing a cron-jobbable implementation of this approach which can be used by the majority of people is not straightforward. This might be something adopted by a relatively small number of servers who have some good reason to want to provide additional assurance to their visitors.

> > - We could build Perspectives/Convergence style "notary" servers that
> > TOFU clients can consult when receiving an unrecognised cert.
>
> This seems cool, and I want to learn more. How is conflict resolution handled?
> Doesn't this need bootstrapping? It could be a good solution, but still will
> complicate the protocol a lot.

I think this one is cool, too. :) I plan to code such an observatory (as a Gemini server itself, naturally!) one day. I like that it works well even for CA-signed certs, and that it requires nothing special on the part of the server admin.

Conflict resolution would, I imagine, be configurable at the client's end. You could set it up to raise a red flag unless every observatory polled had seen the cert in question, or you could accept a cert if more than N observatories had seen it, whatever you thought was sensible. I don't think bootstrapping is needed; observatories can check which cert they see for a domain fairly quickly when they first receive a query about it (and thereafter add it to a list to check on a recurring basis).

Regarding "complicating the protocol a lot", I certainly don't imagine speccing this or any of the other ideas here as required. I don't think consulting remote TLS observatories will be a mainstream thing the average Geminiaut does. It will probably mostly be a toy for privacy and decentralisation geeks, and perhaps something that people involved in serious activism might pick up once said geeks have gotten it working smoothly.

> I feel somewhat unsure about the problems I raised here btw, please correct me if
> I've made any mistakes.

I think everything you said, about possible shortcomings of my proposals, was factually correct! I think there was just a difference in expectations of what simple TOFU solutions can provide.

Cheers,
Solderpunk
Thanks for the well written response! Worth the wait :)

I see now that I have over-analyzed TOFU, thanks for pointing that out. I think having a mostly secure protocol that works without a centralized system is a good place to be in, even though reaching "full" security might be mostly unattainable. With that in mind, let me look back at those ideas again.

> - Servers can generate their new self-signed cert N months in advance
> and, for those N months, advertise the hash of the new cert at a
> well-known endpoint, access to which is secured by the current cert.
> TOFU clients can notice when an accepted cert is close to expiry and
> pre-fetch the future fingerprint.

This is the one I like the most, I think; it seems the simplest. Even simpler than the signing method, because servers don't need to serve multiple certs and increase the overhead, and clients don't even need to do key validation.

Whether this is specced (as an optional client behaviour) or not, I think the spirit of "mostly secure" suggests that at the very least, simple clients should look at cert hash and expiry, and not just the cert public key as Felix suggested in this thread originally. I think it'd be nice to see this suggestion in the Best Practices file, if you agree.

Thanks,
makeworld
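[A sketch of what the client side of the pre-announcement idea might look like, assuming - purely hypothetically - that the fingerprint of the next certificate is published as a text/plain resource at /.well-known/next-cert; no such endpoint is specced anywhere.]

    package tofu

    import (
        "bufio"
        "crypto/tls"
        "fmt"
        "io"
        "strings"
    )

    // fetchNextFingerprint asks the server for the hash of its upcoming certificate.
    // A real client would run its usual TOFU check on this connection first, since
    // the whole point is that the current (still pinned) cert secures the answer.
    func fetchNextFingerprint(host string) (string, error) {
        conn, err := tls.Dial("tcp", host+":1965", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            return "", err
        }
        defer conn.Close()

        fmt.Fprintf(conn, "gemini://%s/.well-known/next-cert\r\n", host)

        r := bufio.NewReader(conn)
        header, err := r.ReadString('\n') // e.g. "20 text/plain\r\n"
        if err != nil {
            return "", err
        }
        if !strings.HasPrefix(header, "20") {
            return "", fmt.Errorf("unexpected response: %s", strings.TrimSpace(header))
        }
        body, err := io.ReadAll(r) // assume the body is a single hex SHA-256 digest
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(body)), nil
    }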
On Fri, Jun 19, 2020 at 06:51:35PM +0000, colecmac at protonmail.com wrote:

> Whether this is specced (as an optional client behaviour) or not, I think
> the spirit of "mostly secure" suggests that at the very least, simple clients
> should look at cert hash and expiry, and not just the cert public key as Felix
> suggested in this thread originally. I think it'd be nice to see this suggestion
> in the Best Practices file, if you agree.

I want to set up an entirely separate document on TOFU practices! I don't want to rush into it, though. I am planning to read this paper over the weekend:

https://rp.delaat.net/2012-2013/p56/report.pdf

Feel free to join in!

Cheers,
Solderpunk
Oh okay, sounds good! Happy to hear your thoughts later. I'll check that document out.

Thanks,
makeworld
Two quick takeaways I made that I will add to Amfora:
Maybe some trusted Gemini servers could also offer a dedicated page with the known certificates of other Gemini servers. If all trusted servers report the same certificate, that gives a higher level of security. TOFU to get the bytes, and multiple trusted servers to validate the bytes?

freD.
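[freD's suggestion is essentially the notary/observatory idea from earlier in the thread. A toy Go sketch of the client-side decision, assuming a hypothetical queryObservatory function that asks one trusted server which fingerprint it currently sees for a host.]

    package tofu

    // fingerprintAgreed reports whether at least quorum of the given observatories
    // see the same fingerprint for host that we were just presented with.
    // queryObservatory is a stand-in for whatever protocol such servers would speak.
    func fingerprintAgreed(host, seenFp string, observatories []string, quorum int,
        queryObservatory func(observatory, host string) (string, error)) bool {
        agree := 0
        for _, obs := range observatories {
            fp, err := queryObservatory(obs, host)
            if err != nil {
                continue // an unreachable observatory neither confirms nor denies
            }
            if fp == seenFp {
                agree++
            }
        }
        return agree >= quorum
    }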