solderpunk solderpunk at SDF.ORG
Fri Jun 19 18:29:34 BST 2020
- - - - - - - - - - - - - - - - - - -
Sorry for letting this thread sit for a while!
On Mon, Jun 15, 2020 at 06:53:49PM +0000, colecmac at protonmail.com wrote:
I do think that "controlling how servers use certs" is [a good idea]
This is probably the only way forward, but unfortunately it complicates things.
It makes Gemini less simple, because people can't stick to what they
know about certs, or just use the existing one they have for their domain.
I guess we just have to try and get servers to support this and abstract it
away.
Well, "different" and "less simple" aren't always the same. Theautomated cron-based approach of Let's Encrypt is *very* different towhat people were used to before it came along, but uptake was swift -okay, in part because it was free, but also in part because it wasactually easier. I think that anything which can be implemented as acron job is feasible for widespread adoption. A cron job which does notcommunicate with any external machines is arguably even simpler than onewhich does.
- Servers can sign their new cert with their previous private key.
Then TOFU clients which accepted the previous cert can validate the
changeover - and then immediately stop trusting the previous cert so
that anybody who stole the private key can't sign their own new cert.
Basically, when you accept a new cert you also grant it one-use-only,
very-limited-scope CA powers.
BLoCkcHaiN style, nice ;)
This does mean that servers would have to serve up an ever-growing certificate
chain though? I think? Because otherwise how can a client verify that it was signed?
I hadn't imagined an ever-growing chain; that would soon add up to some pretty hefty overhead. I imagined just two or maybe three at most.
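For illustration, a TOFU client could validate such a changeover roughly like this - a sketch, again assuming the "cryptography" package, with the server presenting the old and new certs side by side:

```python
# verify_rollover.py - sketch: accept a new cert only if it was signed
# with the key of the cert we already pinned, then drop the old pin.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

def rollover_is_valid(pinned_pem: bytes, new_pem: bytes) -> bool:
    old = x509.load_pem_x509_certificate(pinned_pem)
    new = x509.load_pem_x509_certificate(new_pem)
    pub = old.public_key()
    try:
        if isinstance(pub, rsa.RSAPublicKey):
            pub.verify(new.signature, new.tbs_certificate_bytes,
                       padding.PKCS1v15(), new.signature_hash_algorithm)
        elif isinstance(pub, ec.EllipticCurvePublicKey):
            pub.verify(new.signature, new.tbs_certificate_bytes,
                       ec.ECDSA(new.signature_hash_algorithm))
        else:
            return False
    except Exception:
        return False
    # Success: pin the new cert and *stop* trusting the old one, so a
    # stolen old key can't be used to sign yet another cert later.
    return True
```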
I guess the servers only need to serve up two certs, the previous and current, but
if I boot up my client after a year, then how does it know whether it has just missed
some certs in between, or if there's an MITM attack?
It wouldn't!
Let me be clear: TOFU is a very simple security model. It's totally decentralised, totally decommercialised, involves no third parties beyond the server and client, and you can deploy it even on weirdo off-grid wifi meshnets that have no connection to the Real Internet whatsoever. It should go without saying that something like this is not going to give you 100% unconditional authentication of remote identities under all circumstances.
That doesn't mean it's rubbish. The CA model doesn't give you 100% unconditional authentication either (and it certainly doesn't look simple once you add in things like OCSP and CT to try and get it closer to that goal). In terms of its ability to protect everyday people from their greatest realistic privacy threats (things like passive, automated bulk surveillance by their ISP) compared to its implementation complexity, I think TOFU can be very worth using. But you do need to have realistic expectations: really strengthening it up to the point where it can address active, targeted attacks will necessarily involve adding more complexity, and this is the spirit in which I brought up all the ideas in this thread.
The role of TOFU-based TLS in Gemini is not to offer something equivalent to TLS on the web, so we can all comfortably send around our credit card numbers and make bank transfers in Geminispace even though criminals are actively trying to intercept us. It's to fix the glaring defect of Gopher whereby nobody would blame you for being reluctant to use Gopher to consume:
because it would be trivial for your ISP to sell that information to marketing agencies or report you to people who will haul you off to the gulag (because if they don't, *they're* at risk of getting hauled off). Or because if you're using the open wifi network at a cafe or a public library or an airport, all the other patrons on that network will be able to see what you're reading. I believe that people who want/need to read the above *should* be able to read the above with some degree of protection, and Gopher lets them down on that point. I honestly think this keeps a lot of people who are fed up with all the web's problems from migrating into Gopherspace. At the same time, I believe that fixing this *shouldn't* require complicated and expensive private infrastructure: Yes, Let's Encrypt is free for the end user and I'm a big fan, but it costs millions of dollars each year to run it, most of which comes from corporate sponsors and, ironically, some of their biggest sponsors are companies like Google and Facebook that make the money they donate by doing things that *aren't* good for privacy!
From this perspective, TOFU provides "good enough" security at a "cheap enough" price that I feel like it should be treated as a first-class option in Geminispace, and that it's a viable option for a lot of (but maybe not all) Gemini servers. It's enough to stop ISPs and sleazy hotspot providers doing automated MITM attacks on all Gemini traffic, which they could do if we just accepted whatever certificate came down the line without any checks whatsoever - all it takes is one customer with a TOFU client on a machine which routinely moves between networks (say work and home, or home and the library, whatever) to reveal that this is happening and raise the alarm.
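For reference, the vanilla TOFU check being discussed here really is tiny - something like the following sketch, where the one-line-per-host store format is just made up for illustration:

```python
# tofu.py - sketch of a vanilla TOFU check: pin a cert's SHA-256
# fingerprint on first contact, and flag any later mismatch.
import hashlib

def check_tofu(store_path: str, host: str, cert_der: bytes) -> str:
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pins = {}
    try:
        with open(store_path) as f:
            pins = dict(line.split() for line in f if line.strip())
    except FileNotFoundError:
        pass  # first run, empty store
    if host not in pins:
        pins[host] = fingerprint           # trust on first use
        with open(store_path, "w") as f:
            f.writelines(f"{h} {fp}\n" for h, fp in pins.items())
        return "new"
    return "ok" if pins[host] == fingerprint else "MISMATCH"
```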
Thinking about comparatively simple extensions on top of basic TOFU which can add a little extra security is absolutely worth doing; I encourage it, and that's the spirit in which I've proposed a lot of these things, like signing new certs with old keys, or pre-announcement of cert roll-overs. But I think it makes more sense to ask of these simple additional layers "do they add protection against some feasible attack on vanilla TOFU?" and not "are there still some scenarios in which this is vulnerable?", because the answer to the latter will always be "yes".
For the record, I would not recommend using Gemini for serious life-and-death stuff, unless perhaps you're in a situation where you can meet everybody involved face-to-face and confirm certificate fingerprints in an offline way.
- Servers can generate their new self-signed cert N months in advance
and, for those N months, advertise the hash of the new cert at a
well-known endpoint, access to which is secured by the current cert.
TOFU clients can notice when an accepted cert is close to expiry and
pre-fetch the future fingerprint.
The problem is still like what if I miss a cert? Like if my client got cert 1 and
the hash of cert 2, but by the time I come back online, that site is serving cert 3
and I don't know whether that's one I should trust or not.
Same response as above, I guess. Both of these approaches work best for sites that you visit "regularly", where "regular" is relative to certificate lifetime. If you're only going to check in with somewhere once every few years and have no contact with the people involved in between, it's very hard to maintain trust without involving third parties.
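The client side of the pre-announcement idea could look something like this sketch; the well-known path here is purely hypothetical, nothing like it has been specced:

```python
# prefetch.py - sketch: fetch the advertised hash of a server's *next*
# cert over a connection still protected by the *current* (pinned) cert.
import socket, ssl

def fetch_future_fingerprint(host: str, port: int = 1965) -> str:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # trust comes from the TOFU pin instead
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # ...the TOFU check on tls.getpeercert(binary_form=True)
            # would go here, before trusting anything we read...
            tls.sendall(f"gemini://{host}/.well-known/next-cert\r\n".encode())
            response = tls.makefile("rb").read().decode()
    header, _, body = response.partition("\r\n")
    if not header.startswith("20"):
        raise RuntimeError("unexpected response: " + header)
    return body.strip()  # hex SHA-256 of the upcoming cert, hypothetically
```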
DANE seems cool, I want to look into it more. But it will complicate things, and then
there's DNSSEC, etc etc. I'm guessing it should be avoided for now.
I was surprised at how many people in #gemini said the other day that they had DNSSEC working for their domains! But, yes, this is perhaps the trickiest add-on discussed, because automating it would require hooking into an API for updating DNS records, and there are many of those in use, so writing a cron-jobbable implementation of this approach which can be used by the majority of people is not straightforward. This might be something adopted by a relatively small number of servers who have some good reason to want to provide additional assurance to their visitors.
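For the curious, the lookup half of a DANE check is straightforward - a sketch assuming the third-party "dnspython" package, with the caveat that a real deployment also needs DNSSEC validation of the answer, which a plain stub query like this doesn't give you:

```python
# dane_check.py - sketch: compare a server's cert against a TLSA record
# published at _1965._tcp.<host>. DNSSEC validation is NOT handled here.
import hashlib
import dns.resolver

def tlsa_matches(host: str, cert_der: bytes, port: int = 1965) -> bool:
    answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")
    digest = hashlib.sha256(cert_der).digest()
    for rr in answers:
        # DANE-EE (3) / full cert (0) / SHA-256 (1): the simplest
        # combination that works for self-signed certs.
        if (rr.usage, rr.selector, rr.mtype) == (3, 0, 1) and rr.cert == digest:
            return True
    return False
```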
- We could build Perspectives/Convergence style "notary" servers that
TOFU clients can consult when receiving an unrecognised cert.
This was an idea that was developed before its time, IMHO. Today there
is no reason that achieving broad network perspective requires trusted
third parties and an effective "shadow infrastructure" alongside CAs.
Just run your own certificate observatory on a dirt cheap VPS. Share it
with friends, who share theirs with you. Pubnixes can run them for
their users. Unlike some of the other ideas, this works just as nicely
with CA-signed certs (like those from Let's Encrypt) as self-signed
certs.
This seems cool, and I want to learn more. How is conflict resolution handled?
Doesn't this need bootstrapping? It could be a good solution, but still will
complicate the protocol a lot.
I think this one is cool, too. :) I plan to code such an observatory (as a Gemini server itself, naturally!) one day. I like that it works well even for CA-signed certs, and that it requires nothing special on the part of the server admin.
Conflict resolution would, I imagine, be configurable at the client's end. You could set it up to raise a red flag unless every observatory polled had seen the cert in question, or you could accept a cert if more than N observatories had seen it, whatever you thought was sensible. I don't think bootstrapping is needed, observatories can check which cert they see for a domain fairly quickly when they first receive a query about it (and thereafter add it to a list to check on a recurring basis).
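The client-side policy could be as simple as this sketch, where the query function - how you actually ask an observatory which cert it sees for a host - is entirely hypothetical, since no such API exists yet:

```python
# notary_poll.py - sketch of threshold-based conflict resolution across
# certificate observatories. The observatory query API is hypothetical.
from typing import Callable

def cert_widely_seen(fingerprint: str, host: str,
                     observatories: list[str],
                     query: Callable[[str, str], str],
                     threshold: int) -> bool:
    agreeing = 0
    for obs in observatories:
        try:
            seen = query(obs, host)  # hypothetical: fingerprint obs reports
        except OSError:
            continue                 # unreachable observatory: no vote
        if seen == fingerprint:
            agreeing += 1
    return agreeing >= threshold
```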
Regarding "complicating the protocol a lot", I certainly don't imagine*speccing* this or any of the other ideas here as required. I don'tthink consulting remote TLS observatories will be a mainstream thingthe average Geminiaut does. It will probably mostly be a toy forprivacy and decentralisation geeks, and perhaps something that peopleinvolved in serious activism might pick up once said geeks have gottenit working smoothly.
I feel somewhat unsure about the problems I raised here btw, please correct me if
I've made any mistakes.
I think everything you said, about possible shortcomings of my proposals, was factually correct! I think there was just a difference in expectations of what simple TOFU solutions can provide.
Cheers,
Solderpunk