💾 Archived View for gemi.dev › gemini-mailing-list › 000139.gmi captured on 2023-12-28 at 15:41:48. Gemini links have been rewritten to link to archived content
⬅️ Previous capture (2023-11-04)
-=-=-=-=-=-=-
Howdy,

It's time to start thinking hard about client certificates.

A super quick introduction for the unaware: everyone is (hopefully!) familiar with the idea that when you make a TLS connection to a server, the server provides a certificate (basically a public key plus some metadata, signed - directly or via a chain - by a trusted third party). This is supposed to convince you that you've connected to who you think you have. What some may not know is that TLS allows clients to send a certificate to the server as well. This never (or almost never) happens on the web, where clients typically authenticate using a username and password inside a cookie-powered session.

I first learned about client certificates in the murky, distant past when the "semantic web" was a hot topic, in the context of the decentralised "Friend Of a Friend" social network idea. You can read about FOAF+SSL at https://www.w3.org/wiki/Foaf+ssl.

Gemini specs a lot of use for client certificates - partially because they're a nice tool for the job, partially because the design goal of maximising power-to-weight ratio means once you accept the weight of using TLS you'd better implement everything you can using TLS rather than adding yet another pile of concepts.

The current rough spec fairly clearly outlines two usage scenarios for client certificates.

One is where you want to restrict access to some Gemini resource to a limited set of clients, e.g. you want to be able to check a webcam at your home via Gemini from your office while at work, or via your phone, but you don't want to open up access to the whole world. So, you generate a self-signed certificate on your office computer or your phone, and manually add its fingerprint to a whitelist on your home server, and nobody else is allowed in. This is entirely analogous to the way many of you are probably familiar with of logging into a server over ssh using a private key. It works the same way and has the same pros/cons (e.g. can't be brute forced, is limited to devices with the right keys installed). This is pretty explicitly supported in Gemini via status code 62 to request such a certificate and 63 to reject one not on the whitelist. It is actually implemented at gemini://gemini.conman.org/, if you want to play around with this. I'm not sure there's much more to consider for this scenario. Because in a whitelisting scenario the server and client are typically both under some degree of control by the same person, people can handle expiry/renewal however they want and the spec should stay silent on this.

The second scenario involves transient client certificates. These are basically an alternative to cookie-powered sessions in HTTP(S). A client generates a self-signed certificate and uses it for some requests so that the server can recognise consecutive requests as coming from the same location, and use the certificate fingerprint as a key in a database to maintain state between requests. This allows, e.g. using a long series of status code 10 responses to basically "fill out a form", e.g. in signing a guest book there could be separate prompts for a name, email address or URL, plus guest book comment. You could actually build fairly extensive command-line applications using this paradigm (they'd suck in a graphical browser that kept popping up input windows, but in something like AV-98 the experience would be a lot smoother).

Anyway, the appeal of using client certificates for sessions is that:
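[Editor's note: the "fill out a form" workflow described above - a series of status 10 prompts, with the certificate fingerprint keying server-side state - could be sketched roughly as below. All names here (the session store, the handler function) are hypothetical, not taken from any real server.]

```python
import hashlib

# Hypothetical in-memory session store, keyed by the SHA-256
# fingerprint of the client's self-signed certificate.
sessions = {}

def fingerprint(cert_der):
    """Stable identifier derived from the certificate's DER bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def handle_request(cert_der, answer=None):
    """Walk a client through a two-step guest book 'form' using
    status 10 prompts. Returns a (status, meta) pair in the style
    of a Gemini response header; all state between requests lives
    only in `sessions`."""
    key = fingerprint(cert_der)
    state = sessions.setdefault(key, {"step": 0, "answers": []})
    if answer is not None:
        state["answers"].append(answer)
        state["step"] += 1
    if state["step"] == 0:
        return (10, "What is your name?")
    if state["step"] == 1:
        return (10, "Your guest book comment?")
    name, comment = state["answers"]
    return (20, "Thanks for signing, " + name + "!")
```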
Hi!

On Sat, May 23, 2020 at 02:56:38PM +0000, solderpunk wrote:

> Gemini specs a lot of use for client certificates - partially because
> they're a nice tool for the job, partially because the design goal of
> maximising power-to-weight ratio means once you accept the weight of
> using TLS you'd better implement everything you can using TLS rather
> than adding yet another pile of concepts.

This was the thing that stood out most on my first read-through of the spec! I particularly like the insistence "Transient certificates MUST NOT be generated automatically". I wasn't here for the earlier discussion but I assume this is to fend off servers making sessions purely for tracking purposes by default.

> Right now, AV-98 fills the "Subject" of transient certs with
> random unique values from Python's `uuid` module, because I seemed
> to encounter errors sending totally empty certificates to
> conman.org.

I've seen similar problems before with custom usage of X.509. It would be nice to use something smaller than UUID if possible just to keep cert size down - provided there is library support as you noted.

> There is a third scenario, which the spec does not explicitly discuss at
> all, but which is actually the most widely used scenario in Geminispace
> so far, which is the main reason that I want to kick off a discussion
> about this and change the spec if required. It's the idea of persistent
> identity (basically, a "user account") at a server which is not under
> the client's control.
> ...
> We could change 62 to specify that the META should be a plain-language
> message to users, which could disambiguate the scenario. Something
> else to consider here is that astrobotany uses the Common Name part of
> the certificate for the username. I like this idea a lot, but
> different applications may want different or additional user
> information, and using META to convey this information could work well.
I'm not a big fan of using certificate fields for anything meaningful - it's a side channel (surrounded by lots of other fields that people might be tempted to use) - and it slightly increases the complexity of integration between the TLS library and the client/server code.

When a server wants the client to use a non-disposable certificate there is an ambiguity. Is it just because they want the user to preserve their authentication for more than 24 hours for convenience? Or is it because the user will permanently lose access to their account if they ever lose that key? Much like a bitcoin wallet, the latter is technically a very tidy and secure solution but it absolutely sets up users to fail. People who are auto-generating this authentication in regular clients will lose their keys and be sad about it. By comparison, those who are setting up whitelisted keys out-of-band are likely more savvy and know what they're getting into.

With this in mind, my current opinion is that there should be no way for a server to request a non-disposable certificate. Where authentication is required, it should be done in-band via a password or username/password 10 responses as you noted, which is then associated on the server with the transient certificate. It then becomes the responsibility of clients to ask the user "how long do you want to stay authenticated to this website?" If it times out, they can simply repeat their authentication.

One argument against is that it encourages clients to choose very lax deletion policies to reduce friction. This may not be as big a concern as it is on the web, since users will presumably only have a session with a small fraction of the gemini sites they visit, so they are not picking up transient tracking sessions willy-nilly that need to be flushed out.

One final thought: do we need a way to encourage clients to have multiple certificates for the same server, depending on path? If I authenticate to a CGI application on a multi-user server I don't necessarily want my identity to be followed by everybody else's applications on that server.

Cheers,
Tom
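[Editor's note: the path-scoping idea raised here could look something like the following on the client side - a hypothetical sketch (all names invented) where certificates are registered per (host, path prefix) and the longest matching prefix wins, so an identity used with one CGI app is never sent to another app on the same host.]

```python
# Hypothetical client-side certificate store, scoped by (host, path
# prefix) rather than by host alone.
cert_store = {}  # (host, path_prefix) -> certificate name

def register_cert(host, path_prefix, cert_name):
    """Associate a named certificate with a path prefix on a host."""
    cert_store[(host, path_prefix)] = cert_name

def cert_for(host, path):
    """Return the certificate registered under the longest matching
    prefix, or None if no scoped identity applies to this request."""
    best = None
    for (h, prefix), name in cert_store.items():
        if h == host and path.startswith(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, name)
    return best[1] if best else None
```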
It was thus said that the Great Thomas Karpiniec once stated:

> Hi!
>
> On Sat, May 23, 2020 at 02:56:38PM +0000, solderpunk wrote:
> > Gemini specs a lot of use for client certificates - partially because
> > they're a nice tool for the job, partially because the design goal of
> > maximising power-to-weight ratio means once you accept the weight of
> > using TLS you'd better implement everything you can using TLS rather
> > than adding yet another pile of concepts.
>
> This was the thing that stood out most on my first read-through of the
> spec! I particularly like the insistence "Transient certificates MUST
> NOT be generated automatically". I wasn't here for the earlier
> discussion but I assume this is to fend off servers making sessions
> purely for tracking purposes by default.

There wasn't much of a discussion. Okay, a bit of history of this mess, which can be laid at the feet of me and solderpunk.

In early June of last year, solderpunk started the design of Gemini and sometime between the 16th and 21st, he had the initial protocol designed. On the 21st (19:18:37 -0400 to be exact) I started GLV-1.12556. I already had a framework for servers written, and a TLS wrapper for Lua, and I wanted to play around with TLS.

Now, the library I picked for this was LibreSSL, a hard fork of OpenSSL because of issues. The reason I picked LibreSSL was mainly for its inclusion of a higher-level library, libtls, which makes it easy for an application writer to use TLS correctly in an application. And if your client accepts the whole CA mechanism (for both servers and clients) then yes, it is *very* easy to use [1]. This is further reflected by some of the functions they defined:

	tls_config_insecure_noverifycert()
	tls_config_insecure_noverifyname()
	tls_config_insecure_noverifytime()

They definitely want to make sure you know that not verifying a certificate is bad. They also make it easy to use client certificates:

	tls_config_verify_client()
	tls_config_verify_client_optional()

So with all this, by midnight on the 22nd, I had a minimal Gemini server written, with client certificate support by the 24th. Since client certificates weren't a part of the original specification, I thought I would just go ahead and implement it to show it could be done [2]. To that end, I set up two end points that were (and still are) protected:

	gemini://gemini.conman.org/private/
	gemini://gemini.conman.org/conman-labs-private/

I set client certificates optional (else every request would require them and this would be checked by libtls), and for "/private/" all I require is that a certificate is sent (I don't even bother looking at it). For "/conman-labs-private/" I require a certificate I signed to be used (much like Astrobotany). My intent was to protect certain areas of a Gemini server with an access control mechanism, and using certificates was (in my mind) a no-brainer.

It was on July 9th that solderpunk decided he liked the idea of a client certificate for authentication and on August 15th, settled on the idea for good, and seems to have come to see temporary client certificates as some form of "cookie" the client controls by the 19th, because of the "/private/" area on my server not requiring any particular client certificate (just *a* client certificate). And from then, there wasn't much work on it until just recently. I was surprised and amused that Astrobotany exists, using client certificates as I envisioned them being used.

> > Right now, AV-98 fills the "Subject" of transient certs with
> > random unique values from Python's `uuid` module, because I seemed
> > to encounter errors sending totally empty certificates to
> > conman.org.
>
> I've seen similar problems before with custom usage of X.509. It would
> be nice to use something smaller than UUID if possible just to keep
> cert size down - provided there is library support as you noted.

I'm not aware of which fields are mandatory either, but certainly a short string like "anon" or "unknown" or "noydb" [3] would do. Or, you know, Noah Body or Abby Normal.

> I'm not a big fan of using certificate fields for anything meaningful
> - it's a side channel (surrounded by lots of other fields that people
> might be tempted to use) - and it slightly increases the complexity of
> integration between the TLS library and the client/server code.
>
> When a server wants the client to use a non-disposable certificate
> there is an ambiguity. Is it just because they want the user to
> preserve their authentication for more than 24 hours for convenience?

One scenario I envisioned was a Gemini server serving up sensitive material to known, authenticated users. How that authentication happens is beyond the scope of the Gemini protocol, but perhaps a companion way (or a "best practices" way) could be discussed.

> Or is it because the user will permanently lose access to their
> account if they ever lose that key?

Or if they get logged out and forget the password? That can happen now, so I don't think it's of much concern.

> With this in mind, my current opinion is that there should be no way
> for a server to request a non-disposable certificate.

I disagree. I might want to serve up documents to a select few, and I can control that by giving them a client certificate to use.

  -spc

[1]	https://github.com/spc476/libtls-examples

[2]	I've also played around with client certificates for the web. I wish they were used more often as they obviate the need for "logging in" (has certificate? User is logged in) and "logging out" (client just stops browsing). It gets difficult when you use multiple devices, and the UI around generating and using them is ... let's say it's "technical" and leave it at that.

[3]	None of your darned business.
On Sat, May 23, 2020 at 09:57:14PM -0400, Sean Conner wrote:

> > Or is it because the user will permanently lose access to their
> > account if they ever lose that key?
>
> Or if they get logged out and forget the password? That can happen
> now, so I don't think it's of much concern.

I don't think these are really the same. Even ignoring the many people who remember their passwords (against all good advice), there are well-established password managers, and it's likely that random gemini client X will require additional backup work to avoid losing its keys. Strong passwords can be easily written down somewhere safe and typed in; keys less so. It also means the only way to share an account between multiple devices is to copy the key material across, and coordinate renewals so that it is only done once.

> > With this in mind, my current opinion is that there should be no way
> > for a server to request a non-disposable certificate.
>
> I disagree. I might want to serve up documents to a select few, and I
> can control that by giving them a client certificate to use.

I think I made this statement broader than intended - it makes complete sense to support that use case the way it's working now. I didn't mean to say we should take that away.

What I meant was that in a regular browsing session, where a client and server meet each other for the first time, a server would not be able to rely on the client having a permanent certificate store. They may request (and get) a transient client certificate. They may encourage the user to hold onto it. But a server application must assume that the same person will need to reauthenticate with a new certificate at some point in the future. This would solve both the user-friendliness issue, and also the problem of client certificate renewal.

Looking at it another way, suppose it was possible for a long-term client key to be generated, negotiated and stored, in-band via the gemini protocol. My hope would be that most server applications would eschew this in favour of user/pass authentication over transient keys. However, if setting up a permanent client key became common practice (possibly because it's easier), I am worried that we would end up in a similar situation to cryptocurrencies, where only "hardcore" users manage their own key material and others rely on some sort of managed service to keep things safe but available to them.

Cheers,
Tom
On Sat, May 23, 2020 at 09:57:14PM -0400, Sean Conner wrote:

> > Or is it because the user will permanently lose access to their
> > account if they ever lose that key?
>
> Or if they get logged out and forget the password? That can happen
> now, so I don't think it's of much concern.

Apologies for the double-reply; another thought occurred to me.

Suppose that TLS just expects a parseable cert from the client, and the server application is only interested in the key type and fingerprint. Maybe expiry is not a significant concern? (After all, we don't normally expire passwords on the web.)

A client could implement this using random keys and a permanent keystore if it wanted. It could simply regenerate its own certificate based on the same key when it wants to. That sidesteps the validity problem.

Alternatively, a client could use a key derivation function based on a combination of a user-selected (hopefully high-quality) password and the domain of the server. You would then be able to reestablish your identity at any time knowing just the password. If the key derivation method was enshrined as a best practice, you could then take your passwords with you when you try out different clients that implemented it.

This would free applications from the main burden of my original proposal, which is having to add some sort of login/response system on top of the certificate. That is admittedly a hassle.

I forgot to mention, thank you for the history of how the current certificate handling came to be! Very interesting.

Cheers,
Tom
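[Editor's note: the key-derivation idea above can be sketched with PBKDF2 from Python's standard library. Everything here - the function name, the salt prefix, the iteration count - is illustrative rather than a proposed standard. The derived 32 bytes could seed a deterministic keypair (Ed25519, for instance, accepts a 32-byte seed), so the same password always reproduces the same per-domain identity.]

```python
import hashlib

def derive_identity_seed(password, domain, iterations=200_000):
    """Derive a stable 32-byte seed from a password and server domain.

    The domain is folded into the salt, so the same password yields a
    different identity on every server, and knowing the password is
    enough to reestablish the identity from any client.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode("utf-8"),
        ("gemini-client-cert:" + domain).encode("utf-8"),
        iterations,
        dklen=32,
    )
```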
Thomas Karpiniec <tkarpiniec at icloud.com> wrote:

> Where authentication is required, it should be done in-band via a
> password or username/password 10 responses as you noted, which is
> then associated on the server with the transient certificate.

Hello

It would be nice if we had a separate status code for password input, say 11. Simple clients could treat this as a 10, intermediate clients could hide user input behind asterisks, and advanced clients could ask to make a call to the password manager (set up in advance) or whatever other convenience system there might exist.

This has been mentioned before but I didn't want to dig through the archive again. Sorry for the sidetrack.

-- 
Katarina
On Sun, May 24, 2020 at 09:04:29AM +1000, Thomas Karpiniec wrote:

> This was the thing that stood out most on my first read-through of the
> spec! I particularly like the insistence "Transient certificates MUST
> NOT be generated automatically". I wasn't here for the earlier
> discussion but I assume this is to fend off servers making sessions
> purely for tracking purposes by default.

Right, transient sessions generated automatically and invisibly to the user are not really any better than cookies. Of course, even though the spec forbids it, there is a risk that clients start actually doing that. The fact that generating a certificate takes a small but noticeable amount of time means this would not make for a good user experience, which I hope is incentive against it.

> I've seen similar problems before with custom usage of X.509. It would
> be nice to use something smaller than UUID if possible just to keep
> cert size down - provided there is library support as you noted.

The cert size concern is a good one (the lack of connection reuse in Gemini means that TLS overhead is more of an issue than it otherwise would be; this is one nice thing about self-signed certs - they will typically be shorter as they won't have a long chain of intermediate signing certs). I think Sean's solution of having the spec require a single fixed nonce value is a good one. Having every cert identical with regard to subject/issuer prevents linking certs together just as well as having every one be unique.

> I'm not a big fan of using certificate fields for anything meaningful
> - it's a side channel (surrounded by lots of other fields that people
> might be tempted to use) - and it slightly increases the complexity of
> integration between the TLS library and the client/server code.

I guess I am seduced to some extent by the ideas in FOAF+SSL. If the cert can refer to some other resource (a FOAF profile in that case, but we wouldn't need to follow that) and that resource contains the cert fingerprint to validate the connection, then client certificates can - if and only if the user wants them to! - be vehicles for rich identities under full user control.

> When a server wants the client to use a non-disposable certificate
> there is an ambiguity. Is it just because they want the user to
> preserve their authentication for more than 24 hours for convenience?
> Or is it because the user will permanently lose access to their
> account if they ever lose that key?

The former, i.e. the idea is that, unlike a transient certificate (which has the semantics of "I don't really care who you are and am not at all interested in recognising you again next week, I just need to tie together a handful of separate requests *right now*" - although that idea can be destroyed if that handful of separate requests involves the client providing a username and password), a non-disposable certificate is supposed to be a persistent identity - what people think of as a "user account" on the web.

In the simplest possible implementation of this, the server just uses the cert hash as user ID and then, yes, we're in high-risk territory where if the user doesn't back up their keys and certs then a hard drive failure or theft of a laptop or similar scenarios result in lock out. It doesn't *need* to work that way, though. An application which users authenticate to this way could give users the option to associate an email address which could be used for the equivalent of "password recovery" - email the user a URL with a random unique token in it and tell them the first new certificate which visits that URL sometime in the next one hour will become linked to their account.

> With this in mind, my current opinion is that there should be no way
> for a server to request a non-disposable certificate. Where
> authentication is required, it should be done in-band via a password
> or username/password 10 responses as you noted, which is then
> associated on the server with the transient certificate. It then
> becomes the responsibility of clients to ask the user "how long do you
> want to stay authenticated to this website?" If it times out, they can
> simply repeat their authentication.

Hmm, interesting. I'll ponder this, thanks for your thoughts. My first reaction is that I'm reluctant to remove a dedicated mechanism for creating something which is unavoidably and non-negotiably very short lived. Maybe you didn't mean for that to happen, though?

> One final thought: do we need a way to encourage clients to have
> multiple certificates for the same server, depending on path? If I
> authenticate to a CGI application on a multi-user server I don't
> necessarily want my identity to be followed by everybody else's
> applications on that server.

Ah! I absolutely positively meant to bring this up in my original post to this thread. This is a very valid point. Right now, the spec (and my proof-of-concept implementations in AV-98) associates client certs with a domain. The presence of many multi-user servers in Geminispace (and for what it's worth, I like those, a lot, assuming the users constitute a community in some sense beyond sharing a server) means this is not necessarily going to work well. We could use the <META> content of status codes which request a cert to specify a path or range of paths for which the cert should be used.

I'm, unsurprisingly, extremely out of touch with modern web development: are cookies still strictly tied to domains or have they evolved some kind of path-specificity?

Cheers,
Solderpunk
On Sun, May 24, 2020 at 12:33:17PM +0200, Katarina Eriksson wrote:

> It would be nice if we had a separate status code for password input,
> say 11. Simple clients could treat this as a 10, intermediate clients
> could hide user input behind asterisks and advanced clients could ask
> to make a call to the password manager (set up in advance) or whatever
> other convenience system there might exist.
>
> This has been mentioned before but I didn't want to dig through the
> archive again. Sorry for the sidetrack.

Yes, I proposed precisely this a long time ago. It never gained much traction, but then it's only very useful on top of a client certificate and *they* are only just now starting to see use, so maybe it's not too surprising. I think I will add this to the spec. It's very little effort for clients to handle, and it degrades well enough in a client that treats 11 as 10. People will probably do the username/password thing anyway even without it, so we may as well make it possible to protect against shoulder surfing.

Cheers,
Solderpunk
It was thus said that the Great solderpunk once stated:

> I'm, unsurprisingly, extremely out of touch with modern web
> development: are cookies still strictly tied to domains or have they
> evolved some kind of path-specificity?

It's not path-specificity, but domain-specificity: a cookie *can* be shared between sub-domains of a domain.

	domain              cookies
	=====================================
	conman.org          alpha
	www.conman.org      alpha beta
	sub.www.conman.org  alpha beta gamma

The 'alpha' cookie will be sent to the domain and each subdomain, the 'beta' cookie will only be sent to the www subdomain, and 'gamma' will only be sent to the sub.www sub-subdomain.

A cookie can only be set on a domain, not a TLD, but this requires some explaining. If my site were under the UK, then:

	conman.co.uk          alpha
	www.conman.co.uk      alpha beta
	sub.www.conman.co.uk  alpha beta gamma

A cookie for 'co.uk' MUST be rejected by browsers, as 'co.uk' is considered a TLD (much like .org and .com). Yes, this means that every browser has to be aware of the domain rules for every country (and they can change over time). For the US, a domain is either a place name under a state (two-letter code) or (and this changed several years back) any domain (other than the state ones) under the .us domain:

	nyc.ny.us  -- VALID for cookies
	acme.us    -- VALID for cookies
	ny.us      -- INVALID for cookies

If you think this is insane, it is.

  -spc (Kind of wish I was making this up)
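[Editor's note: the subdomain rule above - minus the public-suffix/TLD rejection described, which real browsers handle via a curated list - boils down to a short check. A hypothetical sketch:]

```python
def domain_matches(request_host, cookie_domain):
    """True if a cookie set for `cookie_domain` would be sent to
    `request_host`: either an exact match, or the requesting host
    is a subdomain of the cookie's domain. Deliberately omits the
    public-suffix check that rejects cookies set on e.g. 'co.uk'.
    """
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))
```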
Hey!

> thoughts about client certificates

First of all: I really love the idea of client certificates, especially for short-term session management - it's a nice idea!

I wanted to write a much longer, more detailed answer with deeper insight, but I don't think I'll find the time for that, so I'll just share my "main" concern/idea:

When I first read the idea of the persistent/long-term certificates, I didn't even come across the idea of using it for whitelisting. My first thought was: Nice, this makes some really good identity management for web forums/shops/chats/... It gives the client full control over their identity. I can use multiple client certificates for the same site to manage different identities.

What I imagined in a client was this: https://i.imgur.com/Ayh2sVx.png

When a server requests use of a client certificate, you get to choose one of many identities, maybe even share an identity between sites for collaborating services. You are always allowed to create new identities, and destroy old ones. </end-of-vision>

It didn't occur to me that certificates require a lifetime to be chosen, and now I'm thinking about how to solve this. The "easy" way would be to create certificates with a 150-year duration, and force the recovery strategies on the user. But as already discussed, this isn't practical, and losing the certificate and/or key would require some kind of account recovery strategy. E-mail recovery is a usual strategy common in the webspace, but I'm not a huge fan of that. Another possibility would be that the server gives the user a common secret that allows re-connection of an account to a new certificate, but there's the same problem of the lost identity.

Regards
- xq
On Sat, 23 May 2020, solderpunk wrote:

> I first learned about client certificates in the murky, distant past
> when the "semantic web" was a hot topic, in the context of the
> decentralised "Friend Of a Friend" social network idea. You can read
> about FOAF+SSL at https://www.w3.org/wiki/Foaf+ssl.

Though very skeptical about SSL itself, I have always had a soft spot for SSL client certificates. I first came across them at Zeus (we made an HTTP webserver that behaved rather differently from Apache; in particular, CGI was implemented via an SCGI-like mechanism. I loved CGI - in the 1990s I would check in at http://hoohoo.uiuc.ncsa.edu/CGI/ just to see if they'd updated the spec): not only did some of our customers use client certs, but we used them for authenticating email: if your cert had been signed by our CA and not revoked, the IMAP/SMTP magically worked for you, otherwise no.

I've tried writing a Gemini server (now looking pretty tidy: https://github.com/mk270/blizanci/blob/master/apps/blizanci/src/blizanci_gemini.erl ) but I reckon SSL client certs are going to be what stops me using Erlang and forces me over to Rust with the cool crowd.

The use case I care about is your first one: I want to make a set of documents available to clients who can present a certificate signed by a particular CA (e.g., one I control). I appreciate that under the current dispensation, the distribution of certificates or certificate-signing-requests is out-of-band, but in my use case, it's not clear to me how I'd go about implementing this. I'd have thought something like:

	C: connect to S, without presenting a client cert, and request /path/file
	S: 62 you need to present an authorised client cert; closes connection

	C: reconnect to S, presenting appropriate client cert, request /path/file
	S: 20 text/gemini [data follows]; closes connection

However, it's my possibly mistaken understanding that an SSL client will not present a cert to the server unless the server sends the CertificateRequest message first. Since the server doesn't know whether the client is going to ask for a restricted resource, it won't request a client cert. This seems to lead to a chicken-and-egg problem: to get access to the resource, the client must present its cert; to present the cert, the client must be asked to do so by the server; but the server doesn't know it should ask for the certificate before the client has said which resource it wants to access.

This only arises where there is a combination of restricted and unrestricted resources in the URL namespace on a server on a particular port. In the case where *all* the resources on the server are restricted, the server could unconditionally request a cert from each client that connects, and then allow or deny access to the resources on a per-URL, per-cert basis. What seems to be impossible is having a landing page, say, gemini://gemini.podunk.edu/ which has a link to gemini://gemini.podunk.edu/restricted on its main landing page. Maybe this is fine, or maybe I misunderstand how SSL works (I know things changed a little on the certificate request front in TLS 1.3).

Anyway, I think the ergonomics and patterns around how certificate signing requests get moved around are going to be a bigger problem.

Keep up the good work!

Mk

-- 
Martin Keegan, +44 7779 296469, @mk270, https://mk.ucant.org/
On Sun, 24 May 2020, Martin Keegan wrote:

> Anyway, I think the ergonomics and patterns around how certificate
> signing requests get moved around are going to be a bigger problem.

Oh, +1 on using the "Common Name" / CN field as a username.

Mk

-- 
Martin Keegan, +44 7779 296469, @mk270, https://mk.ucant.org/
On Sun, May 24, 2020 at 11:22:48PM +0100, Martin Keegan wrote:

> I appreciate that under the current dispensation, the distribution of
> certificates or certificate-signing-requests is out-of-band, but in my
> use case, it's not clear to me how I'd go about implementing this. I'd
> have thought something like:
>
> C: connect to S, without presenting a client cert and request /path/file
> S: 62 you need to present an authorised client cert; closes connection
>
> C: reconnect to S, presenting appropriate client cert, request /path/file
> S: 20 text/gemini [data follows]; closes connection

This is exactly the intended workflow, and is now supported somewhat smoothly in AV-98. Here is an example session. For the sake of making it clear what is happening, I have `set debug true` but deleted most of the debugging output, leaving only what is necessary to make it clear what is happening here:

	AV-98> go gemini://gemini.conman.org/conman-labs-private/
	[DEBUG] Response header: 62 Authorized Certicate Required.
	The site gemini.conman.org is requesting a client certificate.
	This will allow the site to recognise you across requests.
	What do you want to do?
	1. Give up.
	2. Generate new certificate and retry the request.
	3. Load previously generated certificate from file.
	4. Load certificate from file and retry the request.
	> 2
	What do you want to name this new certificate?
	Answering `mycert` will create `~/.av98/certs/mycert.crt` and `~/.av98/certs/mycert.key`
	conman
	Generating a RSA private key
	..............+++++
	.........................................+++++
	writing new private key to '/home/solderpunk/.av98/client_certs/conman.key'
	-----
	You are about to be asked to enter information that will be incorporated
	into your certificate request.
	What you are about to enter is what is called a Distinguished Name or a DN.
	There are quite a few fields but you can leave some blank
	For some fields there will be a default value,
	If you enter '.', the field will be left blank.
	-----
	Country Name (2 letter code) [AU]:.
	State or Province Name (full name) [Some-State]:.
	Locality Name (eg, city) []:.
	Organization Name (eg, company) [Internet Widgits Pty Ltd]:.
	Organizational Unit Name (eg, section) []:.
	Common Name (e.g. server FQDN or YOUR name) []:Let me in!
	Email Address []:.
	[DEBUG] Sending gemini://gemini.conman.org/conman-labs-private/<CRLF>
	[DEBUG] Response header: 63 Certificate Not Accepted.
	The server did not accept your certificate.
	You may need to e.g. coordinate with the admin to get your certificate
	fingerprint whitelisted.
	What do you want to do?
	1. Give up.
	2. Generate new certificate and retry the request.
	3. Load previously generated certificate from file.
	4. Load certificate from file and retry the request.
	> 1

> However, it's my possibly mistaken understanding that an SSL client
> will not present a cert to the server unless the server sends the
> CertificateRequest message first. Since the server doesn't know
> whether the client is going to ask for a restricted resource, it won't
> request a client cert. This seems to lead to a chicken-and-egg
> problem: to get access to the resource, the client must present its
> cert; to present the cert, the client must be asked to do so by the
> server; but the server doesn't know it should ask for the certificate
> before the client has said which resource it wants to access.

Hmm. Either you are mistaken, or you're correct but all the servers I've tested this against thus far (admittedly not many!) request a client cert unconditionally and don't complain when one is not provided. If you're right, I guess we need to spec this behaviour as required.

> Anyway, I think the ergonomics and patterns around how certificate
> signing requests get moved around are going to be a bigger problem.
I have to admit that I don't see a lot of point in using CSRs in this context. The whole point of a signed certificate is so that party A can prove to anybody who trusts party B that party B verified they are really party A. The certificate is for the benefit of third parties. If you are running a Gemini server and you want to use client certificates to restrict access to certain people, there is no third party in the picture. So why not just remember the fingerprint of certificates you've verified as belonging to people you want to grant access to? That way nobody has to send you a CSR and you don't have to send back a signed certificate. In a two-party scenario all that just seems like pointless busy work to me. Am I missing something? Cheers, Solderpunk
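The fingerprint-whitelist approach described above is simple to sketch. Below is a minimal illustration in Python (the helper name and the placeholder fingerprint are hypothetical, not taken from any real server): the server hashes the DER-encoded certificate received during the TLS handshake and checks the digest against a locally maintained whitelist, with no CA or CSR anywhere in the picture.

```python
import hashlib

# Hypothetical whitelist of approved client certs, stored as hex
# SHA-256 digests of the DER-encoded certificate. The entry below is
# just a placeholder value for illustration (it is sha256(b"test")).
ALLOWED_FINGERPRINTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_authorised(der_cert_bytes):
    # der_cert_bytes would come from something like
    # ssl_socket.getpeercert(binary_form=True) on the server side.
    fingerprint = hashlib.sha256(der_cert_bytes).hexdigest()
    return fingerprint in ALLOWED_FINGERPRINTS
```

The "coordinate with the admin" step then reduces to the client sending the admin one hex string out-of-band, much like pasting an ssh public key into authorized_keys.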
On Wed, 27 May 2020, solderpunk wrote: >> must present its cert; to present the cert, the client must be asked to do >> so by the server; but the server doesn't know it should ask for the >> certificate before the client has said which resource it wants to access. > > Hmm. Either you are mistaken, or you're correct but all the servers > I've tested this against thus far (admittedly not many!) request a > client cert unconditionally and don't complain when one is not provided. > If you're right, I guess we need to spec this behaviour as required. Thanks for taking the time to spell this out. I was largely mistaken, as far as I can tell from my own attempts to add client certs to my server implementation. I think what's going on is that since I last seriously looked into this circa 2001, client UIs now behave better when servers request a client cert without making it mandatory. >> Anyway, I think the ergonomics and patterns around how certificate signing >> requests get moved around are going to be a bigger problem. > > I have to admit that I don't see a lot of point in using CSRs in this > context. The whole point of a signed certificate is so that party A can Well, they are a potential implementation of the step you referred to as "You may need to e.g. coordinate with the admin to get your certificate" In my proposed pattern, the server trusts any cert which has been signed by some CA run by the server operator. The coordination with the admin to get such a cert is done via normal CSRs. I'd recommend having a look at how Scuttlebutt does its analogous step: "invite codes". See, e.g., https://github.com/ssbc/ssb-server/wiki/pub-servers https://ssbc.github.io/scuttlebutt-protocol-guide/ (invite codes section) https://handbook.scuttlebutt.nz/guides/pubs/create-an-invite ... which suggests that a layer on top of CSRs may also be useful. Anyway, best way to find out is to try it. > really party A. The certificate is for the benefit of third parties. 
> If you are running a Gemini server and you want to use client certificates to > restrict access to certain people, there is no third party in the > picture. So why not just remember the fingerprint of certificates you've > verified as belonging to people you want to grant access to? That way nobody > has to send you a CSR and you don't have to send back a signed certificate. > In a two-party scenario all that just seems like pointless busy work to > me. Am I missing something? Well, in the two party scenario, sending a CSR and sending a fingerprint seem to be pretty similar: the user's software submits the mystic runes to openssl and the result is pasted to the other party. In the CSR route, however, you do indeed need to save the other party's response (the cert). I would say that the CSR mechanism goes with the grain of how SSL is conventionally used, and thus is likely to have better existing library/docs support. My server sets $REMOTE_USER to the client cert's "Common Name" field, which is probably not what other people are doing, but which I think is more in the spirit of Gemini. If I get time I'll write up what the current practices of the other server software are. Mk -- Martin Keegan, +44 7779 296469, @mk270, https://mk.ucant.org/
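To illustrate the CN-as-username idea above: with Python's standard `ssl` module, a server that has verified a client certificate can pull the Common Name out of the dict returned by `getpeercert()` and expose it as $REMOTE_USER. This is only a sketch under that assumption; the helper name is invented and the example subject is fabricated.

```python
def common_name(peercert):
    """Extract the Common Name from an ssl.SSLSocket.getpeercert() dict.

    getpeercert() returns a dict shaped like
    {'subject': ((('commonName', '...'),), ...), ...} -- and only for
    certificates that passed verification; otherwise it is empty.
    """
    for rdn in (peercert or {}).get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

# A CGI-style server might then do something like:
#   os.environ["REMOTE_USER"] = common_name(sock.getpeercert()) or ""
```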
On Wed, May 27, 2020 at 07:37:02PM +0100, Martin Keegan wrote: > I'd recommend having a look at how Scuttlebutt does its analogous step: > "invite codes". See, e..g., > > https://github.com/ssbc/ssb-server/wiki/pub-servers > https://ssbc.github.io/scuttlebutt-protocol-guide/ (invite codes section) > https://handbook.scuttlebutt.nz/guides/pubs/create-an-invite > > ... which suggests that a layer on top of CSRs may also be useful. Anyway, > best way to find out is to try it. Thanks, I will have a read! > Well, in the two party scenario, sending a CSR and sending a fingerprint > seem to be pretty similar: the user's software submits the mystic runes to > openssl and the result is pasted to the other party. In the CSR route, > however, you do indeed need to save the other party's response (the cert). Sure, but in the fingerprint case the relevant runes just get sent as a side-effect of an ordinary TLS transaction. > I would say that the CSR mechanism goes with the grain of how SSL is > conventionally used, and thus is likely to have better existinglibrary/docs > support. Ah, now *this* is definitely true. At the start of this whole thing I brushed off a lot of people's concerns about the complexity of TLS by waving my hands and saying "there are library bindings to do this in every language". I did not realise that so many of those libraries would be so totally unable to handle anything even the slightest bit unconventional. It bothers me a bit now that fully implementing so many of the ideas that I thought would make TLS a little bit less of an imposition, or a little bit less offensive to radical decentralists, looks like it will be quite a pain in some cases. I never would have imagined it would be literally impossible for a server using Python's standard `ssl` module to accept a self-signed client certificate! Cheers, Solderpunk
> On May 27, 2020, at 20:58, solderpunk <solderpunk at SDF.ORG> wrote: > > I never would have > imagined it would be literally impossible for a server using Python's > standard `ssl` module to accept a self-signed client certificate! https://docs.python.org/3/library/ssl.html#ssl.CERT_REQUIRED
On Wed, May 27, 2020 at 11:07:47PM +0200, Petite Abeille wrote: > > On May 27, 2020, at 20:58, solderpunk <solderpunk at SDF.ORG> wrote: > > > > I never would have > > imagined it would be literally impossible for a server using Python's > > standard `ssl` module to accept a self-signed client certificate! > > https://docs.python.org/3/library/ssl.html#ssl.CERT_REQUIRED Yes, precisely: > With server socket, this mode provides mandatory TLS client cert > authentication. A client certificate request is sent to the client and > the client must provide a valid and trusted certificate. Cheers, Solderpunk
> On May 27, 2020, at 23:11, solderpunk <solderpunk at SDF.ORG> wrote: > > Yes, precisely: SSL/TLS client certificate verification with Python v3.4+ SSLContext https://www.electricmonk.nl/log/2018/06/02/ssl-tls-client-certificate-verif ication-with-python-v3-4-sslcontext/
On Wed, May 27, 2020 at 11:13:09PM +0200, Petite Abeille wrote: > SSL/TLS client certificate verification with Python v3.4+ SSLContext > https://www.electricmonk.nl/log/2018/06/02/ssl-tls-client-certificate-ver ification-with-python-v3-4-sslcontext/ Okay, I stand very slightly corrected: a Python server using the standard library can accept a self-signed client certificate *if* that certificate (not just its fingerprint but the entire thing) is whitelisted in advance of the connection. But this is insufficient for almost all the applications we've discussed. It's of no use for the transient client certificate paradigm, in particular. Cheers, Solderpunk
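For concreteness, here is roughly what that workaround looks like with the standard library (a sketch, not a drop-in server; the function name and file paths are placeholders): under `CERT_REQUIRED`, the only way the `ssl` module will accept a self-signed client certificate is if the complete certificate has been preloaded with `load_verify_locations()`, so that it acts as its own trust root.

```python
import ssl

def make_server_context(certfile=None, keyfile=None, client_ca_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        # The server's own certificate and private key (placeholder paths).
        ctx.load_cert_chain(certfile, keyfile)
    # CERT_REQUIRED makes the server send a CertificateRequest and then
    # reject any client cert that does not chain to a loaded "CA" --
    # knowing the fingerprint alone is not enough for the stdlib.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if client_ca_file:
        # Workaround: whitelist the *entire* self-signed client cert in
        # advance, so the module treats it as its own trust root.
        ctx.load_verify_locations(cafile=client_ca_file)
    return ctx
```

Verification happens at handshake time, which is exactly why this cannot support transient certificates the server has never seen before.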
> On May 27, 2020, at 23:30, solderpunk <solderpunk at SDF.ORG> wrote: > > But this is insufficient for > almost all the applications we've discussed. It's of no use for the > transient client certificate paradigm, in particular. Bah. No python then. Still like the creative use of TLS.
On Sun, May 24, 2020 at 02:27:26PM +0000, solderpunk wrote: > On Sun, May 24, 2020 at 09:04:29AM +1000, Thomas Karpiniec wrote: > > > With this in mind, my current opinion is that there should be no way > > for a server to request a non-disposable certificate. Where > > authentication is required, it should be done in-band via a password > > or username/password 10 responses as you noted, which is then > > associated on the server with the transient certificate. It then > > becomes the responsibility of clients to ask the user "how long do you > > want to stay authenticated to this website?" If it times out, they can > > simply repeat their authentication. > > Hmm, interesting. I'll ponder this, thanks for your thoughts. My first > reaction is that that I'm reluctant to remove a dedicated mechanism for > creating something which is unavoidably and non-negotiably very short > lived. Maybe you didn't mean for that to happen, though? I really like the "put all control into the hands of the user" aspect of this design, I have to admit. And only having support for one kind of on-demand client certificate simplifies things. I just keep coming back to the thought that it's hard for the user to have all the information they need to make that decision. There's no way to stop people using certificate fingerprinting to do a bitcoin-esque tying of account identity to a private key. Some people might really think that's better than using in-band username and password (which is subject to weak passwords, brute-forcing, theft of password database, etc.). So in all likelihood both patterns will end up being used "in the wild". And unless I know in advance what's coming up after I generate a certificate, it's hard to know what to do. If I'm about to be asked to supply a username and password, I probably would have been happy with a short-lived certificate which gets deleted when I close my client. 
If I'm about to be told "okay, great, we've taken your cert fingerprint, this is now your ID, please back it up and be careful", I definitely want to pick something longer lived. This either requires different status codes for different authentication models so that clients can suggest sensible defaults and users can make informed decisions, *or* it requires good communication from app designers at some point in the "sign in/up" workflow before the client request comes along (and good understanding from the user). Cheers, Solderpunk
As some of you may have read at gemini://gemini.circumlunar.space/users/solderpunk/cornedbeef/the-mercury-protocol.gmi, I have been having a small semi-crisis-of-confidence regarding the apparently unavoidable complexity of speccing a robust and flexible mechanism for in-band authentication with client certificates. Thanks, by the way, to everybody who emailed me or made posts of their own in response to that post. I'm still committed to mandatory TLS in Gemini, as I have been since day one. And I still think client certificates are an under-appreciated and powerful tool for authentication. But I've also convinced myself that the transient certificate idea got specced mostly just because I was so pleased by the realisation that it was *possible* to use client certs that way, not because there was a clear motivation. So far nobody has used them for anything and it hasn't exactly ruined the experience. People have been building interesting interactive things without client certs so far. The most obvious and compelling use case for client certificates for me is for people to be able to put up private content for their own use (a private bookmarking or to-do app, for example), and that doesn't require anything complicated in Gemini at all, it can be done ssh style by whitelisting the fingerprint of a self-signed cert, or traditional TLS style by setting up your own CA. None of which is to say the other stuff needs to go, but I think it probably ought to be a lower priority than other considerations which affect searchability and accessibility of publicly available material, which is clearly more important - yes, less fun and interesting from a technogeek perspective, but more actually important. I'm going to keep thinking about this stuff, and I encourage people to share their thoughts and ideas and to experiment with what's specced in AV-98. 
But, whereas I previously thought this would be the part of the spec which saw the bulk of activity once the spec freeze wore off, I think maybe for now this should actually stay in the "experimental features for power users, subject to change" category while we focus on other stuff. Cheers, Solderpunk
On Thu, 28 May 2020, solderpunk wrote: > I have been having a small semi-crisis-of-confidence regarding the > apparently unavoidable complexity of speccing a robust and flexible > mechanism for in-band authentication with client certificates. Thanks, > by the way, to everybody who emailed me or made posts of their own in > response to that post. > that way, not because there was a clear motivation. So far nobody has > used them for anything and it hasn't exactly ruined the experience. > People have been building interesting interactive things without client > certs so far. The most obvious and compelling use case for client > certificates for me is for people to be able to put up private content > for their own use (a private bookmarking or to-do app, for example), and > that doesn't require anything complicated in Gemini at all, it can be > done ssh style by whitelisting the fingerprint of a self-signed cert, or > traditional TLS style by setting up your own CA. There is no need whatsoever for a crisis of confidence. I certainly have confidence in your approach to Gemini or I'd not have tried making a server in an uphill language like Erlang. The client certificate mechanism is unfamiliar rather than complex. The unfamiliarity will run into friction in terms of developer resistance and the limitations of existing code and documentation, but those are only two among many elements in the tradeoff. Given time, the limitations of SSL libraries will be better understood or obviated. Maybe the transient cert thing will take off; maybe it won't. Again, time will tell and it doesn't need to be resolved any time soon. I have a pretty clear vision for what I'd like to be able to do with Gemini: have a visually tasteful, minimalist, distraction-free reading experience for content that is trivial to publish and trivial to keep just among my friends, and I feel the ecosystem will be there in a few months if not weeks. 
Mk -- Martin Keegan, +44 7779 296469, @mk270, https://mk.ucant.org/
On Thu, May 28, 2020 at 07:28:04PM +0100, Martin Keegan wrote: > I have a pretty clear vision for what I'd like to be able to do with Gemini: > have a visually tasteful, minimalist, distraction-free reading experience > for content that is trivial to publish and trivial to keep just among my > friends, and I feel the ecosystem will be there in a few months if not > weeks. This is pretty much the "first class" application for Gemini, well-stated. It is clear that the protocol is capable of more, and where it's posible to pave a smooth path for doing more without interfering with exactly what you said or making things too complex I intend to do it. But that stuff should be thought of as "icing" on the cake that you described (although I'd add something about privacy in there too). Cheers, Solderpunk
It was thus said that the Great solderpunk once stated: > As some of you may have read at > gemini://gemini.circumlunar.space/users/solderpunk/cornedbeef/the-mercury-protocol.gmi, Ooh, I did not know this. Anyway, I just read it, and having written both a gopher server [1] and a Gemini server [2], I think I can answer the questions you posed. > How much more difficult is Gemini to implement than Mercury? This, I think will depend upon the TLS libraries used. It would take me less than five minutes to adapt my Gemini server to a Mercury server. At a minimum, I would just have to change local tls = require "org.conman.nfl.tls" local okay,err = tls.listen(addr,port,main,function(conf) -- options for TLS ... end) to local tcp = require "org.conman.nfl.tcp" local okay,err = tcp.listen(addr,port,main) and remove the authentication block from the configuration, and ta-daaah! I have Mercury running. But I was careful in my selection of TLS library and I specifically picked the one I used [3] because of the ease it made using TLS. For the record, both my gopher server and Gemini server are *very* similar in construction, and largely have feature-parity (sans TLS). I can't say for other TLS libraries---only the writers of other Gemini servers (or clients) can say for sure how much complexity was added due to TLS. And as we're finding out, the client certificate support is a bit of a mess, regardless of TLS library. > What are the things Gemini can do which Mercury cannot? TLS. And protection of an area of a Gemini site. > How much do we value those things? TLS is valued quite a bit from what I see. People are *still* experimenting with TLS and gopher. I don't have much else to say about your post, or this message. I think the ease (or not) of TLS is an interesting conversation on its own right. -spc [1] https://github.com/spc476/port70 [2] https://github.com/spc476/GLV-1.12556 [3] It was a pain to install, but only because: 1. it's a fork of OpenSSL 2. 
I didn't want to blow out my current installation of OpenSSL. It can be done though, and I should probably do a write up on it so others may have a chance of using GLV-1.12556, or even just know the joys of using libtls.