I was just reading this post. It is certainly interesting, and a possible use case immediately came to mind based on what I've been working on.
In the Misfin protocol spec, when checking a presented client certificate's validity, it is suggested to send an empty message to the mailbox associated with that certificate. Since the certificate hash is returned as the "meta" field upon successful delivery, the original server can then compare that hash with the hash of the certificate that has been presented.
I can see one glaring problem with this strategy if implementors get it wrong. Suppose two servers have never communicated with each other previously. A user on server A sends a message to another user on server B. Server B sends a blank message to server A to get the hash, and waits on that return value. Server A, not having seen the certificate it is now being presented with, does the same, and waits on that return value. Neither server gets the response it is waiting for. Worst case, they just keep sending blank messages back and forth in an endless loop until one of the admins notices their CPU is pegged and the logs are full.
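To make that concrete, here is a minimal sketch (Python, purely illustrative, not taken from the Misfin spec) of the guard a receiving server would need: an empty verification probe must never trigger a counter-probe. `send_probe` is a hypothetical callable standing in for the actual Misfin delivery.

```python
import hashlib

KNOWN_FINGERPRINTS: set[str] = set()   # certificate hashes we have already verified


def sha256_fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()


def handle_incoming(sender: str, presented_cert: bytes, body: str,
                    our_fingerprint: str, send_probe) -> str:
    """Handle one delivery. `send_probe(mailbox)` is a hypothetical callable that
    sends an empty message to `mailbox` and returns the remote "meta" field
    (i.e. that server's certificate hash)."""
    fp = sha256_fingerprint(presented_cert)
    # Treat an empty body as a verification probe. Crucially, a probe must NOT
    # trigger a counter-probe, otherwise two servers that have never met will
    # bounce blank messages back and forth forever.
    if body and fp not in KNOWN_FINGERPRINTS:
        if send_probe(sender) != fp:
            raise PermissionError("certificate does not match the sender's mailbox")
        KNOWN_FINGERPRINTS.add(fp)
    # ...store the message, etc...
    return our_fingerprint   # sent back as the "meta" field
```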
Anyway, even before getting to the actual implementation, I was already thinking there needs to be a better mechanism for getting a user's certificate hash. I'm not sure this is it, but it's funny that it popped up just as I was thinking about the issue.
2023-06-08 · 6 weeks ago
I don't know, I'm struggling to find anything in the proposal that isn't solved by traditional PKI certificate authorities and signed client certificates. It's kind of the exact problem they were designed to solve in the first place.
CAs don't need to be centralized; anyone can create their own CA and host a service to sign client certificates. So I generate a CSR and send it to id.gemlog.org to sign it, and then it becomes mozz@id.gemlog.org. Before signing, your server can verify the cert details (like ensuring the common name is unique, or verifying other certificate metadata like a backlink to the owner's gemini capsule).
The only challenge would be that you can't upload a CSR over gemini so you would need to use titan or some other protocol. But it feels weird to go down the path of reimplementing PKI to work around that single limitation.
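For what it's worth, the client side of that is already routine. A rough sketch with Python's `cryptography` package--the common name and the capsule backlink are placeholders, and getting the CSR to the signing service is exactly the missing piece noted above:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mozz")]))
    # Optional metadata the CA could verify before signing, e.g. a backlink
    # to the owner's capsule (placeholder URL):
    .add_extension(
        x509.SubjectAlternativeName(
            [x509.UniformResourceIdentifier("gemini://example.org/")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("request.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
with open("client.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
```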
I do quite like the ascii visual hash idea.
@mozz Good point about taking advantage of the standard PKI for this.
I can think of these advantages in Morgan's system:
I know per-server client certificates kind of solve this, but I shall quit using any server that will participate in this scheme.
Not a fan of this idea.
My cert has some fields associated with it (inside it?). I was assuming these were provided from the client to the server when the cert was used with a gemini site. Mine has a user name, an email address, and a domain name. I always assumed that a site could use that info if it wanted to. So Bubble could use my UID as the default handle for a new account, if it wanted to?
2023-06-09 · 6 weeks ago
I don't know much about the way certs work, but I assume they have a public and private key associated with them? It seems like I should be able to "sign" a regular gemini page with a certificate. For Bubble, we just trust the server to associate a cert with a username in a consistent way, using the backend software. But if there were a way I could create that association in an editor and append the signature to the end of the page, that would be cool. That also enables a use case where I make the home page on my capsule be signed by the N different certs that I want to publicly associate with. Crawlers could collect the associations and create a lookup site.
So that leaves us with the desire for some people to make their cert 'public' on (e.g.) Bubble so that people could match it up with the same cert that's used to sign another gemini page. All of this is optional, of course. But if my cert could have a gemini home page inside it that's signed with the cert itself, that would be pretty cool. Maybe I'm misunderstanding about the cert metadata being broadcast all the time. All I know is that Lagrange associates the metadata with the cert; it might be private to the client. (But it seems like a cool use case for something that can be visually verified. There must be some existing practice for public keys.)
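Something like that could work in principle. Here's a sketch using Python's `cryptography` package, assuming an RSA key for simplicity; the marker line is invented here, and no client or server supports anything like this today:

```python
import base64
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

MARKER = "-----GEMTEXT-SIGNATURE-----"   # invented convention


def sign_page(page_text: str, key_pem: bytes) -> str:
    """Sign the page with the cert's private key and append the signature."""
    key = serialization.load_pem_private_key(key_pem, password=None)
    sig = key.sign(page_text.encode(), padding.PKCS1v15(), hashes.SHA256())
    return f"{page_text}\n{MARKER}\n{base64.b64encode(sig).decode()}\n"


def verify_page(signed_text: str, cert_pem: bytes) -> bool:
    """Verify the appended signature against the certificate's public key."""
    body, sep, tail = signed_text.rpartition(f"\n{MARKER}\n")
    if not sep:
        return False                      # no signature block present
    pub = x509.load_pem_x509_certificate(cert_pem).public_key()
    try:
        pub.verify(base64.b64decode(tail.strip()), body.encode(),
                   padding.PKCS1v15(), hashes.SHA256())
        return True
    except (InvalidSignature, ValueError):
        return False
```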
Any time someone starts talking about servers using other servers to do stuff, I have a knee-jerk reaction like "that doesn't sound like the gemini I know". So anything centralized needs to solve a problem that can't be solved by a single gemsite on its own.
@emilis The problem being solved here is verifying that you have the "right" to a display name. That is, anyone could register an account called "emilis" and start pretending to be you. Can that be prevented in a way that preserves privacy?
Even if most people on Gemini never need this since they can just publish stuff on their own servers/domains, community capsules like Station and Bubble should have some sort of optional identity verification mechanism. Otherwise it's way too easy to pretend to be someone else.
What I'd personally like to see is either something like @mozz suggested, with non-self-signed client certificates (which requires a participating server to know of the used CA), or something tied to a domain that you own and operate. The latter could be a Mastodon-style backlink.
Given the scale of Gemini, a convention of linking to your Station/Bubble/other public accounts from your capsule would be sufficient IMO. It would still be a TOFU-ish approach, though, because anyone could register a new capsule called "its-really-emilis.com" and then register accounts with corresponding names, if you hadn't already.
I don't think it's realistic to expect unique short handles between sites. The level of centralization needed to achieve that seems big. But there should be a way I can post (and verify) a hex code or QR code or ascii-art that is uniquely mine (based on my cert). I think the only way to validate that "fred" on one BBS is the same as "fred" on another BBS with*out* the end user visually matching a blob of information is for the two BBSes to talk to each other or to a central server.
I apologize for rambling on to such an extent, but the concept of identity is one of my special interests; I promise to stop soon. What if I could ask Bubble to sign u/cquenelle with one of my certs? (I'm still assuming that's possible.) If Station also used the convention of u/username, then Station and bbs.geminispace could choose to federate and validate against each other. On my u/cquenelle page, Bubble could add a "(station)" flag if the same user and cert is found on Station (presumably optional).
@cquenelle I'm intrigued by the idea of a signed u/-page. Not sure if it works with just the client certificate, though. Making a signature requires the cert's private key, which never leaves your device, so the server couldn't produce a signed page on your behalf; only you personally would be able to create the signature.
@skyjake Ah yes, that's right. The signing would have to happen on the client side to use the private key. Maybe it works just as well to just include a fingerprint of the public key on the u/-page? I feel like using unicode could enable some terse ascii-art version of keys. But not everyone is using clients as cool as Lagrange ❤️
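For example, a server or client could derive a short, human-checkable fingerprint from the certificate along these lines--the hash, encoding, and grouping here are arbitrary choices, not any existing convention:

```python
import base64
from cryptography import x509
from cryptography.hazmat.primitives import hashes


def short_fingerprint(cert_pem: bytes) -> str:
    """A terse, display-friendly fingerprint suitable for a u/-page."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    digest = cert.fingerprint(hashes.SHA256())
    # Base32 is denser than hex and avoids ambiguous characters; show the
    # first 20 characters in groups of four.
    b32 = base64.b32encode(digest).decode().rstrip("=")
    return "-".join(b32[i:i + 4] for i in range(0, 20, 4))
```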
A silly idea: what if the registered user name is in the form "user@domain.tld"? Then the server (or anyone, really) could make a Finger request to that address to get an identity verification token.
Not sure what form this token would need to be. Just a random string? A piece of ASCII art?
Having ownership of the domain could be sufficient to solve the identity problem, and using an old protocol like Finger feels somehow appropriate for Geminispace. It's also not encrypted so it's great for sharing public info like this.
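For reference, a Finger lookup really is tiny--one request line over TCP port 79 (RFC 1288). A toy client, with the user@domain.tld form taken from the suggestion above:

```python
import socket


def finger(address: str, timeout: float = 10.0) -> str:
    """Return whatever the user's host publishes for them over Finger."""
    user, _, host = address.partition("@")
    with socket.create_connection((host, 79), timeout=timeout) as sock:
        sock.sendall(user.encode() + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")


# e.g. finger("emilis@example.org") might return whatever token or ASCII art
# the account owner chooses to publish (placeholder address).
```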
@skyjake I don’t feel the problem you describe. I would actually be happy if someone impersonated me.
Centralised identity didn’t even spread on the web naturally until Facebook and Google started pushing it down on everyone.
If you really need to sign your posts, do it on your capsule. Contact admins if there is someone impersonating you in a way you don’t like.
I don’t see any need for “Sign in with Facebook/Google” on Gemini. It goes completely opposite to the goals of the protocol.
@emilis Fair enough. Many people on Gemini may feel similarly.
However, I see this as a credibility issue for capsules like bbs.geminispace.org. My vision for Bubble is to provide an actual "productivity" platform natively on Gemini, and for there to be trustworthy communication, there needs to be a way to prevent impersonations when necessary. Relying on admins for cleanup after the damage is done is not an ideal solution.
I do agree that anything centralized is a total no-go for this in Gemini, as it gives one server too much power/tracking ability.
Gemini is about keeping things human-scale, and servers talking to each other isn't it, so maybe a best practice could be to just sign your contact information and declare your public accounts there:
I can see the value of a self-hosted identity provider. I would actually like something akin to Azure AD that I can self-host and use to sign in to and manage the OpenBSD and Linux devices I own on a Tailscale-style flat network using WireGuard. If I could use that solution for web sign-ins and Gemini identity verification too, that'd be great. If this were centralised, though, I'm not a fan (hence I removed my like). Google has its claws in literally everything with "free" web fonts, captchas, analytics, ads, and social beacons. Not an attack on CircaDian, but I strongly dislike big tech.
Thanks everyone for the thoughtful comments and input :)
Re: more correct/secure solutions using signing for e2e validation; I see a general problem there, which is that anything client side needs to be added to multiple clients, which is very hard; it would either need to be part of the gemini spec or an equally convincing additional spec. I don't see a path forward there.
Re: fields in existing client certificates, those don't solve the imitation problem, and would also require client support / standardization to use consistently. I don't see a path forward there, except maybe a very lightweight one: "CN is default display name".
Re: using capsules for verification/names, I think there are possibilities there, but I think they need to be "strictly optional"; otherwise, it would grant an unacceptably high advantage to Geminauts with a capsule, which I think defeats part of the point of offering services like Bubble as an alternative to self hosting.
Re: signing and trusting servers. That was a point I touched lightly in my post. Having a `/u` page signed with a client certificate is hard: it requires new client code / protocol. But having Bubble display the client cert visual hash is easy. Then we are trusting Bubble to not lie about the cert. But why not? Bubble could already make any user appear to say anything arbitrary, which would be worse and harder to detect than lying about the cert. I think "widely used servers don't lie about client cert sha1s" is a reasonable step to take for the sake of significantly more simplicity and usability.
Re: domain ownership; I think like capsule ownership that should be strictly optional, it can't be a core part of the setup, to avoid creating a too-high barrier to entry.
Re: centralization and @emilis specifically:
Thank you for sharing your concerns! Negative feedback is the most useful feedback here :)
I am excited by what client certificates offer here; they are fundamentally different to what the web can do. I would suggest that identity on Gemini is already centralized when you want it to be--the centralization point is the private key stored on your client. The sha1 of the certificate that you pass to the server, if you pass one, is not a side effect that you would prefer to be private, like your IP address; it's an explicit action on your part.
Nevertheless, a server passing on your sha1 is a concrete new action that I agree needs evaluating carefully; is it always okay, okay with explicit permission, or never okay?
Re: centralization and too much power. I think it's worth dissecting what that power looks like.
A popular ID server would get pings with sha1s from all kinds of services; it could map IP addresses back to services to discover where the identities are used. So I could ask it, for example: where is "Morgan" active?
Does that matter? I'm not sure. An effective search engine could give you the same result, unless some of the services are private or semi private.
An ID server could lie about the metadata. Does that matter? I find it hard to see how--it would be noticed and people would switch to a reliable server.
An ID server that makes decisions, now we're getting somewhere. If the ID server adjudicates display name ownership, for example, then it has real power. So maybe the answer to "centralization is too much power" is "decisions must be made in an agreed way based on public data", so there is no centralized control.
Unfortunately that brings us right back to where we started--how could we prevent imitation without a centralized authority, without client-side signing, and without requiring a capsule or a domain?
Maybe there is some possibility wherein the server publishes logs as an audit trail (for example, noting when Morgan is first associated with a sha1). Then, in order to cheat, the server would have to change old log entries, and that can be spotted automatically, causing the server to be declared bad. Additionally, the untampered logs would provide all the data needed to spin up an alternative server.
So there would be three pieces: a server anyone can run; a data format the server publishes that allows decisions to be checked and lets a new server fully "take over" from an old one; and a script you can point at an ID server to catch it cheating by changing old logs.
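That third piece could be almost trivial: keep the last copy of the log you fetched and check that it is still an exact prefix of the new one. A sketch, with the actual fetching left out:

```python
from pathlib import Path

CACHE = Path("id-server.log")   # local copy of the log as last seen


def check_log(new_log: str) -> None:
    """Flag tampering: the previously saved log must still be a prefix."""
    old_log = CACHE.read_text() if CACHE.exists() else ""
    if not new_log.startswith(old_log):
        raise RuntimeError("log tampering detected: old entries changed or removed")
    CACHE.write_text(new_log)   # remember the longer log for next time
```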
Thanks again for the thoughts everyone! I'd like to emphasize again that this is just exploring interesting possibilities, I have no intention of launching+promoting such a server.
Thinking about this a bit more, the minimal server can be very simple:
- You can sign in with a client cert and post messages
- The messages get timestamped then published forever to a public log with your certificate sha1
- That's it :)
So to claim a display name you can post a message to the server saying "call me Morgan". It's published forever with your certificate sha1 and a timestamp, so anyone can verify that you have the first claim to the name. (As long as the server doesn't cheat--which can be checked by watching that the log only grows and does not gain back-dated entries.)
Functionality like a UI for posting specific types of message, despamming, and lookups (which sha1 belongs to "Morgan") can be built on top of this basic core--and doesn't even need to be on the same server (except for the UI).
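For example, deciding who owns a name could be a pure function of the public log--first claim wins. A sketch, with an invented entry format just for illustration:

```python
from typing import NamedTuple, Optional


class Entry(NamedTuple):
    timestamp: str   # e.g. "2023-06-10T12:00:00Z"
    cert_sha1: str
    message: str


def owner_of(name: str, log: list[Entry]) -> Optional[str]:
    """Return the cert sha1 with the earliest claim to `name`, if any."""
    for entry in sorted(log, key=lambda e: e.timestamp):
        if entry.message.strip().lower() == f"call me {name}".lower():
            return entry.cert_sha1
    return None
```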
Hi everyone!
I have refined the visual hash idea in a new post, please see what you think.
— Identity Again: Visual Hashing
The idea is to do as much as possible with no new servers at all.
Thanks :)
2023-06-11 · 6 weeks ago
I’ve linked my BBS and Station identities without any fancy crypto: made the profile pages include a link to each other.
It’s the same system that works for confirming profile links in Mastodon.
2023-06-12 · 6 weeks ago
Thanks @emilis!
That does indeed accomplish the same thing--but does not seem 100% satisfactory/scalable.
The problem I see is that as the number of sites grows, the complexity grows; you'd have to link all sites from all sites, or arrange that there is a chain of crosslinks :) ... as the complexity grows, it gets easier to confuse people. I could register somewhere new with your name, copy the list of links, and people may simply not notice that there is no backlink to this one.
Having a capsule makes it a bit clearer as you can link to everything from the capsule, but still it's the case that if you link from a fake identity to the capsule then people might not notice there's no backlink. You could also link from the fake identity to a copy of the capsule with a backlink added.
With your visual hash, e.g. "Long list ties, wise soup flows; edgy page demos." displayed due to you opting in across the sites, people can check that. There's one issue I didn't address yet though: someone could just paste your visual hash into a free form text field on their profile and it might be convincing. Probably there needs to be some recognizable visual context (e.g. a padlock emoji) that servers present with a real visual hash and block on free form text.
Security is hard! And fun. Thanks ;)
— I summarized my thoughts on my gemlog.
Thanks @skyjake! At first glance your two points don't fit with how I'm thinking about the problem--but I'll give them more thought.
Let me share my initial thoughts in case they are useful :)
Re: using the same certificate on many servers enabling more tracking; I think that's true of any mechanism for Gemini-wide identity. If a human can verify the identity via backlinks or PGP or anything else, so could the servers--and collaborate to track. In fact, the servers don't have to collaborate at all: if your identity is verifiable across multiple capsules then a third party can track your activity across multiple capsules.
And that's really part of the intention; if I want to be identifiable across capsules then it's the same as wanting someone to be able to find all my posts across all capsules, which is a form of tracking.
This already happens today if you use the same display name across capsules--except with something nobody wants, the possibility for impersonation.
I plan on writing in detail about what tracking is possible on Gemini and what isn't, compared to what happens on the web. My guess is that the tracking that happens on the web is mostly not possible on Gemini--exactly as intended. But it needs careful thought.
Re: trusting the server you are looking at; I actually think this is a correct thing to ask for, but I'll think about it some more. Here's my thinking so far.
Server certificates mean the trust you are placing is reliably linked to a real-world entity. (Unless there has been a man-in-the-middle attack ongoing since you first saw it--a real security hole, but a topic for another day.)
A malicious server can do lots of bad things. For example, it could show a fake identity with a link to a copy of the victim's real capsule that has a backlink to the fake identity. It could show entirely different versions of messages to different people. It could "ghost" certain people, making them think they are posting when they are not. And so on.
That's why I think that trusting a server to correctly show identities does not lose much in terms of security.
I do agree that real end to end signing would be great to have as an option--but because it's not in the protocol, I don't see it as a thing that can work on Gemini. Security that works except that nobody uses it, is not security :)
Anyway, thanks for the thoughts! As I said, I will reflect more :)
More somewhat-tangential thoughts/ideas. Not good or better ideas, not proposals--just exploring the problem space.
The original identity server idea can actually work with no unwanted tracking. Instead of pinging it to ask about individual users, servers wanting to use it just download all identity data in bulk every so often. Then, they don't leak any information about who visits what server and when.
The identity server could publish sha256s of sha1s instead of the sha1s directly, so that it would not leak the sha1s.
Or, a whole new way of doing it: this time the "identity server" is just something that checks cross-capsule link loops. So as well as cross-linking your Bubble and Station profiles, you throw in a link to the identity server asking it to confirm the loop. When someone clicks that link, it checks your Bubble and Station profiles and confirms that they do mutually link, meaning that they represent a real cross-capsule identity. This time there is no special server integration needed, and no tracking :)
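A toy version of such a checker, just to show how little it involves--fetch both profile pages and confirm the mutual links. The minimal fetcher below skips all certificate verification, which is only acceptable in a sketch:

```python
import socket
import ssl
from urllib.parse import urlparse


def fetch_gemini(url: str) -> str:
    """Fetch a Gemini URL and return the response body as text."""
    host = urlparse(url).hostname
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE          # toy: no TOFU, no CA checks
    with socket.create_connection((host, 1965)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(f"{url}\r\n".encode())
            data = b""
            while chunk := tls.recv(4096):
                data += chunk
    header, _, body = data.partition(b"\r\n")   # e.g. b"20 text/gemini"
    return body.decode(errors="replace")


def mutual_links(url_a: str, url_b: str) -> bool:
    """True if each profile page contains a link to the other."""
    return url_b in fetch_gemini(url_a) and url_a in fetch_gemini(url_b)
```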
You could drive such a server from crawler data, making it a central directory of capsule-link-based identities that runs fully automatically with no per-user setup and no tracking. Then it's a "Gemini person search". Creepy? Yes--and that's why I mention it: not because it's a good idea, but because it underlines what information is already out there.
If we discard enough bad ideas maybe we eventually get to a good one ;)
Thanks.
I like thoughts. Thinking is good! ❤️🧠 I realized just now that I proposed earlier a scheme where servers could talk to each other to cross-verify handles. And I also said earlier that servers talking to each other was to be avoided. So I thought some more about a client-oriented approach. What if community sites offered a way to publish the public key/cert associated with a handle, according to a convention discoverable by a client?
2023-06-13 · 6 weeks ago
So the model would be like this: I see 'fred' on 'site1' and I think he's cool, so I go to his profile page and see that he is 'public', and my client offers to store his handle and cert in my client's "address book". Then if I go to 'site2' and click on fred's profile page, my client can say "BTW, this is 'fred' from site1". It would be nice if clients didn't automatically save all the certs they encounter. But for people that you like, or hate, or like-to-hate, you can click on them to keep track of them, if they don't mind.
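A rough sketch of what that client-side address book might look like--the file format and function names are made up here, since no client implements this today:

```python
import json
from pathlib import Path

BOOK = Path("address-book.json")   # hypothetical per-client storage


def load_book() -> dict[str, dict]:
    return json.loads(BOOK.read_text()) if BOOK.exists() else {}


def remember(fingerprint: str, handle: str, site: str) -> None:
    """Store a handle against its cert fingerprint the first time we see it."""
    book = load_book()
    book.setdefault(fingerprint, {"handle": handle, "first_seen": site})
    BOOK.write_text(json.dumps(book, indent=2))


def recognise(fingerprint: str) -> str | None:
    """If we've seen this cert before, say who it belonged to."""
    entry = load_book().get(fingerprint)
    if entry:
        return f"BTW, this is '{entry['handle']}' from {entry['first_seen']}"
    return None
```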
Thanks @cquenelle :)
That pretty much lines up with the "visual hash" idea--except that with visual hashes the idea is to store them mostly in your brain ;) With new client support that need goes away, so there's no need for the visual part; it can just be the sha1 or a hash of the sha1.
For the most part, though, I'm skeptical about adding anything new to the client--because there are a lot of clients, it needs a very strong level of agreement; possibly it even needs to be in the Gemini spec, which seems unlikely to happen.
I'm really looking for "zero client changes, almost zero user effort, almost zero server effort". Which may be impossible ;)
Thanks.
(I’m about to make some very strong claims about things I’m not 100% sure about, so please let me know if I’m missing something!)
This problem strikes me as having a familiar shape: attempting to manage the allocation of limited, valuable resources (unique personal identifiers in this instance) amongst a set of parties who have no reason to trust each other 𝘢 𝘱𝘳𝘪𝘰𝘳𝘪. Using a TOFU scheme to underpin a Gemini identity system seems to borrow some similar drawbacks from cryptocurrency systems, too, like having no real provision for users to rekey after a compromise without losing their claim on their resources. I suspect that without some trusted parties a Gemini identity system can only function by accepting some of the rigid, anti-social trade-offs that are commonly associated with cryptocurrency nowadays.
I’ve been thinking about two silly (but also kind of interesting, IMO) alternative approaches:
Sorry if this was too far off topic.
2023-06-18 · 5 weeks ago
Thanks @totroptof :) some interesting thoughts.
Bubble has a mechanism to set a temporary password to allow you to link a new certificate, but that doesn't help with recovery if you lose a certificate. I guess today you'd email Skyjake.
Email addresses and domains will probably be used as fallback proof of identity when certificates get lost.
I suppose the ideal setup today is that you export and back up your certificates. Backup is hard to get right; it's a lot to ask of users.
Account recovery is a really hard problem. I suppose we are not going to have a centralized service where you can recover access via a code sent to your phone :)
2023-06-19 · 5 weeks ago
@Morgan: Reading your reply, I’ve realised I wasn’t really clear on what point I was trying to make. This is my second attempt after thinking about it for a bit 🙂
A motif I’ve noticed browsing some of the more technical writings in Geminispace is that many Geminauts don’t really seem to take security that seriously. That’s not necessarily a criticism; I think there are reasonable justifications for this position, particularly in the context of Geminispace. However, if the goal is to build a system that addresses some of the attacks made possible (in part) by this lack of security culture, I believe a system that isn’t designed for inevitable key compromise will fail to meet its goals.
To clarify, I don’t think “design[ing] for inevitable key compromise” means recovery codes sent by SMS. Looking at operators elsewhere on the internet, one can observe things like Let’s Encrypt restricting certificate validity to a maximum of three months and wide deployment of ratcheting key agreement systems in chat applications. To me (and please note, I’m no cryptographer), designing for key compromise looks like a focus on ephemeral (or at least relatively temporally limited) keys, regular rekeying and per-device keys. I’d argue that a system which requires users to export and back up private keys is the exact opposite, since
In the writings of solderpunk and others on the topic of Gemini I perceive a philosophy that’s essentially humanist with a particular de-emphasis on technocentric approaches. The “solutions” to the identity problem I posed in my last comment, as impractical as they are, were an attempt to try and examine the problem space in a way that seemed more consistent with this philosophy.¹
I hope that’s a bit clearer (it is for me at least :P)
Footnotes
─────────
[1] I don’t mean to imply an endorsement of those values; in fact, it’s a topic I intend to glog about at some point. 😉
Thanks @totroptof
Some more thoughts from me, although I'm no expert either :)
I think short key expiry is a distraction--it seems like what you want is secure key changes, backup proof of identity and intrusion detection. All of these are hard :)
But not entirely hopeless.
For example, you could have a second key pair that you print and keep only on paper. It's your emergency key in case you need to change the live one or regain control of it.
So that maybe covers secure key changes and backup proof of identity.
It would be nice to have intrusion detection, but I don't see how. For example, a server could notice that you are accessing your account from the other side of the world to where you usually are--by IP address--and ask for confirmation before trusting your cert from your new location. This would catch a lot of compromised keys. But it requires a level of sophisticated tracking and centralization unlikely to come to Gemini. Not saying that's a bad thing--it just means it's a different problem to web account security.
I suppose "here is the latest activity associated with your cert" is a form of intrusion detection we could have on Gemini in a distributed way. Then if you see activity you don't recognize you can trigger a key change using your offline key. Somehow.
Fun stuff.
I̶n̶ ̶c̶a̶s̶e̶ ̶I̶ ̶f̶a̶t̶-̶f̶i̶n̶g̶e̶r̶ ̶s̶o̶m̶e̶t̶h̶i̶n̶g̶ ̶a̶n̶d̶ ̶a̶c̶c̶i̶d̶e̶n̶t̶a̶l̶l̶y̶ ̶p̶o̶s̶t̶ ̶t̶h̶i̶s̶ ̶a̶s̶ ̶a̶ ̶c̶o̶m̶m̶e̶n̶t̶ ̶i̶n̶s̶t̶e̶a̶d̶ ̶o̶f̶ ̶a̶ ̶d̶r̶a̶f̶t̶:̶ ̶t̶h̶i̶s̶ ̶i̶s̶ ̶j̶u̶s̶t̶ ̶m̶e̶ ̶j̶o̶t̶t̶i̶n̶g̶ ̶d̶o̶w̶n̶ ̶s̶o̶m̶e̶ ̶p̶o̶i̶n̶t̶s̶ ̶t̶o̶ ̶t̶h̶i̶n̶k̶ ̶a̶b̶o̶u̶t̶ ̶:̶)̶
Incredibly, exactly that happened. Sorry 😵💫
2023-06-20 · 5 weeks ago
No worries :)
Gemini Identity — @Morgan has a prototype implementation of an identity service for Gemini. This is certainly interesting! Some quick thoughts: If this is something that people want to use, it should not rely on a single central server. Anyone should be able to self-host their identity service and servers should not assume a default one. How does this mesh with people not wanting to be tracked across...