Malicious Links

In Gemini, the restriction that information can only be sent to a server
by making a request is considered a feature.  However, it can backfire:
an action can be triggered without any user interaction, even when such
interaction is absolutely necessary.  Below, I give an example showing
why this feature, combined with the existence of malicious links, makes
it unsafe (or at least awkward) to rely on TLS client certificates alone
for authentication on account-based Gemini sites.

Consider a site, gemini://example.org, where users can set up accounts.
It uses TLS client certificates for authentication and exposes important
settings through the Gemini interface.  For example, a user can delete
their account by visiting a certain URL: perhaps
gemini://example.org/account/delete.  Although this design seems
reasonable, you may already begin to see the problem at hand.

Malicious Gemini pages (or parts thereof) can contain links to such
locations.  Depending on the user's Gemini client and its configuration,
the target URL may not be shown before the request is made (Amfora
behaves this way, I think), and the user may accidentally delete their
account at gemini://example.org (or involuntarily perform any other
action) simply by following a link.
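
For instance, a malicious gemtext page could hide the deletion endpoint
(the hypothetical URL from above) behind innocuous link text:

    => gemini://example.org/account/delete Photos from my hiking trip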

Note that simple safeguards, such as responding with an input prompt
that asks the user to type 'YES', can be bypassed trivially, by adding
the 'YES' to the query string in the URL of the malicious link.
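
This bypass works because an input prompt (status 10 or 11) is answered
by re-requesting the same URL with the answer attached as the query
string, so the malicious link can simply pre-supply it:

    => gemini://example.org/account/delete?YES Photos from my hiking trip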

One possible workaround is to require the user to provide their password
(through a status 11 'sensitive input' response) for every such action.
Presumably, the attacker doesn't know the user's password, and so cannot
craft a URL which would delete the user's account without interaction
from them.  Sites would have to re-state the action being performed
within the prompt attached to the status 11 response, and give context
(so the prompt might look something like "Are you sure you want to
delete the account at gemini://example.org?  Type in your password to
continue.").  However, if the user is performing many actions which
require password verification, this is going to get irritating.  In
addition, it forces sites to use and store (hashes of) passwords, even
when they already use TLS certificates.
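
A minimal sketch of this workaround, assuming a hypothetical server
interface in which a handler receives the client certificate fingerprint
and the decoded query string, and returns a Gemini (status, meta) pair;
none of this is tied to a real Gemini server framework:

    import hashlib

    # Hypothetical user record: certificate fingerprint -> password hash.
    # (A real site should use a slow KDF such as scrypt, not bare SHA-256.)
    def hash_password(password, salt):
        return hashlib.sha256(salt + password.encode()).hexdigest()

    USERS = {"example-fingerprint": hash_password("hunter2", b"per-user-salt")}

    def handle_delete(cert_fingerprint, query):
        if cert_fingerprint not in USERS:
            return (60, "Client certificate required.")
        if query is None:
            # Status 11 (sensitive input): the client prompts the user,
            # then repeats the request with the answer as the query string.
            return (11, "Are you sure you want to delete the account at "
                        "gemini://example.org?  Type in your password to continue.")
        if hash_password(query, b"per-user-salt") == USERS[cert_fingerprint]:
            del USERS[cert_fingerprint]  # the irreversible action itself
            return (20, "text/gemini")   # body would confirm the deletion
        return (50, "Wrong password; nothing was deleted.")

The first visit yields the prompt; the second request, carrying the
typed password, performs the deletion.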

Another workaround is to require the user to type back a randomly
generated string.  For example, upon visiting
gemini://example.org/account/delete, the user is asked to type in some
words that were generated at that moment ("correct horse battery
staple") to authorize the action.  Because the words are freshly
generated and delivered only over the user's own authenticated
connection, the attacker cannot know them in advance, and so cannot
craft the necessary URL.  This removes the need for a password in
addition to the TLS certificate, but forces sites to keep track of the
outstanding challenge words for each user (this functionality will
likely have to be implemented by hand, and there are complications, such
as timeouts, to consider).  Once again, performing account-related
actions through Gemini would become irritating.
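
A sketch of the bookkeeping this requires, using the same hypothetical
handler interface as above (the in-memory store and two-minute timeout
are illustrative):

    import secrets
    import time

    WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple"]
    PENDING = {}  # certificate fingerprint -> (challenge words, expiry)

    def handle_delete(cert_fingerprint, query):
        challenge, expiry = PENDING.get(cert_fingerprint, (None, 0.0))
        if challenge is None or time.time() > expiry:
            # No live challenge: generate fresh words with a short timeout
            # and send them in an input prompt (status 10).  Only the
            # authenticated user ever sees these words, so an attacker
            # cannot embed them in a link crafted ahead of time.
            challenge = " ".join(secrets.choice(WORDS) for _ in range(4))
            PENDING[cert_fingerprint] = (challenge, time.time() + 120)
            return (10, 'Type "%s" to delete your account.' % challenge)
        if query == challenge:
            del PENDING[cert_fingerprint]
            return (20, "text/gemini")  # body would confirm the deletion
        return (50, "Challenge mismatch; nothing was deleted.")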

It would also be possible to avoid this issue with a special rule for
URLs carrying sensitive input.  Sites requiring authorization (even just
a 'YES' string) could use the sensitive input response (status 11), and
clients would refuse to submit an automatically provided (pre-filled)
query string for it; instead, the user would be required to type the
answer by hand.  This guarantees a minimal level of user interaction.
Although it still suffers from the repetition problem of the above
workarounds, it seems to reduce the amount of work necessary, and to
share it better between server, client, and user.  However, it would
necessitate a breaking change to the protocol (or, perhaps, a new status
code and a convention for 'sensitive query strings').
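
On the client side, the rule might look something like this sketch,
where gemini_request stands in for whatever fetch routine the client
already has (hypothetical, returning a (status, meta) pair):

    from getpass import getpass
    from urllib.parse import quote

    def follow_link(url, gemini_request):
        base, _, prefilled = url.partition("?")
        if prefilled:
            # Probe the bare URL to learn whether its input is sensitive.
            status, meta = gemini_request(base)
            if status == 11:
                # Sensitive input: discard the page author's pre-filled
                # query and make the user type the answer by hand.
                answer = getpass(meta + " ")
                return gemini_request(base + "?" + quote(answer))
        return gemini_request(url)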

The reason that HTTP/HTML does not suffer from this problem is twofold.
Firstly, HTML is interactive, so whenever such an action is performed,
the user can be interactively asked to confirm their intention.
Secondly, when the attacker performs the action directly (e.g. by
sending an HTTP POST to the relevant URL themselves), they don't have
access to the cookies needed to authorize it.  (Strictly speaking, a
malicious page can still trick the victim's browser into sending a
cross-site POST with cookies attached; this is the classic CSRF attack,
which is why HTTP sites pair cookies with per-form tokens and, more
recently, SameSite cookie attributes.)  TLS certificates are problematic
in comparison because they apply to all interactions with the site, and
so automatically authorize every request; cookies for a certain site are
only readable by that site's own pages, so the site can interactively
verify the user's intention before authorizing the action.  This access
control also avoids the repetition problem, allowing users to authorize
once and then perform multiple actions.
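
For comparison, a typical HTTP confirmation form embeds a per-session
secret that a third-party page cannot read (a sketch; the field names
and token value are illustrative):

    <form method="POST" action="/account/delete">
      <input type="hidden" name="csrf_token" value="(per-session random)">
      <button>Yes, delete my account</button>
    </form>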

With any of the above workarounds, account management on sites through
Gemini becomes difficult, both for hosters/admins and for users.  In
fact, this extends to just about any account-based interaction with any
site; for example, sites where comments can be uploaded are affected,
because a malicious link can cause an unpleasant comment to be
involuntarily submitted.
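
For example, assuming a hypothetical endpoint that takes the comment
text via an input prompt, a link like the following would post the
(percent-encoded) comment on the victim's behalf:

    => gemini://example.org/comment?Something%20unpleasant Photos from my hiking trip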

What does the Gemini community think?  How big of a problem is this?
Are there any other feasible workarounds?  Which major (and minor) sites
are affected?  Does this call for some sort of change in protocol?

~nothien

P.S.: I'm not sure what mailing list category this fits into.  [spec],
maybe?  [tech]?  No clue.
