πŸ’Ύ Archived View for gemi.dev β€Ί gemini-mailing-list β€Ί 000207.gmi captured on 2024-05-12 at 16:00:22. Gemini links have been rewritten to link to archived content

-=-=-=-=-=-=-

Uploading Gemini content

1. Sean Conner (sean (a) conman.org)


  Okay, there's quite a few people on this list who would like to see more
content in Gemini.  What I'm proposing is *not* a modification to the Gemini
protocol itself, but an adjunct that *could* be implemented [1] to ease
the creation of more Gemini content.  I'm not trying to scare you,
solderpunk; there are others working on this concept, but they aren't on the
mailing list because of technical issues [2].

  Anyway, it's *another* protocol, very similar to Gemini but one that uses
a new URL scheme, to ensure that no one mistakes this for the actual Gemini
protocol.  There are already two variations on this, one by Alex Schroeder
[3] and one by baschdel [4].  The one I've constructed is a mix of the two
but not quite the same.

  I define two new URL schemes---"gemini+put" and "gemini+del".  The first
is to add (or update) a file to a server, while the second exists to delete
a file.  The protocol for uploading a file (C=client, S=server):

C: gemini+put://example.com/path/to/new.txt?mime=text/plain&size=1024 CRLF
S: 1x continue CRLF
C: size bytes of content
S: 31 gemini://example.com/path/to/new.txt CRLF <close>

  The thought process behind this is for the client to send the full path to
the new (or updated) resource.  The query string sends along some critical
information about the upload: the MIME type and the size.  This allows the
server to reject certain types of content or restrict resources to a given
size.  I'm using the query string for this information because the other
methods defined by Alex and baschdel stray a bit too far from Gemini (in my
opinion).  I also think it's fine here, as I've defined a new URL scheme and
I can say what the sections mean.

  I included the 1x response (input) in order to give the server a chance to
return errors (like unsupported MIME type or size issues) before the client
sends the data.  So the client can expect to see 4x or 5x here (or even
6x---but more on this below).  Once the client sees the 1x response, it can
then proceed with uploading the content.  Upon successfully receiving the
data, the server then responds with a 31 redirection to the new resource on
the server.  I can see an argument for a 20 status, but to me, 31 (a
permanent redirection) seems semantically better for a "successful" upload.
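
  The exchange above can be sketched with a couple of pure helpers in Python
(a sketch only; the host and path are the examples from this message, and TLS
plus socket I/O are left out):

```python
# Sketch of the gemini+put framing described above.  The example.com
# host and paths are just the examples from this thread.

def build_put_request(host, path, mime, size):
    """Build the CRLF-terminated gemini+put request line."""
    return f"gemini+put://{host}{path}?mime={mime}&size={size}\r\n".encode()

def parse_response(line):
    """Split a '<STATUS> <META>' response line into (int, str)."""
    status, _, meta = line.strip().partition(" ")
    return int(status), meta

# Client side of the exchange:
#   1. send the request line
#   2. expect a 1x "continue" before sending the body
#      (4x/5x/6x here means the upload was refused)
#   3. after the body, expect 31 pointing at the new resource
req = build_put_request("example.com", "/path/to/new.txt", "text/plain", 1024)
status, meta = parse_response("10 continue\r\n")
assert 10 <= status < 20
status, new_url = parse_response("31 gemini://example.com/path/to/new.txt\r\n")
```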

  The protocol to delete a file is not complicated either (C=client,
S=server):

C: gemini+del://example.com/path/to/old.txt CRLF
S: 52 Gone CRLF <close>

  Again, here a 52 response makes sense as the resource is being
deleted---any other error means an actual error.
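
  The delete side is even smaller; a sketch, following the convention above
that 52 ("Gone") is the success signal:

```python
def build_del_request(host, path):
    """Build the CRLF-terminated gemini+del request line."""
    return f"gemini+del://{host}{path}\r\n".encode()

def delete_succeeded(response_line):
    """Per the scheme above, 52 ('Gone') confirms the deletion."""
    status = int(response_line.split(" ", 1)[0])
    return status == 52
```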

  Now obviously this requires some security, so a client certificate giving
authority is required.  The "proof-of-concept" uses the Common Name from the
certificate to control access to the appropriate files.  Assuming user
content is defined by:

	gemini://example.com/~user/

a user of the site could generate (and have validated) or be given (by the
admins) a certificate to use to upload content.  The common name could be
the user name so the server will know what area of the filesystem a
certificate is valid for.  
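
  That Common Name check can be sketched as a path-prefix test (a sketch;
how a real server maps a CN to a filesystem area is up to the
implementation):

```python
import posixpath

def cn_may_touch(common_name, url_path):
    """True if a certificate with this CN may modify this path.

    Following the layout above, a CN of 'user' is valid only under
    /~user/.  Paths are normalised first so '..' segments cannot
    escape the user's area.
    """
    allowed_prefix = f"/~{common_name}/"
    clean = posixpath.normpath(url_path)
    return clean.startswith(allowed_prefix)
```

So `cn_may_touch("alice", "/~alice/notes.gmi")` is allowed, while a
traversal attempt like `/~alice/../~bob/x.gmi` is refused.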

  The idea is for a user to be able to use a Gemini client to not only
browse Gemini content, but *create* and upload new content to a server, from
within the client (a client can shell out to an editor, for instance).  This
should reduce the friction of generating content.

  I do have a "proof-of-concept" mostly finished [5], and if there is enough
interest, I can make it live.  The registration process would probably be
something like:

	1. Generate a CSR (only field required will be CN)
	2. Upload the CSR to a known path (with MIME type application/pkcs10)
	3. Server will then accept the request, sign it, and redirect to the
	   certificate the client can use (MIME type
	   application/x-x509-user-cert).
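
  Step 2 of that flow reuses the put scheme; a sketch (the "/register" path
and hostname here are hypothetical, since the "known path" is left to each
server):

```python
def build_csr_upload(host, csr_pem, register_path="/register"):
    """Build the request line plus body for uploading a CSR.

    The registration path is hypothetical; the MIME type and the
    size-in-query convention follow the scheme described above.
    """
    body = csr_pem.encode()
    line = (f"gemini+put://{host}{register_path}"
            f"?mime=application/pkcs10&size={len(body)}\r\n").encode()
    return line + body
```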

  And I repeat---this is *NOT* intended to become part of the actual Gemini
protocol, but an adjunct, a separate protocol that is still simple, but
allows data to flow from the client to the server.  And if solderpunk sticks
his fingers in his ears and goes "La la la la la la la" that's fine
too---this is just an idea.

  -spc

[1]	Because I've implemented it.  It's not live, *yet*.  But my
	"proof-of-concept" does work.

[2]	https://alexschroeder.ch/wiki/2020-06-04_DMARC_an_Mailing_Lists

[3]	https://alexschroeder.ch/wiki/2020-06-05_Gemini_Write
	https://alexschroeder.ch/wiki/2020-06-04_Gemini_Upload

[4]	https://alexschroeder.ch/wiki/Baschdels_spin_on_Gemini_uploading

[5]	All the bits except for certificate generation.  I need to work on
	client certificate verification in GLV-1.12556.


2. Matthew Graybosch (hello (a) matthewgraybosch.com)

On Sat, 13 Jun 2020 01:39:26 -0400
Sean Conner <sean at conman.org> wrote:

>   I define two new URL schemes---"gemini+put" and "gemini+del".  The
> first is to add (or update) a file to a server, while the second
> exists to delete a file.  The protocol for uploading a file
> (C=client, S=server):

May I suggest calling this ancillary protocol "titan" after the Titan
II missile repurposed by NASA as a launch vehicle for the Gemini
program? This might help distinguish it from the main gemini protocol.

Also, I'm curious as to what this protocol offers over uploading via
sftp or rsync. Just as clients could shell out to $EDITOR or $VISUAL,
couldn't they also create, update, or delete remote content using
existing tools?

Are you trying to design a platform-agnostic method suitable
for non-Unix and non-Linux clients and servers?

-- 
Matthew Graybosch		gemini://starbreaker.org
#include <disclaimer.h>		gemini://demifiend.org
https://matthewgraybosch.com	gemini://tanelorn.city
"Out of order?! Even in the future nothing works."


3. Sean Conner (sean (a) conman.org)

It was thus said that the Great Matthew Graybosch once stated:
> On Sat, 13 Jun 2020 01:39:26 -0400
> Sean Conner <sean at conman.org> wrote:
> 
> >   I define two new URL schemes---"gemini+put" and "gemini+del".  The
> > first is to add (or update) a file to a server, while the second
> > exists to delete a file.  The protocol for uploading a file
> > (C=client, S=server):
> 
> May I suggest calling this ancillary protocol "titan" after the Titan
> II missile repurposed by NASA as a launch vehicle for the Gemini
> program? This might help distinguish it from the main gemini protocol.

  I'll keep that in mind if the idea proves popular.

> Also, I'm curious as to what this protocol offers over uploading via
> sftp or rsync. Just as clients could shell out to $EDITOR or $VISUAL,
> couldn't they also create, update, or delete remote content using
> existing tools?
> 
> Are you trying to design a platform-agnostic method suitable
> for non-Unix and non-Linux clients and servers?

  One of Sir Tim Berners-Lee's initial ideas for HTTP was to allow the user
to edit the page in the browser and have that update the file on the server. 
HTTP didn't get this capability until May of 1996 [1] where various methods
were defined that included PUT and DELETE.  It's sad to think that no
browsers at that time even attempted to do that [2].  Instead, we kept on
using FTP, later sftp and maybe even rsync for the technically minded.

  As I mentioned, I know of at least two other people working on this
concept [3], and the notion that having a single tool (even if it just
shells out for the editing portion) might be a good idea to reduce friction. 
And the non-Unix thing is a bonus (but I wasn't even thinking in that
direction).

  -spc

[1]	RFC-1945 which describes HTTP/1.0.

[2]	Although if anyone knows of a browser that did, I'd like to know
	about it.

[3]	To support a wiki on Gemini.


4. Luke Emmet (luke (a) marmaladefoo.com)

On 13-Jun-2020 08:05, Sean Conner wrote:
> One of Sir Tim Berners-Lee's initial ideas for HTTP was to allow the user
> to edit the page in the browser and have that update the file on the server.
> HTTP didn't get this capability until May of 1996 [1] where various methods
> were defined that included PUT and DELETE.  It's sad to think that no
> browsers at that time even attempted to do that [2].  Instead, we kept on
> using FTP, later sftp and maybe even rsync for the technically minded.

Actually there was Amaya (defunct since 2012), the W3C reference browser, 
which implemented read and write for the web. There was also a version 
of Netscape Navigator (3.04 Gold) that had something similar.

https://www.w3.org/Amaya/

https://www.webdesignmuseum.org/old-software/web-browsers/netscape-navigator-3-04

Sadly it never really took off, as fewer than 1% (n.b. made-up 
statistic) of people write the content on the web, so no-one really 
needed that functionality. But in Gemini, the readers and writers are a 
more overlapping set, as the barriers to participation are lowered.

However, as we all know, what really did explode was the wiki concept of 
being able to edit content in your browser, which has led to the 
plethora of editable websites we see today, as well as to Markdown.

I'd really like to see a simple concept in gemini (+friends) of being 
able to do this. It would be good to thrash this out.

We already have the simplest, most useful text based markup language, so 
that bit is done.

I think your ideas for a schema are a good start.

> C: gemini+put://example.com/path/to/new.txt?mime=text/plain&size=1024 CRLF
> S: 1x continue CRLF
> C: size bytes of content

I would like to see the target path be a percent-encoded parameter to 
the end point, otherwise we cannot so easily implement multiple end 
points on the same server. For example, there might be a number of 
different servers or CGI applications able to receive content on the 
same server or domain.

So maybe something like this is a more flexible way to specify the put 
request:

C: gemini+put://example.com/put-handler?path=path/to/new.txt&mime=text/plain&size=1024 CRLF
<continues>

One idea I've been working with is to integrate this with the preformatted 
area of a page, and then the client could render certain types using a 
text editor, something like this:

 ```gemini+write://example.com/put-handler?path=path/to/new.txt

this content can be edited by the user and the client provides a button 
somewhere to "submit" it

 ```end of content

On submitting, the mime=text/plain and size=XYZ is provided
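
The percent-encoded path parameter Luke suggests can be built with the
standard library (a sketch; the handler name is his example, not a fixed
convention):

```python
from urllib.parse import quote

def build_handler_url(host, handler, target_path, mime, size):
    """Percent-encode the target path so it travels as a single
    query parameter, leaving the handler free to live anywhere."""
    encoded = quote(target_path, safe="")   # safe="" also encodes slashes
    return (f"gemini+put://{host}{handler}"
            f"?path={encoded}&mime={mime}&size={size}")
```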

Best Wishes

  - Luke


5. Felix Queißner (felix (a) masterq32.de)

Hey Sean and others!

> *snips the whole protocol description*

I too was working on a gemini extension to allow arbitrary data uploads,
as "10 INPUT" is a bit too restricted at ~700 bytes of data payload.

But now that I've read your gemini+put and gemini+delete extension, I think
I don't need to continue braining over my extension and its flaws.

I really like the proposal. It's easy to implement and integrate on both
the server and client side.
The "gemini+delete" scheme is just the normal gemini handling routine,
and the upload routine can be put "in front" of a normal gemini request
and handled with the normal response handler.

> The idea is for a user to be able to use a Gemini client to not only
> browse Gemini content, but *create* and upload new content to a
> server, from within the client (a client can shell out to an editor,
> for instance). This should reduce the friction of generating content.

Yes, that was my vision as well: allow users to write and update their
gemlogs from within a client. Right now, I'm sshing into my server and
authoring all the files by hand. But just opening Kristall, going to my
gemlog, activating my "editor certificate" and starting to edit my
site is just appealing!

Regards
- xq


6. Luke Emmet (luke (a) marmaladefoo.com)

Hi everyone

I've been thinking some more about this self-editing wiki concept, which 
seems a great application to support writers of Gemini content. I think 
there is an opportunity for a very simple addition to Gemini that would 
support client based content submission.

The mental model is quite simple - imagine a simple web page, having a 
single text area and a single submit button. The user can edit the text 
and submit the content. The client knows where to send the content (form 
attribute) and how to send it (HTTP protocol using POST).

Exactly what this is named doesn't really matter, and it will need to 
integrate with the authentication/certificate mechanisms we are already 
establishing.

Essentially there are a number of new elements

1. New scheme extending gemini, only for those that want to. This is not 
gemini, but something else. Whether it gets considered for gemini is a 
separate conversation.

2. An extended client behaviour working with the preformatted text 
regions having suitable markers to be defined

3. A simple text submit protocol (text/plain only, UTF-8 only)

The elements could look like this

1. The scheme name is gemini+submit:// or something; it doesn't really 
matter, but it is distinct from gemini://.

2. Using the preformatted regions to specify the URL end point to post 
to. Only end points having gemini+submit:// as the scheme would have an 
active behaviour. This is done in a backwards compatible way so simpler 
clients just render the content as preformatted text

Note we use 4 backticks to convey that the content may be edited and 
submitted. There could be some other option to indicate this; the syntax 
marker is not significant and could be changed to ```! or something else. 
This gracefully degrades and is valid text/gemini.

 ````gemini+submit://domain/end-point/handler?any-params-you-like-probably-includes-asset-id-or-path

 ```` (could be 3 or 4; doesn't really matter)

The 4 backticks mean existing clients will just show the text.

The URI will probably contain information for the server to know where 
to put the content such as:

asset-id=1234ae4f34ae

or

path=/path/on/filesystem/to/file.gmi

3. The client allows the user to edit the content and then "submit" 
(button or whatever) the content to the end point as follows:

CONTENTLENGTH<SPACE>FULL_URI_FROM_TEXT_AREA<CR><LF>
<client sends the byte content>
<client closes connection>

Only text/plain is ever sent, so we don't need to specify a MIME type. 
Simple and restricted.
Only UTF-8 is ever sent, so we don't need to specify it. Simple and restricted.

There is only ever one "block" of text submitted, which is the content 
of the preformatted area (no multi-field forms).

The end point on the server knows when content has arrived as the 
content length is pre-notified in the header, replies with redirect to 
success page probably.

The server may also respond requesting input or certificates.

On the server, the end point might be inside the server or could be a 
CGI or similar application that gets the content via stdin (as per 
POST in HTTP).
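
Elements 2 and 3 together can be sketched in Python (a sketch of the
provisional syntax and framing described above; none of this is a settled
spec):

```python
def find_editable_region(gmi_text):
    """Return (submit_url, content) for the first four-backtick region
    whose alt text is a gemini+submit:// URL, else None."""
    lines = gmi_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("````gemini+submit://"):
            url = line[4:].strip()
            body = []
            for inner in lines[i + 1:]:
                if inner.startswith("```"):   # closing fence: 3 or 4 ticks
                    break
                body.append(inner)
            return url, "\n".join(body)
    return None

def build_submission(uri, text):
    """Frame a submission: byte length, space, URI, CRLF, then the
    UTF-8 payload (text/plain and UTF-8 are implied by the scheme)."""
    payload = text.encode("utf-8")
    return f"{len(payload)} {uri}\r\n".encode() + payload
```

After the user edits the region, the client would call build_submission
with the (possibly modified) content and the URL recovered by
find_editable_region.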

It would be nice to adopt a common scheme for this together with the 
gemini+put:// (for arbitrary binary upload) and gemini+delete:// 
suggestions earlier in this thread, for example to integrate 
certificates, success or failure, etc.

Potentially this scheme can be used to edit simple text content of a 
number of back end applications.

No changes are needed to any client or server that does not wish to 
implement this.

I think this has a similar simplicity to the spirit of Gemini and does 
not open huge doors for a horse and cart to come through.

Best Wishes

  - Luke


7. solderpunk (solderpunk (a) SDF.ORG)

A couple of thoughts on this new line of discussion:

1. I am very grateful that people have made a point of defining this as
separate "companion" protocol rather than asking for it as part of the
core Gemini protocol, to give me a bit of breathing room.  I *do* like
the idea of naming it titan://...

2. At this point I can only laugh at the completeness with which using
URIs as requests has backfired on me.  First the userinfo auto cookie
debacle and now, hah, the scheme has proven its ability to function as
a vehicle for different request methods.  Yes, the '+' character is
valid in a URI scheme.  So, why not gemini+HEAD://, gemini+POST://,
gemini+PUT://....  Moral of the story: everything is always extensible
if you try hard enough.  Corollary: Gopher has remained largely
unextended for so long for non-technical, presumably cultural reasons.
There may be great wisdom to be found in understanding why.

3. Dear God, when/where/how does it stop?!  This was supposed to be a
simple, humble, protocol of limited scope!  But...

4. ...I totally "get it", with the ability to edit Gemini resources from
within the client.  Compared to the current situation where publishing
in Geminispace requires using scp, (s)ftp, rsync, git or whatever, this
feature would make the protocol accessible to literally several orders
of magnitude more people.  The decision to let this happen or to crush
it in the name of simplicity and minimalism could have tremendous
consequences for the eventual destiny of Gemini.  It's exciting stuff,
and I don't want to exclude non-technical people from using what we've
built.  But...

5. ...I'm wary that facilitating "user friendly" Gemini hosts which
anybody can post to using a client carries a very real risk of fostering
a culture of dependency where people don't know how to do anything not
offered as a feature by their provider, who has the ability to change,
remove, or charge for those features, and also of pushing us toward a
more centralised Geminispace where large numbers of users congregate on
a small number of servers which make themselves especially appealing to
non-technical users.  These servers could easily become little
semi-closed gardens with internal-only comment notification facilities
and other such niceties which work much more smoothly than the various
decentralised solutions people have already been talking about.  They
might not, but the risk is there: we might just end up creating new
versions of LiveJournal, Wordpress, Blogspot, etc, etc.

6. Does anybody else feel like we are just instinctively re-implementing
the familiar history of the web without much caution or critical
thought?

7. It's of no practical use today, here and now, for "everyday users",
but I just want to get it on the record that in a hypothetical future
where IPv6 or something else has provided us all with abundant
publically reachable addresses, the obvious and elegant way for a
client to upload to a Gemini "server" is actually to just host the
resource itself, on a random, one-use-only URL, and then send that URL
as a query to a well-known uploading endpoint on the other end,
whereupon the "server" briefly becomes a client and fetches the resource
in the usual way.  Nothing extra needed in the protocol at all!
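
That reverse-fetch idea needs nothing beyond an ordinary request whose
query is the one-use URL (a sketch; the "/.well-known/upload" endpoint name
is hypothetical, as no convention is specified here):

```python
from urllib.parse import quote

def build_reverse_upload(server_host, one_use_url,
                         endpoint="/.well-known/upload"):
    """Ask the server to fetch our temporarily hosted resource.

    The endpoint path is hypothetical; the one-use URL travels as a
    percent-encoded query on an ordinary Gemini request."""
    return (f"gemini://{server_host}{endpoint}"
            f"?{quote(one_use_url, safe='')}\r\n").encode()
```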

Cheers,
Solderpunk


8. Sean Conner (sean (a) conman.org)

It was thus said that the Great solderpunk once stated:
> 
> 7. It's of no practical use today, here and now, for "everyday users",
> but I just want to get it on the record that in a hypothetical future
> where IPv6 or something else has provided us all with abundant
> publically reachable addresses, the obvious and elegant way for a
> client to upload to a Gemini "server" is actually to just host the
> resource itself, on a random, one-use-only URL, and then send that URL
> as a query to a well-known uploading endpoint on the other end,
> whereupon the "server" briefly becomes a client and fetches the resource
> in the usual way.  Nothing extra needed in the protocol at all!

  I'll respond to the rest in another email, but I want to say that this is
a *brilliant* idea and could be done today.  All it would require is
configuring people's routers to forward a port to the computer (gamers
usually have to do this anyway to play online games---thanks NAT! [1]).  I
may end up doing a "proof-of-concept" on this method.

  The only downside is that limit of 1024 characters in the URL, but I don't
think that will actually be that much of a concern.

  -spc

[1]	This destroyed the original peer-to-peer nature of the Internet. 
	Sigh.


9. solderpunk (solderpunk (a) SDF.ORG)

On Sat, Jun 13, 2020 at 05:50:25PM -0400, Sean Conner wrote:
 
>   I'll respond to the rest in another email, but I want to say that this is
> a *brilliant* idea and could be done today. 

Thanks :)  I'm very pleased with it.

Cheers,
Solderpunk


10. Sean Conner (sean (a) conman.org)

It was thus said that the Great solderpunk once stated:
> A couple of thoughts on this new line of discussion:
> 
> 1. I am very grateful that people have made a point of defining this as
> separate "companion" protocol rather than asking for it as part of the
> core Gemini protocol, to give me a bit of breathing room.  I *do* like
> the idea of naming it titan://...

  Fair enough.  titan: it is.

> 2. At this point I can only laugh at the completeness with which using
> URIs as requests has backfired on me.  First the userinfo auto cookie
> debacle and now, hah, the scheme has proven its ability to function as
> a vehicle for different request methods.  Yes, the '+' character is
> valid in a URI scheme.  So, why not gemini+HEAD://, gemini+POST://,
> gemini+PUT://....  

  Yes, the more I thought about gemini+put:, gemini+del:, the less I liked
it.  And no, the extensibility did not escape me as I was working on this.

  Another way of looking at this is the lengths people will go to to solve
their problems using tools at hand, even if they may not be the best tools
to use to solve the issue.  See Raymond Chen's blog "The Old New Thing" [1]
where he describes some of the crazy things Windows programmers have done.

> Moral of the story: everything is always extensible if you try hard
> enough. Corollary: Gopher has remained largely unextended for so long for
> non-technical, presumably cultural reasons. There may be great wisdom to
> be found in understanding why.

  Gopher has its own URL scheme, which doesn't include:

	* userinfo
	* query string
	* fragments

  That hasn't stopped people from misusing it though (I've seen gopher URLs
with query strings even though that's not allowed).  I think the other
reason (besides cultural) is that it died *quickly* (for the most part) in
the mid-90s (when the UoM wanted to charge royalties for use of the protocol
when HTTP was free).  It limped along for *years* until maybe 2010 when
there was renewed interest, and it's not like people are refusing to use
UTF-8 or non-standard selector types because they're not allowed to ...

> 3. Dear God, when/where/how does it stop?!  This was supposed to be a
> simple, humble, protocol of limited scope!  But...
> 
> 4. ...I totally "get it", with the ability to edit Gemini resources from
> within the client.  Compared to the current situation where publishing
> in Geminispace requires using scp, (s)ftp, rsync, git or whatever, this
> feature would make the protocol accessible to literally several orders
> of magnitude more people.  The decision to let this happen or to crush
> it in the name of simplicity and minimalism could have tremendous
> consequences for the eventual destiny of Gemini.  It's exciting stuff,
> and I don't want to exclude non-technical people from using what we've
> built.  But...
> 
> 5. ...I'm wary that facilitating "user friendly" Gemini hosts which
> anybody can post to using a client carries a very real risk of fostering
> a culture of dependency where people don't know how to do anything not
> offered as a feature by their provider, who has the ability to change,
> remove, or charge for those features, and also of pushing us toward a
> more centralised Geminispace where large numbers of users congregate on
> a small number of servers which make themselves especially appealing to
> non-technical users.  These servers could easily become little
> semi-closed gardens with internal-only comment notification facilities
> and other such niceties which work much more smoothly than the various
> decentralised solutions people have already been talking about.  They
> might not, but the risk is there: we might just end up creating new
> versions of LiveJournal, Wordpress, Blogspot, etc, etc.

  As I said, HTTP has had all the features required to support "development
in the browser via HTTP only" since 1996, but as far as I know, no current
browser (or even old ones from the time) ever did this.  I know programs
like DreamWeaver [2] could upload documents to a server, but it did so over
FTP, not the web.

  Another aspect is that it wasn't apparent *how* to support methods like
PUT or DELETE in a general purpose web server, and it took several years for
Apache (for example) to even provide a way to do this---either via custom
Apache modules [3] or (eventually) through a CGI script.  In any case, it
required special configuration on the server side of things, and for mass
hosting ... you ain't gonna get that.  I can, but that's because I'm insane
enough to run my own web server (and email server, and gopher server, and
gemini server ... ).

> 6. Does anybody else feel like we are just instinctively re-implementing
> the familiar history of the web without much caution or critical
> thought?

  Believe me, I had to actively suppress thoughts pushing for more web-like
features during the initial development of Gemini.  And this proposal
wouldn't even have come to light if it weren't for Alex Schroeder working on
this because he likes wikis so much (also, see my response to point 2).

> 7. It's of no practical use today, here and now, for "everyday users",
> but I just want to get it on the record that in a hypothetical future
> where IPv6 or something else has provided us all with abundant
> publically reachable addresses, the obvious and elegant way for a
> client to upload to a Gemini "server" is actually to just host the
> resource itself, on a random, one-use-only URL, and then send that URL
> as a query to a well-known uploading endpoint on the other end,
> whereupon the "server" briefly becomes a client and fetches the resource
> in the usual way.  Nothing extra needed in the protocol at all!

  I responded to this in my previous email.

  -spc

[1]	https://devblogs.microsoft.com/oldnewthing/

[2]	An HTML editor from the mid to late 90s.

[3]	Not easy to do.  I did write an Apache module [4] twenty years ago
	and I find it easier to maintain an instance of Apache 1.3 to run it
	than to try to adapt the code to Apache 2.4.  Maybe one of these
	days I'll get a round tuit.

[4]	https://github.com/spc476/mod_litbook


11. Sean Conner (sean (a) conman.org)

It was thus said that the Great Felix Queißner once stated:
> Hey Sean and others!
> 
> > *snips the whole protocol description*
> 
> I too was working on a gemini extension to allow arbitrary data uploads,
> as "10 INPUT" is a bit too restricted at ~700 bytes of data payload.
> 
> But now that I've read your gemini+put and gemini+delete extension, I think
> I don't need to continue braining over my extension and its flaws.

  I'm curious as to what you were planning.  Care to share?

> I really like the proposal. It's easy to implement and integrate both on
> server and client side. The "gemini+delete" scheme is just the normal
> gemini handling routine and the upload routine can be put "in front" of a
> normal gemini request and can be handled with the normal response handler.

  I wouldn't be too sure about that.  My own "proof-of-concept" took about
five hours to write, and it's a separate server of its own.  I did that
because trying to integrate the new URL support would have been too invasive
of a change in GLV-1.12556.

  And while the protocol itself is easy, there were issues I had to be very
cautious about to ensure nothing bad happened when a delete was called for. 
For instance, I had to account for the deletion of a directory and
everything below it.  I also had to handle the creation of multiple
directories for the put method.

  -spc


12. Felix Queißner (felix (a) masterq32.de)


> 7. It's of no practical use today, here and now, for "everyday users",
> but I just want to get it on the record that in a hypothetical future
> where IPv6 or something else has provided us all with abundant
> publically reachable addresses, the obvious and elegant way for a
> client to upload to a Gemini "server" is actually to just host the
> resource itself, on a random, one-use-only URL, and then send that URL
> as a query to a well-known uploading endpoint on the other end,
> whereupon the "server" briefly becomes a client and fetches the resource
> in the usual way.  Nothing extra needed in the protocol at all!

As much as I like the idea, I can see it failing in the current world
of ISPs.
Real world example:
I was developing the finger implementation for Kristall on the train,
using a mobile hotspot. I had no way to use
finger://ping@cosmic.voyage from either my mobile phone's hotspot or
the public wifi in the train, as the firewalls prevent pinging,
NATing and everything else.

Most "normal" people I know don't even have a computer anymore, owning
pretty much only tablets and mobile phones. If we want to get these guys
on the train as well as possible content creators, we need to provide an
easy way of uploading resources to gemini space via a client request,
not a reverse-server.

Afaik it also isn't possible from some mobile devices to activly use
most of the network features (like listing on sockets, using UDP on some
devices is completly disabled, ...)

IPv6 will probably not change this, ever.

And that's why I want to have some kind of companion protocol or built-in
facility in Geminispace to get people on the train who are not tech
savvy.
I'm aware that we as the tech guys often forget that most people on a
computer don't have any idea how those machines actually work;
requiring them to run a reverse server for content uploads is just
impossible to explain, and it would have to be "hidden" in a client to
provide them content-changing options.

The question now is: do we want those people on the train as content
creators? If so, gemini should have a nice and easy way to let content
creators do their thing.

It's sad, but that's my view on reality
- xq


13. Matthew Graybosch (hello (a) matthewgraybosch.com)

On Sat, 13 Jun 2020 20:56:35 +0000
solderpunk <solderpunk at SDF.ORG> wrote:

> I *do* like the idea of naming it titan://...

Thanks. I honestly just couldn't resist.

> 5. ...I'm wary that facilitating "user friendly" Gemini hosts which
> anybody can post to using a client carries a very real risk of
> fostering a culture of dependency where people don't know how to do
> anything not offered as a feature by their provider, who has the
> ability to change, remove, or charge for those featurs, and also of
> pushing us toward a more centralised Geminispace where large numbers
> of users congregate on a small number of servers which make
> themselves especially appealing to non-technical users.

> 6. Does anybody else feel like we are just instinctively
> re-implementing the familiar history of the web without much caution
> or critical thought?

That's been nagging at me as well. TBH, I'm not actually comfortable
with the seeming necessity of public hosts like tanelorn.city to get
people creating and publishing gemini content. I'm not used to asking
people to trust me like this, and I'm not comfortable with the power I
have over people using tanelorn.city.

Let's be honest; it shouldn't be that hard to run a gemini daemon out
of a personal computer in your own home, whether it's your main desktop
or just a raspberry pi. The protocol is light enough that CPU and
memory usage should be next to nothing compared to Firefox or Chrome. 

It probably wouldn't be that hard for a competent Windows or OSX
developer to create a graphical app, suitable for people who aren't
sysadmins, that publishes an arbitrary directory and starts up again
whenever their PC or Mac reboots. The nature of the protocol all but
guarantees that.

I think the biggest problem, at least in the US, is that ISPs seem
hellbent on keeping residential internet users from using their
connections for anything but consumption. You've got to use a dynamic
DNS service like no-ip.com, and even if you manage that you might still
find yourself getting cut off over a TOS violation. People are
thoroughly conditioned toward using the internet as glorified cable TV,
and only expressing themselves on platforms they don't control.

Then there's DNS, domain names, ICANN, etc. Maybe if we still used a
UUCP-style addressing scheme like
<country>.<province>.<city>.<neighborhood>.<hostname> it wouldn't
matter what I called my host as long as the hostname was unique to the
<neighborhood>. But instead we settled on <domain-name>.<tld>, which
needs to be administered by registrars to ensure uniqueness, and domain
registration is yet more sysadmin stuff that most people don't
necessarily have the time, skill, or inclination to deal with.

I would prefer that public hosts weren't necessary. I think that
everybody who wants to should be able to publish from their own device
without having to become a sysadmin. As long as operating a gemini
service remains the province of sysadmins, we're going to maintain the
division between haves (sysadmins) and have nots (people who can't or
don't want to sysadmin) that prevented the web from becoming (or
remaining) a democratic platform.

This became something of a political rant, and I probably should have
put it on demifiend.org instead. Sorry if this doesn't belong here; I'm
posting this under a new subject so that it starts a new thread instead
of derailing the existing one.

-- 
Matthew Graybosch		gemini://starbreaker.org
#include <disclaimer.h>		gemini://demifiend.org
https://matthewgraybosch.com	gemini://tanelorn.city
"Out of order?! Even in the future nothing works."

Link to individual message.

14. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 14, 2020, at 00:07, solderpunk <solderpunk at SDF.ORG> wrote:
> 
>>  I'll respond to the rest in another email, but I want to say that this is
>> a *brilliant* idea and could be done today. 
> 
> Thanks :)  I'm very pleased with it.

A bit like the 'Mode Switching' in NNTP of yore. 

Such a mode switch---where client and server trade places---would be a very 
nice use of the protocol capabilities. Very cool idea. Like it.

Eddie Murphy would be proud [1].

If only NAT traversal was a solved problem though. Sigh. Where is my 
overlay network when I need it.

Meanwhile, perhaps a simple 'patch' command would do.

For example, to turn specification.gmi into specification-modified.gmi, 
one could issue a patch command transforming the former into the latter:

gemini://gemini.circumlunar.space/docs/specification.gmi?data:text/x-patch...

Unfortunately, the 1024-byte limit doesn't get us very far. The diff 
itself is ~14K, ~5K compressed. Too big for one request.

Fossil delta format [2] is much more compact than diff -u, but still 
weighs ~4K, 2K compressed. And this is not accounting for data: encoding overhead.

So, hmmm, 1024 bytes is quite a limiting factor if one must use only one request.

Perhaps this could be worked around using a sequence of requests, ala 
chunked transfer encoding [3]:

gemini://gemini.circumlunar.space/docs/specification.gmi?data:... chunk1
gemini://gemini.circumlunar.space/docs/specification.gmi?data:... chunk2
gemini://gemini.circumlunar.space/docs/specification.gmi?data:... chunk3

The server would then reassemble the various parts and apply the delta.

A bit clunky, but workable :D
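For concreteness, here is a rough Python sketch of how a client might split a payload across such a sequence of requests. The `?data:;base64,` query convention and the reassembly rule are assumptions of this thought experiment, not anything specified; only the 1024-byte request-line limit comes from the spec.

```python
import base64

# Hypothetical chunking helper: split a binary delta into a series of
# request URLs, each of which (plus its CRLF terminator) fits within
# Gemini's 1024-byte request line.  The URL-safe base64 alphabet is
# used so the query avoids raw '+' and '/' characters.
def chunk_requests(base_url, payload, limit=1024):
    encoded = base64.urlsafe_b64encode(payload).decode("ascii")
    # Bytes left over for data once the URL scaffolding and the
    # 2-byte CRLF terminator are accounted for.
    budget = limit - len(base_url) - len("?data:;base64,") - 2
    if budget <= 0:
        raise ValueError("base URL leaves no room for data")
    return [
        f"{base_url}?data:;base64,{encoded[i:i + budget]}"
        for i in range(0, len(encoded), budget)
    ]
```

The server would concatenate the base64 fragments in order and decode the result to recover the delta; since the fragments are just slices of one encoded string, reassembly is plain string concatenation.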

[1] https://en.wikipedia.org/wiki/Trading_Places
[2] https://www.fossil-scm.org/xfer/doc/trunk/www/delta_format.wiki 
[3] https://en.wikipedia.org/wiki/Chunked_transfer_encoding


P.S. FWIW, attached is the fossil delta between specification.gmi and 
specification-modified.gmi

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: specification.delta.txt
URL: <https://lists.orbitalfox.eu/archives/gemini/attachments/20200614/26e687af/attachment.txt>

Link to individual message.

15. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 13, 2020, at 22:56, solderpunk <solderpunk at SDF.ORG> wrote:
> 
>  Nothing extra needed in the protocol at all!

Considering the various network topology hurdles along the way, perhaps 
the gemini protocol itself could facilitate such a role reversal by 
offering a way to initiate such switch.

Link to individual message.

16. Sean Conner (sean (a) conman.org)

It was thus said that the Great Matthew Graybosch once stated:
> 
> Let's be honest; it shouldn't be that hard to run a gemini daemon out
> of a personal computer in your own home, whether it's your main desktop
> or just a raspberry pi. The protocol is light enough that CPU and
> memory usage should be next to nothing compared to Firefox or Chrome. 

 ... 

> I think the biggest problem, at least in the US, is that ISPs seem
> hellbent on keeping residential internet users from using their
> connections for anything but consumption. 

  As someone who has worked for various ISPs and webhosting companies for
most of my career, I think this slamming of ISPs is unwarranted.  And as
someone who runs both a public server *and* a few services on my home
network [1] there are some things you need to consider.

1. Open servers are *attacked* at an alarming rate. At home, I run an sshd
instance that is open to the Internet [2].  I am currently blocking 2,520
hosts that have attempted to log in via ssh.  That count is only over the
past 30 days (technically, 30 days, 10 hours, 30 minutes, as that's the
average month length over the year).  Not doing so means my machine will be
constantly under login attempts.

  99% of all traffic to my webserver (on my actual public server) is
automated programs, not actual humans.  Most are just webbots spidering my
content, some are script kiddies looking for an exploit and some are just
incompetently written programs that just blow my mind [3].  There's the
weird network traffic that just sucks up connection requests [4].  And then
there's the *weird* (and quite stressful) situations involving black-hat
hackers [5].

  Then there's the issues with running UDP based services [6].  It's not
pretty on the open Internet.

2. If people could run a business server on their home connection, they
would.  Then they'll bitch and moan about the service being slow, or can't
the ISP do something about the DDoS attack they're under?  Even if they
aren't and their service is just popular.  Or why their connection dropped? 
Never mind the power is out, why did my server lose connection?

  Or in self defense, the ISP cuts the connection because the home server is
running a port scanner, participating in a botnet, or sending out spam
emails because of an unpatched exploit in some server being run at home.

3. Do people realize they'll need to basically firewall off their Windows
boxes?  Seriously, the level of exploits on Windows is (was?) staggering, and
the number of services (like file sharing) it runs by default (because
that's what the users want) is *not* conducive to allowing a Windows
box full access to the Internet.  The same can be said for Mac and Linux,
but to a slightly lesser degree.

4. It was email that poisoned home-run servers initially.  Spam increased
dramatically during the late 90s/early 2000s to the point where it became a
Byzantine nightmare to configure and run an email server due to SPF, DMARC
and DKIM, along with greylisting and filtering of attachments.  Oh, and as a
self-defense mechanism, nearly every ISP around the world will block
incoming/outgoing TCP port 25 to home users.

> You've got to use a dynamic
> DNS service like no-ip.com, and even if you manage that you might still
> find yourself getting cut off over a TOS violation. People are
> thoroughly conditioned toward using the internet as glorified cable TV,
> and only expressing themselves on platforms they don't control.

  That is true too, but I suspect even *if* you could easily run a server at
home, 99% would not even bother (or know what it is).

> Then there's DNS, domain names, ICAAN, etc. Maybe if we still used a
> UUCP-style addressing scheme like
> <country>.<province>.<city>.<neighborhood>.<hostname> it wouldn't
> matter what I called my host as long as the hostname was unique to the
> <neighborhood>. But instead we settled on <domain-name>.<tld>, which
> needs to be administered by registrars to ensure uniqueness, and domain
> registration is yet more sysadmin stuff that most people don't
> necessarily have the time, skill, or inclination to deal with.

  There are groups working on alternative naming/routing schemes that don't
require a global namespace.  It's not an easy problem.

  Also, at one time, domains under the .us domain were restricted to
geographical names, like example.boca-raton.fl.us.  But they were free to
register, and as far as I can tell, permanent.  The issue though, is that
even under <city>.<state>.us, you still need unique names, although it's
a smaller area to worry about.

  I don't think you can do that anymore.  I went down that rabbit hole
several months ago looking to register a geographical domain under .us and
couldn't do it (or find out who controls the domains under
boca-raton.fl.us).  Pity, I was hoping to get a free domain registration
for life.

> I would prefer that public hosts weren't necessary. I think that
> everybody who wants to should be able to publish from their own device
> without having to become a sysadmin. As long as operating a gemini
> service remains the province of sysadmins, we're going to maintain the
> division between haves (sysadmins) and have nots (people who can't or
> don't want to sysadmin) that prevented the web from becoming (or
> remaining) a democratic platform.

  Never underestimate the lack of giving a damn the general population has. 
I'm sure there are aspects of your life that you lack a damn about that
other people think you should give more than a damn.

> This became something of a political rant, and I probably should have
> put it on demifiend.org instead. Sorry if this doesn't belong here; I'm
> posting this under a new subject so that it starts a new thread instead
> of derailing the existing one.

  I think it's a conversation worth having, as it relates to how Gemini
expands with new content.

  -spc

[1]	Disclaimer: I do pay extra for a static IPv4 address---at the time I
	needed it for my job, and now it's a "nice to have" and I can still
	afford it.  It's actually not that much over the stock price of
	service.

[2]	My router will forward ssh traffic to my main development system.

[3]	http://boston.conman.org/2019/07/09-12
	http://boston.conman.org/2019/08/06.2

[4]	http://boston.conman.org/2020/04/05.1

[5]	http://boston.conman.org/2004/09/19.1

[6]	http://boston.conman.org/2019/05/13.1

Link to individual message.

17. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 14, 2020, at 02:49, Petite Abeille <petite.abeille at gmail.com> wrote:
> 
> Considering the various network topology hurdles along the way, perhaps 
> the gemini protocol itself could facilitate such a role reversal by 
> offering a way to initiate such switch.  

Perhaps this could be all done with just 1x (INPUT), between consenting pairs:

C: gemini://.../specification.gmi??gemini://.../delta.txt -- notify the 
server of the location of the delta and to switch roles to get it, as 
indicated by the ? sigil
C↔S: <start role reversal> -- the client maintains the network connection 
and accepts one gemini request
S: gemini://.../delta.txt -- the server requests the content from the client 
C: 20 text/x-patch; length=4106 -- the client returns the data to the server
C↔S: <end role reversal> -- upon EOF
S: 30 gemini://.../specification.gmi -- client redirected to updated resource

This would require a persistent connection though. And some sort of 
indications of content EOF, be it length or otherwise.

Link to individual message.

18. Sean Conner (sean (a) conman.org)

It was thus said that the Great Petite Abeille once stated:
> > On Jun 13, 2020, at 22:56, solderpunk <solderpunk at SDF.ORG> wrote:
> > 
> >  Nothing extra needed in the protocol at all!
> 
> Considering the various network topology hurdles along the way, perhaps
> the gemini protocol itself could facilitate such a role reversal by
> offering a way to initiate such switch.

  Such a thing would require a new response code block, like 70 (for
UPLOAD---INPUT is inadequate for this).  Basic servers can then ignore this
block like they can ignore client certificates.  It would also require
knowing of the upload link, but I think that can be solved using
pre-existing measures (link titles, well-known links, etc.).

  Here's a simple proposal:

C: gemini://example.com/post-endpoint CRLF
S: 70 Proceed with upload CRLF
C: - 1234 text/plain; charset=us-ascii CRLF
C: data ...
S: 2x / 3x / 4x / 5x CRLF

C: gemini://example.com/put-endpoint CRLF
S: 71 Proceed with new resource CRLF
C: gemini://example.com/path/to/new/resource 1234 text/plain; charset=us-ascii CRLF
C: data ...
S: 31 gemini://example.com/path/to/new/resource CRLF

  For the second method, a size of 0 means to delete the resource.  I made
the client respond the same way to both to simplify the implementation. 
There is a semantic difference between a generic UPLOAD and the replacement
of an existing resource on the server, thus the two codes.

  A case could be made of making the size and mime type query parameters to
the endpoint---that would allow the server to check the proposed type and
sizes and reject before the upload starts.  In that case, I would propose
this:

C: gemini://example.com/post-endpoint?size=1234&mime=text/plain;charset=us-ascii CRLF
S: 70 Proceed with upload CRLF
C: data ...
S: 2x / 3x / 4x / 5x CRLF

C: gemini://example.com/put-endpoint?size=1234&mime=text/plain;charset=us-ascii CRLF
S: 71 Proceed with new resource CRLF
C: gemini://example.com/path/to/new/resource CRLF
C: data ...
S: 31 gemini://example.com/path/to/new/resource CRLF

  Depending upon the CGI implementation, this could even be handled via CGI
rather than the server.
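  For what it's worth, the client's side of that query-parameter variant
is only a few lines.  A sketch in Python; the 70/71 codes and the
size/mime query keys are this proposal only, and the helper names are
made up:

```python
from urllib.parse import quote

# Build the request line for the proposed upload handshake.  The size=
# and mime= query keys belong to this proposal, not to Gemini proper.
def build_upload_request(endpoint, size, mime):
    return f"{endpoint}?size={size}&mime={quote(mime, safe='')}\r\n"

# Only a 70 ("Proceed with upload") or 71 ("Proceed with new resource")
# status permits the client to send the payload; any other status is a
# refusal and the body must not be sent.
def may_send_body(status_line):
    return status_line[:2] in ("70", "71")
```

A client would send the request line, read one status line, and stream
the payload only when may_send_body returns true.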

  -spc

Link to individual message.

19. Sean Conner (sean (a) conman.org)

It was thus said that the Great Petite Abeille once stated:
> > On Jun 14, 2020, at 02:49, Petite Abeille <petite.abeille at gmail.com> wrote:
> > 
> > Considering the various network topology hurdles along the way, perhaps
> > the gemini protocol itself could facilitate such a role reversal by
> > offering a way to initiate such switch.
> 
> Perhaps this could be all done with just 1x (INPUT), between consenting
> pairs:
> 
> C: gemini://.../specification.gmi??gemini://.../delta.txt -- notify the 
> server of the location of the delta and to switch roles to get it, as 
> indicated by the ? sigil
> C↔S: <start role reversal> -- the client maintains the network connection 
> and accepts one gemini request
> S: gemini://.../delta.txt -- the server requests the content from the client 
> C: 20 text/x-patch; length=4106 -- the client returns the data to the server
> C↔S: <end role reversal> -- upon EOF
> S: 30 gemini://.../specification.gmi -- client redirected to updated resource
> 
> This would require a persistent connection though. And some sort of
> indications of content EOF, be it length or otherwise.

  You know, another (maybe silly) way:

C: inimeg://example.com/specification.gmi CRLF
   <starts role reversal---the connection is maintained and the server sends the request>
S: gemini://example.com/specification.gmi CRLF
C: <data>
C: 20 text/gemini CRLF
   <end role reversal>
S: 31 gemini://example.com/specification.gmi CRLF

  This doesn't solve the uploading of just data though.

  -spc

Link to individual message.

20. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 14, 2020, at 04:21, Sean Conner <sean at conman.org> wrote:
> 
>  You know, another (maybe silly) way:
> 
> C: inimeg://example.com/specification.gmi CRLF
>    <starts role reversal---the connection is maintained and the server 
> sends the request>
> S: gemini://example.com/specification.gmi CRLF
> C: <data>
> C: 20 text/gemini CRLF
>    <end role reversal>
> S: 31 gemini://example.com/specification.gmi CRLF

inimeg! Like! :)

>  This doesn't solve the uploading of just data though.

Why not? Just inimeg to a new URL.

So, to create a resource is the same as updating a resource.

C: inimeg://path/to/resource/role_reversal.gmi
↔
S: gemini://path/to/resource/role_reversal.gmi
C: 20 text/gemini;length=123
↔
S: 31 gemini://path/to/resource/role_reversal.gmi

Link to individual message.

21. Matthew Graybosch (hello (a) matthewgraybosch.com)

On Sat, 13 Jun 2020 21:22:15 -0400
Sean Conner <sean at conman.org> wrote:

> As someone who has worked for various ISPs and webhosting companies
> for most of my career, I think this slamming of ISPs is unwarranted.

You're probably right.

> 1. Open servers are *attacked* at an alarming rate. At home, I run an
> sshd instance that is open to the Internet [2].  I am currently
> blocking 2,520 hosts that have attempted to log in via ssh.  That
> count is only over the past 30 days (technically, 30 days, 10 hours,
> 30 minutes, as that's the average month length over the year).  Not
> doing so means my machine will be constantly under login attempts.

I'm finding this out the hard way. Fortunately I thought to disable
root logins in /etc/ssh/sshd_config when I first set up my VPS, but I'm
also reading up on fail2ban. Thinking of using this HOWTO since it
emphasizes not tampering with distributed config files.

https://phrye.com/tools/fail2ban-on-freebsd/
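For what it's worth, the minimal jail.local that HOWTOs like that one
tend to arrive at looks something like this (the section and option
names are standard fail2ban; the values and the FreeBSD path are
illustrative):

```ini
# /usr/local/etc/fail2ban/jail.local -- local overrides only; the
# distributed jail.conf is left untouched
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 3

[sshd]
enabled = true
```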

> And then there's the *weird* (and quite stressful) situations
> involving black-hat hackers [5].

You know what? I think I recognize your email because I've read about
your experience with the black-hat.

I'm reading this email and thinking, "Dear creeping gods, what have I
gotten myself into?"

> 2. If people could run a business server on their home connection,
> they would. ... Never mind the power is out, why did my
> server lose connection?

I've been this clueless. Fortunately my phone wasn't working so I
couldn't inflict it on some poor tech support worker.

>   Or in self defense, the ISP cuts the connection because the home
> server is running a port scanner, participating in a botnet, or
> sending out spam emails because of an unpatched exploit in some
> server being run at home.


You're right, this is legit.
 
> 3. Do people realize they'll need to basically firewall off their
> Windows boxes?

I firewalled the hell out of my wife's Windows machine just to block
the damn telemetry. It's insane.

> 4. It was email that poisoned home-run servers initially.

I remember this now. I know there was a reason I was reluctant to even
try setting up external email on tanelorn.city. I thought I was just
being irrational.

> That is true too, but I suspect even *if* you could easily run a
> server at home, 99% would not even bother (or know what it is).

Fair point.
 
> Never underestimate the lack of giving a damn the general
> population has. I'm sure there are aspects of your life that you
> lack a damn about that other people think you should give more than a
> damn.

You're right. It's just that I see barriers and had forgotten that some
of the barriers exist for a reason.

> I think it's a conversation worth having, as it relates to how
> Gemini expands with new content.

Thanks for taking the time to reply. There's a lot here that I either
didn't know or had forgotten.

-- 
Matthew Graybosch		gemini://starbreaker.org
#include <disclaimer.h>		gemini://demifiend.org
https://matthewgraybosch.com	gemini://tanelorn.city
"Out of order?! Even in the future nothing works."

Link to individual message.

22. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 14, 2020, at 04:39, Petite Abeille <petite.abeille at gmail.com> wrote:
> 
> 
> 
>> On Jun 14, 2020, at 04:21, Sean Conner <sean at conman.org> wrote:
>> 
>> You know, another (maybe silly) way:
>> 
>> C: inimeg://example.com/specification.gmi CRLF
>>   <starts role reversal---the connection is maintained and the server 
>> sends the request>
>> S: gemini://example.com/specification.gmi CRLF
>> C: <data>
>> C: 20 text/gemini CRLF
>>   <end role reversal>
>> S: 31 gemini://example.com/specification.gmi CRLF
> 
> inimeg! Like! :)
> 
>> This doesn't solve the uploading of just data though.
> 
> Why not? Just inimeg to a new URL.
> 
> So, to create a resource is the same as updating a resource.
> 
> C: inimeg://path/to/resource/role_reversal.gmi
> ↔
> S: gemini://path/to/resource/role_reversal.gmi
> C: 20 text/gemini;length=123
> ↔
> S: 31 gemini://path/to/resource/role_reversal.gmi
> 

To delete something:

C: inimeg://path/to/resource/role_reversal.gmi
↔
S: gemini://path/to/resource/role_reversal.gmi
C: 20 dev/null
↔
S: 31 gemini://.../thanks

Link to individual message.

23. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 14, 2020, at 04:51, Petite Abeille <petite.abeille at gmail.com> wrote:
> 
> To delete something:

Actually:

C: inimeg://path/to/resource/role_reversal.gmi
↔
S: gemini://path/to/resource/role_reversal.gmi
C: 52 GONE
↔
S: 31 gemini://.../thanks

Link to individual message.

24. Sean Conner (sean (a) conman.org)


  To begin with, I'm going with the titan: scheme, to ensure that this isn't
mixed up with the gemini: scheme itself.

  This proposal is mostly based upon a new URL scheme, and I've spent the
day diving deep into RFC-7595 (URI Scheme Guidelines June 2015, current
standard).  First, the good news---gemini fits the spec for a "Permanent
Status" with the IETF.  Next, these bits from the RFC:

Section 1:

	The URI syntax provides a federated and extensible naming system,
	where each scheme's specification can further restrict the syntax
	and define the semantics of identifiers using that scheme.

Section 3.4:

	Note: It is perfectly valid to say that "no operation apart from GET
	is defined for this URI."

  Both are quite good for the current definition of the gemini: scheme. 

  It's some other bits from section 3.4 that bodes somewhat well for the
current proposal (with a new scheme) under consideration:

	It is also valid to say that "there's only one operation defined for
	this URI, and it's not very GET-like." The important point is that
	what is defined on this scheme is described ... The default
	invocation, or dereferencing, of a URI SHOULD be "safe" in the sense
	described by Section 3.4 of [W3CWebArch]; i.e., performing such an
	invocation should not incur any additional obligations by doing so.

[W3CWebArch]	https://www.w3.org/TR/webarch/#safe-interaction

  So doing a non-GET method based on a scheme is okay.  That's the one thing
I was worried about, as all the currently registered schemes [1]
appear to *only* specify a location, not an action and a location.  So
the following are "okay" (ish) per the spec:

	titan+put:
	titan+post:
	titan+del:

  Furthermore, from RFC-6335 (Service Name and Port Number Procedures
August 2011), section 5:

	There may be more than one service name associated with a particular
	transport protocol and port.  There are three ways that such port
	number overloading can occur:

	o  Overloading occurs when one service is an extension of another
	   service, and an in-band mechanism exists for determining if the
	   extension is present or not.

  So I'm still "okay" (ish) with the new URL schemes.  

  I rejected the following URL:

	titan://put at example.com/path/to/new/item

  While it's cute, and syntactically correct, semantically it's quite a
stretch---it's not a "user", it's a "command", which doesn't semantically
follow as a user nor a location.  It's too much of an abuse for my liking.

  Semantically, I would probably treat these three new schemes differently. 
The first, titan+post: (or titan-post: which is easier to type) would be:

	titan+post://example.com/post-handler/endpoint?size=1234&mime=text/plain

  The size and MIME types are part of the query string, as the data being
uploaded is *NOT* a replacement of a resource on the server, but data for a
service to consume, so semantically, it makes sense as a query string.

	titan+put://example.com/path/to/new/resource;size=1234&mime=text/plain

  Here a resource is being replaced---there's no "endpoint" per se to receive
the data, so query data doesn't make semantic sense.  The size and MIME type
are inherent properties of the resource being uploaded, so by using the ';'
as a sub-delimiter in the path, it semantically relates to the resource. 
That semantic relationship doesn't exist with a query string.

	titan+del://example.com/path/to/remove

  Nothing more to say, other than the resource is removed.

  Upon reflection, given the semantic meanings involved, I can cut the
number of new schemes down to just one: "titan:".  Here are the three
examples again:

	titan://example.com/post-handler/endpoint?size=1234&mime=text/plain
	titan://example.com/path/to/new/resource;size=1234&mime=text/plain
	titan://example.com/path/to/remove;size=0

  The logic goes something like this [2]:

	if the request has a query, it's an upload of data---accept data.
	if the request has no query, and the path parameter (marked by ';')
		doesn't exist---error.
	if the request has no query, and the path parameter exists:
		if size==0, delete the resource
		if size>0, accept data and make the resource available.
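  That dispatch is simple enough to sketch.  Here it is in Python for
concreteness; the parsing conventions follow the examples above, and the
function name is, of course, made up:

```python
from urllib.parse import urlsplit, parse_qs

# Classify a titan: request as "post", "put", or "delete" using the
# query-vs-path-parameter rule described above.
def classify_titan(url):
    parts = urlsplit(url)
    if parts.scheme != "titan":
        raise ValueError("not a titan URL")
    if parts.query:
        # A query means data for an endpoint to consume---an upload.
        return ("post", parts.path, parse_qs(parts.query))
    path, sep, param = parts.path.partition(";")
    if not sep:
        # No query and no path parameter: nothing to do---error.
        raise ValueError("no query and no path parameter")
    params = parse_qs(param)
    size = int(params.get("size", ["0"])[0])
    return ("delete" if size == 0 else "put", path, params)
```

(This assumes proper authorization and data checks happen elsewhere, per
footnote [2].)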

  So that's my current thinking (other than having a way for Gemini to
reverse the flow).

  -spc

[1]	https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml

[2]	Assuming proper authorization and data checks are made.

Link to individual message.

25. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 14, 2020, at 04:59, Petite Abeille <petite.abeille at gmail.com> wrote:
> 
>> To delete something:
> 
> Actually:
> 
> C: inimeg://path/to/resource/role_reversal.gmi
> ↔
> S: gemini://path/to/resource/role_reversal.gmi
> C: 52 GONE
> ↔
> S: 31 gemini://.../thanks

To move something:

C: inimeg://.../old.gmi
↔
S: gemini://.../old.gmi
C: 31 gemini://.../new.gmi
↔
S: 31 gemini://.../new.gmi

Of course, the server could also respond with 59 BAD REQUEST, or whatever 
is appropriate, if it doesn't like the client commands.

Link to individual message.

26. case (me (a) case.codes)



On June 13, 2020 2:56:35 PM MDT, solderpunk <solderpunk at SDF.ORG> wrote:
>A couple of thoughts on this new line of discussion:
>

>3. Dear God, when/where/how does it stop?!  This was supposed to be a
>simple, humble, protocol of limited scope!

To me, this is key and exactly where the web went off the rails. If this 
is embraced by the community it becomes a de facto standard; clients that 
do not support it will be considered lesser. This is what happened to Mosaic and Netscape.

>
>6. Does anybody else feel like we are just instinctively
>re-implementing
>the familiar history of the web without much caution or critical
>thought?

Yes!


Why does this have to be a part of or associated with gemini at all? Why 
not build a separate protocol and client and build a community of authors around it?

In my mind gemini is a passive reader's tool. It gets all of the noise out 
of the way, advertising, comments, likes, social media distractions, 
incentives to grow audiences etc...

How about a protocol extension that says user input MUST NOT be persisted by servers?

Cheers,
Case


-- Email domain proudly hosted at https://migadu.com

Link to individual message.

27. Martin Keegan (martin (a) no.ucant.org)

On Sat, 13 Jun 2020, case wrote:

> On June 13, 2020 2:56:35 PM MDT, solderpunk <solderpunk at SDF.ORG> wrote:

>> 3. Dear God, when/where/how does it stop?!  This was supposed to be a
>> simple, humble, protocol of limited scope!
>
> To me, this is key and exactly where the web went off the rails. If this 
> is embraced by the community it becomes a de facto standard; clients that 
> do not support it will be considered lesser. This is what happened to 
> Mosaic and Netscape.

> Why does this have to be a part of or associated with gemini at all? Why 
> not build a separate protocol and client and build a community of 
> authors around it?

I think an upload protocol should *not* be developed. We should instead 
concentrate on creating the *content* that should be uploaded, and 
improving the *existing* tools for getting content onto servers.

Until I started using Gemini, I had avoided learning how to use TRAMP in 
Emacs - it is a system which allows you to edit remote files, so you open 
up /ssh:gemini.ucant.org:public_gemini/index.gemini in the editor, and it 
does SFTP to gemini.ucant.org and edits public_gemini/index.gemini for you 
transparently. It knows about ftp and a bunch of other protocols. 
Similarly, there are all sorts of user-space filesystems out there 
nowadays, such as sshfs, and good old git post-receive hooks and NFS 
automounts. The situation on Windows is apparently even better.
So my conventional "rsync this directory to the right spot on the server" 
script is basically redundant.

I'd say, Keep Gemini Simple, and improve the documentation and tooling for 

caution that WebDAV never took off, and this looks like GeminiDAV. And 
finally I'd caution that making things easier for those who are not
"technically savvy" should almost never be at the cost of indirectly
making it harder for those who are.

Mk

-- 
Martin Keegan, +44 7779 296469, @mk270, https://mk.ucant.org/

Link to individual message.

28. solderpunk (solderpunk (a) SDF.ORG)

On Sun, Jun 14, 2020 at 01:09:33AM +0200, Felix Queißner wrote:
 
> Most "normal" people I know don't even have a computer anymore, owning
> pretty much only tablets and mobile phones.

This is something that I know, abstractly, to be true, but my brain kind
of self-censors it out of my everyday thinking because, sheesh, what a
terrifying prospect to contemplate!

I genuinely don't mean to pass judgement on those people, because I
don't doubt that it makes a lot of sense for them to do that, based on
what they want to use computers and the internet for.  At the same time,
choosing to have only those devices does pretty much mean willfully
opting out of producing meaningful written content - the kind of thing
that Gemini is specifically designed to distribute.  There is a
reason that email clients for those platforms often automatically insert
signatures saying "Written in X on a Y, please excuse Z".  Writing text
on them is *such* a miserable experience that it causes people to
produce text falling so far below societal expectations of what's
appropriate for written communication that an apology is called for.
That's not hyperbole, it's literal truth!  Let that sink in.  I don't
think it makes sense to try to make it easy to publish content to Gemini
using that kind of device.  It's straight up the wrong tool for the job.

Lowering the barriers to entry for people who aren't familiar with ssh
and unix file permissions is good and appropriate, but I don't think
requiring a "real computer" is *inappropriate*.  This may rule a lot of
people out as potential content producers, but frankly they've ruled
*themselves* out.  I don't see this as exclusionary or discriminatory:
A secondhand laptop that's 10 years old or more is *absolutely* capable
of writing text/gemini content (I am writing this email on the same 9
year old surplus Thinkpad that I've used to author all my Gemini content
*and* software, and I bet there is already somebody on this list using
something older and less powerful) and is far cheaper than any
smartphone or tablet.  Heck, a big part of the appeal of Geminispace for
me is the knowledge that I can use "ancient" hardware and turn the CPU
scaling down to the lowest setting to save battery life and it will
remain perfectly usable, and even if I run multiple clients at once the
fan will never, ever spin up!  What a dream...anyway, my point is cheap,
old laptops in danger of ending up as landfill are not only more capable
tools for writing Gemini content than iPhones and iPads, by virtue of
having actual text input peripherals, they are also accessible to and
inclusive of a wider range of potential content producers.  So let's not
go out of our way to accommodate crippled devices.

The discussion around making publishing content in Geminispace easier
for people who aren't technogeeks is well worth having and I don't want
to stifle it.  But we should keep it grounded and not let ourselves get
carried away with dreams of massive mainstream adoption and thinking
about what the proverbial "man in the street" needs to start publishing
content.  We are talking about a system designed specifically for
distributing relatively long-form writing without any bells or whistles. 
Most people simply aren't going to be interested in that no matter how
easy it is.  People who are really excited about it because they're fed
up with the web may be so enthusiastic that they are willing to invest a
little bit of time in learning how to publish, and we definitely should
not waste that opportunity!  If anybody has been wanting to contribute
valuable content to Geminispace but has been lacking in ideas,
accessible explanations of how to use tools like sftp, written for a
broad audience, would definitely not be a bad thing to have...

Finally, we have kind of conflated two separate concerns here.  One is
how to make publishing content to Geminispace easier for people for whom
setting up a VPS or joining a pubnix and scping up files is well beyond
their knowledge and experience.  The other is how to make
collaboratively edited things like wikis possible - the two problems are
related, but not identical, and may have different viable solutions.

Cheers,
Solderpunk

Link to individual message.

29. solderpunk (solderpunk (a) SDF.ORG)

On Sun, Jun 14, 2020 at 02:33:52AM +0200, Petite Abeille wrote:
> 
> Unfortunately, the 1024 bytes limit doesn't get  us very far. The diff 
itself is ~14K. ~5K compressed. Too big for one request.
> 
> Fossil delta format [2] is much more compact than diff -u, but still 
weighs ~4K, 2K compressed. And this is not accounting for data: encoding overhead.
> 
> So, hmmm, 1024 bytes is quite a limiting factor if one must use only one request.

Well, look - 1024 bytes as a maximum URL length is a value I more or
less plucked out of the air without deep consideration, simply because
the elements passed around by a protocol *should* have a well-defined
maximum length so people can allocate appropriately sized memory
buffers, etc.  I certainly *wasn't* thinking about using queries to
upload content, I was thinking of "ordinary URLs" and so 1024 bytes
seemed hugely generous.

I believe most web browsers have a larger maximum URL length.  I did
look into this briefly for some reason - IIRC, Internet Explorer has/had
the smallest limit, and it was either 2048 or 4096 bytes.

According to GUS, currently more than half of the text/gemini content
out there is less than 1.2 KiB in size.  If URLs were allowed to be 2048
bytes long, all that content could be uploaded as a query.

I do not have hard numbers on this (Alex may be able to provide them),
but I would *imagine* that most edits to wikis, when expressed as diffs,
would also be much less than 1 KiB.

Can we solve a lot of these issues by bumping up our maximum URL length
and, perhaps, defining a new 1x status code meaning "I'm asking you for
some input and in this context it's quite reasonable that you might want
to submit something on the long side", which clients could optionally
respond to by launching a text editor instead of just reading a single
line of input?  Clients which chose to support this code would become
the preferred clients of wiki enthusiasts or people who don't want to or
don't know how to use scp etc.
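For illustration, a rough Python sketch of the client side of that idea.
The 2048-byte ceiling and the notion of packing editor output into the
query string are assumptions taken from this proposal, not anything in
the current spec:

```python
import urllib.parse

MAX_URL_LEN = 2048  # the proposed bump; the current spec says 1024


def build_submission_url(base_url, text):
    """Pack multi-line text (e.g. gathered from a spawned editor) into
    the query component of a request URL, refusing anything over the
    proposed limit so the caller can fall back to "append" chunks."""
    url = base_url + "?" + urllib.parse.quote(text, safe="")
    if len(url.encode("utf-8")) > MAX_URL_LEN:
        raise ValueError("content too large for a single request")
    return url
```

A client that recognised the hypothetical new 1x code would collect text
from $EDITOR instead of a one-line prompt, then issue this URL as an
ordinary Gemini request.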

Heck, wiki nerds could write their own clients which can launch an
editor pointed at a local copy of the resource being viewed, then
calculate a diff in some format and submit *that* as a query, and the
wiki software the server runs could apply the diff.  The special wiki
editing clients could even do your suggested chunked transfer thing for
very large diffs, if the wiki servers all implemented a standard API for
such a thing.
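A minimal version of that diff-submitting client could start from a
sketch like this; the choice of unified diff format and the edit URL
endpoint are assumptions a real wiki server would have to agree on:

```python
import difflib
import urllib.parse


def diff_as_query(edit_url, original, edited):
    """Express a local edit as a unified diff and percent-encode it
    into the query of a request to a (hypothetical) wiki edit URL."""
    diff = "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        edited.splitlines(keepends=True),
        fromfile="before", tofile="after"))
    return edit_url + "?" + urllib.parse.quote(diff, safe="")
```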

It should also be very easy to write an app targeted at "non-technical"
authors which lets them submit chunks of writing up to 2 KiB or so, with
an "append" link at the submission confirmation page to submit a follow
up chunk.  It wouldn't necessarily be the smoothest experience in the
world, but if most content could be written in a single request and 99%
with one or two "append" requests, I think it would be usable enough.
Heck, this is the "slow internet", right?  A little bit of inconvenience
as part of a careful and deliberate process should not scare us away.

In general, solving perceived problems with the limitations that Gemini
imposes by combining the "primitives" which are already there in
creative new ways, even if they are very slightly clunky, makes me far,
far happier than adding in additional more advanced features to remove
those limitations.  If we discover really useful and generally
applicable constructions that can be built in this way, we can give them
names, standardise them, and clients can choose to implement them in
ways which hide the clunkiness from the user.  It would be wonderful,
though, if they were still usable in a clunky way by knowledgeable
users in clients which didn't support them explicitly.

In short, think like FORTH. :)

> A bit clunky, but workable :D

Maybe we should adopt this as an official motto? :p

Cheers,
Solderpunk

Link to individual message.

30. solderpunk (solderpunk (a) SDF.ORG)

On Sun, Jun 14, 2020 at 03:05:47PM +0000, solderpunk wrote:

Regarding:

> Heck, wiki nerds could write their own clients which can launch an
> editor pointed at a local copy of the resource being viewed, then
> calculate a diff in some format and submit *that* as a query, and the
> wiki software the server runs could apply the diff.

and:
 
> It should also be very easy to write an app targeted at "non-technical"
> authors which lets them submit chunks of writing up to 2 KiB or so, with
> an "append" link at the submission confirmation page to submit a follow
> up chunk.  It wouldn't necessarily be the smoothest experience in the
> world, but if most content could be written in a single request and 99%
> with one or two "append" requests, I think it would be usable enough.

I realise this works a heck of a lot better for textual content than
base64-encoded binary content, where uploading, say, a JPG photograph
would require many, many chunks.  I think I'm okay with this.  Gemini is
deliberately and unapologetically a text-first protocol.

Cheers,
Solderpunk

Link to individual message.

31. colecmac (a) protonmail.com (colecmac (a) protonmail.com)

> I'm finding this out the hard way. Fortunately I thought to disable
> root logins in /etc/ssh/sshd_config when I first set up my VPS, but I'm
> also reading up on fail2ban. Thinking of using this HOWTO since it
> emphasizes not tampering with distributed config files.

If you haven't already, it's a MUST to set up an SSH key and turn off password
login. This will basically remove all SSH-based attacks.

https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/

makeworld

------- Original Message -------
On Saturday, June 13, 2020 10:40 PM, Matthew Graybosch <hello at 
matthewgraybosch.com> wrote:

> On Sat, 13 Jun 2020 21:22:15 -0400
> Sean Conner sean at conman.org wrote:
>
> > As someone who has worked for various ISPs and webhosting companies
> > for most of my career, I think this slamming of ISPs is unwarranted.
>
> You're probably right.
>
> > 1.  Open servers are attacked at an alarming rate. At home, I run an
> >     sshd instance that is open to the Internet [2]. I am currently
> >     blocking 2,520 hosts that have attempted to log in via ssh. That
> >     count is only over the past 30 days (technically, 30 days, 10 hours,
> >     30 minutes, as that's the average month length over the year). Not
> >     doing so means my machine will be constantly under login attempts.
> >
>
> I'm finding this out the hard way. Fortunately I thought to disable
> root logins in /etc/ssh/sshd_config when I first set up my VPS, but I'm
> also reading up on fail2ban. Thinking of using this HOWTO since it
> emphasizes not tampering with distributed config files.
>
> https://phrye.com/tools/fail2ban-on-freebsd/
>
> > And then there's the wierd (and quite stressful) situations
> > involving black-hat hackers [5].
>
> You know what? I think I recognize your email because I've read about
> your experience with the black-hat.
>
> I'm reading this email and thinking, "Dear creeping gods, what have I
> gotten myself into?"
>
> > 2.  If people could run a business server on their home connection,
> >     they would. ... Never mind the power is out, why did my
> >     server lose connection?
> >
>
> I've been this clueless. Fortunately my phone wasn't working so I
> couldn't inflict it on some poor tech support worker.
>
> > Or in self defense, the ISP cuts the connection because the home
> > server is running a port scanner, participating in a botnet, or
> > sending out spam emails because of an unpatched exploit in some
> > server being run at home.
>
> You're right, this is legit.
>
> > 3.  Do people realize they'll need to basically firewall off their
> >     Windows boxes?
> >
>
> I firewalled the hell out of my wife's Windows machine just to block
> the damn telemetry. It's insane.
>
> > 4.  It was email that poisoned home-run servers initially.
>
> I remember this now. I know there was a reason I was reluctant to even
> try setting up external email on tanelorn.city. I thought I was just
> being irrational.
>
> > That is true too, but I suspect even if you could easily run a
> > server at home, 99% would not even bother (or know what it is).
>
> Fair point.
>
> > Never underestimate the lack of giving a damn the general
> > population have. I'm sure there are aspects of your life that you
> > lack a damn about that other people think you should give more than a
> > damn.
>
> You're right. It's just that I see barriers and had forgotten that some
> of the barriers exist for a reason.
>
> > I think it's a conversation worth having, as it relates to how
> > Gemini expands with new content.
>
> Thanks for taking the time to reply. There's a lot here that I either
> didn't know or had forgotten.
>
> --------------------------------------------------------------------------------------
>
> Matthew Graybosch gemini://starbreaker.org
> #include <disclaimer.h> gemini://demifiend.org
> https://matthewgraybosch.com gemini://tanelorn.city
> "Out of order?! Even in the future nothing works."

Link to individual message.

32. jes (j3s (a) c3f.net)

On 6/13/20 9:40 PM, Matthew Graybosch wrote:
> On Sat, 13 Jun 2020 21:22:15 -0400
> I'm finding this out the hard way. Fortunately I thought to disable
> root logins in /etc/ssh/sshd_config when I first set up my VPS, but I'm
> also reading up on fail2ban. Thinking of using this HOWTO since it
> emphasizes not tampering with distributed config files.
> 
> https://phrye.com/tools/fail2ban-on-freebsd/

Hi!

You might also consider that there are a number of drawbacks regarding 
fail2ban, here's the article that I've written on the subject:

https://j3s.sh/thoughts/fail2ban-sucks.txt

Link to individual message.

33. Luke Emmet (luke (a) marmaladefoo.com)



On 14-Jun-2020 15:26, solderpunk wrote:
>
> Lowering the barriers to entry for people who aren't familiar with ssh
> and unix file permissions is good and appropriate, but I don't think
> requiring a "real computer" is *inappropriate*.  This may rule a lot of
> people out as potential content producers, but frankly they've ruled
> *themselves* out.  I don't see this as exclusionary or discriminatory:
> A secondhand laptop that's 10 years old or more is *absolutely* capable
> of writing text/gemini content (I am writing this email on the same 9
> year old surplus Thinkpad that I've used to author all my Gemini content
> *and* software, and I bet there is already somebody on this list using
> something older and less powerful) and is far cheaper than any
> smartphone or tablet.  Heck, a big part of the appeal of Geminispace for
> me is the knowledge that I can use "ancient" hardware and turn the CPU
> scaling down to the lowest setting to save battery life and it will
> remain perfectly usable, and even if I run multiple clients at once the
> fan will never, ever spin up!  What a dream...anyway, my point is cheap,
> old laptops in danger of ending up as landfill are not only more capable
> tools for writing Gemini content than iPhones and iPads, by virtue of
> having actual text input peripherals, they are also accessible to and
> inclusive of a wider range of potential content producers.  So let's not
> go out of our way to accommodate crippled devices.
This is quite a harsh perspective IMO - in the developing world there 
are many people who can only afford to access the internet via a phone. 
Whilst we might not build specifically for phone users, the lightweight 
nature of Gemini does make it a potential vehicle for many more people 
than we may imagine.

Perhaps in this phase of the definition of Gemini, we should think of a 
potential user as our "significant others". Perhaps users who have an 
inclination to write and share their thoughts outside of the highly 
commercialised mainstream web. But certainly without an assumption that 
they know what ssh or sftp is, or how to run their own Linux server.

What would these users need to assist them?

Best Wishes

  - Luke

Link to individual message.

34. defdefred (defdefred (a) protonmail.com)

On Sunday, June 14, 2020 3:22 AM, Sean Conner <sean at conman.org> wrote:
> [3] http://boston.conman.org/2019/07/09-12
> http://boston.conman.org/2019/08/06.2

Should we deduce that a significant part of internet traffic is fake requests?
That's a shame considering the environmental impact of the digital world.
Maybe blocking all these non-human requests is the solution?

Link to individual message.

35. Luke Emmet (luke (a) marmaladefoo.com)



On 14-Jun-2020 16:05, solderpunk wrote:
> According to GUS, currently more than half of the text/gemini content
> out there is less than 1.2 KiB in size.  If URLs were allowed to be 2048
> bytes long, all that content could be uploaded as a query.
>
> I do not have hard numbers on this (Alex may be able to provide them),
> but I would *imagine* that most edits to wikis, when expressed as diffs,
> would also be much less than 1 KiB.
>
> Can we solve a lot of these issues by bumping up our maximum URL length
> and, perhaps, defining a new 1x status code meaning "I'm asking you for
> some input and in this context it's quite reasonable that you might want
> to submit something on the long side", which clients could optionally
> respond to by launching a text editor instead of just reading a single
> line of input?  Clients which chose to support this code would become
> the preferred clients of wiki enthusiasts or people who don't want to or
> don't know how to use scp etc.
>
> It should also be very easy to write an app targeted at "non-technical"
> authors which lets them submit chunks of writing up to 2 KiB or so, with
> an "append" link at the submission confirmation page to submit a follow
> up chunk.  It wouldn't necessarily be the smoothest experience in the
> world, but if most content could be written in a single request and 99%
> with one or two "append" requests, I think it would be usable enough.
> Heck, this is the "slow internet", right?  A little bit of inconvenience
> as part of a careful and deliberate process should not scare us away.

I think this is a great idea! It would go quite a long way to supporting 
collaborative editing. And as you say it is infrastructure we already have.

If your lines are about 25 characters long, 2 KB is about 80 lines worth 
of text. That seems a nice sweet spot to me.

We could argue if you need more than that you will be better placed to 
find a more flexible upload option.

The only other part to the jigsaw in my view is a way to integrate the 
editing experience into the client so you can *round-trip* the content. 
As we know the first edit is seldom a perfect one.

The basic wiki concept has the following:

1. Page displays content (we can do that)
2. Edit mode of the existing page content
3. Upload (2k allowed - OK)
4. Review submitted content (return to 1)

For step 2, we want the user to be able to edit the existing content, 
not necessarily compose completely afresh.

One suggestion is that clients MAY present an integrated editor bound to 
a preformatted region on a page (perhaps the first one or a user 
selected one). This allows the re-editing of the existing content. This 
is then what is submitted when writing back via the submission.

This would cover the full lifecycle of simple yet basic wiki editing.

Best Wishes

  - Luke

Link to individual message.

36. Matthew Graybosch (hello (a) matthewgraybosch.com)

On Sun, 14 Jun 2020 18:13:09 +0000
colecmac at protonmail.com wrote:

> If you haven't already, it's a MUST to setup an SSH key and turn off
> password login. This will basically remove all SSH based attacks.
> 
> https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/

Thanks for the advice. I've already disabled root logins in my
sshd_config, and I've set up public-key logins for my own accounts.
I've got to do a better job of educating tanelorn.city's residents
before I can disable password logins via ssh entirely, though.

-- 
Matthew Graybosch		gemini://starbreaker.org
#include <disclaimer.h>		gemini://demifiend.org
https://matthewgraybosch.com	gemini://tanelorn.city
"Out of order?! Even in the future nothing works."

Link to individual message.

37. Matthew Graybosch (hello (a) matthewgraybosch.com)

On Sun, 14 Jun 2020 15:34:08 -0500
jes <j3s at c3f.net> wrote:

> You might also consider that there are a number of drawbacks
> regarding fail2ban, here's the article that I've written on the
> subject:
> 
> https://j3s.sh/thoughts/fail2ban-sucks.txt
 
Thanks. I just finished reading this, and am now reading the article on
OpenSSH hardening that you linked. I had root login disabled from the
start, so that's a start. :)

I've also seen some forum posts suggesting that I can disable password
authentication for all users by default, and then allow exceptions for
particular users. This might help me harden Tanelorn without making
things harder for less-skilled users who haven't gotten the hang of
generating a ssh key and copying it yet.
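For what it's worth, that per-user exception approach is expressible
directly in sshd_config with a Match block. This is just a sketch, and
the user names are placeholders:

```
# In /etc/ssh/sshd_config: keys only by default...
PasswordAuthentication no
PubkeyAuthentication yes

# ...but let named accounts keep passwords until their keys are set up
# (user names here are placeholders):
Match User alice,bob
    PasswordAuthentication yes
```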

-- 
Matthew Graybosch		gemini://starbreaker.org
#include <disclaimer.h>		gemini://demifiend.org
https://matthewgraybosch.com	gemini://tanelorn.city
"Out of order?! Even in the future nothing works."

Link to individual message.

38. Sean Conner (sean (a) conman.org)

It was thus said that the Great defdefred once stated:
> On Sunday, June 14, 2020 3:22 AM, Sean Conner <sean at conman.org> wrote:
> > [3] http://boston.conman.org/2019/07/09-12
> > http://boston.conman.org/2019/08/06.2
> 
> Should we deduce that a significant part of internet traffic is fake requests?
> That's a shame considering the environmental impact of the digital world.
> Maybe blocking all these non-human requests is the solution?

  Okay, how does one detect fake requests vs. legitimate requests?  Way back
in 2006 when I was paid to do this type of work, I set up a tarpit [1] with
the idea of maybe using the information gathered from it to block unwanted
traffic.  Unfortunately, had I set this up, we still would *receive* the
packets.  To really have the effect you want, you need to block it *at the
source* and to do that would require cooperation with literally *hundreds*
(if not a few thousand) other network operators.

  In my blog entry for January 26, 2006 [2], I noted having seen 107,287
connection attempts over a 24-hour period.  I didn't record the number of
unique IPs (which could give an indication of the number of networks
involved) but that's still over one "reported incident" per second.

  After a while, it just becomes background noise you learn to ignore.

  -spc

[1]	https://en.wikipedia.org/wiki/Tarpit_%28networking%29

[2]	http://boston.conman.org/2006/01/26.3

Link to individual message.

39. jes (j3s (a) c3f.net)

On 6/14/20 8:31 PM, Matthew Graybosch wrote:
> On Sun, 14 Jun 2020 15:34:08 -0500
> I've also seen some forum posts suggesting that I can disable password
> authentication for all users by default, and then allow exceptions for
> particular users. This might help me harden Tanelorn without making
> things harder for less-skilled users who haven't gotten the hang of
> generating a ssh key and copying it yet.

Up to you! In my mind turning off password auth is priority number one - but 
since you have users who could be confused by it, it's up to you and 
your own risk tolerance.

If any of these users are able to switch to the root user or similar, 
I'd say that you must disable password auth now regardless of what your 
users prefer.

You may consider setting MaxAuthTries to a reasonable value (say, 3 or 
4), which limits the number of authentication attempts allowed per 
connection before the server drops it.


j3s

Link to individual message.

40. solderpunk (solderpunk (a) SDF.ORG)

Sorry, my reply to this last night went straight to Luke, not the whole
list.  Here it is again:

On Sun, Jun 14, 2020 at 09:54:12PM +0100, Luke Emmet wrote:
> This is quite a harsh perspective IMO - in the developing world there are
> many people who can only afford to access the internet via a phone. Whilst
> we might not build specifically for phone users, the light weight nature of
> Gemini does make it a potential vehicle to be used for many more people than
> we may imagine.

Fair point: not everybody without a keyboard is without one by choice.
I guess I just can't personally conceive of ever *wanting* to routinely
write hundreds of words on a touchscreen phone.  Especially not with
URLs and syntax like => thrown in.  No matter how effortless the upload
process, the actual writing would be unbearable.

Now, *reading* content is a very different story, and not what I was
talking about with my comment about not targeting "crippled devices".
For users accessing the internet on phones with low memory/CPU resources
on slow and/or expensive networks, I think Gemini could be an extremely
appealing choice.

Cheers,
Solderpunk

Link to individual message.

41. solderpunk (solderpunk (a) SDF.ORG)

On Sun, Jun 14, 2020 at 10:06:11PM +0100, Luke Emmet wrote:
 
> I think this is a great idea! It would go quite a long way to supporting
> collaborative editing. And as you say it is infrastructure we already have.

I'm glad you think so!  I hope other people who were keen on an upload
mechanism are too.  As you point out there are small details to smooth
over for the wiki case; but for simple "publishing for the masses"
where the starting state is a blank page, doesn't this basically get us
there?

Cheers,
Solderpunk

Link to individual message.

42. solderpunk (solderpunk (a) SDF.ORG)

On Mon, Jun 15, 2020 at 07:39:01AM +0000, solderpunk wrote:
> On Sun, Jun 14, 2020 at 10:06:11PM +0100, Luke Emmet wrote:
>  
> > I think this is a great idea! It would go quite a long way to supporting
> > collaborative editing. And as you say it is infrastructure we already have.
> 
> I'm glad you think so!  I hope other people who were keen on an upload
> mechanism are too.  As you point out there are small details to smooth
> over for the wiki case; but for simple "publishing for the masses"
> where the starting state is a blank page, doesn't this basically get us
> there?

An interesting idea which was just floated on the BBS at the Zaibatsu
pubnix, inspired by the "friSBEe" project started by cmmcabe (admin of
gemini://rawtext.club), is the possibility of publishing via email.
Typical text/gemini content is certainly small enough to be "uploaded"
as an attachment (it can't really travel in the body of an email due to
restrictions on line length).

Cheers,
Solderpunk

Link to individual message.

43. defdefred (defdefred (a) protonmail.com)

On Saturday 13 June 2020 22:56, solderpunk <solderpunk at SDF.ORG> wrote:
> 6.  Does anybody else feel like we are just instinctively re-implementing
>     the familiar history of the web without much caution or critical
>     thought?
yep

Link to individual message.

44. James Tomasino (tomasino (a) lavabit.com)

Since text/gemini (or gemtext, if that's what we're calling it) is 
parsable from top to bottom in a single pass, it's also perfectly well 
suited to being treated as a stream instead of a document. I believe the 
only limitation to this currently is that many clients are expecting that 
gemtext is a document and are deferring parsing until the end-of-document is reached.
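To make the single-pass property concrete, here's a rough Python sketch
of a streaming gemtext reader. It classifies each line the moment its
terminating newline arrives; to stay short it ignores preformatted-mode
toggling and the other line types:

```python
def classify(line):
    """Minimal gemtext line classifier (link, heading, or plain text)."""
    if line.startswith("=>"):
        return ("link", line[2:].strip())
    if line.startswith("#"):
        return ("heading", line.lstrip("#").strip())
    return ("text", line)


def gemtext_lines(chunks):
    """Incrementally yield classified text/gemini lines from an iterable
    of byte chunks, without waiting for the end of the document."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield classify(line.rstrip(b"\r").decode("utf-8"))
    if buffer:  # final line arrived without a trailing newline
        yield classify(buffer.decode("utf-8"))
```

A client built this way can render each line as it flows in, which is
exactly what a stream-aware display would need.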

When I raised this question on the IRC channel I wanted to know if there 
was a way to indicate within the MIME perhaps that the resource is a 
stream and not a document. Then clients could know explicitly that they 
shouldn't be waiting on the end of document before parsing. I'm really not 
familiar with the technical mechanisms of how that's set up on HTTP, so I 
wanted to toss it to the list. 

Should we investigate a MIME solution to be explicit, or should clients 
treat all text/gemini as streams and just parse as they go? The latter 
seems easier from an implementation standpoint. Someone raised the question 
about how the display should be handled between the two, though. Sometimes 
streams desire to keep the focus pinned to the newest content, not the 
start of the document. That sort of functionality would support using a 
separate explicit MIME or some other way to differentiate them.

With streams in place we could do some very cool things. We could build a 
fediverse front-end, or a view into the IRC channel. If you use two tabs 
and a response 10 loop and client certs, you could even post INTO these 
platforms. Let your imagination run wild!

Link to individual message.

45. defdefred (defdefred (a) protonmail.com)

On Monday 15 June 2020 11:51, solderpunk <solderpunk at SDF.ORG> wrote:
> An interesting idea which was just floated on the BBS at the Zaibatsu
> pubnix, inspired by the "friSBEe" project started by cmmcabe (admin of
> gemini://rawtext.club), is the possibility of publishing via email.
> Typical text/gemini content is certainly small enough to be "uploaded"
> as an attachment (it can't really travel in the body of an email due to
> restrictions on line length).
Also a nice way to send moderated comment to an article.

Link to individual message.

46. Katarina Eriksson (gmym (a) coopdot.com)

defdefred <defdefred at protonmail.com> wrote:

> On Monday 15 June 2020 11:51, solderpunk <solderpunk at SDF.ORG> wrote:
> > An interesting idea which was just floated on the BBS at the Zaibatsu
> > pubnix, inspired by the "friSBEe" project started by cmmcabe (admin of
> > gemini://rawtext.club), is the possibility of publishing via email.
> > Typical text/gemini content is certainly small enough to be "uploaded"
> > as an attachment (it can't really travel in the body of an email due to
> > restrictions on line length).
> Also a nice way to send moderated comment to an article.
>

My gemlog is using that as its comment system.

-- 
Katarina Eriksson


Link to individual message.

47. Katarina Eriksson (gmym (a) coopdot.com)

solderpunk <solderpunk at sdf.org> wrote:

> It should also be very easy to write an app targetted at "non-technical"
> authors which lets them submit chunks of writing up to 2 KiB or so, with
> an "append" link at the submission confirmation page to submit a follow
> up chunk.  It wouldn't necessarily be the smoothest experience in the
> world, but if most content could be written in a single request and 99%
> with one or two "append" requests, I think it would be usable enough.
> Heck, this is the "slow internet", right?  A little bit of inconvenience
> as part of a careful and deliberate process should not scare us away.
>

People write a lot longer Twitter threads than that and Tweets are tiny
compared to Gemini's meta field. I wish I had the energy to build a proof
of concept right now because this sounds interesting.

-- 
Katarina Eriksson


Link to individual message.

48. Felix Queißner (felix (a) masterq32.de)


> An interesting idea which was just floated on the BBS at the Zaibatsu
> pubnix, inspired by the "friSBEe" project started by cmmcabe (admin of
> gemini://rawtext.club), is the possibility of publishing via email.
> Typical text/gemini content is certainly small enough to be "uploaded"
> as an attachment (it can't really travel in the body of an email due to
> restrictions on line length).

Email sounds like a really nice solution! Using email as a
human->machine interface is actually quite nice, but you need some kind
of spoofing protection; GPG solves that problem already.

Using simple "subject=path", "body=content", you can easily update your
server with a fitting server-side daemon.
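A server-side daemon for that convention could start from something like
this Python sketch. The content-root confinement is my own addition, and
signature verification (with GPG, as noted above) would have to happen
before any of this runs:

```python
import email
from email.policy import default
from pathlib import Path


def handle_message(raw_bytes, content_root):
    """Apply the "subject=path, body=content" convention to one email:
    write the plain-text body to the path named in the Subject header,
    confined to the given content root directory."""
    msg = email.message_from_bytes(raw_bytes, policy=default)
    root = Path(content_root).resolve()
    rel = msg["Subject"].strip().lstrip("/")
    target = (root / rel).resolve()
    if root not in target.parents:  # refuse "../" escapes
        raise ValueError("path escapes content root")
    body = msg.get_body(preferencelist=("plain",)).get_content()
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(body)
    return target
```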

I like!

Regards
- xq

Link to individual message.

49. Pete D. (peteyboy (a) SDF.ORG)

It's funny, when I first looked at gemini, it sparked my interest as an 
idea of a wiki scheme, just implementing the initial dream of interwiki 
as a protocol, so every page is a wiki page, with some sort of native 
markup, connected to other pages, which would all be wikis.

I was disabused of that notion when I realized, after reading a bunch of 
email digests, that the final spec for links was not in-line at all, and 
the idea was to make a gopher that was less annoying (ok, it's annoying to 
me, I quit after initial phlogging because of gophermaps). But man, I 
love a simple wiki mark-up (# * ```, =>)!

So, I got caught up in this mini-reprise of the initial days of the www, 
though, it is so cool that most of the gemini content can't be gotten 
unless you have a gemini browser!  So I let go of the wiki dream, and got 
into this new world of simple text.

In this www-reprise model, lots of extra stuff is being proposed, it seems 
to me, and at some point, one has to ask, why not use http if you are 
going to decorate with all the http things? One answer, of course, is 
that old, unfancy http seems lame and dated (looks at SDF tutorial pages), 
which is valid. That stuff is not exciting.

And so now, seeing all this wiki talk, I'm excited again by the idea of a 
wiki protocol, but at the same time, I feel like it really should be a 
different project. It's not what gemini is.

If we are turning this into a wiki protocol, why don't we have inline 
links and all the cool markup available to make nice documents (TOCs, 
etc), and versioning and all that?


peteyboy at sdf.org
SDF Public Access UNIX System - http://sdf.org

Link to individual message.

50. solderpunk (solderpunk (a) SDF.ORG)

On Mon, Jun 15, 2020 at 05:28:46PM +0000, Pete D. wrote:
 
> In this www-reprise model, lots of extra stuff is being proposed, it seems
> to me, and at some point, one has to ask, why not use http if you are going
> to decorate with all the http things?

Perhaps the biggest current gap in the Gemini FAQ is a response to "Why
are you bothering to create something new for this, don't you realise
you can just use a subset of HTTP and a subset of HTML?!".  It's a fair
question.  It's not like I'm not aware of the possibility - I wrote my
"non-violent webserver" Shizaru
(https://tildegit.org/solderpunk/shizaru/) before starting Gemini.

I have thought about this a lot, and I *do* think there is a point to
creating something new, but I can't express it in a short, sharp form
yet.  I'm working on it.

That said, I think there would be plenty of interest and energy around a
kind of "back to basics" web movement.  People are really getting sick
and tired of the mainstream web and are looking for escapes and
alternatives.  For many, that's Gopher.  Gemini was started in large
part out of a feeling that many people who *wanted* to escape, perhaps
badly, would not see Gopher as an adequate refuge.  I hope for some it is.
For some it still may not be, and some kind of minimalist subset of the
web might be better.

> And so now, seeing all this wiki talk, I'm excited again by the idea of a
> wiki protocol, but at the same time, I feel like it really should be a
> different project. It's not what gemini is.

I hope that even people who disagree strongly with the design decisions
I've made for Gemini can look to it as a source of hope and empowerment.
No matter how bad the mainstream internet experience gets and how
hopeless it may seem, the clean slate of TCP/IP is always there and you
can build whatever you want on top of it.  Something which is "just an
idea" can always turn into "a thing" if you can convince enough people
that it's a good idea.  The internet is magic and malleable and has
plenty of room for all sorts.  Build things that make you happy!

Cheers,
Solderpunk

Link to individual message.

51. Sean Conner (sean (a) conman.org)

It was thus said that the Great solderpunk once stated:
> On Mon, Jun 15, 2020 at 07:39:01AM +0000, solderpunk wrote:
> > On Sun, Jun 14, 2020 at 10:06:11PM +0100, Luke Emmet wrote:
> >  
> > > I think this is a great idea! It would go quite a long way to supporting
> > > collaborative editing. And as you say it is infrastructure we already have.
> > 
> > I'm glad you think so!  I hope other people who were keen on an upload
> > mechanism are too.  As you point out there are small details to smooth
> > over for the wiki case; but for simple "publishing for the masses"
> > where the starting state is a blank page, doesn't this basically get us
> > there?
> 
> An interesting idea which was just floated on the BBS at the Zaibatsu
> pubnix, inspired by the "friSBEe" project started by cmmcabe (admin of
> gemini://rawtext.club), is the possibility of publishing via email.
> Typical text/gemini content is certainly small enough to be "uploaded" as
> an attachment (it can't really travel in the body of an email due to
> restrictions on line length).

  As someone who runs their own email server [1], and set up my blog to
accept posts via email [2], I think I'm in a unique position to comment
credibly on this topic.

  It's a good idea, but there are issues to consider.  When I first added
email support to my blogging engine (back in 2001, 2002?) I often wondered
why no other blogging engine copied the idea.  I mean, the user can use
their own preferred method of editing, and email clients exist for just
about everything under the sun, but no.  That feature remains a very niche
feature, and I think I finally figured out why---NO ONE BLOODY WELL RUNS
THEIR OWN EMAIL SERVER ANYMORE! [3]

  Ahem.  Sorry.

  But it's true.  Due to a lot of reasons I outlined in another email,
running a server is a pain, and email is especially painful because of all
the *other* things you have to do in addition to running an SMTP server. 
Now, aside from the usual problems of running a server for a well-known service,
I will say that setting up a "receive-only" email server (for the express
purpose of updating content) is vastly easier than a "full service email
server", so that's a positive.

  The next issue is hooking up the processing into the email server.  This
is not really an issue as any MTA (Postfix is great, there are others, but
trust me on this---avoid sendmail [4] at all costs) can do this.  The MTA I
use makes this trivial---just add an entry to /etc/aliases that looks like:

localaddress: "|/program/to/handle -option1 -option2"

  Done.
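
To make the pipeline concrete, here is a minimal sketch (in Python, with
invented names---this is illustrative, not Sean's actual code) of the kind of
program the alias above pipes mail into.  The MTA hands the raw RFC 5322
message to the program on standard input:

```python
# Hypothetical sketch of a program an MTA pipes mail into via an
# /etc/aliases entry like the one above.  Names are invented.
import email
from email import policy

def extract_post(raw: bytes) -> str:
    """Parse a raw message and return the plain-text body to publish."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    if body is None:
        raise ValueError("no text/plain part to publish")
    return body.get_content()

# In the real pipeline, something like this would follow:
#   text = extract_post(sys.stdin.buffer.read())
#   ...write `text` into the blog or capsule tree...
```

From there, "publish" means whatever the site needs: write a file into the
capsule tree, regenerate an index, and so on.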

  Now the issue becomes one of validating the email.  The scheme I use for
my blog is not secure [5] but it hasn't been broken yet, mainly because it's
not worth the trouble to thwart it.  I use a special email address that's
never been published, and there are some checks I make on the incoming
email but nothing bullet-proof.  The program then knows how to add the entry
to my blog, but subsequent changes tend to require manual editing of the
file on the server [6].  

  Then you have to decide how the email is to be formatted for the system to
know what is being created, updated, deleted, etc.  I include the required
information (author, title, date, etc) of the post in the email body [7] and
as such, it's again, not bullet-proof since I'm the only one using it, so I
know what I can and can't do (and even then, I've been bitten by typos I've
made---sigh).  

  The relevant limits for SMTP are given in RFC-5321:

	1,000 bytes per line
	65,536 bytes maximum message size

  Of course, a server can support larger limits than these (and I would
suspect modern systems do, but I haven't tested this).  I've never had an
issue with sending posts, but then rarely do I have lines longer than 1,000
bytes [8].

  To address security concerns, some of the things that *could* be done
include:

	Use a custom email to accept emails, perhaps a custom one per user.

	Check the From: or Sender: header, do a DNS MX lookup on the domain
	portion, and cross reference that with the Received: headers.

	Check the DKIM-Signature: header (if it exists).

	Dive into the rabbit hole that is PGP.
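
As an illustration, the first two checks above can be sketched with Python's
standard library alone (the secret-address table and all addresses below are
invented; the MX cross-reference and DKIM checks would need DNS and DKIM
libraries and are omitted):

```python
# Illustrative sketch of two of the checks above: a per-user secret
# delivery address plus a From:/Sender: sanity check.
import email
from email import policy
from email.utils import parseaddr

# secret local-part -> the one author allowed to mail it (hypothetical)
SECRET_ADDRESSES = {
    "post-7f3a9c": "sean@example.com",
}

def is_authorized(raw: bytes, delivered_to: str) -> bool:
    """Return True if the message may be published."""
    local_part = delivered_to.split("@", 1)[0].lower()
    expected_author = SECRET_ADDRESSES.get(local_part)
    if expected_author is None:
        return False                     # not one of our secret addresses
    msg = email.message_from_bytes(raw, policy=policy.default)
    sender = parseaddr(str(msg.get("Sender") or msg.get("From") or ""))[1]
    return sender.lower() == expected_author
```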

  So that's my two-bits worth on this topic.  Good idea, but quite a bit of
work involved.

  -spc

[1]	Of course I would.

[2]	Of course I did.

[3]	Well, almost no one.

[4]	It is a nightmare.  The last time I worked with sendmail, the
	configuration file had a configuration file (I kid you not) and it's
	been the inevitable Swiss Cheese of secure programs over its entire
	history.

[5]	This is probably a mistake to announce on a public list, but I am
	aware of the ramifications of doing so.

[6]	I do not have a way to handle a modification of an existing entry
	via email, but that's a limitation of my implementation, not of the
	idea as a whole.

[7]	I could use email headers for some of this, but I was lazy at the
	time and found it easier to specify in the body.

[8]	The technique I use these days is to have a sentence fragment per
	line.  This allows me easier editing while composing the entry. 
	Here's a sample of the first two paragraphs of my reply written in
	this style.

As someone who runs their own email server [1],
and set up my blog to accept posts via email [2],
I think I'm in a unique position to comment credibly on this topic.

It's a good idea,
but there are issues to consider.
When I first added email support to my blogging engine
(back in 2001, 2002?)
I often wondered why no other blogging engine copied the idea.
I mean,
the user can use their own preferred method of editing,
and email clients exist for just about everything under the sun,
but no.
That feature remains a very niche feature,
and I think I finally figured out why---NO ONE BLOODY WELL RUNS THEIR OWN 
EMAIL SERVER ANYMORE! [3]

Link to individual message.

52. Jason McBrayer (jmcbray (a) carcosa.net)

Luke Emmet <luke at marmaladefoo.com> writes:

> Perhaps in this phase of the definition of Gemini, we should think of
> a potential user as our "significant others". Perhaps users who have
> an inclination to write and share their thoughts outside of the highly
> commercialised mainstream web. But certainly not an assumption they
> know what ssh, sftp or running your own linux server.
>
> What would these users need to assist them?

I'm bullish on native apps for this, which is part of why I'm not so
sold on the various editing/submission proposals. It really shouldn't be
that hard to wrap some existing tools like rsync+ssh behind a cute
native UI and provide a nice mobile editing experience.

Of course, I'm a known pervert, so I use Emacs in Termux on an Android
phone to update my gemlog, some of the time.

-- 
Jason McBrayer      | "Strange is the night where black stars rise,
jmcbray at carcosa.net | and strange moons circle through the skies,
                    | but stranger still is lost Carcosa."
                    | -- Robert W. Chambers, The King in Yellow

Link to individual message.

53. Sean Conner (sean (a) conman.org)

It was thus said that the Great James Tomasino once stated:
> Since text/gemini (or gemtext, if that's what we're calling it) is
> parsable from top to bottom in a single pass, it's also perfectly well
> suited to being treated as a stream instead of a document. I believe the
> only limitation to this currently is that many clients are expecting that
> gemtext is a document and are deferring parsing until the end-of-document
> is reached.
> 
> When I raised this question on the IRC channel I wanted to know if there
> was a way to indicate within the MIME perhaps that the resource is a
> stream and not a document. Then clients could know explicitly that they
> shouldn't be waiting on the end of document before parsing. I'm really not
> familiar with the technical mechanisms of how that's set up on HTTP, so I
> wanted to toss it to the list.

  A web browser already knows how it will deal with the content by the time
it's reading the body of the response, and unless the browser is going to
hand off the content to another program (say a PDF viewer) the browser can
just treat the resulting download of data as a stream.  TCP (the underlying
protocol for both HTTP and Gemini) provides a "reliable byte-oriented
stream" [1] and it's up to the client to deal with that as it sees fit.

  A Gemini client can see it's getting a text/gemini file, and start reading
the network stream a line at a time and displaying it at the same time if it
so chooses.  It doesn't *have* to wait for the entire file to download
before processing it.
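
A rough sketch of that line-at-a-time approach (illustrative only; `stream`
is any iterable of lines, such as the TLS socket wrapped with `makefile()`,
and the rendering is deliberately simplistic):

```python
# Render each text/gemini line as soon as it arrives, instead of
# buffering the whole document first.
def render_stream(stream):
    """Yield a (type, text) tuple per gemtext line as it is read."""
    preformatted = False
    for line in stream:
        line = line.rstrip("\r\n")
        if line.startswith("```"):
            preformatted = not preformatted   # toggle preformatted mode
            continue
        if preformatted:
            yield ("pre", line)
        elif line.startswith("=>"):
            yield ("link", line[2:].strip())
        elif line.startswith("#"):
            yield ("heading", line.lstrip("#").strip())
        else:
            yield ("text", line)
```

Because it is a generator, a client can display each tuple the moment it is
yielded, which is exactly the streaming behavior described above.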

> Should we investigate a MIME solution to be explicit, or should clients
> treat all text/gemini as streams and just parse as they go? The latter
> seems easier from an implementation standpoint. Someone raised the question
> about how the display should be handled between the two, though. Sometimes
> streams desire to keep the focus pinned to the newest content, not the
> start of the document. That sort of functionality would support using a
> separate explicit MIME or some other way to differentiate them.

  It depends on how the client is written.  A client can certainly parse and
display text/gemini as it receives it (much like web browsers can start
displaying partially downloaded content), but it complicates the codebase. 
Is it worth the tradeoff?  

> With streams in place we could do some very cool things. We could build a
> fediverse front-end, or a view into the IRC channel. If you use two tabs
> and a response 10 loop and client certs, you could even post INTO these
> platforms. Let your imagination run wild!

  There's nothing stopping you from doing that now.  With no size parameter, a
Gemini server can continuously stream data to a client until the connection
is broken.

  -spc

[1]	Technically, TCP exists to manage the bandwidth between two
	endpoints, but it was engineered to also provide the said "reliable
	byte-oriented stream".  There are no packets visible to the client
	code, just a stream of bytes it has to read.

Link to individual message.

54. Sean Conner (sean (a) conman.org)

It was thus said that the Great Pete D. once stated:
> And so now, seeing all this wiki talk, I'm excited again by the idea of a 
> wiki protocol, but at the same time, I feel like it really should be a 
> different project. It's not what gemini is.
> 
> If we are turning this into a wiki protocol, why don't we have inline 
> links and all the cool markup available to make nice documents (TOCs, 
> etc), and versioning and all that?

  Sadly, that *is* HTTP.  HTTP/1.0 added the methods PUT and DELETE which
make adding/updating/removing resources from a webserver possible with just
a web client.  The methods map to:

	GET	Retrieve a resource, should be no side-effects on the server
	POST	Submit data to an existing resource
	PUT	Add a new resource (a file for example) to the server
	DELETE	Remove a resource (a file for example) from the server

  The major difference between POST and PUT is that with POST, the resource
receiving the data isn't modified (it's a fixed endpoint) whereas with PUT,
the resource given is created or modified by the data.
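
To make the distinction concrete, here is how a client would form the two
requests with Python's standard library.  This only builds the request
objects and sends nothing; `example.com` and the paths are placeholders:

```python
# PUT vs POST, as request objects only (no network I/O is performed).
from urllib.request import Request

body = b"new page content"

# PUT: "store this body AT this URL"; the URL names the resource itself.
put_req = Request("http://example.com/wiki/NewPage", data=body, method="PUT")

# POST: "hand this body TO this endpoint"; the URL is a fixed processor
# that decides what to do with the data.
post_req = Request("http://example.com/wiki/edit-handler", data=body,
                   method="POST")
```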

  Of the complaints I've read online about Gemini, the question of "why not
use HTTP" comes up, and in a sense, I can see the reason for the
question---HTTP does *not* inherently imply spying and tracking; rather,
there are external forces that push it that way in the HTTP world.

  I still like the idea, but the trick is to tame (or limit) the damage that
can be done.

  -spc

Link to individual message.

55. Luke Emmet (luke (a) marmaladefoo.com)


On 16-Jun-2020 02:37, Sean Conner wrote:
> It was thus said that the Great Pete D. once stated:
>> And so now, seeing all this wiki talk, I'm excited again by the idea of a
>> wiki protocol, but at the same time, I feel like it really should be a
>> different project. It's not what gemini is.
>>
>    Sadly, that *is* HTTP.  HTTP/1.0 added the methods PUT and DELETE which
> make adding/updating/removing resources from a webserver possible with just
> a web client.  The methods map to:
>
> 	GET	Retrieve a resource, should be no side-effects on the server
> 	POST	Submit data to an existing resource
> 	PUT	Add a new resource (a file for example) to the server
> 	DELETE	Remove a resource (a file for example) from the server
>
>    The major difference between POST and PUT is that with POST, the resource
> receiving the data isn't modified (it's a fixed endpoint) whereas with PUT,
> the resource given is created or modified by the data.

Yes at the moment we have implemented GET, but not POST. It turned out 
the other verbs (PUT, DELETE) are not really essential as they can 
usually be implemented via POST, certainly for user facing applications. 
So I don't think there is a need to get sidetracked by those.

As far as I understand it, GET is supposed to be for persistent resources 
and to be more or less idempotent. It shouldn't really be used as a 
vehicle to update a system, as the URLs are shareable and linkable 
(hence the potential spam problem of a million pre-configured links 
posting comments whenever a search engine crawls your page).

Unfortunately we don't have any equivalent to POST. I do think this is a 
weakness and would be pleased to understand what the alternative in-band 
method is. SSH and SFTP are, I know, being recommended by some, but a) 
they are out-of-band channels and won't be accessible to the majority of 
end users and b) they limit the content to being file-system based.

We should not re-implement the web, I agree, but there are a few things 
we should learn from that history and specify a constrained POST scheme 
that is not extendible.

>    Of the complaints I've read online about Gemini, the question of "why not
> use HTTP" comes up, and in a sense, I can see the reason for the
> question---HTTP does *not* inherenetly imply spying and tracking but that
> there are external forces that force that in the HTTP world.
>
>    I still like the idea, but the trick is to tame (or limit) the damage that
> can be done.

I think we just need to lock it down and keep it really simple, but good 
enough. Something like this:

scheme://domain/fixed-end-point?params
<here come the bytes...>
close connection

params should include path/content length/mime but not the content
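
A sketch of what such a request might look like on the wire.  Everything
here is invented for illustration: the `postscheme` name, the endpoint, and
the parameter names are hypothetical, not part of any spec:

```python
# The client sends one request line carrying path, size and MIME type as
# query parameters, then the raw bytes, then closes the connection.
from urllib.parse import urlencode

def build_request(host: str, path: str, mime: str, body: bytes) -> bytes:
    """Compose the bytes a client would write to the connection."""
    params = urlencode({"path": path, "size": len(body), "mime": mime})
    request_line = f"postscheme://{host}/post?{params}\r\n"
    return request_line.encode("utf-8") + body
```

Keeping the metadata in the request line and the content as opaque bytes
after it is what makes the scheme hard to extend with hidden state.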

We just need to bolt it down. There should be no _hidden_ state that the 
server can request (like cookies).

Seeing as gemini just does GET, we can have a single simple POST scheme 
that does nothing else

gemini:// (a perfect simple GET, not really extendible)
postscheme:// (a perfect simple POST, not really extendible)

Maybe in future PUT/DELETE, but I don't think they're needed, as a suitably 
configured endpoint can receive parameters to enact this. The web got by 
without them for almost all of its life, and they are only used by some 
REST endpoints.

Best Wishes

  - Luke

Link to individual message.

56. defdefred (defdefred (a) protonmail.com)

On Tuesday 16 June 2020 03:05, Sean Conner <sean at conman.org> wrote:
> and I think I finally figured out why---NO ONE BLOODY WELL RUNS THEIR
> OWN EMAIL SERVER ANYMORE! [3]

No need to run an SMTP server.
It is totally possible to fetch email from an external SMTP server to 
achieve the same goal.

Link to individual message.

57. defdefred (defdefred (a) protonmail.com)

On Monday 15 June 2020 18:11, Felix Queißner <felix at masterq32.de> wrote:
> Email sounds like a really nice solution! Using email as a
> human->machine interface is actually quite nice, but you need some kind
> of spoofing protection, but GPG solves that problem already.

PGP/GPG power!
The best way to authenticate writers allowed to publish in a mutualised geminispace.

Link to individual message.

58. Petite Abeille (petite.abeille (a) gmail.com)



> On Jun 13, 2020, at 22:56, solderpunk <solderpunk at SDF.ORG> wrote:
> 
> 3. Dear God, when/where/how does it stop?!  This was supposed to be a
> simple, humble, protocol of limited scope!  But...

It never stops. Gemini is now bidirectional.

It will reach consciousness  in 3, 2, 1...

When the Yogurt Took Over
LOVE DEATH + ROBOTS  
https://www.imdb.com/title/tt9788494/

Link to individual message.

59. solderpunk (solderpunk (a) SDF.ORG)

On Sun, Jun 14, 2020 at 08:57:57PM +0000, defdefred wrote:
> On Sunday, June 14, 2020 3:22 AM, Sean Conner <sean at conman.org> wrote:
> > [3] http://boston.conman.org/2019/07/09-12
> > http://boston.conman.org/2019/08/06.2
> 
> Should we deduce that a significant part of internet traffic is fake requests?
> That's a shame considering the environmental impact of the digital world.
> Maybe blocking all these non-human requests is the solution?

It's true that this is a shame.  As Sean says, however, it's extremely
difficult to actually block all non-human requests.

I am sensitive to this issue and I hope that as part of the general
emphasis on being small and simple, the Gemini community can help also
foster a culture of not treating the internet as an ephemeral magic
thing with no physical impact.  Non-human traffic is not evil and
can serve a good purpose, but we should be careful with it.

In some ways, Gemini is disadvantaged here with its lack of facilities
for things like conditional fetching.  If we make a norm of using small
self-signed certificates using elliptic curve ciphers, and supporting TLS
session resumption, we might be able to get request overhead down to the
point where clients can address well-known endpoints to query the time
of last change for something without it actually being a losing
proposition most of the time.

But even in the absence of this, we can be smarter.  For example,
software which consumes RSS/Atom feeds gets, for free, information on
the distribution of times between consecutive updates for the last 10 or
maybe more updates.  Instead of polling everything several times a day,
aggregators *could* poll each feed at a specific frequency which is
matched to its typical publication schedule.
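
That heuristic might be sketched like this (the specific floor, ceiling, and
safety factor below are arbitrary choices for illustration, not anything
existing aggregators actually do):

```python
# Derive a per-feed poll interval from the gaps between the feed's
# recent entry timestamps.
from statistics import median

MIN_POLL = 3600.0          # never poll more often than hourly
MAX_POLL = 7 * 86400.0     # never poll less often than weekly

def poll_interval(entry_times):
    """entry_times: UNIX timestamps of recent entries, in any order."""
    times = sorted(entry_times)
    if len(times) < 2:
        return 86400.0     # no history: default to daily
    gaps = [b - a for a, b in zip(times, times[1:])]
    # Poll at half the typical publication gap so updates are caught early.
    return min(max(median(gaps) / 2, MIN_POLL), MAX_POLL)
```

A feed that posts daily would be polled twice a day; one that posts every
minute would still only be polled hourly, and a dormant feed weekly.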

Cheers,
Solderpunk

Link to individual message.

60. Koushik Roy (koushik (a) meff.me)

I also get paid to do a lot of networking and infrastructure related 
things for a tech company, so I understand where you're coming from when 
it comes to understanding that ISPs have restrictions in place due to 
good reasons (the amount of abuse on the net is staggering, and so much 
of my job deals with ways to serve legitimate traffic while not allowing 
abuse to impact our services/our users).

I also want to reiterate in this thread the importance of enabling users 
who wish to author gemini content on devices such as tablets and 
smartphones. Imagine a kid who doesn't even have access to a computer 
but has access to an aging smartphone or a feature phone. Or think of 
someone who lives in non-traditional housing (whether by choice or not) 
and travels a lot; for them a tablet may be a better option when it 
comes to space/weight/money. I think it is very important to enable 
users to author content no matter the device.

All that said, I'm not convinced that an in-band Gemini posting 
mechanism is the correct answer. I prefer a solution that involves the 
community standardizing around some other mechanism to upload content, 
and then building/popularizing apps (native or not) that use this 
mechanism. To make this more concrete, I can imagine a scenario where 
apps are built on top of FTPS to allow users to author content and then 
transparently have them appear in a Gemini capsule. Swap FTPS with one 
of many other mechanisms, such as SFTP, NNTPS, Email, what have you.

I just feel that overloading these sorts of concerns onto Gemini will 
lead to greater complexity in the protocol than would be desirable and 
turn off potential implementers of both servers and clients. I think the 
explosion we're seeing of software and content right now is directly 
attributable to how simple the spec is to understand and implement. 
Publishing-oriented clients can then bundle some sort of interface to 
this companion protocol and either shell out to a text editor or open a 
native text editing widget (or even work through some sort of 
Electron-style textbox). I realize that titan:// is a separate protocol, 
but I think something like Passive FTPS may be a better fit here.

- meff

On 6/13/20 6:22 PM, Sean Conner wrote:
> It was thus said that the Great Matthew Graybosch once stated:
>>
>> Let's be honest; it shouldn't be that hard to run a gemini daemon out
>> of a personal computer in your own home, whether it's your main desktop
>> or just a raspberry pi. The protocol is light enough that CPU and
>> memory usage should be next to nothing compared to Firefox or Chrome.
> 
>   ...
> 
>> I think the biggest problem, at least in the US, is that ISPs seem
>> hellbent on keeping residential internet users from using their
>> connections for anything but consumption.
> 
>    As someone who has worked for various ISPs and webhosting companies for
> most of my career, I think this slamming of ISPs is unwarranted.  And as
> someone who runs both a public server *and* a few services on my home
> network [1] there are some things you need to consider.
> 
> 1. Open servers are *attacked* at an alarming rate. At home, I run an sshd
> instance that is open to the Internet [2].  I am currently blocking 2,520
> hosts that have attempted to log in via ssh.  That count is only over the
> past 30 days (technically, 30 days, 10 hours, 30 minutes, as that's the
> average month length over the year).  Not doing so means my machine will be
> constantly under login attempts.
> 
>    99% of all traffic to my webserver (on my actual public server) is
> automated programs, not actual humans.  Most are just webbots spidering my
> content, some are script kiddies looking for an exploit and some are just
> incompetently written programs that just blow my mind [3].  There's the
> weird network traffic that just sucks up connection requests [4].  And then
> there's the *weird* (and quite stressful) situations involving black-hat
> hackers [5].
> 
>    Then there's the issues with running UDP based services [6].  It's not
> pretty on the open Internet.
> 
> 2. If people could run a business server on their home connection, they
> would.  Then they'll bitch and moan about the service being slow, or can't
> the ISP do something about the DDoS attack they're under?  Even if they
> aren't and their service is just popular.  Or why their connection dropped?
> Never mind the power is out, why did my server lose its connection?
> 
>    Or in self defense, the ISP cuts the connection because the home server is
> running a port scanner, participating in a botnet, or sending out spam
> emails because of an unpatched exploit in some server being run at home.
> 
> 3. Do people realize they'll need to basically firewall off their Windows
> boxes?  Seriously, the level of exploits on Windows is (was?) staggering and
> the number of services (like file sharing) it runs by default (because
> that's what the users want) is *not* conducive to allowing a Windows
> box full access to the Internet.  The same can be said for Mac and Linux,
> but to a slightly lesser degree.
> 
> 4. It was email that poisoned home-run servers initially.  Spam increased
> dramatically during the late 90s/early 2000s to the point where it became a
> Byzantine nightmare to configure and run an email server due to SPF, DMARC
> and DKIM, along with greylisting and filtering of attachments.  Oh, and as a
> self-defense mechanism, nearly every ISP around the world will block
> incoming/outgoing TCP port 25 to home users.
> 
>> You've got to use a dynamic
>> DNS service like no-ip.com, and even if you manage that you might still
>> find yourself getting cut off over a TOS violation. People are
>> thoroughly conditioned toward using the internet as glorified cable TV,
>> and only expressing themselves on platforms they don't control.
> 
>    That is true too, but I suspect even *if* you could easily run a server at
> home, 99% would not even bother (or know what it is).
> 
>> Then there's DNS, domain names, ICAAN, etc. Maybe if we still used a
>> UUCP-style addressing scheme like
>> <country>.<province>.<city>.<neighborhood>.<hostname> it wouldn't
>> matter what I called my host as long as the hostname was unique to the
>> <neighborhood>. But instead we settled on <domain-name>.<tld>, which
>> needs to be administered by registrars to ensure uniqueness, and domain
>> registration is yet more sysadmin stuff that most people don't
>> necessarily have the time, skill, or inclination to deal with.
> 
>    There are groups working on alternative naming/routing schemes that don't
> require a global namespace.  It's not an easy problem.
> 
>    Also, at one time, domains under the .us domain were restricted to
> geographical names, like example.boca-raton.fl.us.  But they were free to
> register, and as far as I can tell, permanent.  The issue though, is that
> even under the <city>.<state>.us, you still need unique names, although it's
> a smaller area to worry about.
> 
>    I don't think you can do that anymore.  I went down that rabbit hole
> several months ago looking to register a geographical domain under .us and
> couldn't do it (or find out who controls the domains under
> boca-raton.fl.us).  Pity, I was hoping to get a free domain registration
> for life.
> 
>> I would prefer that public hosts weren't necessary. I think that
>> everybody who wants to should be able to publish from their own device
>> without having to become a sysadmin. As long as operating a gemini
>> service remains the province of sysadmins, we're going to maintain the
>> division between haves (sysadmins) and have nots (people who can't or
>> don't want to sysadmin) that prevented the web from becoming (or
>> remaining) a democratic platform.
> 
>    Never underestimate the lack of giving a damn the general population has.
> I'm sure there are aspects of your life that you don't give a damn about that
> other people think you should give more than a damn.
> 
>> This became something of a political rant, and I probably should have
>> put it on demifiend.org instead. Sorry if this doesn't belong here; I'm
>> posting this under a new subject so that it starts a new thread instead
>> of derailing the existing one.
> 
>    I think it's a conversation worth having, as it relates to how Gemini
> expands with new content.
> 
>    -spc
> 
> [1]	Disclaimer: I do pay extra for a static IPv4 address---at the time I
> 	needed it for my job, and now it's a "nice to have" and I can still
> 	afford it.  It's actually not that much over the stock price of
> 	service.
> 
> [2]	My router will forward ssh traffic to my main development system.
> 
> [3]	http://boston.conman.org/2019/07/09-12
> 	http://boston.conman.org/2019/08/06.2
> 
> [4]	http://boston.conman.org/2020/04/05.1
> 
> [5]	http://boston.conman.org/2004/09/19.1
> 
> [6]	http://boston.conman.org/2019/05/13.1
>

Link to individual message.

61. Jason McBrayer (jmcbray (a) carcosa.net)

Koushik Roy <koushik at meff.me> writes:

> All that said, I'm not convinced that an in-band Gemini posting
> mechanism is the correct answer. I prefer a solution that involves the
> community standardizing around some other mechanism to upload content,
> and then building/popularizing apps (native or not) that use this
> mechanism. To make this more concrete, I can imagine a scenario where
> apps are built on top of FTPS to allow users to author content and
> then transparently have them appear in a Gemini capsule. Swap FTPS
> with one of many other mechanisms, such as SFTP, NNTPS, Email, what
> have you.

I strongly agree with this. For me, the attraction of Gemini is that the
web is no longer a suitable protocol for document delivery, because most
sites require very large, complex browsers that are optimized for use as
an application runtime. If every webserver were running Shizaru, I could
use a reasonable browser like Lynx or Dillo, but that's not realistic
today. I want a document-sharing ecosystem that is not going to expand
to require runtimes for untrusted remote applications.

I don't really feel that we are lacking in file copying protocols, or
that any of the existing file copying protocols are problematic in the
same way that http(s) is. While some of them (SCP? Rsync over SSH?
git+ssh?) may be complex to implement from scratch, they also are mostly
encapsulated by small programs that can be scripted. I also realize
titan: is a separate protocol, but I'm not sure it does the same job as
Gemini of solving a problem that needs to be solved. I'm afraid it's
more like the tendency of the web to replace all other protocols with
extensions of itself.

-- 
+-----------------------------------------------------------+  
| Jason F. McBrayer                    jmcbray at carcosa.net  |  
| If someone conquers a thousand times a thousand others in |  
| battle, and someone else conquers himself, the latter one |  
| is the greatest of all conquerors.  --- The Dhammapada    |

Link to individual message.

62. solderpunk (solderpunk (a) SDF.ORG)

On Thu, Jun 18, 2020 at 10:57:56AM -0400, Jason McBrayer wrote:

> If every webserver were running Shizaru, I could
> use a reasonable browser like Lynx or Dillo, but that's not realistic
> today.

I will post something on just that topic quite shortly.  :)  Stay tuned!

Cheers,
Solderpunk

Link to individual message.

---

Previous Thread: [ANN] A new gemini client for acme

Next Thread: sysadmin advice concerning backups