On Thu, 5 Nov 2020 22:09:54 +0100 Katarina Eriksson <gmym at coopdot.com> wrote:

> Yes, the server would need to be able to accept and respond to other
> connections while the file is transmitted. It also needs either be
> able to run CGI scripts or have custom code for that endpoint.

Not only that, but how would a request sent separately tell the server
which of its dozen connections is the one we want progress on? It would
also kill a nice feature of the protocol: being able to serve
connections from a queue instead of forking for each one. Once a
transmission is over, the connection is over too, and that simplifies
server code A LOT. (There is a rough sketch of such a serve loop at the
end of this message.)

> Foster an environment where capsule authors are good enough internet
> citizens to always provide the file size on the page? Sure, that
> wouldn't be tricky at all.

If that becomes a guideline or a piece of etiquette everyone should
follow, it is not an environment of division. Take this mailing list as
an example, or almost any mailing list: plaintext email is a guideline,
part of the culture, and sending HTML email can be seen as disrespectful
at times. That is the result of social rules that have evolved over
time, and I don't see why it would be any different for Gemini.

Also, if capsule authors insist on serving large content over this
protocol, it is on them to make it less annoying. The protocol should
not accommodate them by growing more features; if you want HTTP, you
know where to find it.

> Corrected:
> Scenario 3.1: download a big file and the capsule author neglected to
> provide a file size

Bummer.

> I would personally prefer the way we did it with FTP and provide a
> large_file.bin.md5sum or now large_file.bin.sha1sum

Brilliant, I don't see anything wrong with that. (A small verification
sketch is included below as well.)

> but I would use another protocol for large files.

Ok then, there's nothing to discuss further.
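To make the first point concrete, here is the kind of serve loop I have
in mind. It is only a sketch, not a real Gemini server (plain TCP
instead of TLS, no URL parsing, no MIME handling, all names made up);
it just illustrates how one-request-per-connection lets a server take
connections one at a time from the listen queue without forking:

    # Sketch only: plain TCP, no TLS, so not actually valid Gemini.
    import socket

    HOST, PORT = "0.0.0.0", 1965           # 1965 is the Gemini port

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(16)                      # waiting clients sit in the queue
        while True:
            conn, _addr = srv.accept()      # handle one connection at a time
            with conn:
                conn.recv(1026)             # request: <URL><CR><LF>, 1024+2 max
                conn.sendall(b"20 text/gemini\r\n")
                conn.sendall(b"# hello\r\n")
            # transmission over, connection over: no per-connection state left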
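And since checksum sidecar files came up, this is roughly all a
client-side check needs, assuming the usual "HEXDIGEST  filename"
format that sha1sum(1) writes (the file names here are made up):

    import hashlib

    def verify(path, sidecar):
        # First whitespace-separated field of the .sha1sum file is the digest.
        expected = open(sidecar).read().split()[0].lower()
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest() == expected

    print(verify("large_file.bin", "large_file.bin.sha1sum"))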