💾 Archived View for rawtext.club › ~sloum › geminilist › 002411.gmi captured on 2020-10-31 at 14:31:42. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
Kevin Sangeelee kevin at susa.net
Sat Aug 15 17:00:18 BST 2020
- - - - - - - - - - - - - - - - - - -
Gemini currently allows a fetch-then-process model, while a URL that refers to a streaming resource forces me to:
a) intercept the response and make a decision on how to proceed, or
b) wait for a timeout.
There's plenty of tech for which implementing the above is trivial, but it's currently not mandatory. If my client pipes the output to another process, there's no reason for either process *not* to wait till the server closes the connection - I currently have every reason to expect that the server is sending me data for any Gemini request.
Knowing in advance that a server will not close a connection means that streaming works, existing clients don't hang or break, new clients aren't forced to add extra complexity, and unnecessary requests can be avoided entirely.
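To illustrate the point about knowing in advance, here is a minimal sketch of how a client might branch on an up-front streaming hint. The "stream" URL scheme and the helper functions are purely hypothetical, for illustration - they are not part of Gemini or any real library:

```python
from io import StringIO
from urllib.parse import urlparse

received = []  # collected output, just for demonstration

def process_line(line):
    received.append(("line", line.rstrip("\n")))

def process_document(text):
    received.append(("doc", text))

def handle_response(url, body_reader):
    """Dispatch on a hypothetical streaming scheme in the URL.

    body_reader is any file-like object over the response body.
    """
    if urlparse(url).scheme == "stream":
        # Streaming resource: hand each line on as it arrives,
        # never waiting for EOF.
        for line in body_reader:
            process_line(line)
    else:
        # Ordinary Gemini: the server closes the connection when
        # done, so reading to EOF is safe and simple.
        process_document(body_reader.read())

# Simulated responses (StringIO stands in for a network socket):
handle_response("stream://example/chat", StringIO("hi\nthere\n"))
handle_response("gemini://example/page.gmi", StringIO("# Doc\n"))
```

The decision is made once, from the URL alone, before any bytes arrive - which is exactly what lets a fetch-then-process client avoid the intercept-or-timeout problem described above.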
This is just my take, anyway!
Kevin
On Sat, 15 Aug 2020 at 11:47, cage <cage-dev at twistfold.it> wrote:
On Fri, Aug 14, 2020 at 11:39:20PM +0000, James Tomasino wrote:
Hi!
Honestly, I fail to understand why a new scheme is needed here. The
protocol already supports streaming, as discussed in previous messages,
and I do not see a lot of advantages to using a different scheme,
except (as you wrote) signalling to the user that the content will not
end.
Probably I am missing something; please help me to understand.
6. It is still a single client-initiated request happening in the
foreground. We aren't creating background threads or who-knows-what
running services. We're getting an ongoing document in real time,
that's all.
I do not think this is entirely true if you want to update or keep
alive the client's UI while the content is flowing from the
server. Some kind of concurrency enters the equation, I think.
Bye!
C.