2021-01-01
I find myself out in the woods today, enjoying the new year on a very rainy day in a more remote part of the country. Being rained in, I won't be out hiking, so I thought I might try out some networked computing under terrible network conditions.
I'm on a satellite line, in the rain, which means I'm getting somewhere between 8 KiB *per minute* and 5 KiB/s. That isn't too much of a problem for some tools, like ftp and ssh, if I'm willing to forgo much of the interactivity. Mosh is downright delightful compared to ssh. For gemini clients the real problem seems to be client-side timeouts: while most servers are content to dribble out bits at a snail's pace, some clients are less robust to these levels of latency than others.
I've found ariane to perform better than deedum, except that certain failure modes (either timeouts or connection resets, I can't tell which) result in application crashes. Deedum is too eager to time out connections to make browsing gemini much fun. I may try sending a patch if I can coerce a download from github's servers. The trick with elpher lay in the variable elpher-connection-timeout, which defaults to 5 seconds but is painless to change; I found 300 works better in my case. While that sounds bad, I've found it tolerable in practice.
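For reference, the elpher tweak is a one-liner in my init file (300 is just the value that worked for me, not anything principled):

```
;; elpher's default 5-second timeout gives up long before a
;; satellite link can finish a request; raise it dramatically.
(setq elpher-connection-timeout 300)
```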
DNS is working surprisingly well, resolving domain names without much delay (certainly not thousands of times slower than normal). I'm a little surprised to find how poorly git itself is performing, resetting connections and rolling back some of the progress. It is a little frustrating to watch objects enumerate, count, and compress before finally resetting and losing progress. I don't see anything likely-looking in the man page, but surely there is some way around this? I think github is partly to blame; connecting via curl shows connection resets without keepalive within 30-60 seconds, which isn't true of all servers. I'm sure I'm ruining someone's P95s.
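One thing I may try is keeping each transfer small enough that a reset doesn't cost much progress. An untested sketch of shallow-then-deepen cloning (the repository URL is a placeholder), which won't help if github really is resetting connections server-side:

```
# Grab only the most recent commit first, so a reset loses
# at most one small pack instead of the whole history.
git clone --depth 1 https://github.com/example/repo.git
cd repo

# Then widen the history a little at a time; repeat as the
# connection allows, or use --unshallow once things improve.
git fetch --deepen=50
```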
On a happier note, I had a chance to test my own servers and found that they all perform admirably, not terminating connections too early and serving lightweight pages in a timely manner.
Generally speaking, I've been enjoying the stress test of my own browsing and development practices. mbsync can pull down the latest messages to my inbox when things are convenient; from there I can peruse different mailing lists and write responses. I found my use of smtpmail-send-it in emacs too slow to bother with under the current conditions, but it is effortless to toggle smtpmail-queue-mail and save off my replies for later, when connectivity is better.
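The toggle amounts to a line of elisp; a minimal sketch, using smtpmail's own variable and command:

```
;; Write outgoing mail to the queue directory on disk instead
;; of talking to the SMTP server immediately.
(setq smtpmail-queue-mail t)

;; Once connectivity improves, flush everything queued with
;; M-x smtpmail-send-queued-mail
```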
I'll make an attempt to push this post out via mercurial; either it'll work or I'll be forced to queue up my local changes until I re-enter the land of high-speed wireless.
I'm curious whether anyone else routinely finds themselves on a low-bandwidth or high-latency connection and, if so, what tips and tricks can be leveraged to maximize productivity. I do recall Ben over at kwiecien.us discussing it at one point; I should probably dig around in GUS for further discussion.