THE GEMINI PROTOCOL SEEN BY THIS HTTP CLIENT PERSON
MAY 28, 2023 DANIEL STENBERG
There is again a pull-request submitted to the curl project to bring support for the Gemini protocol. It seems like a worthwhile effort that I support, even if there is a lot of work involved and it might take some time before it reaches a state in which it can be merged. A previous attempt at doing this was abandoned a while ago.
This renewed interest made me take a fresh tour through the current Gemini protocol spec and I decided to write down some observations for you. So here I am. These are comments based on my reading of the 0.16.1 version of the protocol spec. I have implemented Internet application protocols client side for some thirty years. I have not actually implemented the Gemini protocol.
Motivations for existence
Gemini is the result of a kind of movement that tries to act against developments its proponents think are wrong on the current web. Gemini is not only a new wire protocol, but also features a new document format and more. They also say it's not “the web” at all but a new thing. As a sign of this, the protocol is designed by the pseudonymous “Solderpunk” – and the IETF or other suitable or capable organizations have not been involved – and it shows.
Counter surveillance
Gemini has no cookies, no negotiations, no authentication, no compression and basically no (other) headers either in a stated effort to prevent surveillance and tracking. It instead insists on using TLS client certificates (!) for keeping state between requests.
A Gemini response from a server is just a two-digit response code, a single media type and the binary payload. Nothing else.
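To illustrate, a complete exchange looks roughly like this, based on my reading of the spec (the URL and content are made up; 20 is the success code and the status line ends with CRLF):

```
C: gemini://example.org/page.gmi<CRLF>
S: 20 text/gemini<CRLF>
S: # A heading
S: Some text and a link line:
S: => gemini://example.org/other.gmi Another page
S: [server closes the connection to mark the end of the body]
```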
Reduce complexity
They insist that the reduced complexity enables more implementations, both servers and clients, and that seems logical. The reduced complexity however also makes the result less visually pleasing to users, and by taking shortcuts in the protocol it risks adding complexities elsewhere instead. It's quite similar to going back to GOPHER.
Form over content
This value judgement is repeated among Gemini fans. They think “the web” favors form over content and they say Gemini intentionally is the opposite. It seems to be true because Gemini documents certainly are never visually very attractive. Like GOPHER.
But of course, the protocol is also so simple that it lacks the power to do a lot of things you can otherwise do on the web.
The spec
The only protocol specification is a single fairly short page that documents the over-the-wire format mostly in plain English (undoubtedly leaving room for interpretation conflicts), includes the URL format specification (very briefly) and oddly enough also features the text/gemini media type: a new document format that is “a kind of lightweight hypertext format, which takes inspiration from gophermaps and from Markdown”.
The spec says “Although not finalised yet, further changes to the specification are likely to be relatively small.” The protocol itself however has no version number or anything like it, and there is no room for doing a Gemini v2 in a forward-compatible way. This “living document” approach seems to be popular these days, even if it is rather problematic for implementers.
Gopher revival
The Gemini protocol reeks of GOPHER and HTTP/0.9 vibes: application protocol style anno mid-1990s, with TLS on top. It is designed to serve single small text documents from servers you have a relation to.
Short-lived connections
The protocol enforces closing the connection after every response, forcibly making connection reuse impossible. This is terrible for performance if you ever want to get more than one resource off a server. I also presume (but there is no mention of this in the spec) that they discourage use of TLS session ids/tickets for subsequent transfers (since they can be used for tracking), making subsequent transfers even slower.
We know from HTTP, and it was a primary reason for the introduction of HTTP/1.1 back in 1997, that doing short-lived bursty TCP connections makes it almost impossible to reach high transfer speeds due to the slow-starts. Re-doing the TCP and TLS handshakes over and over can also be seen as a plain waste of energy.
The main reason they went with this design seems to be to avoid having a way to signal the size of payloads or do some kind of “chunked” transfers. Easier to document and to implement: yes. But also slower and more wasteful.
Serving an average HTML page using a number of linked resources/images over this protocol is going to be significantly slower than with HTTP/1.1 or later. Especially for servers far away. My guess is that people will not serve “normal” HTML content over this protocol.
Gemini only exists over TLS. There is no clear text version.
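To show just how small a client can be, here is a minimal sketch in Python. This is my own illustration, not code from the spec or from any real client, and it deliberately skips certificate verification, which a real client must not do (see the TOFU section below). Note how a new connection, including full TCP and TLS handshakes, is needed for every single resource:

```python
import socket
import ssl

def gemini_fetch(url, host, port=1965):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # no verification: TOFU/CA handling omitted
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The entire request is the URL followed by CRLF: no method, no headers
            tls.sendall(url.encode("utf-8") + b"\r\n")
            data = b""
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break  # the server closing the connection ends the body
                data += chunk
    header, _, body = data.partition(b"\r\n")
    return header.decode("utf-8"), body

header, body = gemini_fetch("gemini://example.org/", "example.org")
print(header)  # for example: "20 text/gemini"
```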
GET-only
There are no other methods or ways to send data to the server besides the query component of the URL. There are no POST or PUT equivalents. There is basically only a GET method. In fact, there is no method at all but it is implied to be “GET”.
The request is also size-limited to a 1024-byte URL, so even using the query mechanism, a Gemini client cannot send much data to a server. More on the URL further down.
Query
There is a mechanism for a server to send back a single-line prompt asking for “text input” which a client then can pass to it in the URL query component in a follow-up request. But there is no extra meta data or syntax, just a single line text prompt (no longer than 1024 bytes) and free form “text” sent back.
There is nothing written about how a client should deal with an already existing query part in this situation, such as when you want to both send a query and answer the prompt. Nor how to deal with the fact that the entire URL, including the newly added query part, still needs to fit within the URL size limit.
Better use a short host name and a short path name to be able to send as much data as possible.
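As far as I can tell from the spec, the intended flow looks something like this (a sketch with made-up paths; 10 is the “input” status code and the prompt is the rest of that header line):

```
C: gemini://example.org/search<CRLF>
S: 10 Enter a search term<CRLF>
   [connection closes; the client asks the user, then reconnects]
C: gemini://example.org/search?some%20term<CRLF>
S: 20 text/gemini<CRLF>
S: ...the results...
```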
TOFU
the strongly RECOMMENDED approach is to implement a lightweight “TOFU” certificate-pinning system which treats self-signed certificates as first-class citizens.
(From the Gemini protocol spec 0.16.1 section 4.2)
Trust on first use (TOFU) as a concept works fairly well when you interact with a limited set of servers with which you have some relationship. It therefore often works fine for SSH, for example. (I say “fine” because even with SSH, people often have the habit of just saying yes and accepting changed keys when they perhaps should not.)
There are multiple problems with doing TOFU for a client/server document browsing system like Gemini.
A challenge is of course that on the first visit a client cannot spot an impostor, and neither can it when the server updates its certificate down the line. Maybe an attacker did it? It trains users to just say “yes” when asked if they should trust it, since you as a user might not have a clue about who runs that particular server or why its certificate changed.
The concept of storing certificates to compare against later is a scaling challenge in multiple dimensions:
Certificates need to be stored for a long time (years?)
Each host name + port number combination has its own certificate. In a world that goes beyond thousands of Gemini hosts, this becomes a challenge for clients to deal with in a convenient (and fast) manner.
Presumably each user on a system has their own certificate store. What user A trusts, user B does not necessarily trust.
Does each Gemini client keep its own certificate store? Do they share? Who can update? How do they update the store? What’s the file format? A common db somehow?
When storing the certificates, you might also want to do what modern SSH does: not store the host names in cleartext, since that is a rather big privacy leak showing exactly which servers you have visited.
I strongly suspect that many existing Gemini clients avoid this huge mess by simply not verifying the server certificates at all or by just storing the certificates temporarily in memory.
You can opt to store a hash or fingerprint of the certificate instead of the whole one, but that does not change things much.
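For what it is worth, here is a sketch in Python of what such a fingerprint store could look like. This is entirely my illustration: the file location and format are made up, and the hashing of host names mimics what SSH does with HashKnownHosts:

```python
import hashlib
import json
import os

STORE = os.path.expanduser("~/.gemini-known-hosts.json")  # made-up location

def host_key(host, port):
    # Hash host:port so the store does not reveal which servers were visited
    return hashlib.sha256(f"{host}:{port}".encode()).hexdigest()

def tofu_check(host, port, der_cert):
    # der_cert is the server certificate in DER form, for example from
    # ssl.SSLSocket.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    store = json.load(open(STORE)) if os.path.exists(STORE) else {}
    key = host_key(host, port)
    if key not in store:
        store[key] = fingerprint  # first use: trust blindly and pin
        with open(STORE, "w") as f:
            json.dump(store, f)
        return True
    # Later visits must match the pin, and there is no good way to tell a
    # legitimate certificate renewal from an attack
    return store[key] == fingerprint
```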
I think insisting on TOFU is one of Gemini's weakest links and I cannot see how this system can ever scale to a larger audience or even just to many servers. I foresee that they will need to accept Certificate Authorities or use DANE in the future.
Gemini Proxying
By insisting on passing the entire URL in the requests, primarily a way to solve name-based virtual hosting, Gemini also makes it easy for a server to act as a proxy for other servers. On purpose. And maybe I should write “easy”.
Since Gemini is (supposed to be) end-to-end TLS, proxying requests to another server is not actually possible while also maintaining security. The proxy would for example have to respond with the certificate retrieved from the remote server (in addition to its own), but the spec mentions nothing of this, so we can guess existing clients and proxies don't do it. I think this can be fixed by just adjusting the spec, but it would add some rather kludgy complexity for a feature that is maybe not too exciting.
Proxying to gopher:// URLs should be possible with the existing wording because there is no TLS to the server end. It could also proxy http:// URLs, but then risks having to download the entire resource before it can send the response.
URLs
The Gemini URL scheme is explained in 138 words, which is of course very terse and assumes quite a lot. It includes “This scheme is syntactically compatible with the generic URI syntax defined in RFC 3986”.
The spec then goes on to explain that the URL needs to be UTF-8 encoded when sent over the wire, which I find peculiar because a normal RFC 3986 URL is just a set of plain octets. A Gemini client thus needs to know, or assume, the charset that was used for the original URL in order to convert it to UTF-8.
Example: if there is a %C5 in the URL and the charset was ISO-8859-1, that octet means LATIN CAPITAL LETTER A WITH RING ABOVE, and the UTF-8 version of that character is the two-byte sequence 0xC3 0x85. But if the original charset instead was ISO-8859-6, the same %C5 octet means ARABIC LETTER ALEF WITH HAMZA BELOW, encoded as 0xD8 0xA5 in UTF-8.
To me this does not rhyme well with reduced complexity. This conversion alone will cause challenges when done in curl because applications pass an RFC 3986 URL to the library and it does not currently have enough information on how to convert that to UTF-8. Not to mention that libcurl completely lacks UTF-8 conversion functions.
This makes me suspect that the intention is probably that only the host name in the URL should be UTF-8 encoded for IDN reasons and the rest should be left as-is? The spec could use a few more words to explain this.
One of the Gemini clients that I checked out to see how they do this, in order to better understand the spec, even uses the punycode version of the host name, quoting “Pending possible Gemini spec change”. What is left to UTF-8 encode then? That client did not UTF-8 encode any part of the URL, which adds to my suspicion that people don't actually follow this spec detail but rather just interoperate…
The UTF-8 converted version of the URL must not be longer than 1024 bytes when included in a Gemini request.
The fact that the URL size limit applies to the UTF-8 encoded version of the URL makes it hard to error out early, because the source version of the URL might be shorter than 1024 bytes only to grow past the size limit in the encoding phase.
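A few lines of Python show both the charset ambiguity and the size-limit problem (my example, not from the spec):

```python
from urllib.parse import unquote

# The same percent-encoded octet means different things depending on the
# source charset, which the client cannot reliably know:
print(unquote("%C5", encoding="iso-8859-1").encode("utf-8"))  # b'\xc3\x85' (Å)
print(unquote("%C5", encoding="iso-8859-6").encode("utf-8"))  # b'\xd8\xa5' (إ)

# Since the 1024-byte limit applies to the UTF-8 encoded form, every such
# octet doubles in size: a URL just under the limit in its source form can
# still blow past it after conversion.
```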
Origin
The document carelessly assumes that the host name is a good authority boundary for TLS client certificates, totally ignoring the fact that “the web” learned this lesson a long time ago. It needs to restrict the scope to the host name plus port number. Not doing that opens up Gemini to rather bad security flaws. This can be fixed by improving the spec.
Media type
The text/gemini media type should simply be moved out of the protocol spec and be put elsewhere. It documents content that may or may not be transferred over Gemini. Similarly, we don't document HTML in the HTTP spec.
Misunderstandings?
I am fairly sure that once I press publish on this blog post, some people will insist that I have misunderstood parts or most of the protocol spec. I think that is entirely plausible and kind of my point: the spec is written in such an open-ended way that it will not avoid this. We basically cannot implement this protocol by only reading the spec.
Future?
It is impossible to tell if this will fly for real or not. This is not a protocol designed for the masses to replace anything at high volumes. That is of course totally fine and it can still serve its community perfectly fine. There seems to be interest enough to keep the protocol and ecosystem alive for the moment at least. Possibly for a long time into the future as well.
What I would change
As I believe you might have picked up by now, I am not a big fan of this protocol but I still believe it can work and serve its community. If anyone would ask me, here are a few things I would consider changing in order to take it up a few notches.
Split the spec into three separate ones: protocol, URL syntax, media type. Expand the protocol parts with more exact syntax descriptions and examples to supplement the English.
Clarify the client certificate use to be origin based, not host name.
Drop the TOFU idea; it makes for too weak a security story, does not scale and introduces massive complexities for clients.
Clarify the UTF-8 encoding requirement for URLs. It is confusing and possibly bringing in a lot of complexity. Simplify?
Clarify how proxying is actually supposed to work in regards to TLS and secure connections. Maybe drop the proxy idea completely to keep the simplicity.
Consider a way to re-use connections, even if that means introducing some kind of “chunks” HTTP-style.
Hacker News
A look at the Gemini protocol: a brutally simple alternative to the web (toffelblog.xyz)
415 points by flatlanderwoman on July 4, 2020 | 347 comments
bluefox on July 4, 2020 | next [–]
It feels like the main feature of this project is incompatibility with HTTP. The protocol is new and primitive, it's easy to write tools around it, write texts about it, etc. A small community forms, and you're part of it. You can advertise it to others, or rant against the mainstream, or whatever. But now what? You lose interest and abandon it, and eventually it dies.
We already have a transport protocol that "everybody" agrees on: HTTP. We can define a specification for a subset of it, and a subset of HTML/browser features we want, create tools around them, and form a small community.
The advantage here is that the community is not an island. Users of Big Browser can still read your latest rants. They can even learn about this project and, while perhaps not using Mom-and-Pop browser, may support it in their sites, since it wouldn't require another server; mostly just having their site work without JavaScript would be a huge step forward. Right, you don't have Google filtering based on Accessibility. The community can create a search engine that does. Now what? You just get on with your life, producing and consuming AccessibleWeb content without the gratuitous incompatibility.
ddevault on July 4, 2020 | parent | next [–]
Incompatibility with HTTP is a feature, and a good one at that. It's drawing a line in the sand: this is bad, and we don't want to be a part of it. We don't want a subset of HTTP. We don't want a subset of browser features. Yes, we are fully aware that it would not work with "modern" "browsers". We want something which isn't rotten from the start. We don't want to click a link and end up back on another maggot infested pile of JavaScript crap - we want a network where everyone who's playing is playing on a sane baseline.
The whole point of this project is to shrug off legacy. Yes, that means reinventing a few wheels.
imtringued on July 6, 2020 | root | parent | next [–]
I like tiny communities as much as anyone but I personally see no need for this to exist. Firefox might consume 2GB but it is also running hundreds of tabs. 8GB of RAM was more than enough. On Linux I rarely went over 4.5GB. Worst offenders are JVM based applications. Notably modded Minecraft can consume absurd amounts of RAM.
I'm more terrified of the average electron app or java app than I am scared of a single browser tab.
wolco on July 4, 2020 | root | parent | prev | next [–]
Or just create a browser without javascript support.
ddevault on July 4, 2020 | root | parent | next [–]
I wonder if everyone in this thread is willfully missing the point, or else if we're just so deep into the brainwashing of the modern web that they can no longer see the surface.
Wowfunhappy on July 4, 2020 | root | parent | next [–]
I’m just not convinced that HTML and especially HTTP are “rotten to the core”. You can make http://motherfuckingwebsite.com and the markup is all very clean.
nardi on July 4, 2020 | root | parent | next [–]
It’s about the network, not the website.
lal on July 5, 2020 | root | parent | prev | next [–]
the markup might be clean but it's pretty ugly and network inefficient because it's full of boilerplate that is only required because html also supports 1000 times more nonsense than this site uses, so you have to type <p></p> to render every line of that text, or <a href=""></a> to just render a link. you need to do <h></h> for headings. if all you want to do in your content is those things, that markup is ridiculous.
Wowfunhappy on July 5, 2020 | root | parent | next [–]
if all you want to do in your content is those things, that markup is ridiculous.
Why? I actually think it’s pretty clean. You have a one-to-two letter opening tag and an equivalent closing one.
Markdown looks aesthetically nicer and is easier to type, but it’s less precise.
gpsx on July 4, 2020 | root | parent | prev | next [–]
Don't worry, if this were to ever catch on it would be incorporated into browsers, meaning the net effect is just to make browsers more complicated.
flatlanderwoman on July 4, 2020 | root | parent | next [–]
meaning the net effect is just to make browsers more complicated.
Maybe it would, slightly. But would it make the browser any more complicated than the unessential features that Firefox has in its default build right now?
Pocket integration, FF Sync, some screenshot function?
The web is so broken that my Firefox even has some Protection Dashboard thing (about:protections). Not that I've ever used or noticed it before.
ddevault on July 4, 2020 | root | parent | prev | next [–]
That's the browser's problem. We'll still have a new, simpler protocol which we can use without the web stack.
ozfive on July 4, 2020 | root | parent | prev | next [–]
This! I agreed wholeheartedly!
peteyboy on July 5, 2020 | root | parent | prev | next [–]
I think both. But, hey, they are paying attention, I guess?
ozim on July 4, 2020 | parent | prev | next [–]
I totally agree with that. It is not like some web police force is holding people at gunpoint to add javascript and angular or react. People use all those features because they want to and they find it useful.
loup-vaillant on July 4, 2020 | root | parent | next [–]
There kinda is?
While web sites can avoid JavaScript all right, browsing most sites does require JavaScript. As a NoScript user, I'm keenly aware of how many web sites simply do not work without JavaScript.
And I'm not even talking about all the third party spyware.
JamesLeonis on July 4, 2020 | root | parent | next [–]
I had to give up doing this. I couldn't manage all of the different times websites broke over just CDNs. Lord help me if the website is Ad driven.
I encourage every web developer to try running NoScript for a week. You will find it enlightening.
jhardy54 on July 4, 2020 | root | parent | next [–]
If you're already using uBlock Origin you can disable JS per site (or by default, like NoScript). I've found that the UI is way easier to understand compared to NoScript.
ta5hkv56f on July 5, 2020 | root | parent | next [–]
uMatrix is similarly excellent (finer grained control I think is the difference).
For frequently visited sites you can save which bits you want to allow, so it's not as onerous as you'd think.
Blocks Google Analytics and Facebook's poison by default.
imtringued on July 6, 2020 | root | parent | prev | next [–]
I'm not sure what you would gain from disabling things like the ability to upload multiple files, autocomplete fields and rich text editing. Raging against and disabling Javascript is just stupid in my viewpoint. I'm not going to cut out absolutely essential features just to satisfy some fringe group.
Silke_Mousepad on July 4, 2020 | root | parent | prev | next [–]
What if JS over gemini:// would only be able to render some things and would not have network access? It just is a question of what your html renderer allows. Add Lua if you think that's better! But you will have the same problems if embedded Lua can do network stuff.
...or let's just P2P our Emacsens... the last non divisive browser... :-P
lal on July 5, 2020 | root | parent | next [–]
gemini doesn't at the moment have any formalized systems for client-side scripting, effectively eliminating the need for that, and most of the gemini community seems pretty opposed to ever implementing such a thing, so it doesn't seem like that will ever change.
lal on July 5, 2020 | root | parent | prev | next [–]
This is more intended to be a comfier gopher than a less-shitty http. Most of its early adherents are ex-phloggers attracted to its features relative to gopher, not ex-webbers attracted to its lack of client-side scripting or whatever.
The fact that this has attracted a fair number of gopher users (who have always very strongly opposed http/s) would seem proof enough of its success, imo, at least within particular circles, and at least within the context of its goals. It was never intended to draw people away from the web, it was intended to be a sort of superpowered gopher.
Symbiote on July 4, 2020 | root | parent | prev | next [–]
This protocol doesn't seem to be aimed at commercial use, but the "web police" called "the manager" or "the marketing department" or whatever are the ones forcing the use of the privacy-invading tools.
gchamonlive on July 4, 2020 | root | parent | prev | next [–]
Legacy and momentum plays a greater role in adopting a framework than sheer choice. If I could choose I would use frameworks in haskell or rust 100% of the time where I work. I don't because there is nothing built around it there and I need to get the job done right now. I would like to be the change I want to see, but sometimes there is just not enough time.
acheron9383 on July 4, 2020 | root | parent | next [–]
That is a solid point, at the end of the day, programming languages and frameworks are just tools we use to build a product that has some use to someone (Be that monetary value or art or whatever). At work, it is almost always better to just iterate on the existing tool stack rather than try to spool up new one. I love writing Rust, but I'd need a good reason (or at least a big project to amortize the cost over) to reach for it over the existing very function c++ libraries I already use for our embedded work.
iamstupidsimple on July 4, 2020 | root | parent | prev | next [–]
The original Facebook and Google did not use much JavaScript, so I'm always skeptical when literal documents need it for anything other than ads (and even then...).
colejohnson66 on July 4, 2020 | root | parent | next [–]
And the screen refreshed every time you did something as it loaded a new page. Just because it’s abused doesn’t mean JS doesn’t have uses.
lal on July 5, 2020 | root | parent | next [–]
the screen refreshed every time you browsed to new content or did something that required accessing new content, as one would intuitively expect
ddalex on July 7, 2020 | root | parent | next [–]
I never understood the hangup on screen refreshes - I mean, why is it important that the screen doesn't refresh?
And if it's that important that EVERYBODY deploys JS to render client side to avoid the refreshes, why isn't this handled at the browser level in the first place (i.e. declare "ExtendDisplayTime"-something on your document and the browser should replace the screen content only AFTER the new page is completely painted).
But at the core of it, the web is a document-display system, and back-hammering and shoe-horning apps that masquerade as documents will always be painful.
petra on July 4, 2020 | root | parent | prev | next [–]
Maybe, instead of the spec allowing/banning javascript, it would only allow usage of a curated list of "apps" to be part of the page?
For example, i'm thinking of a roam research like content platform, without tracking and ads. It would be interesting to see shared content around that.
coronadisaster on July 4, 2020 | root | parent | prev | next [–]
they find it useful
Sometimes it is just the latest trend...
jbverschoor on July 5, 2020 | root | parent | prev | next [–]
Actually they do. If you do not fill in your taxes, which are powered by JavaScript, people with guns are going to come for you and take you hostage, in order to rob you of your “tax” money..
imtringued on July 6, 2020 | root | parent | next [–]
I can file my taxes without using a computer. I've done so for many years.
ForHackernews on July 4, 2020 | parent | prev | next [–]
We already have a transport protocol that "everybody" agrees on: HTTP.
Please no. No more everything-over-http. There are other ports besides 443; there are other protocols besides HTTP.
The Internet was once a general purpose peer-to-peer network, and we should try to keep it that way.
oneplane on July 4, 2020 | parent | prev | next [–]
I wonder if the main usage of the internet isn't the web as we think about it, but largely facebook, youtube, twitter, maybe some google+wikipedia. People don't see it in terms of 'websites' anymore.
krapp on July 4, 2020 | root | parent | next [–]
A lot of people seem to believe that, yet plenty of people have to deal with commercial sites, bank sites, school sites, etc., to say nothing of the third party sites linked to by aggregators. I think people still know what websites are.
Animats on July 4, 2020 | root | parent | next [–]
Yes. When you want to do something, you have to get off Google/Facebook and go to a real web site.
annoyingguy on July 5, 2020 | root | parent | prev | next [–]
From my POV facebook is not an app on my devices but accessed via the web, meaning html, https, a ton of java and whatever adopted php facebook uses today. What I don't get about this project is why it should need its own browser, besides the user adaptability that you could technically do via a browser plugin, rather than reinventing the wheel altogether. Coming from the BBS world I like this idea however, but will miss the illustrations (read: pictures). Will give it a go however.
ksec on July 5, 2020 | root | parent | prev | next [–]
I think that is the case especially with Smartphones and Apps. The Internet isn't about the "Web" anymore. Even hyperlinks point to "Apps".
However the Web still has its place in Google, Wiki, and Shopping. The three have one thing in common: they need multiple tabs to keep data and information.
jacobwilliamroy on July 4, 2020 | root | parent | prev | next [–]
It's mostly surveill- I mean uh... telemetry and advertising data being passed around by bots.
leephillips on July 4, 2020 | root | parent | prev | next [–]
I think many people still use email.
devmunchies on July 4, 2020 | root | parent | next [–]
email is its own protocol and not related to http and web browsers.
leephillips on July 4, 2020 | root | parent | next [–]
I was replying to “I wonder if the main usage of the internet isn't the web as we think about it”.
perryizgr8 on July 5, 2020 | root | parent | prev | next [–]
For most people email is just gmail.com or outlook.com. They don't realize that email is another protocol separate from the web. I am talking from personal experience :D
agumonkey on July 4, 2020 | root | parent | prev | next [–]
yes I think websites are becoming anachronistic at this point.. the web will fade into ubiquitous peers for high bandwidth data exchange. messages, multimedia, ar/vr .. the end result will be what matters.
marcus_holmes on July 4, 2020 | root | parent | next [–]
I dunno. I see a backlash happening, people building their own blog sites again. The whole "push to your own site, then syndicate that link to social media" thing seems to be happening more.
I'm tempted to create a new GeoCities and see how that goes for non-technical folks
agumonkey on July 5, 2020 | root | parent | next [–]
I don't think this will last. It's not my personal opinion or preference, I care little about the modern web, but to me .. watching people interacting with computers, browsers and how they use whatsapp or instagram .. I see no value for them[0] in the early simple web. It's like betting against television in the 90s.
[0] meaning in the average folk psychology, of course a simpler cleaner web has value, just not to them AFAIB
colejohnson66 on July 4, 2020 | root | parent | prev | next [–]
Isn’t that what NeoCities aims to be (a GeoCities reboot)? Disclaimer: haven’t used it; don’t know how it works.
marcus_holmes on July 8, 2020 | root | parent | next [–]
never heard of it. Thanks I'll go check it out :)
pkphilip on July 4, 2020 | parent | prev | next [–]
Well said. The problem with the Gemini protocol and its proponents is that they are trying to push a protocol (if you can call it that) which makes no sense when what they want to achieve can be achieved much more easily using an alternate approach.
This is like pushing for stone cart wheels when spoked wheels are already available.
For instance, if the issue is with the use of cookies, all they need to implement is a mechanism for the server to not respond to cookies in any form.
If their issue is with other forms of tracking, they could implement a browser which supports a small subset of the HTTP protocol which does not allow any tracking of any kind.
rpdillon on July 4, 2020 | root | parent | next [–]
The FAQ has a section on this (https://gemini.circumlunar.space/docs/faq.html)
---
The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of a https:// URL will be within the subset or outside it. It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user. It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences. Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch. Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.
imtringued on July 6, 2020 | root | parent | next [–]
I wonder what magic technology they use that makes Gemini servers more discoverable than a regular website. It certainly can't be a Gemini specific search engine because Gemini doesn't have a monopoly on search engines.
roca on July 5, 2020 | root | parent | prev | next [–]
You can constrain the features used by an HTML document using CSP headers. For example setting "img-src 'none'; style-src 'none'" will disable images and CSS styling. So this comment is wrong, basically.
tptacek on July 4, 2020 | root | parent | prev | next [–]
This doesn't make much sense. You can clearly demarcate it by replacing the "http" in the URL with "httpsubset" or something, and by running it on a separate port.
majewsky on July 5, 2020 | root | parent | next [–]
At that point, what benefit is there to forcing yourself to use a subset of HTTP?
tptacek on July 5, 2020 | root | parent | next [–]
See upthread.
rumanator on July 4, 2020 | root | parent | prev | next [–]
would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way.
That misses the fact that people don't want any of that; people just want to continue using the current infrastructure without functionalities that rely on a limited subset of features made available by the current infrastructure.
And that doesn't justify the effort of reinventing the wheel.
krapp on July 4, 2020 | root | parent | next [–]
Clearly some people do want that, though, especially here. Evey week or so we have a thread about how terrible the modern web is and inevitably there's a subthread about how someone just needs to create an extremist, minimal fork of HTML with no CSS and no JS and create a new, hip web with blackjack and hookers.
I imagine the exclusivity of something like this is part of the appeal. They don't want to be part of the current infrastructure, they don't want to interact with it, or anyone on it.
cycloptic on July 4, 2020 | root | parent | prev | next [–]
Nothing is keeping it as a demarcated space other than the same conventions that you would have to follow anyway if you followed a "minimal subset" approach. You could easily write a Gemini client that handles application/javascript. In fact if this protocol got any level of popularity, I would expect that to happen extremely quickly.
naasking on July 4, 2020 | root | parent | prev | next [–]
If their issue is with other forms of tracking, they could implement a browser which supports a small subset of the HTTP protocol which does not allow any tracking of any kind.
They did. That's the Gemini protocol. I recommend reading the spec to properly understand the constraints they were trying to meet.
harikb on July 4, 2020 | root | parent | next [–]
With a text like this, you can't blame the readers for assuming incompatibility and losing interest:
Now, what does Gemini currently have to offer? The best way to find out is to head over to the official site: gemini.circumlunar.space in your Gemini browser.
pkphilip on July 5, 2020 | root | parent | prev | next [–]
The Gemini protocol throws away a huge number of advances - in effect throwing the baby out with the bath water.
Gemini protocol could have been a subset of HTTP and the document format could have been a subset of HTML. For instance, if you decide to not implement cookies, javascript etc but if you retain the ability to have formatting for texts, tables, images etc it would have been sufficient.
And even in the case of images, the protocol could have mandated that images cannot be accessed cross-site. The same restrictions could have been placed on other aspects such as the style sheets.
jhardy54 on July 4, 2020 | root | parent | prev | next [–]
Gemini is a subset of HTTP?! I thought it was similar to HTTP, like Gopher. Could you link to the relevant docs?
naasking on July 4, 2020 | root | parent | next [–]
It very closely resembles HTTP 0.9. It's pretty much a single line protocol, like "GET [URL]crlf" over TLS, and that's it.
jhardy54 on July 4, 2020 | root | parent | next [–]
That's not what subset means though.
lazyjones on July 4, 2020 | parent | prev | next [–]
A subset of HTTP and HTML is not an option if you want to keep existing semantics. Both are by default broken, unsafe and unusable on some systems, so you will need boilerplate that clutters your subsets unnecessarily, e.g. a crapload of headers like "Content-Security-Policy", META-headers ("viewport") etc.
derefr on July 4, 2020 | root | parent | next [–]
Just create servers that consume AccessibleHTML, and then adulterate it into fancy regular HTTP/HTML for consumption by most UAs (or, optionally, leave it alone for consumption by UAs that send a `X-Rendering-Policy: AccessibleHTML` header. Either way you can still get at the "real" AccessibleHTML source by sending `Cache-Control: no-transform`.) Think of these as "AccessibleWeb to RegularWeb gateways"—except they'd be deployed as reverse-proxies, so RegularWeb users wouldn't have to know they were there.
This is similar to the driving motivation behind RSS: it was supposed to be something for simple static sites to put up, such that gateways could then poll it, before turning around and doing something more technologically-complex to actually deliver the events, like using WebHooks, or sending emails, or doing whatever PuSH does.
pwdisswordfish2 on July 5, 2020 | parent | prev | next [–]
I do not see the "incompatibility". There is a web-to-gemini gateway at https://portal.mozz.us which I think is an example to how easy it is to write one.
Big Browser is the cause of so many problems that people on HN complain about. Without the control over Big Browser that certain offending corporations have, their empires are considerably weaker.
Mom-and-Pop Browser is probably not an accurate caricature. Maybe something like End User Browser is more apropos.
If most HTTP is being sent over TLS these days, and Gemini is also over TLS, one could argue there is no "incompatibility". Gemini just doesn't need all the extra ad-hoc functionality that has been built on top of HTTP. It is intended for data retrieval, not something that relies on help from the 20+ million lines of code in Big Browser.
protomyth on July 4, 2020 | parent | prev | next [–]
If you are looking for a 1:1 clone of the early web, you will probably be disappointed. Gemini takes way more design hints from Gopher than it ever will from the web.
It looks to be a reimagined gopher with some early web parts. Which makes me wonder how close their protocol is to the gopher one.
lal on July 5, 2020 | root | parent | next [–]
in terms of semantics it is almost entirely dissimilar except that it's line-oriented. instead of having any of the content type fields or whatever which enforce half of gopher's grammar, links are just "=> gemini://some.url/tunes.mp3", and when you hit that content it gives you a header with a mime type telling you it's an mp3. so now you don't need any of the structure of gopher and can treat literally everything but links as raw text, and clients can optionally format some agreed-upon subset of markdown, and otherwise, that subset of markdown is so readable without formatting that you don't even really need it.
this means you can create a far simpler protocol semantically: requests are just the uri, responses are just a header of a two digit response code and the mime type, and the rest of the response is just the content as raw text. that's the entire protocol grammar.
smichel17 on July 4, 2020 | parent | prev | next [–]
I am mostly with you. It should be a subset of http. And the content should be a subset of html/css/js --- but the schema ought to have a different prefix. So if I navigate to acc://example.com, Mom-and-Pop browser would perform the same steps that Big Browser would take to fetch http://example.com
This allows hyperlinking to the accessible version of a page, and using different default browsers for the different protocols, so that I (as someone who wants to use Mom-and-Pop browser) can easily fall back to Big Browser when necessary to view a page that won't render over accs://
petra on July 4, 2020 | root | parent | next [–]
But breaking compatibility will make it harder for regular users to read that content.
What about using something like https://gemini.site.com ?
And part of that restricted spec would allow linking only to such links, to stay within the network ?
And the biggest challenge with Gemini is creating a great search engine. But now, searching site:gemini.* KEYWORD gives us the power of Google and other search engines.
vertex-four on July 4, 2020 | root | parent | next [–]
And part of that restricted spec would allow linking only to such links, to stay within the network ?
Doesn't work over time. You link somewhere, the other site owner a month later decides they want to use Google Analytics, now you're linking your readers back to the web they're trying to avoid.
And the biggest challenge with Gemini is creating a great search engine.
Not really, it's discovery in general, which can be solved in many ways that don't involve search engines; Wikipedia's references are often a good discovery tool, for example. I use aggregators to find Gemini content and follow discussions which are happening across the space.
corny on July 4, 2020 | root | parent | next [–]
You link somewhere, the other site owner a month later decides they want to use Google Analytics, now you're linking your readers back to the web they're trying to avoid.
What if instead of a direct link, there was an intermediary that verified that the source and target of the link conformed to spec? If either side didn't conform, the link would just not work. Ideally the intermediary would be built in to the source's web server for privacy reasons. If the target site decided to quit and break spec, people could still access the site from links posted outside of the network.
Wowfunhappy on July 4, 2020 | root | parent | prev | next [–]
Yeah, so what you really need is a browser that only supports your subset. It’ll ensure that whatever sites you view are fast/safe/whatever, but Gemini sites you make will be equally accessible to users of “legacy” browsers.
vertex-four on July 4, 2020 | root | parent | next [–]
Except that then 99% of people reading your page will be using Firefox, so it's not that big of a deal to just say fuck it and not do this whole SafeHTML thing any more when you want that one extra feature in a year. The 1% of readers who cared are just a minority.
The entire purpose here is to build a community of people who care about this sort of thing, who write content for that community, where taking your content away from that community is not an easy decision of adding a <script> or <style> tag.
petra on July 4, 2020 | root | parent | next [–]
Missing 99% of your potential readers is a big problem, though.
Maybe there's another way?
For example, Gemini could only link through a centralized link management server, and that link server will verify links to be "clean", and if a link isn't clean it will become dead?
Of course, that depends whether said link gets most of his traffic from within Gemini or outside.
Hmm...
vertex-four on July 4, 2020 | root | parent | next [–]
I spend most of my time in small spaces - 100 people, max, of who about 10 might be around at any given time. I'm not worried that people are going to miss what I have to say, because I'm talking to the people who are there. The people who aren't might as well be irrelevant.
Have you ever spent any time in small communities? It's lovely to just... not have to care about gaining followers or making arbitrary counters go up or whatever it is people do, and just talk/create for the sake of it. There's no "brand" to care about.
majewsky on July 5, 2020 | root | parent | next [–]
Counterpoint: All these brain structures that make us crave power and influence evolved in a time in which humans exclusively dealt with small communities (going by your "100 people max" criterion).
vertex-four on July 5, 2020 | root | parent | next [–]
The usual process of gaining influence in small communities looks like a much healthier behaviour than the process of gaining it in large ones, IMO.
petra on July 4, 2020 | root | parent | prev | next [–]
Well that's certainly a good use case.
But will that work, and compete with the web for knowledge sharing communities? I'm not sure.
petra on July 4, 2020 | root | parent | prev | next [–]
Maybe aggregators work.
But many users will expect search. And it's really hard to change people's minds.
vertex-four on July 4, 2020 | root | parent | next [–]
Many users also expect javascript-heavy experiences, that they'll be tracked across the web, and that every resource they're likely to access on the internet is commercial in nature. Gemini is quite explicitly a project to try different things.
acdw on July 4, 2020 | root | parent | prev | next [–]
Actually, gemini://gus.guru is a great search engine in gemini. And there's another one, gemini://houston.coder.town.
oefrha on July 4, 2020 | root | parent | prev | next [–]
You basically just invented WAP 2.0.
smichel17 on July 7, 2020 | root | parent | next [–]
I knew this was a thing, in the context of MMS, but didn't know the name, so thanks for that.
peteyboy on July 5, 2020 | root | parent | prev | next [–]
...or more specifically, re-implemented HTTP/0.9 but with TLS. What's wrong with that?
thih9 on July 4, 2020 | parent | prev | next [–]
You suggest using a subset of HTTP and list “Users of Big Browser can still read your latest rants.” as the main advantage.
But with a lightweight protocol like this, it seems easy to set up a proxy that would let anyone access the content via web.
acdw on July 4, 2020 | root | parent | next [–]
And in fact they have (there are two that I know of!):
- https://portal.mozz.us/gemini/gemini.circumlunar.space/
- https://proxy.vulpes.one/ (also does gopher)
6c696e7578 on July 5, 2020 | parent | prev | next [–]
Not long ago, someone submitted "Geek ring" which I think is very similar to what you're describing.
https://news.ycombinator.com/item?id=23549471
bob1029 on July 4, 2020 | parent | prev | next [–]
This is how I am approaching one of my app framework projects. HTTP is the delivery vehicle, but I only utilize a very small subset of the various full APIs (HTTP/HTML/JS/CSS) in order to deliver the framework's functionality. One of my objectives is for the framework to support a wider range of browsers than most modern websites are able to handle today. If vendors like Apple and Google begin fully embracing things like PWAs, I could wind up in a really good position with this approach.
lucrative on July 4, 2020 | parent | prev | next [–]
I would really like to see something like this taking form and getting traction. I think it would be a nice simplification of the Web.
My question is: what's stopping us from doing that?
avian on July 4, 2020 | prev | next [–]
Blogs, [..] are perfect for the Gemini format.
Gemini lacks in-line images
This is the only part that I don't really understand about Gemini. Even the most basic printed publications can include illustrations. <img> got added to HTML very early on because sometimes it's hard to share some piece of information in anything but a visual form.
I write a (mostly) technical blog that certainly focuses more on text content than images. I would be happy to throw away the header, sidebar and the rest of the "design" cruft (in fact my blog is perfectly usable in a browser that doesn't support CSS or Javascript). But I can't imagine having my posts without graphs, diagrams and photos inserted in the text.
If the fear is that in-line images would lead to frivolous use as ads or "useless multi-megabyte header images", then maybe a better approach would be to limit the number, or size, of images on each page? Some scientific publications do exactly that in an attempt to force the authors to focus on selecting only the most important images that need to accompany their papers.
mrob on July 4, 2020 | parent | next [–]
No technical limitation is suitable. The appropriate number or size of images depends on the accompanying text. Setting it high enough to allow all legitimate uses makes it weak enough that you might as well have no limit. And even a low limit does nothing to prevent annoying use up to that limit.
The best possible limit is "must convince the reader to click it".
avian on July 4, 2020 | root | parent | next [–]
I guess that's a reasonable position to take. It reminds me of paper publications where you have all the figures on color plates bound in the middle/end, so I guess it isn't without precedent.
On the other hand, it made me think of old Usenet posts and discussions. That was another medium where you were limited to plain text only. Posters were often forced to resort to awful ASCII-art drawings of things they wanted to explain, and that was just a horrible experience altogether (not to mention how fun those drawings are to decipher today, now that modern archives have mostly messed up the whitespace).
rapnie on July 4, 2020 | root | parent | next [–]
There are a number of text-based diagram formats that could be supported, like PlantUml and such.
WJW on July 4, 2020 | root | parent | next [–]
Surely a motivated ad designer could make a "good enough" ad in PlantUml?
rapnie on July 4, 2020 | root | parent | next [–]
Sure. They can also use ascii art or plain text.
vertex-four on July 4, 2020 | root | parent | prev | next [–]
Binary attachments were somewhat fiddly in Usenet, AIUI? I don't think MIME/8-bit clean support was really consistently there at the time. In Gemini, you'd just serve it as a binary file.
fock on July 4, 2020 | root | parent | prev | next [–]
Also not really sure how a protocol can claim to be simpler than HTTP (send bytes to port 80, get bytes, print bytes...) if it has baked-in content limitations...
86J8oyZv on July 4, 2020 | root | parent | prev | next [–]
What about "inline images can't be linkable?" Caption text could still link to an enlarged/detailed version. But if images can never be clicked/linked, it would be hard to abuse them for ads the way we see today.
warkdarrior on July 4, 2020 | root | parent | next [–]
That is adorably naive. This is why we have interstitials (see https://en.wikipedia.org/wiki/Interstitial_webpage).
sloum on July 4, 2020 | parent | prev | next [–]
The lack of inline images is less about aesthetics and more about predictability. When you request a gemini resource you know exactly what will happen: a TLS handshake followed by the server response (hopefully with your requested document).
Adding images requires more requests and breaks the concept of "one url/document == one request". I love that I know that my client will do nothing I do not tell it to do.
If you want to use gemini and you want inline images I believe https://proxy.vulpes.one does inline images of some form or other.
That said, images have other issues beyond causing page loads/requests to be unpredictable: they are an accessibility nightmare (as we have seen on the web).
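To make the "one url/document == one request" behaviour concrete, here is a minimal sketch of a complete Gemini exchange in Python (certificate verification is skipped for brevity; a real client would do trust-on-first-use instead):

```python
import socket
import ssl

def gemini_fetch(url, host, port=1965):
    # Many Gemini servers use self-signed certificates, so this sketch
    # skips verification; a real client should pin certs (TOFU) instead.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The entire request: one absolute URL, then CRLF. No headers.
            tls.sendall((url + "\r\n").encode("utf-8"))
            chunks = []
            while chunk := tls.recv(4096):
                chunks.append(chunk)

    # The entire response: "<two-digit status> <meta>\r\n" then the body.
    header, _, body = b"".join(chunks).partition(b"\r\n")
    status, _, meta = header.decode("utf-8").partition(" ")
    return status, meta, body

status, meta, body = gemini_fetch(
    "gemini://gemini.circumlunar.space/", "gemini.circumlunar.space")
print(status, meta)  # e.g. "20 text/gemini"
```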
troupe on July 4, 2020 | root | parent | next [–]
they are an accessibility nightmare (as we have seen on the web).
Audio is an accessibility nightmare for people who can't hear, text is an accessibility nightmare for people who can't see or people who can't read, German is an accessibility nightmare for people who can't speak German.
At some point we have to accept that not every way something is presented is going to be equally accessible by every person, but the solution isn't to just decide we should jettison a rich form of communication because there is a small subset that can't fully benefit from it. Even books that are expected to be used by people who can see all the images usually describe the purpose of the image, what it is illustrating, and why it was included.
fouric on July 4, 2020 | root | parent | prev | next [–]
I love that I know that my client will do nothing I do not tell it to do.
By this line of reasoning, you would have to manually approve every init process that your computer would want to start every time you boot it up.
"images" == "client doing things I don't tell it to do" is completely false. Clients have been built that have configurable policies for loading images and scripts, and they're conceptually very simple and easy to use - e.g. "don't load images by default, click to load temporarily, control-click to load and permanently whitelist" is an example of a user-agent policy that not only supports images, but conforms to your extremely convoluted definition of a user-agent "do[ing] nothing I do not tell it to do."
requires more requests
Is the purpose of a document browser to minimize requests, or to actually serve useful information? Images can encode data that cannot be encoded in text, and a vast quantity of information is much more easily read and understood in graphical form. If you want to minimize requests, then just don't use the web at all.
Also, this isn't even necessarily the case. You could encode images as part of the page, as base64 or something.
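For reference, the web's existing version of that idea is the data URI, which inlines the image bytes into the page itself at roughly a 33% size overhead. A small sketch (the filename is a placeholder):

```python
import base64

with open("diagram.png", "rb") as f:  # placeholder image file
    payload = base64.standard_b64encode(f.read()).decode("ascii")

# One document, one request: the image travels inside the page itself.
img_tag = f'<img src="data:image/png;base64,{payload}" alt="diagram">'
```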
they are an accessibility nightmare (as we have seen on the web)
The web supports alt-text for images. When people don't provide alt-text, that's not a technical problem, that's a social one.
tantalor on July 4, 2020 | root | parent | prev | next [–]
requires more requests
This is solved by HTTP/2 multiplexing https://en.wikipedia.org/wiki/HTTP/2
breaks the concept of "one url/document == one request"
I don't think anyone cares about this "concept"
rusk on July 4, 2020 | root | parent | prev | next [–]
How about serving gzipped bundles of files? Just request that and get all your text and enriched content in one go.
imtringued on July 6, 2020 | root | parent | prev | next [–]
What prevents the Gemini client from adding a "click to display inline" button next to the image link?
arp242 on July 5, 2020 | root | parent | prev | next [–]
This problem is easily solved by just not loading images by default but on-demand.
zokier on July 4, 2020 | root | parent | prev | next [–]
If images are inlined in the document, then you can maintain that one request correspondence.
lal on July 5, 2020 | root | parent | next [–]
I don't know if you understand what you're saying. Are you suggesting the binary content of a PNG file be inserted into the text-readable markup for a document? That doesn't make any sense; in-line images are links to other resources. Nobody just copy-pastes the base64 content of an image into their website's HTML, because it would end up being 95% of the file size and a nightmare to look at in your editor.
dragonwriter on July 4, 2020 | parent | prev | next [–]
I suspect, were gemini format to catch on, user agents would likely get an option to render images from links inline (either instead of links or as thumbnail previews attached to the links.)
zenojevski on July 4, 2020 | root | parent | next [–]
The ability for clients to render the markup as they please is actually one of the most important features, and a stark distinction from HTML, which only has one correct way to render.
I exploit this in my Unnamed Gopher Client[0], a client for the predecessor of the Gemini protocol, where I render links in a familiar files/folder format:
https://i.imgur.com/dw3e4Ou.png
And there are many more creative things that can be done with this.
[0]: https://github.com/zenoamaro/unnamed-gopher-client
nicbou on July 4, 2020 | root | parent | next [–]
which only has one correct way to render
A website can be rendered at different resolutions, with or without stylesheets, in dark mode, in printer-friendly formats, in a text-like format, with user stylesheets, with some elements hidden, as plain text, etc.
8organicbits on July 4, 2020 | root | parent | next [–]
True, but at a given resolution, with CSS turned on, and in dark mode all users will basically see the same HTML view. The user can't have a dark mode HTML page unless the web site offers one or the user has an extension that makes a best effort to create one. HTML with complex Javascript rendering makes it hard to give the user control.
The concept of a user agent that gives the user much greater ability to choose how they want to view content could mean each user sees each site their own way.
I like reader view, which gives me the ability to choose how I view HTML, but only when reader view can figure out how to extract the content (sometimes disastrously missing paragraphs of text...).
drkstr on July 6, 2020 | root | parent | next [–]
This right here should be the headline selling point for Gemini existing as its own separate protocol, segregated from standard HTTP.
This thread is the first I've heard of it, and up until this comment I was thinking in my head, "sheesh, what kind of value proposition would justify that amount of work? I'm just not seeing it."
It's kind of like what REST was meant to be. More about entities than verbs. Cool. I get it now.
bmn__ on July 4, 2020 | root | parent | prev | next [–]
Opera 12 can do all that. I'm sad that you only ever experienced shitty browsers which take away this sort of control from the user.
8organicbits on July 4, 2020 | root | parent | next [–]
Most of my list is available as extensions on other browsers [1] (which I'd generally prefer to reduce bloat).
However, in my experience, the DOM for some sites is such a mess that trying to apply user preferences is a hack, e.g. reader view accidentally loses text. Does Opera's implementation always work? That would be cool, although I'd still avoid Opera for privacy reasons.
Gemini seems to throw away all that complexity, which makes user customization easier; i.e. the problem is HTML/JS/DOM complexity, not any particular browser or its extensions.
1. https://www.ilovefreesoftware.com/29/featured/free-website-f...
SilasX on July 4, 2020 | root | parent | prev | next [–]
Just as would happen with everything else about the modern web that the author objects to. That’s how we got into this mess in the first place.
Related: why can’t we just point the blind at a protocol optimized for just sharing text documents?
https://news.ycombinator.com/item?id=20225291
empath75 on July 4, 2020 | parent | prev | next [–]
I was writing a client for Gemini and added a toggle to display images inline.
Terretta on July 4, 2020 | parent | prev | next [–]
Images are generally not the point. Formulas, diagrams and illustrations are.
Managing image assets is tedious. The web design community still by and large hasn’t figured out a great standard way to do for images (versions with reversible/cherry-pickable diffs) what git does for code.
Instead, diagrams and formulas could follow the lovely ideas of mermaid, graphviz, dot, and mathjax, inlined into the markdown as text (see the sketch below). Tooling for VSCode handles inline diagrams beautifully for Markdown already.[1]
And then, inline SVG would let you illustrate nearly anything.
WSJ got by fine without photos, as did most journals for most of my lifetime, and Kindle books mostly don’t have them today. I wouldn’t be too quick to say a medium has to be filled with photos.
1. https://github.com/shd101wyy/vscode-markdown-preview-enhance...
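To give a taste of diagrams-as-text, here is a tiny Graphviz dot source; a client or preprocessor that knows the format can render it, and everyone else still sees legible, diffable text:

```
digraph publish {
  rankdir=LR;
  "plain text source" -> renderer -> reader;
}
```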
samaxe on July 4, 2020 | parent | prev | next [–]
Just use ASCII art instead! It’s cooler anyway.
naasking on July 4, 2020 | parent | prev | next [–]
The reason to avoid linked images that are displayed inline in the page is because they would permit tracking.
If you mean something like Base64 encoded inline images, then those might be viable.
yoz-y on July 4, 2020 | root | parent | next [–]
Or only allow content from the first party server?
amelius on July 4, 2020 | parent | prev | next [–]
I think a better way to deal with advertisement images is with AI and/or collaborative filtering.
If we can have spam filters for email, we can have ad filters for images.
synctext on July 4, 2020 | root | parent | next [–]
Even the most basic printed publications can include illustrations.
Streaming video is probably here to stay. /s
efitz on July 4, 2020 | prev | next [–]
A large part of HTML, and a large part of modern browsers and other web technologies, are focused on ensuring that the publisher has control over the user's experience. I really like the fact that Gemini breaks that, and I hope that the project owners mercilessly reject any attempt to introduce features that allow the server to control or influence user experience.
I would really like to see structured text that is self-descriptive (e.g. this is the document title, this is a paragraph, this is a header, bullet list, etc.) but have no ability to influence HOW those things are displayed- eventually maybe we'll have browsers that can support rich theming, etc.
Others have noted that the lack of images is an oversight. Perhaps the language needs a "binary file download" structure, and if the binary in question is a media file, the browser could choose to display it. Maybe signal with MIME types?
capableweb on July 4, 2020 | parent | next [–]
focused on ensuring that the publisher has control over the user's experience
Worth noting that this describes the modern browser/web; it was not like this initially.
The term "user-agent" comes from the idea that the _user_ has control over the experience, no matter what the publisher thinks. The agent (browser) acts for the user, hence user-agent.
User-agent CSS files were rampant back in the day, when a lot of content was unstyled. So you could navigate between websites and they would look the same, since they'd use your user-agent CSS files.
But then everyone decided they had to have a unique look on the web (CSS). Then they decided they needed unique functionality on the web (JavaScript). And here we are :)
clairity on July 4, 2020 | root | parent | next [–]
as an extension of this, user stylesheets are interesting conceptually, but practically, web pages quickly became indistinguishable when the same stylesheet was applied to every page. it hindered memory (mental categorization) and recall.
users then predictably wanted pages to look different, to have style, and that's likely the principal cause of user stylesheets' decline, not corporate coercion. that's not to argue against user stylesheets per se, but to say they'll likely never have wide usage.
rakoo on July 4, 2020 | parent | prev | next [–]
Would it be too simple to say that you're looking for HTML without CSS? Because HTML already has semantic tags describing "this is the title", "this is a list", etc...
amelius on July 4, 2020 | root | parent | next [–]
But then you get tools which generate HTML which circumvents that (e.g. a giant table with 1x1 pixel cells, each cell with its own color).
rakoo on July 4, 2020 | root | parent | next [–]
As a producer you can always circumvent the constraints imposed by a software. You can use a "this-is-a-title" tag to make text appear bigger, you can use a "this-is-a-list" tag to make content linear instead of using paragraphs, etc...
What I'm saying is that there can't be a format that isn't hacked and exploited to allow the publisher to do what they want, because ultimately it's their content so they control it. Maybe limiting the existing tags in HTML is a good idea (AFAIK that's one of the strategies of AMP) but reinventing a structured format will just lead to HTML-but-less.
If you want to give control to the user, then you have to do that from the User Agent: forbid any publisher-provided styling and allow only certain tags. That is what is actually going to do what you want, instead of inventing yet another format.
amelius on July 4, 2020 | root | parent | next [–]
Maybe the best way is to have a user-agent which can OCR the content and convert it to text-only. And perhaps some AI to include some relevant images.
kalleboo on July 5, 2020 | root | parent | prev | next [–]
You'd want HTML without CSS and without all the ancient legacy HTML 3.2 styling attributes (i.e. no way to set table cell sizes/padding, no way to set colors on anything)
vertex-four on July 4, 2020 | parent | prev | next [–]
There is a binary file download feature, the browser can do whatever it wants with binary files, and it's signaled with mime types.
yoz-y on July 4, 2020 | parent | prev | next [–]
A large part of HTML, and a large part of modern browsers and other web technologies, are focused on ensuring that the publisher has control over the user's experience.
It's also the reason why it caught on. On one hand people reject the ability to express individuality on the web, on the other a similar crowd is nostalgic about geocities and praises similar revivals. It's either one or the other.
I would really like to see structured text that is self-descriptive (e.g. this is the document title, this is a paragraph, this is a header, bullet list, etc.) but have no ability to influence HOW those things are displayed- eventually maybe we'll have browsers that can support rich theming, etc.
How about publishing markdown over HTTPS? Then make a client that renders just that?
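Such a client is almost trivially small. A sketch using Python's standard library plus the rich package for terminal rendering (the URL is a placeholder):

```python
import urllib.request

from rich.console import Console
from rich.markdown import Markdown

# Fetch a markdown document over plain HTTPS; no new protocol needed.
with urllib.request.urlopen("https://example.com/post.md") as resp:
    text = resp.read().decode("utf-8")

# The client, not the author, decides how headings, lists and links look.
Console().print(Markdown(text))
```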
contextfree on July 4, 2020 | root | parent | next [–]
Why's it either one or the other? You could have one browser application for no-nonsense textual or informational pages and one for wacky personal geocities/myspace style pages. Feel free to enjoy both, either separately, or if you want more integration you're free to use the underlying OS shell to switch between them, read them side-by-side, link between them, etc.
yoz-y on July 5, 2020 | root | parent | next [–]
You could have one browser application for no-nonsense textual or informational pages and one for wacky personal geocities/myspace style pages.
Hence the idea of a separate markdown only browser. I don’t think http is the problem here. So it would be better to reuse as much existing tech as possible.
Note: Personally I don’t think it would catch on, the convenience of handling everything in one program is just too high.
AdrianB1 on July 4, 2020 | root | parent | prev | next [–]
On one hand people reject the ability to express individuality on the web, on the other a similar crowd is nostalgic about geocities and praises similar revivals.
I like to think of books as an example where content is way more important than presentation. Most web sites these days spend a lot more effort on how content is presented than on the content itself; in the end, a thousand-word article now has a very complex architecture behind it, hundreds or thousands of times larger than the content.
peteyboy2k1 on July 4, 2020 | root | parent | prev | next [–]
You can do that with gemini. It will serve whatever MIME type, and you could have a server that serves markdown files.
For that matter you can serve images or whatever binary files as individual requests (that is, not inline with another response).
I've been playing with it and I rather like it.
lal on July 5, 2020 | root | parent | next [–]
As an early gemini convert, this is one of the reasons I wish solderpunk would split the Gemini protocol and text/gemini mimetype specs. Gemini can serve more than text/gemini (e.g. markdown, as you suggest), so embedding the text/gemini mimetype in the protocol spec seems rather like embedding the HTML spec in the HTTP one.
peteyboy on July 5, 2020 | root | parent | next [–]
I agree in part, and disagree in part. I like that gemini has a "native" markup format, and that it's simple and bare-bones as it is. It's a communications baseline, and other things are negotiable between client and server?
sloum on July 4, 2020 | parent | prev | next [–]
There are a few gemini clients that support theming (including fonts, font sizes, text color for various elements, list bullet style, link color based on scheme, and page background color). This one comes to mind: https://github.com/MasterQ32/kristall
pndy on July 5, 2020 | root | parent | next [–]
I've tried a few and this one seems to be the most user-friendly, though it looks like it has trouble displaying menus with multiple entries (spacebar interpretation).
dennisy on July 4, 2020 | parent | prev | next [–]
If the browser can support rich theming, I guess it can load any images inline if the user wishes.
tylerchilds on July 4, 2020 | prev | next [–]
I'm a Gemini newb, but I've enjoyed using the AV-98 client so far: https://tildegit.org/solderpunk/AV-98
If Gemini sounds like a dumb idea, I'd highly encourage you to move along. If Gemini sounds intriguing, you'll probably have fun.
Lots of opinions in this thread, but doesn't look like many armchairs have tried it. Personally, I've enjoyed the rabbit hole.
vbezhenar on July 4, 2020 | prev | next [–]
I would take an alternative position on this matter. What we need is a simple yet functional subset of the web. The point is to be able to build a browser in a reasonable amount of time in many languages, reusing some commonly used libraries, while still being able to use the latest Chrome to browse those websites as well.
TLS: keep it as it is. Crypto is hard and TLS is proven crypto. Mandate something like 1.2+ and be done with it. Every mature language has a TLS implementation or bindings.
HTTP: use a subset of HTTP/1.1. Parsing is very easy: it's just a bunch of lines (see the sketch after this list). Full HTTP/1.1 is hard and probably unnecessary. Things like connection reuse are not necessary and should be excluded for simplicity.
HTML: use a subset of XHTML. It must be valid XML, so parsing is just one call to the XML library which is available in every language.
CSS: I don't really know, that's a tough one. Something like CSS 2 I guess. There must be a balance between complexity of implementation and richness of presentation.
JavaScript: just nope. That rabbit hole is too deep.
If you take this position to the extreme, you can even reduce HTML + CSS to some kind of markdown-like language, but I don't think that we need to go that far.
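To gauge how small the client side of that proposal could be, here is a rough sketch; it deliberately ignores chunked transfer-encoding, redirects and connection reuse, which is exactly the kind of cut being proposed:

```python
import socket
import ssl

def simple_get(host, path="/"):
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # "mandate 1.2+"

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((f"GET {path} HTTP/1.1\r\n"
                         f"Host: {host}\r\n"
                         "Connection: close\r\n\r\n").encode("ascii"))
            chunks = []
            while chunk := tls.recv(4096):
                chunks.append(chunk)

    # "Just a bunch of lines": status line, header lines, blank line, body.
    head, _, body = b"".join(chunks).partition(b"\r\n\r\n")
    status_line, *header_lines = head.decode("iso-8859-1").split("\r\n")
    headers = dict(line.split(": ", 1) for line in header_lines)
    return status_line, headers, body
```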
JohnStrangeII on July 4, 2020 | parent | next [–]
In my opinion, there should be no CSS and no styling at all. The original idea of logical document markup and letting the client render the document is best. It shouldn't be the content provider's business if and how the content is processed on the client machine. Once you open that door, you just duplicate the kind of aberration we already have.
A good WWW provides linked documents in a format that is easy to display and process (e.g. extract links, text, headlines, images, etc.) and makes it impossible to hide content. If you publish a document, it should be publicly accessible.
acdw on July 4, 2020 | parent | prev | next [–]
Honestly, you've just described about 95% of gemini exactly. TLS is required (even 1.2+), and text/gemini content is actually even simpler to parse than XML, since it's line-based (peek at the first three characters of a line and you know its type). It doesn't use CSS or JS at all; styling is totally up to the client.
So like I said, you've basically just described gemini :)
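A sketch of that three-character dispatch over the documented text/gemini line types (what a client then does with each type is entirely its own business):

```python
def parse_gemtext(text):
    """Yield (line_type, content) pairs for a text/gemini document."""
    preformatted = False
    for line in text.splitlines():
        if line.startswith("```"):            # toggles preformatted mode
            preformatted = not preformatted
        elif preformatted:
            yield ("pre", line)
        elif line.startswith("=>"):           # link: => URL [optional label]
            parts = line[2:].strip().split(maxsplit=1)
            url = parts[0] if parts else ""
            yield ("link", (url, parts[1] if len(parts) > 1 else url))
        elif line.startswith("#"):            # heading: #, ## or ###
            level = len(line) - len(line.lstrip("#"))
            yield (f"h{min(level, 3)}", line.lstrip("#").strip())
        elif line.startswith("* "):           # unordered list item
            yield ("list", line[2:])
        elif line.startswith(">"):            # quote line
            yield ("quote", line[1:].strip())
        else:
            yield ("text", line)
```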
vbezhenar on July 5, 2020 | root | parent | next [–]
The difference is that you can't just open the Gemini protocol in the browser, so you're limiting the audience of your resource to a tiny minority. That's what I want to highlight: something that is compatible with the modern web, yet simpler, so that alternative browsers could be implemented.
acdw on July 6, 2020 | root | parent | next [–]
You actually can open Gemini in a web browser, using a proxy like proxy.vulpes.one or portal.mozz.us. I tried to write a Firefox extension a la OverbiteWX for gemini to automatically redirect links, but my JS-fu isn't strong enough.
dukoid on July 4, 2020 | parent | prev | next [–]
I think "just" going for CSS 2 would be a mistake. In my opinion, it would be preferable take new concepts that are simple to implement and simplify life for everybody (e.g. vh, vw units and flexbox, box-sizing). Goal should be to get a minimal deterministic rendering engine for content that's easily written manually with a minimal subset of modern XHLTML+CSS.
Re:XHMTL: as somebody pointed out here, there are rules to "normalize" unbalanced HTML5, but they have to be implemented and add to the mountain of "implicit" knowledge one has to have and implement...
avereveard on July 4, 2020 | parent | prev | next [–]
a simple yet functional subset of web
A body with a sequence of <img> tags, displayed vertically. You can look at imgur and see that such a format has been used for simple messages, blogs, collections of memes, recipes, news, informational content, engineering content, fitness advice, etc.
No CSS, no nothing; the user agent takes care of formatting them according to the display device.
I can't think of anything more flexible, simpler, and yet capable of doing 90% of what the static web can do today. You can even have a comments section: just add an <img> to the bottom and put the commenting user & timestamp in the alt text.
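A sketch of a whole "page" in that style, comments included (all filenames, names and timestamps here are invented):

```
<body>
  <img src="step-1.png" alt="Proof the dough for an hour">
  <img src="step-2.png" alt="Shape and bake at 230C">
  <!-- the comment section: just more images, metadata in the alt text -->
  <img src="comment-1.png" alt="bob, 2020-07-04: looks great">
</body>
```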
amelius on July 4, 2020 | parent | prev | next [–]
Sounds like AMP.
sangfroid_bio on July 4, 2020 | parent | prev | next [–]
I think WebAssembly+low-level DOM API would be a good compromise for the lack of JavaScript.
als0 on July 4, 2020 | root | parent | next [–]
I think the grandparent's point is specifically against embedded scripts rather than JavaScript itself, since scripting can be used to make HTML less like a document, and there's also the proverbial can of worms of automatically running Turing-complete code from an unknown person.
vbezhenar on July 4, 2020 | root | parent | next [–]
Actually I'm thinking about implementation complexity and security consequences. I'm not sure that JavaScript interpreters are so common, and bundling V8 just kind of defeats the whole purpose... Implementing JavaScript is not an easy task; it also requires implementing plenty of APIs, like DOM access, XHR, and a complex event system, to be of any use. And the ability to evaluate Turing-complete code poses another level of security issues.
Whether it makes HTML less like a document is up to author to decide, IMO. Some JS snippets are pretty useful, some are not. You can use JS to implement an interactive learning system or you can use JS to spy on users.
Wasm probably is easier to implement than JavaScript. But it still carries the other issues mentioned above.
als0 on July 4, 2020 | root | parent | next [–]
I'm a big fan of WASM. And I imagine that a naive but functional implementation is easy to make. However, as the web becomes increasingly heavy over time, there will be pressure on the dominant browsers to accelerate WASM, and they might resort to the same complex tricks that modern JS interpreters must do to stay competitive.
WealthVsSurvive on July 4, 2020 | root | parent | prev | next [–]
I think anything truly Web 2.0 (I'm takin' it back, it's not their word anymore) would allow for both models. Whatever is next needs to be simpler, more generically useful, and able to fill the requirements of this project as well as of the most progressive "web app". At this point, I am open to "starting over."
rishav_sharan on July 4, 2020 | parent | prev | next [–]
Remove HTML, CSS and JS. Just have markdown for the UX.
rusk on July 4, 2020 | parent | prev | next [–]
I think you could keep JavaScript at this stage. It’s mature enough and very useful. There are a few strong implementations out there. Make it an add-on with well specified interfaces and object models.
jerf on July 4, 2020 | parent | prev | next [–]
'use subset of XHTML. It must be valid XML, so parsing is just one call to the XML library which is available on every language.'
HTML5 defined a concrete, final mechanism for parsing tag soup and presenting it as a standardized tree. While the library itself isn't simple, using it is, and being standardized, most non-fringe languages ought to have a library for it by now. It should probably use that, for all the same reasons trying to use XHTML didn't work the first time. XHTML raises the bar on writing correct HTML too far.
mattlondon on July 4, 2020 | prev | next [–]
I feel like the lack of image support is a missed opportunity.
I know it is an ideological choice to only have text, but being able to embed standard image formats (in a totally plain, non-fancy way) would increase the utility of this hugely. They mention blogs and tutorials and recipes here - those would benefit hugely from having simple inline images within the body of the text, just like you expect in a newspaper etc.
I guess I am not the target market then.
Taikonerd on July 4, 2020 | parent | next [–]
I understand why they didn't allow inline images, but I agree with you it limits a lot of use cases.
If I were designing it, I would say: "you can have images, but they always display as a 'block element', with nothing to either side. No worries about wrapping text; no background images under other elements, etc." I think that keeps the spirit of simplicity.
lal on July 5, 2020 | root | parent | next [–]
You can have images, and they don't "always display as" anything: the author of the user agent decides how they are displayed. You could in-line them if you wanted to, but only a few clients do that at the moment. There are no hints to the client about how some content could, should, or must always display.
It's text. The client displays that text and renders links, headings, etc., however it wishes. If it really wants to, it could just not format them at all. There's a gemini client made for plan9's acme text editor that doesn't render links, and instead displays them verbatim, because the plan9 plumber can handle the hyperlinking aspect. All of that is eye candy and fluff.
If a client finds a link to an image, it can in-line it if it wants. If you wrote a client, when it found a link to an image, it would in-line it "with nothing to either side." That's not something that has to be specced.
cjallen88 on July 7, 2020 | root | parent | prev | next [–]
You could have a setting on the client that lets the user specify that that's how they always want to see images; I, on the other hand, might specify that I want to open them in a new window, or see a thumbnail until I hover over it, etc.
Same with how headers are displayed (maybe I want folding or something), whether a ToC is displayed, colours, fonts, etc.
The point is that the user can decide all this stuff, without having to hack it around the author's own styles and scripts.
masklinn on July 4, 2020 | parent | prev | next [–]
just like you expect in a newspaper etc.
A technical paper is what I was thinking of there. Furthermore, since Gemini apparently lacks support for mathematical notation, images would be necessary for such a paper even if it doesn't intrinsically contain non-textual figures (e.g. pictures, charts, or graphs, which are common though not universal).
sloum on July 4, 2020 | parent | prev | next [–]
I'm not sure how recipes, for example, would be an issue without inline images. You click (or otherwise trigger) the image link and look at the image (potentially in a new tab or window), then go back to reading the content. It isn't hard or even a bad experience. It is just different from current expectations, which come from comparing something that isn't the web to the web.
cortesoft on July 4, 2020 | root | parent | next [–]
Being able to look at the image of the preparation step while reading the instructions is nice when making a recipe. Having to go to a separate page is annoying.
rpdillon on July 4, 2020 | root | parent | next [–]
I agree, but I see this as a client concern. I could imagine clients fetching and inlining images if the user directed them to, or maybe media-focused clients having a text pane alongside a media pane where the images would be rendered. The main advantage I see of this approach is that it takes us from "the server decides what the client does" to "the client decides what the client does".
cortesoft on July 4, 2020 | root | parent | next [–]
Ok, but the recipe maker is going to want to suggest where to put the images (so they are in the proper spot in the recipe)... at which point, how is that different from HTML? A client can already decide not to render the image where it is suggested.
flatlanderwoman on July 4, 2020 | parent | prev | next [–]
So did I. But I'm not going to let one drawback distract me from something otherwise very good, nothing is perfect after all.
I can even understand why they did it. To keep the doc format very simple.
I hope that more clients will add unique rendering features that will turn this drawback on its head. It could be in-line rendering or a gallery-like feature.
ketzu on July 4, 2020 | prev | next [–]
I think calling gemini an alternative to the web reflects a very limited view of the web. It takes an idea someone has of what the web should be: a set of text documents. That's a very small subset of the web, and not just of today's web.
Building separate protocols for all the various use-cases of the web would be interesting, but would still need some interconnection. And I'm not convinced that has many advantages besides not being accidentally linked to a website of the "old web" - a problem that could be reduced by a browser extension that strictly blocks any external URLs and Javascript.
kristopolous on July 4, 2020 | parent | next [–]
The separate protocol for everything is essentially what things were like prior to about 1994.
There was a protocol for searching documents, a protocol for looking up someone's email, it was all partitioned out.
The web was seen as just another fish in the pond.
After the web became big, these things still lasted for a while
However, spam and crooks changed it all. Usenet became useless; DNS full-domain lookups (you used to be able to get a list of all the subdomains of a domain through the command line and just browse them out of curiosity) and whois email lookups (you could just query for a name and get an email address over whois) are all gone, because there are too many snakes trying to scam people and flood the network.
The tools used to be much better, but it turns out they were too good and had no defenses. The dream of everybody connecting has been retracted a bit. RMS, TBL, Torvalds: I could just send them an email in the 90s and they'd respond. It was pretty remarkable.
It's not the case any more. Not even minor players in history (such as the author of a 25-year-old book) respond to my questions. People just don't do that anymore.
Spam, harassment, criminals, ill will, this all has to be a big priority if we want to try it again.
The future should be the dreams of our better angels, building better tomorrows...
cortesoft on July 4, 2020 | root | parent | next [–]
RMS, TBL, Torvalds, I could just send them an email in the 90s and they'd respond, it was pretty remarkable.
I don't think this stopped just because of spam, harassment, or other bad behavior. A big part of it is just community size. When the community of internet users was smaller, you could interact with everyone who reached out in a reasonable amount of time. As it got bigger, that is no longer possible because of the sheer number of people.
baryphonic on July 4, 2020 | root | parent | prev | next [–]
^ This is an exceptionally good point. Security is also one of the top problems with the web (alongside the asymmetrical difficulty of hosting content vs consuming it, and the lack of consistency of web content). The problems with the web are mitigated by "good enough" solutions from browser vendors, out-of-band third-party extensions, and even some services on the web itself (e.g. archive.org, though I don't know how sustainable that is, and it's far from perfect).
flatlanderwoman on July 4, 2020 | parent | prev | next [–]
Building separate protocols for all the various use-cases of the web would be interesting, but would still need some interconnection.
While using Castor, www URLs would auto-open in Firefox, and the other way around.
sloum on July 4, 2020 | root | parent | next [–]
Same with using Bombadillo in a terminal (assuming a graphical environment is present and the user has set webmode to GUI).
pacifika on July 4, 2020 | prev | next [–]
This is hard to understand for someone not familiar with Gopher, but I’m interested in how menus are handled.
I've always wished browsers handled site menus in their chrome, so that the document could focus on content, not navigation. It's the browser's job!
For a while Opera supported these related links in the head for some pages, but developers were unable to add their own; it was limited to a small number of standard items such as Index. These were shown in a browser toolbar.
Nonstandard navigation has always been a point of friction for users, as it precludes universal access by making people relearn how each site works.
vertex-four on July 4, 2020 | parent | next [–]
Gemini documents can contain lines that are text or lines that are links - so a menu looks like a list of links, one per line.
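For example, a minimal gemtext menu (the capsule URLs are placeholders); every "=>" line is a link, everything else is plain text:

```
# Example capsule

Welcome aboard.

=> gemini://example.org/log/       Gemlog
=> gemini://example.org/recipes/   Recipes
=> gemini://example.org/about.gmi  About
```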
ehnto on July 4, 2020 | prev | next [–]
The web is partly broken due to the sheer expanse of it. The author of the article alludes to how they enjoy looking at the aggregator, and notes they can always find something interesting. If Gemini ever became as popular as the web, that would stop being true.
So the selling point of Gemini is that by staying rudimentary, it can limit its appeal, and subsequently stay unpopular enough to remain more like "The Old Web". I think that's worth noting, because you could get trapped into thinking this is a technology problem, but it's really a people problem.
dancek on July 4, 2020 | parent | next [–]
You have a very good point.
However, Gemini does not exist in a vacuum. The web will be there. There will be social media platforms, multimedia, awesome webapps and all that. And Gemini is just text.
When you have the choice between easily consumable infinite multimedia and just text, you only pick the latter when you really care about the quality of text content. It's not sexy so all the spammers, content marketers and ego boosters have nothing to gain on Gemini. And so there can be this esoteric little corner of the internet, with down-to-earth text content written by ordinary people.
baryphonic on July 4, 2020 | parent | prev | next [–]
If Gemini ever became as popular as the web, that would stop being true.
Your implication may be true, but Gemini will never become as popular as the web (well unless the web becomes extremely unpopular at the expense of something else besides Gemini).
My wife would see "no images" and that would be the beginning and end of using Gemini for her.
nojs on July 5, 2020 | prev | next [–]
I foresee a familiar cycle playing out:
1. Gemini is great, no spam and commercial crap
2. Someone realises it would be great to have simple inline images, and makes a cool client that supports “gemini+img” syntax that they make up. The syntax gracefully degrades, so you can use it in your docs even if your users aren’t using the new browser!
3. Protocol is technically text-only but in reality everyone uses img-enabled browser
4. Repeat with basic styling, then simple scripts. Eventually authors rely on more and more “optional” features and syntax extensions, and we end up with a similar feature set to what we have today.
5. Advertisers move in as Gemini gains mainstream adoption, and we’re back to www
msla on July 4, 2020 | prev | next [–]
I guess I don't understand this:
How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
The Gemini transport protocol is unsuitable for the transfer of large files, since it misses many features that protocols such as FTP or HTTP use to recover from network instability.
Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
Finally:
Now, what does Gemini currently have to offer? The best way to find out is to head over to the official site: gemini.circumlunar.space in your Gemini browser.
Back in the Gopher days, my "Gemini browser" would be my Web browser. That was one of the reasons Web browsers took off: You could use them to access all of the information on the Internet, including the WWW, Gopher, Usenet, and Email. Only more recently did Mozilla morph from the Netscape Communicator software suite into the slimmed-down Firefox browser without email, spinning off Thunderbird in the process, and only much later did Firefox drop Gopher support from the core binary.
zuppy on July 4, 2020 | parent | next [–]
Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
Maybe he's talking about higher-level features, like the ability to restart a download from a certain point without redownloading the initial part? Haven't used this since dialup days, though.
flatlanderwoman on July 4, 2020 | root | parent | next [–]
Quote from project FAQ:
Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
https://gemini.circumlunar.space/docs/faq.html
lixtra on July 4, 2020 | root | parent | prev | next [–]
Download managers that speed up downloads use this feature: e.g. they use two connections, one downloading from the beginning and the other continuing from the middle of the file. This fools single-connection throttling measures.
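Both tricks ride on the same HTTP mechanism, the Range request header, which is precisely what Gemini leaves out. A minimal resume sketch (URL and filename are placeholders):

```python
import os
import urllib.request

url = "https://example.com/big.iso"  # placeholder
path = "big.iso"
offset = os.path.getsize(path) if os.path.exists(path) else 0

req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
with urllib.request.urlopen(req) as resp, open(path, "ab") as out:
    # 206 Partial Content means the server honored the range; a plain
    # 200 would mean it ignored the header and restarted from byte zero.
    if resp.status == 206:
        while chunk := resp.read(65536):
            out.write(chunk)
```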
ivanstojic on July 4, 2020 | root | parent | prev | next [–]
I suspect that it is this partial download that they are talking about.
That said, I can't tell whether or not I've used it recently. I know I don't use it when I play with personal projects, but I don't know what other sites do because I rarely have a console pulled up in my browser when I'm just using it, rather than developing.
bawolff on July 4, 2020 | root | parent | next [–]
It's often used with bulk file downloads (e.g. curl-ing a multi-GB file while transferring between wifi networks), as well as sometimes in video streaming (buffering).
csande17 on July 4, 2020 | parent | prev | next [–]
How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
The Gemini specification includes its own format for pages, which is a text-based scheme inspired by Markdown and Gopher menus. You can use the Gemini protocol to transmit things other than Gemini pages, sort of like how you can use HTTP to transmit PDFs and Word documents, but you wouldn't build your whole site out of them. (At least that's my impression, I haven't gotten around to actually visiting many Gemini sites yet.)
flatlanderwoman on July 4, 2020 | parent | prev | next [–]
How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
It's just like the web: the transport protocol (HTTP/S) can be used for any file, but there is a separate spec for the document format (HTML etc.). You could transport CSS over Gemini, just don't expect any of the browsers to render it - just like web browsers won't natively execute alternate scripting languages.
Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
I didn't really elaborate on this point while writing, because I had nothing to add. I will quote from the project's FAQ:
> Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
Hopefully that clears up what I meant.
Back in the Gopher days, my "Gemini browser" would be my Web browser.
You might be interested in Castor[1]. It's a browser for the minimalist internet. Rolls support for Gemini, Gopher and Finger all in one.
I can understand why FF removed support. But hopefully smaller applications, like Castor, can fill this gap.
[1]: https://sr.ht/~julienxx/Castor/
bawolff on July 4, 2020 | root | parent | next [–]
>Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
Which honestly is pretty silly, as lots of caching is about reducing latency for small files, not saving bandwidth for large files. I suppose it matters less if the documents are self-contained.
dragonwriter on July 4, 2020 | root | parent | prev | next [–]
You could transport CSS over Gemini, just don't expect any of the browsers to render it.
If Gemini is ever of even domain-specific serious use, I'd expect both the format and protocol to be added to what is supported by existing major web browsers (it can't be both tractable for small implementers and intractable for Apple/Google/Mozilla), which, as it turns out, know how to support the combination of HTML/CSS/JS just fine and won't likely forget just because a different transfer protocol is involved. Presenting a DOM mapping for Gemini format pages and exposing it at least to extensions even if there is no way to include page scripts doesn't seem unlikely, either.
wolfspider on July 4, 2020 | prev | next [–]
One of the things I find refreshing about Gemini is that there is no standard scripting language in there, and the implementations vary wildly on the client side, from Rust to Lua to Python to Go, as well as on the server side. It made me realize that perhaps browser technology for the Web got locked into specific domain-centric technologies which have held it back.
There is so much C/C++ required for JavaScriptCore and friends in a modern browser that there is only one real choice to code in. Mozilla has made great advancements with Rust in Firefox, but it is still a long way off from a total conversion. It's not that it's not possible, but if you want to tap into the work which has already been done in JavaScriptCore or other technologies, you certainly cannot just pick your own backend or language. Gemini's efforts, on the other hand, are being brought up in parallel and in the open, so it is a major strength that the ecosystem is already much broader from the beginning.
Building a modern browser from source nowadays is an intensive process on a single mid-range workstation, just because much of the extra functionality is compulsory and not opt-in. Many of these modules were meant to be pluggable, but somewhere along the way they became coupled dependencies of each other. A good example is Electron, where in theory it should be just the things you need, a subset of a browser where applicable, but instead you need the whole browser engine every single time.
carapace on July 4, 2020 | prev | next [–]
Gemini is awesome and fun. Sure, it's kind of a toy, but that's kind of the point.
Go read the mailing list. Implement a client in your favorite language, or that weird language you've been wanting to try...
Write a blog, or some fan fiction, or a screed or some poetry. (There's a choose-your-own-adventure you can play.)
Have fun with it.
_-___________-_ on July 4, 2020 | prev | next [–]
I've spent the last few hours browsing Gemini, and it feels more like the early web than anything I've experienced in the decades since. I love it.
algerd on July 4, 2020 | prev | next [–]
Here we go again: solving social problems with technical stuff.
dredmorbius on July 4, 2020 | parent | next [–]
Protocols are actually part of the glue between technical and social domains.
triptych on July 4, 2020 | prev | next [–]
Why not just set up a site that only allows markdown with plain pages and get all the benefits you list without a new protocol?
ivanstojic on July 4, 2020 | parent | next [–]
That's actually covered in the deeper linked docs on Gemini.
Setting it up on a separate protocol / markup lets you reason rigorously about what kind of privacy, features, and protection you get as a user, rather than relying on the goodwill or current promises of your content provider.
echelon on July 4, 2020 | root | parent | next [–]
This won't get traction.
It'd be better to declare a new doctype and use a reduced HTML. Just make it simple and make ads and JS bloat impossible.
Enforcing SSL is kind of silly, since browsers are starting to do that anyway, independent of this. It's orthogonal.
vertex-four on July 4, 2020 | root | parent | next [–]
It’s not meant to “get traction” in the Silicon Valley sense of “everyone must use this or there is DOOM”, it’s meant to be useful for the communities that use it.
If you open the Gemini link posted below about why just defining a new doctype wouldn’t work, give it a read.
Barrin92 on July 4, 2020 | root | parent | next [–]
It’s not meant to “get traction” in the Silicon Valley sense of “everyone must use this or there is DOOM”, it’s meant to be useful for the communities that use it.
If you're building a protocol that enforces certain strict standards (in this case, being text-only because the internet is "bloated" according to the author), then the only point of having it is adoption beyond your community.
If all you want to do is communicate with ardent non-bloat advocates, you can already do this on the regular internet, because everyone in that community does it voluntarily already.
There's no point in codifying standards for a community that follows your standards to begin with.
vertex-four on July 4, 2020 | root | parent | next [–]
Attracting people who agree with your community's standards to join your community, while hinting that maybe other people might not be interested, is perfectly reasonable. That's... pretty much how communities work.
Barrin92 on July 4, 2020 | root | parent | next [–]
Yes, but you don't need a distinct technical foundation that is incompatible with your surroundings. If you want to go play at the chess club you can simply go there, and everyone there is interested in chess; you don't need to start a new chess-players-only micro-nation that keeps all the other people out, in the middle of a forest where bloated cars can't reach. That'd be pretty unnecessary. In fact, if you want to attract new people, it's a really bad idea.
vertex-four on July 4, 2020 | root | parent | next [–]
Solderpunk has explained the issues with specifying a safe subset of HTML, the article has been linked elsewhere in this thread. If you don't want to use a Gemini browser to access it, you can use this link: https://portal.mozz.us/gemini/gemini.circumlunar.space/users...
This would seem to be a similar sort of issue to when people say "come chat on IRC", or use a mailing list to communicate about a project, or whatever else - you're not going to use those if all you ever want to use is a web browser. And that's ok in my book. I'll hang out with the people who do want to use those tools.
dsr_ on July 4, 2020 | root | parent | next [–]
Except, of course, that there are IRC clients accessed through browsers, mail through browsers, Usenet through browsers... so you don't actually exclude people that way.
If Gemini catches on, someone will write an add-on for Firefox that reads it. And then it's just part of the Web that is fast and looks a little different.
And that link is a long-winded way of saying "It's good to put an artificial barrier in the way."
vertex-four on July 4, 2020 | root | parent | next [–]
When you want to create a space that is qualitatively different from its surroundings, it is good to put an artificial barrier in the way.
acdw on July 4, 2020 | root | parent | prev | next [–]
There's actually already portal.mozz.us as well as proxy.vulpes.one, both of which are Gemini "portals" a la gopher.floodgap.com. I actually began to try to write an addon to open gemini:// links in Firefox based on Overbite, but I couldn't figure it out (and I'd have to change the gemini protocol to something like gemini+web, due to Firefox limitations).
Just because you can do that, does not make it part of the Web -- it's still a different space. Those portals are basically web-based clients to the protocol, which means they're still bound by the rules of the protocol -- they're not going to have JS in them, for example.
acdw on July 4, 2020 | root | parent | prev | next [–]
Hey, I just want to jump in here as a counterexample to your argument -- I came into Gemini from "beyond the community." I was a standard web user; I found out about gemini from another discussion on HN or Masto or somewhere and jumped in, and now I absolutely love the community I joined.
So it's absolutely become adopted beyond its beginning community.
oscargrouch on July 4, 2020 | root | parent | prev | next [–]
I think what the parent is trying to say is: if this protocol is a subset of what we have today with HTTP and HTML, why not create more of a political movement cheering for this subset to be used, the same way Google does with AMP?
On the technical side of things, it looks like an old battle replayed with older weapons, just to go down the same path HTML did. "Oh, but we will not add more features." Ok, but people won't use it then, because with HTML they can also serve text, markdown, etc., and they can chill out knowing that if they later need to serve images, videos or graphics, they can.
And I say this as a person who is also trying to create some alternatives to the web. But instead of going back to the nineties, I tried to think about what the technology of 10 years ahead would look like. I've probably not made it, because it's really hard to push the envelope when things are almost at the state of the art, as the Web is. But I also don't think the answer for the future lies in the past.
You know what I think would be a really badass movement? To create a simple spec of the Web, even without Javascript. Because with "feature creep", Google, through Chrome, is making it impossible for other players to create competing browser engines.
So if two folks decide that they will create a web engine in this new language they like, it won't be an impossible goal, because there's this simple version of the spec, with far fewer features.
The people behind this might be very good at convincing people, and with real believers working hard, this thing can float for some time. But it will be really hard to get it out of a small niche.
Anyway, I love the thinking behind this. The meditation, the koan, is really on the right track. We need more rebels and fighters on this front. But I just can't see how this can compete as a subset of a massively popular and widely deployed protocol with clients everywhere. How can it really differentiate itself, apart from what the web today can already serve to people?
vertex-four on July 4, 2020 | root | parent | next [–]
The problem is that then you link to https://yourfriendsblog.org, and a couple years down the line your friend decides this safe subset is too restricting for them and decides to replace their blog with Wordpress with all the plugins and Cloudflare captchas and Google Analytics and evercookies and whatever else. The reader is dumped, unceremoniously, back into the big bad web, but could never know this before clicking the link.
When you link to a Gemini URL, you're linking to something you know can't be replaced with something privacy-violating in the future. The worst that can happen is the server shuts down, which is a very different failure mode. And someone is less likely to do that than to switch to a different brand of HTML - or so's the hope.
It's not going to be a major thing that everyone uses. That's ok! Neither are IRC, mailing lists, and whatever else - but people still use them, every day. There are ideas exchanged, friendships made, relationships formed, and they serve a not-insignificant community's needs.
oscargrouch on July 4, 2020 | root | parent | next [–]
"The reader is dumped, unceremoniously, back into the big bad web, but could never know this before clicking the link."
Ok, but you know the big majority of users don't care where the content comes from, or how it's delivered to them. They care about what is being served instead of how.
If you guys manage to have some 'killer apps' on this protocol, where people will try to reach them no matter how they are implemented, then there's a chance.
IRC's killer app is IRC itself, and it managed to establish itself as an alternative in the nineties, when a lot of popular protocols and alternatives like the web were still in their infancy.
Anyway, if you convince people over time to serve their content through this medium, and there is enough interesting content, users will try to learn how to reach it.
But I don't know; I think at least they should be trying to use some P2P DNS system, making it easy for people to serve their own content, or revisit BBSes and serve content in tree-like structures akin to directories...
I feel that there must be something to really differentiate it from everything else. Some things that are unique, and that the web and others are not covering. Because if you think about IRC or email, they have distinct features that the web could never cover even though it is a mammoth protocol, while the same doesn't hold when you think of the Gemini proposal.
Anyway, people trying to do something, to change things for what they perceive as the better, is a good thing, and it should always be celebrated: even when the thing doesn't stick, it might just need adjustments or incremental evolution, or it may serve as an influence on something else, or through experience inspire its creators to create something even better.
vertex-four on July 4, 2020 | root | parent | next [–]
"Ok, but you know the big majority of users don't care where the content comes from, or how it's delivered to them. They care about what is being served instead of how."
Who, exactly, without a profit motive, wants the majority of users? I don't want to talk to most people, and I don't want most people to read what I have to say. I, like most people I think, want to spend most of my time in my community sharing things I find interesting with people in my community.
It's the same as tilde servers, or MUDs. They're not going to take over the world. They're small communities and most people will never even know they exist, and that's fine.
IshKebab on July 4, 2020 | root | parent | prev | next [–]
It would be a really good way to accelerate mobile pages, so I think HN might hate it.
DC-3 on July 4, 2020 | parent | prev | next [–]
Gemini's creator solderpunk wrote about this here [1].
[1] gemini://gemini.circumlunar.space/users/solderpunk/cornedbeef/why-not-just-use-a-subset-of-http-and-html.gmi
joosters on July 4, 2020 | root | parent | next [–]
I don't think he makes his point very well or convincingly. You could, if you wanted, make a plugin for a browser that blocks non-simple tags, blocks cookies, blocks images, blocks scripts, etc, and I suspect he's wrong to say that '...such an undertaking would be an order of magnitude more work than writing a fully featured Gemini client'.
He goes on to say 'Supposing you had such a browser, what would you do with it? The overwhelming majority of websites would not render correctly on it.' - A very good point, but equally applicable to a Gemini browser.
IMO, they have confused the network protocol with the presentation. You don't need to drop HTTP in order to change the way websites look. Likewise, you don't have to implement HTTP features that you don't like (e.g. cookies). This just strikes me as another mistaken belief that rewriting code from scratch will solve all your problems.
DC-3 on July 4, 2020 | root | parent | next [–]
The proposition of Gemini is that it creates a separate, deliberately incompatible, ringfenced part of the internet that is self-sufficient; not operating as a subset of a larger whole but as something sovereign and self-contained. This fosters a community spirit and allows one to remain 'within the fence' in a way that would be very hard to do if inhabiting simply a subset of the existing web.
corobo on July 4, 2020 | root | parent | prev | next [–]
I’m sure that would be an interesting read, is it available on a usable protocol?
vertex-four on July 4, 2020 | root | parent | next [–]
You can read it with a Gemini browser, along with a lot of other content! There's a list of clients at https://gemini.circumlunar.space/clients.html, along with SSH and HTTP bridges.
corobo on July 4, 2020 | root | parent | next [–]
Aha nice one!
I’m still not sold on it (you’re allowed to do websites without js/css!), but the effort and skill involved is commendable
cjallen88 on July 6, 2020 | root | parent | next [–]
True, but before you visit that website, you don't know anything about how the site was implemented, or what JS is going to run, or whether there will be a big autoplay video advert, etc.
When you visit a gemini URL, you know it's going to serve you a limited-capability, text-based document, styled according to your own rules.
emersion on July 4, 2020 | root | parent | prev | next [–]
https://portal.mozz.us/gemini/gemini.circumlunar.space/users...
acdw on July 4, 2020 | root | parent | prev | next [–]
https://portal.mozz.us/gemini/gemini.circumlunar.space/users...
daffy on July 4, 2020 | root | parent | prev | next [–]
I found I could read this in Emacs with a package called Elpher, but there's a smiley or something I couldn't see.
daffy on July 5, 2020 | root | parent | next [–]
Maybe the specification should prohibit smilies.
tgvaughan on July 5, 2020 | root | parent | next [–]
Author of elpher here. If you were running under MacOS you were probably hitting the restriction that GUI Emacs explicitly forbids the display of coloured Unicode characters on that platform.
daffy on July 9, 2020 | root | parent | next [–]
I was running it on Arch Linux. I may just lack the font.
gray_-_wolf on July 4, 2020 | prev | next [–]
One thing I don't agree with in the protocol (on a technical level) is the absence of a content-length or of some end-of-message marker.
The server closes the connection after the final byte; there is no "end of response" signal like gopher's lonely dot.
This just seems like a bad idea, especially if one is on a shitty connection.
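For concreteness, here's a minimal Python sketch (TLS verification stubbed out; a real client would do TOFU pinning instead) of the read-until-close behaviour the spec mandates. Note the receive loop has no way to tell a complete response from one truncated by a dropped connection:
```
import socket
import ssl

def gemini_fetch(url, host, port=1965):
    # Gemini mandates TLS; verification is disabled here only to keep
    # the sketch short (a real client should pin certificates instead)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))  # request = URL + CRLF
            chunks = []
            while True:
                data = tls.recv(4096)
                if not data:   # EOF: either the full response arrived...
                    break      # ...or the connection dropped mid-transfer
                chunks.append(data)
    return b"".join(chunks)    # header line + body, with no delimiter
```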
UIZealot on July 7, 2020 | parent | next [–]
I share your concern.
Since the response body is not encoded, there are no safe end-of-response marker bytes to use.
So content-length seems like the way to go. But knowing content-length ahead of time is difficult for dynamically generated content (CGI is supported after all), so they also need something similar to HTTP chunked encoding, which does complicate things a little.
I understand that keeping the Gemini client simple to implement is one of their design goals, but I don't think the same is true for the Gemini server. So I hope that they would consider adding these to the protocol. They could probably stuff the content-length or the word "chunked" in the <META> string.
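To make that concrete, a sketch of how a client might pick such a hint out of META. The content-length parameter here is invented for illustration; it is not part of the Gemini spec:
```
def parse_meta(meta):
    # Split a META line such as
    #   "text/gemini; charset=utf-8; content-length=1024"
    # into a media type plus a dict of parameters.
    # NOTE: "content-length" is NOT in the Gemini spec; it is the
    # hypothetical extension being suggested above.
    parts = [p.strip() for p in meta.split(";")]
    media_type, params = parts[0], {}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        if value:
            params[key.strip().lower()] = value.strip()
    return media_type, params

mtype, params = parse_meta("text/gemini; charset=utf-8; content-length=1024")
expected = int(params.get("content-length", "-1"))  # -1: fall back to read-until-close
```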
gray_-_wolf on July 8, 2020 | root | parent | next [–]
But since they explicitly state that it is not suitable for large content, the server could just buffer the response until it has all of it. Most clients will likely wait for the whole response anyway.
I feel like that would strike a reasonable balance. Clients are still simple (arguably simpler, since they don't have to guess whether they got everything) and the protocol is still trivial. For dynamic content it would increase time-to-first-byte and RAM usage on the server, but imho neither would be an issue for the type of content gemini aims for.
UIZealot on July 9, 2020 | root | parent | next [–]
I agree that always sending content-length would be ideal, if it didn't come with the extra work and costs on the server that you mentioned.
Chunked encoding is simple enough to implement, avoids all those issues, and would allow a Gemini server to serve more requests faster given the same resources, or to run on hardware with more limited resources, such as embedded devices. So I think it's well worth the slight cost in simplicity.
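For reference, the framing being proposed is the HTTP/1.1 one: a hex length line before each chunk, and a zero-length chunk to mark the end. A decoding sketch, hypothetical for Gemini, which has no such framing today:
```
def read_chunked(stream):
    # Decode HTTP/1.1-style chunked framing from a file-like object:
    # each chunk is a hex length line, CRLF, <length> bytes, CRLF,
    # and a zero-length chunk terminates the body.
    body = bytearray()
    while True:
        size = int(stream.readline().strip(), 16)  # chunk size in hex
        if size == 0:                              # final, empty chunk
            break
        body += stream.read(size)
        stream.readline()                          # consume trailing CRLF
    return bytes(body)
```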
jansc on July 4, 2020 | prev | next [–]
I wrote an ncurses-based gemini and gopher client a while ago: https://github.com/jansc/ncgopher I really like the gemini protocol because of its simplicity and its text-based nature. Spending most days in a terminal, I find text consumption so much easier with a gemini client than with e.g. lynx for webpages (which won't work 99% of the time).
JohnStrangeII on July 4, 2020 | prev | next [–]
I've been thinking about something similar for a while, but kind of disagree with Gemini's scope and implementation. I think that the lack of inline images is too limiting. In my opinion, a good replacement for the WWW should have the following features:
- ToS that strictly prohibits commercial use and advertising. We have the WWW for that, no need to duplicate it.
- Uses HTTPS or something similar. This allows use of efficient servers like Nginx.
- Based on a virtual display with fixed dimensions and orientations: 2-3 aspect ratios and vertical/horizontal orientation, at a fixed virtual pixel resolution. Every page is fixed in size and in the length of unicode text it can display.
- Uses a structured document format with a limited number of logical tags. The client displays the page as it likes (no styling directives in the document markup). Every page written in this format is compiled into an efficient and compressed binary representation for transmission.
- Limited number of links, overlays, and images per page. Input fields with validation should be allowed. Inline images and movies are limited in size.
I'm planning to implement something like this in my forthcoming virtual Lisp machine (z3s5.com), though it's going to be a bit less general and probably not be based on HTTPS.
electronstudio on July 4, 2020 | prev | next [–]
Last time this was posted I started working on a Gemini client:
https://github.com/electronstudio/2face
Currently it's a TUI, but I'll add a GUI eventually. It's fun to have a protocol small enough that you can implement it yourself, but I currently have a weird bug where some Gemini servers work and others don't, because they don't seem to follow the SSL spec.
tluyben2 on July 4, 2020 | prev | next [–]
No mention of JS; not even a Gemini server in Node[0]. Finally a sane place to hang out! Kidding, but only half. The 'everything must be done in JS' attitude is fairly annoying imho.
[0]https://portal.mozz.us/gemini/gemini.circumlunar.space/softw...
flatlanderwoman on July 4, 2020 | parent | next [–]
A Gemini implementation in JS: https://github.com/derhuerst/gemini
tluyben2 on July 4, 2020 | root | parent | next [–]
Great, doesn't disappoint; includes horrible emojis :]
flatlanderwoman on July 5, 2020 | root | parent | next [–]
Emojis are part of Unicode. Not bloat.
tluyben2 on July 5, 2020 | root | parent | next [–]
I wasn't thinking of bloat; it just looks unprofessional, in my opinion, and it is also very much associated with people who work almost exclusively with JS.
IshKebab on July 4, 2020 | parent | prev | next [–]
Sure but "you can't use JS at all" is fairly annoying too.
tluyben2 on July 4, 2020 | root | parent | next [–]
Sure, I would never say that; pick the best tool for the job. Just 'everything' with JS is definitely not that.
perryizgr8 on July 5, 2020 | prev | next [–]
Gemini, being a recent protocol, mandates the use of TLS. There is no unencrypted version of Gemini available.
Mandating the reliance on third parties in the protocol itself does not seem to be a great choice. If I have a simple webpage which I use as a daily diary, why should I go to the trouble of asking a random third party to provide me with a certificate?
vertex-four on July 5, 2020 | parent | next [–]
Gemini does SSH-style TOFU; you don't ask a third party for the certificate. From the spec: "Clients can validate TLS connections however they like (including not at all) but the strongly RECOMMENDED approach is to implement a lightweight "TOFU" certificate-pinning system which treats self-signed certificates as first-class citizens."
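A minimal sketch of what that TOFU pinning can look like on the client side (the pin-store filename and its format are invented for illustration): on first contact the client records the server certificate's fingerprint, and on later visits it only checks that the fingerprint hasn't changed:
```
import hashlib
import socket
import ssl

def tofu_ok(host, port=1965, pin_store="known_hosts.txt"):
    # Trust-on-first-use sketch: pin the SHA-256 fingerprint of the
    # server certificate on first contact, and on later visits accept
    # the connection only if the fingerprint is unchanged.
    # A real client also has to handle legitimate certificate rotation.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # TOFU replaces CA validation
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der).hexdigest()
    try:
        with open(pin_store) as f:
            pins = dict(line.split() for line in f if line.strip())
    except FileNotFoundError:
        pins = {}
    if host not in pins:
        with open(pin_store, "a") as f:  # first use: record the pin
            f.write(f"{host} {fingerprint}\n")
        return True
    return pins[host] == fingerprint     # later uses: must match
```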
jeffmcmahan on July 4, 2020 | prev | next [–]
I see a lot of comments expressing that all we need is markdown plus this or that little bit. I think that's unreasonable. It might suit Joe developer just fine for reading blogs and news, but the world benefits enormously from the ability to build complex software applications at low cost. Imagine the alternative: Welcome to Mario's Pizza - you can order right from your own computer after we mail a disc* to your house (*requires Windows 8 or newer)!
Also, some of the CSS and JS hatred is piffle. Publishers absolutely abuse these languages and it gets pretty bad on news websites especially. But I do not find that most or even many of the sites I visit perform badly on my hardware (2016 iPhone SE and a 2017 MBP). They work fine. Moreover, I appreciate nicely designed and competently implemented experiences on the modern web.
I have no interest in trading the modern web - warts and all - for some spartan plaintext utopia.
thayne on July 4, 2020 | prev | next [–]
I think this might be too simple. In particular, the absence of a way for the client to specify a desired language or supported mime types, query strings as the only way for the client to send data (what about uploads, or non-idempotent requests like registration?), and the absence of compression all seem to go a little too far to me (compression could be done using a parameter to the mime type).
And really, why replace http? The complaint seems to be mostly with html, so why not just make a gemini text format, build some browsers that use it as the default instead of html, and specify semantics for how tls works, like custom status codes to request a client certificate, and recommend TOFU certificate trust. And maybe specify certain headers that shouldn't be used, like cookie.
flatlanderwoman on July 5, 2020 | parent | next [–]
The intention isn't to replace HTTP entirely. By using an incompatible protocol, a hard line is drawn.
squarefoot on July 4, 2020 | prev | next [–]
The web has become much more than a protocol for reading documents: controls that communicate both ways with the server, so that pages can be updated in real time, are a really useful aspect that won't go away anytime soon. I rather wonder whether the browser is the best interface for that, or whether html is the best format for that use. The answer is probably obvious: "one piece of software doing everything is cheaper to produce and maintain than two or three each doing its own business (and we can still blame the user's hardware for the added slowness)."