
Re: Web considered harmful

Message headers

From: news@zzo38computer.org.invalid

Subject: Re: Web considered harmful

Date: Sun, 30 Jan 2022 17:11:57 -0800

Message-ID: <1643573802.bystand@zzo38computer.org>

Message content

David <david@arch.invalid> wrote:

On 2021-12-18, Doc O'Leary wrote:
> On 2021-12-17, Scientific (she/her) wrote:
>> On 2014-03-22, mw wrote:
>>> Over the past decade, the internet has seen a transition from
>>> single-task protocols to the web to the extent that new functionality
>>> is often only exposed as a web-API with a proprietary protocol.
>>>
>>> While the base protocol (HTTP) and information serialization (HTML,
>>> XML, JSON) is standardized, the methods for extracting information
>>> from the received data varies from website to website.

That is the case; these formats are also more complicated than they should be in some ways, yet unfortunately lack some things (e.g. JSON supports only Unicode text and only floating-point numbers, not binary data or 64-bit integers (unless encoded); HTTP, HTML, and XML have more problems).
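
As a small illustration of the integer problem (my own example, not from the original message): JSON numbers are typically parsed into IEEE 754 doubles, which represent integers exactly only up to 2^53-1, so a 64-bit value loses precision unless it is encoded, e.g. as a string.

```typescript
// JSON numbers become IEEE 754 doubles; integers above 2^53-1 are not exact.
const parsed = JSON.parse('{"id": 9007199254740993}'); // 2^53 + 1
console.log(parsed.id);                // 9007199254740992 -- off by one
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991

// The usual workaround: encode the 64-bit value as a string and use BigInt.
const safe = JSON.parse('{"id": "9007199254740993"}');
console.log(BigInt(safe.id));          // 9007199254740993n -- exact
```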

>>> The solution in the 1990s was to make a standardized protocol,
>>> e.g. IMAP or NNTP, which could be used to access email or news in a
>>> standardized manner.

Yes, and we can still do such things as needed. I also have some other ideas that I mention further below.

We can still use protocols such as NNTP, IRC, etc; we can also make up new protocols if they are needed. Multiple protocols for accessing the same messages would also work.

I would want to promote supporting any suitable protocols, file formats, etc, but this isn't common.
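
For instance, a single-task protocol such as NNTP can be spoken with a few lines of ordinary socket code; a minimal sketch, assuming Node.js (the host name and group are placeholders):

```typescript
import * as net from "net";

// Connect to an NNTP server, select one newsgroup, and quit.
// "news.example.org" and the group name are placeholders.
const sock = net.connect(119, "news.example.org");
sock.setEncoding("utf8");

const commands = ["GROUP comp.misc", "QUIT"];
sock.on("data", (chunk: string) => {
  process.stdout.write(chunk);        // server responses are plain text
  const next = commands.shift();
  if (next !== undefined) sock.write(next + "\r\n");
});
sock.on("end", () => console.log("(connection closed)"));
```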

>>> For interfacing with, say, google mail, however, a client application
>>> will have to speak the google mail API which is incompatible with the
>>> mail API of another provider. This transition is turning the internet
>>> into a collection of walled gardens with the obvious drawback that
>>> most websites -- if an API is present at all -- will only have the
>>> official client implementation to said API available. Mostly there
>>> will be a few closed-source implementations provided by the vendor,
>>> most commonly a combination of the following:
>>> leaving users little choice in case they are using a different
>>> platform or want to collect their data in a unified format.

True. Sometimes specialized formats will be needed for some applications (and existing formats may be unsuitable), but they should be documented, and conversion software could be available if appropriate.

>>> Even worse is receiving information from websites where no API exists.

It is bad, yes. In that case it is necessary to do without, but some web pages have other obstructive things that can get in the way even if you are just trying to view them normally, too.

>>> There is no standard for logging into websites which have a mandatory
>>> username/password login prompt and implementations will have to handle
>>> cookies, referer headers (ridiculously many website mandate one for
>>> XSRF protection even though the standard makes them optional) and site
>>> specific form locations to which POST and GET requests will need to be
>>> made in a site specific order.

Actually there is (HTTP basic/digest auth), but it isn't commonly used, and most web browsers do not provide the user much control over it (such as a command to log out, options to persist sessions, etc).
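
For reference, a minimal sketch of what that standard login looks like on the wire (the credentials and URL are placeholders): the client sends an Authorization header whose value is the base64 of "user:password", per RFC 7617.

```typescript
// HTTP Basic authentication: Authorization: Basic base64(user:pass).
// Assumes Node.js (Buffer and global fetch, Node 18+).
const user = "alice";            // placeholder credentials
const password = "opensesame";
const token = Buffer.from(`${user}:${password}`).toString("base64");

const response = await fetch("https://example.org/private", {
  headers: { Authorization: `Basic ${token}` },
});
console.log(response.status);    // 401 if refused, 2xx if accepted
```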

>>> For the most part, there has been no effort in changing any aspect of
>>> this problem, which has existed for more than 10 years. On the
>>> contrary, companies have consecutively started to discontinue support
>>> for open web standards such as RSS/Atom.
>>>
>>> Conclusion: The web as it is now is harmful to the open standard
>>> culture of the internet.

I agree. However, standards being open does not automatically make them good (but it does make them better than proprietary systems).

>>> Related readings (please expand):
>>> https://www.gnu.org/philosophy/javascript-trap.html

One of the things that article says is: "Browser users also need a convenient facility to specify JavaScript code to use instead of the JavaScript in a certain page." I very much agree with this; it is a very important feature.

Furthermore, there may be some things that a user might want their alternative scripts to do that the ones included in the document cannot do (e.g. access other programs and files on the user's computer, bypass CORS, etc).

They also mention Java, Flash, Silverlight, etc. It is true, JavaScript is not the only way; furthermore, Java and JavaScript are only the programming languages, which are not themselves bad, but I think embedding them in documents in this way is bad (but common). There is also WebAssembly, too. So, I will just call these programs in the document "document scripts" instead (as opposed to JavaScript code which is part of the web browser itself, etc).

Even if a program is free software, the user does not necessarily want to execute that program on their computer, so the above is important, as are such things as whitelisting (possibly with cryptographic hashes to identify the scripts). (User-specified whitelisting should also be how "secure contexts" are implemented; the existing implementation is no good. Actually, whitelisting by cryptographic hash solves both spies tampering with data without TLS, and the server operator changing it to undesirable things with TLS, too; secure contexts fail to solve the latter.)
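
A minimal sketch of such hash-based whitelisting (the whitelist contents are hypothetical; the hashing is the same idea as the standard Subresource Integrity mechanism):

```typescript
// Decide whether to run a downloaded script by comparing its SHA-256 digest
// against a user-maintained whitelist (hypothetical list, for illustration).
const userWhitelist = new Set<string>([
  "4e07408562bedb8b60ce05c1decfe3ad16b72230967de01f640b7e4729b49fce",
]);

async function isWhitelisted(scriptText: string): Promise<boolean> {
  const bytes = new TextEncoder().encode(scriptText);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return userWhitelist.has(hex);
}

// Because the user approved this exact hash, neither a network spy (without
// TLS) nor the server operator (with TLS) can substitute a different script.
```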

Some of the criteria for nontrivial scripts are a bit strange, such as the criterion that arrays cannot have more than fifty elements. (An actual memory management system to restrict memory allocation might be better. It could also restrict execution time, etc, as needed.)

Also, in some cases, it may be wanted to change the definition of some functions before the script is executed.
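
A sketch of that idea (my own illustration): a user script that runs first replaces a function before any document script can call it.

```typescript
// Replace window.alert before document scripts run, so pages cannot
// interrupt the user; log the message instead (an illustration only).
const originalAlert = window.alert;
window.alert = (message?: unknown): void => {
  console.log("page called alert():", message);
  // originalAlert could still be invoked here if the user wants it.
};
```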

Free JavaScript code is insufficient, though. There will also need to be ways to make the data interoperable, including outside of the web browser.

Also, even if a script is allowed to run, if it requests (for example) camera access, it should allow the user to specify the command or device filename to use as input. This way, web apps that use it can work even if you do not have a camera. The same is true for other things, such as audio input/output, MIDI, game controls, etc. It is even true for keyboard commands, so that a page does not override the browser's keyboard commands, or so the user can customize them, etc.
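
A sketch of how a user-controlled substitute for camera access could look, using only standard APIs (the choice of a blank canvas as the fake source is my own illustration):

```typescript
// Substitute a user-chosen source for the camera: here, a canvas stream,
// so pages that demand getUserMedia still work without a camera.
const fakeSource = document.createElement("canvas");
fakeSource.width = 640;
fakeSource.height = 480;

navigator.mediaDevices.getUserMedia = async (
  _constraints?: MediaStreamConstraints
): Promise<MediaStream> => {
  return fakeSource.captureStream(30); // 30 fps stream from the canvas
};
```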

Another thing to consider, other than scripts, is CSS. I thought of the idea of "meta CSS", to allow the end user to customize the interpretation of CSS and all of the priorities, etc. ARIA also helps a bit (or at least it would, if it were implemented; I mention this a bit more below). For example, one thing that a user might want to do is to skip animations (at least, I often find CSS animations to be annoying, and a waste of energy). Another thing would be to specify rules that are disabled in the presence of other rules (for example, sometimes you might want CSS).
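
For example, skipping animations needs only a short user style; a minimal sketch that injects one from a user script:

```typescript
// A user-script sketch: disable CSS animations and transitions everywhere.
const style = document.createElement("style");
style.textContent =
  "* { animation: none !important; transition: none !important; }";
document.documentElement.appendChild(style);
```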

>> I have noticed that after all these years too - I fucking hate modern
>> Internet. I fucking hate how social media has taken over us, I fucking
>> hate how hard it is to do anything in modern Web.

I agree; it is difficult to do many things. But, I don't use Facebook, etc.

> I'm right there with you. One of my projects for 2022 is going to be to
> move away from the web as a primary means of sending or receiving
> information. I'm looking at things like Jekyll to get away from having
> a heavy stack for my site(s), but even that might be too closely tied to
> the way the modern web works.
>> I will take the good ol' times of internetworking on Unix command line
>> in 80s over this modern crap every day.

Yes, it is better. Modern designs have problems: one is that command-line access does not work very well; there are many other problems, too, including not letting the user specify what they want, and assuming things other than what the user had specified, etc. Programs also do not work together very well, unlike on UNIX, where pipes, etc, can be used to make programs work together.

> Well, it's not like everything was perfectly executed back then, either.
> For example, no standardization on configuration files has been a constant
> annoyance for decades. But there is a lot to be said for text file formats
> of increasing complexity based on need. I mean, web browsers do *so* much
> these days, yet if you hand them a bit of Markdown they're left clueless?

There are a few reasons why they would not implement Markdown, one of which is that there are a few different variants, so they aren't always compatible.

It is common that they implement the bad stuff; some of the good features, though, are not implemented, and some good features are even being removed, too.

However, one feature I find useful due to this mess is the web developer console. Even without being a web developer, it is useful as an end user, too. In a few cases, the document.evaluate command might be able to extract data.
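
For example, pasted into the developer console, document.evaluate runs an XPath query against the page (the query here is only an illustration):

```typescript
// Extract the text of every <h2> heading via XPath, from the console.
const result = document.evaluate(
  "//h2",                          // illustrative query
  document,
  null,
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
  null
);
for (let i = 0; i < result.snapshotLength; i++) {
  console.log(result.snapshotItem(i)?.textContent);
}
```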

At least there's reader mode, but that's like using uBlock Origin
instead of serving only what's needed.

There are some other problems with the reader mode too.

I would want to implement an "ARIA view" mode, which mostly ignores the CSS (possibly with a few exceptions, such as still paying attention to whether or not it specifies a fixed pitch font) in favour of using most HTML commands (except those specifying colours) and ARIA properties to render the document. (For example, some web applications use custom widgets, but have the suitable ARIA properties; then they can be used to display standard widgets in place of the custom ones. Simply disabling CSS doesn't work; I have tried.)
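
A sketch of one small piece of the mapping such an "ARIA view" could do (my own illustration): find elements by ARIA role and substitute standard widgets.

```typescript
// Replace custom-widget buttons (role="button") with real <button> elements,
// keeping the accessible name from aria-label or the element's text.
for (const el of Array.from(document.querySelectorAll('[role="button"]'))) {
  const button = document.createElement("button");
  button.textContent =
    el.getAttribute("aria-label") ?? el.textContent ?? "";
  el.replaceWith(button);
}
```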

One of my ideas is also to have request/response overriding in the client software, configurable by the user. This would make many other options unnecessary, such as cookies, language, etc; it is a unified method which does all of that and a lot more, including things that we have not thought of yet (if the end user can think of it).
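
A sketch of what user-configured request overriding might look like (the rule format is hypothetical): the client consults the user's rules before anything is sent.

```typescript
// Hypothetical user rules: headers to force on matching requests.
interface OverrideRule {
  urlPattern: RegExp;
  setHeaders: Record<string, string>;
}

const userRules: OverrideRule[] = [
  { urlPattern: /example\.org/, setHeaders: { "Accept-Language": "en" } },
  { urlPattern: /./, setHeaders: { Cookie: "" } }, // drop cookies everywhere
];

function applyOverrides(url: string, headers: Record<string, string>) {
  for (const rule of userRules) {
    if (rule.urlPattern.test(url)) Object.assign(headers, rule.setHeaders);
  }
  return headers;
}
```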

Another thing that could be done is alternative providers. In this way, it is possible to provide things in many ways without being locked in and without being restricted to specific complicated software, etc.

There is also one more thing I considered in the case of HTTP, which would allow you to serve Markdown, MathML, FLIF, FLAC, etc, and would allow better user customization, accessibility, efficiency (if native implementations are available), possibly reduced bandwidth requirements, etc. It is a new response header, which can occur any number of times, each occurrence naming an alternative representation. If the response is not understood, then the client can load one of those alternatives instead, but without changing the current document URL. This way, if the user has enabled this feature (the user should always be allowed to disable or override stuff; the above request/response overriding already does this in this case), then it would automatically just work as far as the user can see, without needing to do anything special, etc.
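
A sketch of the client side of that idea; the header name "X-Alternative" is purely hypothetical, since no name is proposed above.

```typescript
// If the main response type is not understood, try one of the alternatives
// announced by a (hypothetical) repeatable "X-Alternative" header, without
// changing the document URL the user sees.
const supported = new Set(["text/html", "text/markdown"]);

async function fetchWithAlternatives(url: string): Promise<Response> {
  const response = await fetch(url);
  const type = (response.headers.get("Content-Type") ?? "").split(";")[0];
  if (supported.has(type)) return response;
  for (const alt of response.headers.get("X-Alternative")?.split(",") ?? []) {
    const altResponse = await fetch(new URL(alt.trim(), url));
    const altType =
      (altResponse.headers.get("Content-Type") ?? "").split(";")[0];
    if (supported.has(altType)) return altResponse; // document URL unchanged
  }
  return response; // fall back to the original
}
```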

Web browsers (and other programs) need better user control, instead of removing good features and adding bad ones, or assuming that the user wanted something other than what was specified, etc. I think the UNIX philosophy is much better.

Some way to specify common links for data and alternative protocols would also be necessary (possibly <link rel="alternate"> might do). The alternate protocols might not have a MIME type, but can still specify the URL.
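
A sketch of how a client could collect such links (note that an alternate still has a URL even when no MIME type is given):

```typescript
// List the document's alternate representations, MIME-typed or not.
for (const el of Array.from(
  document.querySelectorAll('link[rel="alternate"]')
)) {
  const link = el as HTMLLinkElement;
  console.log(link.href, link.getAttribute("type") ?? "(no MIME type)");
}
```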

It is unfortunate that fixing it involves more things like that, instead of just designing it in a simpler way, but it seems necessary to me.

Fortunately, much of the above is not needed in the case of Gemini, which does not have these problems. However, I think that the Gemini format and protocol are perhaps a bit too simple (while Markdown is too complicated, and HTTP and HTML are too complicated, and PDF is too complicated, etc; FTP is also bad, but for other reasons). But, for most of the things that Gemini is used for, it is probably OK (although, in addition to the current specification, one should also implement an "insecure-gemini" scheme which is the same but without TLS, and where 6x responses are not allowed).

I may have other things to write, but will do so later, instead of now.

--

Don't laugh at the moon when it is day time in France.
