💾 Archived View for dioskouroi.xyz › thread › 29362274 captured on 2021-11-30 at 20:18:30. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
________________________________________________________________________________
i was thinking about this like an hour ago. it started with why i need to configure docker to make a php website, then migrated to realizing i don't really know anything in linux (after years of using it), or how x11 works (like, really), not to mention multithreading in the kernel. the idea of knowing how anything works is a relic of the past. i work on a next.js project that has more files than some of the old legacy UNIX systems. this has become insane, and i don't think it will slow down any time soon.
Don't feel too bad about X11... iirc it's extremely difficult to be compliant with the standard, since they apparently codified the quirks of the different existing implementations into the standard itself. Wayland is purportedly more sane.
Relevant discussion:
https://news.ycombinator.com/item?id=26418992
The X11 protocol itself[1] isn't actually that complicated: it's large and old, but the fundamentals are mostly consistent and the modern client bindings (libxcb) are even generated automatically from a machine-readable protocol spec[2]. Various extensions and standards composed on top of X11 further complicate things, but most of them are graceful client-side enhancements rather than required components of a functional X11 client and server setup.
[1]:
https://www.x.org/releases/X11R7.5/doc/x11proto/proto.pdf
[2]:
https://cgit.freedesktop.org/xcb/proto/plain/src/xcb.xsd
Oh and you haven’t even started looking at the hardware below all that!
But on a serious note, this complexity is not all useless. A significant part of it is essential complexity that can't be reduced. And frankly, these layered abstractions are what allow us to get any real work done. Just understand your own layer of the stack very well, plus one or possibly two layers below it; that is what makes you a great developer.
I would disagree.
Nobody has looked at the whole stack for quite some time. But it would be about time to do so, imho.
The stack is crufted, and the necessary complexity is smeared all over it, held together with a lot of non-essential complexity.
Rethinking the whole stack from the ground up (hardware layer) through the OS up to distributed networking systems is long overdue in my opinion. But actually nobody designs _systems_ anymore! It's only piling up yet another layer on top of all the existing cruft.
This also hinders progress in general, as it's no longer possible to try out radically new ideas. All "new" ideas are necessarily constrained by the existing stuff. You can't "move sideways" any more to create truly innovative systems.
It would be a good idea to start to look at the whole stack again. And to move things around to decruft it once again.
That's a lot of work, of course, given the current state of affairs, but I guess it would improve a lot of things.
To point out a precedent for the suggestion: they're doing the same thing currently with the standard network stack. The layers we've got are crufted, and there have been a lot of hacky solutions to cross-cutting issues. The way forward now is to _reimagine the whole stack_ in light of present-day knowledge about the possible design space and current requirements.
I agree with you in part, but the seemingly insurmountable part of that is, again, backwards compatibility. Like, sure, there may be a better way to do block devices, but is it better enough to justify supporting both the old way and the new way in both hardware and software? Introducing change that is not backwards compatible carries an insane maintainability price.
Just what _is_ inside modern software? Why was Windows 95 Word tiny while Office 365 Word needs gigabytes of RAM?
What is happening there? Putting 100K of integers in a linked list gives you +100K of overhead. Putting 100K of text in Word gives you 100M overhead. Ridiculous.
It is ridiculous, yet a linked list of integers is not equivalent to 100K of text in a word processor.
That 100K of text in a word processor has a context. For example, people expect word processing documents to be paginated. Pagination implies an understanding of how the entire document is formatted. There is also an expectation to be able to position the cursor or select text with a mouse, which means the software has to have more detailed information about how the visible portion of the document is formatted to map screen coordinates to characters. Then there is the memory used to render the visible portion of the document. If you're optimizing for speed instead of memory (which probably explains part of the difference from Word 95 and Word 365), all of that context is going to add up to a significant chunk of memory.
In contrast, that linked list of integers has no context. It is simply a data structure where we have no idea of how the surrounding code will impact memory usage. Well, we could probably make a few guesses based upon the data structure being used. Nodes are most likely added or removed from the head or the tail of the list, and any traversal of the list is likely to be done in a single direction. That's enough to suggest that it isn't being used in a word processor.
Heck, even the data types being compared are problematic. 100k of text implies 100,000 characters. If each character is stored as a byte and each pointer as 4 bytes, that 100k of text is now 500k in a linked list (make that 900k on 64-bit systems). If you're using UTF-16, which is likely for Word, those become 600k and 1M.
Any word processor that takes a naive approach or is optimized for performance instead of memory is going to consume a lot of memory.
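The per-node arithmetic above can be sketched in a few lines. This is just back-of-the-envelope bookkeeping, assuming one character per node and the pointer sizes given in the comment; real allocators add further per-allocation overhead on top:

```python
# Back-of-the-envelope size of 100K characters stored as a singly
# linked list, one character per node. Each node holds the character
# bytes plus one "next" pointer; pointer widths are the assumed
# 4 bytes (32-bit) and 8 bytes (64-bit) from the comment above.
N = 100_000

def list_size(char_bytes: int, ptr_bytes: int) -> int:
    """Total bytes for N nodes of (character + next-pointer)."""
    return N * (char_bytes + ptr_bytes)

print(list_size(1, 4))  # 1-byte chars, 32-bit pointers -> 500000
print(list_size(1, 8))  # 1-byte chars, 64-bit pointers -> 900000
print(list_size(2, 4))  # UTF-16 chars, 32-bit pointers -> 600000
print(list_size(2, 8))  # UTF-16 chars, 64-bit pointers -> 1000000
```

So even before any word-processor context enters the picture, the "100K of text" has grown five to ten times just from the data structure chosen to hold it.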
I enjoyed reading what you posted, but I think the fact that most people use 365 Word for the same reasons as '95 Word suggests there are no use-case-guided optimisations by Microsoft. I find that 95 Word runs faster on W95 than 365 Word runs on Windows 11.
You're not wrong, but you're missing a pretty fundamental point here, which is that all of the features you listed have been around since before computers measured things in gigabytes.
A lot of modern programmers don't practice non-pessimization. Back of the envelope: what is feasibly the best this could perform?
So you end up with things that take up way more resources than necessary, because everyone has at least 8GB of RAM these days.
Back when there were more extreme hardware constraints you couldn't ship if your program was going to take up all your RAM in a few minutes.
I'm guessing it's this, plus the fact that it has a lot more features (not necessarily necessary features) these days, so automatically a lot more engineers, which means a lot more inexperienced engineers, because there are only so many experienced ones.
Office is what, 80M lines of code? Impossible for any human to understand.
I expect it is only added to because 1) nobody understands what would happen if you removed something and 2) if you intentionally or inadvertently remove some obscure feature then it breaks the workflow for people who inexplicably depended on that feature.
i think in addition to what you say, there are probably also the issues of just running a business...
new releases need new features → bloat
managers need promotions → hire more devs → more bloat
devs need promotions/evaluations → refactorings/abstractions → extra bloat
corporate dictums/strategies (add .net this and that) → bloat city
people leave → knowledge is lost → old code gets worked around/stuck → bloat
---
just a crazy supposition, but i'd guess programs/apps that stay under the radar stay small and maintainable, while popular programs and apps suffer from a smothering of resources/people that inevitably bloats them up (just look at iTunes for another example)
I got to learn about Minoca OS from this. Unfortunately it is yet another POSIX clone, and we already have enough of those.
The author likes ed.
From this you may draw any further conclusions necessary.
Does anybody have other texts on UNIX or proto-UNIX history? I found the comparison between `ed` and `vi` enlightening.
I was disappointed that Multics wasn't on the list.
A Plan 9 user yelling at clouds.
He does have some valid points, but his insults don't help.
And no, troff is neither elegant nor capable ;)
> And no, troff is neither elegant nor capable ;)
I accurately guessed that this essay/screed(?) was typeset in troff, given the abundance of minor formatting and typesetting errors.
This particular kind of "return to tradition" OS zealotry has always deeply bewildered me: our own dear author can't complete an essay in troff without scattering errors all over the place, so why in the _world_ are they encouraging professional users to give up all of the functionality that _prevents_ these errors?
He has also written comprehensive desktop guides for Plan 9 [1], Slackware [2] and Unix console [3], this time in HTML. He has clearly put a lot of thought into getting familiar with these systems. In terms of aesthetics/visual humor, the 9front FQA [4] may have been an inspiration here.
Really interesting to see Plan9/9front coming closer to laypeople (of which I surely am one) through this kind of discussion year by year. Maybe it is not _that_ much of a fringe OS any more?
1:
https://pspodcasting.net/dan/blog/2019/plan9_desktop.html
2:
https://pspodcasting.net/dan/blog/2018/slackware_desktop.htm...
3:
https://pspodcasting.net/dan/blog/2018/console_desktop.html
4:
The Plan 9 filesystem protocol (9P) has been adopted by WSL, so there is that.
https://nelsonslog.wordpress.com/2019/02/16/plan-9-rides-aga...