There are a few more things that help:
- Reducing DNS calls and server round trips. Loading fewer resources from fewer domains makes a huge difference. Using server push also helps, although it might get deprecated.
- Responsive images. Load small images on small displays. Don't load 2x images on lower density displays.
- Using the right load event for JavaScript. load fires long after DOMContentLoaded on some pages, especially on slow connections.
- Setting the size of elements before they load, to prevent the content from jumping all over the place.
- Loading fewer fonts, and serving them yourself. Google Fonts is significantly slower.
- Gzipping everything you send. It makes a significant difference on slow connections.
- Static caching. This will shave whole seconds off your page load time. Cache both on the server and in the browser, but mostly on the server.
- Consider perceived performance too. It doesn't matter how fast your page loads if you then force a coercive cookie notice on your readers.
- Performance extends to the content of the page. Don't make users chase answers. Give clear, concise information and use typography to guide your users to it.
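For illustration, here's a minimal sketch of what a few of these points look like in markup (file names and paths are placeholders, not from the article):

```html
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- self-hosted font instead of Google Fonts, preloaded so text renders sooner -->
  <link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
  <!-- defer scripts so parsing isn't blocked; hook DOMContentLoaded rather than load -->
  <script src="/site.js" defer></script>
  <style>
    /* keep media inside the layout so late-loading images don't cause jumps */
    img, video { max-width: 100%; height: auto; }
  </style>
</head>
```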
> Setting the size of elements before they load, to prevent the content from jumping all over the place.
This is especially jarring on a phone. Multiple times a day, I see this, and it pisses me off each and every time. Nothing like reading an article to all of a sudden have some image load somewhere (likely not even visible on screen) suddenly cause what you're reading to disappear off screen. And I'm not on a slow connection, either. I'm typically on uncongested wifi connected to fiber when at home, 5G otherwise.
JavaScript in general is so bad, or rather is being used for so many bad practices, that I've disabled it on my main mobile browser. What little good it's used for doesn't outweigh all of the bad out there. I'm tired of malicious ads hijacking the page, either rendering popups or redirecting me somewhere I don't want to go. If a site doesn't render anything useful without JavaScript (Twitter is a prime example), I typically turn around and go elsewhere.
Layout shift due to responsive images is now easy to prevent, because most up-to-date browsers now correctly set the aspect ratio of a responsive img element if you add the height and width attributes. Before a year or so ago, you had to use hacks like padding percentages to correctly set the aspect ratio.
https://www.smashingmagazine.com/2020/03/setting-height-widt...
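For reference, a minimal example of the attributes in question (dimensions are placeholders): with width and height present, the browser derives the aspect ratio and reserves the space before the file arrives, even when CSS scales the image.

```html
<img src="photo.jpg" alt="A placeholder photo" width="1600" height="900">
<style>
  /* the image still scales responsively; a 16:9 box is reserved up front */
  img { max-width: 100%; height: auto; }
</style>
```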
You can automate this in VSCode, very handy.
https://christianheilmann.com/2021/04/14/quick-vs-code-tip-a...
Depends.
There are extensions (Dark Reader, a dark/light mode extension for Firefox, is an example) which trigger multiple redraws of a page on load. On an e-ink device (my principal driver these days), the extension is a near necessity (ironically, to force _black-on-white_ themes on sites, much more readable on e-ink), and page re-paints are all the more jarring and expensive.
Simple pages still load quickly and without disruption. Many complex sites don't. (NPR's website comes particularly to mind.)
> This is especially jarring on a phone. Multiple times a day, I see this, and it pisses me off each and every time. Nothing like reading an article to all of a sudden have some image load somewhere (likely not even visible on screen) suddenly cause what you're reading to disappear off screen.
This is the worst on picture rich recipe blogs. I hate it.
Nothing saves as much as just writing the HTML, CSS, and JS by hand. No build step or anything but an editor. No need to minify anything, keeping it nice and simple.
Responsive images are kind of funny in the sense that my webpages are nearly twice the size on mobile vs my laptop. The screen may be smaller, but the CSS pixel ratio is 1 on my laptop vs 3 on an iPhone.
I think a lot of what the parent comment suggests yields orders of magnitude better savings than writing things by hand...
Perhaps writing things by hand has a questionable nostalgic allure of "keeping one honest" but beyond that I don't think it's super practical.
While true in theory, I believe that keeping things handmade means that the payload couldn't realistically become big enough for the parent's suggested optimizations to actually matter.
An unoptimized payload isn't that big of a deal if it's still smaller than the best optimized one.
Perhaps not all of the advice applies, but when it comes to images and fonts, I don't think any amount of keeping things handmade is going to outstrip the advice regarding static caching and loading fewer fonts.
The best advice for images is to delete them. There's a widespread belief that meaningless pictures at the top of a post are critical. Here's the first link that comes up on Medium:
https://equinox-launch.medium.com/trade-hard-win-big-dopewar...
The value added of that image at the top is negative. This is, sadly, the rule rather than the exception.
Edit: This is an even better example:
https://medium.com/general_knowledge/top-10-crypto-gaming-pr...
How anyone can look at that image and say "that'd be perfect for the top of my next post" is beyond me.
At least the Medium gif-every-paragraph plague seems to have ended. Not sure why anyone thought it was necessary.
Heavy images & fonts on an otherwise plain HTML page will still be acceptable as they would be loading in parallel - in most cases the text will come first anyway.
I'd rather have a half-loaded page with images/fonts missing than a JS-based page which won't give me anything until the full JS payload is downloaded & executed.
Writing HTML by hand is pretty dubious advice (slightly impractical but doable, so not necessarily something to rule out on those grounds, but the factor that dominates in deciding whether or not to do it comes down to negligible benefit). But writing CSS and JS by hand is both way more practical than people think, and the gains are real.
If you're not a fan of hand rolled HTML, then what do you think of JSX?
If HTML allowed you to trivially import reusable components like you can with JSX it'd be killer. That's the one thing that stops me writing my site by hand.
I'm looking at Astro for my site's rewrite as it seems to find a happy medium here.
>_If HTML allowed you to trivially import reusable components like you can with JSX_
Sounds like WebComponents.
>_I'm looking at Astro..._
Never heard of it and just gave it a quick glance. Hadn't heard of the "islands architecture" it implements either, although it is similar to some familiar concepts.
In fact, it reminds me of the way we built sites years ago (pre-AJAX even), when static HTML was just delivered by the server and there was little-to-no JS involved, save for some effects.
And, that's the funny thing: in spite of all the myriad paths and paradigms, we seem to be coming full circle. It's funny in 2021 to see a "new" project advocating for a "new" approach that is essentially "SSR static HTML with minimal client JS FTW!"
> Sounds like WebComponents.
Interesting. I'll check it out.
Update: checked it out. Shame that `Getting Started` > `Web Components` = 404.
https://www.webcomponents.org/introduction
works for me.
WebComponents have been discussed for a while. They were just a dream not too long ago, and I'm not sure what their usability is right now. So, word of warning that I'm not advocating for or against their use at this point. Just mentioning that the description you gave previously reminded me of WebComponents.
I don't. Write by hand and apply the above! There won't be seconds left to shave in the first place.
If your project is small enough that you can manage and implement it on your own, without teams having to work together, and if you don't care about older browsers, then maybe you have a point.
If you work on bigger projects you will run into many problems really soon.
Like some Safari strangeness when it comes to CSS, or an old browser which you as a developer would never use but which your user base still uses and which accounts for 12% of the revenue.
Reminds me that I used to check websites under elinks just to see how the structure/content would feel without CSS. You can identify fat quickly this way.
This really only works for relatively small sites that don't change much and aren't developed by multiple people. After that you'll at least want some static site generation (which by itself doesn't slow down the experience), bundling of multiple JS sources together (allows you to handwrite JS over multiple files without taking the perf hit of serving multiple files), and likely a CSS preprocessor (e.g. SCSS).
I write my CSS by hand. I just don't like setting up preprocessors unless I have colleagues working on the same code.
No, it does not help. CSS is trivially small, especially after it's compressed. Same for HTML and JS. JS will balloon in size if you start using Webpack, but your own code will stay fairly small.
A single image will outweigh your text files, so the effort isn't really worth it.
> writing the HTML, CSS, and JS by hand.
plus careful image scale, quality and progressive display. Often images appear to be print quality. You need to know about the difference between web and print, however.
Agreed. Most sites don't even need JS, and for those that do, they'd be far better off with a desktop application.
Then you have to get people to install said application
I would hate giving any random piece of software full access to my computer instead of using a sandboxed version in a browser. The perf tradeoff is worth it for stuff like games and photo/video editors, but every random webapp? No way.
You don't need a web browser to sandbox a program.
A large 3x phone in portrait orientation is still only 1-1.5K physical pixels wide.
Wrong. Of course if you use a minifier you save bytes; it's true by definition. A decent build tool can still start from static files but also optimize them as much as possible. If there's little CSS it might even just inline it.
After gzip, minified vs. hand written code size is negligible. You are gzipping your HTML, CSS, and JS, right?
gzip? What year is it? 2014? You better brotli your stuff!
A properly minified JS might be smaller after compression, even if the difference is not huge, but it may also be faster to parse.
Lol ok. Maybe you don't know what you're talking about. They are 2 separate optimizations. Repeat after me: _gzip does not replace minification._
But I'm glad you ship your handrolled JS to the client with long variable names, whitespace and comments like it's 1999.
Maybe it's just for me that minification reduces file sizes by 30%+
Dude relax, no need to be snarky
I hate it when ignorant people talk out of their asses.
> Loading fewer fonts, and serving them yourself.
How about _no_ serving of fonts? I don't think I've ever looked at a webpage and thought it would be improved by a particular font family. What exactly can't be accomplished with standard web fonts? Icons? Haha, don't even go there.
I actually disabled loading of webfonts. The only two websites I've encountered where I actually notice it, and remember that I did that, is Google Translate and a particular government website I use about once a year.
Everything, everywhere else, seems to work just fine with using the browser defaults.
_> The only two websites I've encountered where I actually notice it, and remember that I did that, is Google Translate and a particular government website I use about once a year._
That's probably because of the Google Material Icons font, I believe? It shows or used to show text instead of icons when it couldn't load.
> That's probably because of the Google Material Icons font, I believe? It shows or used to show text instead of icons when it couldn't load.
All icon fonts _should_ do this. It's just that the Google Translate site is designed in such a way that the fallback text doesn't render correctly, overflowing and overlapping over everything else, which makes it noticeable. It seems to very much be a 'pixel perfect' design, rather than responsive.
Everywhere else, the fallback fits naturally, or renders to a standardised UTF-8 symbol, so that it isn't noticeable to me for day-to-day usage.
Absolutely, especially with the recent "invention" of the native font stack as a solution that does not require loading external files while showing a nice modern font.
Imagine if the trade-off was more explicit to the visitors: would you want something that made your web browsing slower, made your text _flash in_ like a cheap PowerPoint slide, and sent your data to Google, in exchange for being able to read in a different font? External web fonts are purely for the ego of the web developer, not for the reader.
Another option is that you keep 90% of your text in the default font (e.g., -apple-system or the equivalent stack elsewhere), and then load only the specific weights for whatever title(s) or so on you need.
Hell, I've outright just crammed a few characters into a base64 encoded font in inline CSS, which - while I'm sure makes many here grimace - loads faster than anything else I've tested.
(I'm a big fan of just shipping everything down the pipe as one file, short of images - I do this on my personal site and it makes me happy that it "just works":
)
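A rough sketch of that approach (the font name and file path are placeholders): body text falls back to the platform's own fonts, and only a single self-hosted weight is loaded for headings.

```html
<style>
  /* system font stack: body text costs zero downloads */
  body {
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI",
                 Roboto, Helvetica, Arial, sans-serif;
  }
  /* one self-hosted weight, used only for headings */
  @font-face {
    font-family: "HeadingFont";
    src: url("/fonts/heading-700.woff2") format("woff2");
    font-weight: 700;
    font-display: swap; /* show fallback text immediately instead of blocking */
  }
  h1, h2 { font-family: "HeadingFont", sans-serif; font-weight: 700; }
</style>
```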
In my case it guarantees a certain level of readability that is consistent across platforms. I spent quite a bit of time fine tuning the font weights to help readability. That costs about 60 kb, so it's hardly a huge sacrifice.
For me it is because fonts are part of the artistic expression of my website.
Also, it helps with formatting code just the way I want it.
"Setting the size of elements before they load"
I host a personal photo gallery online and I've been dreading implementing this for images to get rid of the jump when lazy loading. I'm not even sure there's a good way to do it.
The only way to prevent reflow is to give the browser your image dimensions or aspect ratio:
https://developer.mozilla.org/en-US/docs/Web/CSS/aspect-rati...
Using a css flexbox or grid layout means you don't have to recalculate image div widths and heights when the browser viewport width changes, but you still need to give it enough information to back into the correct aspect ratio.
Also, using a srcset helps load only the necessary bytes, but means you'll need to either build the resized images on the fly (which is CPU-expensive) or prebuild them (which takes up some disk space).
In the interests of a performant UI, PhotoStructure extracts and remembers the width and height of photos and videos during imports, but if you're only loading a couple of images in a gallery, you could fetch this metadata on the fly as well.
Depending on your UI, you can also load the images into a hidden div (display:none), catch the load event for each image, and display the div once you've detected that the entire image set has loaded.
I want to say I did something like this for an infinite scroll, grid layout _some time_ ago where I didn't have the image dimensions to help the browser. Details are fuzzy but there seemed to be a gotcha here and there, some of which was around the load event.
Of course, the user's perceived wait time goes up a bit, as you're not displaying each image as it loads. But, that can be mitigated somewhat by fewer images per pagination and/or preloads.
The overall solution worked well for our use case.
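A rough sketch of that kind of approach (markup and class names are made up, not the original code): hide the container, count load/error events, and reveal once every image has settled. Checking `img.complete` covers the classic gotcha where the load event has already fired for cached images.

```html
<div class="gallery" hidden>
  <img src="a.jpg" alt=""> <img src="b.jpg" alt=""> <img src="c.jpg" alt="">
</div>
<script>
  const gallery = document.querySelector('.gallery');
  const images = [...gallery.querySelectorAll('img')];
  let remaining = images.length;
  const settle = () => { if (--remaining === 0) gallery.hidden = false; };
  if (remaining === 0) gallery.hidden = false;   // nothing to wait for
  images.forEach(img => {
    if (img.complete) {
      settle();                                  // already loaded (e.g. from cache)
    } else {
      img.addEventListener('load', settle);
      img.addEventListener('error', settle);     // don't hang on a broken image
    }
  });
</script>
```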
For my personal image gallery, I have a script to include the image dimensions in the file name when I upload.
It's kind of a hack, but it lets me compute the final image flow based on the file name before any of the images have loaded.
There might be something in here for you:
https://twitter.com/samccone/status/1279117536479506432?s=20
https://twitter.com/samccone/status/1360730258752741376?s=20
This stuff is hard
For a grid-style gallery you can have a container for each image with a fixed size (the maximum size you want an image to be) and the image itself can use max-width and max-height with the same size of the container. If your aspect ratios are too varied, you might have to fiddle with the ideal size to get an optimal result, but it gets the job done.
If it's another kind of gallery it could work if you accept those constraints. If you don't, then it's better to use width/height anyway.
Do your photos scale in your gallery?
This is off the top of my head, but if your images are set to a percent-based width so they scale (mine are) and you know the aspect ratio of the image, then you can set the CSS 'aspect-ratio' property (or there are some other hacks with padding on a wrapper div) to maintain the height.
You'd probably want to integrate this into the build process, but I have not done this.
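A sketch of what that looks like in CSS (the ratio here is arbitrary): the image keeps its percent-based width, and the browser reserves the matching height from the declared ratio before the file loads.

```html
<style>
  .gallery img {
    width: 100%;          /* scales with the container */
    height: auto;
    aspect-ratio: 3 / 2;  /* per-image ratios could instead be set via an inline style */
  }
</style>
```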
You don't even need the recently-added aspect-ratio property. In most up-to-date browsers you can now just add the height and width attributes to the img tag and the aspect ratio will render correctly even for responsive images:
https://www.smashingmagazine.com/2020/03/setting-height-widt...
On a semi-related note:
Is there any chance in the future that browsers are going to support / implement lazy-loading of images themselves (i.e. based off what's visible in the viewport). Currently you have to (as far as I'm aware anyway, but I'm only just getting re-acquainted with web-dev) use JS to do it (things like LazySizes), which adds complexity, and is even more complicated if you want to combine that with responsive images...
Or am I missing something that's possible already?
You can set the attribute loading="lazy" on an <img> tag and the browser will do it for you. Looks like Edge and Chrome and Firefox support it, but not Safari yet (
https://caniuse.com/?search=loading
).
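For what it's worth, the native attribute composes with srcset/sizes and explicit dimensions; a sketch with placeholder file names and breakpoints:

```html
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 600px"
     width="1600" height="900"
     loading="lazy"
     alt="A placeholder photo">
```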
Also:
- get rid of bloated adtech (e.g. 6MB of adtech garbage from 60 domains for 20KB of text)
- remove all popups, slideovers, mailing list signup garbage, etc.
- get rid of autoloading/autoplaying video
- avoid bloated frameworks
- avoid external fonts
Server push has been deprecated by browsers for a while now.
To reduce round-trips, consider using TCP Fast Open and 0-RTT with TLS 1.3. Note that 0-RTT can enable replay attacks, so I'd only consider using it for static content. HTTP/3 and QUIC have improved support for 0-RTT.
Gzip is good for dynamic compression, especially with zlib-ng. For static compression, I'd rather use Brotli.
Side note: why is it that an iPhone with a crazy-looking display only has a viewport some 300-odd pixels wide on the web? That seems odd.
I did check on my Android and it's similar: a 1080p display but 412px wide in the browser.
Also that bottom navbar in Safari, ahh.
It's points, not pixels. A modern smartphone usually has a scale factor of 2 or 3, giving the same logical screen space with a significantly sharper image.
A point is 1/72 of an inch. A CSS pixel is canonically 1/96 of an inch.
https://developer.mozilla.org/en-US/docs/Glossary/CSS_pixel
It's not (as sibling comment says) points, it's logical pixels as declared by the device/user agent's pixel density. Device pixels are usually (but not always) some integer multiple of the logical pixels.
A number of those are in TFA.
find yourself greeted with a 75vh image carousel, social media links and banners, a subscribe card, a "Chat Now!" banner with a GPT bot typing, xyz.com wanting to know your location, or a prompt for cookies
Also don't forget the "Allow xyz.com to send notifications?" pop-up and an overlay ad banner with a 10-second timeout.
As admirable as the intention is, this is a losing battle. The biggest websites in the world have enormously bloated sites, because they can, and they won't be penalized for it.
Your mom-and-pop website's rank is entirely at the discretion of Google's Core Web Vitals measurements (and other SEO KPIs), but Amazon, with its dumpster-fire search pages whose layouts shift as content is dynamically loaded in, will be just fine.
The biggest websites in the world (Amazon, Google, Facebook, Twitter) try to push you to use their app whenever possible anyway. The web seems to be an inconvenience for them; they certainly don't want you blocking any of the myriad scripts they have running under the hood to tally analytics, A/B testing, user session recording and whatever else.
If you have a personal website, as I do, chances are it's relatively lean, because we're the only ones designing it and keeping it updated. Chances are also that very few people visit it, because it's outranked by the thousand other bloated websites that will rank higher for whatever query it was that you typed.
So what's the point?
Sure, but complaining and proving it is possible to have smaller/faster websites is important because it can inspire and lead to better tools, libraries, services, standards, techniques and even new fads (!) in the future.
I understand that lots of software engineers complain about new tooling, and would rather have nothing changing, but the reality is that things will change. So we might as well fight for them to change to the better, by proving it is possible and showing/teaching others.
I also understand that people don't think FAANGs will ever fall, but just as Google disrupted every search engine before it and Amazon disrupted brick-and-mortar stores, there can be something with a differentiator that will disrupt them in the future, and that differentiator might be going back to the web. Of course, there's the possibility of Google killing this new disruptive corner of the web as soon as it's born, but oh well...
The point would seem to be having a superior website for your use case. Discoverability through search is unimportant for many things like websites for private events. As long as people can find a personal site by typing the person's name in either as a search or the url, it's probably good enough.
The point is that MY website will be accessible, and so will the websites I link, and I will be able to enjoy my little corner of the Web from any device and location I choose. A nice bonus is saving a lot of time on maintenance.
Develop for users, not bots. There's JSON for that.
>So what's the point?
Integrity. As hard/cringey as it sounds, it's because of integrity that the internet is still relatively free and software like Firefox exists.
I would want to consider the points in the article, but I cannot focus on reading the text on my 27-inch/4K screen, as the lines are too long. This also makes me doubt whether whoever wrote those lines cares about UX and not only about having a minimal webpage.
Yes indeed! All it would take is a single line of CSS somewhere applying to the copy text:
max-width: 50em
Wikipedia has this quirk as well though, where if you have a large monitor, you can end up with ridiculously long lines that are impossible to read.
For some reason this website is a lot worse than Wikipedia for me. I think it has to do with the font size and spacing. On Wikipedia the font size is smaller, so even though the line is long, the sentences are pretty short and you can read almost an entire sentence without moving your eyes/head. On this site it feels like a journey to read a sentence.
The mobile version of the site fixes this.
Exactly. Apparently these people don't stop to think about why the majority of books are made in portrait format, and why magazines, newspapers and some books split text into columns.
Well, you could just... resize your browser window to your preferred width.
I could also buy a smaller monitor.
They mention the 1MB club at the end of that page, but even stricter is the 250kb club:
250kb over <25 requests should really be an established standard at this point. It's not difficult to achieve and makes a massive difference in energy usage and server requirements for high traffic sites. There are plenty of <25kb frontend frameworks, and some of them are arguably better than their >100kb competitors.
I recently made a site that frontloads an entire Svelte app, every page and route, entirety of site data (120kb gzipped), all of the CSS and a font full of icons, a dozen images, service worker and manifest. Clocks in at exactly 250kb, scores 100 on LH, fully loads in 250ms uncached, and all navigation and searching is instantaneous.
That's not a brag; it just goes to show that anyone can make sites which are bandwidth and energy friendly, but for some reason very few people actually do that anymore.
How do you front load the whole svelte app?
Disabling SSR and prerender within SvelteKit turns the output into an SPA, and the manualChunks option in vite-plugin-singlefile ensures the output is one file that includes every page of the site.
That's pretty nice. Surely if you have a big enough site with enough content, loading all of it in one go is not a pleasant experience on slow connections?
Depends on the site and content, for sure. This particular site is aggregating a subset of public profiles and posts from a social media network. To compare, the site I'm pulling the data from uses nearly 20MB to display a single uncached page of post titles and 2MB per subsequent page, while my site is able to display the entire userbase for a cost of 250kb and a single request for a ~5kb image on subsequent pages. The only difference is the choice of frameworks and component construction.
That's really great. Is it open source by any chance?
The existing code is rather specific to that site, but I've considered making a more generalized version that could work for any small app.
What's LH?
https://developers.google.com/web/tools/lighthouse/
There's also
, which ironically is on
.
I would be utterly embarrassed to produce a 1mb payload on anything but an image- or video-heavy website (where the images and videos are the actual content, not superfluous crap). There's absolutely no reason for websites that are primarily text and hyperlinks to ever approach anything like 1mb. Even 250kb is excessive for many cases.
My latest website[0] is 7.5kb, and I built it after getting fed up with the official MotoGP no-spoilers list which sends 2mb to surface the same information in a much less user friendly way. This is how the bulk of the internet should look.
[0]
> There's absolutely no reason for websites that are primarily text and hyperlinks to ever approach anything like 1mb. Even 250kb is excessive for many cases.
Yep. My personal site (70+ posts) is 26kb. 17kb is the syntax highlighting library.
Another project site of mine is 2mb but that's because it has a lot of screenshots. I should probably shrink those screenshots though...
Well, they all look like garbage txt to me.
I used to run out of high-speed data on my cell phone plan towards the end of each billing period. Many websites/apps don't work at all at such low data speeds. Google Maps is basically unusable at low data speeds. Of course this trade off between performance for users vs ease of development has been with software for decades... see Andy and Bill's Law.
I design for those, because the experience is similar to that of someone in the Berlin U-Bahn. If you're just displaying text on a page, it shouldn't require a 4G connection.
Your personal website is beautiful!
The recipes section inspired me and now I need to add something similar to mine. And props for not tracking.
Thanks! I was mostly talking about All About Berlin. My personal website gets a lot less love these days.
Do you know about the "OK maps" trick in Google Maps to download the map ahead of time?
https://www.wired.com/2014/02/offline-google-maps/
I optimized my website the other week. Some things that I did that were relatively easy included: removing Font Awesome (I replaced the relevant icons with SVGs since I was only using a few of them), removing jQuery (replaced with vanilla JS), trimming lots of CSS (removed a lot of cruft and unused rules from my SASS), and getting rid of as many full-size images as possible (replaced with webp thumbnail links). I got the size of my website down to less than 100kb gzipped for the pages with images and less than 50kb for those without.
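As an illustration of the Font Awesome swap (the shape below is just a placeholder circle, not a real icon), a single inline SVG replaces the whole icon font with zero extra requests:

```html
<!-- inline SVG icon: no font download, colored via currentColor -->
<svg width="16" height="16" viewBox="0 0 16 16" fill="currentColor" aria-hidden="true">
  <circle cx="8" cy="8" r="7"/>
</svg>
```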
How big was your site before optimization? Also, I would like to see it.
It's in my bio
It was roughly 900kb gzipped on most text only pages. Some pages are on archive.org.
I have been thinking about images on the web, and I'm still unsure what sort of file size we should really be aiming at.
1500x1750 and weighing in at 3MB! The webp version of the same size is ~79KB.
79KB is a very low 0.24 BPP (bits per pixel); for 1 BPP it would be 328KB.
AVIF is optimised for BPP below 1, while JPEG XL is optimised for BPP above 1. On a MacBook Pro, the first fold of the browser is 3456 x ~2200 pixels; an image filling that screen at 1 BPP would equate to 950KB.
In a perfect world you would want JPEG XL to further optimise BPP below 1.0 and do progressive loading on all images.
We have also bumped our monitor resolution by at least 4x, to Retina standards.
Despite our broadband getting so much faster, file size reduction is still the most effective way to get a decent browsing experience. At the same time we want higher quality images, beautiful layout (CSS), etc.
Balancing all of these is hard. I keep wondering whether some of this is over-optimising, when we can expect consumer internet speeds to increase over the next 10 years.
> At the same time we want higher quality images
YMMV, but for 99% of the images I see on the web, I don't care about their quality. A large part of them are annoying, and most of the others could be low-resolution and low-quality and I would not even notice, except for the faster page load. Once in a while, a photo is artistic or interesting enough to gain from high quality, but most web sites I visit have far too many heavy images.
Yes, please. My components library for lightweight sites:
Whoa, this looks great!
Love the name of the company! Speaking as someone who lives in Santurce. ;) So glad to see this kind of high tech economic activity on the island.
It's not Webpack, minifiers or build tools that make things bigger: if you feed them small inputs, you get small bundles.
Build pipelines don't merely concatenate a few scripts; they're meant to automate the boring parts we used to have to do by hand (like copying assets with cache-busting filenames and updating matching references, setting the correct image sizes, generating SRI hashes, generating sourcemaps, generating sitemaps, checking the quality of pages, etc). This way you can focus on writing the application and keeping it maintainable without having to waste time remembering to use javascript hacks (like `!1` instead of `false`) to save a few bytes like we used to.
It's when people indiscriminately feed them big frameworks that things bloat, same way you won't be lightweight with a junk food diet.
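For instance, the sort of output such a pipeline produces (the fingerprints and hash below are placeholders):

```html
<!-- cache-busting filename so the asset can be cached forever -->
<link rel="stylesheet" href="/assets/site.3f2a9c.css">
<!-- SRI hash generated at build time -->
<script src="/assets/app.8b1d4e.js" defer
        integrity="sha384-PLACEHOLDER" crossorigin="anonymous"></script>
```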
Great to see Puerto Rican tech content on Hacker News! I grew up in PR and have always dreamed that PR could become a Singapore or similar with regards to its tech ecosystem. Saludos boricua from Boston!
FYI, you probably know already, but there are lots of crypto/blockchain people living in PR now.
And, PR could definitely become a Singapore if the government would embrace that possibility (they're already part way there with the low taxes for expats), but I think they are too populist and just don't have that kind of vision for the future of the island.
This is down to no-one caring about speed for brochure websites. The client just wants their website up as cheaply as possible as quickly as possible. The quickest, cheapest way of building a brochure site is usually Wordpress using a off-the-shelf template. And no-one in that entire chain cares about the loading speed of the site.
Also, I'm not sure who the article is aimed at? Local lawyers are not going to read this and be able to fix their site. The owner of the web design agency that they bought the site from isn't going to read it because they moved from tech to biz. The web dev who is implementing the site already knows this, but has to knock out sites as quickly and cheaply as possible, and so this isn't something that matters.
One area where I'll disagree is minification. If you're gzipping (which you should be), try running your build without the minification step and compare the resulting over-the-wire file sizes. In my tests, the difference has been negligible. Which makes sense--you're just doing a limited text-based compression step before you throw the files at a real compressor, so it tends to be wasted effort.
You're certainly never going to enjoy anything like the big shrink of using gzip, optimizing images, or not importing a big library you don't need.
I think it's a big misconception that rendering code is responsible for most of this bloat. The _great_ majority in my experience comes from tracking scripts and adware that get layered on top. This can be just as true on a React-based site as it is on a classic server-rendered Wordpress site, and is often out of the hands of the actual engineers. I suspect Google Tag Manager has done more harm to the web experience than any other single factor.
So glad to see a fellow boricua posting on here.
Working with designers, I'm provided final images at spec and can only optimize them _losslessly_; lossy compression changes their appearance. The CLI commands in this article use _lossy_ compression. The point stands: alternative image formats should be used whenever possible, but I need a lossless compression pipeline to generate WEBP, AVIF, etc.
As someone who _also_ works with designers, I understand at a deep level how you end up in this situation, but I also don't think it should be this way. Image compression should be primarily an engineering issue, and designers should be giving you lossless or effectively lossless[1] images to compress as you see fit.
The one thing I'd want designers to be responsible for is image _resolution_, since this is intrinsically tied to website layout and affects which source photos can be used where.
[1] For example, a 100% quality jpeg is perfectly fine to work with.
We need a _LOT_ more of this!
Many sites are becoming practically unusable as more and more crap is loaded for not-good-enough reasons. When websites from social media to banks often take tens of seconds to load, it is obvious that the designers and coders could not care less about the UX.
Even the definition of a "small" site has grown three orders of magnitude. There used to be a 5KB website competition [0], which grew to a 10K contest. Now, the link is to a One Megabyte Club [1] list of small websites.
I don't know how to reverse the trend, but managers and coders need to realize that just because some is good, does not mean that more is better.
[0]
[1]
Edit: line breaks. Also note that GMail HTML mode is much better than the 'standard mode'.
> Resizing our image to 1000x1152 shrinks our PNG down to 1.5MB, and finally running cwebp to convert it to a webp we're down to 49KB.
That seems like a lot. What's going on here? Surely the compression level is changing drastically, right?
PNG is lossless, the 49KB WebP will be lossy (WebP can be either). Few images beyond the size of icons warrant losslessness, which comes at an _extreme_ size penalty.
Oh, I didn't realize/forgot that PNG was lossless, thanks!
I'm curious how this file size would compare to a JPG at, say, level 85.
If you want a fast overview of a page's JS and coverage:
We developed a quick way to view a treemap of all your page's JS (including inside bundles) from a Lighthouse report. Do it through the DevTools or from PageSpeed Insights - no need to reach for a CLI tool.
Of course, you need to have source maps accessible to see inside bundles.
Here's an example:
https://googlechrome.github.io/lighthouse/treemap/?debug
Probably the coolest trick I've seen is inlining everything into a single HTML file: JS, CSS, images, icons and fonts. One single large request seems to work much better than loads of small ones.
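A toy version of the trick (everything here is placeholder content): one response carries the markup, the styles, the script, and even a tiny image as a data URI.

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <style>/* all CSS inlined */ body { font-family: sans-serif; }</style>
</head>
<body>
  <h1>One request, one file</h1>
  <!-- small images inlined as data URIs instead of separate requests -->
  <img src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='1' height='1'%3E%3C/svg%3E" alt="">
  <script>/* all JS inlined */ console.log('no extra requests');</script>
</body>
</html>
```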
This approach prevents any of those resources from being cached and reused on other pages. Visitors will end up re-downloading all that content if they click on any links on your page.
Mind you, most web developers assume users spend a lot more time on their site than they actually do. Most visitors to your website will not click on _any_ internal links on your page.
Deliberate inlining tends to make you more sensitive to size issues, so that's also less likely to be an issue. Inlining everything also makes it so that you can at least theoretically do more effective dead code removal on CSS and JavaScript, page by page, though I'm not sure I've ever seen that actually being done, even though the tools mostly exist; but compared with things like GCC and LLVM, web tooling is _atrocious_ at optimisations.
It's still worth it depending on the page size and average page views per visit. However, we're talking about a very small, very fast website; the extra request (and parse) overhead costs more than the 1-6kb of inlined resources.
We're doing this right now with our latest generation web app.
It's 1 big file of 100% hand-rolled html/css/js. The entire file is 1600 lines last I checked. Loads about as fast as you'd expect. I actually found a bug with the gzip threshold in AspNetCore vs chromium (the file is that small), so had to pad the index file with extra content to force gzip. Most of the dynamic UI happens over a websocket using a server-side rendering approach similar to Blazor.
You don't want to do that in most cases, because those separate files can be loaded in parallel, whereas the single HTML file can't.
That's… misleading, at best. Depends on how many resources you have, whether you're using different domains, and whether you're using HTTP/1 or something newer. Generally speaking, what you're saying is only true if you have a _lot_ of decently large resources, loaded from different domains or over HTTP/1.
If everything was coming from the same origin already, then:
• If you're using HTTP/2 or HTTP/3 your statement is almost entirely false: whether you load in parallel or in serial, it'll be downloading it at the same speed, except that by not inlining you've added at least one round trip for the client to know that it _needs_ to request these other URLs; and now you mostly can't control priority, whereas with everything inlined you know it'll load everything in source order, which you can normally tweak to be pretty optimal.
• If you're using HTTP/1, the use of multiple connections (typically up to 6) only kicks in if you've got plenty of stuff to download, but involves at least a couple of round trips to establish the connection, so _even if_ using a single connection reduces the available bandwidth, it's still normally worth it for many, probably most, sensibly-done sites.
which may imply a build tool. I'm not convinced heavier tooling makes things more lightweight. Doesn't feel right to me.
On that note, has anyone tried to "compress the Web"? I know the Internet Archive is archiving it, but is there a browser, or a tool in general, that makes websites smaller while preserving functionality?
A bit like Reader Mode in some browsers?
it's a bit of a stretch to say it preserves functionality :-) I meant it more in the lossless compression sort of way.
Gzip does that, Chrome mobile will proxy and recompress things in data saver mode, and Cloudflare will minify pages and scripts, recompress images and lazy-load things for you if you let it.
You can also just run an ad blocker and most of the weight goes away
Opera Mini used to do this
The History API could improve the user experience, but I'm having an issue with certain browsers on iOS and the swipe gestures that take you to the previous and next page. Firefox, Opera GX, etc. will simply lose the scroll position whenever I swipe, while it doesn't happen on Safari and Google Chrome. This seems related to infinite-scroll pages vs. fixed page height.
Most of the sites I have explored didn't appear to solve that annoying issue, and it happens with YouTube content on mobile too.
A related rant/conference talk:
https://idlewords.com/talks/website_obesity.htm
sadly ages very well.
It seems obvious that website designers, and the tools/orgs that provide websites to others, don't have "small download size" or "fast" as project requirements.
It's easy to forget that there are a large number of users browsing the same websites as I am, but on an underpowered mobile device over a 3G network.
What is even more interesting is that in some parts of the world 3G is about to be phased out.
Everybody on 4G and 5G?
I'm afraid not; the people who are now limited to 3G will be limited to 2G unless they invest in something new and shiny.
I'm really annoyed by websites that just show text and images and yet load dozens of JS scripts.
Sure, for some functionality you may need JS, but in like 90% of the cases JS is overused.
I believe it is against the T&Cs for Google Maps to take a screenshot instead of using an embedded map (or at least it was when I looked a few years back).
Please take care to read the T&Cs.
Poor Google, however will they survive
By suing the pants off anyone who violates the T&C.
https://motherfuckingwebsite.com/
Oddly, iOS Safari's reader mode chops off the entire first paragraph in this article.
- Conform to the HTML 4.01 spec
- Stop using JS
- Compress images
Done.
Gemini to the rescue.