2023-11-04
The Internet is full of guides on easily making your WordPress installation run fast. If you're looking to speed up your WordPress site, you should go read those, not this.
Those guides often boil down to the same old tips. You've heard them all before, right? Today, let's try something different.
This article is for people who aren't afraid to go tinkering in their WordPress codebase to squeeze a little extra (real world!) performance.
It's for people whose neverending quest for perfection is already well beyond the point of diminishing returns.
But mostly, it's for people who want to gawp at me, the freak who actually did this stuff just to make his personal blog a tiny bit nippier without spending an extra penny on hosting.
You shouldn't use Lighthouse as your only measure of your site's performance. But it's still reassuring when you get to see those fireworks!
Don't start with the hard way. Exhaust all the easy solutions first - or at least, make a conscious decision about which easy solutions to enact or reject. Only if you really want to get into the weeds should you actually try doing the things I propose here. They're not for most sites, and they're not for the faint of heart.
Performance is a tradeoff. Every performance improvement costs you something else: time, money, DX, UX, etc. What you choose to trade for performance gains depends on your priority of constituencies, which may differ from mine. (For my personal blog, I choose to prioritise user experience, privacy, accessibility, resilience, and standards compliance above almost everything else.)
This is not a recipe book. This won't tell you what code to change or what commands to run. The right answers for your content will be different than the right answers for mine. Also: you shouldn't change what you don't understand! But I hope these tips will help you think about what questions you need to ask to make your site blazing fast.
Okay, let's get started...
If there are plugins you can't remove because you depend upon their functionality, and those plugins inject content (especially JavaScript) on the front-end... backstab them to undermine that functionality.
For example, if you want Jetpack's backup and downtime monitoring features, but you don't want it injecting a random <link rel='stylesheet' id='...-jetpack-css' href='...' media='all' /> (an extra stylesheet to download and parse) into your pages: find the add_filter hook it uses and remove_filter it in your theme. (If you prefer to keep your backstab code separate, you can put it in a custom plugin, but you might find that you have to name it something late in the alphabet - I've previously used names like zzz-danq-anti-plugin-hacks - to ensure that it loads after the plugins whose functionality you intend to unhook: broadly speaking, WordPress loads plugins in alphabetical order.)
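Exactly how you unhook it depends on how the plugin registers its output: sometimes it's a remove_filter()/remove_action() against the right hook, sometimes a wp_dequeue_style() against the right handle. As a purely illustrative sketch - the handles and function names here are invented, so read the offending plugin's source to find the real ones:

```php
<?php
// In your theme's functions.php, or a late-loading "anti-plugin" plugin.
add_action( 'wp_enqueue_scripts', function () {
	// Stop the plugin's front-end stylesheet and script being printed.
	wp_dequeue_style( 'example-plugin-css' );   // invented handle
	wp_dequeue_script( 'example-plugin-js' );   // invented handle
}, 100 ); // late priority, so this runs after the plugin's own enqueue

// Or, if the plugin hooks wp_head directly (function name invented;
// the priority must match the one used in the original add_action):
remove_action( 'wp_head', 'example_plugin_print_styles', 10 );
```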
Better yet, remove wp_head() from your theme entirely. (I've assumed you're using a classic, not block, theme. If you're using a block theme, you get a whole different set of performance challenges to think about. Don't get me wrong: I love block themes and think they're a great way to put more people in control of their site's design! But if you're at the point where you're comfortable digging this deep into your site's PHP code, you probably don't need that feature anyway, right?) Now, instead of blocking the hooks you don't want polluting your <head>, you're specifically allowing only those you want. You'll want to take care to keep some semi-essential ones, like <link rel="canonical" href="...">: WordPress is really good at serving functionally-duplicate content, so search engines appreciate it if you declare a proper canonical URL.
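To give a flavour of what that looks like - a sketch only, not a drop-in header.php; your theme will want different tags, and wp_get_canonical_url() only returns a value on singular pages, so you'd need your own logic elsewhere:

```php
<!DOCTYPE html>
<html <?php language_attributes(); ?>>
<head>
	<meta charset="<?php bloginfo( 'charset' ); ?>">
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<title><?php echo esc_html( wp_get_document_title() ); ?></title>
	<?php if ( is_singular() && wp_get_canonical_url() ) : ?>
		<link rel="canonical" href="<?php echo esc_url( wp_get_canonical_url() ); ?>">
	<?php endif; ?>
	<?php /* ...plus only the feeds, icons and styles you've chosen. No wp_head(). */ ?>
</head>
```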
Now most of your plugins are broken, but in exchange, your theme has reclaimed complete control over what gets sent to the user. You can select what content you actually want delivered, and deliver no more than that. It's harder work for you, but your site becomes so much lighter.
The single biggest bottleneck to the user viewing a modern WordPress website is the JavaScript that needs to be downloaded, compiled, and executed before the page can be rendered. Most of that's plugins, but even on a nearly-vanilla installation you might find a copy of jQuery (eww!) and some other files.
In step 1 you threw it all away, which is great... but I'm betting you were depending on some of that to make your site work? Let's put it back, carefully and selectively, while minimising the impact on load time.
That means scripts should be loaded (a) low-down, and/or (b) marked defer (or, better yet, async), so they don't block page rendering.
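If you do still use wp_enqueue_script() for anything, WordPress 6.3+ lets you ask for a loading strategy directly instead of filtering the script tag yourself. A sketch (the handle and file path are invented):

```php
<?php
add_action( 'wp_enqueue_scripts', function () {
	wp_enqueue_script(
		'my-widget',                                   // invented handle
		get_theme_file_uri( 'js/widget.js' ),          // invented path
		[],                                            // no dependencies
		null,                                          // no version query string
		[ 'in_footer' => true, 'strategy' => 'defer' ] // load low-down, deferred
	);
} );
```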
If you haven't already, you might like to View Source on this page. Count my <script> tags. You'll probably find just two of them: one external file marked async, and a second block right at the bottom.
The inline <script> in my footer.php wraps a single line of PHP, which looks a little like this: <?php echo implode("\n\n", apply_filters( 'danq_footer_js', [] ) ); ?>. It starts with an empty array, lets anything filter it, and appends each item to the script tag. When I render anything that requires JavaScript, e.g. for 360° photography, I can just add to that array (keyed, to prevent duplicates when viewing an archive page). Thus, the relevant script gets added exclusively to the pages where it's needed, not to the entire site.
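To make that concrete, here's roughly how a template part might feed that filter - a sketch based on the line above, with the array key and script file invented for illustration:

```php
<?php
// In the template part that renders, say, a 360° photo:
add_filter( 'danq_footer_js', function ( array $scripts ): array {
	// Keyed, so the script is only included once even if this template
	// part is rendered several times on an archive page.
	$scripts['panorama-viewer'] = file_get_contents(
		get_theme_file_path( 'js/panorama-viewer.min.js' )
	);
	return $scripts;
} );
```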
The only inline script added to every page loads my service worker, which itself aims to optimise caching as well as providing limited "offline" functionality.
While you're tweaking your JavaScript anyway, you might like to check that any suitable addEventListeners are set to passive mode. Especially if you're doing anything with touch or mousewheel events, you can often increase the perceived performance of these interactions by not letting your custom code block the default browser behaviour.
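For example (an invented snippet, reusing the filter from earlier), a touch listener registered with { passive: true } promises never to call preventDefault(), so the browser can start scrolling immediately instead of waiting for your handler:

```php
<?php
add_filter( 'danq_footer_js', function ( array $scripts ): array {
	$scripts['scroll-progress'] = <<<'JS'
// Passive: this handler will never call preventDefault(),
// so it can't block the browser's default scrolling.
document.addEventListener('touchstart', () => {
  /* e.g. update a reading-progress indicator */
}, { passive: true });
JS;
	return $scripts;
} );
```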
I promise you: most of your blog's front-end JavaScript is either (a) garbage nobody wants, (b) polyfills for platforms nobody uses, or (c) huge libraries you've imported so you can use just one or two functions from them. Trash them.
Wait, what? That's the opposite of what everybody else recommends. To understand why, you have to think about why people recommend a CDN in the first place. Their reasons are usually threefold:
Claim: A CDN delivers content geographically-closer to the user.
Retort: Often true. But in step 4 we're going to make sure that everything critical comes within the first TCP sliding window anyway, so there's little benefit, and there's a cost to that extra DNS lookup and fresh handshake. Edge caching your own content may have value, but for most sites it'll have a much smaller impact than almost everything else on this list.
Claim: A CDN improves the chance resources are precached in the user's browser.
Retort: Possibly true, especially with fonts (although see step 6) but less than you'd think with JS libraries because there are so many different versions/hosts of each. Yours may well be the only site in the user's circuit that uses a particular one!
Claim: A CDN has more resources than you and so can better-withstand spikes of traffic.
Retort: Maybe, but they also introduce an additional single-point-of-failure. CDNs aren't magically immune to downtime nor content-blocking, and if you depend on one you've just doubled the number of potential failure points that can make your site instantly useless. Furthermore: in exchange for those resources you're trading away your users' privacy and security: if a CDN gets hacked, every site that uses it gets hacked too.
Consider edge-caching your own content only if you think you need it, but ditch jsDelivr, cdnjs, Google Hosted Libraries, etc.
Hell: if you can, ditch all JavaScript served from third-parties and slap a Content-Security-Policy: script-src 'self' header on your domain to dramatically reduce the entire attack surface of your site! (Before you choose to block all third-party JavaScript, you might have to whitelist Google Analytics if you're the kind of person who doesn't mind selling their visitor data to the world's biggest harvester of personal information in exchange for some pretty graphs. I'm not that kind of person.)
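If you wanted to experiment with that from inside WordPress rather than in your webserver config (where it more naturally belongs), a sketch might look like this - noting that any inline <script> you keep will then need a nonce or hash, which isn't shown here:

```php
<?php
add_action( 'send_headers', function () {
	// Only allow scripts served from this origin; third-party and
	// (un-nonced) inline scripts will be refused by the browser.
	if ( ! is_admin() ) {
		header( "Content-Security-Policy: script-src 'self'" );
	}
} );
```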
There's a magic number you need to know: 12kb. Because of some complicated but fascinating maths (and depending on how your hosting is configured), it can be significantly faster to initially load a web resource of up to 12kb than it is to load one of, say, 15kb. Also, for the same reason, loading a web resource of much less than 12kb might not be significantly faster than loading one only a little less than 12kb.
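(If you're curious where that number comes from: a typical modern server starts with an initial TCP congestion window of 10 segments (RFC 6928) of roughly 1,460 bytes each, i.e. 10 × 1,460 ≈ 14.6kb that can be sent before the server must pause and wait for an acknowledgement; subtract TLS records and HTTP response headers and you're left with something in the region of 12kb of usable payload in that first round-trip.)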
Exploit this by getting as much of your page as possible - ideally everything needed to render it - into that first ~12kb. You can check how your homepage measures up with curl:
$ curl --compressed -so /dev/null -w "%{size_download}\n" https://danq.me/
10416

Note that this is the compressed, over-the-wire size. Last I checked, my homepage weighed in at about 10.4kb compressed, which includes the entirety of its HTML and CSS, most of its JS, and a couple of its SVG images.
Again, this probably flies in the face of everything you were taught about performance. I'm sure you were told that you should <link> to your stylesheets so that they can be cached across page loads. But it turns out that if you can make your HTML and CSS small enough, the opposite is true and you should inline the stylesheet again: caching styles becomes almost irrelevant if you get all the content in a single round-trip anyway!
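In a classic theme, the simplest version of that is just to read your (already-minified) stylesheet straight into a <style> block in header.php - a sketch, with an invented file name:

```php
<style>
<?php
	// Inline the stylesheet rather than <link>ing to it, so the CSS
	// arrives in the same round-trip as the HTML it styles.
	echo file_get_contents( get_theme_file_path( 'css/main.min.css' ) );
?>
</style>
```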
For extra credit, consider optimising your homepage's CSS so it's even smaller by excluding directives that only apply to non-homepage pages, and vice-versa. Assuming you're using a preprocessor, this shouldn't be too hard: at simplest, you can have a homepage.css and main.css, each derived from a set of source files some of which they share (reset/normalisation, typography, colours, whatever) and the rest which is specific only to that part of the site.
Most web pages should fit entirely onto a floppy disk. This one doesn't, mostly because of all the Simpsons clips, but most should.
Can't manage to get your HTML and CSS down below the magic number? Then at least ensure that your HTML alone weighs in at <12kb compressed and you'll still get some of the benefits. If you've got the headroom, you can selectively include a <style> block containing only the most-crucial CSS, with a particular focus on any that results in layout shifts (e.g. anything that specifies the height: of otherwise dynamically-sized block elements, or that declares an element position: absolute or position: fixed). These kinds of changes are relatively computationally-expensive because they cause content to re-flow, so provide hints as soon as possible so that the browser can accommodate them.
We don't really talk about content being "above the fold" like we used to, because the modern Web has such a diverse array of screen sizes and resolutions that doing so doesn't make much sense.
But if loading your full page is still going to take multiple HTTP requests (scripts, images, fonts, whatever), you should still try to deliver the maximum possible value in the first round-trip. That means:
Fonts are lovely and can be an important part of your brand identity, but they can also add a lot of weight to your web pages.
If you're not ready and able to drop your webfonts and embrace the beauty and flexibility of a system font stack (I get it: I'm not quite there yet!), you can at least make smarter use of the fonts you do load:
Browsers are pretty clever and will work around it if you make a mistake. Didn't include an emoji or some obscure mathematical symbol, and then accidentally used one in a post? Browsers will switch to a system font that can fill in the gap for you.
Don't use font-display: block, which is functionally the default in most browsers, unless you absolutely have to.
font-display: fallback is good if you're too cowardly/think your font is too important for you to try font-display: optional.
font-display: optional is an excellent choice for body text: if the browser thinks it's worthwhile to download the font (it might choose not to if the operating system indicates that it's using a metered or low-bandwidth connection, for example), it'll try to download it, but it won't let doing so slow things down too much and it'll fall-back to whatever backup (system) font you specify.
font-display: swap is also worth considering: this will render any text immediately, even if the right font hasn't downloaded yet, with no blocking time whatsoever, and then swap it for the right font when it appears. It's probably better for headings, because large paragraphs of text can be a little disorienting if they change font while a user is looking at them!
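Those choices live in your @font-face rules. A hedged example (the font families and file names are invented), inlined via the same <style> block as your other critical CSS:

```php
<style>
	@font-face {
		font-family: "BodyFace"; /* invented */
		src: url("<?php echo esc_url( get_theme_file_uri( 'fonts/body.woff2' ) ); ?>") format("woff2");
		font-display: optional;  /* body text: use it if it arrives cheaply, else stick with the system fallback */
	}
	@font-face {
		font-family: "HeadingFace"; /* invented */
		src: url("<?php echo esc_url( get_theme_file_uri( 'fonts/heading.woff2' ) ); ?>") format("woff2");
		font-display: swap;      /* headings: render immediately, swap the webfont in when it lands */
	}
</style>
```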
If writing is for nerds, then typography must be doubly-so. But you've read this far, so I'm confident that you qualify...
It's possible that by this point you're saying "if I had to do this much work, I might as well just use a static site generator". Well good news: that's what you're about to do!
Obviously you should make all your regular caching improvements (appropriate HTTP headers for caching, a service worker that further improves on that logic based on your content's update schedule, etc.) first. Again: everything in this guide presupposes that you've already done the things that normal people do.
By aggressively caching pre-compressed copies of all your pages, you're effectively getting the best of both worlds: a website that, for anonymous visitors, is served directly from .html.gz files on a hard disk or even straight from RAM in memcached (I've experimented with mounting a ramdisk and storing the WP Super Cache directory there, but it didn't make a huge difference, probably because my files are so small that the parse/render time on the browser side dominates the total cascade, and they're already being served from an SSD. I imagine in my case memcached would provide similarly-small benefits.), but which still maintains all the necessary server-side interactivity to allow it to be used as a conventional Web-based CMS (including accepting comments if that's your jam).
WP Super Cache can do the heavy lifting for you for a filesystem-based solution, so long as you put it into "Expert" mode and amend your webserver configuration. I'm using Nginx, so I needed a try_files directive like this:
location / {
try_files /wp-content/cache/supercache/$http_host/$wp_super_cache_path/index-https.html $uri $uri/ /index.php?$args;
}
I'm sure your favourite performance testing tool has already complained at you about your failure to use the best formats possible when serving images to your users. But how can you fix it?
There are some great plugins for improving your images automatically and/or in bulk - I use EWWW Image Optimizer - but to really make the most of them you'll want to reconfigure your webserver to detect clients that Accept: image/webp and attempt to dynamically serve them .webp variants, for example. Or if you're ready to give up on legacy formats and replace all your .pngs with .webps, that's probably fine too!
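On Nginx, the usual shape of that content negotiation is a map on the Accept header plus a try_files fallback - a rough sketch (adapt the paths; it assumes .webp siblings exist alongside the originals, which a plugin like EWWW can generate):

```nginx
# In the http {} context: pick a suffix based on whether the client accepts WebP.
map $http_accept $webp_suffix {
	default   "";
	"~*webp"  ".webp";
}

# In your server {} block: serve foo.png.webp if it exists, else foo.png.
location ~* \.(png|jpe?g)$ {
	add_header Vary Accept;
	try_files $uri$webp_suffix $uri =404;
}
```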
Assuming you've got curl and ImageMagick's identify, you can see this in action:
(Will give you a WebP image)
(Will give you a PNG image, even though the URL is the same)
The single biggest impact you can have upon the performance of your WordPress pages is to make them less complex.
I'm not necessarily saying that everybody should follow in my lead and co-publish their WordPress sites on the Gemini protocol. But you've got to admit: the simplicity of the Gemini protocol and the associated Gemtext format makes both lightning fast.
Screenshot showing this blog post as viewed via Gemini, in the Lagrange browser.
Writing my templates and posts so that they're compatible with CapsulePress helps keep my code necessarily simple. You don't have to do that, but you should be asking yourself:
A service worker isn't magic. In particular, it can't help you with those new visitors hitting your site for the first time (Tools like Lighthouse usually simulate first-time visitors, which can be a little unfair to sites with great performance for established visitors. But everybody is a first-time visitor at least once (and probably more times, as caches expire or are cleared), so first-visit performance is still a metric you should consider.).
A service worker lets you do smart things on behalf of the user's network connection, so that by the time they ask for a resource, you've already fetched it for them.
But a suitable service worker can do a few things that can help with performance. In particular, you might consider:
Chapters 7 and 8 of Going Offline by Jeremy Keith are especially good for explaining how this can be achieved, and it's all much easier than everything else I just described.
Did I miss anything? If you've got a tip about ramping up WordPress performance that isn't one of the "typical seven" - probably because it's too hard to be worthwhile for most people - I'd love to hear it!
My blog post "Bisect your Priority of Constituencies"
Javascript.info article on the difference between async and defer
A blog note in which I share a 360° photograph
MDN documentation on passive event listeners
WP Rocket's blog on reading and understanding waterfall performance charts
My blog post about danq.me being admitted into 512kb club
MDN guide to the loading="lazy" attribute
Modern Font Stacks on Github: a curated list of system font stacks for every purpose