💾 Archived View for dioskouroi.xyz › thread › 29406175 captured on 2021-12-04 at 18:04:22. Gemini links have been rewritten to link to archived content
________________________________________________________________________________
This is more about the perils of React switching from component classes where lifecycle methods were explicit and obvious to the multipurpose leaky abstraction known as useEffect and the hooks API. To anyone that went through the class phase, this issue is probably obvious and well known.
Also, Gatsby. If Gatsby is hiding the React warnings when your SSR diverges from client side, then that's on Gatsby. It's hard enough to debug when you know what's going on. This is like going in blindfolded with both arms tied behind your back.
Luckily this issue only comes up a handful of times.
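For reference, the standard two-pass workaround discussed in the article looks something like this sketch (the component name is illustrative): render nothing on the server and on the first client render, then flip a flag in an effect so client-only UI appears only after hydration.

```jsx
import React from "react";

// Hypothetical wrapper: children render only after hydration, so the
// server HTML and the first client render always match.
function ClientOnly({ children }) {
  const [hasMounted, setHasMounted] = React.useState(false);

  // useEffect never runs during SSR, so this flips only in the browser.
  React.useEffect(() => {
    setHasMounted(true);
  }, []);

  if (!hasMounted) {
    return null;
  }
  return children;
}
```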
Didn’t read the article, but I have to rant a little bit.
Working with `useEffect()` is so fucking miserable. It’s a bit like owning a car where the gaps between the panels are just slightly too far apart; there’s always the possibility that some data is out of sync because React inexplicably decided to run a thing a tick later than you expected. “Effects” don’t actually mix with the old class component methods, insofar as you’ll never be able to get an effect to run before a class component’s methods, so you end up having to _gasp_ do side effects in render against a bunch of “refs” to achieve some semblance of object permanence.
The implication behind the naming of “synchronous effects” as `useLayoutEffect()` is laughable, as though the only time you would ever want to run some code synchronously is for “layout” purposes. My guy, JavaScript’s security model is based on execution; you can’t copy to the clipboard, request full-screen/picture-in-picture, or do countless other privileged operations outside the synchronous execution of a native click handler, and we’re going to place all these use cases under the umbrella term “layout”?
React applications are weird, shuddering messes, where callbacks fire senselessly based on “dependency arrays.” Heaven forfend you turn off the linting that helps you not violate “the rules of hooks,” and even when it’s on there are countless gotchas around default parameters and callback functions that are recreated on every render. At no point will you ever be able to correctly trace the initiator of a “render,” because someone in their infinite wisdom decided to unroll the JavaScript call stack into a queue. Stepping out of a component in a debugger always places you in the same `while` loop, and you’re left wondering where on Earth this execution started. I don’t understand how programmers could be so ignorant of how useful a call stack is; the React team essentially has to recreate call stacks in error messages and DevTools. Seasoned React developers have all seen “React minified error #185,” or “Maximum update depth exceeded.” This is essentially a stack overflow error, except of course they did away with the stack. It’s not normal for stack overflow errors to happen as frequently as they do in React applications, and that’s just when you’re lucky enough to get a warning; a lot of the time the page will just hang and no one except your poor users will know.
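For what it’s worth, the dependency-array gotcha described above comes down to identity comparison: React decides whether to re-run an effect by comparing each dependency against its previous value with `Object.is`. A minimal sketch of that comparison (the helper name is mine, not React’s):

```javascript
// Sketch of how React decides whether to re-run an effect: each entry in
// the new dependency array is compared to the previous one with Object.is.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: always run
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

// Primitives compare stably across renders...
console.log(depsChanged([1, "a"], [1, "a"])); // false — effect skipped

// ...but an object or callback literal is a fresh identity every render,
// so the effect fires every time.
console.log(depsChanged([{ id: 1 }], [{ id: 1 }])); // true — effect re-runs
```

Because an object or arrow-function literal is a fresh identity on every render, listing one in a dependency array makes the effect fire on every render unless the value is memoized.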
I find it truly disturbing how professionals can make their own and everyone else’s lives so difficult, how deeply disconnected React is from the actual practice of programming, which is all about understanding how your code executes. My mind is boggled by five-by-five Zoom calls of well-paid “staff engineers” chatting about React and writing tutorial upon tutorial as if any of this were normal. And these are the loud ones. I pray for the entry-level programmers, the contractors, the voiceless, who aren’t beefing on Twitter, who don’t have a sense of how strange this kind of development is and don’t have the courage to speak up.
My work seems to have made you angry which makes me sad.
You’ll at least be happy to learn that in React 18, useEffect will be sync if the render itself was sync. Of course, if you do an async render those effects won’t work anyway and you need to do it in the event handler itself.
I doubt it’s much of a consolation though. I agree the programming model is weird. It’s the best idea we had for what we were trying to achieve. Sorry for the trouble it has caused you.
This 1000X's.
At this point React is mostly a jobs program. Whatever useful abstractions it initially provided have been eclipsed by bloat.
I struggle to reconcile the unnecessary complexity with the fact that it has translated into so many well paying jobs.
Convenient reactivity models have existed for a long time. Someone, bring them to the react developers.
https://dev.to/ninjin/main-aspects-of-reactivity-58co
I think you’re using useEffect without an actual reason to use useEffect.
I’ve worked on large react projects for 4 years and I’ve only had to use useLayoutEffect once.
Sounds to me like you need to learn how to use React as a library and not as the only solution your app uses. Guess what: you can absolutely write plain JS (or jQuery, etc.) alongside your React components.
I understand all the hook rules and all the gotchas around stale closures. But I don't want to have to deal with those as part of my everyday, regular programming, it's just stupid. React is no longer productive and hooks are way out of control. Did you know hooks are even more stupid with concurrent mode on?
Well I love hooks
> This is more about the perils of React switching from component classes where lifecycle methods were explicit and obvious to the multipurpose leaky abstraction known as useEffect and the hooks API. To anyone that went through the class phase, this issue is probably obvious and well known.
I don't see how that follows, as someone who transitioned from class components to functional. A functional component is simply a function that runs top to bottom. I don't see any lifecycle issues in the example given.
Yes, if you run that top to bottom on the server and render the output to the client, it's going to return null, since there's no window object. It's obvious once you think of it as just a function, and not an OO abstraction. That they fixed it with a hook doesn't mean the only way to fix it is with a hook. It's just a misunderstanding about SSR on the author's part, I feel.
If anything, lifecycle is much more obvious now, given that you know that returning a function from `useEffect` will call it when the component unmounts, which admittedly isn't obvious without reading the docs.
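As a sketch of that unmount behavior (the interval callback `tick` is hypothetical): the function returned from `useEffect` is the cleanup, and with an empty dependency array it runs when the component unmounts.

```jsx
React.useEffect(() => {
  const id = window.setInterval(tick, 1000); // starts after mount

  // Cleanup: with deps [], this runs only when the component unmounts.
  return () => window.clearInterval(id);
}, []);
```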
I think the common, simpler way is to make the component itself (<AuthenticatedNav>) state-aware. So it always returns that component, but the component itself renders differently (`isLoggedIn && <NavBarSubComponent>`) depending on some upstream state passed as a prop or a shared context via a context provider.
If you return two different tags like the author did, React won't know how to properly inject that into the DOM. They are different elements, after all.
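A sketch of that state-aware approach (the context and child component names are hypothetical): the same outer element is always returned, and only its contents vary.

```jsx
function AuthenticatedNav() {
  // AuthContext is assumed to be provided higher in the tree.
  const { isLoggedIn } = React.useContext(AuthContext);

  return (
    <nav>
      <HomeLink />
      {isLoggedIn && <AccountMenu />}
    </nav>
  );
}
```

Note that if `isLoggedIn` still differs between the server render and the first client render, the mismatch problem from the article remains; this pattern only keeps the outer element stable.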
Honestly, it's a very frustrating aspect of modern web development that developers still have to care about this client/server split for initial render.
JavaScript, via Node, threatens to consume server-side implementations of user-facing output for no other reason than that this is an annoying problem, and one that gets even more annoying if your client and server speak two different scripting languages.
Nobody has yet stepped up and delivered a WASM PHP client side renderer. I bet it would even be size competitive with modern JS frameworks!
This is a hilarious idea, bonus points if you somehow wrap it in Electron with its own local SQLite instance and pitch it as a cross-platform desktop app development toolkit.
You laugh, but I'll bet you there's someone pitching this very product to a room full of enthralled VCs right now.
XML/XSLT solved this problem. And it still works in your browser!
I know this is way beside the point, but
> Smart people realized that if we could do that rendering on the server, we could send the user a fully-formed HTML document.
is so funny to read. It’s like history never happened.
It's more like it was unreasonable to do all that work on the client side for a long while.
Then when we could, people suddenly thought it was the best thing since sliced bread.
Trying to couple client and server might seem like a good idea, but you are always left guessing as to how a component got into the state it's in, which defeats a lot of the stateless ideas and puts us in a bad place for debugging. Express is a good idea, React is a good idea; Next.js and the like make developers guess until they get burned enough, and by that time a new concept like hooks vs. class components comes into popularity and you start the process of learning where the new pit traps are located, until the next big shakeup.
I think this is overstating the case a bit? In Next.js, it's pretty straightforward (and useful, really): the page gets its initial data injected at build time and served as HTML. It's static and stays that way until and unless you purposely add in dynamic elements on the page using useEffect or similar.
It's not really any different than, say, using PHP to generate the HTML and then using JavaScript for client-side interactivity, with or without AJAX. That's been the paradigm for decades now. What Next.js gives you is the ability to use that same paradigm in a single language, JavaScript, via React.
Architecturally it's not really all that different, but in terms of dev experience, Next.js + Vercel is SO much better than having to maintain a LEMP stack. Just code the frontend against some CMS or other data store, push to git, and done!
If only there was a way to directly send the data when rendering the page server side...
I'd suggest you re-read this[1] section of the article -- they're talking about "server-side generation" where rendering happens at compile time, not at request time (to avoid delaying responses with on-the-fly render computation). The data you're referring to will not be available at compile time.
[1] https://www.joshwcomeau.com/react/the-perils-of-rehydration/...
Personally, and I totally recognize that I'm likely arguing a moot point, I don't conflate static-site generation and server-side rendering and I don't think they can or should be used interchangeably as the author indicates. If my site can be hosted by Nginx alone, it's not server-side rendering to me.
On the other hand, if I am paying for the overhead of running a node server to host my site, but it doesn't support this, why not? I'm very suspicious of the argument that it's to save time due to on-the-fly render costs.
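For context, that distinction maps onto Next.js's two data-fetching functions; which one a page exports determines whether rendering happens at build time or per request (the fetch helpers here are hypothetical):

```jsx
// Static generation: runs once at build time; the output is plain HTML
// that Nginx alone could serve.
export async function getStaticProps() {
  return { props: { posts: await fetchPosts() } };
}

// Server-side rendering (would live in a different page file, since the
// two can't coexist): runs on the Node server for every request, so
// request-time data like cookies is available.
// export async function getServerSideProps({ req }) {
//   return { props: { user: await fetchUser(req.headers.cookie) } };
// }
```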
It seems that Remix and perhaps React's new Server Components will solve this problem in a much more coherent way, without requiring weird hacks like this.
In the Next.js/Vercel world, this hybrid architecture is primarily (I believe) an issue of cost savings (for them and you). For $20/mo Vercel will host your hybrid Next.js site and do all of the backend configuration and maintenance on your behalf, configuring not just Node but Lambda and Cloudflare Workers, along with nginx, caching, CDN mirroring, and invalidations, all seamlessly.
So as a dev you can just code against the Next.js docs and not have to play devops. That's a LOT of time savings for $20/mo, and you can afford way more page hits this way than a barebones $20/mo Node VM would get you. Yes, the backend is super complex and fanned out to different providers, but Vercel manages all of that for you.
I imagine Gatsby and Netlify are similar, but not sure. Next.js was designed around Vercel specifically (sadly) and there's a lot of vendor-specific lock-in features.
I completely get that this ease of configuration is a huge selling point. Yet I don't think that precludes the possibility of pre-loading information and injecting it at runtime. My main qualm here is probably one where the meaning behind terms like "server-side rendering" are drifting to favor artificial limitations imposed by framework providers.
> Next.js was designed around Vercel specifically (sadly) and there's a lot of vendor-specific lock-in features.
I ran into this while playing around with their middleware. It surprised me that native Node APIs weren't supported which significantly diminishes its utility.
> Yet I don't think that precludes the possibility of pre-loading information and injecting it at runtime.
Can you explain?
Yes, but how expensive is it to render at request time anyway? Is it really a better experience to send "mostly accurate HTML" and "hydrate" it with JavaScript on the client?
I work on this kind of technology and I totally understand how it works, but I'm not convinced this is the right solution. Actual rendering at the edge, without all this JS soup, is what big websites should do.
Remix [1] solves this problem very well.
Data that is needed by a React component is declared in the same file as the component as a loader function which the server then invokes and renders into the component on the server so the client receives an HTML response with the view ready to go (server side rendering).
I distinguish between server-side rendering, which is a function invocation that takes place for each request, and pre-built HTML, which isn't "rendering" anything and instead returns a static, pre-generated file.
With Remix, the client gets an HTML response the server generated with all the data it needs already retrieved (it was retrieved on the server).
The client can then run any additional client side JS but the user doesn't need to wait on the client side JS to see the content they were trying to browse.
Remix can easily be deployed to edge systems like cloudflare workers.
The services that the loader functions hit can be run somewhere else of course as well.
[1] remix.run
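A minimal sketch of that shape, based on Remix's documented loader API (the route and the `getPost` data helper are hypothetical):

```jsx
import { useLoaderData } from "remix";

// Runs on the server for each request; its result is embedded in the
// HTML response, so the client doesn't wait on a fetch to see content.
export async function loader({ params }) {
  return getPost(params.slug); // getPost is a hypothetical data helper
}

export default function Post() {
  const post = useLoaderData();
  return <article>{post.body}</article>;
}
```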
It's a cost thing... a lot easier to CDN static HTML files and only make edge calls as necessary.
In our Next.js setup, for example, all pages are served static by default and cached around the world, but every revalidation period (60 seconds or so?), one edge worker will check the data source against the upstream CMS, and rebuild that one page if needed.
That means our max edge worker and API call is 1 per minute, regardless of whether there are 10 visitors or 10 million. The others will just get the cached CDN files.
That makes it a lot more affordable.
If you had infinite budget and could pay for edge functions to re-render the page every visit, with or without caching, then more power to you. But not everyone can afford that.
Edit: Also, a secondary benefit is accessibility. Since most of the content is served as a flat HTML, those with outdated browsers/misconfigured ad blockers/javascript turned off can still see the most important part of the content. Maybe they miss some interactivity, but they can still at least read the gist of the article since that's just HTML.
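That setup roughly corresponds to Next.js's incremental static regeneration, where `revalidate` caps how often a page may be rebuilt in the background (the CMS client here is hypothetical):

```jsx
export async function getStaticProps() {
  const page = await cms.getPage("home"); // hypothetical CMS client

  return {
    props: { page },
    // Serve the cached page; at most once per 60 seconds, a single
    // background regeneration may check the CMS and rebuild it.
    revalidate: 60,
  };
}
```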
Depending on your specific edge function configuration (i.e. whether it needs web workers or web sockets or such) some browsers may not load that correctly. Dangerous for that to be the only delivery mechanism. But if your edge function just masquerades as a web server and returns plain old HTTP, that shouldn't be a problem.
> In our Next.js setup, for example, all pages are served static by default and cached around the world, but every revalidation period (60 seconds or so?), one edge worker will check the data source against the upstream CMS, and rebuild that one page if needed.
Are you all using the incremental static regeneration API to accomplish this?
> Are you all using the incremental static regeneration API to accomplish this?
Yep. It's great, if hackish. A good overview (https://www.smashingmagazine.com/2021/04/incremental-static-...) or Vercel's own docs (https://vercel.com/docs/concepts/next.js/incremental-static-...).
Vercel aside, I think Cloudflare by itself works similarly if you configure it to "cache everything", including HTML pages, on a revalidation period. I find this a great balance between serverside rendering and static builds. Specifically it allows you to rebuild certain pages as things are created/updated (new or edit blog posts, products, etc.) without having to trigger a full rebuild every single time.
My consistent experience is that sites that simply make a round trip to the server and re-render the world on every non-trivial interaction are _much_ faster than "performant" web apps that try very hard never to request actual HTML from a server, ever.
Could this happen in sveltekit?
It seems somewhat unlikely for a number of reasons. The article was quite long, so I didn't read it all, but a key takeaway seemed to be "Gatsby only uses the server-side rendering APIs when building for production". SvelteKit does SSR in development as well, so you wouldn't have the issue of dev and production differing in that way. Also, Svelte's hydration works a bit differently than React's.
This specific issue: no, not likely. It _is_ possible for a SvelteKit application to vary in execution in the `dev` environment versus production because the contexts can be somewhat different, at least with certain adapters, but this isn’t a SvelteKit issue _per se_.
Just to clarify my question: if I have already logged in and then closed the window, and later open a new tab and send a request to the server, won't the token/cookies/session info be sent in the initial request for the page? And if the auth info is sent via the headers, can't we check the user's login status (expired sessions, etc.) on the server and send the right component back?
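With per-request SSR, yes: cookies ride along on that first document request, so the server can branch on them before sending HTML. (With build-time static generation there is no request-time render, which is the constraint discussed upthread.) A runnable sketch of the check (the cookie name is illustrative, and a real check would validate the session against a store, not just test for presence):

```javascript
// Parse a Cookie header ("k=v; k2=v2") and branch on the session cookie.
function isLoggedIn(cookieHeader) {
  const cookies = Object.fromEntries(
    (cookieHeader || "")
      .split(";")
      .map((pair) => pair.trim().split("=").map(decodeURIComponent))
      .filter((parts) => parts.length === 2)
  );
  // Hypothetical cookie name; presence alone stands in for validation here.
  return Boolean(cookies.sessionId);
}

console.log(isLoggedIn("theme=dark; sessionId=abc123")); // true
console.log(isLoggedIn("theme=dark")); // false
```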