
Shopify modular monolith scaled to 30 TB per minute during Cyber Monday

Author: vladmihalcea

Score: 36

Comments: 8

Date: 2021-12-04 14:18:40

________________________________________________________________________________

jlgaddis wrote at 2021-12-04 20:58:56:

_... averaging ~30TB/min of egress traffic ..._

That's 4 Tbps after converting to units that everyone else in the world uses for network traffic (assuming the calculator installed atop my neck is operating bug-free this soon after booting up today).
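Spelled out, the conversion is plain unit arithmetic on the 30 TB/min figure quoted above (a quick sanity check, nothing more):

```python
# 30 TB per minute of egress, expressed as terabits per second.
terabytes_per_minute = 30
terabits_per_minute = terabytes_per_minute * 8   # 1 byte = 8 bits
terabits_per_second = terabits_per_minute / 60   # 60 seconds per minute
print(terabits_per_second)                       # 4.0 -> ~4 Tbps
```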

ksec wrote at 2021-12-04 21:06:06:

Not trying to downplay those results and achievements.

But does each storefront get its own instance of both the database and the app? If that is the case, isn't that equivalent to running millions of WordPress instances?

I think I asked this before but I forgot what the answer was.
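For anyone unfamiliar with the distinction being asked about, here is a rough sketch of the two tenancy models (purely illustrative; the shop_id column and query are hypothetical, and the thread does not say which model Shopify actually uses):

```python
# Model A: one app + database instance per store ("millions of WordPress installs").
# Each store is fully isolated, but every instance has to be provisioned,
# monitored, and upgraded on its own.

# Model B: one shared (multi-tenant) application, with every query scoped to a tenant.
def orders_for_shop(db, shop_id):
    # All tenants share the same schema; isolation comes from the shop_id filter.
    return db.execute("SELECT * FROM orders WHERE shop_id = ?", (shop_id,))
```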

codegeek wrote at 2021-12-04 16:33:54:

Show this to everyone who shits on monoliths and only wants to build scalable serverless microservices or whatever.

cardosof wrote at 2021-12-04 19:04:54:

Some people just want to build CVs, not products.

dmlittle wrote at 2021-12-04 19:23:17:

A service-oriented architecture is not a bad thing. It has its benefits as well as its downsides. The key is to know when to use the pattern and when not to.

I’m not sure what exactly “modular monolith” means, but I’m guessing it’s some kind of monorepo with different services that share a large core but can scale independently. At Shopify’s scale, I doubt you could use just a single service and database without running into problems (both technical and development-productivity-wise).
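To make that guess concrete, here is a minimal sketch of what a “modular monolith” boundary might look like: one deployable codebase split into components that talk to each other only through narrow public interfaces. The component names and functions below are hypothetical, not Shopify’s.

```python
# One codebase, one deployment; each component exposes a small public API and
# keeps its internals private. A boundary-checking tool in CI can flag imports
# that bypass a component's public module.

# --- orders/public.py: the only module other components may import ---
def place_order(shop_id: int, cart: dict) -> int:
    """Public entry point of the hypothetical 'orders' component."""
    return _persist_order(shop_id, cart)

# --- orders/internal.py: implementation detail, off-limits to other components ---
def _persist_order(shop_id: int, cart: dict) -> int:
    # Storage details stay inside the component.
    raise NotImplementedError

# --- checkout/service.py: another component, calling only the public API ---
def checkout(shop_id: int, cart: dict) -> int:
    return place_order(shop_id, cart)
```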

speedgoose wrote at 2021-12-04 18:17:08:

The secret is to build scalable monoliths. You probably need a few microservices to achieve that at this scale.

10000truths wrote at 2021-12-04 20:57:22:

The secret is keeping performance in mind throughout your whole tech stack and in your application code. Do that, and you might not even need to scale beyond a couple machines for redundancy, depending on your SLOs.

I’ve said this a couple of times before, but it bears repeating: nginx can easily serve a million 1 kB static files per second over TLS from a single machine with a modern Xeon/EPYC CPU. Serving dynamic content doesn’t have to be more than one or two orders of magnitude slower than that.
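As a back-of-the-envelope check on what that claim implies for bandwidth (using only the figures in the comment above, not measured numbers):

```python
# Payload bandwidth implied by "a million 1 kB responses per second".
requests_per_second = 1_000_000
response_size_bytes = 1_000                        # 1 kB static file
payload_bits_per_second = requests_per_second * response_size_bytes * 8
print(payload_bits_per_second / 1e9)               # 8.0 -> ~8 Gbps, before TLS/HTTP overhead
```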

yuppie_scum wrote at 2021-12-04 14:21:57:

Near perfect

BilalBudhani wrote at 2021-12-04 15:51:19:

BuT rAiLs DoEsNt ScAlE