Firefox's Optimized Zip Format

Author: throw0101a

Score: 279

Comments: 81

Date: 2021-12-01 22:56:15

________________________________________________________________________________

jandrese wrote at 2021-12-02 01:00:55:

> At the time, the optimized jar change broke antivirus scanners, which further sped up Firefox startup :)

I'm pretty sure that counts as cheating.

AV overhead is a real issue though. I was recently copying a directory full of small files from one fast SSD to another (larger) one. I started the transfer and Windows estimated about 8 hours to finish; the overall progress seemed roughly on track. On a whim I tried temporarily turning off Microsoft's built-in antivirus scanning. The total transfer time dropped to about 40 minutes.

BiteCode_dev wrote at 2021-12-02 09:46:48:

Once, I was working on an intranet web tool generating .odt documents for the user to print.

It worked perfectly, except on one machine, where the document could be downloaded but LibreOffice raised an error when opening it.

At first I thought I had a bug. Checked my code, nope.

Then I thought maybe there was a problem with LO, but no: same version, nothing special. It worked with a document I brought over on a USB disk.

After an afternoon of frustration I noticed Kaspersky running in the taskbar, and made one last desperate attempt at finding the solution by disabling it.

It worked.

Inspecting the file revealed some parts were missing: Kaspersky had assumed it was malware, and not only intercepted the download but silently removed part of the data on the fly.

No alarm, no pop up, just a broken file.

Disabling the antivirus was not acceptable for the client, so I created a self-signed certificate and turned HTTPS on.

The virtual vigilante couldn't spy on the wire anymore, and the user could download her files, unharmed.

thrower123 wrote at 2021-12-02 13:04:59:

I had a development machine once that had the weirdest problem building one particular C# project. Always complaining that one particular Microsoft DLL was missing. Double check references and Nuget, everything looks right, do a full package restore, no joy. Antivirus was eating this particular DLL every time it was downloaded.

I copied a bit-for-bit identical version of the file from a different machine into the correct directory, and bizarrely that worked fine.

mort96 wrote at 2021-12-02 13:28:43:

I had a very similar issue with BitDefender recently. When downloading the WebRTC C++ library, one of the submodules is harfbuzz, which contains some maliciously crafted font files used by the test suite. BitDefender automatically deletes those test files. That means git sees the repository as dirty, since files are deleted. That means WebRTC's fetch script can't run the 'git checkout' commands it expects to be able to run, and the whole download fails.

While my summary here is brief, debugging this whole mess took a while.

hyperman1 wrote at 2021-12-02 08:32:02:

Let me present to you McAfee copying a folder full of zip files; at least, this is what it did five years ago. First, it is horribly slow at scanning any kind of zipped content. Second, it scans a whole folder if you touch one file. Third, it forgets it has scanned a file if the file gets modified, even before the file is closed.

So you copy a buffer's worth (64k?) of bytes, and McAfee scans the full folder. Then you copy another buffer's worth, and it scans the full folder again. And so on.

I saw the Windows estimated copy time go down from multiple days to a few minutes by adding an exclusion for the folder.

Another fun McAfee tip is to keep your recycle bin as empty as possible, especially for zip files, because Windows will look at the contents regularly, so McAfee keeps scanning it again and again.

franga2000 wrote at 2021-12-02 10:34:08:

> Another fun McAfee tip

Here's my favourite one, which was even supported by the man himself: _uninstall it as soon as possible!_

It's not only a pretty bad antivirus, but it seems to be straight-up dangerous, having had some of the most destructive bugs I've seen in any software (like the time it bootlooped millions of PCs around the world, including hospitals and police stations!). There are so many better options available that I'm amazed anyone actually uses it (unless 100% of their install base is people who got it preinstalled).

hyperman1 wrote at 2021-12-02 12:02:58:

You're my new best friend! I'd give you the contact for corporate IT, but even they ran away to MS Defender.

McAfee's users are not the people who suffer it on their PCs, but the managers who love a big name and hate a migration, and the Windows domain admins who love the centralized console. The damage caused by user frustration, time lost waiting, troubleshooting, and hardware made obsolete before its time is not their concern.

I can tell from experience that the Windows domain admins disable it on their own laptops, as they 'know what they are doing', unlike the rest of us.

formerly_proven wrote at 2021-12-02 08:56:37:

McAfee is still utter garbage (like, well... most "security" software) that will make an NVMe SSD look like a late 90s HDD.

jazzyjackson wrote at 2021-12-02 01:48:24:

Wild that Windows feels it important to scan every file going from one local drive to another. My laptop has gotten a lot quieter since turning off virus protection; I wonder how much electricity is burned just idly checking for viruses.

Edit: thanks for the replies. On second thought, it is pretty tricky to know that a file you're writing already exists without trusting the process doing the copying.

passerby1 wrote at 2021-12-02 02:07:58:

Interesting question! By the time mankind really needs to calculate the costs and get rid of such entrenched "habits" as AV scanning (same for crypto mining nowadays), it could be too late to do anything.

Or take banks. I was amazed to see how much manual work is done in financial institutions. Skyscrapers with a bank's name on top, filled with tens of thousands of people. Everyone has a computer, among other things burning electricity. Separately, datacenters. How does it compare to crypto? I doubt it's less, if you sum up all the banks.

But anyway: AV scanning does a lot of good, yet it can be socially engineered around in a single click, making it useless in the most critical cases.

jahnu wrote at 2021-12-02 06:52:30:

Most manual financial work in banks is not on mere transactions; it's on higher-level structures, such as doing due diligence on lending. A small fraction of the employees are actively keeping the basic transaction mechanisms going. Rest assured the fraction working on those is, per transaction, a mere fart in the proverbial hurricane that is crypto's energy use per transaction.

nightfly wrote at 2021-12-02 02:14:16:

Re: banks/crypto

Crypto's current energy consumption is doing a minuscule fraction of the transaction handling that banks are. I don't think it would even be possible to scale up current crypto to match the banks.

therein wrote at 2021-12-02 04:00:14:

Crypto's power consumption is not truly related to the transaction volume.

nwah1 wrote at 2021-12-02 05:19:51:

Right, the transaction volume it can support will never change, but the energy consumption rises as a function of the price, with no upper bound.

Groxx wrote at 2021-12-02 05:38:44:

... every piece of that sentence is straightforwardly incorrect.

jazzyjackson wrote at 2021-12-02 06:48:28:

Do you want to explain how?

AFAIK the number of transactions per block and the rate at which blocks are confirmed have nothing to do with how many people are burning through hashes.

And the price of a token is a kind of bounty for mining a block: the higher the price, the more money can be spent trying to win the bounty while still being profitable. You wouldn't see data centers full of ASICs if the reward was only a few hundred bucks.

So how straightforward is it?

Groxx wrote at 2021-12-02 19:24:10:

Their:

>_the transaction volume it can support will never change_

is plainly ridiculous (as proof: transaction volume _has_ changed, and multiple changes have been successfully made that increase the peak volume), and is also covered by your:

>_AFAIK The number of transactions per block and the rate at which blocks are confirmed has nothing to do with how many people are burning through hashes._

as for:

>_the energy consumption rises as a function of the price, with no upper bound._

Though there's obviously _some_ relation between price and electricity use (i.e. hashing power, ignoring tech improvements), that direct relation really only makes sense for the per-block rewards... which keep halving, and that does not immediately lead to 1/2 the hashing power. The market has repeatedly demonstrated that price and value-per-block do not directly relate to hashing/electricity. On top of that, transaction fees are taking a larger and larger proportion of it, and fees are related pretty much solely to transaction volume (which does not noticeably affect electricity, as you pointed out), not to value per coin or electricity.

As for "no upper bound": only if you ignore market effects, as if electricity became free globally. Since it's market-driven, though, and beyond a certain point it's not profitable, you can't ignore that. And again, hashing power _has dropped many times in the past_, clearly demonstrating effective upper bounds at any given time.

ggrrhh_ta wrote at 2021-12-02 07:07:21:

Can you explain in which ways it is incorrect? I am puzzled. I guess what is incorrect is using the word "crypto" instead of "PoW blockchain-based"?

nfin wrote at 2021-12-02 07:45:05:

After a halving (every 210,000 bitcoin blocks, roughly every 4 years), all other things being equal (bitcoin price, energy price, bitcoin fees, ... and enough chip production), the energy consumption would get cut in half (as the reward gets cut in two).

This should stay true for another 2-3 halvings (equivalently: energy consumption stays the same if the price doubles), until the factor of 0.5 (energy consumption after a halving) starts to rise towards 1 when/if higher fees compensate for the diminishing issuance of new coins (which goes to 0).

For now, fees are very low in the big picture and almost irrelevant in the power-consumption calculations.

But until those roughly 2-4 halvings happen (= Y), the price might increase by a factor of X while energy consumption grows only by X/2^Y (which could even be a decrease if the price does not go up much!). And if the price stayed the same, energy consumption would actually drop by 50% every halving (all other things being equal, and assuming the supply of miners and mining chips were not a problem).

So, to sum it up differently: Bitcoin consumes a lot of energy to have a fair issuance of new coins. Instead of coins being created out of nothing and distributed unfairly (a worldwide fair issuance of a new coin is REALISTICALLY very, very hard, and has been tried by many projects, all of which failed), bitcoin gives you new coins in proportion to the energy you burn, so it can't be faked or corrupted in the hands of politicians or any other left/right/religious/... group.
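To make the arithmetic concrete, here is a toy calculation with purely illustrative numbers (the price, fees, and subsidy below are assumptions, not data):

```python
# Toy model: miners can profitably spend up to roughly their revenue
# per block on energy, so the energy budget tracks block revenue.
price_usd   = 50_000   # assumed bitcoin price
subsidy_btc = 6.25     # block subsidy; halves every 210,000 blocks
fees_btc    = 0.3      # assumed fees per block (small next to the subsidy)

before = (subsidy_btc + fees_btc) * price_usd      # ~$327,500 per block
after  = (subsidy_btc / 2 + fees_btc) * price_usd  # ~$171,250 per block

print(after / before)  # ~0.52: close to the 0.5 factor described above
```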

saghm wrote at 2021-12-02 03:29:13:

Sure, but those banks are actually providing value for large numbers of real people right now; cryptocurrencies have yet to show that they can do much of what has been claimed about them.

atoav wrote at 2021-12-02 07:29:37:

I recall paying for a coffee with bitcoin once. The transaction took 30 minutes despite an above-average fee, which was much more than e.g. PayPal would have taken. That was when I knew bitcoin is unsuitable as a replacement for other payment providers.

cycomanic wrote at 2021-12-02 16:58:48:

> Interesting question! By the time mankind really needs to calculate the costs and get rid of such entrenched "habits" as AV scanning (same for crypto mining nowadays), it could be too late to do anything.

> Or take banks. I was amazed to see how much manual work is done in financial institutions. Skyscrapers with a bank's name on top, filled with tens of thousands of people. Everyone has a computer, among other things burning electricity. Separately, datacenters. How does it compare to crypto? I doubt it's less, if you sum up all the banks.

You'd likely be wrong. Bitcoin now uses as much energy as the whole country of Argentina, with 43 million people. For comparison, Bank of America has 200,000 employees. Are there more than 200 banks of that size? Maybe, but I'm doubtful. Moreover, Argentina's energy consumption includes vastly more energy-intensive endeavours than banking (manufacturing, mining, etc.). From that estimate I would be very doubtful that the whole banking sector uses more energy than bitcoin mining. Moreover, as others have pointed out, the banking sector serves many more people, with many services that bitcoin can't provide.

Spooky23 wrote at 2021-12-02 02:40:31:

I've always found amusement in the ironic carbon navel-gazing found on message boards.

dataflow wrote at 2021-12-02 02:51:47:

I think it's probably because the system doesn't see it as "copying"; it sees it as "writing arbitrary data that just so happens to be identical to existing data in another open file". It'd be tricky to figure out (and verify in a _secure manner_) that data is in fact being copied.

saghm wrote at 2021-12-02 03:26:40:

I mean, if you're literally using the Windows utility for copying files (either by clicking copy/paste in Explorer or using whatever the DOS equivalent of "cp" is), it seems like it shouldn't be that tricky for them to make it work, at least for the built-in AV, which is what the parent comment was talking about.

dataflow wrote at 2021-12-02 03:38:53:

I think that depends on whether you want to trust already-executing programs or not. If you trust them, then sure, it's pretty straightforward: you just tell the AV what to scan vs. not. If you don't, then it's not: you have to either offload the copying to kernel-mode, or delegate it to a secured user-mode process, because otherwise a malicious program could just modify the buffer you're copying from. The former makes sense from a performance standpoint but the latter also makes sense from a security standpoint.

makeitdouble wrote at 2021-12-02 06:19:57:

An issue is that the original files are not guaranteed to be trusted. If, for instance, you were copying files from a flash drive or a network share, it would seem natural to rescan every file that is copied to a local disk.

From there, the blurrier the "outside"/"inside" line gets (e.g. what if it was synced from Dropbox? what if data got injected by an arbitrary program? etc.), the more justified it becomes to just scan every file moving anywhere. If the goal is indeed to scan files in the first place.

buraktamturk wrote at 2021-12-02 03:34:53:

If I remember correctly, the CreateFileEx Win32 API takes a flag and a template file handle to copy from when creating a new file. Maybe copy operations could use this API and expect AV software to ignore the operation. (I have no idea about the performance of such an approach.)

Or maybe a syscall (preferably at the filesystem level) could be introduced for bulk copying purposes, and AVs could ignore such calls.

bmm6o wrote at 2021-12-02 16:40:47:

The template parameter is related to attributes, not contents. AV would still have to scan the data that is written to the file.

--

[in, optional] hTemplateFile

A valid handle to a template file with the GENERIC_READ access right. The template file supplies file attributes and extended attributes for the file that is being created.

cesarb wrote at 2021-12-02 15:29:04:

> It'd be tricky to figure out [...] that data is in fact being copied.

For other operating systems, sure, but the Win32 API has a function literally called CopyFile (and its variants CopyFileEx and CopyFileTransacted).
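For what it's worth, a minimal ctypes sketch of calling that API directly, so the operation is expressed to the OS as a copy rather than as a stream of reads and writes (error handling kept crude):

```python
import ctypes

kernel32 = ctypes.windll.kernel32

def os_copy(src: str, dst: str) -> None:
    # CopyFileExW(existing, new, progress_routine, data, cancel_flag, flags)
    if not kernel32.CopyFileExW(src, dst, None, None, None, 0):
        raise ctypes.WinError()  # surfaces GetLastError() as an OSError
```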

greggman3 wrote at 2021-12-02 07:18:52:

I don't know under what circumstances Windows scans files, but I do know that copying a file to the same drive does not rescan the file. I found this out because grunt-contrib-copy copies incorrectly, by opening the src, reading bytes, and writing to dest instead of just asking the OS to copy the file. Switching it to ask the OS to copy the file removed the scanning (Windows already knows it scanned the file, so there's no need to scan a copy).

thrower123 wrote at 2021-12-02 01:58:11:

One hasn't felt pain until one has dealt with large npm-based projects on a corporate machine that has an HDD and two competing antivirus scanners battling to scan all 18,000 files in node_modules every time you run webpack.

cl0ckt0wer wrote at 2021-12-02 05:39:26:

The built-in AV scans on close of a file handle; moving the close to a background thread really speeds things up.

https://groups.google.com/g/mozilla.dev.platform/c/yupx2ToQ5...

cl0ckt0wer wrote at 2021-12-02 05:40:49:

Found the article that goes into depth on this issue

https://gregoryszorc.com/blog/2021/04/06/surprisingly-slow/

and the bug link:

https://www.mercurial-scm.org/repo/hg/rev/2fdbf22a1b63f7b4c9...
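A minimal Python sketch of the idea in that patch (the pool size and helper are illustrative, not Mercurial's actual code): hand close() off to a worker thread, so the writer doesn't block on the AV scan that closing the handle triggers.

```python
import concurrent.futures

# On Windows, closing a freshly written file can block while the AV
# filter driver scans it, so queue the close on a small thread pool
# instead of paying that cost on the hot path.
_closer = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def write_file(path, data):
    f = open(path, "wb")
    f.write(data)
    _closer.submit(f.close)  # the scan now happens on a background thread
```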

resonious wrote at 2021-12-02 11:41:29:

This kind of thing really makes me wonder how safe these AV systems really are. Code that runs on every single file dropped into the user's hard drive seems like a great vehicle for zero click exploits.

Thiez wrote at 2021-12-02 14:12:52:

At a previous job we discovered that the software was strangely slow when running locally. It turns out that the antivirus (ESET, I believe?) was delaying _every_ WCF request by 0.5 seconds. After adding encryption to the connections the delay magically went away.

roywiggins wrote at 2021-12-02 05:24:43:

Even worse than that, I've had my antivirus freak out and BSOD the system while copying too many files around inside WSL1, where "too many" was "checked out a nontrivial git repo." It was repeatable. It meant WSL1 was just not usable at all.

dgellow wrote at 2021-12-02 06:26:41:

You can add the top directory you use for dev projects to the Windows Defender exclusion list. Your build tools or IDE processes can also be added. For example, excluding devenv.exe has a significant impact on Visual Studio performance.

TomGullen wrote at 2021-12-02 08:15:44:

Took me longer than it should have to realise that Windows Defender scanning everything in wwwroot every time we published a new build of the site was significantly increasing downtime!

KronisLV wrote at 2021-12-02 09:01:57:

A part of me wants to suggest that you look into using something other than Windows for hosting sites, since in my experience GNU/Linux servers are far less painful to deal with, especially when it comes to services and file permissions. However, it's likely that you have your own reasons for using it (e.g. many projects out there are stuck with legacy versions of .NET and Mono wasn't really stable for that use case either).

On the other hand, I can't help but wonder why we don't see more antivirus/malware-scanning solutions for GNU/Linux, especially with how coupled our development workflows are to the Internet. I've had Ruby gems with malware in them be stopped by Kaspersky on Windows, and honestly the thought of pip, npm and many other package managers being able to compile god knows what and execute random scripts in a non-sandboxed environment is horrifying.

Market share aside, I bet we will still see malware targeting *nix being developed, especially in the age of pre-compiled binary blobs and such, and especially as the complexity of everything increases to the point where you can't audit everything that's present with your current resources. That will probably only become a larger problem with time.

So, why don't we have an equivalent to Windows Defender that's either preinstalled or recommended during installation, like you can choose between different desktops while installing distros? Why don't we talk more about something like

https://www.kaspersky.com/small-to-medium-business-security/...

?

roca wrote at 2021-12-02 02:27:54:

Anything that reduces the startup time users experience is not cheating.

alibert wrote at 2021-12-02 12:17:48:

Windows Defender is the worst AV in terms of performance impact on the system. Its impact on IO-heavy applications is so significant that I don't understand why it is not the number one priority for the Defender team. It has been like this for several years. I have a paid AV just because of this issue (to avoid the free one).

https://www.av-comparatives.org/tests/performance-test-octob...

jijji wrote at 2021-12-02 04:12:24:

That's primarily a Windows problem though, right?

josephcsible wrote at 2021-12-02 15:40:20:

McAfee exists for Linux, it's just as bad as you'd think it is, and some of us have the misfortune of being forced to use it on every workstation and server.

arghwhat wrote at 2021-12-02 12:10:53:

> I'm pretty sure that counts as cheating.

It also shows how useless most AV is.

eatonphil wrote at 2021-12-01 23:57:31:

The oddest part of the zip format to me is the MS-DOS time encoding. It seems pretty obscure today (at least going by Google): it took me a while to find an explanation of it with code [0] when I was recently doing an educational project, building a zip reader from scratch. But I assume it was a more popular encoding at the time zip was designed.

[0]

https://groups.google.com/g/comp.os.msdos.programmer/c/ffAVU...
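For reference, the encoding packs a date word (day, month, year-since-1980) and a time word (seconds/2, minutes, hours) into two 16-bit values; a small Python sketch of the decoding described in [0]:

```python
import datetime

def decode_msdos_datetime(ddate: int, dtime: int) -> datetime.datetime:
    """Decode the two 16-bit MS-DOS words stored in zip headers."""
    day    =  ddate        & 0x1F          # bits 0-4
    month  = (ddate >> 5)  & 0x0F          # bits 5-8
    year   = ((ddate >> 9) & 0x7F) + 1980  # bits 9-15: years since 1980
    second = (dtime        & 0x1F) * 2     # bits 0-4: 2-second resolution
    minute = (dtime >> 5)  & 0x3F          # bits 5-10
    hour   = (dtime >> 11) & 0x1F          # bits 11-15
    return datetime.datetime(year, month, day, hour, minute, second)
```

The 2-second resolution of the seconds field is what the comments below are reacting to.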

donio wrote at 2021-12-02 00:24:43:

It was the easiest to do since that's the format used by the DOS API calls for getting and setting the mtime.

barosl wrote at 2021-12-02 03:25:31:

> Since the lower four bits are /2

Wow, so the seconds field of every timestamp in a ZIP file must be an even number? That's shocking to me! It seems that RAR and FAT32 share this problem. How could I not know this?

mappu wrote at 2021-12-02 04:52:22:

FAT32 timestamps are also stored as local time instead of UTC, so there's no way to know the real mtime at all unless you have out-of-band knowledge. Just nonsense all round, by modern standards.

The really impressive thing is that our industry has managed to paper over most relics from The Olden Days to the point where this discovery is surprising!

lifthrasiir wrote at 2021-12-02 08:28:54:

Because there are several extensions (extra fields) to ZIP that record the full modification time, in addition to creation and access times, if the OS supports them.
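One such extension is Info-ZIP's "extended timestamp" extra field (header ID 0x5455), which carries a full Unix mtime. A sketch of parsing its local-header form:

```python
import struct

UT_ID = 0x5455  # Info-ZIP "extended timestamp" extra field

def unix_mtime_from_extra(extra: bytes):
    """Walk a zip entry's extra-field blob and pull out the Unix mtime."""
    i = 0
    while i + 4 <= len(extra):
        header_id, size = struct.unpack_from("<HH", extra, i)
        if header_id == UT_ID and size >= 5:
            flags = extra[i + 4]
            if flags & 1:  # bit 0: mtime is present
                return struct.unpack_from("<i", extra, i + 5)[0]
        i += 4 + size
    return None  # entry only has the 2-second DOS timestamp
```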

aidenn0 wrote at 2021-12-02 06:45:21:

You've clearly never rsync'd data to FAT before...

palsecam wrote at 2021-12-02 09:59:21:

Another explanation, with (JS) code, of the MS-DOS datetime encoding is at

https://github.com/PaulCapron/pwa2uwp/blob/79ae42ea43e98eaf9...

https://github.com/PaulCapron/pwa2uwp/blob/master/src/zip.js

in its entirety provides a simple intro, with code, to the ZIP format (archiving only, with no compression)

throw0101a wrote at 2021-12-02 01:04:03:

For anyone wondering about the official documentation of the file format, PKWARE has:

https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT

https://support.pkware.com/home/pkzip/developer-tools/appnot...

6.3.9 was released July 15, 2020.

lulalalalala wrote at 2021-12-02 02:49:36:

And for design spec ambiguities read

https://web.archive.org/web/20210722231119/https://games.gre...

MontagFTB wrote at 2021-12-02 03:21:50:

I recently made a visualization template of ZIP file contents for HexFiend. The format is certainly wonky, and some writers (I'm looking at you, macOS) don't get it quite right:

https://github.com/HexFiend/HexFiend/blob/master/templates/A...

floatboth wrote at 2021-12-02 01:42:29:

> renamed .jar files .ja so Microsoft System Restore wouldn't corrupt Firefox

OHH, so that's why it's called omni.ja... But why was it "jar" in the first place, for a thing that's _not_ Java?

dblohm7 wrote at 2021-12-02 03:49:54:

Another reason was that people trying to open omni.jar with zip utilities kept filing bugs, when the real issue was that the utilities couldn't handle our optimized format.

(I owned the JAR code for a few years after Taras left Mozilla)

ronsor wrote at 2021-12-02 01:51:46:

It's simply a jar of files, not a JAR (JAva aRchive) of files.

koolba wrote at 2021-12-02 02:05:42:

Though even the Java JAR file is itself just a zip file.

js2 wrote at 2021-12-02 02:44:48:

As are apk, aar and ipa files, among others. However, they aren't just zip files, as they must be in a specific layout, so they are a subset of the zip format.

easrng wrote at 2021-12-02 13:15:20:

You can make a valid APK/IPA/JAR polyglot: just put all the files in one folder, use the jar tool to make it back into a zip, then rerun jarsigner, because the APK has to be re-signed after adding the Java and iOS files.

tadfisher wrote at 2021-12-02 02:54:11:

I believe the format specifies the META-INF/MANIFEST.MF path, but in practice the classloader and jar tools can deal with a missing manifest. Otherwise it is a zip, with all of its warts and non-reproducible default behavior.

Did you know you can have multiple zip entries with the same path? Ask me how I found out.
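It is easy to reproduce with Python's zipfile, for instance (Python warns about the duplicate name, but happily writes both entries):

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("config.txt", b"first version")
    z.writestr("config.txt", b"second version")  # UserWarning, but it works

with zipfile.ZipFile(buf) as z:
    print([i.filename for i in z.infolist()])  # ['config.txt', 'config.txt']
    print(z.read("config.txt"))  # b'second version': readers just pick one
```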

diroussel wrote at 2021-12-02 08:54:14:

Another difference is that in a jar file, all file names are UTF-8 encoded, whereas in a zip file the encoding is not specified.

ajb wrote at 2021-12-02 09:08:37:

Most of the old-school archive formats allow adding the same path twice: it avoids having to rewrite a huge archive just to change a file.

account42 wrote at 2021-12-02 16:01:21:

You don't need the same path twice in the central directory record for that though.

koolba wrote at 2021-12-02 03:11:55:

That sounds like a recipe for a good time with ClassLoaders.

mdaniel wrote at 2021-12-02 04:57:34:

It's been the source of at least two Android vulns that I can recall, since code-path A uses the top manifest for signature verification but code-path B uses the bottom manifest for reading out "the file" and poof!

pvorb wrote at 2021-12-02 20:11:19:

How did you find out?

dolmen wrote at 2021-12-02 12:01:58:

How did you find out?

usrusr wrote at 2021-12-02 12:25:19:

The usual way to find out is using JarOutputStream without thinking about entry-name uniqueness. I suspect that, of those who do know, many learned it this way.

userbinator wrote at 2021-12-02 03:19:21:

There are plenty of Java_Script_ files in there.

josefx wrote at 2021-12-02 07:56:23:

That is nice, but they started using the extension in 2010, when Java had already been using it for over a decade. Sadly, applications relying on the file extension alone as a file-type indication are still common, so that was a rather big oversight.

cxr wrote at 2021-12-02 15:35:13:

Don't make the mistake of thinking of the commenter you're responding to as authoritative on anything. The comment is bullshit.

ksec wrote at 2021-12-02 12:10:24:

Achievement unlocked! A submission I made on HN led to a blog post from Taras Glek! On an optimisation that I actually remember from when he was still at Mozilla.

For those who don't know, Taras was part of Project MemShrink and Project Snappy and lots of other perf efforts. (I think he also worked on profiling?) A lot of those were what made Firefox usable before e10s and Quantum landed.

pvorb wrote at 2021-12-02 06:06:13:

It's a nitpick but did anyone else stumble over the term "pre-pandemic 2010"? What audience is the author writing for? Historians in a hundred years?

molszanski wrote at 2021-12-02 13:12:53:

Pixar USDZ files (fancy ZIPs) also use some tricks to improve the performance of ZIP files:

https://graphics.pixar.com/usd/release/wp_usdz.html#layout

Basically: important stuff first, no compression, and 64-byte alignment.
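The alignment part is simple arithmetic; a sketch of the padding a writer would insert (e.g. as a dummy extra field) so each entry's data starts on a 64-byte boundary:

```python
ALIGN = 64  # USDZ's alignment; Android's zipalign does the same with 4 bytes

def padding_needed(data_offset: int) -> int:
    """Bytes of padding so the entry data starts on an ALIGN boundary."""
    return -data_offset % ALIGN

print(padding_needed(130))  # 62: padded data would start at offset 192
```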

greggman3 wrote at 2021-12-02 07:22:33:

Making a file in that format would work, but it would be easy to create a valid zip file that a reader assuming that format would fail to read. You can easily have a Central Directory near the beginning of the file that the End of Central Directory record does not point to.
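For context, a sketch of what a spec-following reader does: scan backwards for the End of Central Directory signature and trust only the central-directory offset recorded there, not whatever happens to sit at the front of the file.

```python
import struct

EOCD_SIG = 0x06054b50  # "PK\x05\x06"

def central_directory_offset(data: bytes) -> int:
    """Find the EOCD record and return the central directory offset it names."""
    # The EOCD is 22 bytes; a trailing archive comment can push it earlier,
    # so scan backwards from the end of the file for its signature.
    for i in range(len(data) - 22, -1, -1):
        if struct.unpack_from("<I", data, i)[0] == EOCD_SIG:
            return struct.unpack_from("<I", data, i + 16)[0]
    raise ValueError("no End of Central Directory record found")
```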

usrusr wrote at 2021-12-02 12:19:47:

Or a zip file that would work under both format assumptions but has different content in each. Since both central directories could very well share file blocks, you could be quite subtle with the intended differences!

userbinator wrote at 2021-12-02 03:17:43:

...and the reason they had to go and optimise it in the first place is that so much of Firefox is actually not native code but written in JavaScript! I discovered that the first time I wanted to modify something, and was also rather surprised that it hadn't been minified either.

db48x wrote at 2021-12-02 10:11:37:

Minifying the JavaScript wouldn't help anything, as it all gets byte-compiled and then cached. It wouldn't even make the download smaller, since the download is compressed.

Jyaif wrote at 2021-12-02 10:34:06:

It would make decompression faster, and in practice it would also make the download smaller.

The reason they were not minifying the JS is probably that, at the time, it made debugging and/or analysing stack traces easier.

The_rationalist wrote at 2021-12-02 00:09:14:

how about this ->

https://github.com/ebiggers/libdeflate

https://lemire.me/blog/2021/06/30/compressing-json-gzip-vs-z...