💾 Archived View for 9til.de › lobsters › upiyhx.gmi captured on 2023-11-14 at 07:46:12. Gemini links have been rewritten to link to archived content


-=-=-=-=-=-=-

 .----------------.
| .--------------. |
| |   _____      | |
| |  |_   _|     | |
| |    | |       | |
| |    | |   _   | |
| |   _| |__/ |  | |
| |  |________|  | |
| |              | |
| '--------------' |
 '----------------'

Viewing comments for "Four Kinds of Optimisation"

---

david_chisnall commented [1]:

I completely agree with the four kinds of optimisation, though the first and second are often intertwined. Great discussion on the difference between best case and common case on your data - that's often critical and is largely overlooked in university classes.

Option 4 is the most interesting one. Apple's FSEvents mechanism scales a lot better than Linux's inotify because it tells you only the directories that have changed, not the individual files. You can then look at the modification time of the files to see which have changed since your last epoch for that directory. This also has the advantage that it does some coalescing for free. I'd be inclined to subdivide these into two categories:

* Accept less precision in a first round that can be recaptured later.
* Accept less precision because you don't actually care.
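The directory-level scheme described above can be sketched roughly like this (a hypothetical helper for illustration, not the actual FSEvents API; it only assumes some notification has already told us which directory changed):

```python
import os

def files_changed_since(directory, last_epoch):
    """Given a directory reported as changed by a coarse,
    FSEvents-style notification, return the files whose
    modification time is newer than the epoch of our
    previous scan of that directory."""
    changed = []
    for entry in os.scandir(directory):
        if not entry.is_file(follow_symlinks=False):
            continue
        if entry.stat(follow_symlinks=False).st_mtime > last_epoch:
            changed.append(entry.path)
    return sorted(changed)
```

The notification itself stays small and coalesces repeated writes for free; the per-file work happens lazily in the consumer, which is exactly the trade-off being described.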

FSEvents is an example of the first case. You don't care which files are modified because traversing a directory and looking at the modification times is cheap (or, at least, less likely to cause performance problems than blocking writes until an event notification consumer has caught up). In contrast, using lower precision floating point for coordinates in a game because users won't notice if an object is less than one pixel away from its 'correct' position is an example of the latter. The first has some overlap with choosing a better algorithm.
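A quick way to see why the game-coordinate case is safe: storing even a large-ish coordinate in single precision loses far less than a pixel. A small illustrative sketch (assuming, hypothetically, that one world unit maps to one pixel):

```python
import struct

def round_trip_f32(x):
    """Store a Python float (binary64) in 32 bits and read it
    back, modelling an engine that keeps coordinates in float32."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 12345.6789                    # a world coordinate, one unit per pixel
error = abs(x - round_trip_f32(x))
print(error)                      # well below one pixel
```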

I think there's a higher-level category, which is 'reevaluate your requirements'. Some of the biggest performance gains that I've made have come from applying the techniques in the article to a system that was already carefully optimised. Between the time that it was written and the time that I'd looked at it, the requirements had changed significantly. Sometimes this applies to the input changing as it interfaces to a different system (you built a thing for processing SMS, now it's used with WhatsApp, suddenly short 7-bit ASCII strings are replaced with longer UTF-8 sequences). Sometimes it's that the hardware changed (globals got slower, passing arguments got cheaper, parallelism became the big win, cache misses got really expensive, branch predictor performance became critical).
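The SMS-to-WhatsApp shift is easy to see concretely. A tiny sketch (the example strings are invented for illustration):

```python
sms  = "Call me when you land"         # 7-bit ASCII: one byte per character
chat = "Déjà vu 🎉 längere Nachricht"  # typical chat text: multi-byte UTF-8

print(len(sms),  len(sms.encode("utf-8")))   # byte count equals character count
print(len(chat), len(chat.encode("utf-8")))  # byte count exceeds character count
```

Any optimisation tuned to the first shape of input (fixed small buffers, one-byte-per-character indexing) quietly stops matching the workload when the second arrives.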

---

Served by Pollux Gemini Server.