Floating point math did not get the memo that the magnitude of the velocity of a circular orbit should be constant. The direction of that constant velocity is always changing (like a rolling stone), which under floating point math gives ample opportunity for small errors to accumulate, and thus consigns the aliens of the week to (probably) the oblivion that is deep space.
Luckily for us, so far, our universe has apparently figured this stuff out, unlike my homegrown orbital simulator (version 2.0 beta).
```
inline static void normalize(struct vector *v, long double length)
{
    // KLUGE x + y + z != len because of the magic of floating point
    // math, try to correct for it
    static long double err = 0.0L;
    length += err;
    v->vxx /= length;
    v->vyy /= length;
    v->vzz /= length;
    err = length - (v->vxx + v->vyy + v->vzz);
}
```
It's probably a good thing that I'm not a rocket scientist, nor an Orbital Trade Mechanic III. I've also seen error correction code go horribly wrong and come to dominate the equation, an issue the above code may or may not have. Knuth, meanwhile, apparently uses 2147483647 ⊕ 1 to indicate that a computer's "+" is not actually a plus like you probably learned back in factory training school. And if anyone has INT_MAX, it's Knuth. (Computing probably needs a pantheon, so what better time to start than now?)
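The ⊕ complaint applies just as well on the floating point side of the house; here is a quick sketch at the REPL, assuming SBCL's default single-floats, where "+" stops being the plus from school well short of INT_MAX:

```
;; With the default *read-default-float-format* of single-float,
;; 2^24 is where adding 1.0 stops changing anything: the true sum
;; lands exactly between two representable values and rounds back
;; down to the even one.
(= (+ 16777216.0 1.0) 16777216.0)    ; => T, so x (+) 1 = x

;; Addition is not even associative; the 1.0 survives or vanishes
;; depending on which pair gets added first.
(+ (+ 1.0e20 -1.0e20) 1.0)           ; => 1.0
(+ 1.0e20 (+ -1.0e20 1.0))           ; => 0.0
```

Nothing overflows here; the ones are simply too small relative to the spacing of the nearby representable values to survive the rounding.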
Other deviant behavior can be observed. Here we have an often-copied routine that one can probably find in use in various places.
```
(defun standard-deviation (seq mean n)
  (sqrt (/ (loop for x in seq sum (expt (- x mean) 2))
           (1- n))))
```
This will work, until it does not (a common refrain in computing), in particular when the summed squares become large enough that the apple goes all pear shaped. Related one-pass variants of the formula can even go negative, which don't make no lick of sense given that the numbers were squared, but, hey, computers! When the "does not work" happens will depend on the system, and on whether you have float, double, long double, or extra super duper long doubles. The larger sizes help with chip sales and provide a larger rug to sweep issues under. Maybe you'll never even notice?
```
(defun standard-deviation (seq mean n)
  (sqrt (/ (loop for x in seq sum (expt (- x mean) 2))
           (1- n))))

(defun trial-by-stone (n)
  (loop repeat n
        with seq and sum = 0
        do (let ((x (+ 1000000 (random 0.01))))
             (incf sum x)
             (push x seq))
        finally (return (standard-deviation seq (/ sum n) n))))

(trial-by-stone 1000000)
```
With SBCL 2.2.5 on OpenBSD 7.2 on AMD64 this yields a standard deviation of 3557.5227; given that the numbers should all be pretty close to 1000000.005, this answer seems a bit...wrong. Catastrophic cancellation (no, not the McCarthyism of the 1950s) is a term for what is going on here. One fix is to rewrite things using "Welford's method". Another is to use a statistics library that is fast, gets the math right, has good documentation, is easy to use, is portable, and does not come with a 900 pound gorilla and all the rest of the forest, too. Obviously compromises will need to be made.
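For the curious, here is a minimal sketch of Welford's method (the function name and the double-float accumulators are my own choices, nothing standard): it keeps a running mean and a running sum of squared deviations, updated one sample at a time, so there is never a gigantic intermediate sum to chew the low bits off.

```
;; Welford's method: one pass, no huge intermediate sums, no
;; catastrophic cancellation. Double-float accumulators here are a
;; choice, not a requirement.
(defun welford-standard-deviation (seq)
  (let ((n 0) (mean 0d0) (m2 0d0))
    (dolist (x seq)
      (incf n)
      (let ((delta (- x mean)))
        (incf mean (/ delta n))          ; running mean
        (incf m2 (* delta (- x mean))))) ; running sum of squared deviations
    (sqrt (/ m2 (1- n)))))
```

Mind you, on the trial above even this cannot recover the tiny spread of the intended data: single-floats near 1000000 are spaced 0.0625 apart, so the random fractions were rounded away the moment the samples were stored. The data needs to live in a wider type (or have the 1000000 subtracted off first) before any method can do it justice.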