🐟 Archived View for republic.circumlunar.space › users › korruptor › blog › 2022-07-03-FixedPoint.gmi captured on 2024-06-16 at 12:22:31. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
One thing I never tried back in the day was 3D. I was crap at maths cos I didn’t pay attention at school. Fortunately, every day’s a school day, and since the 90s I’ve learnt enough linear algebra to do the basics.
Time to have a go on the Amiga?
But before all that, I need a quick way to do the maths, which is where Fixed-point comes in.
(If you grew up programming before the Pentium era then all this will be trivial to you. I needed to refresh my memory. I only ever did this once, and that was a very long time ago. I thought I’d write it up for posterity.)
The 68000 processor in the Amiga and ST is a hybrid 16/32-bit CISC processor: a 32-bit instruction set on a 16-bit bus. Unlike modern processors, it has no Floating-Point Unit [FPU], so processing numbers with fractional values is slow in comparison to integer-based maths.
3D graphics calculates a lot of angles and positions that rely on these fractional values, so we need a way to trade mathematical precision for processing speed. Fixed-point is a decent compromise.
In fixed-point, numbers are stored in standard C types (short, int, long, etc.) but the content of the variable differs from that of a normal integer. Fixed-point implies a position within the variable where the decimal point sits, and this position is kinda arbitrary. For example, a signed 16-bit integer normally has the range -32768 to 32767. If we create a fixed-point type of 8.8 [8 bits for the integer part, 8 bits for the fractional part] the integer range is reduced to -128 to 127, but we’ve gained 8 bits of fractional precision.
Where to put the decimal point depends on the use-case. The resolution of the Amiga, in PAL low res, is 320x256, meaning unsigned 8.8 fixed-point numbers would *nearly* give us enough range to cover the screen. 12.4 would be more than enough. 16.16 takes us to the moon and back (and may be faster for some operations?).
For trigonometry functions, like Sine, where the output is -1 to 1, we could use a 2.14 fixed-point notation, giving us 14 bits of fractional precision. More than enough for smooth rotations.
Conversion to and from a fixed-point number is simple: shift by the number of bits in the fractional part. I.e. if you’re using 8.8 fixed-point, shift by 8 bits. A set of pre-processor macros is enough to go to and from any of the fixed-point representations needed, but you must be mindful of potential underflow or overflow.
Addition and subtraction of fixed-point numbers (assuming they’re of the same notation, 8.8 or whatever) work the same as for normal integers, but multiplication and division need slightly different handling.
When two fixed-point numbers are multiplied, the result is shifted up by the number of bits in the fractional part, so it must be stored in a type large enough to hold the extra bits to avoid truncation. (Multiplying 16-bit words gives results in 32-bit longs.) The result must also be shifted back down before being stored:
(short) (((long) i * (long) j) >> 8);
Fixed-point division also shifts the result, but in the opposite direction, losing precision. To compensate, code needs to shift one of the arguments up before the operation:
(short) (((long) i << 8) / j);
Fast trig on a 16bit computer requires the use of pre-calculated look-up tables. And maybe some cheats.
Let’s take sin/cos. These functions take an angle as input. To pre-calculate their look-up tables, we need to decide what sort of precision is required for this input. Instead of having 360 degrees in a circle, we can say there are 256, meaning an angle can be stored within a byte and we get angular modulo for free as the integer overflows (handy).
A pre-calculated look-up table for sin/cos under this assumption would have 256 entries. The fixed-point precision of the values in the table is arbitrary: 2.14, 16.16… whatever is required of the results.
If you take advantage of the symmetry of sin/cos, it’s possible to reduce the number of entries: only the first quadrant [“90 degrees”, or 64 entries in this example] need be calculated. The result can be mirrored/negated as appropriate, depending on the quadrant of the input [angle] passed to the function.
256 “degrees” in a circle would be enough to bounce a sprite up and down smoothly, but it probably wouldn’t be enough for silky vector rotations of lines and polygons. In these cases it’s worth bumping up to 4096, resulting in a 1024-entry table with a precision of 360/4096 = 0.087890625 degrees.
That’s a decent trade of memory for speed!
I’m writing this for sport and it doesn’t need to do much beyond clearing out the cobwebs from my old-geezer brain.
You, dear reader, don’t need to know any of this. And, since you live in the future, you don’t need to code it, either. You can get a battle-tested fixed-point library that’ll work on old computers (or tiny embedded devices, whatever) right from GitHub:
https://github.com/PetteriAimonen/libfixmath
I hear it’s great, but it’s possibly not as much fun as working all this out [again] on a rainy Sunday afternoon 🙂