One task I almost forgot about was some memory profiling I did a while ago. Sasagawa-san did a great job with CPU utilization, bringing it within a factor of 2 of the commercial implementation. However, I thought RAM usage was a little high.
First, there was some low-hanging fruit. Initialization was explicitly setting static memory to 0, which is unnecessary: C already guarantees that static storage starts out zeroed, and on a typical OS, pages of static storage that are never written don't get committed to physical RAM, so writing the zeros by hand only makes the process fatter.
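A minimal sketch of the pattern (the names, the cell layout, and the placeholder count are assumptions, not the project's real code); the fix is simply to delete the call:

```c
/* Sketch only: names, sizes, and layout are assumptions. */
#include <string.h>

#define CELL_SIZE 1000000               /* placeholder cell count */

struct cell { int tag; void *car, *cdr; };

static struct cell heap[CELL_SIZE];     /* static storage: already all zeros */

void init(void)
{
    /* Redundant: static objects are zero-initialized by the C runtime, and
     * writing the zeros dirties every page, forcing the OS to commit RAM. */
    memset(heap, 0, sizeof heap);
}
```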
Next, I tried to figure out where the memory was going. I did quick back-of-the-envelope calculations where that was easy, and printed `sizeof(X)` for certain global variables at the start of `main()` where it wasn't. The outcome was, as I suspected, that the `heap` array of `cell` structures was the only thing that mattered.
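For illustration, the check is as simple as it sounds; the globals below are hypothetical stand-ins for the interpreter's real ones:

```c
#include <stdio.h>

/* Hypothetical globals standing in for the real ones. */
struct cell { int tag; void *car, *cdr; };

static struct cell heap[1000000];
static char symbol_table[65536];

int main(void)
{
    /* Print the footprint of each large global; the heap dwarfs the rest. */
    printf("heap:         %zu bytes\n", sizeof heap);
    printf("symbol_table: %zu bytes\n", sizeof symbol_table);
    return 0;
}
```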
So then I tried to reduce the size of a `cell`. This involved packing some enum fields (reducing them from 4 bytes to 1) and packing the sub-structs as well.
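Roughly, the change looks like the sketch below. The real `cell` layout is certainly different, and `__attribute__((packed))` is GCC/Clang-specific; the idea is just to store the tag in a `uint8_t` instead of a 4-byte `enum` and to pack the struct (and its sub-structs) so the compiler doesn't pad the 1-byte field back out:

```c
#include <stdint.h>

enum tag { T_NIL, T_INT, T_CONS, T_SYMBOL };

/* Before: the enum field typically takes 4 bytes, plus alignment padding. */
struct cell_before {
    enum tag tag;
    struct { void *car, *cdr; } pair;
};                                       /* 24 bytes on a typical 64-bit ABI */

/* After: tag stored in a single byte, struct packed so no padding returns. */
struct __attribute__((packed)) cell_after {
    uint8_t tag;                         /* holds an enum tag value */
    struct __attribute__((packed)) { void *car, *cdr; } pair;
};                                       /* 17 bytes with GCC/Clang */
```

The usual caveat applies: packed fields can produce unaligned accesses, which are slower or unsupported on some targets, so the saving has to be weighed against that.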
Finally I started tuning the number of cells in `heap`, `CELL_SIZE` in *ffi.h*. I wanted the benchmarks to run to completion, and eventually figured out that at least 5M cells are required for this.
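The knob itself is just a constant; `CELL_SIZE` is the real name from *ffi.h*, while the cell layout and the `extern` declaration below are assumptions:

```c
/* Sketch: CELL_SIZE is the knob; the surrounding declarations are assumed. */
struct cell { int tag; void *car, *cdr; };   /* hypothetical layout */

#define CELL_SIZE 5000000   /* smallest count that let the benchmarks finish */

extern struct cell heap[CELL_SIZE];
```

With the hypothetical 17-byte packed cell from the earlier sketch, 5M cells works out to roughly 85 MB; the real layout gives a different number, but it shows why cell count and cell size are the two levers that matter.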
The outcome is that RAM usage is comparable to, e.g., Pharo Smalltalk, even though the feature set is much smaller. This is the price you pay for readability and maintainability: a 15 KLOC program versus a major research project ...