Archived View for gemini.ctrl-c.club › ~ssb22 › spare.gmi captured on 2024-08-31 at 12:44:23. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
In the days of sub-100 MHz DOS PCs, unused CPU cycles were literally wasted: an idle PC waiting for input drew about the same amount of power as one performing calculations, so if you had some free RAM you might as well run a low-priority background task to compute something while the PC was otherwise doing nothing: it’s *free CPU cycles*. In 1996 I ran an automatic composition experiment by installing a small TSR on some school computers (by permission) and it had no ill effects.
However, modern CPUs, which are orders of magnitude faster, use *more electricity* (sometimes many times more) when occupied with calculations than when idle. Therefore, allowing your CPU to *remain idle* while waiting for input will save electricity, unless you are using thermostat-controlled electrical heating (of a type that does not involve a heat pump) at the time. It can also save on fan noise and/or component wear. This means people are likely to participate in distributed computation efforts only if they feel their contribution is worth those costs.
It might seem to make more sense to run background tasks on servers at data centres, especially ones that obtain their power responsibly. In those places, hopefully nobody has to sit next to the fan, and hardware is probably replaced on a regular schedule. However, this decision is probably best left to the administrators of the *physical* hardware. Many commercial servers are “virtual” (they look like separate machines but are actually sharing resources on a single machine), and providers of virtual servers tend to discourage sustained high CPU use *even at low priority*, because the virtualisation software might not be able to combine priority levels across virtual machines. On the other hand, some providers actually encourage CPU usage by saying idle machines will be reclaimed, but they might have rules about what kind of calculations are acceptable.
If you want to participate in a long-running project without it generating fan noise, the best option is probably to run it locally but throttle it to consume only a *small percentage* of idle CPU cycles (which would probably give computing power similar to a flat-out CPU from the old days), as long as the project does not need your results quickly (not all projects are suitable for this). Some GNU/Linux distributions can install a cpulimit command to throttle an arbitrary process using SIGSTOP and SIGCONT signals if the process itself doesn’t have a throttling option, provided it doesn’t spawn child processes (although some versions of cpulimit *can* be set to monitor for child processes, doing so typically results in 10%+ CPU being taken by cpulimit itself). Limiting the CPU *will* make work units take longer, and it will still take *some* extra power and warm the hardware, so to be truly “free” it’ll have to be done in a place that’s electrically heated anyway (or from power that would otherwise go to waste) and on hardware whose lifetime you don’t mind reducing, such as if it’s going to be replaced on a schedule anyway.
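The stop/continue throttling that cpulimit performs can be sketched in plain shell. This is a minimal illustration of the principle only (the 10% duty cycle and the 10ms/90ms timings are arbitrary assumptions, and fractional-second sleep is assumed, as on Linux); it is not a replacement for cpulimit itself:

```shell
# Minimal sketch of SIGSTOP/SIGCONT throttling, the mechanism cpulimit uses.
sh -c 'while :; do :; done' &   # a CPU-bound job to throttle
pid=$!

# Roughly a 10% duty cycle: run for 10ms, pause for 90ms, for about a second.
for i in 1 2 3 4 5 6 7 8 9 10; do
  kill -CONT "$pid"   # let the job run...
  sleep 0.01          # ...briefly
  kill -STOP "$pid"   # then pause it...
  sleep 0.09          # ...for most of each interval
done

kill -KILL "$pid"                # clean up the demo job
wait "$pid" 2>/dev/null || true  # reap it (exit status is non-zero)
```

In practice you would just run something like "cpulimit -l 10 -p <pid>" (exact option names vary between cpulimit versions, so check your distribution’s man page).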
On the other hand, if *new* hardware is purchased with a warranty and needs to be stress-tested before the warranty expires, and there is a choice between running an otherwise-useless load test and participating in a distributed computation, and the distributed computation makes a good enough test, then the CPU cycles that would have been consumed by the load test *are* “free” for the distributed computation, as long as one or more of its work units can be completed within the duration of the test. (If using BOINC in a temporary directory, you might want to set --abort_jobs_on_exit so the project “knows” not to wait for the deadline before reassigning any work you’ve interrupted.)
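For reference, the same behaviour can be requested via the BOINC client’s cc_config.xml instead of the command line; this is a sketch only, so check your BOINC version’s client-configuration documentation in case the option name or availability differs:

```xml
<!-- Fragment of cc_config.xml in the BOINC data directory (sketch;
     verify against your client version's documentation). -->
<cc_config>
  <options>
    <abort_jobs_on_exit>1</abort_jobs_on_exit>
  </options>
</cc_config>
```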
A final consideration is the project’s system requirements, some of which might not be made explicit in the project’s documentation. If your client completely fails to return results, or if it does return results but they’re mostly rejected (check the status), it might be a case of “the developers didn’t expect your OS version/CPU type/etc” and your contributions can’t help until that’s fixed. Also, some projects now prefer programmable GPUs; if you don’t have a suitable GPU for these projects then your CPU-only contribution might be dwarfed by those who do, representing diminishing returns in terms of electricity consumed per computational unit. This does not apply to CPU-only projects, but some of these do still have unstated system requirements. For example, the “Outsmart Ebola Together” project which ran on IBM’s World Community Grid between 2014 and 2018 declared invalid all results computed on old versions of Mac OS X unless run in a suitably-configured GNU/Linux in VirtualBox, and in mid-2024 the Folding@Home project started to give single-core x86-64 CPUs deadlines they were unable to meet, automatically erasing their work once it became too late.
All material © Silas S. Brown unless otherwise stated. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Mac is a trademark of Apple Inc. VirtualBox is a trademark registered by Oracle in various countries. Any other trademarks I mentioned without realising are trademarks of their respective holders.