💾 Archived View for thrig.me › blog › 2023 › 12 › 30 › teaching-compsci.gmi captured on 2024-03-21 at 15:12:49. Gemini links have been rewritten to link to archived content

-=-=-=-=-=-=-

Teaching Compsci

Some random commentary on Dijkstra's "On the cruelty of really teaching computing science", an archive of which can be found at:

/science/dijkstra-really-teaching-computing-science.gmi

It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies,

Some programmers will read this as an excuse not to write tests at all, rather than as a push toward formal methods. And formal methods are not problem-free either; one might read that they "can provide limited guarantees of correctness too, but, except in safety-critical work, the cost of full verification is prohibitive and early detection of errors is a more realistic goal."

http://people.csail.mit.edu/dnj/publications/ieee96-roundtable.html

Another factor is whether the business folks allow time for tests, formal methods, documentation, and all that jazz. In medicine and aviation they are sometimes forced to by regulation, but see the Boeing 737 MAX debacle.

Back to Dijkstra.

Unfathomed misunderstanding is further revealed by the term "software maintenance", as a result of which many people continue to believe that programs —and even programming languages themselves— are subject to wear and tear.

It is not difficult to find software that no longer compiles ("ld: error: duplicate symbol: X"), or that now segfaults because Clang 13 added an optimization, after which null pointers had to be changed to 0x1 pointers. rogue 3.6.3 (1981) took quite a bit of work before the code was acceptable to modern C compilers (2018). Or, security insights mean that hash key order is now randomized to ward off algorithmic complexity attacks, so anything that relied on the old ordering is now in error. (Technically such code was already in error, but there was no test forcing the issue.)

Probably some of this is that certain spawns of Algol are messes in motion. Another factor might be folks trying to make money, which is often orthogonal to good software (see: Microsoft, or the modern web).

Famous is the story of the oil company that believed that its PASCAL programs did not last as long as its FORTRAN programs "because PASCAL was not maintained".

I suspect that a random blob of FORTRAN or Common LISP will be much less in need of maintenance than, say, JavaScript or Python. That oil company FORTRAN probably still works, but what of their PASCAL? The hypothesis here is that code in languages that are bad at backwards compatibility will need more maintenance. And what to do when the language falls off the popularity wagon, such as PASCAL or Python 2?

By way of illustration of my doubts, in a recent article on "Who Rules Canada?", David H. Flaherty bluntly states "Moreover, the business elite dismisses traditional academics and intellectuals as largely irrelevant and powerless."
So, if I look into my foggy crystal ball at the future of computing science education, I overwhelmingly see the depressing picture of "Business as usual".