💾 Archived View for dioskouroi.xyz › thread › 24942671 captured on 2020-10-31 at 01:03:19. Gemini links have been rewritten to link to archived content

-=-=-=-=-=-=-

Edsger Dijkstra – The Man Who Carried Computer Science on His Shoulders

Author: furcyd

Score: 341

Comments: 148

Date: 2020-10-30 14:38:31

Web Link

________________________________________________________________________________

xenocyon wrote at 2020-10-30 16:56:28:

It is remarkable how much he contravened modern academic norms: no work to bring in external funding, relatively few papers, very little collaboration, almost no PhD students. It's not an exaggeration to say that a young researcher with these characteristics today would likely get fired from any high-ranked American research university before even coming up for tenure review. Perhaps our modern academic metrics of merit are missing something?

rramadass wrote at 2020-10-30 17:32:33:

They sure do.

Here is Freeman Dyson, the physicist without a PhD (

https://www.quantamagazine.org/a-math-puzzle-worthy-of-freem...

);

_Oh, yes. I’m very proud of not having a Ph.D. I think the Ph.D. system is an abomination. It was invented as a system for educating German professors in the 19th century, and it works well under those conditions. It’s good for a very small number of people who are going to spend their lives being professors. But it has become now a kind of union card that you have to have in order to have a job, whether it’s being a professor or other things, and it’s quite inappropriate for that. It forces people to waste years and years of their lives sort of pretending to do research for which they’re not at all well-suited. In the end, they have this piece of paper which says they’re qualified, but it really doesn’t mean anything. _

PopeDotNinja wrote at 2020-10-30 20:23:48:

My friend once said:

- B.S. = bull shit

- M.S. = more shit

- PhD = piled higher & deeper

jayjohn436 wrote at 2020-10-31 02:03:44:

Permanent Head Damage is what my advisor would tell me ;)

Judgmentality wrote at 2020-10-30 23:09:39:

And HBS is 2/3 BS.

krelian wrote at 2020-10-31 03:26:52:

Opened his Wikipedia page out of curiosity and what do you know, some climate change denier decided that his ideas about CO2 levels and their effects on climate deserve to be in the opening section.

mcguire wrote at 2020-10-30 20:19:42:

Perhaps. I can't really speak to his career before he came to UT Austin in 1984; by then he was already very famous (Bjarne Stroustrup at Texas A&M/Columbia or Brian Kernighan at Princeton famous). If I remember correctly, his chair was created for him as a full professor and without many of the usual duties of a professor. (One of his stipulations was that he not be made Chairman of the department.)

According to the article, as a young researcher, he was

* A "programmer" (something like a research assistant, perhaps?) at the Mathematisch Centrum (Mathematical Centre) in Amsterdam.

* Professor in the mathematics department of the Eindhoven University of Technology from 1962-1973. (As I understand, the "external funding" thing is less of a problem in mathematics. Also, I have no idea how tenure works there.)

* Finally, research fellow at the Burroughs Corporation.

It's true, he didn't have many direct PhD students (two?!), but his collaborations were many. (I don't know if the Tuesday Morning Group at UT is still active,[1] but it was rather well known and its members produced a great deal of what is now "theoretical computer science".)

If you're trying to compare Dijkstra to J. Random Young Researcher, J's gonna have a hard time any way you look at it.

[1] UT no longer requires automata theory or formal logic for CS undergraduates. They're dead to me.

Beldin wrote at 2020-10-30 22:48:35:

> Professor in the mathematics department of the Eindhoven University of Technology from 1962-1973. (As I understand, the "external funding" thing is less of a problem in mathematics. Also, I have no idea how tenure works there.)

Back then, I think full professors were appointed by the Queen. (Certainly so for Beatrix, but he became a professor before her inauguration.)

Being appointed by the Queen made it really difficult to fire you - one of the reasons (I was told) they stopped this practice.

Moreover, back then research funding worked very differently from how it works now in .nl. Actually, I only have some idea of how funding worked from the mid-90s on... but I've read that prior to the early 80s, in .nl, most PhD students were part-time and not employed by the university.

Not sure how accurate that is. Irrespective of its accuracy, it's clear that academic life has changed drastically since then. (In .nl, at least.)

User23 wrote at 2020-10-31 04:32:57:

I believe his degree was "Mathematical Engineer." He remarks in at least one of his EWDs that Americans thought that was a very funny title.

microtherion wrote at 2020-10-31 02:10:24:

> Being appointed by the Queen made it really difficult to fire you

Monarchies today being at that awkward stage where they wield enough protection to prevent somebody from being fired but no longer enough power to get them beheaded…

bch wrote at 2020-10-30 22:48:21:

> I don't know if the Tuesday Morning Group at UT is still active,[...] but it was rather well known and its members produced a great deal of what is now "theoretical computer science".)

*Tuesday Afternoon Club [0].

[0]

https://www.cwi.nl/about/history/e-w-dijkstra-brilliant-colo...

marcosdumay wrote at 2020-10-30 17:13:28:

> It's not an exaggeration to say that a young researcher with these characteristics today would likely get fired from any high-ranked American research university before even coming up for tenure review.

He wouldn't even get into a university to have the chance to be fired. And that's not specific to the US, it applies to nearly all developed or developing countries.

That person wouldn't be able to get much past a PhD, even as a student.

gautamdivgi wrote at 2020-10-30 18:56:16:

You’re right. The rite of passage for most US computer science PhD programs is publication in the “right” journals and conferences. If you don’t publish, you will rarely get past your thesis proposal. I think there are very few, very select professors in the US who prohibit publishing prior to graduating. But their count is likely in the single digits.

Also, if by some miracle you manage to graduate with minimal or no publications, you will find all doors to research funding shut. This means that you are basically unemployable as a tenure-track professor (universities prefer their newly minted assistant professors to come with funding).

mcguire wrote at 2020-10-30 20:34:01:

Yes, it's true. EWD published almost nothing.

http://www.netlib.org/bibnet/authors/d/dijkstra-edsger-w.htm...

(The ones marked "circulated privately" are the EWD series.)

fiddlerwoaroof wrote at 2020-10-31 02:23:16:

The best teachers I’ve had published little or nothing: they were teachers first, not researchers who also taught. It’s a shame that modern colleges and universities don’t really make this possible anymore.

hinkley wrote at 2020-10-30 17:15:59:

People used to say of my alma mater that the teaching was decent but the facilities were amazing.

There was a feel on campus that the people who were really going to 'make it' were only putting in B- or C+ work and spending the rest of their time working on side projects. There were very few days in a given year when more than half of the computer labs were full at the same time, and with Unix boxes you can always remote in to the flavor of machine you need.

You don't need much research budget if you already have access to time on equipment. I'd be curious to know if Dijkstra had access to a similar embarrassment of riches.

mschaef wrote at 2020-10-30 18:44:12:

> I'd be curious to know if Dijkstra had access to a similar embarrassment of riches.

Dijkstra spent a good chunk of the latter part of his career at the University of Texas in Austin. There was plenty of access to facilities there, but that wasn't really what his research was about. He wrote longhand (scans here:

https://www.cs.utexas.edu/users/EWD/

) and didn't really use much technology directly in his work.

This makes sense, because after all, Dijkstra is responsible for this famous quote: "Computer science is no more about computers than astronomy is about telescopes".

https://www.quora.com/What-did-Dijkstra-mean-when-he-said-Co...

mcguire wrote at 2020-10-30 20:26:11:

Famously, Dijkstra did not have a computer for most of his time there. He either typed his work on a manual typewriter (early in his career) or wrote it longhand in his elegant Dijkstra font (

https://news.ycombinator.com/item?id=7796919

). His secretary would print out his emails for him to read and respond to.

Eventually, he got a Macintosh (IIRC, Cynthia Matuszek (

https://www.csee.umbc.edu/~cmat/

) was the one who set it up for him. (Hi!))

datalus wrote at 2020-10-30 19:14:58:

Hamilton Richards (maintainer of the EWDs on the site at the time) was my analysis-of-programs professor at UT. I arrived there in 2003 as a freshman, and the reverence for and impact of Dijkstra were incredibly palpable.

I remember some of my classmates were turned off by how theoretical some of the core courses were. A lot of people saw it as less practical. It was also the height of Java's OO takeover of CS programs, so it was definitely a mix of classes, although the theory courses tended to either care less about language choice or focus on Haskell.

AnimalMuppet wrote at 2020-10-30 19:31:32:

This is the difference between computer science and software engineering.

And the vast majority of people who graduate with a CS degree are going to work as software engineers, not as computer scientists. They are therefore being mis-educated to the degree that their CS program agrees with Dijkstra's philosophy. (Which doesn't mean that Dijkstra is wrong about CS. It just means that a CS degree shouldn't be the passport to a career in software engineering, or even a job as a grunt programmer.)

opportune wrote at 2020-10-31 01:53:18:

Having a good understanding of theoretical CS and its fundamentals makes programming a relatively trivial activity. I don't think a CS degree miseducates you at all, as long as you actually internalize the CS you learn.

mcguire wrote at 2020-10-30 20:29:37:

Yes, this is why computer science students from schools like Stanford, Carnegie Mellon, MIT (well, it's an engineering school, anyway), and UT Austin (EWD's stomping grounds) are almost completely unemployable in the software-related industries. All of the employers learned pretty quickly that they were all grossly incompetent.

AnimalMuppet wrote at 2020-10-30 20:43:44:

First, even UT Austin doesn't run their CS program as EWD would have wanted. See his comments when they switched to Java.

Second, I suspect (but cannot prove) that this problem is why computer science grads need a couple of years of close supervision on the job before you can trust them with anything.

dkarl wrote at 2020-10-30 21:30:27:

_I suspect (but cannot prove) that this problem is why computer science grads need a couple of years of close supervision_

There's more to it than that, a lot of which has more to do with learning how companies are structured and how people communicate in the corporate world. Among other things, college students have been trained to tackle any assignment with the assumption that the information they have been given is accurate and comprehensive, and the professor would prefer to hear nothing from them until the assignment is turned in as complete as possible right before the deadline.

If you want to educate people to be immediately productive in a corporate environment, that's not what college is for anyway.

Beldin wrote at 2020-10-30 22:56:12:

Interestingly, Eindhoven University of Technology did use (more-or-less) Dijkstra's approach throughout the 90s. Sure, there were some odd bits, but the core of the curriculum was Dijkstra style.

Not sure what they're doing today.

Jtsummers wrote at 2020-10-30 20:53:12:

They need supervision because they haven't been apprentices gaining practical experience yet. College (especially the sciences) is not about apprenticeships. If you _just_ want "grunt programmer[s]", we need proper apprenticeships and trade schools. Like electronic technicians and plumbers go through.

mcguire wrote at 2020-10-30 21:29:03:

Fortunately, you can hire newly minted software engineers and they require no supervision whatsoever to solve your difficult problems.

AnimalMuppet wrote at 2020-10-30 22:19:45:

Nice strawman. Oh, wait, no it isn't. It's a really cheap strawman. But in the end, it's just a strawman.

But it _is_ my position that a graduate of a proper software engineering degree would take _less_ initial supervision than a graduate of a CS degree.

Why? Because they would have already seen ambiguous requirements, and would have an idea of what to do. They would have already seen a multi-hundred-thousand-line code base, and would be more likely to have some idea how to navigate in one. They would have some idea of what production-level code looks like, and how to go about achieving that level of completeness. And so on.

908B64B197 wrote at 2020-10-30 23:53:24:

What's more confusing is that Software Engineering, Computer Engineering and Computer science overlap quite a bit, but are distinct fields.

Furthermore, there really are two kinds of CS departments: the ones that were spun out of a Math department and those that emerged out of an Electrical Engineering one.

Gibbon1 wrote at 2020-10-30 20:10:54:

Dijkstra is totally of the European tradition where if you don't belong to the guild it's flat out illegal for you to work in the field.

HarryHirsch wrote at 2020-10-30 23:38:18:

The European tradition is that if you are not admitted to the guild you must not work in the field _without supervision_. Nothing wrong with that, tbh. You get something out of it, too, your sponsor is invested in your success.

mhewett wrote at 2020-10-30 18:04:37:

In the article it says that Dijkstra didn't use a computer, TV, or cell phone.

User23 wrote at 2020-10-31 04:36:54:

Socrates didn't use writing because he thought it was a mental crutch that would atrophy his memory and mind.

ampdepolymerase wrote at 2020-10-30 18:17:28:

Which school did you go to?

enumjorge wrote at 2020-10-30 17:23:40:

That reminds me of the anecdote where one of Google’s hiring committees realized that they wouldn’t have gotten hired had they been judged by their own standards. It makes you wonder how much talent gets filtered out when we apply very stringent metrics to grade candidates.

texuf wrote at 2020-10-30 19:01:48:

I can't find it but I saw a presentation (I think from someone at Etsy) where the presenter described being on an interview panel that failed all of the candidates in a batch, only to find out that they had been given their own packets to review.

akshat_h wrote at 2020-10-31 02:03:51:

If anyone is curious, it might be this talk.

Moishe Lettvin - What I Learned Doing 250 Interviews at Google. It is on the Etsy Eng channel.

YouTube URL with the relevant timestamp:

https://youtu.be/r8RxkpUvxK0?t=532

pkaye wrote at 2020-10-30 18:12:00:

Working at a smaller company, I knew a few people in the hiring committee that had unreasonable standards. We used to tease them that they wouldn't pass their own standards when they were hired. Fortunately we didn't need unanimous approval of candidates.

elvis70 wrote at 2020-10-30 17:42:57:

"The Humble Programmer" is an exercise in self-mockery, as Dijkstra has always been very conscious of the value of his work.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340...

_Nat_ wrote at 2020-10-30 18:42:22:

Good read!

That said, I don't know about self-mockery. A good line from the linked lecture transcript:

> The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.

He seems to be addressing the issue of inflated egos influencing people to be "_real programmers_". For example, do you use a simple GUI interface? -- HA! I can type an arcane string into a command line with my TOES! Clearly, I art teh superior to thou. Which is stupid, because a proper GUI that shows the full scope of the API is awesome, while memorizing arcane strings is a silly practice. But, some folks let feelings of intellectual superiority cloud their judgement and beliefs, which Dijkstra was speaking out against.

He repeats this theme several times, with the overall lecture being a tad repetitive and redundant as it states the same thing over and over in a repetitive series of repetitions, but for a specific example:

> Finally, although the subject is not a pleasant one, I must mention PL/1, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/1 must be like flying a plane with 7000 buttons, switches and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language —our basic tool, mind you!— already escapes our intellectual control. And if I have to describe the influence PL/1 can have on its users, the closest metaphor that comes to my mind is that of a drug.

In short, Dijkstra strongly advocates intellectual minimalism. The "_humility_" he advocates is fortitude against ego-driven peacockery in decision-making; taking the simplest possible approach, that's the easiest to do, even when doing so doesn't show off one's intellect.

User23 wrote at 2020-10-31 04:40:01:

Dijkstra strongly advocates not making a mess of things. He then elaborates this means the primary challenge the programmer faces is keeping his programs intellectually manageable, which among other things means not burying himself in needless complexity.

galaxyLogic wrote at 2020-10-30 19:14:26:

That is good stuff (well he was Dijkstra). I think we've all seen such hubris play out. When you want to prove that you are intellectually superior you will demonstrate that you can master the most difficult tools (say category theory?) without understanding that your real goal should be to find simple solutions to difficult problems.

A simple solution does not look like much because it is so simple, but it can take a genius to come up with it. Whereas complicated solutions are easier to come by and do demonstrate your intellectual skills.

User23 wrote at 2020-10-31 04:41:36:

Bear in mind though that spending hours upon hours to find a simple solution to a hard problem will tank your professional career. Why? Because your manager will look at your simple solution that you took such pains to derive and then ask why it took you so long to produce such a small amount of such obvious code.

rramadass wrote at 2020-10-30 18:28:19:

>"The Humble Programmer" is an exercise in self-mockery

Definitely not! It is a philosophical paper.

random3 wrote at 2020-10-30 19:14:53:

One of my past career pet peeves was not being able to work on the same team as some people I was really excited to work with daily. I almost joined the research group, passed the interviews, and got accepted, but eventually everything stalled because some "credentials" weren't there.

nurettin wrote at 2020-10-30 17:12:09:

It would be very funny to watch foremost experts in relatively new fields being denied because they don't have PhD students. I somehow doubt this is the case, though.

jes5199 wrote at 2020-10-30 19:11:41:

his papers are quite good though, and he did a lot of writing, just not for journals

todd8 wrote at 2020-10-30 19:34:30:

I read Dijkstra's _goto considered harmful_ in college, circa 1970. I was a young programmer at the time, and it made me think and reconsider how and why big programs were often hard to understand.

Years later, in grad school, my research was on program verification. Dijkstra was a big influence on the field. I attended a lecture at UT Austin that he gave before moving to the US.

Dijkstra's approach to proving the correctness of programs is hard to apply in practice. Like a geometry proof, there are ways in which a proof of correctness can have mistakes that aren't obvious. Here's an example: prove that a sorting algorithm works. If the specification is only that the output must be in sorted order, a proven program can still fail to sort correctly. The specification has an error: the output must not only be in sorted order, it must also contain every input element somewhere. But even this isn't good enough: a sort proven to meet this specification might leave out some duplicate values. Requiring that the output be the same length as the input still isn't good enough, since multiple copies of one element might hide missing duplicate input values for another element. The specification must require that the output be a sorted _permutation_ of the input.
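
To make that concrete, here is a rough Python sketch (the function names are just illustrative) of how each weakened specification admits a "sort" that is obviously wrong:

    import collections

    def fake_sort_v1(xs):
        # Satisfies "output is in sorted order" -- by ignoring the input entirely.
        return []

    def fake_sort_v2(xs):
        # Satisfies "sorted order" and "same length as the input" -- but it
        # duplicates one element and drops others, so it is not a permutation.
        return [min(xs)] * len(xs) if xs else []

    def is_permutation(xs, ys):
        # The missing piece of the specification: same multiset of elements.
        return collections.Counter(xs) == collections.Counter(ys)

    xs = [3, 1, 2, 1]
    print(fake_sort_v1(xs))                      # [] -- sorted, but useless
    print(fake_sort_v2(xs))                      # [1, 1, 1, 1] -- sorted, right length, still wrong
    print(is_permutation(fake_sort_v2(xs), xs))  # False -- the stronger spec catches it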

Sorting is a well understood problem, but real verification must deal with the strange mathematics implemented by computers. The mathematical theories that we are accustomed to from school assume infinite precision and no overflow; real-life hardware doesn't act this way.

All of this makes program verification hard, and my hope is that AI programming assistants will someday aid in the construction of meaningful specifications and work with the programmer to verify the correctness of programs or determine their safe operating boundaries. Progress has been slow, and the tools are still impractical for use by most programmers on most programs.

userbinator wrote at 2020-10-31 04:38:54:

_AI programming assistants_

Given what I've seen of AI today (works reasonably well at the best of times, but is sensitive to unusual cases and can produce totally nonsensical results otherwise; and nearly impossible to actually debug), I remain highly skeptical.

amw-zero wrote at 2020-10-31 00:47:18:

This is such a great and actually important comment. I’ve been obsessed with program correctness / verification for a while now. It just seems like the holy grail that we can’t fully attain yet. But you keep bumping into certain walls, and the fact that correctness can only be relative to a given specification is a profoundly huge wall.

There’s no checking to see if your specification is under-specified.

Jtsummers wrote at 2020-10-31 00:55:30:

https://news.ycombinator.com/item?id=24917644

This submission from yesterday may interest you.

mcguire wrote at 2020-10-30 20:38:43:

Take a look at Frama-C or Ada Spark sometime; they apply the predicate transformer semantics style to code and do pretty well. But yeah, incomplete specifications are always fun.
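
For a flavor of the predicate-transformer style, here is a toy Python sketch using sympy (my own illustration; this is not how Frama-C or SPARK work internally): the weakest precondition of an assignment is just the postcondition with the assigned expression substituted in.

    import sympy as sp

    x, y = sp.symbols("x y")

    def wp_assign(var, expr, post):
        # Dijkstra's rule for assignment: wp(var := expr, Q) = Q[var := expr]
        return post.subs(var, expr)

    # Program: x := x + 1 ; y := 2 * x   with postcondition  y > 10
    post = sp.Gt(y, 10)
    q1 = wp_assign(y, 2 * x, post)   # wp of the second assignment: 2*x > 10
    q0 = wp_assign(x, x + 1, q1)     # wp of the whole program: 2*(x + 1) > 10, i.e. x > 4
    print(q1, "|", q0)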

mysterydip wrote at 2020-10-30 19:12:09:

I wish seeing the name Dijkstra didn't immediately remind me of his oft-quoted BASIC comment:

"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

The problem is it's quoted regardless of how much the BASIC dialect being discussed resembles the one he was talking about [1]. There are lots of good BASIC languages out there running all kinds of useful software that are dismissed out of hand by a subset of developers simply from that comment.

[1]

https://programmingisterrible.com/post/40132515169/dijkstra-...

microtherion wrote at 2020-10-30 22:28:34:

In my opinion, The Proper Way To Teach Computer Science was the one area where Dijkstra made his most detrimental contribution, by promoting a particular school of instruction based on strong opinions with weak to no empirical underpinnings.

In _Coders at Work_, Knuth offers a counter-argument that I find compelling:

> Seibel: It seems a lot of the people I've talked to had direct access to a machine when they were starting out. Yet Dijkstra has a paper I'm sure you're familiar with, where he basically says we shouldn't let computer-science students touch a machine for the first few years of their training; they should spend all their time manipulating symbols.

> Knuth: But that's not the way he learned either. He said a lot of really great things and inspirational things, but he's not always right. Neither am I, but my take on it is this: Take a scientist in any field. The scientist gets older and says, "Oh, yes, some of the things that I've been doing have a really great payoff and other things, I'm not using anymore. I'm not going to have my students waste time on the stuff that doesn't make giant steps. I'm not going to talk about low-level stuff at all. These theoretical concepts are really so powerful-that's the whole story. Forget about how I got to this point." I think that's a fundamental error made by scientists in every field. They don't realize that when you're learning something you've got to see something at all levels. You've got to see the floor before you build the ceiling. That all goes into the brain and gets shoved down to the point where the older people forget that they needed it.

> Peter Seibel. _Coders at Work_

It is clear from reading the EWDs that Dijkstra considered actually doing something with computers detrimental to the practice of Computer Science as he conceived it, but the kind of Computer Science he practiced past 1975 or so was not particularly useful to Computer Engineering, because it refused to engage with it, and, as the article points out, it was not even particularly good math, because it refused to engage properly with that field as well.

taeric wrote at 2020-10-30 22:44:49:

I love that quote. Reminds me of a similar one by Taken, I think. Basically that many seem to think they can get the end product without growing it.

908B64B197 wrote at 2020-10-30 23:47:23:

Knuth also starts his book by defining his own assembly language and computer architecture.

Jtsummers wrote at 2020-10-30 23:52:16:

Because at the time he wrote it there wasn't a good standard or sufficiently common one that had the properties he wanted to use for exploring algorithms. See MMIX for something closer to a modern architecture than the original MIX.

908B64B197 wrote at 2020-10-31 00:11:31:

I meant that he clearly had hardware in mind when writing his algorithms.

analog31 wrote at 2020-10-31 04:27:13:

I wear it as a badge of honor, since I first learned programming in BASIC -- the good old line numbered variety -- on a mainframe. Then again I've never claimed that I'm _not_ mentally mutilated.

Jtsummers wrote at 2020-10-30 19:41:42:

And there are a lot of people that dismiss his comment and the rest of his writings by ignoring what BASIC meant at the time of writing. Visual BASIC certainly wouldn't have been his favorite language, but it had the features that BASIC at the time lacked and that he disliked it for (when you start digging into it: lack of recursion, non-reentrant functions, massive use of global variables, preference for goto over structured control flow).

nitrogen wrote at 2020-10-30 19:47:28:

My first language was actually GWBASIC on an ancient hand-me-down computer, followed by QBASIC. When I saw that quote as a teenager I was terrified that I had killed my career before it had even started.

I've since become quite proficient in C, Ruby, Java, Kotlin, HTML, JS, shell scripting, and many many other languages and environments. So IMO there is very little modern merit to the idea that exposure to BASIC (or any other mocked language) is "mentally mutilat[ing]".

root wrote at 2020-10-30 20:11:45:

BASIC only ruins you if you have no intellectual curiosity. BASIC was my first language and with each new language I’ve learned since I’ve always been excited at the new possibilities and paradigms it opened up.

microtherion wrote at 2020-10-30 22:35:43:

BASIC got a whole generation of people interested in programming.

I've yet to meet somebody who got interested in computers through correctness proofs.

nickdrozd wrote at 2020-10-30 15:56:37:

ALGOL 60 included a series of novel features, such as recursion, which was supported in a much more complex manner than logicians had ever envisaged...Recursive procedures in ALGOL 60 are much more complex than in Lisp.

Can anyone explain how this "much more complex" recursion works?

todd8 wrote at 2020-10-30 18:30:20:

Early LISP implementations used dynamic scope. This means that non-local references in a function search for the variable's binding in the call chain: if A calls B then when B uses nonlocal variable x it refers to the x in A, not necessarily the x that belongs in the lexical parent of B as seen in the source code.

(Unlike C, Algol/Pascal/LISP can all define functions inside of other functions. These nested function definitions introduce lexical non-local scopes.)

Dynamic scoping is easier to implement than lexical scoping. With lexical scoping, the run-time must incorporate some mechanism to find the stack frame of a lexical parent (as seen in the program's source code) to access a nonlocal variable's value. In dynamic scoping the run-time can just follow the stack frame order (starting at the top) to find a nonlocal variable's binding.

Early LISP used dynamic scope. Scheme had lexical scope from the beginning. Common Lisp had both mechanisms and programs can use both.
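
A rough Python sketch of the difference (Python is lexically scoped; the dynamic lookup below is a hand-rolled emulation, not a real language feature):

    # Lexical scoping: the free variable x in add() is resolved in make_adder's
    # frame, fixed by where the function appears in the source.
    def make_adder(x):
        def add(y):
            return x + y
        return add

    add5 = make_adder(5)
    x = 100
    print(add5(1))   # 6; under dynamic scoping this would have been 101

    # Emulating dynamic scoping with an explicit stack of bindings that
    # mirrors the call chain, searched newest frame first.
    _dynamic = []

    def lookup(name):
        for frame in reversed(_dynamic):
            if name in frame:
                return frame[name]
        raise NameError(name)

    def callee():
        return lookup("x") + 1   # finds whichever caller bound x most recently

    def caller():
        _dynamic.append({"x": 41})
        try:
            return callee()
        finally:
            _dynamic.pop()

    print(caller())   # 42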

The implementation of recursion also has to address proper access to nonlocal variables in functions that are values themselves and are passed as arguments to other functions. Do these functions access the nonlocal variables based on their call location or their lexical location in the source code? These problems with functions as arguments are known as the upward and downward funarg problems. Knuth wrote the _man-boy_ program to test Algol 60 implementations in these areas.

Another complication for implementors of Algol 60 was parameter passing by-name, which is neither by-value nor by-reference. I don't recall any other well known language that has subsequently made that non-intuitive (to me) choice. I believe that Algol 68 abandoned call by-name, but it's been 25 years since I looked at the Algol 68 standard.

lkuty wrote at 2020-10-30 19:35:22:

Which reminds me of The Funarg Problem Explained by Joseph Weizenbaum

https://www.computerhistory.org/collections/catalog/10272070...

. I used its "diagrams" to understand the closure mechanism (definition vs invocation environments) during the course of a Scheme class in 1996.

samatman wrote at 2020-10-30 22:08:27:

A link to the actual paper, for the curious

http://www.softwarepreservation.org/projects/LISP/MIT/Weizen...

geophile wrote at 2020-10-30 19:22:55:

I may be fuzzy on this, it's been a long time. But isn't by-name a special case of passing a function? (And then referencing the parameter in the called procedure invokes the function.)

todd8 wrote at 2020-10-30 22:09:22:

Yes, that's essentially how call-by-name is implemented, and it's what "Jensen's Device" exploits. However, note that an expression passed to an Algol 60 procedure by name is re-evaluated, in the caller's environment, each time the corresponding formal parameter appears in the called procedure.
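
A rough Python sketch of the idea, with zero-argument lambdas standing in for the by-name thunks (the names here are illustrative only):

    class Ref:
        # A tiny mutable cell standing in for the by-name loop variable i.
        def __init__(self, v=0):
            self.v = v

    def name_sum(i, lo, hi, term):
        # Jensen's Device: 'term' is an unevaluated expression (a thunk) that
        # is re-evaluated, in the caller's environment, on every use here.
        total = 0.0
        for k in range(lo, hi + 1):
            i.v = k
            total += term()
        return total

    i = Ref()
    # Pass the *expression* 1/i, not its value: sums 1/1 + 1/2 + ... + 1/10.
    print(name_sum(i, 1, 10, lambda: 1 / i.v))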

mcguire wrote at 2020-10-30 20:43:46:

Haskell's call-by-need is a close relative of by-name (the resulting thunk is only evaluated once).
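
A minimal sketch of that difference in the same thunk style (illustrative only): call-by-need just caches the thunk's first result.

    def by_need(thunk):
        # call-by-need = call-by-name + memoize the first evaluation
        cache = []
        def forced():
            if not cache:
                cache.append(thunk())
            return cache[0]
        return forced

    calls = 0
    def expensive():
        global calls
        calls += 1
        return 42

    t = by_need(expensive)
    print(t(), t(), calls)   # 42 42 1 -- the expression was evaluated only once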

Someone wrote at 2020-10-30 16:41:04:

I think this refers to making the implementation efficient.

They wanted performance on par with existing languages that didn’t allow recursion, and could simply assign fixed addresses to all function arguments and local variables.

Lisp didn’t have that. It allocated environments left and right. That meant that a function call allocated memory and potentially could trigger garbage collection.

The major improvement was to use a runtime stack. That was fairly novel. Earlier compilers used a stack at compile time to make it possible for those fixed addresses for arguments and locals to be shared between functions that didn't call each other, directly or indirectly, but that stack didn't exist at runtime.

And yes, the idea of a stack seems trivial nowadays, but it wasn’t at the time. Quite a few CPUs didn’t even have “jump to subroutine” or “return from subroutine” instructions.
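
A rough Python sketch (the variable names are mine) of why fixed per-function addresses break as soon as a function calls itself, and how a run-time stack fixes it:

    # "Fixed address" scheme: one shared slot per local, as in pre-stack compilers.
    slots = {"n": 0, "result": 0}

    def fact_fixed(n):
        slots["n"] = n
        if slots["n"] == 0:
            slots["result"] = 1
        else:
            fact_fixed(slots["n"] - 1)                      # clobbers slots["n"]!
            slots["result"] = slots["n"] * slots["result"]  # uses the wrong n
        return slots["result"]

    # Run-time stack: each activation gets its own copy of n.
    def fact_stack(n):
        stack = []
        while n > 0:
            stack.append(n)   # push one "activation record" per call
            n -= 1
        result = 1
        while stack:
            result *= stack.pop()
        return result

    print(fact_fixed(3))   # 0 -- wrong, every call shares the same slots
    print(fact_stack(3))   # 6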

Algol’s “call by name” feature also may have complicated this, but I don’t see how that’s more difficult than passing lambdas as arguments.

timonoko wrote at 2020-10-30 17:39:23:

In 1972, the Nova 1200's jump-to-subroutine wrote the program counter contents above the subroutine address, and thus subroutine return was a short jump via that address. It was possible at compile time to check if this mechanism was enough. Otherwise the subroutine call was three words, because it was possible to do one-word pushes and pops in that advanced architecture.

I made a Lisp interpreter and toyed with a compiler.

wilsonthewhale wrote at 2020-10-30 17:21:52:

Do you have any references or papers that developed the idea of having a stack at runtime? I love reading into the fundamentals of modern computing.

mjn wrote at 2020-10-30 17:26:28:

Dijkstra's paper "Recursive Programming" (1960) is where he lays out his proposal for implementing recursive Algol-60 subroutines with a stack:

https://www.ics.uci.edu/~jajones/INF102-S18/readings/07_dijk...

throwaway_pdp09 wrote at 2020-10-30 17:32:53:

I'm not sure that's fair. If you have objects that escape downwards (so X calls Y, Y creates some item i, and i gets returned to X) then you can't have a stack. i may be overwritten at the next call of anything else, as the stack grows over it.

I guess they do escape analysis and where items are provably local, stack allocate them. If not, plonk them on the heap.

A call frame is an object like any other, so they go on the heap. This allows you to create closures.

Someone wrote at 2020-10-30 19:46:12:

I got the stack reference from

https://eprints.illc.uva.nl/383/1/PP-2010-04.text.pdf

, which says:

_“Dijkstra’s quest to generalize led him to use the well-known concept of a stack (i.e., a continuous portion of computer memory) as a run-time object, rather than a mere compile-time object as was the case in Samelson and Bauer’s ALCOR compiler.”_

Digging deeper, I found

https://www.illc.uva.nl/Research/Publications/Reports/MoL-20...

(worth reading, IMHO) which indicates Samelson and Bauer used a stack to parse expressions and generate machine code for them, not for assigning memory locations.

So, I agree with you it can't have been as simple as I implied.

I doubt they would have plonked _anything_ on the heap, though, because languages of the time didn't have a heap (FORTRAN certainly didn't), and the language Samelson and Bauer wrote didn't even allow functions to call other functions.

throwaway_pdp09 wrote at 2020-10-30 20:46:00:

I was unclear, I'm afraid; I was elaborating on your comment about Lisp allocating madly, so Lisp and not programs in general.

Moving to a stack costs, in that you can't do some things, per my original comment. You gain efficiency for what you can do though.

> used a stack to parse expressions and generate machine code for them

Prob this

https://html.duckduckgo.com/html?q=dijkstra%20shunting%20yar...

which I thought came from Dijkstra but your digging suggests not, didn't know! (edit: it did

https://en.wikipedia.org/wiki/Shunting-yard_algorithm

). Infix -> RPN, which matches a stack very well (you can evaluate as an interpreter, or emit code as a compiler; not very efficient, but it works).
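
For reference, a minimal Python sketch of the shunting-yard idea (handling only left-associative binary operators and parentheses):

    def to_rpn(tokens):
        prec = {"+": 1, "-": 1, "*": 2, "/": 2}
        out, ops = [], []
        for t in tokens:
            if t in prec:
                # Pop operators of equal or higher precedence before pushing t.
                while ops and ops[-1] != "(" and prec[ops[-1]] >= prec[t]:
                    out.append(ops.pop())
                ops.append(t)
            elif t == "(":
                ops.append(t)
            elif t == ")":
                while ops[-1] != "(":
                    out.append(ops.pop())
                ops.pop()          # discard the "("
            else:
                out.append(t)      # operand goes straight to the output
        while ops:
            out.append(ops.pop())
        return out

    print(to_rpn("3 + 4 * ( 2 - 1 )".split()))  # ['3', '4', '2', '1', '-', '*', '+']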

Edit: much credit for digging and reading before posting, wish I could give you several upvotes for that.

Joker_vD wrote at 2020-10-30 17:46:14:

See also:

https://en.wikipedia.org/wiki/Man_or_boy_test

niea_11 wrote at 2020-10-30 16:29:32:

I did a quick search, and according to this article [1], Dijkstra's implementation was not restricted compared to other implementations at the time:

_It is Dijkstra's generalizing style which stands out when a comparison is made with the work of his contemporaries. Rutishauser, for example, had limited the order of his procedure activations in his run-time system [59], while Dijkstra had no such restriction. Floyd's work [60, p.42-43] relied on three specialized “yo-yo” lists (i.e., stacks) instead of one general stack. Likewise, the MAD translator [61, p.28] used several specialized tables. Also, and most importantly, the ALCOR compilers were severely restricted in that they could not handle several ALGOL60 language constructs, including the recursive procedure (cf. Section ). Finally, though the Irons-Feurzeig system [54] did implement recursive-procedure activations and by means of one general run-time stack, it was, in the interest of efficiency, sophisticated in its run-time capabilities and, hence, unlike the run-time system of Dijkstra and Zonneveld ._

[1] :

https://www.dijkstrascry.com/node/4

azhenley wrote at 2020-10-30 16:27:36:

There is an interesting write-up about Algol 60's recursion [0] that was discussed here before [1].

[0]

https://vanemden.wordpress.com/2014/06/18/how-recursion-got-...

[1]

https://news.ycombinator.com/item?id=10131664

throwaway_pdp09 wrote at 2020-10-30 17:29:01:

The answers to this question are extremely interesting and show how much work has been done to get where we are today, a position I took for granted and never really understood until now (even in name: 'yo-yo list', for heaven's sake!)

It's a bit like parsing: these days we either reach for a technique (e.g. recursive descent) or an implementation of a tool (such as flex/bison) and never think there was a time before these when people had to struggle conceptually with these things.

thelazydogsback wrote at 2020-10-30 17:59:08:

One fun item I'm lucky enough to own is a core memory stack from Dijkstra's Electrologica computer, ca.1958 or so -- _very_ low density obviously. This machine is where the first working Algol compiler was written.

enchiridion wrote at 2020-10-30 18:16:39:

How do you come across something like that?

thelazydogsback wrote at 2020-10-31 00:04:03:

ebay baby! Went on a core memory kick a few years ago and just happened to catch an auction at the right time.

Jtsummers wrote at 2020-10-30 17:22:43:

If your opinion of Dijkstra is mostly negative and the result of reading others' opinions or selective readings of his EWDs (some of which are incredibly negative in tone), it would behoove you to read his EWDs. Just go through the archive and grab random ones based on title, then later do a more systematic reading of them.

dryrunes wrote at 2020-10-30 17:26:54:

What is an EWD?

ninly wrote at 2020-10-30 17:35:30:

He typed (edit: and handwrote) hundreds of these open letters, technical notes, reports, etc. Here's the archive at UT:

https://www.cs.utexas.edu/users/EWD/

Jtsummers wrote at 2020-10-30 19:29:37:

https://www.cs.utexas.edu/users/EWD/welcome.html

That's the archive, talked about in the linked article. Letters (mostly) that he wrote and sent out to people. Today we'd consider their content and length equivalent to a technical blog. Some were much longer, though.

FireBy2024 wrote at 2020-10-30 17:33:08:

> IN 1959, DIJKSTRA began writing a series of private reports. Consecutively numbered and with his initials as a prefix, they became known as EWDs.

w0mbat wrote at 2020-10-30 18:58:06:

My only beef against Dijkstra is that I applied for a fairly crappy job I was eminently qualified for in terms of experience and someone told me to code Dijkstra's shunting yard algorithm on a whiteboard (well not in so many words, but that was the solution). It's a good algorithm but way beyond the scope of what you should ask in a whiteboard interview. I didn't get the job.

_Nat_ wrote at 2020-10-30 19:35:25:

I get the sense that Dijkstra would've agreed with you and perhaps had harsh words for that interview process.

Sometimes these interview stories sound like selecting a professor of mathematics by picking the one who can calculate the square-root-of-2 to 12 digits in their head the fastest.

qazpot wrote at 2020-10-30 17:18:34:

Arrogance in computer science is measured in nano-dijkstra. - Alan Kay

baxrob wrote at 2020-10-30 18:09:11:

"This quote keeps on showing up out of context. Edsger and I got along quite well." - Alan Kay

https://news.ycombinator.com/item?id=11796926

joe_the_user wrote at 2020-10-30 19:06:41:

Well, the whole comment you link doesn't really deny that Dijkstra could be arrogant and dismissive, it just goes on to say essentially that he was a lovable old coot and that they listened to him only as much as they wanted; "_Dijkstra was much more funny than annoying for anyone who had any sense of self. The two biggest problems in our not-quite-a-field were those who listened to him too carefully and those who didn't listen to him carefully enough._"

ben7799 wrote at 2020-10-30 19:12:42:

I read it more like people back then weren't so sensitive and didn't even take that kind of straight talk as arrogance and dismissiveness.

But then maybe Alan Kay will show up and clarify.

I'm just old enough to remember when corporate culture was like this too. It used to be OK to say when people were wrong or get mad when programming errors lit the building on fire. That started to become NOT OK in the past 15 years. A lot of older engineers really had to work hard to adapt, because it's so frustrating today when people make careless mistakes over and over again and no one can say a word without being accused of not being a team player or something.

People were more likely to admit their own mistakes then, too. In that environment it was better to come clean than to try to hide a mistake, since coming forward can solve the problem faster.

We've overcorrected this stuff. Correcting it helps make the field more inclusive, but at some point, after we're diverse and inclusive, we need to walk some of this behavior back a bit.

galaxyLogic wrote at 2020-10-30 19:25:32:

Part of the reason may be the advent of online communication. It is much easier to offend people online than in person, for several reasons:

1. You do the offending in public.

2. You cannot get and give immediate feedback on how what you're saying affects your counterpart.

3. What you say is carved in stone.

pinewurst wrote at 2020-10-30 18:46:31:

From everything I've read, Dijkstra was usually pleasant face-to-face. It was the later written backstab that one had to be prepared for.

jamez wrote at 2020-10-30 20:29:02:

Good read. For anyone interested in seeing the human side of Dijkstra, I recommend this documentary:

https://www.youtube.com/watch?v=RCCigccBzIU

When I watched it I was both studying Computer Science and learning Dutch, so it was twice as interesting. The version I linked provides English subtitles.

microtherion wrote at 2020-10-30 22:04:20:

This is a really nice article on Dijkstra, both more detailed about his contributions and more balanced about his limitations — many articles about him are too hagiographic to my taste.

rramadass wrote at 2020-10-30 17:26:05:

A well written portrait of a truly great and original "Scientist" in the true sense of the term.

I have always wondered why his approach of constructing a correctness proof along with the implementation never really caught on. I think it was what allowed him to come up with his novel algorithms. Have we lost something of the real essence of logical thinking here? Perhaps the paper mentioned in the article, _Social Processes and Proofs of Theorems and Programs_, which he was vehemently against, will give me the other side of the argument.

ativzzz wrote at 2020-10-30 18:33:00:

Probably because the cost associated with constructing such proofs for the vast majority of modern software use cases (like showing ads on 8 different browsers) is too high (cheaper to just hire QA teams), and most programmers are not simultaneously software engineers, computer scientists, and mathematicians.

rramadass wrote at 2020-10-31 02:37:33:

> cost associated with constructing such proofs ... is too high

People assume this is true because it has been repeated endlessly. I think it is an issue of granularity. For example, both Dijkstra's GCL and Hoare's triples (fine granularity) and Meyer's DbC (coarse granularity) can be expressed using assertions to build preconditions/postconditions/invariants as a "correctness proof" throughout a real-world software system. But their systematic usage is never taught nor enforced; we have to learn it by trial and error.
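
As a rough sketch of that coarse-grained, DbC-style usage with plain Python assertions (the example and names are mine, not Dijkstra's GCL or Meyer's notation):

    def binary_search(xs, target):
        # Precondition: xs must be sorted.
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "precondition: xs sorted"
        lo, hi = 0, len(xs)
        while lo < hi:
            # Loop invariant: if target is in xs, its index lies in [lo, hi).
            assert 0 <= lo <= hi <= len(xs)
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        found = lo < len(xs) and xs[lo] == target
        # Postcondition: we report "found" exactly when target really is in xs.
        assert found == (target in xs)
        return lo if found else -1

    print(binary_search([1, 3, 5, 7], 5))   # 2
    print(binary_search([1, 3, 5, 7], 4))   # -1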

amelius wrote at 2020-10-30 16:33:44:

What are the greatest advances in CS of the last decade?

hinkley wrote at 2020-10-30 17:02:00:

The older I get the harder it is to time-slot things, but I can think of a few things where much of the interesting work happened in the last ten years (similar to how a lot of pivotal work on GC algorithms happened in the late 90's).

Most of the good work on escape analysis started around ~2010

Rust got us rolling on borrow semantics (I have a feeling the '20s will be remembered as the golden age)

I'm sure there's a thing or two in or around LLVM that would rank, but I'm not paying close enough attention.

From a 'renaissance' standpoint:

- LZ-family compression algorithms

- Multi-tenant process isolation (containers)

- transport protocols

- Consensus algorithms

ThenAsNow wrote at 2020-10-30 18:49:56:

Would you say there are real CS advances when it comes to LZ family compression algorithms and containers? Wasn't LZ well known prior to the past 10 years? And is there any kind of fundamental or theoretical advancement in containers, or is this an example of a new(er) type of implementation?

hinkley wrote at 2020-10-30 23:08:29:

I think they're solving a different problem now.

Zip was an ahead-of-time compression algorithm until HTTP 1.1, and so the draw was storage and transmission bandwidth, but the deciding factor was decompression time and space complexity.

Now we've been using it for online compression for so long that bandwidth is a secondary concern. Eventually someone had to take the other constraints seriously. We made LZO and friends but the results weren't that exciting, and I think a lot of people wandered off for another 10 years to work on other things, until bandwidth became a bigger concern for producers than consumers.

For containers, I'd love to talk to some mainframe OS designers about how much of containers and The Cloud are merely democratizing techniques that were proven out on big iron 30 years ago. I suspect the answer is "a lot".

MaxBarraclough wrote at 2020-10-30 17:11:36:

> similar to how a lot of pivotal work on GC algorithms happened in the late 90's

Speaking of which, there's been great progress in low-pause garbage collection over the last few years, although it could be said that Azul was doing it a while back.

hinkley wrote at 2020-10-30 17:20:59:

Some would say the GC advances in the 90's were a renaissance, but I think I'd agree we are in the middle or tail end of a second one.

Many of those 90's GC algorithms complained of a lack of a cheap instruction for write barriers, and some were specifically about amortizing write barriers by using coarser grained ones. Azul's pivot to software only solutions came on the shoulders of Intel's VT instruction set, which allows, among other things, for nesting of virtual memory. The Azul JVM essentially behaves as a VM.

[ETA] and some of the other GCs now leverage the vast open space of 64 bit addressing to alias memory addresses to give references 2 or 3 flavors, and use that to coordinate parallel GC activities.

macintux wrote at 2020-10-30 17:19:40:

Related to consensus: CRDTs

jandrese wrote at 2020-10-30 17:06:42:

Has to be GANs and related ML developments. They're opening up problem spaces where progress had been largely stalled for decades. I don't think they'll be the ultimate solution in this problem space, but I do think they're an important stepping stone.

bobsmooth wrote at 2020-10-31 03:49:19:

Graphics cards got powerful enough and cheap enough to run decades old algorithms in a reasonable amount of time.

jnsie wrote at 2020-10-30 16:40:23:

Editor support for Dark Mode? (I jest)

wyldfire wrote at 2020-10-30 16:45:41:

To me -- a casual novice -- it sounds like a lot of recent CS work results in things like CAP Theorem/consensus, NLP/AI models, AGI, practical tools for verification/formal methods.

sideshowb wrote at 2020-10-30 18:13:38:

I don't think we've made much progress towards AGI, but having reliable approaches to object recognition in images is fairly new, as is putting enough of that sort of thing together to make a self driving car.

Rexxar wrote at 2020-10-30 18:05:45:

We will probably only know it in 10 or 20 years. Very often it takes time to see what is really valuable and what was just a trend.

caleb-allen wrote at 2020-10-30 16:58:53:

Session types may be up there? Or maybe progress on homomorphic encryption. Also, progress in quantum computing.

dimroc wrote at 2020-10-30 18:17:22:

The crown has to go to machine learning and neural network advancements. This builds on the increase in data and processing capacity, but it's still an amazing advancement over a mere 10 years. One may argue that the actual CS has been around for longer but I don't think that's giving enough credit to recent advancements.

1. classification (Andrew Ng @ YouTube):

https://www.wired.com/2012/06/google-x-neural-network/

2. Language Translations (NLP, word2vec):

https://www.technologyreview.com/2020/10/19/1010678/facebook...

3. Transfer learning

4. SnapChat's Face Filters and real time ML for AR:

https://screenrant.com/snapchat-anime-face-filter-lens-effec...

5. Apple iPhone's neural engine and Face ID

gnulinux wrote at 2020-10-30 17:07:25:

Dependent type theory has been very useful for automated theorem proving and PLT static analysis (Coq, Agda, Idris are languages that POC'd this).

wongarsu wrote at 2020-10-30 17:11:12:

Real-time ray tracing is finally here, in large part because of much better denoising technology. Similarly, many types of simulation are becoming vastly cheaper and more capable thanks to the use of machine learning. Image synthesis made huge leaps thanks to the invention of GANs in 2013.

glouwbug wrote at 2020-10-30 16:56:09:

Leet code styled interviews

mcguire wrote at 2020-10-30 21:04:24:

As an aside, I took a History of Logic course from Krzysztof Apt (

https://inference-review.com/author/krzysztof-apt/

) while I was an undergrad and he was briefly at UT Austin. He is an excellent teacher and a very nice man.

(He loaned me a copy of a book on Frege's logic during the course---it had been signed and given to him by the person who let him stay on their floor when he first came to the US.)

olodus wrote at 2020-10-30 17:09:25:

I have wondered what Dijkstra would have said about the fact that most (from my limited view) verification advances nowadays focus on functional programming (dependent types and so forth). I get the feeling he wasn't too fond of functional programming.

Or maybe he was even able to comment on it; I am not sure how far the field had moved while he was still with us. I have searched for it but couldn't find any comment from him on the subject.

Ericson2314 wrote at 2020-10-30 17:20:25:

https://www.cs.utexas.edu/users/EWD/transcriptions/OtherDocs...

Actually, based on the above I think he would have been all for it.

Well, I mean knowing Dijkstra he would have found something to complain about :D, but it wouldn't have been that it wasn't imperative enough!

znpy wrote at 2020-10-30 17:26:43:

«Edsger W. Dijkstra, “Recursive Programming,” Numerische Mathematik 2 (1960): 312–18, doi:10.1007/bf01386232. »

The link leads to a paywall. FFS, do they really have the nerve to ask 42,64€ for a paper from 1960?

v64 wrote at 2020-10-30 20:56:53:

Actual paper is here:

https://sci-hub.st/10.1007/bf01386232

Joker_vD wrote at 2020-10-30 17:59:47:

Why not? The LotR was published in 1954, and won't enter the public domain for another 25 years or so... Copyright is a wonderful thing like that, surviving the death of the creator by another human lifetime.

jakespoon wrote at 2020-10-30 22:05:59:

Backus literally inspired functional programming with his Turing Award lecture, and Dijkstra attacked him for it. Dijkstra attacked the paper that inspired Haskell and everything else that currently exists in FP. He tried to destroy it. He should be a footnote in history.

amw-zero wrote at 2020-10-31 01:00:29:

I wouldn’t say he tried to destroy functional programming. If you read his response he pretty clearly says that he just thinks that FP as a solution to all problems is “overselling” the solution.

And I 100% agree. Thinking that functional programming solves all problems is a fundamental misunderstanding of computation.

The first thing that opened my eyes to the overselling of FP as a panacea was reading Leslie Lamport’s work, specifically Computation and State Machines:

https://lamport.azurewebsites.net/pubs/state-machine.pdf

. Specifically, in Section 3: Programs as State Machines, he demonstrates how the same algorithm for computing factorials can be implemented via mutation or recursion.
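
A small Python sketch of that point (my own paraphrase, not Lamport's TLA+): the same factorial state machine, written once with mutation and once with recursion.

    def fact_mutation(n):
        # State: (n, acc). Each loop iteration is one transition of the machine.
        acc = 1
        while n > 0:
            acc, n = acc * n, n - 1
        return acc

    def fact_recursion(n, acc=1):
        # The identical transitions, with the state threaded through parameters.
        return acc if n == 0 else fact_recursion(n - 1, acc * n)

    assert fact_mutation(5) == fact_recursion(5) == 120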

With TLA+, Lamport also showed that mutation and state can be reasoned about mathematically, contrary to the central claim of functional programming purists.

The point is simple: functional programming doesn’t in any way _inherently_ help with correctly implementing programs. The fact that FP is currently in vogue is simply another example of how humans don’t require any evidence of anything at all before following a belief system blindly.

FP does prevent certain classes of mistakes from happening, i.e. unintentional bugs due to mutation creating implicit state machines. But preventing bugs and aiding in correctness aren’t exactly the same thing.

Jtsummers wrote at 2020-10-30 22:17:28:

Functional programming already existed in a few forms by the 1977 Turing Award lecture. ML had been around since 1973, which defined the ML family (Ocaml, SML, F#) and influenced Haskell and others. The Lisp family of languages had been around since 1958. Some parts of the lecture can be seen as related to languages in the APL family (originated in 1965). These developments influenced the lecture itself.

To gain context into Dijkstra's response, it may be worth reading it [0]. Here's the conclusion of his response:

> _In short, the article is a progress report on a valid research effort but suffers badly from aggressive overselling of its significance, long before convincing results have been reached. This is the more regrettable as it has been published by way of Turing Award Lecture._

Regarding "He tried to destroy it." Dijkstra is known, and often criticized by others, for his hyperbole. Perhaps instead of imitating this heavily criticized aspect of his, you could tone down your rhetoric and actually create an argument rather than try to dismiss someone (and all their work) for one thing you disagree with.

[0]

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/E...

jakespoon wrote at 2020-10-30 22:45:08:

So, looking back from 2020, did Backus oversell it? We aren't there yet but it still looks like the right direction to me.

Jtsummers wrote at 2020-10-30 23:51:04:

I will agree, Backus' ideas have panned out in many ways. Not having read his lecture in quite a while I can't comment on whether or not, in 1977, he was overselling the idea at the time, especially as I'm not familiar with the state of the art in research at the time (having been born 5 years later, and CS concepts not being taught in chronological order).

Regardless, your original comment made two absurd claims:

1. Dijkstra attacked the paper that inspired Haskell and everything else that currently exists in FP. _He tried to destroy it._ [emphasis added]

Which is what my quote of Dijkstra's actual statements was meant to rebut. He did _not_ try to destroy it. He criticized it, and some of his criticism may have been wrong then, and I'd say is wrong now (with greater understanding of the ideas Backus was presenting), but he did _not_ try to destroy it.

2. He should be a footnote in history.

Another absurd position. Because of one response (and if you wanted to you can probably find a number of others) exaggerated beyond reason ("He tried to destroy it."), you want to dismiss some 50 years of work.

jakespoon wrote at 2020-10-31 02:21:58:

I should have been clear with my response that I accept and agree with your criticism of my post.

"He tried to destroy it" is too strong. I will back down from that.

rightbyte wrote at 2020-10-31 00:01:40:

His hopes for functional programming machines seem like overselling his dream of abandoning von Neumann, at least. Arcane Lisp machines were actually made, as a comparison. I don't get the feeling Backus meant that an "FP system" should be an abstraction on top of a von Neumann machine/language.

amw-zero wrote at 2020-10-31 01:02:24:

Based on what? What evidence is there that FP actually has any effect on a project?

Wilem82 wrote at 2020-10-31 01:00:15:

By "attacked" you probably meant "criticised"? Criticising something is not only a perfectly normal activity, it's a necessary one and is an important part of the scientific method.

thotsBgone wrote at 2020-10-30 22:11:44:

The Whine Law:

There will always be someone in every Hacker News comment section to whine like a baby. Every topic, every thread. Someone will whine.

assap_cocky wrote at 2020-10-30 23:32:10:

We both can't be right!

AnimalMuppet wrote at 2020-10-30 17:06:30:

Or at least who thought he carried computer science on his shoulders.

He made significant, important contributions. He wasn't the only one. Hoare, Backus, McCarthy... there were a lot of people who made important, fundamental contributions.

Structured programming caught on. Proofs did not, for reasons that would have offended Dijkstra - they took time, and it was actually more valuable (both socially and financially) to produce mostly-working software quickly than to produce proven-correct software slowly.

But he never would have accepted that the real world was that way. He insisted that his approach was correct. As Backus said of him, "This guy’s arrogance takes your breath away."

So: Important contributions? Absolutely. More than anyone else's? Perhaps. Carried computer science on his shoulders? No, but he probably thought he did.

marcosdumay wrote at 2020-10-30 17:20:19:

Well, static types and analysis are in fashion now, and always moving in the direction of more powerful proof systems.

He was very likely too far ahead of his time on that. Decades after him we still don't have good enough tech to do the kinds of proofs he expected... And yes, he was maybe too arrogant to realize that it's not that the entire world is dumb, but that a core piece of the system is missing.

On the other hand, he did work to improve proof systems, so who knows what he really thought?

yters wrote at 2020-10-30 18:34:20:

I have multiple CS and CE degrees, and I learned his proofs, but industry is pretty antithetical to his approach and most CS is pretty irrelevant to it. I am not sure he has that much real influence.

Not that I don't wish it were otherwise. I love the beauty of mathematics and do wish software were more elegant.

andi999 wrote at 2020-10-30 17:29:26:

Somehow this gives me the impression he is a bit overrated, and that he developed a bit of a cult-like following.

lou1306 wrote at 2020-10-30 17:45:00:

I genuinely wonder what gave you this impression. Dijkstra made seminal contributions to at least three subfields of computer science: algorithms (his optimal shortest path algorithm), programming languages (ALGOL -with others, but still- and his "goto considered harmful" paper), and concurrency theory (semaphores), just to state the most well-known ones...

luizfzs wrote at 2020-10-30 18:20:33:

I agree. I get the impression that he formalized several problems and provided a solution to them, e.g. the Dining Philosophers Problem:

https://en.wikipedia.org/wiki/Dining_philosophers_problem

I think he is actually underrated. I'd have liked to be introduced to his proof notation during some formal course in my CS education, but it was never mentioned.

rramadass wrote at 2020-10-30 18:15:40:

Not at all true. The Dutch are famously known to be blunt and direct, which seems to be a national trait (e.g.:

https://www.thrillist.com/travel/nation/dutch-culture-brutal...

).

Think of Dijkstra as Sherlock Holmes thought of himself; viz.

_My dear Watson,” said he, “I cannot agree with those who rank modesty among the virtues. To the logician all things should be seen exactly as they are, and to underestimate one’s self is as much a departure from truth as to exaggerate one’s own powers._

socialdemocrat wrote at 2020-10-30 18:41:12:

I Northern European/Germanic trait in general. As a Norwegian living in the Netherlands I found Dutch people very easy to relate to but I know many of my other fellow foreigner had major problems with their direct communication.

But you see the same with Linus Thorvalds. His tone offends a lot of people in particular Americans.

But as a fellow Nordic I think the passive aggressiveness often observed in America or fake politeness more difficult to deal with.

Of course Dijkstra and Linus are quite different personalities but both suffer from an English speaking majority thinking their concepts of proper etiquette is universal.

amw-zero wrote at 2020-10-31 01:04:52:

> Linus Thorvalds

The Norse God spirit of Linus Torvalds.

silicon2401 wrote at 2020-10-30 19:39:33:

do you know of any media that has good depictions of this kind of culture? Ideally as part of the show, rather than a one-off foreigner in an American show. As an American, this is how I sometimes wish our culture were, but at the same time I realize I can't even imagine the reality of it, since our culture can be rather extreme towards the opposite end of the spectrum.

cambalache wrote at 2020-10-31 02:00:47:

The thing is, any local media won't display it "in your face" like an American show would; it is just part of the environment. Paraphrasing Borges: "How do you identify a literary work by an actual desert people? It's the one which does not mention camels."

This reminds me of one thing I find irritating about SF in general and SF movies in particular: they hardly ever portray technology use in an organic way. Do you want to see a great SF movie? Watch a current normal movie with 1990s eyes. People will text, call their friends, use Uber, buy on Amazon, but with nothing self-conscious about it: "See? See how cool what I'm doing is!?"