"The six dumbest ideas in computer security"
The article, by Marcus Ranum, is from 2005, but I saw it getting linked favorably in some Lemmy posts today, so here we go.
He's basically right about this first one, "Default Permit"; blacklisting is, in general, ineffective and a sign of a flawed approach.
=> https://blog.codinghorror.com/blacklists-dont-work/ Coding Horror: Blacklists don't work
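To make the difference concrete, here's a minimal sketch (in Python, with made-up names): a blocklist silently admits everything its author hasn't thought of yet, while an allowlist fails closed.

```
# Default Deny: anything not explicitly approved is rejected.
ALLOWED = {"ssh", "https"}

def allow_with_allowlist(service):
    return service in ALLOWED  # unknown services fail closed

# Default Permit: anything not explicitly banned gets through.
BLOCKED = {"telnet"}

def allow_with_blocklist(service):
    return service not in BLOCKED  # brand-new badness walks right in
```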
But what is this?
> Another place where "Default Permit" crops up is in how we typically approach code execution on our systems. The default is to permit anything on your machine to execute if you click on it, unless its execution is denied by something like an antivirus program or a spyware blocker. If you think about that for a few seconds, you'll realize what a dumb idea that is. On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don't understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me. That's "Default Permit."
Operating systems *don't* let anything execute without asking you. When you double-click an executable file in a typical desktop environment, you are telling the operating system "I want to execute this". This isn't an application of Default Permit; it's just a computer doing what you tell it. Welcome to being the sysadmin.
I would've liked to see him mention some genuinely widespread and genuinely harmful applications of Default Permit, like JavaScript and XHR in particular. (I'd bet he doesn't actually have a problem with those.)
This point is a repeat of the first one, and it manages to argue it equally badly:
> Why is "Enumerating Badness" a dumb idea? It's a dumb idea because sometime around 1992 the amount of Badness in the Internet began to vastly outweigh the amount of Goodness. For every harmless, legitimate, application, there are dozens or hundreds of pieces of malware, worm tests, exploits, or viral code. Examine a typical antivirus package and you'll see it knows about 75,000+ viruses that might infect your machine. Compare that to the legitimate 30 or so apps that I've installed on my machine...
It's nonsense to try to measure the ratio of malicious to legitimate software on the internet. Do you count the number of programs that have ever been in circulation, or just the ones that are still in significant circulation now? How do you define that threshold? Do you weight them by popularity? If so, you have to at least specify what metric you used, and if not, the result is dominated by fringe programs that no one's ever heard of.
The funny part is that no matter which of these answers you pick, obviously there's way more legitimate software than malicious software. What he's comparing is the number of malware programs checked for in antivirus software, presumably *the total number of malware programs in history*, to only the number of legitimate applications *on his machine*. What a nonsensical comparison.
> In fact, if I were to simply track the 30 pieces of Goodness on my machine, and allow nothing else to run, I would have simultaneously solved the following problems:
> * Spyware
> * Viruses
> * Remote Control Trojans
> * Exploits that involve executing pre-installed code that you don't use regularly
As above, that's exactly how it works. If you have a fixed set of 30 programs you want on your computer, why don't YOU stop downloading and executing anything else?
Wait a minute. Looking closer at this list, #4 is exactly the type of problem that *isn't* solved by refusing to execute malicious programs. Many vulnerabilities work by tricking a trusted, legitimate program (e.g. Nginx) into executing a part of its own code in a way that leads to unintended behavior.
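To illustrate with a deliberately contrived Python sketch (mine, not his): here's the shape of a classic command injection, where every *program* involved is legitimate, yet attacker-controlled input still gets executed.

```
import subprocess

# A "trusted" helper inside a legitimate program. The flaw: user input
# is interpolated into a shell command.
def archive_log(filename):
    # filename = "access.log; rm -rf ~" makes the shell run both commands.
    subprocess.run(f"gzip /var/log/{filename}", shell=True)
```

An allowlist of programs does nothing here: gzip and the shell are both approved. It's the behavior, not the binary, that's malicious.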
Also! The idea that the 30 apps he's thinking of are the only legitimate programs on his system is embarrassingly ignorant. Is it so hard to open the task manager, or `top`, or whatever you've got on your system? Look how many processes are running that you don't even know about! All of those are legitimate parts of your system.
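On Linux you can even count them without opening a task manager (a quick sketch of mine, assuming a /proc filesystem):

```
import os

# Each running process appears as a numeric directory under /proc.
pids = [name for name in os.listdir("/proc") if name.isdigit()]
print(f"{len(pids)} processes are running right now")
```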
> There's an old saying, "You cannot make a silk purse out of a sow's ear." It's pretty much true, unless you wind up using so much silk to patch the sow's ear that eventually the sow's ear is completely replaced with silk. Unfortunately, when buggy software is fixed it is almost always fixed through the addition of new code, rather than the removal of old bits of sow's ear.
This sounds like he's talking about software bloat, but I doubt he would do something so enlightened. Let's wait for this line to be ruined by context.
> In other words, you attack your firewall/software/website/whatever from the outside, identify a flaw in it, fix the flaw, and then go back to looking.
Yes, this is an important form of testing! If you don't look for flaws, you won't find the flaws before the attacker does.
> One of my programmer buddies refers to this process as "turd polishing" because, as he says, it doesn't make your code any less smelly in the long run but management might enjoy its improved, shiny, appearance in the short term. In other words, the problem with "Penetrate and Patch" is not that it makes your code/implementation/system *better by design*, rather it merely makes it *toughened by trial and error*.
Even a well-designed system will have implementation flaws in its first version. Those flaws have to be found and fixed.
I like that he even screwed up the grammar in this sentence. ESR was right: the correlation between sloppy thinking and sloppy writing is, indeed, strong.
> Richard Feynman's "Personal Observations on the Reliability of the Space Shuttle" used to be required reading for the software engineers that I hired. It contains some profound thoughts on expectation of reliability and how it is achieved in complex systems. In a nutshell its meaning to programmers is: "Unless your system was *supposed to be hackable* then it shouldn't be hackable."
"If your system wasn't supposed to be X then it shouldn't be X". No shit.
"Penetrate and Patch" crops up all over the place, and is the primary dumb idea behind the current fad (which has been going on for about 10 years) of vulnerability disclosure and patch updates. The premise of the "vulnerability researchers" is that they are helping the community by finding holes in software and getting them fixed before the hackers find them and exploit them. The premise of the vendors is that they are doing the right thing by pushing out patches to fix the bugs before the hackers and worm-writers can act upon them. Both parties, in this scenario, are being dumb because if the vendors were writing code that had been designed to be secure and reliable then vulnerability discovery would be a tedious and unrewarding game, indeed!
That's brilliant! Why don't vendors just write perfect code on the first try!
> Let me put it to you in different terms: ***if "Penetrate and Patch" was effective, we would have run out of security bugs in Internet Explorer by now***. What has it been? 2 or 3 a month for 10 years?
That's because the web has been recklessly adding features the whole time.
> One clear symptom that you've got a case of "Penetrate and Patch" is when you find that your system is always vulnerable to the "bug of the week." It means that you've put yourself in a situation where every time the hackers invent a new weapon, it works against you. Doesn't that sound dumb? Your software and systems should be *secure by design* and should have been *designed with flaw-handling in mind*.
```
try:
    main()
except Vulnerability:
    dont_let_the_hacker_in_lol()
```
> One of the best ways to get rid of cockroaches in your kitchen is to scatter bread-crumbs under the stove, right? Wrong! That's a dumb idea. One of the best ways to discourage hacking on the Internet is to give the hackers stock options, buy the books they write about their exploits, take classes on "extreme hacking kung fu" and pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.
He should know that the reason for two different views of "hackers" is that there have long been two meanings associated with that word. See this article by a famous hacker and major free software contributor for more information:
The article I'm rebutting is maliciously conflating "hackers" meaning skilled, creative programmers who can find vulnerabilities, with "hackers" meaning a subset who take advantage of vulnerabilities to harm innocents. He makes it sound like the latter are the ones teaching people how to write more secure code, but that's obviously not true.
> The #4th dumbest thing information security practitioners can do is implicitly encourage hackers by lionizing them. The media plays directly into this, by portraying hackers, variously, as "whiz kids" and "brilliant technologists" - of course if you're a reporter for CNN, anyone who can install Linux probably does qualify as a "brilliant technologist" to you. I find it interesting to compare societal reactions to hackers as "whiz kids" versus spammers as "sleazy con artists." I'm actually heartened to see that the spammers, phishers, and other scammers are adopting the hackers and the techniques of the hackers - this will do more to reverse society's view of hacking than any other thing we could do.
No one lionizes evil hackers. When glorified "hacker" characters appear in fiction, they're always using their skills for good.
> If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea. Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole. It means you've made part of your professional skill-set dependent on "Penetrate and Patch" and you're going to have to be part of the arms-race if you want that skill-set to remain relevant and up-to-date. Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?
Strategy is two-sided: only a fool thinks he can defend his castle without understanding castle siege.
Educating users is a dumb idea? Uh oh, this is gonna be the worst section yet, isn't it.
"Penetrate and Patch" can be applied to human beings, as well as software, in the form of user education. On the surface of things, the idea of "Educating Users" seems less than dumb: education is always good. On the other hand, like "Penetrate and Patch" ***if it was going to work, it would have worked by now***. There have been numerous interesting studies that indicate that a significant percentage of users will trade their password for a candy bar, and the Anna Kournikova worm showed us that nearly 1/2 of humanity will click on anything purporting to contain nude pictures of semi-famous females. If "Educating Users" is the strategy you plan to embark upon, you should expect to have to "patch" your users every week. That's dumb.
This "if it was going to work, it would have worked by now" reasoning is invalid. "Work" is a deceptive term in the context of social outcomes; it implies that success is binary and that there must be some strategy that would "work". Both of those assumptions are false. Consider this argument using his same logic: "Outlawing murder is a dumb idea because despite doing it for thousands of years, people still murder. If it was going to work, it would have worked by now." The takeaway: "it hasn't eliminated the problem" is not the same as "it doesn't help".
He's also wrong to say "you should expect to have to 'patch' your users every week". That's not how educating users works. Users don't need to know about every new vulnerability or type of scam; general knowledge about computers and skepticism about scams goes a long way toward protecting them, and those things haven't changed much in the last 20 years.
> The real question to ask is not "can we educate our users to be better at security?" it is "why do we need to educate our users at all?" In a sense, this is another special case of "Default Permit" - why are users getting executable attachments at all?
I once sent an executable file to a friend because compiling it on her machine would've required installing a bunch of development tools which would've taken a long time (she ran Gentoo). Neither of us regretted having me compile the binary and send it to her.
But if you mean "why are users getting *malicious* executable attachments", let's go back to my previous comparison: "why are people murdering at all? That must mean our strategy is wrong, right?" Nope. Just as you will never prevent all murder, you will never prevent all computer scams (even if you did somehow stop all malicious attachments, they'd just use vectors other than email attachments), and any strategy that claims to offer the impossible will involve a harmful and controlling social system like... well, he's about to show us in a couple of paragraphs.
> Why are users expecting to get E-mails from banks where they don't have accounts? Most of the problems that are addressable through user education are self-correcting over time. As a younger generation of workers moves into the workforce, they will come pre-installed with a healthy skepticism about phishing and social engineering.
Giving people a healthy skepticism about phishing and social engineering is exactly what "educating users" means! He's literally saying "don't educate users because it doesn't work and also the problem is self-correcting through users getting educated"!
> Dealing with things like attachments and phishing is another case of "Default Permit" - our favorite dumb idea. After all, if you're letting all of your users get attachments in their E-mail you're "Default Permit"ing anything that gets sent to them. A better idea might be to simply quarantine all attachments as they come into the enterprise, delete all the executables outright, and store the few file types you decide are acceptable on a staging server where users can log in with an SSL-enabled browser (requiring a password will quash a lot of worm propagation mechanisms right away) and pull them down. There are freeware tools like MIMEDefang that can be easily harnessed to strip attachments from incoming E-mails, write them to a per-user directory, and replace the attachment in the E-mail message with a URL to the stripped attachment. Why educate your users how to cope with a problem if you can just drive a stake through the problem's heart?
He's gone from stupid to treacherous. What he's advocating is for email providers to *tamper with incoming messages* (which is incompatible with end-to-end encryption) and harass *all* users with the need to open a web browser to receive *any* attachments.
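For the record, the scheme he describes amounts to something like this rough sketch using Python's standard email module. The staging URL and the discarded storage step are placeholders of mine, not anything MIMEDefang actually does:

```
import email
from email import policy

STAGING_URL = "https://staging.example.com/attachments"  # hypothetical

def strip_attachments(raw_message: bytes) -> str:
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        filename = part.get_filename()
        if filename:
            # A real gateway would save the payload to a per-user
            # directory first; here it's simply thrown away.
            part.clear_content()
            part.set_content(f"Attachment removed: {STAGING_URL}/{filename}")
    return msg.as_string()
```

Notice that this necessarily rewrites the message body, which is exactly why it can't coexist with end-to-end encryption or signatures: a signed message whose attachment has been swapped for a URL no longer verifies.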
Also, MIMEDefang is *free software* (GPL 2). *Freeware* means proprietary software distributed without cost:
=> https://en.wikipedia.org/wiki/Freeware Freeware (Wikipedia)
So add screwing up software licenses to his list of mistakes.
In this one he basically says "never be the first person to adopt something", which is good advice for most situations but not all: