💾 Archived View for 9til.de › lobsters › rbnnt9.gmi captured on 2023-11-14 at 07:46:20. Gemini links have been rewritten to link to archived content

-=-=-=-=-=-=-

Viewing comments for "Stop deploying web application firewalls"

---

cadey commented [33]:

This is a good sentiment; however, people's expensive cyber

insurance requires them to deploy web application firewalls.

Welcome to 2023.

> fs111 commented [22]:

security seems to be mostly about compliance these days.

Make sure you tick all the boxes so that nobody sues you in

case of a problem. ¯\_(ツ)_/¯

> david_chisnall commented [3]:

I get incredibly frustrated by the large number of 'Zero

Trust' things that increase the size of the TCB, increase

the size of the attack surface, and are sold as security

tools and then mandated by corporate IT.

> elobdog commented [3]:

this has been my experience too, so much so that most "security

teams" these days are made of 80% compliance people and

20% real engineers. and communication between both camps

is horrible because they view the same problem from totally

different perspectives.

an auditor had a finding against our service since he wanted

to see an "adaptive" web-application firewall, and the

cloud service we use makes no mention of this word in their

documentation or any marketing material.

> cetera commented [8]:

The current generation of auditors are a lost cause for

sure, but fingers crossed for the next generation! 🤞

z3bra commented [16]:

While I do agree with the author's arguments against WAFs,

I find the solutions quite disappointing as they require

direct changes to the application, which isn't always under

your control.

I'm part of the ops team and in charge of securing the

application access, not the application itself, and a WAF

would let me do that (albeit in a clunky and painful way).

> danielrheath commented [8]:

I'm quite curious about that arrangement!

Does the dev team carry the pager? If not, do you have

difficulty getting them to fix issues that cause you to

get paged?

Are the dev team also responsible for securing the

application? If not, how do you get security issues fixed by

the development team?

> Vaelatern commented [7]:

What if it's a vendor's application?

> danielrheath commented [2]:

Then it seems especially difficult to get alignment between

dev and ops, and I'm especially curious how others go about

making that work!

I'm running one vendor's application presently (Mattermost).

We have the source and a CI/CD pipeline; if some new exploit

becomes apparent, I'm able to adjust the code myself, as

well as escalating to the vendor, but as far as I can tell

that's a very rare state of affairs.

> z3bra commented [4]:

Of course the dev team is responsible for securing the

application, and we're working hard with them so that

security gets taken into account from day one when they spin

up a new microservice.

But life isn't always that easy, and there is a lot of legacy

stuff that is either abandonware or so old that nobody dares

change anything in it, let alone refactor the code to

bring better security to it.

Also no, they don't carry the pager, and my point is that

a WAF is a solution I can deploy on my own to increase the

security of an application, and not wait to be breached to

raise an issue to the dev team that will maybe fix it on the

next sprint, because priorities.

> mort commented [1]:

I'm guessing the WAFs you're talking about are looking at

the HTTP header (path, method, whatever) and maybe source IP

address? That sounds almost more like a proxy with access

controls. The article mostly complains about the WAFs which

run a bunch of regexes on the request body. If you do make

use of the body inspection features I'm curious about how.

(Also hi z3bra :)

> z3bra commented [3]:

The WAF I'm talking about is fairly advanced, and provides

an "auto-learning" feature which will eat thousands of

requests to build up an allow list based on the tokenization

of the accesses (URL, headers, and body).

It also has access to the OWASP rulesets, which feature many

regex and token matches, etc.

And to be fair, I do agree that WAF rules are very complex,

can be bypassed, and are not sufficient on their own to

secure an application. But as an ops person, I can set it

up to increase the application's security with little (or

no) involvement from the dev team.

isra17 commented [13]:

In my experience the choice comes down to:

1. Spend hours arguing with auditors and customers'

security due diligence about why WAFs are bad and why we

don't have them.

2. Spend 5 minutes turning on a cloud WAF with a basic

ruleset that checks the box.

Same goes for anti-malware solutions and other compliance

nonsense.

> elobdog commented [3]:

hahaha .. been there, done that. option #2 wins hands down,

largely because a large portion of auditors have exactly

zero technology/engineering background.

jcspencer commented [8]:

At $WORK, we run WAFs pretty heavily. For the most part, I

agree with the author's assessment of WAFs.

The difficulty always comes back to actually getting to a

place where everything is "secure-by-design"; legacy apps

(either in house or from a vendor) generally do benefit from

a WAF simply because a lot of them become abandonware that

was never built with common attack techniques in mind.

Of course, every WAF vendor is going to brag about their

MITRE ATT&CK coverage and how they "would have caught

Log4j", which in most cases is flat out wrong. If it does

happen to be correct, it likely means their rules are so

poorly tuned that you'll have to have a dedicated engineer

developing exceptions.

Turning on every WAF rule possible and hoping for the

best to appease an auditor tends to give developers the

impression that they don't need to be secure-by-design.

I think WAFs do have a place in defense-in-depth, though.

They're a blunt instrument, sure, but in many cases it's

useful to feed data from a WAF back to developers to

highlight ways they can improve things. I think it really

depends on whether developers see them as the be-all and

end-all or part of a defense-in-depth strategy.

I kind of see WAFs as a technical solution to a cultural

problem. As security people, we want developers to put

security at the forefront; the reality is, everyone has

different priorities. At least a decent WAF helps you start

identifying the problems and helps developers move towards

solutions.

> Corbin commented [4]:

Note that capability-oriented design can provably force

users into safe workflows. The cost is that the users would

then bear responsibility for mishandling capabilities,

and the typical user is not emotionally prepared to be

responsible for their own security.

hoistbypetard commented [5]:

As someone who has deployed multiple WAFs, I like the

sentiment.

But there was a time when mod_security was extremely helpful

in letting us keep some things online (that our business

really needed online) while we were waiting for patches from

vendors. And its heuristics weren't wrong very frequently,

and there wasn't all that much overhead (or at least not so

much that it mattered to us).

Even though the sentiment in this article isn't wrong, it

does feel a little bit to me like removing Chesterton's

fence without understanding/acknowledging why it was there

in the first place.

> cetera commented [4]:

I originally did have a subsection about virtual patching

being a valid use-case but ended up removing it for space!

I wish a "virtual patching only" WAF existed that only

operates on specific 0days until a real patch can be

applied.

> hoistbypetard commented [7]:

That was certainly their best, highest purpose for us. We

had a very light hand on all the other rulesets.

There was this one time, though, that the WAF in our

development environment (which was used for testing

potential production rulesets) taught us something important

about a very expensive product. We were getting ready to

roll out a new service, and whenever we enabled the WAF,

that service would break badly. The WAF logs would complain

that it looked like an LDAP filter injection attack.

After quite a bit of analysis, we concluded that the rule

was right and the log message was right. This product was

effectively depending on LDAP injection for its web UI.

We wound up rolling a virtual patch that made sure only

appropriate filters could be injected, and living with it

for quite a while before the vendor could fix it so the

product didn't need to accept LDAP filters from the front

end for unauthenticated users.
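
A virtual patch of that sort might look something like the

following ModSecurity rule (a hypothetical sketch: the rule

id and the "known-safe" filter shape are invented, since the

product's actual safe filters aren't described):

```
# Hypothetical virtual patch (ModSecurity v2 syntax): only let
# the "filter" parameter through when it matches a known-safe
# LDAP filter shape; deny and log everything else.
SecRule ARGS:filter "!@rx ^\(cn=[A-Za-z0-9 ]+\)$" \
    "id:900100,phase:2,deny,status:403,log,\
    msg:'Unexpected LDAP filter from front end'"
```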

kevincox commented [5]:

At a place I worked they had a WAF that constantly caused

problems. I tried to get them to turn it off but security

refused (I don't remember if it was compliance related or

they just liked it.)

I used to call it the fuzzy bug injector because any number

of perfectly valid requests would get a 403. We would do

things like start base64 encoding the password field to hide

it from the WAF. This was effectively a way to whitelist

fields. This worked well when the API was internal. But then

they launched the public API and this trick won't work as we

can't change the semantics without making clients update.
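
The field-wrapping workaround amounts to something like this

(a hypothetical sketch of the client/server pair, not the

actual API in question; it only sidesteps WAF false

positives and adds no security of its own):

```python
import base64

# Client side: wrap the password so the WAF's regexes never see
# the raw bytes of the field.
def encode_field(value: str) -> str:
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Server side: the application decodes the field before doing its
# normal validation and authentication.
def decode_field(wrapped: str) -> str:
    return base64.b64decode(wrapped).decode("utf-8")

payload = {"user": "alice", "password": encode_field("p@ss' OR 1=1")}
assert decode_field(payload["password"]) == "p@ss' OR 1=1"
```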

I also remember when I found a real security issue (shell

command injection 🤢) it only took a few

attempts to bypass the WAF. We were wasting so much time

playing whack-a-mole with people triggering various

false positives and it didn't even stop real attacks.

I think it is important to understand what you want to get

out of a WAF. If you are expecting something that magically

prevents all security issues you are wasting money (both

for the WAF itself and development effort fixing the false

positives). I think what a WAF is good for is short-term

rapid response to security vulnerabilities, especially those

in third-party software. For example if you are running

WordPress a WAF may be super useful as your vendor can

ship rules that block mass exploitation of the latest

0-day before you even know there is an issue. That may prevent

your installation from getting taken over by a mass attack

in the time it takes you to patch. This can also help with

things like Log4j and similar issues.

So I think to get value out of a WAF the key is to manage

rulesets.

Avoid subscribing to rulesets that aren't related to your

application. For example, don't subscribe to the WordPress

ruleset if you run a custom Python application. These just

give you false positives with minimal benefit. I even find

generic rulesets that try to do things like sniff out SQL

injection attempts harmful as they cause too many false

positives and can almost always be bypassed with little

effort.

Track new rules and try to verify that you are unaffected or

have since fixed the issue, then remove the rule.

Have the ability to write your own rules. This can be used

to mitigate a first-party vulnerability while someone is

working on a patch but is often also useful for DoS attacks.

If you follow this system you should have a small and fast

WAF that doesn't have many false positives. And you are

still getting 99% of the value.

> atmosx commented [1]:

hm, couldn't you use a cookie or something to bypass

filtering or was this a client-side JS app making the calls?

> kevincox commented [3]:

I don't understand what you mean. If we bypass the WAF what

is the point of having it?

gcollazo commented [7]:

Yeah I just don't have the energy to have the conversation

with the auditor

ubernostrum commented [4]:

Yes, prepared statements, or in many APIs just passing

a query with parameter placeholders followed by a set

of parameters to bind to it (which not all drivers

will actually turn into a prepared statement, but will

interpolate the parameters safely), is the magical cure-all

for SQL injection.
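
That placeholder style can be sketched with Python's built-in

sqlite3 driver (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

# The driver binds the parameter separately from the SQL text,
# so hostile input stays data and never becomes query syntax.
hostile = "x' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE role = ?", (hostile,)
).fetchall()
print(rows)  # the injection attempt matches nothing
```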

But we live in a world of hotshot rockstar wizard cowboy

ninja guru coders who believe using any sort of query-

construction library (let alone those awful awful ORMs) is

beneath them, and who love to write articles encouraging

everyone else to avoid those libraries like the plague.

> calvin commented [3]:

Wait, parameterized queries aren't a standard feature of

every database library in Python? ODBC has had it since

the early 90s - there's no excuse to be manually inserting

arguments into the query with any modern database API.

And at least in the PHP world, PDO provides emulation for

drivers that don't do it themselves.

> ubernostrum commented [5]:

Every driver module I know of supports them. Doesn't mean

people will use them.

> hyperpape commented [2]:

Without library support for forcing use of prepared

statements, you're stuck auditing everything that uses SQL

to verify that you don't have injection vulnerabilities.

alper commented [2]:

Say we use a WAF to rate limit requests to a bunch of

resources. Is there anything wrong with that?

> strugee commented [2]:

You don't need a WAF to do that.

> alper commented [1]:

What would be the ideal component that does this?

> Seirdy commented [2]:

If it's simple enough, the functionality should be built

right into your reverse proxy. Nginx and Caddy can do it

with a couple lines.
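
In Nginx, for instance, a per-IP limit is a couple of

directives (a minimal sketch; the zone name, rate, and

upstream are illustrative):

```
# Goes in the http block: track clients by IP, 10 requests/s each.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        # Allow short bursts of up to 20 requests; excess
        # requests are rejected (503 by default).
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://backend;
    }
}
```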

Beyond that: your hosting provider, CDN, and/or load

balancer (there may be overlap if you use a very

provider-specific setup) should be the place to look.

> strugee commented [1]:

Hrm. I was going to say the load balancer or even the

application itself (that's how we do it at $work), but

actually looking into it, apparently Amazon at least doesn't

support rate limiting at the LB level. You're supposed

to attach an AWS WAF and do it that way. So it seems I'm

wrong :-)

I think part of the problem here is the fuzzy definition

of a "WAF". Elsewhere in this article's comments @cetera

notes a valid use case for WAFs. So I guess if by WAF you

mean you're doing all the questionable regex stuff that the

article mentions (or you're using one that can and does all

the buffering, etc. by default) then there's something wrong

with that, and if you're using one that just rate limits,

you're golden.

I guess tl;dr look past the overloaded term and evaluate

what the thing is actually doing for you.

> cadey commented [2]:

If your load balancer is doing filtering duties,

congratulations. You have a WAF.

phroa commented [1]:

I agree with all this. But even at an org that is lucky

enough to have security as a core competency, we still use a

WAF because auditors and customers raise hell if we don't.

munro commented [1]:

Probably because the search results for "Web Application

Firewall" are all written by WAF vendors.

I used to work in a friend's office at nginx after hours

(they did inbound sales). I would hear people calling in,

saying they were hitting some sort of limit on the free

version (they seemed confused, and so was I); they just

wanted it fixed and were willing to pay money.

atmosx commented [1]:

Have had poor experience with WAF as well. I agree with the

overall sentiment. WAFs cause more harm than good.

NOTE: a drop in performance is expected for any kind of

filtering. A SOHO router can serve 10Gbps without filters;

once QoS and firewall rules mean every new TCP/UDP

connection is filtered, performance drops to ~1.25-2Gbps

(a roughly 8x downgrade).

safinaskar commented [0]:

In

https://www.macchaffee.com/blog/2023/solarwinds-hack-lessons-learned/

you wrote: "We can keep the benefits of huge dependency

trees without the risks!" What do you think about this

webasm nanoprocess initiative:

https://bytecodealliance.org/articles/announcing-the-bytecode-alliance ?

safinaskar commented [-4]:

cetera, what do you think about my sudo/suid proposal

https://github.com/memorysafety/sudo-rs/issues/291 ? I

propose to remove the sudo suid binary and, ideally, all

other suid binaries. This should stop many exploits.

---

Served by Pollux Gemini Server.