Caddy – Open-source web server with automatic HTTPS

Author: 6581

Score: 148

Comments: 128

Date: 2021-11-29 09:58:17


________________________________________________________________________________

PragmaticPulp wrote at 2021-11-29 15:34:45:

I love what the Caddy project is doing, but every time I look into it I can’t get a straight answer about performance.

One of the top search results for Caddy performance is this Caddy forum thread with a response from the Caddy author:

https://caddy.community/t/performance-compared-to-nginx/7993

But it seems to be dodging the question and linking to the most misleading possible Tweets instead of providing any actual info:

Caddy is written in Go. It will perform just as well as any other Go web servers.
Google, Netflix, Cloudflare, Fastly, and other large network companies deploy Go on their edge. You can too.

What a bizarre response.

They then link to a Tweet from someone who claims that Caddy performed 20X better than nginx. An impressive claim! But further investigation shows that virtually every other benchmark I can find shows the opposite situation: Nginx beating Caddy by up to 8X.

Performance isn’t everything and there are plenty of situations where Caddy’s ease of use might prevail, but I get nervous when project authors are giving cagey and misleading responses to straightforward (and important) questions.

cunthorpe wrote at 2021-11-29 16:18:56:

It’s funny that your complaint about opaque answers got responses that imply that performance doesn’t matter.

What this tells me is that Caddy is slower than its competitors and they know it. If it wasn’t, then they wouldn’t hide behind meaningless responses.

Basically they answered without answering.

francislavoie wrote at 2021-11-29 16:42:28:

You misunderstood then, because what was said was essentially "official benchmarks for servers are meaningless", not that "performance doesn't matter". Do your own testing, for your own usecase.

But still, a server should rarely ever be your bottleneck. Your application's DB I/O will be.

PragmaticPulp wrote at 2021-11-29 17:07:45:

> You misunderstood then, because what was said was essentially "official benchmarks for servers are meaningless", not that "performance doesn't matter". Do your own testing, for your own usecase.

The Caddy forum thread I linked to above highlighted a Tweet about Caddy being 20X more performant than nginx. This was from the Caddy team.

The frustrating part is how benchmarks are championed as a selling point when they benefit the project, but the argument becomes “official benchmarks are meaningless” as soon as they don’t.

That, and the weird opaque responses and insistence that we debate my performance requirements on the internet when I just wanted to know how Caddy compares to nginx in the most broad terms. Similar performance? Order of magnitude faster? Order of magnitude slower? Why does such a large debate have to erupt when such simple questions are asked?

If Caddy is good for one use case but not for others, why not just say it? Why must it become a one-on-one debate? I don’t literally have _one_ use case for a web server. I want to know when it’s appropriate to choose so I can make these decisions myself without engaging in an HN comment section back-and-forth to figure it out.

ksec wrote at 2021-11-29 18:42:12:

I think generally speaking, if you really want or care about performance you shouldn't be using Caddy.

The link you provided had some results from eva2000; I think those are a good indication of how Caddy performs. I have used some of his work since... I think some 20 years ago. ( Jeez.... ) He has been testing servers and frameworks all the way back to the CGI-bin era.

But I do understand the frustration; maybe Caddy should be up front about it. On the other hand, I can see why the author doesn't want to do benchmarks. If you have to ask for benchmarks, this may not be for you.

I do wish they at least listed memory usage on their web site, although in my testing it is always less than 100 MB, so I don't bother much with it on a low-traffic website.

francislavoie wrote at 2021-11-29 17:11:42:

> The Caddy forum thread I linked to above highlighted a Tweet about Caddy being 20X more performant than nginx.

That's just saying Caddy _can_ be faster, it's not saying Caddy _is always_ faster. That was never claimed.

> when I just wanted to know how Caddy compares to nginx in the most broad terms

But that's impossible. There's no way to "generally" benchmark a web server. There are just too many ways it can be used for any single benchmark to ever be valid.

Do your own benchmarks, for _your_ usecase. That's the only way you'll get any kind of real answer.
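[In that spirit, a minimal self-contained sketch of "benchmark it yourself". It spins up a throwaway local static server and measures request throughput against it; point `measure_throughput` at whatever server you're actually evaluating. Real load tools like wrk, hey, or k6 are better for serious numbers; this only illustrates the idea.]

```python
# Hedged do-it-yourself throughput check, using only the Python stdlib.
import concurrent.futures
import http.server
import threading
import time
import urllib.request


def measure_throughput(url: str, total_requests: int = 200, workers: int = 8) -> float:
    """Fire `total_requests` GETs at `url` and return observed requests/second."""
    def fetch(_: int) -> None:
        with urllib.request.urlopen(url) as resp:
            resp.read()

    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces all futures to complete (and re-raises any errors).
        list(pool.map(fetch, range(total_requests)))
    return total_requests / (time.perf_counter() - start)


if __name__ == "__main__":
    # Throwaway target on an ephemeral port so the sketch runs anywhere;
    # replace the URL with your own Caddy or nginx instance to compare them.
    server = http.server.ThreadingHTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    print(f"{measure_throughput(url):.0f} req/s")
    server.shutdown()
```

[The numbers you get are only meaningful relative to each other, on the same machine, with the same payload — which is exactly the point being made above.]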

Eldt wrote at 2021-11-29 17:52:34:

Think I found the Caddy developer.

francislavoie wrote at 2021-11-29 17:56:11:

Yes, I'm a maintainer. What's your point?

TimWolla wrote at 2021-11-29 18:47:20:

One thing would be indicating that you are a maintainer in your comment, when it's not immediately obvious, so that readers are able to understand your comment in a proper context. Like I will do at the bottom of my comment:

Disclosure: Not a Caddy user. Turned off from it by the maintainers shamelessly plugging Caddy as the best thing since sliced bread whenever a competitor is mentioned somewhere. I'm also a community contributor to HAProxy which might or might not be considered a competitor.

FpUser wrote at 2021-11-29 18:26:49:

>"But still, a server should rarely ever be your bottleneck. Your application's DB I/O will be."

I have a C++ server. Instead of constantly querying the DB, it holds all business data in RAM in structures optimized for real-time usage, not for how they're kept in the DB. All read requests are basically limited by network I/O, except when a request involves some more or less complex math. No waiting for the DB. Writes are batched, and their frequency is far lower than that of reads.

It's reverse proxied by Nginx. Putting Caddy in its place would be a disaster if Caddy is much slower.

mcspiff wrote at 2021-11-29 19:41:12:

Not to diverge too much, but if you’re keeping everything in memory how do you handle hardware failures etc? Wouldn’t that result in data loss?

FpUser wrote at 2021-11-29 19:49:01:

Data is written to disk, but requests are batched and executed as a single transaction. It is very fast. Partner systems know that a request may fail, and business processes are organized accordingly. In practice it never really happens. The overall performance is insane (hundreds of times better) compared to some Python scripts fishing in the database for every request.

kreetx wrote at 2021-11-29 16:35:47:

Yup. They should just say "it's the ease of use" and be done with it. It's a pretty good reason to use a technology.

earthboundkid wrote at 2021-11-29 15:57:08:

I think the point is just that for most applications, the reverse proxy server is not a performance bottleneck, so it doesn't matter for any practical purpose. Caddy could be twice as fast or slow and it would not change the number of servers you need to deploy one way or another. If your reverse proxy is a bottleneck, you'll know and probably have the resources to build something custom instead of using Caddy or whatever.

PragmaticPulp wrote at 2021-11-29 16:09:47:

> I think the point is just that for most applications…

I guess it would make more sense if they came right out and said that it’s best used for these applications where it’s acting as a reverse proxy for a heavy backend and therefore performance isn’t a big issue.

But I’m constantly confused by the vague responses combined with the allusions to claims that it’s super-fast because it uses Go like Google or Tweets about it being 20X more performant than nginx when it’s clearly not.

mholt wrote at 2021-11-29 16:19:19:

It's hard to quantify web server performance in a way that is both truly representative and truly generalizable, such that you can draw correct conclusions for specific setups and use cases. Ultimately we just recommend you do your own tuning and performance testing, especially since so many external factors are involved.

philipwhiuk wrote at 2021-11-29 19:39:55:

That would have been a better answer than "it performs as well as every other Go server", which probably does a disservice to any tuning work you've done (or any tuning work any other server has done, on the thin chance you've not done any).

brownjohnf wrote at 2021-11-29 18:39:30:

> I think the point is just that for most applications, the reverse proxy server is not a performance bottleneck

I'm not commenting either way on Caddy vs. other solutions, but whether or not a reverse proxy is a pure performance bottleneck, it can become a cost issue. If a reverse proxy is capable of handling twice as much traffic as another solution (through some combination of simultaneous connections and raw speed), it'll cost half as much to operate. Especially at scale, those costs can really matter.

Raw speed for speed's sake is only sometimes the most important factor.

Edit: grammar.

jrockway wrote at 2021-11-29 18:47:43:

For me, reverse proxy capacity is more about surviving machine failures. I'll need N+2 of them per region per zone regardless of how efficient they are. For my simple personal site, I run Envoy on 3 machines limited to 64M of RAM and it easily supports 10,000qps per instance with many more concurrent connections (for clients downloading the requested document slowly). One instance alone is enough for all the capacity I desire (and I have rate limits to prevent one IP from using more than its fair share of the limited capacity), so I pay for 128M of RAM that I don't need simply to survive VM failures during a deployment.

I guess my point is that even an inefficient proxy is going to be light on resource usage, and you will always need extras. At some scale, the inefficiency matters, but at most scales it really doesn't. So if Caddy is easy to operate, I'd say go for it. (But personally I use envoy + cert-manager. More flexible and less magic.)

emteycz wrote at 2021-11-29 16:06:30:

Yeah but this means that you could wait 8x longer with NGINX before building a custom solution.

throwaway894345 wrote at 2021-11-29 17:01:53:

Which itself is pretty meaningless considering almost no one bottlenecks on their reverse proxy.

pachico wrote at 2021-11-29 16:57:16:

I can't think of any situation I was involved in where the bottleneck was the HTTP server rather than the application behind it.

Am I the only one?

Bayart wrote at 2021-11-29 17:12:15:

As the other person who replied to you pointed out, there's a point where your HTTP server becomes the bottleneck, i.e. your backend being more concurrent and faster at producing content than your server is at delivering it.

But I feel the real bottleneck is developer pain. I switched to nginx from Apache like everyone else because _nginx was nice to use and Apache wasn't_. The performance aspect was only a rationalization.

mekster wrote at 2021-11-30 01:13:37:

What was not so nice about Apache?

Mentioning it without context makes it sound like an emotional choice.

Bayart wrote at 2021-11-30 09:50:18:

I found Apache configuration to be cumbersome and hard to read, with nginx being comparatively clean and making reverse proxying and caching easy. Keep in mind that was over a decade ago. I don't know where Apache is _now_; it's not something I have reassessed, and I don't know if I would make the same choices with my current experience.

PragmaticPulp wrote at 2021-11-29 17:01:55:

If you’re only serving up heavyweight web apps, then this is probably true.

But Caddy also markets itself as an efficient static file webserver right on the home page:

> Caddy is both a flexible, efficient static file server and a powerful, scalable reverse proxy.

Perhaps the confusing part is the way that Caddy is marketed as “The Ultimate Server” when instead it’s designed to be a fast-enough reverse proxy for applications where performance doesn’t matter. There are many applications where performance really does matter. Static file serving is the most obvious example.

throwaway894345 wrote at 2021-11-29 17:12:20:

> There are many applications where performance really does matter.

Like what? Genuine question. I'm sure a cloud provider's layer 7 load balancer / Kubernetes Ingress would merit an optimized implementation but beyond those kinds of super-scale use cases I can't think of much.

rampointerhater wrote at 2021-11-29 17:08:25:

No. But looking at the site, Caddy appears to be a product for easy and quick deployment, so that zero-knowledge people can run a web server. Their point is unlikely to be performance; it is probably along the lines of "low code".

mholt wrote at 2021-11-29 15:49:20:

What are your performance requirements?

yjftsjthsd-h wrote at 2021-11-29 16:42:36:

This is exactly what the original complaint was: someone complains that Caddy is slower than the alternatives, and the official response is neither agreement nor data showing that Caddy _is_ fast, but equivocation about whether it matters while avoiding commenting on whether the claim is true. If you want to ship a webserver that's slower but safer and easier to use, that's _fine_; I'm using Caddy to host my stuff because that's a reasonable trade-off. It's just the dodging of the question that gets old fast.

throwaway894345 wrote at 2021-11-29 17:14:23:

The claim can't be evaluated as "true or false", which is the point. The use cases for Caddy and similar software are too broad to be meaningfully approximated as a binary or scalar value.

yjftsjthsd-h wrote at 2021-11-29 17:20:46:

I'm pretty sure you _can_ approximate it to true - are there any cases where Caddy is faster than another webserver in the same usecase, with either default or tweaked configs on both?

throwaway894345 wrote at 2021-11-29 17:28:06:

Yes, you can get very gritty microbenchmarks, but those always fail to generalize well and lead to more confusion. Most people will make bad decisions off of microbenchmarks and those who won't will tell you they're nearly useless in the best case.

vxNsr wrote at 2021-11-29 16:48:05:

just fyi, based on the bio of the person you're replying to, they're the author of caddy. not sure if this was known or not

yjftsjthsd-h wrote at 2021-11-29 16:59:47:

Yes, that's why I am treating that comment as an official response.

mholt wrote at 2021-11-29 16:55:22:

Yeah, it's hard for us to hit a performance target that is being kept hidden from the devs.

qbasic_forever wrote at 2021-11-29 17:17:56:

This shouldn't be downvoted--it's very true, what performance are you trying to optimize? Throughput of long connections? Time from connection to first bytes? There are dozens of dimensions to web performance.

yjftsjthsd-h wrote at 2021-11-29 17:21:29:

That's totally reasonable. Does Caddy actually win on any of those dimensions?

withinboredom wrote at 2021-11-30 07:03:33:

There are millions of configurations for Caddy (depending on which modules you compile into the binary), so this is literally impossible to know unless you benchmark your particular configuration for your particular use case.

vxNsr wrote at 2021-11-29 20:32:40:

I didn't mean my comment to come off as hostile. The way you replied seemed to imply you were involved in some way but didn't note that in your comment. Sometimes when people don't realize who they're replying to, they make bad assumptions; I was trying to prevent that from happening.

This whole thread has a lot of unproductive animosity, and it's possible part of that is due to the opaqueness around the identity of those involved.

PragmaticPulp wrote at 2021-11-29 15:56:42:

> What are your performance requirements?

If the requirements were such that web server performance didn’t matter, I wouldn’t care and wouldn’t be asking about it.

But the problem is that it doesn’t make sense to hide the performance statistics and then debate each user’s performance requirements instead of just letting the performance be a known quantity so we can all decide for ourselves.

mholt wrote at 2021-11-29 16:20:35:

We aren't "hiding" any performance stats. They just don't exist. It's super hard to generate them in a way that applies perfectly to everyone's use case.

I'm still waiting to hear your performance requirements btw.

PragmaticPulp wrote at 2021-11-29 16:54:10:

> I'm still waiting to hear your performance requirements btw.

I suppose this encapsulates why I get turned off of Caddy every time I look into it: Basic concerns (including questions about approximate performance) are met with suggestions that I don’t know what I want and therefore I’m not qualified to make such judgments.

throwaway894345 wrote at 2021-11-29 17:02:49:

You're being unduly defensive. mholt is telling you that he can't provide meaningful general-purpose performance numbers because the use cases for Caddy vary too widely, so he's asking about your specific use case in order to give you the very information you're complaining about not being able to access. But you're being cagey about sharing those requirements.

You're certainly within your rights to not use Caddy, but it makes no sense to complain that you can't get the performance information you need when you won't share any information about your use case.

PragmaticPulp wrote at 2021-11-29 17:10:43:

> but you're being cagey about sharing those requirements.

I don’t have _one_ single set of requirements for every location I use a web server. Nobody does.

If Caddy is good for one thing but not so good for other things, why is it so hard to come out and say that?

Also, the benchmark I was questioning in the opening comment came from mholt on the official community forum. It’s mind boggling that I’m getting attacked for questioning the benchmark result that they provided in the first place.

joshmanders wrote at 2021-11-29 17:23:21:

Nobody is attacking you. Calm down, yeesh.

You want performance metrics but can't even tell us which metrics matter to you, yet you expect Matt and the Caddy team to have them readily available... You understand that your responses here are directly proving his point about not having them, right?

throwaway894345 wrote at 2021-11-29 17:21:26:

> I don’t have one single set of requirements for every location I use a web server. Nobody does.

Right, you have to analyze each case separately. There's no way to _meaningfully_ quantify performance across all cases.

> If Caddy is good for one thing but not so good for other things, why is it so hard to come out and say that?

Perhaps because that doesn't tell you anything? You can post microbenchmarks for very specific use cases but almost everyone extrapolates way too much from microbenchmarks and the folks who don't will tell you that microbenchmarks are virtually useless.

> It’s mind boggling that I’m getting attacked

Oh grief, you're not being "attacked". You're being melodramatic. I don't have a dog in this fight, I was just trying to clarify the conversation. If you're bent on reacting to a distorted perception of the conversation, there's not much I can do about that.

stronglikedan wrote at 2021-11-29 17:29:02:

> mholt is telling you

It's not what he's saying, but how he's saying it. "I'm still waiting to hear your performance requirements btw.", is a condescending, dismissive tone that would put more people off than not. Especially considering it was a reply within just a couple minutes of his initial question.

mholt wrote at 2021-11-29 17:43:04:

Sorry, don't mean to be condescending or dismissive. Just trying to get stuff done. To do that, I'll need that information.

genewitch wrote at 2021-11-29 19:40:50:

I've only used caddy twice, but it worked for like 2 years without interruption both times.

I don't think you're being negative or whatever and I also think the other person has an idea of what they want to know but not how to explain it.

Do they want connection handling metrics? SSL terminations? Caching performance?

Also, not at you but a general laugh, "the reverse proxy isn't the bottleneck ever" is funny because prior to haproxy, getting gigabit throughput to a backend was considered extreme engineering.

Now I can load balance and ssl terminate a gigabit with like 5 raspberry pi and an old Intel box to load balance and cache.

Zababa wrote at 2021-11-29 17:39:56:

On the Caddy website (https://caddyserver.com/) it says:

> Caddy is both a flexible, _efficient static file server_ and a powerful, scalable reverse proxy.

> LIGHTWEIGHT For all its features, Caddy runs _lightly and efficiently_ with _relatively low memory footprint and high throughput_.

> STATIC FILES By default, Caddy will serve static files in the current working directory. It's so brilliantly simple and works _fast_.

> Caddy's HTTP server has a wide array of modern features, _high performance_, and is easy to deploy.

Emphasis in italics mine. All of these seem to hint at performance comparisons having been made in specific domains against other web servers. For example:

- efficient static file server: I would assume that "efficient" here is compared to other file servers, though that could mean also that it doesn't use much resources

- For all its features, Caddy runs lightly and efficiently with relatively low memory footprint and high throughput: That seems to imply that other servers with all these features are heavier and less efficient? Or that other servers just don't have those features? Relatively low memory footprint and high throughput relative to what?

- It's so brilliantly simple and works fast: What does fast mean here? Fast relative to what? Apache? The standard Go static file server? Python SimpleHTTPServer?

- high performance: High compared to what?

My goal here isn't to be confrontational. My point is that if there are all those claims on the main page of Caddy, then you must at some point have measured performance, resource usage, and so on against other servers. I think what people are asking is for you to substantiate those claims with data. You're right that it's hard to generate data in a way that applies perfectly to everyone's use case. But then on what do you base all these claims? Why is Caddy called "the ultimate server"? At least that's what I'm wondering.

mholt wrote at 2021-11-29 17:53:03:

Not everything needs to be read as a comparison to other products. Adjectives have meaning without competition. They can be interpreted as ratios, for example requests / available memory, or load / latency.

It's not always about putting down other products. Both can be fast. Both can be efficient. Both can be fast and light. Both can be "high" without one having to be low in order for it to be valid to call it "high" -- because it's not always about the other product.

This discourse makes me exhausted, every time it comes up. I'm tired of everyone thinking I'm trying to outperform NGINX. Caddy's just fast and efficient, I don't care to defend that claim as much as people care to attack it.

Xevi wrote at 2021-11-29 19:29:59:

Then what do they mean to you? I think people are just trying to understand. What is "fast" and "efficient" to you? Where do you draw the line?

When it comes to tech in particular, things are pretty much always compared relative to each other, because there are very few other reference points. React is generally deemed slow relative to Svelte. Rust is more efficient than Python at runtime. These claims mean very little in a vacuum; in fact, I'd say they are almost pointless. You can't base your business decisions on claims that haven't been backed up by data. And if you do, you might regret them later on.

Zababa wrote at 2021-11-29 18:29:55:

I mean, I get your point, but adjectives don't exist in a vacuum. If everything is fast, everything is slow too, and nothing is fast too, because everything is the same. When you use an adjective like "fast" or "slow", you're separating yourself from the whole, and you're putting the thing you're talking about in a category. Of course this is relative. You can be a fast marathon runner. You can be a fast human. Or a fast animal. Or a fast car, or plane. All of those have vastly different speed. But they still have meaning.

Now, Caddy is a web server. You say that it's fast and efficient. I'm not putting that into question. But what do you base that on? What would it take for Caddy to be judged as "slow" or "inefficient" by you? Maybe you don't have a precise answer, and that's perfectly okay. But you must have a way to evaluate what's slow and what's fast, even when not comparing to another product. At least that's how I (and, I assume, the people asking you all these questions) think.

> This discourse makes me exhausted, every time it comes up. I'm tired of everyone thinking I'm trying to outperform NGINX.

That's fair, and I can empathize. However, I think that's what will happen when the first thing people see on the website of Caddy is "THE ULTIMATE SERVER". Ultimate, for a good amount of people, is related to performance, so people are going to ask you about that. And when they don't get a clear answer, they're going to get frustrated.

Again, I'm not trying to be confrontational here. But there seems to be a real lack of understanding in this conversation, and I'm trying to clarify what people mean when they ask about Caddy's performance.

Hamcha wrote at 2021-11-29 15:59:28:

I'm an avid user of Caddy, but I think the more time passes and the more opinionated it gets, the worse it becomes overall. I don't object to highly opinionated takes, but when your biggest selling point is how neat the configuration is to write, it becomes a pain when it just isn't anymore.

Two examples from my experience upgrading from Caddy 1 to Caddy 2:

Ex.1: Caddy 1 by default bound to an arbitrary port and served via HTTP; they now default to HTTPS, which doesn't work for me since I use Traefik in front of it. Bypassing this means adding an ugly ":80" in front of every single vhost I have (20+); before, it was just a matter of adding "ssl" if I wanted HTTPS.

Ex.2: The reverse proxy is now transparent by default. In Caddy 1 it wasn't, and you just added a "transparent" flag if you wanted it; now you can't opt out of it and you have to manually specify every header in the config.

I don't think my use case is that unusual (file server behind an edge router), and yet I feel like I have to work against Caddy every few steps of the way. At some point I will have to ask myself how much more work it would take to just go back to nginx instead.
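[For what it's worth, Caddy 2's docs describe a global option that avoids per-vhost ":80" prefixes. A hedged sketch — the site name and paths are hypothetical:]

```Caddyfile
# Global options block (must come first): disable automatic HTTPS for
# every site, so each vhost serves plain HTTP without a ":80" prefix.
{
    auto_https off
}

files.example.com {
    root * /srv/files
    file_server
}
```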

mholt wrote at 2021-11-29 16:16:28:

Thanks for the feedback.

> when your biggest selling point is how neat the configuration is to write

To clarify, neat/easy configuration isn't our biggest selling point in v2. _Flexible_ configuration is (one of them, anyway). Marketing an advanced web server as "easy" was a mistake in hindsight, so we don't do that anymore.

As for your use case, understandably it's slightly more tedious to do what you're trying to do. However, in our experience in helping hundreds of users in the forums and from what we see in issues, transparent proxy was the more common use case by far, and disabling that is usually as simple as setting the Host header to the address of the upstream. And we feel that enabling HTTPS universally is easier to understand than only enabling HTTPS for some hosts, and we made sure it's easy to disable when necessary.
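[A minimal sketch of that Host-header override, if I read the docs right — the site name and upstream port are hypothetical:]

```Caddyfile
example.com {
    reverse_proxy localhost:9000 {
        # Send the upstream's own host:port instead of the client's Host
        # header, i.e. opt out of the transparent-proxy behavior.
        header_up Host {upstream_hostport}
    }
}
```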

(Most people just use Caddy, rather than proxying to it from other servers like Traefik, since Caddy can fit those use cases as well, especially with this plugin:

https://github.com/lucaslorentz/caddy-docker-proxy

)

yjftsjthsd-h wrote at 2021-11-29 16:46:18:

I dunno, I appreciate that they're not optimizing for your usecase, but I think their defaults do in fact make sense for the majority of users they're targeting. And yes, that means that if you're trying to use it for something different and/or you want a lot of tweaking you might be better off with something else.

cunthorpe wrote at 2021-11-29 16:14:28:

To be fair, HTTPS by default makes perfect sense for a server. Of course the more complex the setup, the more you’re going against the defaults. It’s impossible to have defaults that work for everyone, so they chose the most common/desirable/sellable default.

mook wrote at 2021-11-29 18:16:31:

I'd argue that HTTPS by default makes perfect sense for a _public facing_ server. For something on an internal network it doesn't make as much sense, especially since that makes using the default path for getting certificates difficult (can't get certs if you don't have DNS).

I think what I'm trying to say is that Caddy was initially great for me (serving static files on the local network by IP address as an easy way to transfer files), but hasn't been that for ages.

francislavoie wrote at 2021-11-29 19:15:01:

FWIW, Caddy can act as its own CA, so it can issue certs for any private site, and you can add the root CA cert to any machines necessary.

> but hasn't been that for ages

Huh? It still is. This is all you need:

    :80 {
        root * /path/to/files
        file_server
    }

mholt wrote at 2021-11-29 19:44:17:

Or without a config file, simply:

    $ caddy file-server --root /path/to/files

crummy wrote at 2021-11-29 18:02:41:

I rave about Caddy. Worth it just to never have to think about HTTPS again. We used to have an Nginx+certbot docker container and it was fragile at best, but migration to Caddy was very easy.

One feature it has that I love: If a request comes in but a backend is down, instead of returning an error it'll wait a set amount of time for the backend to come up, then forward the request. This means you can do zero-downtime deploys with nothing more complicated than "docker-compose up -d". I know that there are better ways to do true seamless deploys but they all introduce enough complexity to a system that we want to keep as simple as possible.
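[If I understand the docs correctly, that retry window is the reverse_proxy load balancer's "try duration". A hedged sketch — the site name, port, and durations are illustrative:]

```Caddyfile
example.com {
    reverse_proxy localhost:8080 {
        # Keep retrying the backend for up to 30s before failing the
        # request, probing every 250ms, so a brief restart during
        # "docker-compose up -d" doesn't drop traffic.
        lb_try_duration 30s
        lb_try_interval 250ms
    }
}
```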

mholt wrote at 2021-11-29 18:22:21:

I like that feature too! Someone else just wrote about it this week:

https://til.simonwillison.net/caddy/pause-retry-traffic

number6 wrote at 2021-11-29 18:16:49:

For Docker I use Traefik 2 and I really like how it integrates; you can also set up a zero-downtime deployment with it.

bb1234 wrote at 2021-11-29 14:18:57:

Yes, Caddy is awesome! About 3-4 years ago it made it super-easy (and possible) for me to serve my websites with `https`. Before that, I was using nginx, and the process of obtaining certificates seemed quite complicated. Matt Holt, thank you for creating Caddy.

mholt wrote at 2021-11-29 15:51:48:

You're welcome ^_^ Thanks for the nice comment

Tomte wrote at 2021-11-29 15:30:56:

I learned only recently that Apache can do automatic Letsencrypt certificate management, not with some third-party module, but with the bundled mod_md.

KronisLV wrote at 2021-11-29 18:30:01:

This is super interesting!

Of course, it's still experimental:

https://httpd.apache.org/docs/trunk/mod/module-dict.html#Sta...

  "Experimental" status indicates that the module is available as part of the Apache kit, but you are on your own if you try to use it. The module is being documented for completeness, and is not necessarily supported.

Nonetheless, it would certainly help make Apache competitive with the likes of Caddy and Nginx again, if they manage to implement the functionality in a resilient and easy-to-use manner.

ViViDboarder wrote at 2021-11-30 01:10:34:

> and Nginx

Does nginx now have this too? I know Caddy and Traefik do.

KronisLV wrote at 2021-11-30 06:46:20:

Only through Certbot, as far as I know:

https://certbot.eff.org/

That said, Nginx still blows most other web servers out of the water: it has excellent performance, and its configuration file format is easier to grok than Apache2/httpd's, letting you get more done in less time. Perhaps this is why it's often chosen as the default solution for implementing an ingress, e.g. in Kubernetes.

And in my eyes that's a shame, since Apache2/httpd has served me faithfully for years and is a decent web server in its own right. Hence, this new functionality has the potential to either push Nginx to adopt it as well (which would be a net positive for everyone), or simply make more people consider Apache2/httpd for their deployments (which is good, because then its decline will be slower).

joshstrange wrote at 2021-11-29 18:16:54:

I love using Caddy for my self-hosted apps; it's so easy to set up, and with a little work I got SSO working (v1) for all my apps, which is awesome. I no longer have to maintain a different password for each service I run, and once I log into one of them, I'm logged into all of them. I need to port my setup to v2, but I've been lazy about doing it since some of the plugins/config I use have changed or been removed in v2. I've never used Caddy in production because I've not been in a position to dictate using it or, before switching off nginx/lighttpd, to benchmark it, but its ease of use for SSL alone is second to none.

fabiospampinato wrote at 2021-11-29 16:12:53:

I can't recommend Caddy enough. I switched to that for a simple server of mine and I was able to throw away a whole bunch of junk as a result.

The only thing I wasn't so sure about was the DSL that should be used for configuring Caddy, but after using it I have to say it's very well designed, and rewriting the same configuration in something like JSON would only complicate things in the end.

tiffanyh wrote at 2021-11-29 16:46:48:

What makes Caddy setup of Lets Encrypt so much easier?

I frequently hear that being a huge reason for Caddy use but looking at the documentation for NGINX [0] and Caddy [1], it doesn't seem to be much different.

[0]

https://www.nginx.com/blog/using-free-ssltls-certificates-fr...

[1]

https://caddyserver.com/docs/automatic-https

mholt wrote at 2021-11-29 16:54:07:

NGINX requires external tooling (e.g. certbot), Caddy's auto-HTTPS is built-in. Caddy's auto-HTTPS logic is much smarter and more robust than certbot or any other ACME client. In general, Caddy sites stay up when other sites will go down due to HTTPS/certificate issues. Certbot+cron is much more brittle than Caddy's embedded handling of certificates. Plus, Caddy has stronger memory safety guarantees than a C program like NGINX.

Caddy automatically staples and caches OCSP responses by default, as well. OCSP responder outages? No problem. This sometimes brings sites down (esp. Must-Staple certs in Firefox) because other servers' OCSP stapling implementations are not as robust. I remember when gnu.org went down for a while because of this.

Certificate got revoked? No problem, Caddy will automatically detect that via OCSP and replace it for you.

Internal throttling and retry logic makes it more resilient to domain validation problems. Caddy will fall back to a secondary CA if your first CA is having trouble issuing a certificate (i.e. multiple default CAs). Not to mention, when using Let's Encrypt, Caddy will retry with its staging endpoint to help avoid rate limit issues.

Plus, deploy multiple Caddy instances as a cluster and they will automatically share and coordinate management of certificates and OCSP staples. (Doing this is as simple as configuring each Caddy instance to use the same storage backend.)

It's very quick to get started and try out HTTPS:

https://caddyserver.com/docs/quick-starts/https
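To illustrate (domain and paths here are just placeholders), a complete Caddyfile for a static site served over HTTPS is:

```Caddyfile
# Caddy obtains and renews the certificate for this domain automatically.
example.com {
    root * /var/www/example
    file_server
}
```

No certbot, no cron, no external tooling.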

cwaffles wrote at 2021-11-29 17:15:47:

Nginx has an excellent track record for CVEs:

https://www.cvedetails.com/product/17956/Nginx-Nginx.html?ve...

nicolaslem wrote at 2021-11-29 17:02:21:

The automatic HTTPS feature of Caddy is a game changer. I enabled it once on a server five years ago and never had to take a second look. I am surprised that this is not a standard feature of all web servers. There is no way I am fighting with certbot (or whatever the recommended software is) anymore.

That being said, the rest of the software is not as amazing. Please nginx, could you add automatic HTTPS so I can ditch Caddy?

tialaramex wrote at 2021-11-29 17:41:50:

This is what's crazy. I _assumed_ two things would logically happen in the year or two after Let's Encrypt went live. Firstly, bulk hosts would all either negotiate a very cheap bulk deal with a for-profit CA, or they'd go to Let's Encrypt. That mostly happened, eventually, but there are some hold-outs still, years later. Secondly, all the TLS capable servers, but especially HTTPS would just throw this in as part of the core system.

Nope. Caddy is one of a minority that put more than a token effort in; a few others have some optional functionality that you could choose to use if you know what you're doing, and most did nothing, relying on you to roll a Certbot script or whatever to manage the certificates.

I figured corporate products, maybe Microsoft tools, stuff like that, might prefer to provide SCEP or any of the half a dozen mechanisms for certificate issuance that pre-dated ACME (what Let's Encrypt does) but lack the actual proof-of-control mechanism. You can imagine Big Corp decides to have a central ACME service doing DNS proof-of-control and then it uses say SCEP internally to issue the certificates to its own servers. That didn't really happen either.

The situation outside web servers is even worse. SQL Servers, SMTP† and IMAP, IRC, and so on, can't do this as easily as Caddy does, but few of them even made a token effort. I don't have much faith in the "efficient market" hypothesis, but if you're a server admin and you believe in it this ought to really shake you. Apparently you value your own time at zero dollars.

† SMTP is a special case. In principle a CA could (but Let's Encrypt don't) allow you to do proof-of-control via SMTP because it uses one of the ports set aside for proof of control purpose in the Baseline Requirements. So you could imagine a CA offering a new ACME proof-of-control method say smtp-69 that meant suitable mail servers would just get themselves certificates, no extra software. But that is not a thing. Other services like IRC or indeed IMAP are not covered by the BRs and so you could not do this for those servers without a lengthy and political process to amend the BRs first.

mholt wrote at 2021-11-29 17:03:49:

What is not as amazing?

nicolaslem wrote at 2021-11-29 17:20:11:

I have not kept up with Caddy in a long while, so take with a grain of salt:

- The weird license debacle.

- The weird ads in headers debacle.

- I had trouble with advanced usage that should have worked but were not flexible enough, like acting as a cache.

- The marketing rubbed me the wrong way. It was pointing out the deficiencies in other established FOSS web servers, basically treating them as relics, while Caddy itself had 10% of their features.

francislavoie wrote at 2021-11-29 17:25:48:

In case you missed it:

https://github.com/caddyserver/caddy/issues/2786

Those issues are a thing of the past. There's no use bringing it up again, tbh.

> like acting as a cache

That's fair, we have a WIP cache module here

https://github.com/caddyserver/cache-handler

It should be ready soon!

mholt wrote at 2021-11-29 17:44:32:

I'm excited for that cache module.

mholt wrote at 2021-11-29 17:24:13:

Hmm, so you're hung up on all things we've fixed with Caddy v2 years ago. I recommend giving it a try again with a fresh mindset. It's a totally different product.

nicolaslem wrote at 2021-11-29 18:25:04:

I don't know if the tone of the marketing is fixed. In the past I found the lack of humility surrounding the project off putting. Now I go to the website and the first thing I see is a giant "THE ULTIMATE SERVER", not a good start.

mholt wrote at 2021-11-29 18:27:12:

We're more confident about that one.

philipwhiuk wrote at 2021-11-29 19:35:28:

How can you be confident about it being the ultimate server when you say comparing performance can't be done?

mholt wrote at 2021-11-29 19:41:11:

There's a lot more to "ultimate" than just performance. It's a cross of many dimensions.

nullwarp wrote at 2021-11-29 16:51:35:

It requires no third-party tooling

zinxq wrote at 2021-11-29 15:41:03:

Good lesson here, really. Nginx is an awesome, powerful tool, but quite often you don't need something that sophisticated. Also, because of its abilities (and legacy), configuring it is a learned skill.

Caddy just "works" (the advent of Certs that "just work" with LetsEncrypt helped enable that).

That fills a real and important use case.

rgrmrts wrote at 2021-11-29 16:49:40:

Big fan of Caddy as well. I recently listened to the Sourcegraph Podcast episode with Matt Holt. Pretty great episode, would recommend!

https://about.sourcegraph.com/podcast/matt-holt/

doteka wrote at 2021-11-29 20:38:59:

Caddy is the best. I use it as the static file server and reverse proxy for several side projects running in docker-compose.

What I like most about it is how little config you need for reasonable defaults that would require 300 lines of nginx boilerplate.

If I could wish for one thing though, I’d really like the functionality to get let’s encrypt certs while being proxied through cloudflare to be built in. Right now it requires building a custom caddy with a plug-in, which is a lot of hassle for such a vanilla setup.
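For anyone in the same boat, the custom build itself is at least short with xcaddy (untested sketch; module path and env var are per the caddy-dns/cloudflare plugin's docs):

```shell
# Build Caddy with the Cloudflare DNS plugin:
xcaddy build --with github.com/caddy-dns/cloudflare

# Then, in the Caddyfile, solve the ACME challenge via Cloudflare's DNS API
# so issuance works even while proxied ({env.CF_API_TOKEN} is read from the
# environment):
#
#   example.com {
#       tls {
#           dns cloudflare {env.CF_API_TOKEN}
#       }
#   }
```

But I agree it's more ceremony than such a common setup deserves.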

mholt wrote at 2021-11-29 21:06:43:

Thanks for your feedback. We try to avoid tight integration with specific, third-party providers as much as possible to keep Caddy light and flexible. Cloudflare is popular, but is also not a majority use case for Caddy users, in our experience. The Cloudflare plugin has only 16,000 downloads (and Route53 has 10,000)... out of over a million custom builds (not to mention 100M+ Docker pulls, or other ways of installing Caddy). So it's not enough of a standout to merit inclusion by default.

doteka wrote at 2021-11-30 05:43:12:

Thanks for the reply, mholt!

I would have expected way higher numbers there, but going by that ratio you’re absolutely right. Regardless, thanks for the great work on Caddy, it’s rock solid.

buybackoff wrote at 2021-11-29 17:34:56:

With acme.sh, 80% of the time is spent on configuring Cloudflare API keys; the remaining 20% is just editing and running a couple of commands (so both steps are fast). The certificates are installed in any dir, e.g. `etc/nginx/ssl/mysite.com`, and auto-renewed, with a custom shell command run afterwards, e.g. to restart NGINX.

Tried Caddy before learning acme.sh. Even looked at the source code. Nice idea, but it's a new custom thing. I spent very little time learning NGINX: everything is googleable, with multiple tutorials, Q&As, and samples on every possible subject and use case. E.g. homelab Guacamole was freezing, and the first search result was the fix: not trivial, but just copy-pasting several config lines. It's hard to beat the performance and simplicity of NGINX. It doesn't require learning all possible tweaks from the start; you learn as you go, from a 5-line SSL reverse proxy config up to whatever complexity is required.
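For reference, the whole flow is roughly this (untested sketch; token, paths, and reload command are illustrative):

```shell
# Issue the certificate via Cloudflare's DNS API (token read from env):
export CF_Token="..."
acme.sh --issue --dns dns_cf -d mysite.com

# Install it where NGINX expects it; acme.sh re-runs --reloadcmd after
# every auto-renewal:
acme.sh --install-cert -d mysite.com \
    --key-file       /etc/nginx/ssl/mysite.com/key.pem \
    --fullchain-file /etc/nginx/ssl/mysite.com/fullchain.pem \
    --reloadcmd      "systemctl reload nginx"
```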

cosmotic wrote at 2021-11-29 18:18:18:

In addition to the auto HTTPS which many mentioned, it also has simplified PHP support. I also find the configuration files are easier to write with more consistent syntax. They are smaller than similar nginx rules, and it's easy to test them because caddy will automatically reload when the file is saved.
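For example, a PHP site is roughly this much Caddyfile (the php-fpm socket path is a guess for a typical install):

```Caddyfile
example.com {
    root * /var/www/html
    # Proxies *.php to php-fpm and serves everything else as static files:
    php_fastcgi unix//run/php/php-fpm.sock
    file_server
}
```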

ElCapitanMarkla wrote at 2021-11-29 12:55:17:

Caddy is fantastic. I do the odd freelance contract making touch-screen interactives for museums here. These have to run on the local host machine, which usually isn't networked, and Caddy is perfect for this. The last one I did had a massive DeepZoom/OpenSeadragon image tileset, and Caddy managed to serve it flawlessly where my old go-to server, Mongoose, couldn't keep up.

mholt wrote at 2021-11-29 17:03:05:

Neat use case! First I've heard of it. Can you tell me more about the museums?

zamadatix wrote at 2021-11-29 15:22:39:

Caddy is great; it makes it easy to just set something up and go. For some reason I can't put my finger on, I was really fighting with getting what I wanted out of the documentation initially, but either it got better since Caddy 2 released or I just got used to it/Caddy and am no longer fighting "newness".

cpach wrote at 2021-11-29 15:05:59:

I love Caddy!

It’s so much nicer to configure compared to Nginx.

No plugins needed for automatic TLS certs – it's built-in.

And it’s written in a memory-safe language – Go. Which also means a single static binary to deploy.

I can warmly recommend it.

junon wrote at 2021-11-29 15:15:38:

Go is not memory safe; not sure where you got that idea. Go programs can absolutely succumb to nil pointer dereferences, memory leaks, etc.

jrockway wrote at 2021-11-29 15:23:22:

Safety can include crashing. It's better than returning subtly incorrect results. Yes, there are nil pointers and unnecessary data structures that people keep around, but that's miles ahead of C where an HTTP request and a missing bounds check can inject new code into the application.

If you have a Go program where user input can be executed as code (no cgo, no unsafe), I'd love to see it.
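To make "safety can include crashing" concrete, here's a minimal sketch: an out-of-bounds access in Go panics (recoverably) instead of silently reading or corrupting adjacent memory the way C might.

```go
package main

import "fmt"

// safeGet returns s[i], converting the runtime's bounds-check panic
// into an error instead of letting bad input touch arbitrary memory.
func safeGet(s []int, i int) (v int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()
	return s[i], nil
}

func main() {
	s := []int{1, 2, 3}
	// i could have come from untrusted input; the worst case is an error,
	// not code injection:
	if _, err := safeGet(s, 5); err != nil {
		fmt.Println("caught:", err)
	}
}
```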

junon wrote at 2021-11-29 21:41:31:

This is splitting hairs and arguing semantics. The absolutist statement "Go is memory safe" is simply incorrect.

mholt wrote at 2021-11-29 21:44:34:

> The absolutist statement "Go is memory safe" is simply incorrect.

That absolutist statement is _also_ incorrect.

"Go has strong memory safety guarantees" is more accurate. But neither "Go is memory safe" nor "Go is not memory safe" are correct.

cpach wrote at 2021-11-30 16:43:09:

Ok, cool. TIL!

Zababa wrote at 2021-11-29 15:44:08:

> no cgo

If you're not careful, you will use c code when using os/user and net according to

https://www.arp242.net/static-go.html


mholt wrote at 2021-11-29 15:51:29:

Memory safety is a spectrum, and Go is much higher on that spectrum than C/C++.

tcard wrote at 2021-11-29 15:31:03:

That's type safety, not memory safety.

... But no, Go isn't memory safe, insofar as data races can cause memory corruption.

junon wrote at 2021-11-29 21:33:39:

That is a part of memory safety, not just type safety. Type safety could _fix_ this form of memory issue, yes.

BilalBudhani wrote at 2021-11-29 18:04:10:

Caddy is a great piece of software.

We, at hatchbox.io, have moved to Caddy (v2) for all our applications, and honestly the experience so far has been great. It's bliss not to deal with SSL certificate generation for our customers' apps and to let Caddy take care of it.

mholt wrote at 2021-11-29 21:07:37:

That's great to hear!

orware wrote at 2021-11-29 20:51:20:

I'm not sure I understand the criticism in the top comment in this thread about Caddy's performance.

I don't think I ever really decided to use Caddy solely based on performance itself (although being already fairly familiar with Go's built-in libraries for creating HTTP servers gave me confidence that it would be fairly quick... I wasn't overly concerned about how it directly compared to Nginx/Apache/IIS/etc.).

For me, the biggest selling point was the Caddyfile format, which felt a lot more human readable than the .htaccess or Nginx configuration options I had seen over the years (I know with Caddy v2 there's the possibly now more preferred JSON option, but I'm still a sucker for the original Caddyfile format myself :-).

There's still a lot I don't know about all of the web server options out there in general, so I don't claim to be an expert in any of them, but Caddy was the first one I felt comfortable using in new situations to simplify things in my environment.

For example, Caddy's reverse proxy functionality (and the way it is set up within the Caddyfile) is what finally made that particular capability "click" for me over 5 years ago when I first came across Caddy (even though I had seen info on how Nginx + Apache were used together in the years prior, with Nginx providing reverse proxy access to Apache in those hybrid setups, it had never "clicked" for me). Since my environment is fairly simple (no real super-high-demand situations requiring lots of load balancing), we mainly use Caddy to provide automatic SSL capabilities and act as the publicly accessible endpoint in our environment, and use the reverse proxy functionality to tie Caddy into the various other internal servers that need that SSL termination.

I've also been able to take advantage of the built-in static file server for an internal website need and it's actually something I really think is a nice/neat feature to have available (especially due to its ability for simple server side includes, allowing for relatively easy simple templating).

Separate from that, I've also experienced firsthand the personal responses that Matt Holt has provided to me and to many others in the Caddy community, and I'm definitely appreciative of the work he's put into Caddy (along with others, of course). I hope some of the negativity in this thread can be easily shrugged off by Matt, because I imagine it can be exhausting to deal with when he genuinely does care and puts a lot of thought/effort into the responses he provides.

mholt wrote at 2021-11-29 21:10:06:

Thank you for sharing your experiences. Lots of positive feedback here has helped to offset the negative, so it's only a _little_ exhausting today. :)

kalev wrote at 2021-11-29 15:14:13:

Been using Caddy now for multiple years and will use it for all future projects. Highly recommended.

itsjloh wrote at 2021-11-29 21:55:32:

I really like Caddy and use it on a few projects. It's a good piece of software that generally just works.

Their latest security incident leaves me feeling somewhat uneasy, though. The only announcement for it was on Twitter[1] and a footnote on one of the releases[2]. I don't believe there was ever a mention of the incident on the official forum, so if you weren't following them on Twitter you'd never find out.

The general messaging around it was "a GitHub bug caused it", and it's never really been followed up on publicly :\

[1]:

https://twitter.com/caddyserver/status/1338324878441603073

[2]:

https://github.com/caddyserver/caddy/releases/tag/v2.2.1

francislavoie wrote at 2021-11-29 22:19:07:

It wasn't a security incident, actually. It's true that "a GitHub bug caused it". It wasn't malicious.

TL;DR: a contributor made a tag on their own fork of Caddy, and for some reason our next release used their tag, because it turns out forks on GitHub aren't actually separate repos, but rather "still technically the same repo". It's really strange. It wasn't that contributor's fault either; they had no idea that would happen.

All that happened is that the v2.2.2 git tag wasn't properly signed with Matt's signing key. There was no problem with the code at all.

We've put in place checks during our CI actions to ensure that releases are always verified to be signed by Matt's key. See

https://github.com/caddyserver/caddy/pull/3932

Specifically, commit

https://github.com/caddyserver/caddy/commit/1d473ae924f0d52c...

(which you'll notice is _not_ part of the Caddy repo, it was actually from a fork which was later deleted) triggered this GitHub Actions job for the release

https://github.com/caddyserver/caddy/actions/runs/392345801

which we cancelled when we noticed it was happening. But we can't remove the tag from Go's caching server

https://pkg.go.dev/github.com/caddyserver/caddy/v2@v2.2.2

so it's kinda there forever.

More context:

https://twitter.com/mholt6/status/1337879764317564928

,

https://twitter.com/danlsgiga/status/1338859470227529732

That whole Twitter thread has many branches, so click around to get the whole conversation.

itsjloh wrote at 2021-11-29 22:39:30:

It’s great you’ve got some protections in place to prevent it from happening again. That inspires some confidence.

Thanks for all the work yourself and Matt do on Caddy.

francislavoie wrote at 2021-11-29 22:42:27:

Thanks for the kind words :)

Shoutout to

https://github.com/mohammed90

who's been a huge part of our CI/CD efforts on Caddy (among many other things), including that tag verification PR linked above.

ksec wrote at 2021-11-29 17:33:51:

We already have a number of features to land in the upcoming 2.1 release:

Well, it turns out the latest release on GitHub is already v2.4.6.

I wish there were a changelog / release notes link on the homepage somewhere.

Basically I think they need to delete the whole V2 section. I find it quite confusing.

mholt wrote at 2021-11-29 17:45:11:

Yeah, sorry. Been so focused on dev that sometimes parts of the website get left behind.

ksec wrote at 2021-11-29 18:20:53:

It's all good. No need to apologise :)

Apart from the release notes and the V2 section, you may want to include a section of big-name clients/users that are already using Caddy below the fold, or right before the "fewer moving parts" part.

Maskawanian wrote at 2021-11-29 17:10:22:

I tried to use Caddy, but I wasn't able to find a currently supported module for IP filtering, just a bunch of old unmaintained ones. Does anyone know if this is currently possible?

francislavoie wrote at 2021-11-29 17:16:50:

Depends what you mean by "IP filtering", but there's a built-in 'remote_ip' matcher that can let you handle requests from specific IPs/CIDRs differently (such as aborting/forcefully closing the connection):

https://caddyserver.com/docs/caddyfile/matchers#remote-ip

mholt wrote at 2021-11-29 17:11:48:

Yes, this is very common if I understand you correctly. You want the remote_ip matcher:

https://caddyserver.com/docs/caddyfile/matchers#remote-ip
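For instance, to drop requests from a given range (the addresses here are just documentation examples):

```Caddyfile
example.com {
    @blocked remote_ip 203.0.113.0/24 198.51.100.7
    # Forcefully closes the connection for matched requests:
    abort @blocked

    respond "Hello, world!"
}
```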

KronisLV wrote at 2021-11-29 18:19:17:

In my eyes, Caddy is a lovely web server that works pretty well as ingress for container clusters (e.g. Nomad, Docker Swarm, etc.). That said, I can't help but feel that v1 was easier and in some ways nicer to use than v2, even though it's abandoned at this point.

More generally, I have certain grievances with most of the web servers out there.

Apache2/httpd - actually decently usable even nowadays, but if the fragmentation of service names (httpd vs apache2, with additional scripts like a2enmod) between different distros doesn't hurt it, then the configuration format and how it does reverse proxying and path rewriting most certainly will. The performance is still passable, no matter what anyone says; my applications have still been the bottleneck in approx. 95% of cases, though that might change with frameworks like Vert.X or such. The further down you scroll, the less user-friendly it becomes:

https://httpd.apache.org/docs/2.4/rewrite/remapping.html

Admittedly, the docs themselves are good, though, despite the syntax that you're stuck with.

Nginx - recently migrated my ingress to it at work, seems pretty okay so far, the configuration format seems to make a bit more sense and probably lies somewhere between Apache and Caddy as far as its ease of use and pleasantness goes. I no longer even need rewrite rules to get websockets working properly, which is nice. And my containers can have all of the necessary config in a single file vs the unnecessary boilerplate fragmentation that httpd forces upon me. For example, both of these seem more passable to me when compared to Apache2:

https://docs.nginx.com/nginx/admin-guide/web-server/reverse-...

and

https://www.nginx.com/blog/creating-nginx-rewrite-rules/

Currently, my biggest gripe is that Nginx kills itself when it cannot resolve an upstream host, e.g. while Docker containers are still starting, their health checks haven't passed, and therefore their DNS records also haven't been created:

https://stackoverflow.com/questions/42720618/docker-nginx-st...

The worst part is that none of the suggested answers actually work for me, so I can't have a single Nginx instance in front of the development environment with about 20 containers; if a few of those are down when Nginx is restarted, many of them can't be used until the startup finishes. Unacceptable.
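For what it's worth, the workaround I've seen suggested most often is to defer resolution to request time by using a variable in proxy_pass, together with Docker's embedded DNS server (host/port here are illustrative; this is exactly the kind of answer from that thread that may or may not work for a given setup):

```nginx
server {
    listen 80;

    # Docker's embedded DNS server; re-resolve every 10s:
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces nginx to resolve "app" per request
        # instead of once at startup, so a missing container no longer
        # prevents nginx from starting:
        set $upstream http://app:8080;
        proxy_pass $upstream;
    }
}
```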

Caddy - as stated before, I liked v1 more than v2, though the project itself is pretty close to as good as a web server might get. What I don't enjoy is them taking the old docs offline, merely letting you download an archive; nor am I a fan of the current docs, since at the current point in time they are a bit like running "tar --usage":

https://caddyserver.com/docs/caddyfile/directives/reverse_pr...

It's nice that there are a few examples for the common use cases, but there probably could be even more, just look at what the PHP documentation has at the bottom for a good example:

https://www.php.net/manual/en/function.str-replace.php

(crowd-sourced, but I like the idea of letting the community contribute useful information like that).

Apart from that, some of the behavior is weird and you will get a 200 when you'd expect to get a 502/404 in most other web servers:

https://caddy.community/t/why-does-caddy-return-an-empty-200...

which will sometimes be misleading ("Huh, I'm not getting any data in the response to my request, even though the status is 200 in my log, weird...")

Also, I remember when v1 had this "fail-fast" habit of shutting down the entire server when renewing/obtaining a certificate failed, something that I utterly hate when web servers do:

https://github.com/caddyserver/caddy/issues/642

Admittedly, things are a bit better now:

https://caddyserver.com/docs/automatic-https#errors

I just don't understand why web servers can be so opinionated about these things and not provide something like "failure_action" in Docker Compose (

https://docs.docker.com/compose/compose-file/compose-file-v3...

) so that people can choose between either stopping everything as soon as problems manifest, or continuing with a "best effort" strategy.

If I'm hosting 100 sites behind a reverse proxy, I don't want 99 of them taken down just because 1 was misconfigured; the web server should be able to throw out a warning about that one host, if I tell it to, and proceed to run the remaining 99 as instructed. The day no web server forces me to cope with such brittleness will be a good day.

francislavoie wrote at 2021-11-29 19:30:29:

Regarding Caddy directive docs, there's examples right at the bottom. What are you missing, exactly? If you could be more specific, we can address it. But as-is, your comment is too vague to be actionable. Feel free to open an issue on

https://github.com/caddyserver/website

with specific examples you think are missing.

Regarding empty 200 responses, this is because "Caddy worked as configured". A 404 Not Found would be incorrect, because there was no attempt to "find" anything. A 400 would be incorrect, because the request was probably fine. A 500 would also be incorrect, because there was no error. The only option remaining, really, is an empty 200 response. It's the user's responsibility to make sure the configuration handles all possible requests with a handler that does what they want.
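If you want something other than the empty 200 as a fallback, a terminal catch-all handler does it (sketch; the backend name is illustrative):

```Caddyfile
example.com {
    handle /api/* {
        reverse_proxy backend:8080
    }

    # handle blocks are mutually exclusive; this one catches any request
    # the others didn't, instead of the default empty 200:
    handle {
        respond "Not found" 404
    }
}
```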

Regarding fail-fast on cert issues, the problem was that shutting down often triggers container restarts, causing Caddy to attempt issuance again, usually rapidly hitting rate limits. Caddy v2 no longer has this problem. I really can't imagine any situation where shutting down the server makes sense. Servers are kinda by-design supposed to be stable, and shutting down for any other reason than config/startup issues seems counterproductive. Do you have any specific usecase where it would be useful? You're the first to bring up this point since v2 was released.

zwarag wrote at 2021-11-29 18:15:47:

Anyone switch from traefik to caddy and can describe why they did it?

qbasic_forever wrote at 2021-11-29 20:06:56:

Better documentation for Caddy 2 vs. Traefik 2 IMHO. I still don't understand half of the weird labels and incantations Traefik wants for simple container proxy scenarios. Caddy's config is just a straightforward file if you want, like nginx or apache.

mholt wrote at 2021-11-29 18:19:17:

https://twitter.com/_stoakes/status/1425700401022705669

Shadonototra wrote at 2021-11-29 17:04:29:

Go such a wonderful language and stack

sneak wrote at 2021-11-29 17:02:58:

Remember when you had to put a cdrom into a special plastic tray before you could put it into the cd drive?