
💻 [runtimeterror $]

2024-09-22

Caddy + Tailscale as an Alternative to Cloudflare Tunnel

Earlier this year, I shared how I used Cloudflare Tunnel [1] to publish some self-hosted resources on the internet without needing to expose any part of my home network. Since then, I've moved many resources to bunny.net [2] (including this website [3]). I left some domains at Cloudflare, primarily just to benefit from the convenience of Cloudflare Tunnel, but I wasn't thrilled about being so dependent upon a single company that controls so much of the internet.

[1] shared how I used Cloudflare Tunnel

[2] moved many resources to bunny.net

[3] including this website

However, a post on Tailscale's blog this week [4] reminded me that there was another easy approach using solutions I'm already using heavily: Caddy [5] and Tailscale [6]. Caddy is a modern web server (that works great as a reverse proxy with automatic HTTPS), and Tailscale makes secure networking simple [7]. Combining the two lets me securely serve web services without any messy firewall configurations.

[4] post on Tailscale's blog this week

[5] Caddy

[6] Tailscale

[7] makes secure networking simple

So here's how I ditched Cloudflare Tunnel in favor of Caddy + Tailscale.

Docker Compose config

To keep things simple, I'll deploy the same speedtest app I used to demo Cloudflare Tunnel [8] on a Docker host located in my homelab [9].

[8] same speedtest app I used to demo Cloudflare Tunnel

[9] homelab

Here's a basic config to run openspeedtest [10] on HTTP only (defaults to port `3000`):

[10] openspeedtest

services:
  speedtest:
    image: openspeedtest/latest
    container_name: speedtest
    restart: unless-stopped
    ports:
      - 3000:3000

A Tailscale sidecar

I can easily add Tailscale in a sidecar container [11] to make my new speedtest available within my tailnet:

[11] Tailscale in a sidecar container

services:
  speedtest:
    image: openspeedtest/latest
    container_name: speedtest
    restart: unless-stopped
    network_mode: service:tailscale
  tailscale:
    image: tailscale/tailscale:latest
    container_name: speedtest-tailscale
    restart: unless-stopped
    environment:
      TS_AUTHKEY: ${TS_AUTHKEY:?err}
      TS_HOSTNAME: ${TS_HOSTNAME:-ts-docker}
      TS_STATE_DIR: /var/lib/tailscale/
    volumes:
      - ./ts_data:/var/lib/tailscale/

Note that I no longer need to ask the host to expose port `3000` from the container; instead, I bridge the `speedtest` container's network with that of the `tailscale` container.

And I create a simple `.env` file with the secrets required for connecting to Tailscale using a pre-authentication key [12]:

[12] pre-authentication key

TS_AUTHKEY=tskey-auth-somestring-somelongerstring
TS_HOSTNAME=speedtest
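Before putting the service behind a reverse proxy, it's worth confirming that the sidecar actually joined the tailnet. A quick sketch (the container name comes from the compose file above):

```shell
# Confirm the sidecar container registered with the tailnet
docker exec speedtest-tailscale tailscale status
```

If the node shows up with the expected hostname, the speedtest container is reachable through it from anywhere on the tailnet.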

After a quick `docker compose up -d` I can access my new speedtest at `http://speedtest.tailnet-name.ts.net:3000`. Next I just need to put it behind Caddy.

Caddy configuration

I already have Caddy [13] running on a server in Vultr [14] (referral link [15]) so I'll be using that to front my new speedtest server. I add a DNS record in Bunny for `speed.runtimeterror.dev` pointed to the server's public IP address, and then add a corresponding block to my `/etc/caddy/Caddyfile` configuration:

[13] Caddy

[14] Vultr

[15] referral link

speed.runtimeterror.dev {
        bind 192.0.2.1    # replace with server's public interface address
        reverse_proxy http://speedtest.tailnet-name.ts.net:3000
}

Note: Since I'm already using Tailscale Serve for other services on this server, I use the `bind` directive to explicitly tell Caddy to listen on the server's public IP address. By default, Caddy tries to listen on *all* interfaces, which would conflict with `tailscaled`, which is already bound to the tailnet-internal IP.

The `reverse_proxy` directive points to speedtest's HTTP endpoint within my tailnet; all traffic between tailnet addresses is already encrypted, and I can just let Caddy obtain and serve the SSL certificate automagically.
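The same pattern scales to any number of tailnet-backed services; each one is just another site block in the Caddyfile. A hypothetical second service for illustration (the hostname, tailnet name, and port are made up):

```
notes.runtimeterror.dev {
        bind 192.0.2.1    # same public interface as above
        reverse_proxy http://notes.tailnet-name.ts.net:8080
}
```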

Now I just need to reload the Caddyfile:

sudo caddy reload -c /etc/caddy/Caddyfile 
  INFO    using config from file  {"file": "/etc/caddy/Caddyfile"} 
  INFO    adapted config to JSON  {"adapter": "caddyfile"}
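Caddy can also check a Caddyfile for errors without applying it, which makes a handy pre-flight step before a reload:

```shell
# Validate the config without applying it
sudo caddy validate --config /etc/caddy/Caddyfile
```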

And I can try out my speedtest at `https://speed.runtimeterror.dev`:

Image: OpenSpeedTest results showing a download speed of 194.1 Mbps, upload speed of 147.8 Mbps, and ping of 20 ms with 0.6 ms jitter. A graph displays connection speed over time.

Conclusion

Combining the powers (and magic) of Caddy and Tailscale makes it easy to publicly serve content from private resources without compromising on security *or* extending vendor lock-in. This will dramatically simplify migrating the rest of my domains from Cloudflare to Bunny.

---

📧 Reply by email

Related articles

SilverBullet: Self-Hosted Knowledge Management Web App

Taking Taildrive for a Testdrive

Automate Packer Builds with GitHub Actions
