arcker.org had been working perfectly for weeks. Cloudflare in front, automatic TLS, DDoS protection, a CDN edge in dozens of cities — all free, all good. Cloudflare’s proxy is a genuinely solid solution; for most personal sites, it’s probably the right default.

But I wanted to test something else. lithair, the framework I’m building, includes its own host-based routing. I’d written it. I hadn’t actually used it as the front layer. So one evening I switched the Cloudflare orange cloud off — DNS only, no proxy — and pointed arcker.org directly at my VPS.

It didn’t work.

The browser couldn’t establish HTTPS. There was no cert for arcker.org on the VPS itself; Cloudflare had been quietly providing one. Hostname routing wasn’t picking up the new domain either. Forty minutes in, I’d opened the lithair main.rs, the Cloudflare DNS panel, and the certbot output, just to understand which layer was supposed to do what.

That night I started writing this article in my head.

The standard pattern

For most of the last decade, deploying a web service has meant putting something in front of it. Nginx, Caddy, HAProxy, Cloudflare — pick one. The application binary listens on localhost:8080; the proxy in front handles TLS, hostname routing, rate limiting, sometimes ACME automation. That’s the standard pattern, and it works.
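The origin half of that split is tiny, which is part of why the pattern stuck. A self-contained sketch of it, using only the Rust standard library — one throwaway connection on an ephemeral port instead of 8080 so it runs anywhere, with the “proxy” played by a local client:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Build a minimal HTTP/1.1 response. The origin speaks plain HTTP only;
// TLS, hostname routing, and rate limiting live in the proxy in front.
fn http_response(body: &str) -> String {
    format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() {
    // Bind to loopback only: nothing reaches this socket except the
    // local reverse proxy (or, here, our own test request).
    let listener = TcpListener::bind("127.0.0.1:0").unwrap(); // port 0: OS picks one
    let addr = listener.local_addr().unwrap();

    let server = thread::spawn(move || {
        // Serve exactly one connection so the example terminates.
        let (mut stream, _) = listener.accept().unwrap();
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf); // read and discard the request
        stream
            .write_all(http_response("hello from the origin").as_bytes())
            .unwrap();
    });

    // Play the role of the proxy: forward one plain-HTTP request inward.
    let mut client = TcpStream::connect(addr).unwrap();
    client
        .write_all(b"GET / HTTP/1.1\r\nHost: arcker.org\r\n\r\n")
        .unwrap();
    let mut reply = String::new();
    client.read_to_string(&mut reply).unwrap();
    server.join().unwrap();

    assert!(reply.starts_with("HTTP/1.1 200 OK"));
}
```

Everything the real stack adds — certificates, SNI, rate limits — happens before the bytes reach this socket, which is exactly why the app layer can afford to know nothing about them.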

It made sense when web frameworks were minimal — they didn’t terminate TLS, didn’t think about hostnames, didn’t handle cert renewal. So you bolted a proxy in front to fill the gaps.

I built lithair the same way at first. It would listen on a port; you’d put your proxy of choice in front. Why would anyone do otherwise?

The realization

I’m not opposed to layers. I’m opposed to layers I’m not actively choosing.

What the Cloudflare-off experiment revealed wasn’t that Cloudflare was bad — it’s excellent. It revealed that I’d never asked, for my specific deployment, what each layer in the path was actually for. Two static sites, an admin UI, one VPS, no team. The proxy in front had been giving me convenience features I’d never measured the value of for my actual traffic.

Picking Cloudflare wasn’t a choice I’d made. It was a choice I’d inherited. Like a lot of defaults.

What I tried

Lithair already terminated TLS via rustls. Hostname routing was the missing piece. I added it directly to the framework:

LithairServer::new()
    .with_vhost("arcker.org", |v| v.with_frontend_at("/", "sites/arcker.org"))
    .with_vhost("lithair.net", |v| v.with_frontend_at("/", "sites/lithair.net"))
    .with_default_vhost(|v| v.with_frontend_at("/", "fallback"))
    .serve()
    .await

One process, two domains plus a default. The whole config is in the binary’s startup. When something breaks, I read one file.
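Stripped of the builder API, that vhost table boils down to one lookup: Host header in, site root out, default as fallback. A hypothetical sketch of that core (the names here — resolve_vhost, the map — are mine, not lithair’s):

```rust
use std::collections::HashMap;

// Map an incoming Host header to a site root, falling back to a default
// for hosts the table doesn't know. This is the essence of what a
// with_vhost-style configuration has to resolve per request.
fn resolve_vhost<'a>(vhosts: &HashMap<&str, &'a str>, host: &str) -> &'a str {
    // Strip an optional port ("arcker.org:443" -> "arcker.org") and
    // compare case-insensitively, since Host headers are.
    let name = host.split(':').next().unwrap_or("").to_ascii_lowercase();
    vhosts.get(name.as_str()).copied().unwrap_or("fallback")
}

fn main() {
    let mut vhosts = HashMap::new();
    vhosts.insert("arcker.org", "sites/arcker.org");
    vhosts.insert("lithair.net", "sites/lithair.net");

    assert_eq!(resolve_vhost(&vhosts, "arcker.org"), "sites/arcker.org");
    assert_eq!(resolve_vhost(&vhosts, "LITHAIR.NET:443"), "sites/lithair.net");
    assert_eq!(resolve_vhost(&vhosts, "unknown.example"), "fallback");
}
```

The same dispatch a proxy does in its own config language, done in the language the rest of the app is already written in — that’s the whole trade.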

Why I prefer this for my projects

For a small operation — me, deploying personal sites on a VPS — collapsing the proxy into the framework reduced the cognitive surface area considerably. Each layer has its own config language, log format, failure modes, attack surface. Removing one of them means one fewer thing I need to hold in my head when the next 11pm bug hits.

For a large operation with a dedicated platform team, multiple shared services, mixed runtimes — the proxy is probably still the right call. Centralized TLS, centralized rate limits, centralized routing across services that don’t share a binary, edge presence in 200+ POPs. That’s what reverse proxies are good at, and Cloudflare is one of the best at it.

I’m just not running a large operation. And lithair is built for the small case first.

What this is not

It’s not “you shouldn’t use Cloudflare.” For most people, Cloudflare in front is the right default — it’s free, it’s fast, it terminates TLS automatically, and it absorbs traffic spikes that would otherwise hit your server.

It’s not “every framework should do hostname routing internally.” Most deployments genuinely assume a proxy in front, and most frameworks are built accordingly.

It’s: when you control the deployment end-to-end, and the proxy is mainly a habit, it might be worth asking what it’s actually doing.