```verbose
service forwarder
  listen:
    protocol: http_1_0
    port: 8080
    max_request: 2048
    handler: forward_request
connection upstream
  host: 198.51.100.42
  port: 80
  max_response: 4096
  on_connect_error: abort
rule forward_request(req: HttpRequest) -> HttpResponse:
  let upstream_response = fetch(upstream, concat(
    req.method, " ", req.path, " HTTP/1.0\r\n\r\n"
  ))
  HttpResponse { status: 200, body: upstream_response }
```
The above is a real, working reverse proxy. Written in 14 lines of .verbose. Compiles to a 1133-byte ELF binary. No runtime, no libc, no framework underneath.
You can run strings on the binary and see exactly which IP it can reach. The bytes are there because they have to be: verbose only emits what the source declares.
The standard pattern
A reverse proxy is usually a configuration of an existing program. You install Nginx, write a proxy_pass directive, point it at an upstream. Or you use HAProxy, Envoy, Caddy. The proxy’s behavior is mostly defined by config, and the program itself is a black box that interprets that config.
This works. The programs are excellent. The configs are well-understood.
But “what does this binary actually do at the network layer” is a question you answer by reading the config, then trusting that the binary interprets the config the way you read it. You can’t read the binary itself; it’s millions of lines of compiled C. The relationship between config and behavior is a contract enforced by the program’s authors, not by anything you can verify.
For a reverse proxy at most companies, that’s fine. The trust relationship works.
For a reverse proxy at a bank, an audit firm, a regulated environment? That’s where the audit story gets thin. “The auditor reads the Nginx config.” OK. What if Nginx has a CVE? What if a directive is interpreted differently than the auditor read it? What is the actual reachable set of this binary at runtime?
The realization
I’d been working on verbose — a language designed for AI authorship and mechanical verification — and I kept noticing the same gap. The compiler proved properties about pure data transformations. The binary did exactly what the source said. But for anything that touched the network, the disk, the clock — the source went silent. A .verbose file could prove a sort was correct. It couldn’t say which port it would listen on, which file it might open, which host it might reach.
The audit story stopped at the boundary between computation and the world.
v0.2.0 closes that gap.
What I tried
Six phases of work, each adding one class of external interaction in a way the source can express and the compiler can verify:
Phase 7 — Listening services. A service block declares protocol, port, request size limit, and handler. The compiler synthesizes HttpRequest and HttpResponse as built-in concepts. The handler is a pure rule. Native HTTP/1.0 emitter. No framework underneath.
Phase 8 — Audit logging. A log: block in the service declaration has access to resp.status and resp.body by name. The audit record and the response share the same memory slot, so they cannot diverge. on_error: abort makes the service fail closed when the audit record can’t be written. Multiple log: blocks compose in declaration order. json_escape(text) is a primitive: RFC 8259-correct, and folded at compile time for literal arguments.
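As a sketch of the shape (the write: key and the exact expression syntax are my guesses, extrapolated from the proxy example at the top; log:, on_error:, resp.status, resp.body, and json_escape are the documented pieces):

```verbose
service forwarder
  listen:
    protocol: http_1_0
    port: 8080
    max_request: 2048
    handler: forward_request
  log:
    write: concat("{\"status\":", resp.status,
                  ",\"body\":\"", json_escape(resp.body), "\"}")
    on_error: abort
```

Because the log expression names resp.status and resp.body directly, the audit line is built from the same slot the response is sent from; a second log: block after this one would run next, in declaration order.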
Phase 9 — Resource file I/O. A resource declaration names a file path, a max: byte bound, and an on_read_error policy. read(resource) works in rule bodies and handler bodies. The set of files the binary can ever open is enumerable from the source. strings on the binary lists every path. A cache: true opt-in hoists the read to startup; combined with concurrency: forked, the parent reads once and children inherit via copy-on-write.
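A sketch, assuming a path: key and a hypothetical /etc/motd file (the max:, on_read_error:, and cache: keys and the read(resource) call are the documented parts):

```verbose
resource greeting
  path: /etc/motd
  max: 1024
  on_read_error: abort
  cache: true
rule serve(req: HttpRequest) -> HttpResponse:
  HttpResponse { status: 200, body: read(greeting) }
```

With cache: true the read happens once at startup; under concurrency: forked the children inherit the bytes copy-on-write. strings on the resulting binary would list /etc/motd.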
Phase 10 — Concurrency. concurrency: forked on a service means fork-per-accept, the Apache mpm_prefork shape. The parent loops on accept; the child handles the request and exits. SIGCHLD is set to SIG_IGN for zombie-free operation. +158 bytes over the sequential build on the worked example.
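A sketch of where the key would sit (its placement inside the service block is my assumption; the concurrency: forked declaration itself is documented):

```verbose
service forwarder
  listen:
    protocol: http_1_0
    port: 8080
    max_request: 2048
    handler: forward_request
  concurrency: forked
```

One declaration flips the sequential accept loop into fork-per-accept; the handler rule itself is unchanged.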
Phase 11 — Outbound TCP. A connection declaration names an IPv4 host (literal — no DNS), a port, a max_response buffer bound, and an on_connect_error policy. fetch(connection, request_bytes) opens the connection, sends the bytes, reads up to max_response, returns the result. The set of hosts the binary can reach is enumerable from the source.
Phase 12 — Closing the audit-JSON gap. json_escape got promoted to a first-class primitive. The audit log writes JSON, the JSON is bit-for-bit correct, the binary contains the escape table. Every audit line is a proof.
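A sketch (the Text return type and the standalone-rule framing are extrapolated from the example at the top; json_escape itself is the documented primitive):

```verbose
rule escaped_fragment() -> Text:
  json_escape("say \"hi\"\nbye")
```

Per RFC 8259, the quote becomes \" and the newline becomes \n in the output; because the argument here is a literal, the escape is folded at compile time, so the binary carries the already-escaped bytes.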
The reverse proxy at the top of this post uses all of these. The complete effect-model.md shipped in this release catalogues every external thing a verbose binary can declare — and, just as important, the closed list of refusals (no DNS, no TLS, no POST bodies, no pthreads, no heap).
Why this matters to me
The auditor — the person responsible for whether the binary is safe to deploy — should not have to read assembly, should not have to trust a framework’s interpretation, should not have to take the developer’s word for “what the binary does.”
For computation, that has been mostly solvable for decades — proof assistants, type systems, model checkers. For the parts that touch the world, the audit story has always been: documentation. Documentation rots.
v0.2.0 makes the parts that touch the world declarable in the same source the compiler verifies. The auditor reads .verbose, runs strings on the ELF, and the answers match. Not because we’re disciplined; because divergence is structurally impossible.
229 tests, six phases shipped, a 1.1 KB reverse proxy whose reachable set is visible without a debugger.
What this is not
It’s not “Nginx is bad.” Nginx is excellent. Most reverse proxies should keep being Nginx.
It’s not “audit-grade is the only reason for verbose.” The same compiler that emits a 1.1 KB reverse proxy also emits 358-byte echo servers, sub-kilobyte aggregators, and entire CLI tools. Tight machine code is the structural consequence; the audit story is the social consequence.
It’s not “AI replaces auditors.” The .intent file — prose written by a human, the formalization of what the program is supposed to do — is harder than the code. The auditor’s work moves upstream.
It’s: when you stop treating “what the binary does at the network layer” as something only the framework knows, you can put it in the source. v0.2.0 is what that looks like.