A few months ago I was working on lithair, my Rust framework. cargo build, again. Three minutes of waiting. The same three minutes I’d waited through ten thousand times.

It struck me that I hadn’t actually written most of this code. The AI had. I’d typed prompts; it had produced Rust. I was reading it, reviewing it, sometimes correcting it. But the keystrokes weren’t mine.

So why was I waiting on a compiler designed for humans?

The standard pattern

Rust, Go, TypeScript, even modern Python — they’re all designed for humans to write and humans to read. Type systems balance expressiveness against typing speed. Borrow checkers exist because humans forget. Lifetimes are a notation that helps humans reason. Everything in the language design pays a tax to the constraint that someone has to type it on a keyboard and hold it in their head.

Then AI happened. The way the industry handled it: graft it on. Autocomplete. Suggestions. Copilot. The AI is a tool that helps the human author. The source language is unchanged.

That arrangement made sense in 2022. It makes less sense now. In 2026, on most of my projects, the AI is producing the bulk of the code. Calling it a tool is generous. It’s the author.

The realization

If the AI is the author, then the language doesn’t need to be optimized for human writing comfort. It needs to be optimized for what the AI can produce reliably and what the compiler can verify mechanically.

Those are different optimizations.

A human cares about: short syntax, type inference, ergonomic shortcuts, error messages they can debug. An AI doesn’t need any of those. An AI can produce twenty pages of explicit declarations as easily as five pages of inferred ones. The friction is on the wrong axis.

A compiler that targets AI-authored code can ask for things humans wouldn’t write by hand: every read and write declared, every numeric overflow path stated, every termination form named, every intention written out. The AI doesn’t mind. The compiler can then verify what the source claims, mechanically, with no inference. No “the compiler thinks this is probably safe.” Stated, then verified.
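Concretely, the posture looks something like this. This is plain Rust, not Verbose, and the names (Account, add_balance, DepositError) are invented for the sketch; the point is that nothing is inferred and the overflow path has a name:

```rust
// A Rust approximation of "stated, then verified". Everything is
// written out: the one field read, the one field written, and the
// overflow path as a named variant instead of a panic or a wrap.
// (Account, DepositError, and add_balance are invented for this sketch.)
pub struct Account {
    pub balance_cents: u64,
}

pub enum DepositError {
    /// balance_cents + amount_cents would exceed u64::MAX
    Overflow,
}

pub fn add_balance(account: &mut Account, amount_cents: u64) -> Result<(), DepositError> {
    match account.balance_cents.checked_add(amount_cents) {
        Some(new_balance) => {
            account.balance_cents = new_balance; // the single declared write
            Ok(())
        }
        None => Err(DepositError::Overflow), // the stated overflow path
    }
}
```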

That posture buys you a deal: the verifier is fixed; the AI gets better; the source the AI emits gets richer; and the binary you trust is anchored in the verifier, not in trusting the AI.

What I tried

Verbose is the language I designed under that assumption. Two layers:

A .intent file written in prose. Its audience is the human — the author themselves, an auditor, a future reader. It exists to force someone to formalize what the program is supposed to do before any code is written.
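To show the register, here’s an invented excerpt, written for this post rather than pulled from a real project:

```
deposit.intent (invented excerpt)

Deposits are integer cents. Adding a deposit may never silently wrap
the balance: if the sum would exceed the representable maximum, the
deposit is rejected and the balance is left unchanged. This rule reads
and writes the balance and touches nothing else.
```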

A .verbose file: explicit declarations of every read, every write, every bound, every transformation, every effect. The compiler verifies the declarations match the rule bodies. If the AI produces a .verbose whose declarations don’t hold, the compile fails.
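Schematically, the same deposit rule on the .verbose side has this shape. I’ve simplified heavily here; take it as the flavor, not the literal grammar:

```
rule deposit
  reads    account.balance_cents : u64
  writes   account.balance_cents : u64
  overflow balance_cents + amount_cents > u64::MAX => reject, no write
  body     balance_cents' = balance_cents + amount_cents
```

If the body stops matching the declarations, say a second write sneaks in, the compile fails.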

You can write .verbose directly, by hand. That’s a valid path, and it’s first-class in the docs. But the design was made for the AI-assisted path: .intent → .verbose, with the compiler as the floor.

There’s a side effect: a lot of developers are going to read this and recoil. “I have to write all of that?” Yes. The discipline isn’t optional.

That’s not a UX problem to fix. It’s an audience filter. Verbose is for code that has to be auditable, that has to be reviewed, that has to survive a regulator. Most code doesn’t. Most code is fine in Rust or Go. For the code that isn’t fine in Rust or Go, the code where someone needs to read the source and know exactly what the binary will do, the formalization is the work. Verbose just makes it the first work instead of the last.

Why I prefer this for my projects

Two reasons.

The first is physical. We’ve hit the limits of what raw single-core growth can give us; lithography is hitting walls. The next round of “faster” has to come from emitting tighter machine code, not from waiting for the silicon to catch up. A compiler that knows precisely what the program does, because every read, every bound, every shape was declared, can emit code a general-purpose compiler can’t: SIMD where applicable, parallel where declared, elided checks where proven. Verbose’s native backend is built around exploiting that.
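There’s a small, checkable instance of this in today’s toolchain: in Rust, stating a shape invariant up front already changes the code the backend can emit. add_into is a made-up example, and the behavior described in the comments is the typical rustc/LLVM result with optimizations on, not a guarantee:

```rust
// A stated invariant doing optimization work. With the assert in place,
// LLVM can prove every index is in bounds, so at -C opt-level=3 it
// typically drops the per-element bounds checks and auto-vectorizes the
// loop. Remove the assert and it generally keeps a branch per element
// for src[i], which it can no longer prove safe.
pub fn add_into(dst: &mut [u32], src: &[u32]) {
    assert_eq!(dst.len(), src.len()); // the declared bound
    for i in 0..dst.len() {
        // wrapping_add states the overflow behavior explicitly
        dst[i] = dst[i].wrapping_add(src[i]);
    }
}
```

That’s one assert in one function. The bet is that when every function carries its bounds and shapes as declarations, that kind of proof is available everywhere, not just where a programmer remembered to state it.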

The second is honesty about where the energy is going. I’ve been collaborating with AI on production code since Sonnet 3.5 in late 2024. I’ve watched the trajectory. Each generation produces more reliable code. The right move, if you’re betting on a curve that keeps going up, is to design the rails the AI runs on — not to keep designing better keyboards for humans.

I’d rather go with the wind and steer it than fight it.

What this is not

It’s not “Rust is bad.” Rust is excellent. I built lithair in it.

It’s not “everyone should switch.” Most software, most teams, most products — the existing toolchain is fine.

It’s not “AI replaces developers.” Writing the .intent — forcing yourself to clearly articulate what you actually want before any code exists — is harder than writing the code. The bottleneck moves upstream, not away.

It’s this: when you stop pretending the human is doing the typing, you can ask different questions about what the language is for. That’s what Verbose is. The questions I was asking while cargo was building.