I was setting up a small staging environment a couple of years ago. Three Proxmox VMs, then OS configuration on each, then a service deployed across them. The standard stack: Terraform for the VMs, Ansible for the configuration, something else for the service. Three tools. Three languages. Three state machines.
I’d been doing variants of this for years. I knew the routine. I just stopped wanting to do it.
The standard pattern
If you read about infrastructure-as-code today, the consensus stack looks like this:
- Provisioning: Terraform (or OpenTofu, Pulumi) — declarative cloud resources
- Configuration: Ansible (or Chef, Salt) — agent-less or agent-based config management
- Deployment: Helm, Kustomize, kubectl manifests, systemd units — depending on what runs
Each tool is excellent at its layer. Each has its own language: HCL for Terraform, YAML for Ansible, Go templates for Helm. Each maintains its own state: .tfstate, ad-hoc Ansible facts, Helm release secrets in your cluster.
You stitch them together. Terraform output feeds Ansible’s inventory. Ansible’s run leaves the host ready for Helm. Helm’s release waits for some condition Ansible reported elsewhere. The integration is your problem. The glue is shell scripts, CI pipelines, Makefiles, runbooks. Sometimes a wrapper called “make deploy” that hides three tools’ worth of options.
This works. People run real production on this stack. It’s the default, and the default is robust.
But the tax is real. And once you stop paying it for a while, going back hurts.
The realization
The first time I wrote Terraform modules for a real deployment, I noticed I was spending more energy on the tool’s conventions than on describing what I wanted to run. Module composition. Variable plumbing. Conditional resources expressed through count tricks and ternaries. The tool had a lot of opinions about how to structure my intent, and most of those opinions were about the tool, not about my deployment.
Then I’d hand off to Ansible. Different language, different mindset. A Python runtime to install everywhere I’d run from. Roles, collections, inventories — another set of conventions to learn. Some of Ansible’s modules covered provisioning too, but they always felt like second-class citizens compared to the dedicated tool. So in practice I’d use both.
The honest question I started asking: why is “set up three VMs with this software on them” two tools and four files?
Provisioning and configuration aren’t independent concerns. They touch the same machines, in the same flow, often within minutes of each other, almost always driven by the same person. The separation between them isn’t in the problem; it’s in the tools. Each tool is separate because of its own history, and none of that history is mine.
What I tried
Configorator is the answer I built for myself. One YAML manifest. One Go binary. One run.
infrastructure:
  proxmox:
    nodes:
      web-1:
        cpu: 2
        memory: 4096
        disk: 40
        os: rocky10

components:
  - name: nginx
    target_hosts: [web-1]
    type: dnf
    package: nginx
    state: present
  - name: site
    target_hosts: [web-1]
    type: file
    source: ./site/index.html
    dest: /usr/share/nginx/html/index.html
    depends_on: [nginx]
If you’ve read Kubernetes manifests, the shape should feel familiar, with declarative resources, dependencies between them, and target-based expansion. Configorator’s manifest sits in that family, applied to the layer beneath the cluster: the VMs, the OS state, the services that run on the bare hosts.
Run configorator all sync --env staging and that’s it. The binary creates the VM, waits for SSH, installs the package, copies the file, starts the service, and reports what changed.
Re-running the same manifest reports “0 changed” if nothing has drifted. A target_hosts: [web-1, web-2, web-3] entry expands into one component per host, with dependencies rewritten across the per-host variants (sketched below). Secrets follow the same declarative pattern, sourced from Vault, OpenBao, or a local file. Healthchecks can assert an HTTP status, an open TCP port, or a command’s exit code.
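For concreteness, widening both components to three hosts looks something like this. The target_hosts list is the syntax shown above; the expansion happens inside the binary, and the comments describe it roughly rather than exactly.

components:
  - name: nginx
    target_hosts: [web-1, web-2, web-3]   # expands into one nginx variant per host
    type: dnf
    package: nginx
    state: present
  - name: site
    target_hosts: [web-1, web-2, web-3]
    type: file
    source: ./site/index.html
    dest: /usr/share/nginx/html/index.html
    depends_on: [nginx]   # rewritten per host, so each site variant waits on that host's nginx

One sync still covers the whole thing: three hosts, each with its package and its file, in one run.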
There’s no Python to install on the operator’s machine. There’s no separate state file to lose. There’s no second tool to take over after the first one finishes. There’s a binary, a YAML, and a run.
Why I prefer this for my projects
Most of my projects have small, defined infrastructure: a couple of VMs on my Proxmox, occasionally a Docker host, sometimes a local node for testing. Always small enough that the integration tax of three tools dwarfs the actual provisioning work.
I want to download a binary and describe what I want. I don’t want to also pick a Terraform version, install Python, decide between Ansible roles and collections, learn which Helm chart structure is current, and remember which of my five tools owns which secret.
A single binary that reads a YAML and produces my running stack does what I want most of the time. When it doesn’t, I write another component module. The components are open enough that adding a new one is a couple hundred lines of Go, not a fork of three tools’ ecosystems.
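To make that concrete, here is the shape a new component type would take in the manifest once its Go module exists. The git_repo type below is hypothetical, not something Configorator ships; it is only a sketch of how an extension surfaces as one more type in the same schema.

  - name: site-source
    target_hosts: [web-1]
    type: git_repo                          # hypothetical type, backed by a new Go component module
    repo: https://example.com/me/site.git   # illustrative fields; a real module defines its own keys
    dest: /opt/site
    depends_on: [nginx]

The point is less the specific type than the surface area: a new module plugs into the same name, target_hosts, and depends_on machinery, so the manifest stays one file.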
Configorator is built for the small case. The case where one person is responsible end to end. The case where the layers aren’t actually independent — they share machines, they share timing, they share the operator’s attention.
What this is not
It’s not “Terraform is bad.” For a team that already lives in HCL and has a deep bench of Terraform modules, switching makes no sense. The investment is real and the outcomes are good.
It’s not “Ansible is wrong.” Ansible’s playbooks-and-roles model is well-suited to mature operations teams. The Python dependency is a non-issue when you already have Python everywhere.
It’s not “every shop should consolidate to one tool.” Specialized stacks earn their complexity in proportion to the team’s scale.
It’s: when “set up three VMs and run my software on them” still takes three tools and four files, the tooling is the work, not your deployment. For a solo operator, that ratio is wrong. Configorator is what one tool, one file, one run looks like — for the projects where that’s enough.