Multi-server FreeBSD environments designed, built, and administered for organizations where uptime has a direct cost. iocage jails, ZFS, bhyve, pf, Puppet, HAProxy, BIND, MariaDB — the full stack, managed as a system rather than a collection of individual parts.
Why FreeBSD
FreeBSD's application binary interface is stable across releases in a way that Linux distributions are not. A jail built on FreeBSD 12 can be migrated to a FreeBSD 14 host and run without modification — not as a theoretical capability, but as a documented, tested, production-proven one. That stability is worth something when you're responsible for infrastructure that can't go down for a rebuild cycle.
The ZFS implementation in FreeBSD is mature, well-integrated, and understood at depth. The pf firewall is deterministic. The jail subsystem provides OS-level isolation that is lighter than a VM and more complete than Linux namespace-based containers in terms of how it interacts with the kernel. These are not preferences — they are properties that matter when you're diagnosing problems at 2am.
FreeBSD's release engineering produces a coherent, tested operating system rather than a package manager layered on top of a kernel. That coherence shows up in production as fewer interactions between components that were never tested together, fewer surprise behaviors after updates, and a system whose behavior can be understood from its documentation.
The production stack
The following describes the architecture used in real production environments under continuous traffic load — not a reference design, but what is actually running.
Isolation
iocage jails
Each service runs in its own iocage jail: nginx, PHP-FPM, MariaDB, Valkey, BIND. Each is isolated with its own filesystem and its own IP address on lo1, with pf controlling exactly what reaches it. Thin-jail templates provisioned by Puppet mean new jails are created from a known-good baseline, not from memory. RCTL resource limits prevent one jail from starving another.
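The create-and-cap workflow looks roughly like the following; the template name, jail name, address, and limits are illustrative, not the production values:

```sh
# Create a thin jail from a Puppet-provisioned template
# (template and jail names are hypothetical)
iocage create -t www-template -n web01 \
    ip4_addr="lo1|10.0.0.11/32" boot=on

# RCTL caps; note iocage prefixes jail names with "ioc-".
# Hard 4 GiB memory cap and a two-core CPU ceiling for this jail.
rctl -a jail:ioc-web01:memoryuse:deny=4g
rctl -a jail:ioc-web01:pcpu:deny=200
```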
Storage
ZFS with deliberate tuning
Pool layout matched to workload: lz4 compression on all datasets, atime disabled, ARC sized explicitly via vfs.zfs.arc_max rather than left to dynamic defaults. ZFS snapshot schedules provide point-in-time recovery across every jail. Database pools get dedicated SLOG devices on SSD for synchronous write performance. zfs send | zfs receive handles jail migration and off-site replication.
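The tuning above amounts to a handful of explicit settings; a sketch with illustrative pool names and sizes:

```sh
# /boot/loader.conf -- pin the ARC ceiling instead of trusting defaults
# (16 GiB shown, in bytes; the real value depends on what else needs RAM)
vfs.zfs.arc_max="17179869184"

# Dataset properties set once at the top, inherited by every jail dataset
zfs set compression=lz4 atime=off tank/iocage

# Dedicated SLOG device for the database pool's synchronous writes
# (device label is hypothetical)
zpool add dbpool log gpt/slog0
```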
Networking
pf + HAProxy
pf handles all inbound NAT, port redirection to jail IPs, and stateful filtering. Rulesets are explicit — default deny, specific pass rules for each service and each jail. HAProxy sits in front of application tiers for load balancing, SSL termination, health checking, and traffic routing. Health checks test real request paths, not just TCP handshakes.
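A minimal pf.conf excerpt showing the shape of the ruleset; the interface, addresses, and ports are illustrative:

```
# /etc/pf.conf excerpt (interface and jail address are illustrative)
ext_if   = "igb0"
web_jail = "10.0.0.11"

# Outbound NAT for the jail, inbound redirection of web traffic to it
nat on $ext_if from $web_jail -> ($ext_if)
rdr pass on $ext_if proto tcp to port { 80 443 } -> $web_jail

# Default deny; every pass rule is explicit and stateful
block all
pass out on $ext_if keep state
```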
Configuration management
Puppet with Hiera/eyaml
The entire fleet is managed via Puppet running on buildhost14 — roles/profiles pattern, Hiera data hierarchy for per-node and per-domain configuration, eyaml for encrypted secrets. Custom packages are built via poudriere on the build host and served from a local repository. A change to a service configuration is a Puppet commit, not a manual edit on a production server.
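In the roles/profiles pattern, a node is assigned exactly one role, and a role is nothing but a list of profiles. A sketch with hypothetical class names:

```puppet
# roles/manifests/webserver.pp (class names are hypothetical)
class roles::webserver {
  include profiles::base
  include profiles::nginx
}

# profiles/manifests/nginx.pp -- platform specifics live here
class profiles::nginx {
  package { 'nginx':
    ensure   => installed,
    provider => 'pkgng',   # FreeBSD packages, served from the local poudriere repo
  }
  service { 'nginx':
    ensure => running,
    enable => true,        # managed through FreeBSD's rc.conf model
  }
}
```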
DNS
BIND 9.18 LTS authoritative
Primary and secondary authoritative nameservers running BIND 9.18 LTS in dedicated jails. TSIG-secured zone transfers, explicit notify configuration, and zone management across large domain portfolios. NS1/NS2 on separate physical hosts. DNS is infrastructure, not an afterthought — and an authoritative nameserver you control doesn't have a vendor's SLA between you and your zones.
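The TSIG and notify configuration reduces to a few stanzas; key material, zone name, and addresses below are illustrative:

```
// named.conf excerpt (key, zone, and addresses are illustrative)
key "xfer-key" {
    algorithm hmac-sha256;
    secret "PLACE-BASE64-KEY-MATERIAL-HERE==";
};

zone "example.org" {
    type primary;
    file "/usr/local/etc/namedb/primary/example.org.db";
    allow-transfer { key "xfer-key"; };  // TSIG-only zone transfers
    also-notify { 192.0.2.53; };         // explicit notify to the secondary
};
```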
Application tier
nginx + PHP-FPM
nginx handles static assets, upstream proxying, and SSL. PHP-FPM pools are sized against actual CPU core count and workload profile — not round numbers copied from another server. pm.max_children set with the database connection limit in mind. Pool configuration deployed via Puppet, not hand-edited on running jails.
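The sizing logic is simple arithmetic once the inputs are measured; a sketch with illustrative numbers (worker resident size comes from observing real workers under load):

```shell
#!/bin/sh
# All numbers are illustrative, not production values.
total_mb=8192          # jail memory cap (RCTL)
reserved_mb=2048       # headroom: opcache, sockets, filesystem cache
worker_mb=96           # measured PHP-FPM worker resident size
db_max_connections=150 # MariaDB max_connections

# Workers that fit in RAM, capped below the database connection limit
max_children=$(( (total_mb - reserved_mb) / worker_mb ))
[ "$max_children" -gt "$db_max_connections" ] && max_children=$db_max_connections

echo "pm.max_children = $max_children"
```

The cap against the database limit matters because every busy worker can hold a connection; sizing the two independently is how connection exhaustion happens under load.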
Database
MariaDB with replication
MariaDB in a dedicated jail, primary/replica replication topology, innodb_flush_log_at_trx_commit and ZFS sync behavior tuned together rather than independently. Large table operations — schema changes on 100M+ row tables — done with pt-online-schema-change to avoid lock contention. Replication verified with pt-table-checksum before any promotion.
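An online schema change on a large table looks roughly like this; the database, table, column, and host names are hypothetical:

```sh
# Add a column to a 100M+ row table without blocking writes
pt-online-schema-change \
    --alter "ADD COLUMN archived_at DATETIME NULL" \
    --chunk-size 1000 \
    --max-lag 5 \
    --execute \
    D=shop,t=orders,h=db-primary

# Before promoting a replica, prove it actually matches the primary
pt-table-checksum h=db-primary --databases shop
```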
Caching
Valkey / Redis
Valkey (Redis fork) in its own jail, maxmemory sized to account for jemalloc fragmentation overhead rather than set to a theoretical maximum. activedefrag configured to reclaim fragmented allocator space proactively. ZFS ARC bounds set explicitly so Valkey memory pressure doesn't silently evict filesystem cache.
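The relevant configuration is a few lines; sizes below are illustrative:

```
# valkey.conf excerpt (sizes are illustrative)
# If the jail's RCTL cap is 4 GiB, don't promise all of it to the
# dataset: jemalloc fragmentation and client buffers need the rest.
maxmemory 3gb
maxmemory-policy allkeys-lru

# Reclaim fragmented allocator pages proactively
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
```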
Monitoring
monit + remote VPS checks
Local monit watching process health, service availability, and resource consumption for every jail. Remote VPS running monit for external HTTP/HTTPS content validation — not just status codes. SSL certificate and domain expiration monitoring. Application-specific health checks in Perl via cron. When something is wrong, it surfaces as an alert, not as a user complaint.
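A monitrc excerpt showing the two kinds of checks; paths, hostnames, and the content match string are illustrative:

```
# monitrc excerpt (names, paths, and match string are illustrative)
check process php-fpm with pidfile /var/run/php-fpm.pid
    start program = "/usr/sbin/service php_fpm start"
    stop  program = "/usr/sbin/service php_fpm stop"
    if memory usage > 80% for 3 cycles then alert

check host www with address www.example.org
    # Validate real content: a broken application can still return 200
    if failed port 443 protocol https
        request "/" with content = "expected-footer-marker"
    then alert
```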
Virtualization
Jails are the right tool for most workloads — lightweight, fast, and directly integrated with ZFS. For workloads that require a separate kernel — legacy applications tightly coupled to an old OS version, guest operating systems that aren't FreeBSD, Windows VMs for management tooling — bhyve provides a Type-2 hypervisor built into the FreeBSD base system.
bhyve guests run on the same ZFS pool as the jails, inheriting the same snapshot, send/receive, and replication capabilities. A bhyve guest can be snapshotted before maintenance, rolled back if something goes wrong, and migrated to a new physical host via zfs send without the guest OS being aware anything happened.
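The snapshot and migration workflow can be sketched as follows; pool, dataset, snapshot, and host names are hypothetical:

```sh
# Snapshot a guest's zvol before maintenance, roll back if it goes wrong
zfs snapshot tank/vm/win-mgmt@pre-maintenance
zfs rollback tank/vm/win-mgmt@pre-maintenance

# Migration: full send while the guest runs, brief guest stop,
# then an incremental send to catch the delta
zfs send tank/vm/win-mgmt@base  | ssh newhost zfs receive tank/vm/win-mgmt
zfs send -i @base tank/vm/win-mgmt@final | ssh newhost zfs receive -F tank/vm/win-mgmt
```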
The combination of iocage jails for native FreeBSD workloads and bhyve for everything else means a single FreeBSD host can run a complete, heterogeneous production environment with full snapshot coverage across all workloads — on hardware you control, with no hypervisor license, and with the same operational model for both.
Configuration management
A server whose configuration exists only in the memory of the person who built it is a liability. A server whose configuration is in a Puppet repository is an asset — it can be reproduced, audited, reviewed, and changed safely.
Puppet on FreeBSD requires understanding the platform specifics that generic Puppet documentation doesn't cover — the pkgng provider, FreeBSD's rc.conf service management model, the path differences from Linux that break naive module assumptions, and the interaction between Puppet's file management and ZFS dataset layout in a jail environment.
The production configuration management setup runs Puppet 8 on buildhost14, a dedicated FreeBSD build and management host. Environments are managed per-codebase using the roles/profiles pattern. The Hiera hierarchy separates per-node, per-domain, per-role, and common data; encrypted secrets are stored as PKCS7 eyaml blobs, decryptable only on the Puppet server. Custom packages for all production services are built by poudriere on the same host and served from a local repository, so the production fleet never pulls packages directly from the public FreeBSD package mirrors.
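The hierarchy itself is a short file; paths below are illustrative, and the role fact is assumed to be a custom fact:

```yaml
# hiera.yaml excerpt (paths are illustrative)
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Per-domain secrets"
    lookup_key: eyaml_lookup_key
    path: "domains/%{facts.domain}.eyaml"
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/keys/private_key.pkcs7.pem
      pkcs7_public_key:  /etc/puppetlabs/puppet/keys/public_key.pkcs7.pem
  - name: "Per-role"
    path: "roles/%{facts.role}.yaml"   # custom fact, set per node
  - name: "Common"
    path: "common.yaml"
```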
Every configuration change goes through the Puppet codebase. Every service configuration, every firewall rule, every sysctl tuning parameter has a record of when it was changed and why. The production servers are the output of the codebase, not a state that has drifted away from it.
Diagnostic capability
FreeBSD ships DTrace and ktrace in the base system, a diagnostic capability with no ready-made equivalent on a stock Linux install. When something is wrong and the conventional tools show nothing, these are the tools that find it.
DTrace
Arbitrary instrumentation of the running kernel and userspace without modifying binaries, rebooting, or running debug builds. Probes on the VFS layer, ZFS ARC internals, scheduler events, TCP stack behavior, system calls — aggregated and correlated in a single script output. The cases where DTrace found problems that were invisible to every other tool are documented in the case studies.
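A flavor of what this looks like in practice; the probe choice is illustrative (zil_commit sits on ZFS's synchronous write path, so its latency distribution shows who is paying for fsync):

```
# Illustrative one-liner: per-process latency distribution of ZIL
# commits on a live system -- no restart, no debug kernel
dtrace -n '
  fbt::zil_commit:entry  { self->ts = timestamp; }
  fbt::zil_commit:return /self->ts/ {
    @lat[execname] = quantize(timestamp - self->ts);
    self->ts = 0;
  }'
```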
ktrace
Per-process system call tracing that captures every kernel interaction a process makes — reads, writes, accepts, fsyncs, kevent waits — with timing. Where DTrace provides fleet-level and kernel-level visibility, ktrace provides the definitive record of what a specific process was doing at the moment a problem occurred. Used to diagnose PHP-FPM worker deadlocks, InnoDB fsync stalls, and latency problems that presented as something else entirely.
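The workflow is attach, reproduce, detach, decode; the PID below is illustrative:

```sh
ktrace -p 1234     # attach to an already-running PHP-FPM worker
# ... reproduce the stall ...
ktrace -C          # stop all tracing
kdump -R | less    # -R: relative timestamps, so stalls stand out
```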
Most production performance problems that aren't immediately obvious — intermittent latency spikes, periodic stalls, systems that appear healthy but aren't — live at the boundary between subsystems. Between the ZFS ARC and a userspace allocator. Between InnoDB's durability guarantee and ZFS's synchronous write path. Between a health check's view of a process and what that process is actually doing.
DTrace and ktrace cross those boundaries. They're not tools you reach for first — they're tools you reach for when everything else has been exhausted and the problem is still there. On FreeBSD, they're available. On Linux, the equivalent capability requires specific kernel builds and additional tooling that isn't standard. That's not a preference — it's a capability difference that matters when you're trying to find something invisible.
Remote-first. Dallas-based. Available until 2am CT. Long-term contracts welcome.