Senior Linux systems administration for organizations running Ubuntu, Debian, and containerized workloads. Fleet standardization, Docker and Podman deployments, and application integration — with cross-platform depth that comes from 20 years of parallel FreeBSD production experience.
The cross-platform advantage
A senior Linux administrator who has only ever worked on Linux has a single reference frame. They know what Linux does, but they don't always know why — because they've never seen the same problem solved differently on a system where the design choices were made explicitly rather than inherited from decades of accumulated decisions.
Twenty years of production FreeBSD administration means working with a system where the network stack, the filesystem, the jail subsystem, and the package management are all designed as a coherent whole and documented as such. When that background is applied to a Linux environment, things that are normally invisible become visible: why a particular system call is slow, what the container isolation model actually provides versus what it appears to provide, where the network stack is going to behave unexpectedly under load.
The diagnostic methodology is the same on both platforms. strace on Linux, ktrace on FreeBSD. perf on Linux, DTrace on FreeBSD. The tools have different names and different capabilities, but the discipline of going below the application layer to find problems that are invisible at the surface is the same. That discipline doesn't come from knowing one OS — it comes from understanding what operating systems do at a level where the differences between them are just implementation details.
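As a sketch of that parallel tooling, the same question asked on both platforms. The traced command, `sleep 1`, stands in for whatever process is actually under investigation:

```shell
# Linux: per-syscall counts and timing for a short-lived command.
strace -c -f -o syscall-summary.txt -- sleep 1
head -n 15 syscall-summary.txt

# Linux: sample on-CPU stacks for the same command with perf.
perf record -F 99 -g -o perf.data -- sleep 1 && perf report --stdio -i perf.data | head -n 15

# FreeBSD: the same questions, native tools.
# ktrace -f ktrace.out sleep 1 && kdump -f ktrace.out | head -n 15
# dtrace -n 'syscall:::entry { @[probefunc] = count(); }' -c 'sleep 1'
```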
Fleet standardization
The most common Linux problem isn't a bug. It's a fleet of servers that were each built slightly differently, have each been patched differently, and now each behave slightly differently — and nobody knows which one is the reference.
The typical inherited Linux fleet has accumulated years of entropy: servers running different Ubuntu releases, different partition schemes, and packages frozen at whatever versions were current when each machine was built, with no record of why. Something works on server A and not on server B, and the investigation consumes hours before anyone realizes the two servers aren't running the same version of the relevant library.
Standardization work starts with an honest inventory — not what the documentation says the fleet looks like, but what it actually looks like. From that baseline, the path forward depends on the situation. Two approaches have been applied in production at scale:
For two large engagements, the right answer was migrating the entire fleet to FreeBSD — same kernel version across all hosts, same ZFS filesystem layout, packages built and managed by poudriere on a central build host so every server runs the same compiled versions of every piece of software. Puppet manages configuration uniformly across the fleet. The end state is a set of servers that are genuinely identical in ways that matter: same kernel, same ABI, same package versions, same configuration baseline. When something breaks on one server, the investigation starts from a position of knowing exactly how that server should behave.
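The build-host side of that workflow looks roughly like this. The jail name, release version, and package-list path are illustrative:

```shell
# Central poudriere build host: one build jail pinned to the fleet's release,
# one ports tree snapshot, one package list for the whole fleet.
poudriere jail -c -j prod14 -v 14.1-RELEASE
poudriere ports -c -p default
poudriere bulk -j prod14 -p default -f /usr/local/etc/poudriere.d/fleet-pkglist

# Every server then installs from the single repository this produces, e.g. via
# a pkg(8) repo config on each host pointing at the build host:
# FleetRepo: { url: "https://pkg.internal.example/prod14-default" }
```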
This approach requires the client to accept FreeBSD as the platform, which isn't always possible. When it is, it produces the most coherent result.
Where Linux is a requirement — whether from client preference, specific application dependencies, or existing investment — the same principle applies with Linux-native tooling. The fleet is consolidated onto a single Ubuntu LTS or Debian release, with a consistent partition scheme and a centralized local package repository that serves as a point-in-time snapshot. Future server installs pull from that same repository, so they match the existing fleet at the package level. The repository is updated deliberately, not automatically — a package version change is a conscious decision, not something that happens because a cron job ran apt-get upgrade.
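On each host, pointing exclusively at the internal snapshot looks roughly like this. The hostname, keyring path, and suite name are illustrative:

```
# /etc/apt/sources.list.d/internal.list — every host pulls only from the
# internal snapshot mirror:
deb [signed-by=/usr/share/keyrings/internal-repo.gpg] https://repo.internal.example/ubuntu jammy main universe

# /etc/apt/preferences.d/internal-pin — prefer the internal repository over
# anything else that might still be configured:
Package: *
Pin: origin repo.internal.example
Pin-Priority: 1001
```

A pin priority above 1000 means apt will prefer the internal repository's versions even when a stray external source advertises something newer.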
Puppet manages the Linux fleet using the same roles/profiles pattern as the FreeBSD environments — the tooling is the same, the discipline is the same, and the end state is the same: a fleet where configuration is code, not institutional memory.
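In the roles/profiles pattern, a role is nothing more than a list of profiles, and each node gets exactly one role. A minimal sketch, with class and node names as placeholders:

```puppet
# roles/manifests/web.pp
class role::web {
  include profile::base      # common baseline: sshd, firewall, users, time sync
  include profile::nginx
  include profile::php_fpm
}

# site.pp
node /^web\d+\.example\.com$/ {
  include role::web
}
```

The payoff is that "what is this server?" has a one-line answer in version control, not a folklore answer in someone's head.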
Containerization
Docker deployment work covers both Linux hosts — Ubuntu, Debian — and FreeBSD hosts via the Linux compatibility layer. This includes image construction, Docker Compose orchestration, private registry management, and runtime security hardening. Alpine Linux is the preferred base image for production Docker work: a minimal attack surface, a small image footprint, and a package manager that doesn't pull in unnecessary dependencies. Building a production Docker image on Alpine rather than Ubuntu typically results in an image that is an order of magnitude smaller and contains significantly fewer packages that need to be patched.
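A multi-stage build is the usual shape of that Alpine approach: compile against a full toolchain, ship only the runtime. This sketch uses a Go binary purely as a stand-in; the application name and paths are placeholders:

```dockerfile
# Build stage: full toolchain, discarded after the build.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app

# Runtime stage: minimal Alpine image, non-root user, one binary.
FROM alpine:3.20
RUN apk add --no-cache ca-certificates \
 && adduser -D -H app
COPY --from=build /out/app /usr/local/bin/app
USER app
ENTRYPOINT ["/usr/local/bin/app"]
```

The final image contains the binary, certificates, and busybox; there is very little left to scan or patch.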
Security hardening for Docker environments covers image scanning, non-root user enforcement, read-only filesystem mounts where applicable, network segmentation between containers, and resource limits that prevent a misbehaving container from affecting adjacent workloads.
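Expressed as a Docker Compose fragment, those controls look roughly like this. The service name, image, and limit values are illustrative:

```yaml
services:
  app:
    image: registry.internal.example/app:1.4.2
    user: "1000:1000"          # non-root inside the container
    read_only: true            # root filesystem mounted read-only
    tmpfs:
      - /tmp                   # writable scratch space where the app needs it
    cap_drop: [ALL]
    security_opt:
      - no-new-privileges:true
    mem_limit: 512m            # a runaway container can't starve neighbors
    pids_limit: 200            # nor fork-bomb the host
    networks: [backend]
networks:
  backend:
    internal: true             # segmented network with no outbound route
```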
Podman provides rootless container execution without a daemon — each container runs as the user that started it, with no privileged background process required. On FreeBSD, Podman runs containers via the Linux compatibility layer, providing a Docker-compatible workflow for workloads that need Linux container images on a FreeBSD host. This is particularly useful for running specific Linux applications alongside native FreeBSD jails on the same host, without requiring a separate Linux VM.
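The rootless model is visible from the command line. A quick sketch, run as an ordinary unprivileged user:

```shell
# No daemon, no root: the container runs as the invoking user, and uid 0
# inside the container maps to that user's uid on the host.
podman run --rm docker.io/library/alpine:3.20 id

# The CLI surface is Docker-compatible, so existing workflows carry over:
podman ps -a
podman images
```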
A significant portion of container work is deploying and integrating specific applications that clients want running in their infrastructure — applications with non-trivial deployment requirements, dependency chains that need careful management, or integration points with other systems that require custom configuration. Applications deployed in production container environments include:
Airbyte
Data pipeline orchestration — ELT infrastructure for moving data between sources and warehouses.
KasmWeb
Browser isolation and streaming desktop infrastructure — containerized browser sessions for security and remote access.
Pinba
PHP performance monitoring server — real-time application profiling from production PHP-FPM pools.
Headless Chrome
Puppeteer-driven browser automation — containerized Chrome for scraping, PDF generation, and testing pipelines.
2FAuth
Self-hosted two-factor authentication management on Alpine Linux with Docker and automated certificate management.
Custom stacks
nginx + PHP-FPM + MariaDB + Redis compose environments tailored to specific application requirements.
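A skeleton of such a stack as a compose file. Image tags, volume layout, and the credentials handling are illustrative placeholders, not a production configuration:

```yaml
services:
  nginx:
    image: nginx:1.27-alpine
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - app-code:/var/www/html:ro
    depends_on: [php]
  php:
    image: php:8.3-fpm-alpine
    volumes:
      - app-code:/var/www/html
    environment:
      DB_HOST: db
      REDIS_HOST: redis
    depends_on: [db, redis]
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: change-me   # placeholder; use secrets in production
    volumes:
      - db-data:/var/lib/mysql
  redis:
    image: redis:7-alpine
volumes:
  app-code:
  db-data:
```

The real work is in what this skeleton leaves out: the nginx config, PHP extensions, secrets handling, and backup strategy that each application actually requires.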
Working within existing infrastructure
Many engagements involve existing Linux infrastructure — an established Ubuntu or Debian environment where the platform isn't up for discussion. The client needs something built, fixed, or secured within what's already there. That's a different kind of work from greenfield deployment, and it requires understanding the environment as it is rather than as it should be.
Common scenario
Application deployment
Deploying a specific application or service into an existing Linux environment — with proper systemd service management, log rotation, monitoring integration, and a deployment process that doesn't require manual intervention on every update.
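"Proper systemd service management" means a real unit file, not an application started from a shell script in /etc/rc.local. A minimal sketch; the service name, user, and paths are placeholders for whatever is being deployed:

```ini
# /etc/systemd/system/exampleapp.service
[Unit]
Description=Example application service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=exampleapp
ExecStart=/opt/exampleapp/bin/exampleapp --config /etc/exampleapp/config.toml
Restart=on-failure
RestartSec=5
# Basic sandboxing, tightened per application:
NoNewPrivileges=true
ProtectSystem=full
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```

With the process logging to stdout, journald handles capture and rotation; a logrotate rule covers anything the application still writes to files directly.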
Common scenario
Security hardening
SSH key policy enforcement, service exposure audit, firewall review with iptables or nftables, unnecessary package removal, and a hardening report with actionable items rather than a compliance checklist.
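The firewall end state is usually a short default-deny inbound ruleset. A minimal nftables sketch; the accepted ports are illustrative:

```
# /etc/nftables.conf
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    meta l4proto { icmp, ipv6-icmp } accept
    tcp dport { 22, 443 } accept
  }
}
```

Everything not explicitly listed is dropped, which turns the service exposure audit into a five-line read instead of an archaeology project.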
Common scenario
Performance diagnosis
The same diagnostic methodology applied on Linux — strace, perf, /proc instrumentation — to find problems that aren't visible at the application layer. The tools are different from FreeBSD; the discipline of going below the surface to find the real cause is the same.
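The /proc side of that needs no extra tooling at all. A few of the questions it answers directly, using the shell itself as a stand-in for the process under investigation:

```shell
pid=$$   # stand-in for the process being diagnosed

grep -E 'VmRSS|VmSwap|Threads' "/proc/$pid/status"  # resident memory, swap, thread count
ls "/proc/$pid/fd" | wc -l                          # open file descriptors
cat "/proc/$pid/io" 2>/dev/null                     # cumulative read/write volume
cat /proc/pressure/cpu 2>/dev/null                  # PSI: CPU stall time (kernel 4.20+)
```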
Common scenario
Monitoring integration
Adding proper monitoring to an environment that has none, or replacing Pingdom-style surface checks with monitoring that tests what actually matters — process health, application function, certificate expiry, and dependency availability.
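Certificate expiry is a good example of testing the thing itself rather than a proxy for it. A minimal probe sketch; the hostname and the 14-day threshold are illustrative:

```shell
host=example.com
warn_seconds=$((14 * 24 * 3600))

# Fetch the live certificate and ask openssl whether it survives the threshold.
if echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
     | openssl x509 -noout -checkend "$warn_seconds"; then
    echo "OK: certificate for $host valid beyond the threshold"
else
    echo "WARNING: certificate for $host expires within 14 days (or check failed)"
fi
```

Because it checks the certificate actually served on port 443, it also catches the classic failure mode where the certificate was renewed on disk but the service was never reloaded.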
Fleet standardization, Docker deployment, or existing environment work. Remote-first, available until 2am CT.