Docker
Reproducible builds, minimal images, disciplined production configuration.
Docker is central to how I ship software — every project gets a multi-stage Dockerfile, a .dockerignore, and a Compose stack from day one. The failure modes I've debugged most often are predictable: PID 1 signal handling that causes containers to ignore SIGTERM and time out during deploys, single-stage builds that bake in dev dependencies and inflate images by hundreds of megabytes, COPY instructions that invalidate the entire layer cache on every source change, and base images with a CVE surface nobody audited when they pulled alpine:latest six months ago.

My Dockerfile discipline starts with build stage separation: a build stage that includes compilers, package managers, and test tooling; a final stage that copies only the compiled artifact into a minimal base. I order COPY instructions so that dependency manifests come first — package.json before src/, requirements.txt before *.py — so cache invalidation only happens when dependencies actually change. Signal handling is explicit: I use the exec form of CMD and add tini as PID 1 for anything that forks child processes, so SIGTERM propagates correctly and containers stop cleanly.

Non-root UID is non-negotiable; I create a dedicated user in the Dockerfile rather than relying on runtime configuration. Health check design depends on what the container actually does — HTTP services get a curl or wget probe on the readiness path, not the root path, with a realistic start period that accounts for cold JVM or migration time. .dockerignore is treated like .gitignore and kept current, so the build context stays small and no local credentials or node_modules end up in an image layer.
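A minimal sketch of that layout for a Node.js service — image names, the UID, the port, and paths are illustrative, not prescriptive:

```dockerfile
# Build stage: package manager, full dependency tree, build tooling.
FROM node:20-slim AS build
WORKDIR /app
# Dependency manifests first: this layer only rebuilds when they change.
COPY package.json package-lock.json ./
RUN npm ci
# Source last, so edits here don't invalidate the dependency layer above.
COPY src/ ./src/
RUN npm run build && npm prune --omit=dev

# Final stage: runtime only, non-root user, tini as PID 1.
FROM node:20-slim
RUN apt-get update && apt-get install -y --no-install-recommends tini curl \
    && rm -rf /var/lib/apt/lists/* \
    && useradd --create-home --uid 10001 app
WORKDIR /app
COPY --from=build --chown=app:app /app/dist ./dist
COPY --from=build --chown=app:app /app/node_modules ./node_modules
USER app
# Probe the readiness path, with a start period for cold start.
HEALTHCHECK --interval=10s --start-period=30s --retries=3 \
    CMD curl -fsS http://localhost:3000/ready || exit 1
ENTRYPOINT ["tini", "--"]
# Exec form, so SIGTERM reaches the node process rather than a shell.
CMD ["node", "dist/server.js"]
```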
Dockerfile Audit
I work through a Dockerfile by examining layer order for cache invalidation patterns — specifically whether dependency installation is separated from source COPY, whether the build context is scoped tightly in .dockerignore, and whether the base image's CVE surface is justified. Single-stage builds that embed build tooling in the final image get refactored to multi-stage. Signal handling and UID configuration get checked on every image regardless of other findings.
Docker Compose Environments
I structure Compose files so that service dependencies are expressed with healthcheck-based condition checks, not just depends_on by name — a database container being up is different from it being ready to accept connections. Volume mounts are scoped to only what the service needs to read or write. Environment variables are loaded from .env with explicit defaults in the Compose file so the stack is portable without requiring local env configuration.
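A sketch of the healthcheck-gated pattern, assuming Postgres; service names, the dev password default, and the volume name are placeholders:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-devpassword}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
    volumes:
      - db-data:/var/lib/postgresql/data   # scoped: only the data dir

  api:
    build: .
    depends_on:
      db:
        condition: service_healthy   # ready to accept connections, not merely started
    environment:
      DATABASE_URL: ${DATABASE_URL:-postgres://postgres:devpassword@db:5432/postgres}

volumes:
  db-data:
```

The `${VAR:-default}` form is what makes the stack portable: a bare checkout works, and a local .env overrides it without editing the file.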
Registry & Image Pipeline
In CI, I wire the build stage to produce a tagged image, run a vulnerability scan (Trivy or Grype) against it before any push, and fail the pipeline on critical CVEs rather than treating the scan as informational. Promotion from a dev registry to a production registry is done by re-tagging a digest — not by rebuilding — so the image that passed scanning is exactly the image that deploys.
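As a sketch of that pipeline, assuming GitHub Actions syntax and Trivy; registry names and the tag scheme are placeholders:

```yaml
jobs:
  build-scan-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build tagged image
        run: docker build -t dev-registry.example.com/app:${{ github.sha }} .
      - name: Scan before any push, failing on critical CVEs
        run: trivy image --exit-code 1 --severity CRITICAL dev-registry.example.com/app:${{ github.sha }}
      - name: Push
        run: docker push dev-registry.example.com/app:${{ github.sha }}
```

Promotion then copies the scanned digest rather than a tag, for example with `crane copy dev-registry.example.com/app@sha256:<digest> prod-registry.example.com/app:v1.2.3` (skopeo works too), so nothing is rebuilt between scan and deploy.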
Developer Environment Standardization
A Compose stack with healthcheck-gated dependencies and a minimal .env.example replaces multi-page local setup docs. The goal is a single command that produces a working stack, independent of what the developer already has installed on the host.
Legacy App Containerization
Wrapping an existing Node.js, Python, or Java app in a production-ready container means more than a FROM and a CMD. It means auditing what the process actually needs at runtime, stripping everything else, and validating that the container behaves correctly under a clean shutdown signal before it ever sees a load balancer.
CI/CD Build Consistency
Running the same image across CI, staging, and production removes the class of failures that comes from environment drift. The CI job builds the image, runs the test suite inside it, scans it, and pushes the digest. Staging and production pull that digest — not a tag that might resolve differently on the next build.
Let's talk Docker.
No pitch. Just a technical conversation about the problem you're working on.