Basic Dockerfile
A Dockerfile is a recipe that tells Docker how to build your container image. It contains step-by-step instructions: start with a base image, copy your code, compile it, and define how to run it. We'll use a multi-stage build, which is a technique that uses one container to build your app and a different, smaller container to run it. This keeps your final image small and secure: it only contains the compiled binary, not the Go compiler or source code.
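A minimal sketch of such a multi-stage Dockerfile is shown below; the Go version, module layout (`./cmd/server`), and port 3000 are placeholder assumptions to adapt to your project:

```dockerfile
# Build stage: compile the Go binary with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Run stage: only the compiled binary goes into the final image
FROM gcr.io/distroless/static
COPY --from=build /app /app
EXPOSE 3000
ENTRYPOINT ["/app"]
```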
Build and Run
Once you have a Dockerfile, you use the `docker build` command to create an image, and `docker run` to start a container from that image. Here are the essential commands you'll use daily:
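For example (the image name `myapp` and port 3000 are illustrative):

```bash
# Build an image from the Dockerfile in the current directory
docker build -t myapp:latest .

# Run a container in the background, mapping host port 3000 to container port 3000
docker run -d -p 3000:3000 --name myapp myapp:latest

# Follow the container's logs
docker logs -f myapp

# Stop and remove the container
docker stop myapp && docker rm myapp
```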
The `-p 3000:3000` flag is called port mapping. It connects port 3000 on your host machine to port 3000 inside the container. Without it, the container's port would be inaccessible from outside.
Production Dockerfile
A complete production Dockerfile with all best practices:
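A sketch of what that can look like; the Go version, module path, and non-root user setup are assumptions, not the only correct choices:

```dockerfile
# Build stage: pinned Go version, cached module downloads
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary, stripped of debug info to keep the image small
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app ./cmd/server

# Run stage: distroless image that runs as a non-root user
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
EXPOSE 3000
USER nonroot:nonroot
ENTRYPOINT ["/app"]
```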
Base Image Options
The base image is the foundation your container is built on. Choosing the right base image affects security, size, and debugging capabilities. For Go applications, you have several excellent options:

| Image | Size | Use Case |
|---|---|---|
| `gcr.io/distroless/static` | ~2MB | Pure Go binaries (recommended) |
| `gcr.io/distroless/base` | ~20MB | When CGO is required |
| `alpine:3.19` | ~7MB | When you need a shell for debugging |
| `scratch` | 0MB | Minimal, but no CA certs or timezone data |
Distroless (Recommended)
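The final stage might look like this, assuming a build stage named `build` that produces `/app` as in the earlier examples:

```dockerfile
# Final stage only; the build stage is unchanged from the examples above
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
USER nonroot:nonroot
ENTRYPOINT ["/app"]
```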
Alpine (When Shell Needed)
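If you need a shell inside the container, an Alpine-based final stage is a common alternative. A sketch (the user name and UID are arbitrary; `ca-certificates` and `tzdata` are added because Alpine does not include them by default):

```dockerfile
# Final stage with a shell available for debugging
FROM alpine:3.19
RUN apk add --no-cache ca-certificates tzdata
COPY --from=build /app /app
# Run as an unprivileged user rather than root
RUN adduser -D -u 10001 appuser
USER appuser
ENTRYPOINT ["/app"]
```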
Multi-Architecture Builds
Different servers use different CPU architectures. Most cloud servers use AMD64 (Intel/AMD processors), but newer options like AWS Graviton and Apple Silicon Macs use ARM64. Building for multiple architectures ensures your container works everywhere. Docker's buildx tool can create images that work on both architectures. When someone pulls your image, Docker automatically downloads the right version for their CPU.
Build Multi-Arch Images
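For example (the registry address and image name are placeholders):

```bash
# Create and switch to a builder that supports multi-platform builds
docker buildx create --name multiarch --use

# Build for AMD64 and ARM64 and push the multi-arch image to a registry
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:1.0.0 \
  --push .
```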
Docker Compose
Docker Compose is a tool for defining and running multi-container applications. Instead of manually starting each container with `docker run` commands, you describe your entire application stack in a YAML file: your app, database, cache, and any other services. Then one command (`docker compose up`) starts everything together.
Compose is especially useful when your app depends on other services like PostgreSQL or Redis. It handles networking between containers automatically, so your app can connect to the database using a simple hostname like db instead of managing IP addresses.
Development
Here's a `docker-compose.yml` for local development. It runs your app alongside PostgreSQL and Redis, with hot-reload support:
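A sketch of what that file might contain; the service names, image versions, and credentials are illustrative, and the source mount assumes a live-reload tool runs inside the app container:

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app?sslmode=disable
      REDIS_URL: redis://cache:6379
    volumes:
      # Mount the source so a live-reload tool inside the container picks up changes
      - .:/src
    depends_on:
      - db
      - cache

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
```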
Production
docker-compose.prod.yml:
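A production variant typically pulls a prebuilt image instead of building locally, and adds restart and logging policies. A sketch (the image reference is a placeholder, and the health check assumes the `-healthcheck` flag described later in this guide):

```yaml
services:
  app:
    image: registry.example.com/myapp:1.0.0
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
    restart: unless-stopped
    healthcheck:
      # Assumes the binary exposes a -healthcheck flag (see Custom Health Check Binary)
      test: ["CMD", "/app", "-healthcheck"]
      interval: 30s
      timeout: 3s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```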
Running in Production
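The day-to-day commands, assuming the file above is named `docker-compose.prod.yml`:

```bash
# Pull the latest images and start (or update) the stack in the background
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d

# Check status and recent logs
docker compose -f docker-compose.prod.yml ps
docker compose -f docker-compose.prod.yml logs --tail=100 app

# Stop and remove everything
docker compose -f docker-compose.prod.yml down
```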
Environment Variables and Secrets
In production, you need to configure your application without hardcoding values like database passwords or API keys. Environment variables are the standard way to pass configuration to containers. They're set when the container starts and can be different for each environment (development, staging, production). Secrets are sensitive values like passwords and API keys that need special handling. Never store secrets in your Docker image or commit them to git: anyone with access to the image could extract them.
Using .env Files
For local development, you can store environment variables in a `.env` file. Docker Compose automatically loads this file:
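For example (values are placeholders; keep `.env` out of version control):

```env
# .env -- loaded automatically by Docker Compose from the project directory
DATABASE_URL=postgres://app:secret@db:5432/app?sslmode=disable
REDIS_URL=redis://cache:6379
LOG_LEVEL=debug
```

These values can then be referenced in `docker-compose.yml` with `${DATABASE_URL}`-style interpolation.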
Docker Secrets (Swarm Mode)
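In Swarm mode, secrets are stored in the cluster and mounted into containers as files under `/run/secrets`. A sketch, with secret, service, and image names as placeholders:

```bash
# Swarm must be initialized first: docker swarm init
# Create a secret from stdin
printf 'supersecret' | docker secret create db_password -

# Grant a service access to the secret
docker service create \
  --name myapp \
  --secret db_password \
  registry.example.com/myapp:1.0.0
```

Inside the container the value is available at `/run/secrets/db_password`, so the application reads that file instead of an environment variable.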
Container Health Checks
Docker can monitor your container's health by periodically running a command inside it, typically one that makes an HTTP request to the app. If the health check fails repeatedly, Docker marks the container as "unhealthy." Orchestration tools like Docker Compose and Kubernetes use this status to automatically restart failed containers or redirect traffic away from unhealthy ones. For web applications, health checks typically hit a `/readyz` or `/healthz` endpoint that returns 200 OK when the app is working properly.
In Dockerfile
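For example, using a health-check mode built into the binary itself (a reasonable choice here because distroless images ship no shell or curl; see the custom binary below):

```dockerfile
# Probe the binary's own health-check mode every 30 seconds
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD ["/app", "-healthcheck"]
```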
In Docker Compose
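The same check expressed in Compose (service name and flag are the same assumptions as above):

```yaml
services:
  app:
    build: .
    healthcheck:
      test: ["CMD", "/app", "-healthcheck"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 5s
```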
Custom Health Check Binary
Add a lightweight health check to your binary:
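One way to do it: a `-healthcheck` flag that calls the app's own `/healthz` endpoint and exits non-zero on failure. The port and path here are assumptions matching the earlier examples:

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	healthcheck := flag.Bool("healthcheck", false, "probe the running server and exit")
	flag.Parse()

	if *healthcheck {
		// Invoked by Docker's HEALTHCHECK: exit 0 if healthy, non-zero otherwise.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:3000/healthz")
		if err != nil || resp.StatusCode != http.StatusOK {
			os.Exit(1)
		}
		os.Exit(0)
	}

	// Normal server startup, including the /healthz endpoint the probe hits.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// ... register the rest of your application's routes here ...
	log.Fatal(http.ListenAndServe(":3000", nil))
}
```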
Networking
Docker creates isolated networks for your containers. By default, containers in the same Docker Compose file can communicate with each other using their service names as hostnames (e.g., your app can connect to `postgres://db:5432`, where `db` is the service name).
You can create multiple networks to control which containers can talk to each other. For example, you might want your app to reach both the database and the internet, but prevent the database from being accessed from outside.
Container Networking
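A sketch of the two-network layout just described (network names are arbitrary): the app joins both networks, while the database is only reachable on the internal one.

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    networks:
      - frontend
      - backend

  db:
    image: postgres:16-alpine
    networks:
      - backend   # no published ports; only reachable from services on "backend"

networks:
  frontend:
  backend:
    internal: true   # containers on this network get no outside connectivity
```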
Exposing Ports
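In Compose, `ports` publishes a container port on the host, while `expose` merely documents it; containers on the same network can already reach each other's ports. A few common forms (all values illustrative):

```yaml
services:
  app:
    ports:
      - "3000:3000"            # host:container
      - "127.0.0.1:9090:9090"  # bind to localhost only, e.g. a metrics port
    expose:
      - "3000"                 # documentation only; not published on the host
```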
Image Optimization
A well-optimized Docker image builds faster, downloads faster, and uses less storage. The two main optimization techniques are layer caching and minimizing image size.
Layer Caching
Docker builds images in layers, and it caches each layer. If a layer hasn't changed, Docker reuses the cached version instead of rebuilding it. The key insight is that when one layer changes, all subsequent layers must be rebuilt. This means you should order your Dockerfile instructions from least to most frequently changing. Dependencies change less often than your source code, so copy and install dependencies first:
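In a Go build stage, that ordering looks roughly like this (module path is a placeholder): the `go mod download` layer is only invalidated when `go.mod` or `go.sum` change, not on every code edit.

```dockerfile
FROM golang:1.22 AS build
WORKDIR /src

# 1. Copy only the dependency manifests and download modules.
#    This layer is reused as long as go.mod/go.sum are unchanged.
COPY go.mod go.sum ./
RUN go mod download

# 2. Copy the rest of the source; only the layers below rebuild on code changes.
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server
```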
Reduce Image Size
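Most of the size win comes from the multi-stage pattern and small base image already shown; on top of that, stripping debug information from the binary helps. A hedged example of the build command in the build stage:

```dockerfile
# In the build stage: -s -w strip symbol tables and DWARF debug info;
# CGO_ENABLED=0 keeps the binary static so it runs on distroless/scratch
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app ./cmd/server
```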
.dockerignore
Create a `.dockerignore` file to exclude unnecessary files:
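A typical starting point (adjust to your repository layout):

```
# .dockerignore
.git
.gitignore
*.md
Dockerfile
docker-compose*.yml
.env
tmp/
bin/
coverage.out
```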
Private Registries
A container registry is like GitHub for Docker images: a place to store and share your container images. When you run `docker pull nginx`, Docker downloads the image from Docker Hub, the default public registry.
For your own applications, you'll use a private registry so only authorized users can access your images. All major cloud providers offer managed registries (AWS ECR, Google Container Registry, etc.), or you can use Docker Hub's private repositories.
The workflow is: build your image locally, push it to the registry, then pull it on your production servers.
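In commands, that workflow looks roughly like this; the registry address, repository name, and tag are placeholders, and the exact login step differs per provider:

```bash
# Authenticate with your registry (provider-specific credentials)
docker login registry.example.com

# Build and tag the image with the registry's address
docker build -t registry.example.com/team/myapp:1.0.0 .

# Push it to the registry
docker push registry.example.com/team/myapp:1.0.0

# On the production server: pull and run it
docker pull registry.example.com/team/myapp:1.0.0
docker run -d -p 3000:3000 registry.example.com/team/myapp:1.0.0
```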