Docker is a platform that packages your application and all of its dependencies into a standardized unit called a container. Think of a container as a lightweight, portable box holding everything your app needs to run: the binary, configuration files, and even operating system libraries. The biggest advantage of Docker is consistency: the same container that works on your laptop runs identically on any server, so "it works on my machine" problems disappear. Docker also makes scaling straightforward, since you can run multiple copies of the same container to handle more traffic. This guide covers everything from basic Dockerfiles to production-ready configurations.

Basic Dockerfile

A Dockerfile is a recipe that tells Docker how to build your container image. It contains step-by-step instructions: start with a base image, copy your code, compile it, and define how to run it. We'll use a multi-stage build, a technique where one stage compiles your app and a second, smaller stage runs it. This keeps your final image small and secure: it contains only the compiled binary, not the Go compiler or source code.
# Build stage
FROM golang:1.23-alpine AS build
WORKDIR /src

# Download dependencies first (cached if go.mod unchanged)
COPY go.mod go.sum ./
RUN go mod download

# Build the application
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app ./cmd/server

# Runtime stage
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
EXPOSE 3000
USER nonroot:nonroot
CMD ["/app"]

Build and Run

Once you have a Dockerfile, you use the docker build command to create an image, and docker run to start a container from that image. Here are the essential commands you'll use daily:
# Build the image (the -t flag gives it a name and tag)
docker build -t myapp:latest .

# Run the container (-d runs in background, -p maps port 3000)
docker run -d --name myapp -p 3000:3000 myapp:latest

# View logs (the -f flag follows new log entries)
docker logs -f myapp

# Stop and remove the container
docker stop myapp && docker rm myapp
The -p 3000:3000 flag is called port mapping. It connects port 3000 on your host machine to port 3000 inside the container. Without this, the container's port would be inaccessible from outside.
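You can confirm the mapping once the container is up; /healthz here is an assumed endpoint, not something the commands above define:
# Hit the app through the published host port
curl http://localhost:3000/healthz

# Show the port mappings Docker created for the container
docker port myapp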

Production Dockerfile

A complete production Dockerfile with all best practices:
# syntax=docker/dockerfile:1

# Build stage
FROM golang:1.23-alpine AS build

# Install git for private dependencies (if needed)
RUN apk add --no-cache git ca-certificates tzdata

WORKDIR /src

# Download dependencies (cached layer)
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download

# Copy source code
COPY . .

# Build with optimizations; the version is passed in as a build argument
# because .dockerignore excludes .git, so git describe would have nothing to read
ARG VERSION=dev
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -ldflags="-s -w -X main.version=${VERSION}" \
    -o /app ./cmd/server

# Runtime stage - distroless for security
FROM gcr.io/distroless/static-debian12

# Copy timezone data for time operations
COPY --from=build /usr/share/zoneinfo /usr/share/zoneinfo

# Copy CA certificates for HTTPS
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# Copy the binary
COPY --from=build /app /app

# Expose port
EXPOSE 3000

# Run as non-root
USER nonroot:nonroot

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD ["/app", "-health-check"] || exit 1

# Start the application
ENTRYPOINT ["/app"]
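With the build argument in place, the version string is stamped into the binary at build time:
# Pass the release version through to the linker flag above
docker build --build-arg VERSION=v1.0.0 -t myapp:v1.0.0 .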

Base Image Options

The base image is the foundation your container is built on. Choosing the right base image affects security, size, and debugging capabilities. For Go applications, you have several excellent options:
Image                      Size   Use Case
gcr.io/distroless/static   ~2MB   Pure Go binaries (recommended)
gcr.io/distroless/base     ~20MB  When CGO is required
alpine:3.19                ~7MB   When you need a shell for debugging
scratch                    0MB    Minimal, but no CA certs or timezone data
Why does size matter? Smaller images download faster (important for scaling), have fewer vulnerabilities (less code = fewer bugs), and use less storage. A 2MB distroless image is much more secure than a 1GB Ubuntu image.

Distroless (Recommended)

FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot:nonroot
CMD ["/app"]
Pros: Minimal attack surface, no shell, no package manager
Cons: No debugging tools inside the container

Alpine (When Shell Needed)

FROM alpine:3.19
RUN apk add --no-cache ca-certificates tzdata
COPY --from=build /app /app
RUN adduser -D -u 1000 appuser
USER appuser
CMD ["/app"]
Pros: Shell available, small size, package manager
Cons: Larger attack surface

Multi-Architecture Builds

Different servers use different CPU architectures. Most cloud servers use AMD64 (Intel/AMD processors), but newer options like AWS Graviton and Apple Silicon Macs use ARM64. Building for multiple architectures ensures your container works everywhere. Docker's buildx tool can create images that work on both architectures. When someone pulls your image, Docker automatically downloads the right version for their CPU.
FROM --platform=$BUILDPLATFORM golang:1.23-alpine AS build
ARG TARGETOS
ARG TARGETARCH

WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .

RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} \
    go build -ldflags="-s -w" -o /app ./cmd/server

FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
USER nonroot:nonroot
CMD ["/app"]

Build Multi-Arch Images

# Create and use a buildx builder
docker buildx create --name multiarch --use

# Build and push for multiple platforms
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t myregistry/myapp:latest \
    --push .
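You can verify that the pushed tag really contains both variants by inspecting the manifest list:
# List the platforms included in the pushed image's manifest
docker buildx imagetools inspect myregistry/myapp:latest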

Docker Compose

Docker Compose is a tool for defining and running multi-container applications. Instead of manually starting each container with docker run commands, you describe your entire application stack in a YAML fileβ€”your app, database, cache, and any other services. Then one command (docker compose up) starts everything together. Compose is especially useful when your app depends on other services like PostgreSQL or Redis. It handles networking between containers automatically, so your app can connect to the database using a simple hostname like db instead of managing IP addresses.

Development

Here's a docker-compose.yml for local development. It runs your app alongside PostgreSQL and Redis, with your source mounted into the container for fast iteration:
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: build  # Use the build stage, which has the Go toolchain `go run` needs
    ports:
      - "3000:3000"
    environment:
      - ENV=development
      - DATABASE_URL=postgres://user:pass@db:5432/myapp?sslmode=disable
      - LOG_LEVEL=debug
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - .:/src  # Mount source so local edits are visible in the container (dev only)
    command: go run ./cmd/server  # Override for development; restart to pick up changes

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
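
Day-to-day usage then comes down to a few commands:
# Start the whole stack (app, Postgres, Redis)
docker compose up -d

# Rebuild the app image after changing the Dockerfile or dependencies
docker compose up -d --build app

# Stop and remove the containers (the pgdata volume is preserved)
docker compose down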

Production

docker-compose.prod.yml:
version: '3.8'

services:
  app:
    image: myregistry/myapp:${VERSION:-latest}
    ports:
      - "3000:3000"
    environment:
      - ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - LOG_LEVEL=info
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/readyz"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - pgdata:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          memory: 1G

volumes:
  pgdata:
    driver: local

Running in Production

# Start with production config
docker compose -f docker-compose.prod.yml up -d

# View logs
docker compose -f docker-compose.prod.yml logs -f app

# Scale the application (note: with a fixed host port mapping like
# "3000:3000", replicas collide; publish a random port or put a load
# balancer in front before scaling)
docker compose -f docker-compose.prod.yml up -d --scale app=3

# Rolling update
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d --no-deps app

Environment Variables and Secrets

In production, you need to configure your application without hardcoding values like database passwords or API keys. Environment variables are the standard way to pass configuration to containers. They're set when the container starts and can differ per environment (development, staging, production). Secrets are sensitive values, such as passwords and API keys, that need special handling. Never store secrets in your Docker image or commit them to git: anyone with access to the image could extract them.
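Reading this configuration in Go takes only a few lines; a minimal sketch, where getenv is a hypothetical helper that falls back to a default when a variable is unset:
package main

import (
    "fmt"
    "os"
)

// getenv returns the environment variable's value, or fallback when it
// is unset (hypothetical helper for illustration).
func getenv(key, fallback string) string {
    if v := os.Getenv(key); v != "" {
        return v
    }
    return fallback
}

func main() {
    dbURL := getenv("DATABASE_URL", "postgres://localhost:5432/myapp")
    logLevel := getenv("LOG_LEVEL", "info")
    fmt.Println(dbURL, logLevel)
}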

Using .env Files

For local development, you can store environment variables in a .env file. Docker Compose automatically loads this file:
# .env file (never commit to git!)
DATABASE_URL=postgres://user:pass@localhost:5432/myapp
SECRET_KEY=your-secret-key
# docker-compose.yml
services:
  app:
    env_file:
      - .env

Docker Secrets (Swarm Mode)

version: '3.8'

services:
  app:
    image: myapp:latest
    secrets:
      - db_password
      - api_key
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    external: true
  api_key:
    file: ./secrets/api_key.txt
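The external secret must exist in the swarm before you deploy; mystack below is a placeholder stack name:
# Create the external secret from stdin
printf 'supersecret' | docker secret create db_password -

# Deploy the stack (compose-file secrets require swarm mode)
docker stack deploy -c docker-compose.yml mystack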
Read secrets in your application:
import (
    "fmt"
    "os"
    "strings"
)

// getSecret reads a Docker secret from /run/secrets, falling back to an
// environment variable with the same name upper-cased.
func getSecret(name string) string {
    // Try file-based secret first (Docker secrets)
    path := fmt.Sprintf("/run/secrets/%s", name)
    if data, err := os.ReadFile(path); err == nil {
        return strings.TrimSpace(string(data))
    }

    // Fall back to environment variable
    envName := strings.ToUpper(name)
    return os.Getenv(envName)
}
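With this helper, getSecret("db_password") returns the contents of /run/secrets/db_password under swarm mode and falls back to the DB_PASSWORD environment variable everywhere else, so the same binary works in both environments.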

Container Health Checks

Docker can monitor your container's health by periodically running a command or making an HTTP request. If the health check fails repeatedly, Docker marks the container as "unhealthy." Orchestration tools like Docker Compose and Kubernetes use this status to automatically restart failed containers or redirect traffic away from unhealthy ones. For web applications, health checks typically hit a /readyz or /healthz endpoint that returns 200 OK when the app is working properly.
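The endpoint itself can be minimal. Here is a sketch using only the standard library (a real app would route this through its framework and might also ping its dependencies):
package main

import "net/http"

// readyz reports 200 when the app can serve traffic; checking real
// dependencies (database, cache) is a common extension.
func readyz(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusOK)
    w.Write([]byte("ok"))
}

func main() {
    http.HandleFunc("/readyz", readyz)
    http.ListenAndServe(":3000", nil)
}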

In Dockerfile

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -q --spider http://localhost:3000/readyz || exit 1

In Docker Compose

services:
  app:
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/readyz"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

Custom Health Check Binary

Add a lightweight health check to your binary:
// cmd/server/main.go
package main

import (
    "net/http"
    "os"
)

func main() {
    if len(os.Args) > 1 && os.Args[1] == "-health-check" {
        resp, err := http.Get("http://localhost:3000/readyz")
        if err != nil || resp.StatusCode != http.StatusOK {
            os.Exit(1)
        }
        os.Exit(0)
    }

    // Normal application startup
    app := mizu.New()
    // ...
}

Networking

Docker creates isolated networks for your containers. By default, containers in the same Docker Compose file can communicate with each other using their service names as hostnames (e.g., your app can connect to postgres://db:5432 where db is the service name). You can create multiple networks to control which containers can talk to each other. For example, you might want your app to reach both the database and the internet, but prevent the database from being accessed from outside.
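For example, your app can dial Postgres by service name from inside the Compose network; a sketch assuming the github.com/lib/pq driver and the credentials from the development compose file above:
package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // Postgres driver (assumed dependency)
)

func main() {
    // "db" resolves to the Postgres container via Compose's built-in DNS
    db, err := sql.Open("postgres", "postgres://user:pass@db:5432/myapp?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
    log.Println("connected to Postgres via service-name DNS")
}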

Container Networking

version: '3.8'

services:
  app:
    networks:
      - frontend
      - backend

  db:
    networks:
      - backend

  nginx:
    networks:
      - frontend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external access

Exposing Ports

services:
  app:
    ports:
      - "3000:3000"          # Map host:container
      - "127.0.0.1:3000:3000" # Only localhost
      - "3000"               # Random host port
    expose:
      - "3000"               # Only to other containers

Image Optimization

A well-optimized Docker image builds faster, downloads faster, and uses less storage. The two main optimization techniques are layer caching and minimizing image size.

Layer Caching

Docker builds images in layers, and it caches each layer. If a layer hasn't changed, Docker reuses the cached version instead of rebuilding it. The key insight is that when one layer changes, all subsequent layers must be rebuilt. This means you should order your Dockerfile instructions from least to most frequently changing. Dependencies change less often than your source code, so copy and install dependencies first:
# 1. Base image (rarely changes)
FROM golang:1.23-alpine AS build

# 2. Dependencies (changes occasionally)
COPY go.mod go.sum ./
RUN go mod download

# 3. Source code (changes frequently)
COPY . .
RUN go build -o /app
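You can watch the cache at work by rebuilding after editing only source code; the dependency layers are reported as cached and only the later steps re-run:
# First build populates the layer cache
docker build -t myapp:latest .

# Edit a .go file, then rebuild: the go.mod/go.sum COPY and
# `go mod download` layers are reused; only `COPY . .` and the
# build step execute again
docker build -t myapp:latest .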

Reduce Image Size

# Check image size
docker images myapp

# Analyze layers
docker history myapp:latest

# Use dive for detailed analysis
dive myapp:latest

.dockerignore

Create .dockerignore to exclude unnecessary files:
.git
.github
.vscode
*.md
!README.md
Makefile
docker-compose*.yml
.env*
tmp/
vendor/
*_test.go

Private Registries

A container registry is like GitHub for Docker images: a place to store and share your container images. When you run docker pull nginx, Docker downloads the image from Docker Hub, the default public registry. For your own applications, you'll use a private registry so only authorized users can access your images. All major cloud providers offer managed registries (AWS ECR, Google Artifact Registry, and so on), or you can use Docker Hub's private repositories. The workflow is: build your image locally, push it to the registry, then pull it on your production servers.

Docker Hub

# Login
docker login

# Tag and push
docker tag myapp:latest username/myapp:latest
docker push username/myapp:latest

AWS ECR

# Login to ECR
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

# Tag and push
docker tag myapp:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

Google Artifact Registry

# Configure Docker for Artifact Registry (Container Registry is deprecated)
gcloud auth configure-docker us-central1-docker.pkg.dev

# Tag and push (format: LOCATION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE)
docker tag myapp:latest us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest
docker push us-central1-docker.pkg.dev/my-project/my-repo/myapp:latest

Troubleshooting

Container Won't Start

# Check container logs
docker logs myapp

# Run interactively (only works if the image includes a shell, e.g. Alpine-based)
docker run -it myapp:latest /bin/sh

# Check container status
docker inspect myapp

Health Check Failing

# Test health check manually
docker exec myapp wget -q --spider http://localhost:3000/readyz

# Check health status
docker inspect --format='{{.State.Health.Status}}' myapp

Performance Issues

# Check resource usage
docker stats myapp

# Set resource limits
docker run -d --memory=512m --cpus=1 myapp:latest

Complete Example

Here's a production-ready setup:
myapp/
├── cmd/
│   └── server/
│       └── main.go
├── internal/
│   └── ...
├── Dockerfile
├── docker-compose.yml
├── docker-compose.prod.yml
├── .dockerignore
└── .env.example
Build and deploy:
# Build
docker build --build-arg VERSION=v1.0.0 -t myapp:v1.0.0 .

# Test locally
docker compose up

# Deploy to production
docker compose -f docker-compose.prod.yml up -d

Next Steps