Deployment is the process of taking your application from your development machine and making it available on the internet for users to access. This involves building your code, configuring it for a production environment, and running it on a server that’s always online. This section covers everything you need to deploy Mizu applications to production. Whether you’re deploying to a VPS (Virtual Private Server), Kubernetes cluster, or serverless platform, you’ll find detailed guides for your environment.

Building for Production

When you develop locally, you run your app with go run. But for production, you need to compile your code into an executable file (called a binary) that can run on your server without needing Go installed. Go compiles your application into a single binary with no external dependencies. Unlike languages that need a runtime (like Python or Node.js), Go bundles everything into one file. This makes deployment remarkably simple—just copy the file to your server and run it.

Basic Build

go build -o app ./cmd/server

Optimized Production Build

For production, you want a smaller, faster binary. You also need to specify which operating system and CPU architecture the binary should run on. This is called cross-compilation—building on your Mac or Windows machine for a Linux server.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
  go build -ldflags="-s -w" -o app ./cmd/server
Here’s what each flag does:
Flag                  Purpose
CGO_ENABLED=0         Creates a pure Go binary with no C dependencies, making it fully portable
GOOS=linux            Tells Go to build for Linux (even if you're on Mac or Windows)
GOARCH=amd64          Targets Intel/AMD 64-bit CPUs (most cloud servers use this)
-ldflags="-s -w"      Strips debug symbols, reducing binary size by ~30%
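The -ldflags string can also inject values at build time with -X, a common way to stamp a version into the binary. A minimal sketch; the main.version variable and the version number are illustrative, not part of Mizu:
package main

import "fmt"

// Overridden at build time, e.g.:
//   go build -ldflags="-s -w -X main.version=1.4.2" -o app ./cmd/server
var version = "dev"

func main() {
    fmt.Println("running version:", version)
}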

Common Architectures

Target                            GOOS      GOARCH
Linux x86-64                      linux     amd64
Linux ARM64 (AWS Graviton, M1)    linux     arm64
macOS Intel                       darwin    amd64
macOS Apple Silicon               darwin    arm64
Windows                           windows   amd64

Deployment Decision Guide

Choose your deployment method based on your needs:
Need container orchestration?
├─ Yes → Kubernetes deployment
└─ No  → Need auto-scaling?
         ├─ Yes → Cloud platform (ECS, Cloud Run, etc.)
         └─ No  → Simple VPS or traditional server
                  Want containers?
                  ├─ Yes → Docker deployment
                  └─ No  → Direct binary + systemd
Method            Best For                                Complexity
Docker            Consistent environments, easy scaling   Low
Kubernetes        Large-scale, complex deployments        High
Cloud Platforms   Managed infrastructure, auto-scaling    Medium
Serverless        Event-driven, variable traffic          Low
Traditional       Simple apps, full control               Low

Essential Configuration

Environment Variables

Read configuration from environment variables for flexibility:
package main

import (
    "os"
    "github.com/go-mizu/mizu"
)

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "3000"
    }

    app := mizu.New()
    // ... routes
    app.Listen(":" + port)
}

Common Environment Variables

Variable           Purpose               Example
PORT               HTTP listen port      3000
ENV                Environment name      production
DATABASE_URL       Database connection   postgres://...
LOG_LEVEL          Logging verbosity     info
SHUTDOWN_TIMEOUT   Graceful shutdown     30s

Configuration Pattern

import (
    "os"
    "time"
)

type Config struct {
    Port            string
    Env             string
    DatabaseURL     string
    ShutdownTimeout time.Duration
}

func LoadConfig() *Config {
    timeout, _ := time.ParseDuration(getEnv("SHUTDOWN_TIMEOUT", "15s"))

    return &Config{
        Port:            getEnv("PORT", "3000"),
        Env:             getEnv("ENV", "development"),
        DatabaseURL:     os.Getenv("DATABASE_URL"),
        ShutdownTimeout: timeout,
    }
}

func getEnv(key, fallback string) string {
    if value := os.Getenv(key); value != "" {
        return value
    }
    return fallback
}
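Here is a minimal sketch of how LoadConfig might be wired into main, using only the Mizu calls shown elsewhere on this page; the route registration is a placeholder:
func main() {
    cfg := LoadConfig()

    app := mizu.New()
    app.ShutdownTimeout = cfg.ShutdownTimeout

    // ... register routes

    app.Listen(":" + cfg.Port)
}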

Health Checks

In production, your application runs alongside infrastructure that monitors its status. Health checks are special endpoints that answer a simple question: “Is this application working properly?” Load balancers use health checks to decide which servers should receive traffic. If your app becomes unhealthy (maybe the database connection dropped), the load balancer stops sending requests to it until it recovers. Orchestrators like Kubernetes use health checks to automatically restart containers that have crashed or become unresponsive. Mizu provides built-in health check handlers that work with all major platforms:
func main() {
    app := mizu.New()

    // Liveness: Is the process alive?
    // Returns 200 OK always
    http.Handle("/livez", app.LivezHandler())

    // Readiness: Can it handle traffic?
    // Returns 503 during graceful shutdown
    http.Handle("/readyz", app.ReadyzHandler())

    // Your routes
    app.Get("/", handler)

    app.Listen(":3000")
}
Endpoint   Normal   Shutting Down   Use
/livez     200      200             Container restart decision
/readyz    200      503             Load balancer routing

Custom Health Checks

import (
    "context"
    "database/sql"
    "encoding/json"
    "net/http"
    "time"
)

func healthHandler(db *sql.DB) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Check database
        ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
        defer cancel()

        if err := db.PingContext(ctx); err != nil {
            w.WriteHeader(http.StatusServiceUnavailable)
            json.NewEncoder(w).Encode(map[string]string{
                "status": "unhealthy",
                "error":  "database unreachable",
            })
            return
        }

        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(map[string]string{
            "status": "healthy",
        })
    })
}
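To expose it, mount the handler on a path of your choosing; /healthz here is a convention, not something Mizu requires:
// db is an *sql.DB opened at startup (driver import omitted).
http.Handle("/healthz", healthHandler(db))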

Graceful Shutdown

When you need to update your application or restart the server, you don’t want to abruptly kill connections—that would interrupt users mid-request, potentially losing data or causing errors. Graceful shutdown solves this by giving active requests time to complete before the server stops. Here’s what happens: when your server receives a shutdown signal (like when you press Ctrl+C or when Kubernetes scales down), it stops accepting new requests but waits for current requests to finish. This ensures users don’t experience sudden disconnections. Mizu handles graceful shutdown automatically. You can configure how long to wait for active requests:
app := mizu.New()
app.ShutdownTimeout = 30 * time.Second

Shutdown Process

  1. Server receives SIGINT or SIGTERM
  2. /readyz starts returning 503
  3. Server stops accepting new connections
  4. Active requests complete (up to timeout)
  5. Server exits cleanly
SIGTERM received
      │
      ▼
/readyz returns 503 ─────► Load balancer stops sending traffic
      │
      ▼
Stop accepting new connections
      │
      ▼
Wait for active requests ─────► Up to ShutdownTimeout
      │
      ▼
Exit with code 0
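If the process holds resources such as database pools, release them once shutdown has finished. The sketch below assumes app.Listen blocks until graceful shutdown completes; verify this against the Mizu API for your version:
package main

import (
    "log"
    "time"

    "github.com/go-mizu/mizu"
)

func main() {
    app := mizu.New()
    app.ShutdownTimeout = 30 * time.Second

    // ... register routes

    // Assumption: Listen returns only after in-flight requests have finished.
    app.Listen(":3000")

    // Clean up anything that must outlive the last request,
    // e.g. close database pools or flush buffered logs.
    log.Println("shutdown complete")
}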

Logging in Production

During development, you want human-readable logs that are easy to scan visually. But in production, your logs are consumed by machines—log aggregation systems that collect, search, and alert on your application’s output. These systems work best with structured JSON logs, where each log entry is a JSON object with consistent fields. JSON logs enable powerful queries like “show me all errors from the payment service in the last hour” or “find requests that took longer than 500ms.” This would be difficult with plain text logs. Configure structured JSON logging for production:
app := mizu.New()

if os.Getenv("ENV") == "production" {
    app.Use(mizu.Logger(mizu.LoggerOptions{
        Mode: mizu.Prod,  // JSON output
    }))
} else {
    app.Use(mizu.Logger(mizu.LoggerOptions{
        Mode:  mizu.Dev,  // Human-readable
        Color: true,
    }))
}

Log Aggregation

In production, send logs to a centralized system:
Platform         How to Collect
AWS CloudWatch   CloudWatch Logs agent
Google Cloud     Automatic from stdout
Datadog          Datadog agent
Elastic          Filebeat or Fluentd
Loki             Promtail

Security Checklist

Production environments are exposed to the internet, which means your application will face automated attacks, vulnerability scanners, and potentially malicious users. Security isn’t optional—it’s a core requirement for any public-facing application. The good news is that most security measures are straightforward to implement. Here’s a checklist of essential security practices to complete before deploying:
  • HTTPS only - Use TLS via reverse proxy or load balancer
  • Security headers - Use the helmet middleware
  • Rate limiting - Protect against abuse
  • Input validation - Validate all user input
  • Secrets management - Never commit secrets to git
  • Minimal permissions - Run as non-root user
  • Dependencies - Keep dependencies updated
  • Error messages - Don’t expose internal errors to users
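The last checklist item mostly means logging the detailed error on the server while returning a generic message to the client. A minimal sketch using the standard library; processOrder is a hypothetical helper:
func createOrder(w http.ResponseWriter, r *http.Request) {
    if err := processOrder(r); err != nil {
        // Full detail goes to the logs for operators.
        log.Printf("create order failed: %v", err)
        // The client sees only a generic message.
        http.Error(w, "internal server error", http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusCreated)
}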

Security Middleware

import "github.com/go-mizu/mizu/middlewares/helmet"

app := mizu.New()
app.Use(helmet.New())

Monitoring

Once your application is running in production, you need visibility into how it’s performing. Monitoring collects metrics like request counts, response times, and error rates. This data helps you answer questions like “Is my app getting slower?” or “Did that deployment cause more errors?” Without monitoring, you’re flying blind—you won’t know about problems until users complain. With proper monitoring, you can set up alerts to notify you before small issues become outages.

Prometheus Metrics

Prometheus is the most popular open-source monitoring system. It collects metrics by periodically “scraping” a /metrics endpoint that your app exposes. Here’s how to add it:
import "github.com/go-mizu/mizu/middlewares/prometheus"

app := mizu.New()
app.Use(prometheus.New())

// Expose metrics endpoint
http.Handle("/metrics", promhttp.Handler())

Key Metrics to Monitor

Metric                What to Watch
Request rate          Unusual spikes or drops
Error rate            Increase in 4xx/5xx
Latency p50/p95/p99   Slow requests
CPU/Memory            Resource usage
Active connections    Connection pool exhaustion

Next Steps