Cloud platforms provide the infrastructure to run your applications without managing physical servers. Instead of buying and maintaining hardware, you rent computing resources on-demand and pay only for what you use. Each cloud provider offers various services with different trade-offs between simplicity and control. Managed container services (like AWS App Runner or Google Cloud Run) handle most infrastructure concerns automatically; you just provide a container image. Compute services (like EC2 or Compute Engine) give you full control but require more configuration. This guide covers deploying Mizu applications to the major cloud platforms, from fully managed container services down to plain virtual machines.

AWS

Amazon Web Services (AWS) is the largest cloud provider, offering everything from simple container hosting to complex orchestration. For most Mizu applications, you’ll choose between:
  • App Runner: Simplest option—give it a container, it handles everything else
  • ECS Fargate: More control over networking and scaling, still serverless
  • EC2: Full control over the server, you manage everything

AWS App Runner

App Runner is the easiest way to deploy containers on AWS. You point it at your container image, and it handles load balancing, auto-scaling, and HTTPS automatically. There’s no infrastructure to manage, which makes it perfect for getting started quickly.
1. Push image to ECR:
# Create repository
aws ecr create-repository --repository-name myapp

# Login to ECR
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin \
    123456789.dkr.ecr.us-east-1.amazonaws.com

# Build and push
docker build -t myapp .
docker tag myapp:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
2. Create App Runner service:
aws apprunner create-service \
    --service-name myapp \
    --source-configuration '{
        "ImageRepository": {
            "ImageIdentifier": "123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
            "ImageRepositoryType": "ECR",
            "ImageConfiguration": {
                "Port": "3000",
                "RuntimeEnvironmentVariables": {
                    "ENV": "production"
                }
            }
        },
        "AutoDeploymentsEnabled": true
    }' \
    --instance-configuration '{
        "Cpu": "1 vCPU",
        "Memory": "2 GB"
    }' \
    --health-check-configuration '{
        "Protocol": "HTTP",
        "Path": "/readyz",
        "Interval": 10,
        "Timeout": 5,
        "HealthyThreshold": 1,
        "UnhealthyThreshold": 5
    }'
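Once the service is created, you can watch the deployment and find the generated public URL from the CLI (the service ARN comes from the create-service output):
# Check deployment status and the default URL
aws apprunner list-services
aws apprunner describe-service --service-arn <service-arn>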

Amazon ECS with Fargate

ECS (Elastic Container Service) with Fargate is a step up from App Runner. You get more control over networking, secrets, and scaling policies, but you still don’t manage servers; Fargate handles the underlying infrastructure. ECS is organized around task definitions (what to run) and services (how to run it). A task definition specifies your container, resources, and environment. A service maintains the desired number of tasks and integrates with load balancers.
task-definition.json:
{
  "family": "myapp",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {"name": "ENV", "value": "production"}
      ],
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789:secret:myapp/db-url"
        }
      ],
      "healthCheck": {
        "command": ["CMD-SHELL", "wget -q --spider http://localhost:3000/readyz || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 10
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/myapp",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
Deploy:
# Register task definition
aws ecs register-task-definition --cli-input-json file://task-definition.json

# Create service
aws ecs create-service \
    --cluster myapp-cluster \
    --service-name myapp \
    --task-definition myapp \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration '{
        "awsvpcConfiguration": {
            "subnets": ["subnet-xxx", "subnet-yyy"],
            "securityGroups": ["sg-xxx"],
            "assignPublicIp": "ENABLED"
        }
    }' \
    --load-balancers '[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...",
            "containerName": "myapp",
            "containerPort": 3000
        }
    ]'
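To roll out a new image later, push to ECR under the same tag and force a new deployment; ECS replaces tasks gradually while the target group health checks gate traffic:
# Redeploy the service with the latest image
aws ecs update-service \
    --cluster myapp-cluster \
    --service myapp \
    --force-new-deployment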

Amazon EC2

Direct deployment to EC2 instances gives you full control over the server in exchange for managing it yourself. User data script for Amazon Linux 2023:
#!/bin/bash
yum update -y

# Download and install binary
aws s3 cp s3://myapp-releases/myapp /usr/local/bin/myapp
chmod +x /usr/local/bin/myapp

# Create service user
useradd -r -s /bin/false myapp

# Create systemd service
cat > /etc/systemd/system/myapp.service << 'EOF'
[Unit]
Description=My Mizu Application
After=network.target

[Service]
Type=simple
User=myapp
ExecStart=/usr/local/bin/myapp
Restart=always
RestartSec=5
Environment=ENV=production
Environment=PORT=3000

[Install]
WantedBy=multi-user.target
EOF

# Start service
systemctl daemon-reload
systemctl enable myapp
systemctl start myapp
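To launch an instance with this script, pass it as user data. The AMI, subnet, and security group IDs below are placeholders, and the instance profile is an assumed one that must grant read access to the myapp-releases bucket for the aws s3 cp step to work:
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t3.small \
    --iam-instance-profile Name=myapp-instance-profile \
    --security-group-ids sg-xxx \
    --subnet-id subnet-xxx \
    --user-data file://user-data.sh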

Google Cloud

Google Cloud Platform (GCP) offers similar services to AWS with a developer-friendly experience. Cloud Run is GCP’s standout service for containers—it’s serverless, scales to zero (you only pay when handling requests), and has a generous free tier.

Cloud Run

Cloud Run is one of the easiest ways to deploy containers anywhere. It automatically scales based on traffic, including scaling to zero when idle. You just deploy a container, and Cloud Run handles HTTPS, load balancing, and scaling. Cloud Run is built on Knative, an open-source Kubernetes-based platform, but you don’t need to know anything about Kubernetes to use it.
Deploy directly from source:
# Deploy from source (Cloud Build + Cloud Run)
gcloud run deploy myapp \
    --source . \
    --region us-central1 \
    --platform managed \
    --allow-unauthenticated \
    --port 3000 \
    --memory 512Mi \
    --cpu 1 \
    --min-instances 0 \
    --max-instances 10 \
    --set-env-vars "ENV=production"
Deploy from container registry:
# Build and push to GCR
gcloud builds submit --tag gcr.io/my-project/myapp

# Deploy
gcloud run deploy myapp \
    --image gcr.io/my-project/myapp \
    --region us-central1 \
    --platform managed \
    --allow-unauthenticated \
    --port 3000
service.yaml for Cloud Run:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      containers:
        - image: gcr.io/my-project/myapp:latest
          ports:
            - containerPort: 3000
          env:
            - name: ENV
              value: production
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
          startupProbe:
            httpGet:
              path: /readyz
              port: 3000
            initialDelaySeconds: 0
            periodSeconds: 2
            failureThreshold: 30
          livenessProbe:
            httpGet:
              path: /livez
              port: 3000
            periodSeconds: 10
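Apply the manifest with gcloud, which creates the service or updates it in place:
gcloud run services replace service.yaml --region us-central1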

Google Compute Engine

Direct deployment to Compute Engine VMs works much like EC2. Startup script:
#!/bin/bash
apt-get update

# Download binary (gsutil comes preinstalled on Compute Engine public images)
gsutil cp gs://myapp-releases/myapp /usr/local/bin/myapp
chmod +x /usr/local/bin/myapp

# Create service
cat > /etc/systemd/system/myapp.service << 'EOF'
[Unit]
Description=My Mizu Application
After=network.target

[Service]
Type=simple
User=nobody
ExecStart=/usr/local/bin/myapp
Restart=always
Environment=ENV=production

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable myapp
systemctl start myapp
Create instance:
gcloud compute instances create myapp-vm \
    --zone us-central1-a \
    --machine-type e2-small \
    --image-family debian-12 \
    --image-project debian-cloud \
    --metadata-from-file startup-script=startup.sh \
    --tags http-server \
    --scopes storage-ro
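The http-server tag only opens port 80 through the default firewall rule, while the app listens on 3000, so add a rule for it (the rule name is arbitrary):
gcloud compute firewall-rules create allow-myapp \
    --allow tcp:3000 \
    --target-tags http-server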

DigitalOcean

DigitalOcean is known for simplicity and developer-friendly pricing. It’s often the first cloud provider developers use because it’s less overwhelming than AWS or GCP.

App Platform

App Platform is DigitalOcean’s fully managed platform for deploying apps from source code or container images. It integrates directly with GitHub for automatic deployments: push to your main branch, and your app updates automatically. App Platform is simpler than the AWS and GCP equivalents, with transparent pricing. You define your app in a YAML spec:
app.yaml:
name: myapp
region: nyc
services:
  - name: api
    github:
      repo: myorg/myapp
      branch: main
      deploy_on_push: true
    dockerfile_path: Dockerfile
    http_port: 3000
    instance_size_slug: basic-xxs
    instance_count: 2
    health_check:
      http_path: /readyz
      initial_delay_seconds: 10
      period_seconds: 10
      timeout_seconds: 5
      success_threshold: 1
      failure_threshold: 3
    envs:
      - key: ENV
        value: production
      - key: DATABASE_URL
        scope: RUN_TIME
        type: SECRET
Deploy:
doctl apps create --spec app.yaml
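You can then track the rollout from the CLI:
# List apps and their IDs
doctl apps list

# Stream deployment logs for an app
doctl apps logs <app-id> --follow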

Droplets

Direct deployment to DigitalOcean VMs.
# Create droplet
doctl compute droplet create myapp \
    --region nyc1 \
    --size s-1vcpu-1gb \
    --image ubuntu-22-04-x64 \
    --ssh-keys $(doctl compute ssh-key list --format ID --no-header)

# Get IP
doctl compute droplet get myapp --format PublicIPv4 --no-header

# Deploy
scp myapp root@<ip>:/usr/local/bin/
ssh root@<ip> 'systemctl restart myapp'
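The restart step assumes a systemd unit is already installed on the droplet. On the first deploy, set one up the same way as in the EC2 and Compute Engine scripts above:
# One-time setup: copy over a unit file like the earlier examples
scp myapp.service root@<ip>:/etc/systemd/system/
ssh root@<ip> 'systemctl daemon-reload && systemctl enable --now myapp'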

Fly.io

Fly.io is a modern platform designed for edge deployment: running your app in data centers around the world, close to your users. This reduces latency because requests don’t have to travel across the globe. Unlike traditional clouds where you pick one region, Fly.io can run your app in multiple regions simultaneously. It also has excellent support for WebSockets and persistent connections.
fly.toml:
app = "myapp"
primary_region = "iad"

[build]
  dockerfile = "Dockerfile"

[env]
  ENV = "production"
  PORT = "8080"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 1

  [http_service.concurrency]
    type = "connections"
    hard_limit = 100
    soft_limit = 80

[[http_service.checks]]
  grace_period = "10s"
  interval = "30s"
  method = "GET"
  path = "/readyz"
  protocol = "http"
  timeout = "5s"

[[vm]]
  cpu_kind = "shared"
  cpus = 1
  memory_mb = 256
Deploy:
# Launch new app
fly launch

# Deploy updates
fly deploy

# View logs
fly logs

# Scale
fly scale count 3

# Set secrets
fly secrets set DATABASE_URL=postgres://...

Railway

Railway focuses on developer experience above all else. It’s designed to feel like deploying locally: connect your GitHub repo, and Railway figures out how to build and run your app automatically. Railway excels at provisioning databases alongside your app. Need PostgreSQL or Redis? Add it in one click, and the connection strings are automatically injected as environment variables.
railway.json:
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "DOCKERFILE",
    "dockerfilePath": "Dockerfile"
  },
  "deploy": {
    "numReplicas": 1,
    "healthcheckPath": "/readyz",
    "healthcheckTimeout": 100,
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 10
  }
}
Deploy:
# Install CLI
npm install -g @railway/cli

# Login and deploy
railway login
railway init
railway up
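To provision a database next to the app, railway add prompts for the type and wires its connection string (for example DATABASE_URL) into your environment; railway variables shows what was injected:
# Add a database (PostgreSQL, Redis, ...) to the current project
railway add

# Inspect the injected environment variables
railway variables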

Render

Render is positioned as a simpler alternative to Heroku. It offers automatic deploys from Git, free SSL, and a generous free tier for testing. Like Railway, it aims to handle infrastructure complexity so you can focus on code. Render’s render.yaml file (called a Blueprint) lets you define your entire infrastructure (web services, databases, and background workers) in a single file that stays in your repo:
render.yaml:
services:
  - type: web
    name: myapp
    env: docker
    dockerfilePath: ./Dockerfile
    dockerContext: .
    region: oregon
    plan: starter
    healthCheckPath: /readyz
    envVars:
      - key: ENV
        value: production
      - key: DATABASE_URL
        fromDatabase:
          name: myapp-db
          property: connectionString

databases:
  - name: myapp-db
    databaseName: myapp
    plan: starter

Comparison

| Platform | Pros | Cons | Best For |
|---|---|---|---|
| AWS App Runner | Simple, auto-scaling | Limited config | Small apps |
| AWS ECS Fargate | Full control, scales well | Complex setup | Production workloads |
| Cloud Run | Generous free tier, fast deploys | Cold starts | Variable traffic |
| DigitalOcean App | Simple pricing, GitHub integration | Limited regions | Side projects |
| Fly.io | Edge deployment, fast | Learning curve | Global apps |
| Railway | Developer friendly | Limited scale | Prototypes |
| Render | Free tier, simple | Cold starts on free | Learning |

Best Practices

Use Health Checks

All platforms support health checks. Always configure them:
http.Handle("/readyz", app.ReadyzHandler())
http.Handle("/livez", app.LivezHandler())

Environment-Based Configuration

func main() {
    cfg := LoadConfig()

    app := mizu.New()

    if cfg.Env == "production" {
        app.Use(mizu.Logger(mizu.LoggerOptions{Mode: mizu.Prod}))
    }

    app.Listen(":" + cfg.Port)
}
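LoadConfig is left undefined above. A minimal sketch that reads ENV and PORT from the environment with sensible defaults (the Config type here is an assumption, not part of Mizu):
type Config struct {
    Env  string
    Port string
}

func LoadConfig() Config {
    cfg := Config{Env: os.Getenv("ENV"), Port: os.Getenv("PORT")}
    if cfg.Env == "" {
        cfg.Env = "development" // default outside the cloud
    }
    if cfg.Port == "" {
        cfg.Port = "3000" // match the port exposed in your platform config
    }
    return cfg
}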

Graceful Shutdown

Cloud platforms send SIGTERM and then wait out a grace period before force-killing containers; the default is about 30 seconds on ECS and 10 seconds on Cloud Run. Mizu handles the signal automatically, but configure a shutdown timeout that stays below your platform’s limit:
app := mizu.New()
app.ShutdownTimeout = 25 * time.Second // Less than platform timeout
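For reference, the SIGTERM handling Mizu performs for you looks roughly like this in plain net/http; a sketch using signal.NotifyContext, useful if you run auxiliary servers outside Mizu:
srv := &http.Server{Addr: ":3000"}
go srv.ListenAndServe()

// Wait for the platform's SIGTERM (or Ctrl-C locally).
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
defer stop()
<-ctx.Done()

// Give in-flight requests up to 25s, below the platform's grace period.
shutdownCtx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
defer cancel()
srv.Shutdown(shutdownCtx)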

Next Steps