Traditional server deployment means running your application directly on a Linux server (either a VPS you rent from providers like DigitalOcean or Linode, or a physical server you own). Unlike containers or serverless, you have full control over the operating system and everything installed on it.

This approach requires more setup and maintenance than managed platforms, but it’s often cheaper for consistent workloads and gives you complete control. It’s also a great way to learn how web applications actually run in production.

This guide covers deploying to Linux servers using systemd (the standard Linux service manager) and a reverse proxy (Nginx or Caddy) for handling HTTPS and load balancing.
Running your application as the root user is a security risk: if your app is compromised, the attacker has full system access. Instead, create a dedicated service user with minimal permissions. This user can only access what your application needs, limiting potential damage.

Create a dedicated user for running the application:
```bash
# Create service user (no login, no home directory)
sudo useradd --system --no-create-home --shell /bin/false myapp

# Create directories
sudo mkdir -p /var/lib/myapp
sudo mkdir -p /var/log/myapp
sudo chown myapp:myapp /var/lib/myapp /var/log/myapp
```
Systemd is the standard service manager on modern Linux distributions. It starts your application when the server boots, restarts it if it crashes, and provides tools for monitoring and managing the process.

You define your service in a unit file that specifies what to run, which user to run as, and how to handle restarts and failures.
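A minimal unit file for this setup might look like the following, saved as `/etc/systemd/system/myapp.service`. This is a sketch: the binary path `/usr/local/bin/myapp`, the environment file location `/etc/myapp/myapp.env`, and the `myapp` user are assumptions carried over from the other examples in this guide.

```ini
[Unit]
Description=myapp web service
After=network.target

[Service]
Type=simple
User=myapp
Group=myapp
EnvironmentFile=/etc/myapp/myapp.env
WorkingDirectory=/var/lib/myapp
ExecStart=/usr/local/bin/myapp
Restart=on-failure
RestartSec=5

# Basic sandboxing: no privilege escalation, read-only filesystem
# except the paths the app actually needs
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/myapp /var/log/myapp

[Install]
WantedBy=multi-user.target
```

After creating or editing the file, run `sudo systemctl daemon-reload`, then `sudo systemctl enable --now myapp` to start it and have it start on boot.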
```bash
# Database
DATABASE_URL=postgres://user:pass@localhost:5432/myapp

# Secrets (use a secrets manager in production)
API_KEY=your-api-key

# Application
LOG_LEVEL=info
```
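Because this file contains secrets, it should not be world-readable. Assuming it lives at `/etc/myapp/myapp.env` (a placeholder path for this guide), you might lock it down like so:

```bash
# Owned by root, readable by the service user's group only
sudo chown root:myapp /etc/myapp/myapp.env
sudo chmod 640 /etc/myapp/myapp.env
```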
A reverse proxy sits in front of your application and handles incoming requests. It provides several benefits:
- HTTPS termination: Manages SSL certificates so your app doesn’t have to
- Load balancing: Distributes traffic across multiple instances of your app
- Security: Hides your app behind a hardened web server, adds security headers
- Static files: Serves static assets more efficiently than your app
Caddy is the easiest reverse proxy to configure. Its killer feature is automatic HTTPS—it obtains and renews SSL certificates from Let’s Encrypt automatically with zero configuration.
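As an illustration, a complete Caddyfile for this setup can be as short as the following (the domain and the upstream port 3000 are placeholders matching the rest of this guide):

```
myapp.example.com {
    reverse_proxy localhost:3000
}
```

With this file in place at `/etc/caddy/Caddyfile`, Caddy obtains a certificate for `myapp.example.com` on its own and redirects HTTP to HTTPS; reload with `sudo systemctl reload caddy` after changes.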
Nginx is the most widely-used web server and reverse proxy. It’s battle-tested, extremely performant, and has extensive documentation. Unlike Caddy, you need to configure HTTPS separately (usually with Certbot), but this gives you more control.
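A basic server block proxying to the app might look like this sketch (again assuming the app listens on port 3000), placed in `/etc/nginx/sites-available/myapp` and symlinked into `sites-enabled`. It listens on plain HTTP for now; Certbot rewrites it to add HTTPS in the next step.

```nginx
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://localhost:3000;
        # Forward the original request details to the app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Check the configuration with `sudo nginx -t` before reloading with `sudo systemctl reload nginx`.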
```bash
# Install certbot
sudo apt install certbot python3-certbot-nginx

# Get certificate
sudo certbot --nginx -d myapp.example.com

# Auto-renewal is set up automatically
sudo systemctl status certbot.timer
```
A firewall controls which network traffic can reach your server. Without a firewall, all ports are exposed to the internet—including ones you might accidentally leave open. A properly configured firewall allows only the traffic you explicitly need (typically SSH, HTTP, and HTTPS).
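On Ubuntu and Debian, `ufw` is a simple front end for this. A minimal policy matching this guide (SSH plus HTTP/HTTPS, everything else denied) might be:

```bash
# Default policy: drop inbound, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH first so you don't lock yourself out
sudo ufw allow OpenSSH

# Allow web traffic for the reverse proxy
sudo ufw allow http
sudo ufw allow https

sudo ufw enable
```

Note that the app port itself (3000 in this guide) stays closed; only the reverse proxy is reachable from outside.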
```bash
# Start new version on a different port
PORT=3001 /usr/local/bin/myapp-new &

# Test new version
curl localhost:3001/readyz

# Update Caddy/Nginx to point to the new port, then reload
sudo systemctl reload caddy

# Stop old version
sudo systemctl stop myapp
```
```bash
# Set up email alerts
sudo apt install mailutils

# Create alert script
cat > /usr/local/bin/myapp-alert.sh << 'EOF'
#!/bin/bash
echo "myapp service failed on $(hostname)" | mail -s "Service Alert" admin@example.com
EOF
chmod +x /usr/local/bin/myapp-alert.sh

# Add to the service file:
# OnFailure=myapp-alert@%n.service
```
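`OnFailure=` expects a unit, not a script, so the script above needs a small wrapper. A plausible template unit at `/etc/systemd/system/myapp-alert@.service` (a sketch; the name matches the `OnFailure=` line above, and `%i` receives the name of the unit that failed) would be:

```ini
[Unit]
Description=Failure alert for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myapp-alert.sh
```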
```
check process myapp with pidfile /run/myapp.pid
  start program = "/usr/bin/systemctl start myapp"
  stop program = "/usr/bin/systemctl stop myapp"
  if failed host 127.0.0.1 port 3000 protocol http request "/readyz" for 3 cycles then restart
  if 5 restarts within 5 cycles then alert
```