Overview

The metrics middleware collects custom application metrics including request counts, latencies, and error rates. Use it when you need:
  • Application monitoring
  • Custom metric collection
  • Performance tracking

Installation

import "github.com/go-mizu/mizu/middlewares/metrics"

Quick Start

app := mizu.New()

m := metrics.New()
app.Use(m.Middleware())

// Expose metrics endpoint
app.Get("/metrics", m.Handler())

Configuration

Options

Option     Type       Default   Description
Namespace  string     "http"    Metric namespace
Buckets    []float64  Standard  Latency buckets

Examples

Basic Metrics

m := metrics.New()
app.Use(m.Middleware())
app.Get("/metrics", m.Handler())

Custom Namespace

m := metrics.New(metrics.Options{
    Namespace: "myapp",
})

Custom Buckets

m := metrics.New(metrics.Options{
    Buckets: []float64{0.01, 0.05, 0.1, 0.5, 1, 5},
})

Add Custom Metrics

m := metrics.New()

// Counter
m.Counter("jobs_processed", "Total jobs processed")
m.Inc("jobs_processed")

// Gauge
m.Gauge("active_connections", "Active connections")
m.Set("active_connections", 42)

// Histogram
m.Histogram("job_duration", "Job duration", []float64{0.1, 0.5, 1, 5})
m.Observe("job_duration", 1.5)

Default Metrics

Metric                         Type       Description
http_requests_total            Counter    Total requests
http_request_duration_seconds  Histogram  Request latency
http_requests_in_flight        Gauge      Active requests

API Reference

Functions

// New creates a metrics collector.
func New(opts ...Options) *Metrics

// Middleware returns the metrics middleware.
func (m *Metrics) Middleware() mizu.Middleware

// Handler returns an HTTP handler that serves the collected metrics.
func (m *Metrics) Handler() mizu.Handler

// Custom metric registration and updates.
func (m *Metrics) Counter(name, help string)
func (m *Metrics) Gauge(name, help string)
func (m *Metrics) Histogram(name, help string, buckets []float64)
func (m *Metrics) Inc(name string)
func (m *Metrics) Set(name string, value float64)
func (m *Metrics) Observe(name string, value float64)

Technical Details

Core Components

The metrics middleware consists of the following key components:

Metrics Structure

  • RequestCount: Total number of HTTP requests processed (atomic int64)
  • ErrorCount: Total number of errors (4xx and 5xx status codes, atomic int64)
  • TotalDuration: Cumulative duration of all requests in nanoseconds (atomic int64)
  • ActiveRequests: Number of currently in-flight requests (atomic int64)
  • statusCodes: Map tracking counts for each HTTP status code
  • pathCounts: Map tracking request counts per URL path

Thread Safety

All counters use atomic operations (atomic.AddInt64, atomic.LoadInt64, atomic.StoreInt64) to ensure thread-safe concurrent access without explicit locking for the main counters. Maps (statusCodes and pathCounts) are protected by a sync.RWMutex to safely handle concurrent reads and writes.

Status Capture Mechanism

The middleware uses a custom statusCapture wrapper that implements http.ResponseWriter to intercept the status code before writing the response. This allows tracking of status codes even when they’re set by downstream handlers.

Metrics Calculation

  • Average Duration: Calculated as TotalDuration / RequestCount and converted to milliseconds
  • Error Detection: Requests are counted as errors when err != nil or statusCode >= 400

Output Formats

The middleware supports two output formats:
  1. JSON Format (Handler()): Returns statistics as JSON with fields:
    • request_count, error_count, active_requests
    • average_duration_ms, status_codes, path_counts
  2. Prometheus Format (Prometheus()): Exports metrics in Prometheus text format:
    • http_requests_total (counter)
    • http_errors_total (counter)
    • http_active_requests (gauge)

Implementation Notes

  • Uses a custom itoa() function for integer-to-string conversion to avoid allocations
  • Metrics can be reset with the Reset() method, which clears all counters and maps
  • The middleware does not modify the request or response; it only observes them

Best Practices

  • Use meaningful metric names
  • Add labels for dimensions
  • Set appropriate histogram buckets
  • Monitor metric cardinality

Testing

The metrics middleware includes comprehensive test coverage for all functionality:
  • TestNew: basic initialization and request counting. Creates the middleware and counts a single request correctly.
  • TestMetrics_StatusCodes: HTTP status code tracking. Tracks 200 and 500 status codes separately and counts 500 as an error.
  • TestMetrics_PathCounts: URL path request counting. Counts requests per path (3 to /api/users, 1 to /api/posts).
  • TestMetrics_Handler: JSON metrics endpoint. Returns metrics as JSON with the correct Content-Type header.
  • TestMetrics_Prometheus: Prometheus format export. Returns Prometheus-formatted metrics including http_requests_total.
  • TestMetrics_Reset: metrics reset functionality. Resets all counters to zero after processing requests.
  • TestMetrics_Concurrent: thread-safe concurrent access. Correctly counts 100 concurrent requests without race conditions.
  • TestMetrics_AverageDuration: duration measurement. Calculates a positive average duration for requests.
  • TestMetrics_JSON: JSON serialization. Marshals stats to JSON with a request_count field.