
Overview

The concurrency middleware limits the number of requests processed in parallel, preventing resource exhaustion and ensuring stable performance. Use it when you need to:
  • Limit CPU-intensive operations
  • Control database connection usage
  • Prevent memory exhaustion

Installation

import "github.com/go-mizu/mizu/middlewares/concurrency"

Quick Start

app := mizu.New()

// Max 50 concurrent requests
app.Use(concurrency.New(50))

Configuration

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| Limit | int | Required | Maximum number of concurrent requests |
| ErrorHandler | func(*mizu.Ctx) error | - | Custom error handler invoked when a request is rejected |

Examples

Basic Limit

app.Use(concurrency.New(50))

Custom Error

app.Use(concurrency.WithOptions(concurrency.Options{
    Limit: 50,
    ErrorHandler: func(c *mizu.Ctx) error {
        return c.JSON(503, map[string]string{
            "error": "Too many concurrent requests",
        })
    },
}))

Per-Route Limits

// Heavy endpoints
heavy := app.Group("/heavy")
heavy.Use(concurrency.New(5))

// Light endpoints
light := app.Group("/api")
light.Use(concurrency.New(100))

API Reference

Functions

// New creates a concurrency-limiting middleware with the given maximum number of concurrent requests
func New(limit int) mizu.Middleware

// WithOptions creates a concurrency-limiting middleware from the given Options
func WithOptions(opts Options) mizu.Middleware

Technical Details

The concurrency middleware uses Go’s buffered channels as semaphores to control concurrent request processing:

Implementation Mechanisms

  1. Semaphore Pattern: Uses a buffered channel with capacity equal to the maximum concurrent requests limit (see the sketch after this list)
  2. Non-blocking Acquisition: The default New() and WithOptions() use select with default to immediately reject requests when at capacity
  3. Blocking Acquisition: The Blocking() variant blocks requests until a slot becomes available
  4. Context-aware: The WithContext() variant respects context cancellation while waiting for a slot
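
A minimal sketch of these three acquisition strategies, using a plain buffered-channel semaphore rather than the middleware's actual internals (the type and method names here are illustrative only):

package main

import "context"

// sem is a buffered channel used as a counting semaphore; its capacity is the
// concurrency limit.
type sem chan struct{}

// tryAcquire is the non-blocking strategy used by New() and WithOptions():
// it claims a slot if one is free and reports failure immediately otherwise.
func (s sem) tryAcquire() bool {
    select {
    case s <- struct{}{}:
        return true
    default:
        return false // at capacity: the caller rejects the request
    }
}

// acquire is the blocking strategy of the Blocking() variant: it waits until
// a slot becomes available.
func (s sem) acquire() {
    s <- struct{}{}
}

// acquireContext is the context-aware strategy of the WithContext() variant:
// it waits for a slot but gives up if the request context is cancelled first.
func (s sem) acquireContext(ctx context.Context) error {
    select {
    case s <- struct{}{}:
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

// release returns a slot to the semaphore; deferring it guarantees release
// even if the handler panics.
func (s sem) release() {
    <-s
}

func main() {
    s := make(sem, 2) // a limit of 2 concurrent slots
    if s.tryAcquire() {
        defer s.release()
        // ... handle the request ...
    }
}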

Core Components

  • Options struct: Configures maximum concurrent requests and custom error handling
  • Semaphore channel: Controls access to the handler based on the configured limit
  • Deferred release: Ensures semaphore slots are released even if the handler panics
  • Retry-After header: Automatically set to “1” (one second) when a request is rejected (see the sketch after this list)
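
How these components fit together can be sketched as a standard net/http middleware; this illustrates the pattern only and is not mizu's implementation or types:

package main

import (
    "log"
    "net/http"
)

// limit wraps next with a buffered-channel semaphore of the given capacity.
func limit(max int, next http.Handler) http.Handler {
    if max < 0 {
        max = 0 // a zero-capacity semaphore rejects every request
    }
    sem := make(chan struct{}, max)
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        select {
        case sem <- struct{}{}:
            // Deferred release: the slot is returned even if next panics.
            defer func() { <-sem }()
            next.ServeHTTP(w, r)
        default:
            // At capacity: tell the client to retry in one second.
            w.Header().Set("Retry-After", "1")
            http.Error(w, "Server at capacity", http.StatusServiceUnavailable)
        }
    })
}

func main() {
    ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })
    log.Fatal(http.ListenAndServe(":8080", limit(50, ok)))
}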

Behavior Patterns

  • Zero or negative limit: Immediately rejects all requests with 503 Service Unavailable
  • Default error response: Returns 503 with “Server at capacity” message
  • Custom error handler: Allows full control over the rejection response format and status code (see the example below)
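
For example, building on the WithOptions example above, a custom handler can report rejections as 429 Too Many Requests instead of the default 503 (the status code and message here are just one possible choice):

app.Use(concurrency.WithOptions(concurrency.Options{
    Limit: 25,
    ErrorHandler: func(c *mizu.Ctx) error {
        // Reject with 429 instead of the default 503.
        return c.JSON(429, map[string]string{
            "error": "too many concurrent requests",
        })
    },
}))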

Best Practices

  • Set limits based on resource capacity
  • Monitor concurrent request counts (see the sketch after this list)
  • Use different limits for different workloads
  • Combine with timeout for hung requests
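
One way to monitor concurrent request counts is an in-flight gauge exported through the standard library's expvar package; the sketch below uses plain net/http rather than mizu-specific types:

package main

import (
    "expvar"
    "log"
    "net/http"
)

// inflight tracks how many requests are currently being handled; expvar
// publishes it automatically under the name "inflight_requests".
var inflight = expvar.NewInt("inflight_requests")

func trackInflight(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        inflight.Add(1)
        defer inflight.Add(-1)
        next.ServeHTTP(w, r)
    })
}

func main() {
    mux := http.NewServeMux()
    mux.Handle("/debug/vars", expvar.Handler()) // expose the gauge for scraping
    mux.Handle("/", trackInflight(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })))
    log.Fatal(http.ListenAndServe(":8080", mux))
}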

Testing

| Test Case | Description | Expected Behavior |
| --- | --- | --- |
| TestNew | Creates middleware with a limit of 2 and sends 5 concurrent requests | Max concurrent requests never exceeds the configured limit of 2 |
| TestNew_RejectsOverCapacity | Sends 3 concurrent requests with a limit of 1 | Some requests are rejected with 503 Service Unavailable |
| TestWithOptions_ErrorHandler | Uses a custom error handler with Max: 0 | Custom error handler is invoked and returns 429 with a JSON error message |
| TestBlocking | Uses the Blocking variant with a limit of 2 and 5 concurrent requests | All requests eventually succeed (200 OK); max concurrent never exceeds 2 |
| TestRetryAfterHeader | Sends a request with Max: 0 (immediate rejection) | Response includes a "Retry-After: 1" header |
| TestWithContext | Uses the WithContext variant with a limit of 2 and 5 concurrent requests | Max concurrent requests never exceeds the limit of 2 |
| TestWithContext_ContextCancellation | Tests context cancellation with the WithContext variant | First request blocks, then completes when the channel is closed |
| TestWithOptions_NegativeMax | Configures the middleware with Max: -1 | All requests are rejected with 503 Service Unavailable |
| TestWithOptions_ErrorHandlerAtCapacity | Custom error handler with Max: 1, sends 2 concurrent requests | Second request triggers the custom error handler and returns 429 |
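
The "max concurrent never exceeds the limit" assertions can be reproduced with atomic counters and httptest. The sketch below applies the technique to a plain net/http handler with an inline semaphore; it is not the middleware's actual test code:

package concurrency_test

import (
    "net/http"
    "net/http/httptest"
    "sync"
    "sync/atomic"
    "testing"
    "time"
)

func TestMaxConcurrency(t *testing.T) {
    const limit = 2
    sem := make(chan struct{}, limit)
    var current, peak int64

    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Non-blocking semaphore acquisition, mirroring the middleware's default behavior.
        select {
        case sem <- struct{}{}:
            defer func() { <-sem }()
        default:
            w.WriteHeader(http.StatusServiceUnavailable)
            return
        }
        // Record the peak number of requests inside the handler at once.
        n := atomic.AddInt64(&current, 1)
        for {
            p := atomic.LoadInt64(&peak)
            if n <= p || atomic.CompareAndSwapInt64(&peak, p, n) {
                break
            }
        }
        time.Sleep(50 * time.Millisecond) // hold the slot briefly
        atomic.AddInt64(&current, -1)
    }))
    defer srv.Close()

    // Fire 5 requests concurrently against a limit of 2.
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            resp, err := http.Get(srv.URL)
            if err == nil {
                resp.Body.Close()
            }
        }()
    }
    wg.Wait()

    if got := atomic.LoadInt64(&peak); got > limit {
        t.Fatalf("peak concurrency %d exceeded limit %d", got, limit)
    }
}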