503 - Service Unavailable
HTTP 503 Service Unavailable means the server is temporarily unable to handle the request due to overload or maintenance.
Last reviewed: February 12, 2026 | Editorial standard: source-backed technical guidance
What Does Service Unavailable Mean?
The server cannot handle requests right now, so they are rejected until the underlying overload, maintenance window, or dependency outage clears.
Common Causes
- The service is overloaded and rejects excess traffic.
- A maintenance window temporarily disables request handling.
- A critical dependency on the serving path is unavailable.
How to Fix Service Unavailable
1. Reduce load immediately with admission control, queueing, or traffic shedding while preserving core paths.
2. Scale constrained resources and recover failing dependencies before reopening full traffic.
3. If applicable, send `Retry-After` so clients back off predictably during temporary unavailability.
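As a client-side complement to the `Retry-After` advice above, here is a minimal sketch (the helper name `retry_delay` is hypothetical) that honors the header in either delta-seconds or HTTP-date form and falls back to capped exponential backoff when it is absent or malformed:

```python
import email.utils
import time

def retry_delay(retry_after, attempt, base=0.5, cap=30.0):
    """Seconds to wait before retrying after a 503.

    Prefers the server's Retry-After header (delta-seconds or HTTP-date);
    falls back to capped exponential backoff when the header is unusable.
    """
    if retry_after:
        try:
            return max(0.0, float(retry_after))  # delta-seconds form
        except ValueError:
            pass
        try:  # HTTP-date form, e.g. "Wed, 21 Oct 2026 07:28:00 GMT"
            target = email.utils.parsedate_to_datetime(retry_after)
            return max(0.0, target.timestamp() - time.time())
        except (TypeError, ValueError):
            pass
    return min(cap, base * 2 ** attempt)  # capped exponential fallback
```

A client loop would sleep for `retry_delay(...)` seconds between attempts, giving the server the breathing room the header asked for.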
Step-by-Step Diagnosis for Service Unavailable
1. Correlate 503 spikes with saturation metrics (CPU, memory, queue depth, thread pools, connection limits).
2. Identify whether the outage source is overload, planned maintenance gating, or a hard dependency outage.
3. Inspect autoscaling and load-balancing behavior for lag or misconfiguration under traffic bursts.
4. Retest after recovery actions and confirm traffic can ramp without re-entering overload conditions.
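The first diagnosis step (correlating 503 spikes with saturation metrics) can be approximated offline. Given per-minute 503 counts and a saturation metric sampled over the same window, a simple Pearson correlation flags whether the error spike tracks resource pressure; the series below are illustrative, not from a real incident:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical samples: per-minute 503 counts vs. request-queue depth.
errors_503  = [0, 1, 0, 4, 12, 30, 28, 3, 0]
queue_depth = [5, 8, 6, 40, 85, 100, 98, 20, 4]
```

A correlation close to 1.0 points at overload; a spike with no matching saturation signal suggests maintenance gating or a dependency outage instead.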
Capacity Saturation and Backpressure Analysis
- Inspect resource ceilings and queue pressure (example: request queue depth hits max and the service starts returning 503).
- Validate autoscaler reaction time and policy thresholds (example: scale-out triggers too late for the burst profile).
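The queue-pressure scenario above can be sketched as a bounded admission gate: once queue depth hits its ceiling, new work is shed with a 503-style status instead of queueing unboundedly (class name and limits are illustrative):

```python
import queue

class AdmissionGate:
    """Shed load with a 503-style status once the request queue is full."""

    def __init__(self, max_depth=100):
        self.q = queue.Queue(maxsize=max_depth)

    def admit(self, request):
        try:
            self.q.put_nowait(request)  # enqueue for a worker to process
            return 202                  # accepted for processing
        except queue.Full:
            return 503                  # at capacity: reject instead of queueing
```

Rejecting early keeps latency bounded for the requests that are admitted, which is the point of backpressure.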
Maintenance and Dependency Availability Checks
- Audit maintenance flags/circuit breakers (example: service left in maintenance mode after deployment).
- Trace critical dependency readiness (example: cache cluster restart causes the app to reject requests with a temporary 503).
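A minimal sketch of the maintenance-flag pattern above, with a hypothetical `handle` function that gates requests behind the flag and advises clients when to retry:

```python
def handle(request, maintenance=False):
    """Gate requests behind a maintenance flag; advise clients when to retry."""
    if maintenance:
        # Reject with 503 and explicit backoff advice while in maintenance.
        return 503, {"Retry-After": "300"}, {"error": "Service Unavailable"}
    return 200, {}, {"result": "ok"}
```

Auditing is then a matter of confirming the flag is cleared after the deployment that set it, since a stale flag keeps this branch returning 503 indefinitely.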
Implementation Examples
cURL:

```shell
curl -i -X GET https://api.example.com/v1/checkout
# Response:
# HTTP/1.1 503 Service Unavailable
# {"error":"Service Unavailable","code":"503"}
```

JavaScript (fetch):

```javascript
const response = await fetch('https://api.example.com/v1/checkout', {
  method: 'GET',
  headers: { 'Accept': 'application/json' }
});
if (response.status === 503) {
  console.error('Handle 503 Service Unavailable');
}
```

Python (requests):

```python
import requests

response = requests.get(
    'https://api.example.com/v1/checkout',
    headers={'Accept': 'application/json'}
)
if response.status_code == 503:
    print('Handle 503 Service Unavailable')
```

How to Verify the Fix
- Repeat affected workflows and confirm 503 clears under nominal and burst traffic profiles.
- Validate `Retry-After` behavior and client backoff compliance during controlled degradation tests.
- Confirm saturation metrics stay below alert thresholds after capacity and dependency fixes.
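The `Retry-After` check above can be automated in a degradation test. This hypothetical validator accepts only the delta-seconds form and bounds the advice to a sane maximum:

```python
def retry_after_is_sane(status, headers, max_advice=600):
    """During degradation tests, confirm 503s carry usable backoff advice."""
    if status != 503:
        return True  # nothing to verify on non-503 responses
    value = headers.get("Retry-After", "")
    # Require a positive integer number of seconds within the allowed bound.
    return value.isdigit() and 0 < int(value) <= max_advice
```

Running this against every 503 observed during a controlled degradation run catches both missing headers and absurd advice (for example, hour-long backoffs) before clients encounter them.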
How to Prevent Recurrence
- Improve resilience with capacity planning, graceful degradation, and circuit breakers.
- Gate high-risk deployments with canaries and automatic rollback triggers.
- Continuously test dependency failure paths and production recovery procedures.
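The circuit-breaker idea above can be sketched as a small state machine: open after a run of consecutive failures, then allow a half-open probe once a cooldown elapses (thresholds and the class shape are illustrative):

```python
import time

class CircuitBreaker:
    """Open after consecutive failures; allow a probe once cooldown elapses."""

    def __init__(self, threshold=5, cooldown=30.0, clock=time.monotonic):
        self.threshold, self.cooldown, self.clock = threshold, cooldown, clock
        self.failures, self.opened_at = 0, None

    def allow(self):
        if self.opened_at is None:
            return True  # closed: traffic flows normally
        # Open: permit a half-open probe only after the cooldown elapses.
        return self.clock() - self.opened_at >= self.cooldown

    def record(self, success):
        if success:
            self.failures, self.opened_at = 0, None  # reset to closed
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip open
```

Wrapping calls to a flaky dependency this way converts cascading timeouts into fast 503s, which is cheaper for both sides.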
Pro Tip
- Define service-level admission budgets per endpoint tier so non-critical traffic is shed first and core APIs stay available under pressure.
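The tip above can be sketched with per-tier concurrency budgets (tier names and numbers are illustrative): when a tier's budget is spent, its requests are shed with 503 while other tiers keep their headroom:

```python
class TierAdmission:
    """Per-tier concurrency budgets so non-critical traffic is shed first."""

    def __init__(self, budgets):
        self.remaining = dict(budgets)  # e.g. {"core": 80, "bulk": 20}

    def admit(self, tier):
        if self.remaining.get(tier, 0) > 0:
            self.remaining[tier] -= 1
            return 200
        return 503  # this tier's budget is exhausted: shed it

    def release(self, tier):
        """Return a slot when a request finishes."""
        self.remaining[tier] += 1
```

Because each tier draws from its own budget, a flood of bulk traffic exhausts only the bulk allowance and cannot starve the core tier.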
Decision Support
Compare Guide
429 Too Many Requests vs 503 Service Unavailable
Use 429 for caller-specific throttling and 503 for service-wide outages, so retry behavior, escalation paths, and incident ownership stay correct.
Compare Guide
500 Internal Server Error vs 502 Bad Gateway: Root Cause
Debug 500 vs 502 faster: use 500 for origin failures and 502 for invalid upstream responses at gateways, then route incidents to the right team.
Playbook
Availability and Dependency Playbook (500 / 503 / ServiceUnavailable)
Use this playbook to separate origin-side 500 failures from temporary 503 dependency or capacity outages, then apply safe retry and escalation paths.
Playbook
API Timeout Playbook (502 / 504 / DEADLINE_EXCEEDED)
Use this playbook to separate invalid upstream responses from upstream wait expiration and deadline exhaustion, and apply timeout budgets, safe retries, and circuit-breaker controls safely.
Provider Context
This guidance is specific to HTTP services. Always validate implementation details against official provider documentation before deploying to production.