202 - Accepted
HTTP 202 Accepted is a success status code indicating that a request has been received but processing is not yet complete. It is the gold standard for asynchronous operations, allowing the server to decouple request acknowledgment from task execution.
Last reviewed: March 12, 2026 | Editorial standard: source-backed technical guidance
What Does 202 Accepted Mean?
A 202 response embodies eventual consistency. It is the primary architectural tool for preventing 504 Gateway Timeout errors: instead of holding a connection open until it times out, the server accepts the task and releases the client. It essentially says, "I understand the request and have queued it," but it is non-committal: the server cannot guarantee the task's final success or failure at the moment of acknowledgment.
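As a sketch of this decoupling, a handler can acknowledge the job and hand back a tracking URL immediately, before any work happens. The `acceptTask` helper and the `/v1/status/...` path shape below are illustrative, not part of any specific framework:

```javascript
// Sketch: build a 202 acknowledgment that decouples acceptance from execution.
// The job is only queued here; the actual processing happens elsewhere.
function acceptTask(jobId) {
  return {
    status: 202,                                   // "received, not yet acted upon"
    headers: { Location: `/v1/status/${jobId}` },  // where the client polls
    body: { jobId, state: 'queued' },              // non-committal: no success/failure yet
  };
}

const ack = acceptTask('99');
console.log(ack.status, ack.headers.Location); // → 202 /v1/status/99
```

The key design point is that the response carries everything the client needs to follow up later, so the server can drop the connection immediately.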
Common Causes
- Long-Running Tasks: Operations like video transcoding or massive data exports that exceed standard load balancer timeouts (30-60s).
- Asynchronous Offloading: Using message brokers (Redis, RabbitMQ, SQS) to process tasks outside the main request-response thread.
- Batch Processing: Large-scale updates where validating and persisting every record takes more time than a synchronous HTTP cycle allows.
- Silent Failures: The task is accepted, but the background worker crashes without updating the status, leaving the client in a permanent "pending" state.
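The offloading pattern behind most of these causes can be sketched with an in-memory queue. A real system would use a broker like Redis, RabbitMQ, or SQS; the `submit` and `workerTick` names here are illustrative:

```javascript
// Minimal sketch of asynchronous offloading: submit() is what the HTTP
// handler calls before returning 202; workerTick() runs outside the
// request/response cycle entirely.
const queue = [];
const jobs = new Map();

function submit(payload) {
  const id = String(jobs.size + 1);
  jobs.set(id, { state: 'queued', payload });
  queue.push(id);            // handler returns 202 here and exits
  return id;
}

function workerTick() {
  const id = queue.shift();
  if (!id) return;
  try {
    jobs.get(id).state = 'completed';  // the real work would happen here
  } catch (e) {
    jobs.get(id).state = 'failed';     // without this, the job fails silently
  }
}

const id = submit({ video: 'intro.mp4' });
console.log(jobs.get(id).state); // → queued
workerTick();
console.log(jobs.get(id).state); // → completed
```

Note that the `catch` branch is what prevents the "Silent Failures" cause above: a worker that crashes without writing a terminal state strands the client at "pending" forever.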
How to Fix a Stuck 202 Accepted
1. Locate the Tracking ID: Check the "Location" header or the response body for a Job ID or Status URL.
2. Implement Polling: Set up a client-side loop to query the status endpoint until a terminal state (Completed/Failed) is reached.
3. Audit Worker Health: Verify that background workers (e.g., Sidekiq, Celery, Lambda) are actively consuming the task queue.
4. Check Dead-Letter Queues (DLQ): Inspect your broker for failed jobs that were accepted via 202 but could not be completed.
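Step 1 above can be sketched as a small helper that hunts for the tracking reference. The body field names (`statusUrl`, `jobId`) and the fallback path shape are assumptions; check your API's actual contract:

```javascript
// Sketch: extract the tracking reference from a 202 response.
// Prefers the Location header, then falls back to common body fields.
function trackingUrl(headers, body) {
  if (headers.location) return headers.location;
  if (body && body.statusUrl) return body.statusUrl;
  if (body && body.jobId) return `/v1/status/${body.jobId}`; // assumed path shape
  return null; // no tracking reference: this 202 is effectively untrackable
}

console.log(trackingUrl({ location: '/v1/status/99' }, null)); // → /v1/status/99
console.log(trackingUrl({}, { jobId: 'abc' }));                // → /v1/status/abc
```

A `null` result is itself a diagnosis: a 202 with no tracking reference at all is an API design bug, since the client has no way to learn the outcome.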
Step-by-Step Diagnosis for Accepted
1. Verify that the 202 response includes a tracking reference (Location header or JSON Job ID).
2. Test the status endpoint manually: Does it return a valid state like "processing", "queued", or "completed"?
3. Use distributed tracing (OpenTelemetry) to track the request from the API gateway to the specific background worker.
4. Confirm the client is not being rate-limited during high-frequency status polling.
Asynchronous Workflow Matrix
- Use 202: For any task taking > 5 seconds, to ensure system stability and prevent 504 errors.
- Polling Pattern: Client asks "Is it done?" every few seconds. Best for web frontends.
- Webhook Pattern: Server notifies the client upon completion. Best for server-to-server integrations.
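The two patterns differ only in who initiates the completion signal. A sketch of the webhook side, where the client registers a callback URL at submit time (the `callbackUrl` field and both helper names are illustrative):

```javascript
// Sketch: webhook pattern. The client supplies a callback URL with the
// initial request; the server pushes a notification when the job finishes.
function submitWithWebhook(payload, callbackUrl) {
  return { status: 202, job: { payload, callbackUrl, state: 'queued' } };
}

function notifyCompletion(job, result) {
  job.state = 'completed';
  // In production this would be an HTTP POST to job.callbackUrl.
  return { to: job.callbackUrl, body: { state: job.state, result } };
}

const { job } = submitWithWebhook({ report: 'q3' }, 'https://client.example/hooks/done');
const delivery = notifyCompletion(job, { rows: 1200 });
console.log(delivery.to); // → https://client.example/hooks/done
```

The trade-off: webhooks eliminate polling load but require the client to expose a reachable endpoint, which is why polling remains the default for browser frontends.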
Lifecycle & Security of Async Jobs
- Status URL TTL: Ensure the tracking URL is temporary and expires (e.g., after 24 hours) to save resources and improve security.
- Authorization: The status endpoint must require the same credentials as the initial request to prevent Job ID enumeration attacks.
- Deadman's Switch: Auto-fail jobs that stay in "pending" for 2x their expected SLA to prevent "Zombie Tasks".
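The deadman's switch above can be sketched as a periodic sweep over the job store. The `sweepZombies` name and the concrete thresholds are illustrative:

```javascript
// Sketch of a "deadman's switch": auto-fail any job stuck in "pending"
// longer than 2x its expected SLA, so clients see a terminal state
// instead of polling a zombie task forever.
function sweepZombies(jobs, nowMs, slaMs) {
  for (const job of jobs) {
    if (job.state === 'pending' && nowMs - job.startedAt > 2 * slaMs) {
      job.state = 'failed';
      job.reason = 'deadman-switch: exceeded 2x SLA';
    }
  }
  return jobs;
}

const jobs = [
  { id: 1, state: 'pending', startedAt: 0 },     // 10s old: past 2x SLA
  { id: 2, state: 'pending', startedAt: 9000 },  // 1s old: still healthy
];
sweepZombies(jobs, 10000, 3000); // SLA 3s, so the cutoff is 6s
console.log(jobs.map(j => j.state)); // → [ 'failed', 'pending' ]
```

In practice this sweep would run on a timer (cron, scheduled Lambda, etc.) rather than being called inline.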
Implementation Examples
```shell
# 1. Submit task
curl -i -X POST https://api.errorreference.com/v1/tasks
# Response: 202 Accepted, Location: /v1/status/99

# 2. Check status
curl -i https://api.errorreference.com/v1/status/99
# Response: 200 OK, {"status": "processing", "eta": "10s"}
```

```javascript
async function poll(url, delay = 2000) {
  const res = await fetch(url);
  const job = await res.json();
  if (job.status === 'completed') return job.data;
  if (job.status === 'failed') throw new Error('Worker Error');
  await new Promise(r => setTimeout(r, delay));
  return poll(url, delay * 1.5); // Increase wait time progressively (exponential backoff)
}
```

How to Verify the Fix
- Confirm the initial request returns 202 and a valid status URL.
- Observe the status endpoint transitioning from "pending" to "completed".
- Verify the final result (e.g., a generated report) is fully accessible and accurate.
How to Prevent Recurrence
- Always include Retry-After: Tell the client exactly how many seconds to wait before polling again to reduce server load.
- Idempotency Keys: Ensure retrying a 202 request doesn't trigger duplicate background workers.
- Rich Status Payloads: Include `percent_complete` and `estimated_time_remaining` in the status JSON for better UX.
- Pro-tip: Return a 303 See Other from the status URL once the task is done to redirect the client to the final resource.
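The Retry-After recommendation above has a client-side counterpart: honor the header when it is present. A sketch, assuming the delta-seconds form of Retry-After (the header may also be an HTTP-date, which this helper simply falls through on; the 5-second fallback is arbitrary):

```javascript
// Sketch: derive the next polling delay from a Retry-After header.
// Numeric values are delta-seconds per the HTTP spec; anything else
// (absent, or an HTTP-date we choose not to parse here) uses the fallback.
function nextPollDelayMs(headers, fallbackMs = 5000) {
  const secs = Number(headers['retry-after']);
  if (Number.isFinite(secs) && secs >= 0) return secs * 1000;
  return fallbackMs;
}

console.log(nextPollDelayMs({ 'retry-after': '3' })); // → 3000
console.log(nextPollDelayMs({}));                     // → 5000
```

Wiring this into a polling loop means the server, not the client, controls polling pressure, which is exactly what Retry-After is for.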
Decision Support
- Compare Guide: 429 Too Many Requests vs 503 Service Unavailable. Use 429 for caller-specific throttling and 503 for service-wide outages, so retry behavior, escalation paths, and incident ownership stay correct.
- Compare Guide: 500 Internal Server Error vs 502 Bad Gateway: Root Cause. Use 500 for origin failures and 502 for invalid upstream responses at gateways, then route incidents to the right team.
- Playbook: API Timeout Playbook (502 / 504 / DEADLINE_EXCEEDED). Separate invalid upstream responses from upstream wait expiration and deadline exhaustion, then apply timeout budgets, safe retries, and circuit-breaker controls.
- Playbook: Availability and Dependency Playbook (500 / 503 / ServiceUnavailable). Separate origin-side 500 failures from temporary 503 dependency or capacity outages, then apply safe retry and escalation paths.
Provider Context
This guidance is specific to HTTP services. Always validate implementation details against official provider documentation before deploying to production.