102 - Processing
HTTP 102 Processing is a deprecated informational status code primarily used in WebDAV environments. It acts as an interim response to tell the client that a long-running request (like COPY or MOVE) is still active, preventing a connection timeout before a final response is sent.
Last reviewed: March 12, 2026 | Editorial standard: source-backed technical guidance
What Does Processing Mean?
Originally defined in RFC 2518, 102 was a keep-alive signal. It allowed servers to tell clients "I am still working" during operations that could take minutes. However, RFC 4918 (the revised WebDAV specification) dropped the code due to a lack of implementations, and it is now considered deprecated: holding an HTTP connection open with interim responses is unreliable in modern distributed systems. Today, encountering 102 usually indicates a legacy WebDAV integration or a misconfigured proxy attempting (incorrectly) to prevent 504 Gateway Timeouts.
Common Causes
- Legacy WebDAV Operations: long-running `COPY`, `MOVE`, or `LOCK` requests on old file servers.
- Interim Response Injection: a proxy server (such as an older HAProxy or Nginx setup) emitting 102 to keep a client connection alive instead of managing timeouts properly.
- Incompatible Client Libraries: modern clients (such as Axios or the Fetch API) often do not expose 1xx interim codes, so they wait silently until the final status arrives and may time out regardless.
- Middleware Logic Errors: a backend service incorrectly using 102 to signal "work in progress" instead of using the modern 202 Accepted pattern.
How to Fix Processing
1. Upgrade to 202 Accepted: if you control the API, stop using 102. Return `202 Accepted` immediately with a `Location` header for status polling.
2. Check Proxy Buffering: ensure intermediate proxies do not "swallow" interim 1xx responses. If they buffer the whole response, the 102 benefit is lost.
3. Implement Client-Side Wait: if consuming a 102-emitting server, ensure your HTTP client is configured to wait for the final status after receiving interim responses.
4. Verify WebDAV Modules: on Apache or Nginx servers, audit the `mod_dav` or `ngx_http_dav_module` settings if 102 is appearing unexpectedly.
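For step 3, one pragmatic approach when you cannot change the server is to give the client an overall deadline long enough to outlast the interim 102s. A sketch using the Fetch API available in Node 18+ (the deadline value is an assumption to tune per operation):

```javascript
// Sketch: fetch resolves only on the final status (1xx interim
// responses are consumed internally), so the key client-side knob is
// an overall deadline long enough to outlast the 102 phase.
async function fetchWithDeadline(url, opts = {}, deadlineMs = 120_000) {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), deadlineMs);
  try {
    return await fetch(url, { ...opts, signal: ctrl.signal });
  } finally {
    clearTimeout(timer); // don't leak the timer once we have an answer
  }
}
```

A 102-heavy WebDAV `COPY` might then be issued as `fetchWithDeadline(url, { method: 'COPY' }, 300_000)` to allow five minutes of interim responses before giving up.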
Step-by-Step Diagnosis for Processing
1. Confirm the source: use `curl -v` to see whether the 102 arrives from the origin or an intermediate proxy.
2. Identify the operation: is it a standard `GET`/`POST` or a WebDAV-specific method like `COPY` or `MOVE`?
3. Measure the gap: check how many 102 responses are sent before the final 200/201. If the gap exceeds 30 seconds, the client might drop the connection.
4. Inspect the Server header: confirm whether the backend is a known legacy system (e.g., an old IIS or Apache version with WebDAV).
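Steps 1 and 3 can be automated by parsing a saved `curl -v` trace. The sample trace below is illustrative; the `< HTTP/...` line format matches curl's verbose output:

```javascript
// Sketch: count interim 102 responses in a captured `curl -v` trace
// and report the final status. The trace string is a made-up sample.
const trace = `
< HTTP/1.1 102 Processing
< HTTP/1.1 102 Processing
< HTTP/1.1 102 Processing
< HTTP/1.1 201 Created
`;

function summarizeTrace(text) {
  // curl prefixes received header lines with "< "
  const statuses = [...text.matchAll(/^< HTTP\/[\d.]+ (\d{3})/gm)]
    .map((m) => Number(m[1]));
  return {
    interim102: statuses.filter((s) => s === 102).length,
    final: statuses.find((s) => s >= 200) ?? null,
  };
}

console.log(summarizeTrace(trace)); // { interim102: 3, final: 201 }
```

Feeding a real capture (e.g. `curl -v … 2> trace.txt`) into `summarizeTrace` gives a quick read on how long the server leans on interim responses before committing.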
Deprecated vs. Modern Pattern
- Legacy (102): the connection stays open; the server sends "Working..." every few seconds. Vulnerable to proxy drops.
- Modern (202): the connection closes immediately; the client polls a status URL. Scalable and proxy-friendly.
Proxy Persistence Audit
- Check whether your load balancer (ALB, Cloudflare) supports 1xx passthrough. Some terminate the connection if no final status is seen within 60 seconds.
Implementation Examples
```shell
curl -v -X COPY https://legacy.dav-server.com/large-archive \
  -H "Destination: /backup/archive"
# Look for the interim response:
# < HTTP/1.1 102 Processing
# (Wait...)
# < HTTP/1.1 201 Created
```

```javascript
// Instead of holding the connection open with 102, return 202
export async function POST(req) {
  const jobId = await startAsyncJob();
  return new Response(JSON.stringify({ jobId }), {
    status: 202,
    headers: { 'Location': `/api/jobs/${jobId}` }
  });
}
```

How to Verify the Fix
- Verify that long operations now return a `202 Accepted` with a trackable job ID.
- Confirm that the client no longer hangs indefinitely or fails with an "Unexpected Status" error.
- Ensure the final resource state (e.g., the copied file) is accurate once the operation completes.
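The first check can be automated by calling the handler directly and asserting on the status, `Location` header, and job ID. This sketch assumes a handler shaped like the 202 example above; `startAsyncJob` is a hypothetical stub standing in for whatever enqueues the real work:

```javascript
// Verification sketch (Node 18+ provides the web-standard Response).
// startAsyncJob is a stand-in for the real job-enqueueing logic.
async function startAsyncJob() {
  return 'job-123'; // hypothetical job id
}

async function handlePost() {
  const jobId = await startAsyncJob();
  return new Response(JSON.stringify({ jobId }), {
    status: 202,
    headers: { Location: `/api/jobs/${jobId}` },
  });
}

let checked; // populated by the check below
handlePost().then(async (res) => {
  checked = {
    status: res.status,
    location: res.headers.get('Location'),
    jobId: (await res.json()).jobId,
  };
});
```

In a test suite the same assertions would run against the deployed endpoint rather than the handler function, but the invariants are identical: 202 status, a resolvable `Location`, and a job ID the client can track.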
How to Prevent Recurrence
- Migrate to Async Workflows: design API endpoints to be asynchronous by default for any task exceeding roughly 5 seconds.
- Client Middleware: add a handler to your HTTP client to log 1xx informational responses for better observability.
- Avoid Connection-Holding: never rely on keeping an HTTP request open for minutes; use webhooks or polling instead.
- Pro tip: 102 is essentially a "zombie code." If you see it in a modern REST API, it is almost always a sign of technical debt that should be refactored to 202.
Provider Context
This guidance is specific to HTTP services. Always validate implementation details against official provider documentation before deploying to production.