100 - Continue
HTTP 100 Continue is an informational (1xx) status code indicating that the server has received the request headers and the client may proceed to transmit the request body. It serves as a bandwidth-saving handshake for large uploads.
Last reviewed: March 11, 2026 | Editorial standard: source-backed technical guidance
What Does Continue Mean?
HTTP 100 acts as a "checkpoint." When a client sends a large payload (e.g., >1MB), it first sends the headers with "Expect: 100-continue". The server validates authentication and file limits before the client spends bandwidth on the body. If the server rejects the headers (e.g., 401 Unauthorized), it saves the client from a useless, expensive upload. A failed 100-handshake usually manifests as a mysterious delay or a client-side hang.
Common Causes
- Proxy Interception: Reverse proxies (such as Nginx) stripping the "Expect" header, so the backend never sees the handshake request.
- Load Balancer Timeout: AWS ALB or other edge devices terminating the 100-handshake prematurely, leading to 502 Bad Gateway errors.
- Wait-and-Send Race: The client (curl, for example) waits only about 1 second for a 100 response before sending the body anyway, causing confusion on high-latency links.
- Legacy Server Rejection: Older API servers returning 417 Expectation Failed because they do not recognize the Expect mechanism.
How to Fix Continue
1. Configure Nginx Passthrough: Add proxy_set_header Expect $http_expect; to ensure the header reaches your upstream application.
2. Client-Side Fallback: If the server returns 417, configure your HTTP client to send the body immediately without waiting for confirmation.
3. Explicit Server Listeners: Ensure your backend (Node.js/Go/Python) explicitly listens for the "checkContinue" event if it isn't handled automatically.
4. Adjust Timeout: Increase the client's expect timeout if your server performs complex header validation (e.g., heavy database-backed auth).
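Python's standard-library http.server exposes the hook from step 3 as handle_expect_100, the analog of Node's checkContinue event. A minimal sketch, assuming a simple Authorization presence check stands in for real authentication:

```python
# Sketch: reject bad requests at the header stage so the client never
# uploads the body. handle_expect_100 is called by http.server whenever
# a request carries "Expect: 100-continue".
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # interim responses require HTTP/1.1

    def handle_expect_100(self):
        # Hypothetical auth check: any Authorization header passes.
        if self.headers.get("Authorization") is None:
            self.send_error(401, "Unauthorized")
            return False  # tells http.server not to read the body
        # Default implementation sends "HTTP/1.1 100 Continue".
        return super().handle_expect_100()

    def do_POST(self):
        # Only reached after the client was cleared to send the body.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"OK")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8100), UploadHandler).serve_forever()
```

With this in place, an unauthorized client sees a 401 immediately after sending headers and never wastes bandwidth on the payload.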
Step-by-Step Diagnosis for Continue
1. Run curl -v -H "Expect: 100-continue" -d "test-body" [URL] and check for the interim "HTTP/1.1 100 Continue" line.
2. Audit proxy logs for a 1-second gap between the header-send and body-send; this confirms a "Missing 100" hang.
3. Check for a 417 Expectation Failed status line in the response, which points directly to an unsupported expectation.
4. Test the application server directly, bypassing the proxy layer, to isolate where the Expect header is being dropped.
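The gap from step 2 can also be measured from the client side. A minimal probe sketch using raw sockets (so it works even without curl; host, port, and path are placeholders): an elapsed time near your client's expect timeout, curl's default being 1 second, suggests the 100 response is being swallowed somewhere upstream.

```python
# Probe: send only the headers with Expect: 100-continue and time how
# long the server takes to answer (interim 100 or early rejection).
import socket
import time

def time_to_interim(host, port, path="/uploads/", timeout=5.0):
    sock = socket.create_connection((host, port), timeout=timeout)
    request = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Length: 10\r\n"
        "Expect: 100-continue\r\n\r\n"
    ).encode()
    start = time.monotonic()
    sock.sendall(request)
    try:
        first = sock.recv(4096)  # interim 100, or an immediate rejection
    except socket.timeout:
        first = b""              # nothing arrived: the 100 was swallowed
    elapsed = time.monotonic() - start
    sock.close()
    return elapsed, first.split(b"\r\n", 1)[0]
```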
Expectation Handshake Logic
- Successful Path: Headers sent -> 100 received -> Body sent -> 200 OK.
- Rejection Path: Headers sent -> 401/413 received -> Body never sent (bandwidth saved).
- Broken Path: Headers sent -> Timeout -> Body sent anyway -> Latency penalty.
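The three paths above can be sketched from the client side. This is an illustrative raw-socket client, not a production HTTP implementation; the 1-second default mirrors curl's wait-and-send behavior:

```python
# Client-side Expect handshake: wait up to `timeout` seconds for an
# interim response; on rejection stop, on timeout fall through and send
# the body anyway (the "broken path" latency penalty).
import select
import socket

def post_with_expect(host, port, path, body, timeout=1.0):
    sock = socket.create_connection((host, port))
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Expect: 100-continue\r\n\r\n"
    ).encode()
    sock.sendall(headers)
    ready, _, _ = select.select([sock], [], [], timeout)
    if ready:
        interim = sock.recv(4096).decode(errors="replace")
        if not interim.startswith("HTTP/1.1 100"):
            sock.close()
            return interim  # rejection path: body never sent
    # success path (got 100) or broken path (timeout): send the body
    sock.sendall(body)
    final = sock.recv(4096).decode(errors="replace")
    sock.close()
    return final
```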
Infrastructure Configuration Audit
- Nginx: By default, Nginx may strip Expect headers. Use proxy_set_header Expect $http_expect; and ensure proxy_http_version 1.1; is active.
- Cloud CDNs: Check whether your CDN (Cloudflare/Akamai) terminates the 100-continue handshake at the edge or passes it to the origin.
Implementation Examples
curl -v -H "Expect: 100-continue" \
-X POST --data-binary @largefile.zip \
https://api.errorreference.com/v1/uploads
# Look for interim status line:
# > Expect: 100-continue
# < HTTP/1.1 100 Continue

location /uploads/ {
proxy_pass http://backend_app;
proxy_http_version 1.1;
proxy_set_header Expect $http_expect; # Crucial fix
}

How to Verify the Fix
- Confirm the initial request returns 100 Continue before any body bytes are transmitted.
- Verify that unauthorized requests are rejected immediately at the header stage.
- Ensure large file uploads no longer experience a fixed 1-second delay at the start.
How to Prevent Recurrence
- IaC Templates: Include Expect header passthrough in all standard Nginx and load balancer Terraform/Ansible scripts.
- API Smoke Tests: Include a test case with a large body and Expect: 100-continue to catch proxy stripping during CI/CD.
- Monitoring: Alert on 417 Expectation Failed spikes, which indicate client-server protocol incompatibility.
- Pro-tip: For internal services on fast, reliable networks, disabling Expect: 100-continue on the client saves one round-trip of latency with little risk.
Decision Support
- Compare Guide: 429 Too Many Requests vs 503 Service Unavailable. Use 429 for caller-specific throttling and 503 for service-wide outages, so retry behavior, escalation paths, and incident ownership stay correct.
- Compare Guide: 500 Internal Server Error vs 502 Bad Gateway (Root Cause). Use 500 for origin failures and 502 for invalid upstream responses at gateways, then route incidents to the right team.
- Playbook: API Timeout Playbook (502 / 504 / DEADLINE_EXCEEDED). Separates invalid upstream responses from upstream wait expiration and deadline exhaustion, applying timeout budgets, safe retries, and circuit-breaker controls.
- Playbook: Availability and Dependency Playbook (500 / 503 / ServiceUnavailable). Separates origin-side 500 failures from temporary 503 dependency or capacity outages, then applies safe retry and escalation paths.
Provider Context
This guidance is specific to HTTP services. Always validate implementation details against official provider documentation before deploying to production.