RequestTimeout
AWS S3's RequestTimeout error (HTTP 400) indicates that the client connection was open but the request body was not received within S3's expected time window. It occurs when upload streams stall, bandwidth is throttled, or the data source is too slow to keep the stream active.
Last reviewed: March 28, 2026 | Editorial standard: source-backed technical guidance
What Does Request Timeout Mean?
RequestTimeout is a "Connection Inactivity" signal. S3 expects a steady flow of data once a PUT request starts. If the transfer rate drops too low or pauses for more than 20 seconds, S3 closes the TCP connection to free up resources. It is almost always caused by network instability on the client side or a "producer-consumer" mismatch where the application cannot generate data as fast as the network can send it.
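When S3 drops an idle connection this way, the error body it returns typically looks like the following (the exact message wording can vary between SDK versions and surfaces, so match on the `Code` element rather than the message text):

```xml
<Error>
  <Code>RequestTimeout</Code>
  <Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message>
</Error>
```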
Common Causes
- Stalled Streams: The source data (e.g., a slow database query) pauses mid-transfer, causing the upload stream to go idle.
- Unstable Mobile Networks: High packet loss or frequent signal drops on 4G/5G connections during large file uploads.
- Single-Part Bottleneck: Attempting a massive (2GB+) upload via a single `PutObject` call instead of breaking it into parts.
- Bandwidth Throttling: Client-side firewalls or corporate proxies limiting the outbound upload speed below S3's minimum threshold.
How to Fix Request Timeout
1. Force Multipart Upload: For any file over 100MB, use Multipart Upload. Each part has its own timeout window, making the overall process much more resilient.
2. Buffer Before Upload: If your data comes from a slow API or database, write it to a temporary file or memory buffer first, then start the S3 upload at full speed.
3. Enable Transfer Acceleration: Use S3 Transfer Acceleration to route your data through the closest CloudFront edge location, minimizing the distance traveled over the public internet.
4. Adjust TCP Timeouts: Increase the `requestTimeout` and `connectionTimeout` settings in your AWS SDK configuration.
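As a sketch of step 4: in SDK for JavaScript v3, these timeouts live on the request handler rather than on the client options directly. The package name below is for current v3 releases (older releases shipped it as `@aws-sdk/node-http-handler`); the specific millisecond values are illustrative, not recommendations.

```javascript
// Configuration sketch: raising socket timeouts in AWS SDK for JavaScript v3.
// connectionTimeout = max time to establish the TCP connection;
// requestTimeout   = max idle time on the socket before the SDK aborts.
import { S3Client } from "@aws-sdk/client-s3";
import { NodeHttpHandler } from "@smithy/node-http-handler";

const s3 = new S3Client({
  requestHandler: new NodeHttpHandler({
    connectionTimeout: 5_000,  // ms
    requestTimeout: 300_000,   // ms; a generous window for slow links
  }),
});
```

Raising client-side timeouts only helps when the client is aborting first; it cannot stop S3 itself from closing a connection that has gone idle.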
Step-by-Step Diagnosis for Request Timeout
1. Measure throughput: Calculate your average upload speed. If it is below 1MB/s for large files, you are at high risk of a timeout.
2. Check the data source: Is your code reading from a disk or database while uploading? Test whether the source itself is stalling.
3. Identify consistency: Does the error only happen for specific file sizes or from specific geographic regions?
4. Review proxy logs: Check whether an intermediate proxy is timing out before AWS S3 does.
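Step 1 can be sketched as a simple risk check. The 1 MB/s threshold mirrors the rule of thumb above; it is a heuristic to tune for your own network, not an official S3 limit.

```javascript
// Sketch: estimate upload throughput and flag RequestTimeout risk.
function throughputMBps(bytesSent, elapsedSeconds) {
  if (elapsedSeconds <= 0) throw new RangeError("elapsedSeconds must be > 0");
  return bytesSent / (1024 * 1024) / elapsedSeconds;
}

// Heuristic threshold (assumption): sustained < 1 MB/s on a large upload
// means the connection is likely to go idle long enough to be closed.
function atTimeoutRisk(bytesSent, elapsedSeconds, minMBps = 1) {
  return throughputMBps(bytesSent, elapsedSeconds) < minMBps;
}

console.log(atTimeoutRisk(50 * 1024 * 1024, 120));  // 50 MiB in 2 min ≈ 0.42 MB/s → true
console.log(atTimeoutRisk(500 * 1024 * 1024, 120)); // ≈ 4.2 MB/s → false
```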
Single-Part vs. Multipart Resilience
- Single-Part: One failure = 100% data loss. The upload must restart from byte 0.
- Multipart: One failure = the loss of a single 5MB-10MB part. Only that part is retried automatically.
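When sizing parts, keep in mind S3's documented multipart limits: at most 10,000 parts per upload, with a 5 MiB minimum part size (except the last part). A minimal helper that picks a part size respecting those limits might look like this (hypothetical helper, not an SDK API):

```javascript
// Sketch: choose a part size so a file fits within S3 multipart limits
// (max 10,000 parts; 5 MiB minimum part size except the final part).
const MIN_PART_SIZE = 5 * 1024 * 1024;
const MAX_PARTS = 10_000;

function choosePartSize(fileSizeBytes) {
  let partSize = MIN_PART_SIZE;
  // Double the part size until the part count fits under the limit.
  while (Math.ceil(fileSizeBytes / partSize) > MAX_PARTS) partSize *= 2;
  return partSize;
}

console.log(choosePartSize(100 * 1024 * 1024)); // 100 MiB → 5 MiB parts
```

Libraries such as `@aws-sdk/lib-storage` do this sizing for you; the sketch just shows why very large files need parts bigger than the 5 MiB floor.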
The Producer-Consumer Stall
- If your Lambda/server reads from a slow legacy DB and streams to S3, S3 is the fast consumer waiting for the slow producer. Use a buffered stream to prevent idle-connection kills.
Implementation Examples
```javascript
import { createReadStream } from "node:fs";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const parallelUploads3 = new Upload({
  client: new S3Client({}),
  params: {
    Bucket: "my-bucket",
    Key: "huge-file.zip",
    Body: createReadStream("huge-file.zip"), // any Readable stream works
  },
  queueSize: 4,              // number of parts uploaded in parallel
  partSize: 5 * 1024 * 1024, // 5 MiB, the S3 minimum part size
  leavePartsOnError: false,  // abort and clean up incomplete parts on failure
});

await parallelUploads3.done();
```

```shell
aws s3 cp large-file.bin s3://my-bucket/ --debug
# Look for 'RequestTimeout' in the raw HTTP responses during long stalls
```

How to Verify the Fix
- Confirm that large uploads now complete successfully even under simulated network throttling.
- Verify that the S3 Transfer Manager logs show parts being uploaded in parallel.
- Check CloudWatch metrics for a reduction in 4xx errors for the specific bucket.
How to Prevent Recurrence
- Standardize SDK Clients: Use the AWS SDK Transfer Manager (`@aws-sdk/lib-storage` in JS v3), which defaults to multipart for large files.
- Implement Resumable Logic: Store the `UploadId` and `PartETag`s to resume an upload after a network drop instead of starting over.
- Network Monitoring: Alert on high latency between your app servers and S3 regional endpoints.
- Pro-tip: For global applications, use a signed URL combined with Transfer Acceleration to let the client upload directly to the closest AWS edge, bypassing your server entirely.
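The pro-tip above can be sketched with the v3 presigner. `useAccelerateEndpoint` is a real S3Client option, but the bucket must have Transfer Acceleration enabled first; the bucket name, key, and expiry here are illustrative.

```javascript
// Configuration sketch: presign a PUT against the accelerate endpoint
// so browsers/mobile clients upload via the nearest AWS edge location.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ useAccelerateEndpoint: true });

const url = await getSignedUrl(
  s3,
  new PutObjectCommand({ Bucket: "my-bucket", Key: "upload.bin" }),
  { expiresIn: 3600 } // URL validity in seconds
);
// Hand `url` to the client, which PUTs the file body directly to it.
```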
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.