EntityTooLarge
AWS EntityTooLarge (Entity Too Large) means the proposed upload exceeds the maximum object size allowed for the chosen operation: a single PutObject request accepts at most 5 GiB, while multipart upload supports objects up to 5 TiB. In Amazon S3, this error returns HTTP 400 (Bad Request).
Last reviewed: February 12, 2026 | Editorial standard: source-backed technical guidance
What Does Entity Too Large Mean?
S3 rejected the request because the payload exceeds the size limit for the selected upload path. The object is not stored until the upload mode or size boundaries are corrected.
Common Causes
- Payload exceeds the S3 size quota for the chosen upload operation.
- Large object is sent via single-request upload instead of multipart.
- Uploader thresholds are stale for the current bucket type, endpoint, or operation path.
How to Fix Entity Too Large
1. Route oversized uploads to the multipart flow when single-request upload limits are exceeded.
2. Validate object and part-size constraints against current S3 quotas for the exact operation and endpoint.
3. Retry with corrected upload mode, thresholds, and chunk boundaries.
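As a minimal sketch of step 1, the routing decision can be keyed to S3's 5 GiB single-request (PutObject) limit. The constant and function name here are illustrative, not part of any SDK:

```python
# S3 accepts at most 5 GiB in a single PutObject request;
# larger objects (up to 5 TiB) must use multipart upload.
SINGLE_PUT_LIMIT = 5 * 1024**3  # 5 GiB, in bytes

def choose_upload_mode(size_bytes: int) -> str:
    """Pick an upload path from the payload size alone."""
    return "multipart" if size_bytes > SINGLE_PUT_LIMIT else "single"
```

In practice, SDK transfer managers (for example, boto3's `TransferConfig` with its `multipart_threshold` setting) implement this switch for you; the point is that the threshold must reflect the documented limit for the exact operation in use.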
Step-by-Step Diagnosis for Entity Too Large
1. Record the object size and selected upload mode for each failure.
2. Inspect uploader branch selection, chunking logic, and part sizes in traces.
3. Reproduce in a controlled environment with deterministic file-size tests.
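For step 3, deterministic reproduction is easiest with files of an exact byte size. A sketch of one hypothetical helper, using a seek-then-write trick so the file is sparse (cheap to stage) on filesystems that support it:

```python
import os
import tempfile

def make_test_file(size_bytes: int) -> str:
    """Create a file of exactly size_bytes (sparse where the OS allows it)."""
    f = tempfile.NamedTemporaryFile(delete=False)
    if size_bytes > 0:
        f.seek(size_bytes - 1)  # writing one byte at the end fixes the length
        f.write(b"\0")
    f.close()
    return f.name
```

Staging sizes just below, at, and just above each suspected boundary turns a flaky field report into a repeatable test case.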
Upload Mode and Size Boundary Checks
- Validate the payload size against the exact operation boundary (example: a single-request upload path selected after the object size crosses that path's limit).
- Confirm the multipart path is used for large payloads with valid part sizing (example: an uploader forced into single-shot mode after feature-flag drift).
Chunking and Transfer Pipeline Validation
- Inspect chunker logic and part-size configuration (example: a transformed payload exceeds the planned size after compression is disabled).
- Audit upload-client branch selection by size threshold (example: a 6 GiB file incorrectly routed to the non-multipart path).
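Part-size configuration has to satisfy several S3 multipart constraints at once: parts of at least 5 MiB (except the last), at most 5 GiB per part, at most 10,000 parts, and a 5 TiB object ceiling. A sketch of a planner that respects all four (the function name is illustrative):

```python
MIB, GIB = 1024**2, 1024**3
MIN_PART, MAX_PART = 5 * MIB, 5 * GIB   # S3 multipart part-size bounds
MAX_PARTS = 10_000                      # S3 cap on parts per upload
MAX_OBJECT = 5 * 1024 * GIB             # 5 TiB object-size ceiling

def plan_part_size(total_bytes: int, desired: int = 64 * MIB) -> int:
    """Return a part size that keeps the upload within S3 multipart limits."""
    if total_bytes > MAX_OBJECT:
        raise ValueError("object exceeds the 5 TiB S3 limit")
    part = max(desired, MIN_PART)
    # Double the part size until the part count fits under the 10,000 cap.
    while (total_bytes + part - 1) // part > MAX_PARTS:
        part *= 2
    if part > MAX_PART:
        raise ValueError("no part size satisfies the constraints")
    return part
```

A fixed part size that was tuned for small objects can silently overflow the 10,000-part cap on larger ones, which is why growing the part size with the object is safer than hard-coding it.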
How to Verify the Fix
- Re-run the upload and confirm EntityTooLarge no longer appears.
- Validate the object size and multipart boundaries on the completed upload.
- Confirm size-related upload failures stay below baseline.
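One way to sketch the second check: compare the local source size against the stored size (in practice obtained from a HEAD request, e.g. boto3's `head_object` `ContentLength`), and optionally confirm the part-boundary arithmetic. The helper name and parameters are illustrative:

```python
def verify_completed_upload(local_size: int, remote_size: int,
                            part_size: int = 0, part_count: int = 0) -> bool:
    """Check that the stored object matches the local source in length,
    and optionally that the multipart boundary math lines up."""
    if local_size != remote_size:
        return False
    if part_size and part_count:
        expected = (local_size + part_size - 1) // part_size  # ceiling division
        return expected == part_count
    return True
```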
How to Prevent Recurrence
- Route uploads by size threshold to PutObject or multipart upload as appropriate.
- Add boundary tests for maximum object and part sizes in upload clients.
- Monitor upload-size distribution and failures by operation mode.
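A minimal sketch of the monitoring idea: bucket observed uploads by coarse size band and operation mode, so misrouted traffic (single-shot uploads above 5 GiB) stands out in the counts. Bucket labels and function names are assumptions for illustration:

```python
from collections import Counter

GIB = 1024**3

def size_bucket(size_bytes: int) -> str:
    """Coarse histogram bucket for upload-size monitoring."""
    if size_bytes <= 1 * GIB:
        return "<=1GiB"
    if size_bytes <= 5 * GIB:
        return "1-5GiB"
    return ">5GiB"

def summarize(uploads):
    """uploads: iterable of (size_bytes, mode) -> Counter of (bucket, mode)."""
    return Counter((size_bucket(s), m) for s, m in uploads)
```

Any nonzero count in the (">5GiB", "single") cell is a routing bug waiting to surface as EntityTooLarge.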
Pro Tip
- Compute and log the effective payload size after all client-side transforms, so upload-path routing decisions use the true wire size, not the source file size.
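The tip above can be sketched with a compression transform, where the wire size can differ sharply from the source size (the function name and flag are illustrative):

```python
import gzip

def effective_wire_size(payload: bytes, gzip_enabled: bool) -> int:
    """Size that actually crosses the wire after client-side transforms.
    Route on this value, not on the source file size."""
    return len(gzip.compress(payload)) if gzip_enabled else len(payload)
```

The same principle applies in reverse: disabling a transform (as in the compression example earlier) can push the wire size above a limit the source size appeared to satisfy.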
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.