BadDigest
AWS returns BadDigest (HTTP 400) when the Content-MD5 or x-amz-checksum-* value in a request does not match the digest Amazon S3 computes over the payload it actually received.
Last reviewed: February 12, 2026 | Editorial standard: source-backed technical guidance
What Does Bad Digest Mean?
S3 validates integrity before persisting the object, so the upload fails outright until the client-provided digest and the transmitted bytes agree exactly.
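For context, Content-MD5 carries the Base64 encoding of the raw 16-byte MD5 digest of the body, not its hex form. A minimal standard-library sketch of that contract (`content_md5` is an illustrative helper, not an SDK function):

```python
import base64
import hashlib

def content_md5(payload: bytes) -> str:
    """Base64-encode the raw 16-byte MD5 digest, per the Content-MD5 header contract."""
    return base64.b64encode(hashlib.md5(payload).digest()).decode("ascii")

# S3 recomputes this value over the bytes it receives and compares;
# any difference yields BadDigest (HTTP 400).
header_value = content_md5(b"hello world")
```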
Common Causes
- Content-MD5 or x-amz-checksum-* header is calculated from bytes that differ from the final transmitted body.
- A proxy, gzip middleware, or custom interceptor mutates payload bytes after digest generation.
- Digest formatting is wrong for the header contract (example: hex digest sent where Base64 is expected for Content-MD5).
- Multipart or streaming uploads hash pre-transform buffers instead of the exact bytes written to the socket.
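The second cause above can be reproduced in a few lines. This sketch (standard library only) shows why compressing after hashing guarantees a mismatch:

```python
import base64
import gzip
import hashlib

body = b'{"event": "upload-test"}'

# Digest computed from the pre-transform buffer...
header_digest = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

# ...but a gzip middleware rewrites the body after digest generation.
wire_bytes = gzip.compress(body)
wire_digest = base64.b64encode(hashlib.md5(wire_bytes).digest()).decode("ascii")

# The header no longer describes the transmitted bytes: S3 returns BadDigest.
assert header_digest != wire_digest
```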
How to Fix Bad Digest
1. Recompute the digest from the exact outbound wire payload, not from an earlier in-memory representation.
2. Stop all body transforms after checksum/signing steps (compression, newline normalization, templating).
3. Align checksum algorithm, header type, and encoding format before dispatch.
4. Retry with byte-level checksum tracing enabled from client buffer through transport.
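Steps 1 and 2 can be sketched together, assuming a hypothetical `finalize_request` helper that applies every body transform before hashing:

```python
import base64
import hashlib
from typing import Callable, Iterable

def finalize_request(body: bytes,
                     transforms: Iterable[Callable[[bytes], bytes]]) -> tuple[bytes, str]:
    """Run all body transforms first, then digest the exact outbound bytes."""
    for transform in transforms:
        body = transform(body)
    # Hashing happens last, so the header always describes the wire payload.
    digest = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
    return body, digest

wire, header = finalize_request(b"payload", [lambda b: b + b"\n"])
```

Because the digest is derived from the returned `wire` bytes, any transform added later stays upstream of the hash instead of silently invalidating it.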
Step-by-Step Diagnosis for Bad Digest
1. Log checksum algorithm, outgoing digest header value, and local payload hash immediately before send.
2. Capture or reconstruct wire bytes and compare them against the digest input buffer.
3. Inspect S3 error detail elements to compare your sent digest versus the digest S3 computed.
4. Audit multipart and streaming pipelines for late-stage transforms after hashing.
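Steps 1 and 2 above combine into one pre-send check. A sketch (the `trace_checksum` helper and its log format are illustrative, not from any SDK):

```python
import base64
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("upload-trace")

def trace_checksum(headers: dict, outbound_bytes: bytes) -> bool:
    """Log and compare the digest header against a hash of the actual outbound bytes."""
    sent = headers.get("Content-MD5", "")
    local = base64.b64encode(hashlib.md5(outbound_bytes).digest()).decode("ascii")
    log.info("algorithm=MD5 header=%s local=%s match=%s", sent, local, sent == local)
    return sent == local
```

A `False` return before dispatch means BadDigest is guaranteed, so the client can fail fast instead of burning a round trip.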
Checksum Byte-Path Trace
- Trace bytes from source buffer to transport and recompute the digest at each hop (example: hash computed before gzip, but wire payload is compressed).
- Verify checksum header semantics and encoding rigorously (example: Content-MD5 requires Base64 output, not hexadecimal text).
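The Base64-versus-hex distinction in the second bullet is easy to verify directly; both forms come from the same digest, but only one satisfies the header contract:

```python
import base64
import hashlib

payload = b"checksum me"
md5 = hashlib.md5(payload)

hex_form = md5.hexdigest()                          # 32 hex chars: wrong for Content-MD5
b64_form = base64.b64encode(md5.digest()).decode()  # 24-char Base64: what S3 expects

# Same underlying digest, two encodings; sending the hex form
# fails S3's integrity validation.
assert hex_form != b64_form
```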
Multipart and SDK Integrity Checks
- Inspect multipart uploaders for stream position drift and per-part hash correctness (example: stale stream cursor produces wrong digest for part 3).
- Confirm SDK checksum options do not conflict with manually set headers (example: x-amz-checksum-algorithm is SHA256 while custom Content-MD5 is still injected).
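The stream-drift case in the first bullet can be guarded against by seeking explicitly before each part is hashed. A sketch with a hypothetical `hash_part` helper:

```python
import base64
import hashlib
import io

def hash_part(stream: io.BytesIO, offset: int, size: int) -> str:
    """Seek explicitly before reading so cursor drift cannot poison the part digest."""
    stream.seek(offset)          # never trust the current cursor position
    part_bytes = stream.read(size)
    return base64.b64encode(hashlib.md5(part_bytes).digest()).decode("ascii")

# Simulate a stale cursor left behind by earlier work on the same stream.
source = io.BytesIO(b"a" * 10 + b"b" * 10)
source.read(3)                               # cursor drift
digest_part2 = hash_part(source, 10, 10)     # still hashes exactly part 2
```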
How to Verify the Fix
- Re-run the upload and confirm BadDigest is no longer returned.
- Validate checksum and object-integrity signals on the completed upload.
- Confirm upload-integrity error rates remain at baseline.
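A sketch of the second check: compare the checksum an S3 response can carry (e.g. ChecksumSHA256 from HeadObject or GetObjectAttributes, Base64-encoded, for single-part uploads) against the local source bytes. `integrity_ok` is an illustrative helper and the response value below is simulated rather than fetched:

```python
import base64
import hashlib

def integrity_ok(local_payload: bytes, returned_checksum_sha256: str) -> bool:
    """True when a Base64 SHA-256 checksum from S3 matches the local source bytes."""
    local = base64.b64encode(hashlib.sha256(local_payload).digest()).decode("ascii")
    return local == returned_checksum_sha256

# Simulated response value for demonstration; in practice this comes from S3.
# Note: multipart uploads report a composite checksum-of-checksums instead.
payload = b"uploaded object bytes"
simulated_response = base64.b64encode(hashlib.sha256(payload).digest()).decode("ascii")
```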
How to Prevent Recurrence
- Standardize checksum generation in a shared upload component.
- Add end-to-end tests that compare checksum headers to outbound payload bytes.
- Alert on checksum-mismatch spikes in upload telemetry.
Pro Tip
- Choose either SDK-managed checksums or manual checksum headers per upload path; mixing both often creates hidden algorithm/encoding drift.
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.