BadDigest - Bad Digest

Getting a **BadDigest** error means the Content-MD5 hash you sent doesn't match the hash AWS calculated from the data it received. This client-side (4xx) error is raised when AWS validates data integrity against the MD5 checksum you supplied, most often when uploading S3 objects with a Content-MD5 header. The usual causes: the MD5 hash was calculated or encoded incorrectly (S3 expects the base64 encoding of the binary digest, not the hex string), the data was corrupted during network transmission, or the file was modified after the hash was calculated.
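The encoding detail is the most common trap: S3 expects the Content-MD5 header to carry the base64 encoding of the 16-byte binary digest, not the 32-character hex string that md5sum prints. A minimal sketch of the difference, assuming md5sum, xxd, and base64 are available (the empty input is used because its MD5 value is well known):

```bash
#!/bin/bash
# Hex digest as printed by md5sum - WRONG for the Content-MD5 header
HEX_HASH=$(printf '' | md5sum | cut -d' ' -f1)
echo "Hex:    ${HEX_HASH}"

# Base64 of the 16-byte binary digest - what S3 expects in Content-MD5
BASE64_HASH=$(printf '%s' "${HEX_HASH}" | xxd -r -p | base64)
echo "Base64: ${BASE64_HASH}"
```

Sending the hex string where the base64 value belongs fails validation on the server side and produces exactly this error.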

Common Causes

  • Hash calculation: Content-MD5 value calculated incorrectly, or sent as the 32-character hex string instead of the base64 encoding of the binary digest.
  • Network: Data corrupted during network transmission, including over VPC endpoints.
  • File state: Object modified or truncated after the hash was calculated but before the upload completed.
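One practical way to tell a bad local hash from in-transit corruption after the fact: for a single-part (non-multipart) upload, the object's S3 ETag is the hex MD5 of its contents, so it can be compared against a local digest. A hedged sketch; the bucket and key are hypothetical, and the head-object call is commented out so the local part runs standalone:

```bash
#!/bin/bash
# Create a scratch file to hash (stands in for the real upload payload)
FILE_PATH=$(mktemp)
echo "hello" > "${FILE_PATH}"

# Local hex MD5 of the file
LOCAL_MD5=$(md5sum "${FILE_PATH}" | cut -d' ' -f1)
echo "Local MD5: ${LOCAL_MD5}"

# Hypothetical comparison against the uploaded copy. Single-part uploads
# only - multipart ETags are NOT plain MD5 digests.
# REMOTE_ETAG=$(aws s3api head-object --bucket my-bucket --key file.txt \
#   --query ETag --output text | tr -d '"')
# [ "${LOCAL_MD5}" = "${REMOTE_ETAG}" ] || echo "object corrupted in transit"

rm -f "${FILE_PATH}"
```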

Solutions

  1. Step 1: Diagnose - Check if Content-MD5 was provided: Review the request headers. Verify the Content-MD5 header is present. Check how the hash was calculated.
  2. Step 2: Diagnose - Recalculate the MD5 hash: Hash the file with md5sum FILE (Linux) or md5 -q FILE (macOS). Compare with the value sent to AWS. Verify the value is the base64 encoding of the binary digest, not the hex string.
  3. Step 3: Diagnose - Check for data corruption: Verify the file hasn't changed since the hash was calculated. Check network connection stability. Verify the file wasn't modified during upload.
  4. Step 4: Fix - Recalculate and re-upload: Calculate the correct MD5: MD5_HASH=$(md5sum FILE | cut -d' ' -f1 | xxd -r -p | base64). Upload with the correct hash: aws s3api put-object --bucket BUCKET --key KEY --body FILE --content-md5 ${MD5_HASH}.
  5. Step 5: Fix - Use multipart upload for large files: A single put-object call is capped at 5 GB, so for large files use aws s3 cp FILE s3://BUCKET/KEY, which switches to multipart upload automatically and handles integrity verification itself. The part size is a CLI config setting, e.g. aws configure set default.s3.multipart_chunksize 64MB. Alternatively, let the AWS CLI handle checksums automatically by not specifying Content-MD5 at all.
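Step 4 can be wrapped in a retry that reruns the whole hash-and-upload command whenever stderr mentions BadDigest, so a transient mismatch (e.g. the file was still being written) gets a fresh hash on the next attempt. A sketch; `fake_upload` is a stand-in for the real hash-and-put-object command and simulates one BadDigest failure followed by success:

```bash
#!/bin/bash
# Rerun a command up to N times, retrying only on BadDigest failures.
upload_with_retry() {
  local attempts=$1; shift
  local i err
  for ((i = 1; i <= attempts; i++)); do
    if err=$("$@" 2>&1); then
      echo "attempt ${i}: success"
      return 0
    fi
    case "${err}" in
      *BadDigest*) echo "attempt ${i}: BadDigest, retrying with fresh hash" ;;
      *) echo "attempt ${i}: non-retryable failure: ${err}"; return 1 ;;
    esac
  done
  return 1
}

# Stand-in upload command: fails with BadDigest once, then succeeds.
STATE_FILE=$(mktemp)
fake_upload() {
  if [ ! -s "${STATE_FILE}" ]; then
    echo done > "${STATE_FILE}"
    echo "An error occurred (BadDigest) when calling the PutObject operation" >&2
    return 1
  fi
  return 0
}

upload_with_retry 3 fake_upload
RESULT=$?
```

In real use, the command passed to `upload_with_retry` would be a small script that recomputes the base64 MD5 and runs `aws s3api put-object`, so each attempt hashes the file as it currently exists.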

Code Examples

Calculate Content-MD5 Hash for S3 Upload

```bash
#!/bin/bash
FILE_PATH="path/to/file.txt"

echo "=== Calculating MD5 Hash ==="

# Method 1: Using md5sum (Linux) or md5 (macOS)
if command -v md5sum &> /dev/null; then
  # Linux: get hex MD5, convert to binary, then base64
  HEX_HASH=$(md5sum "${FILE_PATH}" | cut -d' ' -f1)
  echo "Hex MD5: ${HEX_HASH}"

  # Convert hex to binary, then base64
  BASE64_HASH=$(echo "${HEX_HASH}" | xxd -r -p | base64)
  echo "Base64 MD5: ${BASE64_HASH}"
elif command -v md5 &> /dev/null; then
  # macOS: md5 -q outputs the hex digest only
  HEX_HASH=$(md5 -q "${FILE_PATH}")
  echo "Hex MD5: ${HEX_HASH}"
  BASE64_HASH=$(echo "${HEX_HASH}" | xxd -r -p | base64)
  echo "Base64 MD5: ${BASE64_HASH}"
else
  echo "md5sum or md5 not found"
  exit 1
fi

# Upload with Content-MD5
echo -e "\n=== Uploading with Content-MD5 ==="
BUCKET_NAME="my-bucket"
OBJECT_KEY="file.txt"

if aws s3api put-object \
  --bucket "${BUCKET_NAME}" \
  --key "${OBJECT_KEY}" \
  --body "${FILE_PATH}" \
  --content-md5 "${BASE64_HASH}"; then
  echo "✓ Upload successful with MD5 verification"
else
  echo "✗ Upload failed - check if BadDigest error"
fi
```
Verify File Integrity Before Upload

```bash
#!/bin/bash
FILE_PATH="path/to/file.txt"

# Portable MD5 helper: md5sum on Linux, md5 on macOS
md5_of() {
  if command -v md5sum &> /dev/null; then
    md5sum "$1" | cut -d' ' -f1
  else
    md5 -q "$1"
  fi
}

echo "=== Verifying File Integrity ==="

# Calculate hash before upload
echo "Calculating initial MD5..."
INITIAL_HASH=$(md5_of "${FILE_PATH}")
echo "Initial hash: ${INITIAL_HASH}"

# Wait a moment and recalculate
echo -e "\nRecalculating after delay..."
sleep 2
FINAL_HASH=$(md5_of "${FILE_PATH}")
echo "Final hash: ${FINAL_HASH}"

if [ "${INITIAL_HASH}" = "${FINAL_HASH}" ]; then
  echo "✓ File integrity verified (hashes match)"
else
  echo "✗ File may have been modified (hashes don't match)"
  echo "Do not upload - file may be corrupted or being modified"
  exit 1
fi

# Check file size (stat -f%z on macOS/BSD, stat -c%s on Linux)
FILE_SIZE=$(stat -f%z "${FILE_PATH}" 2>/dev/null || stat -c%s "${FILE_PATH}" 2>/dev/null)
echo -e "\nFile size: ${FILE_SIZE} bytes"
```
Use Multipart Upload for Large Files

```bash
#!/bin/bash
# For large files, use multipart upload (handles integrity automatically)
FILE_PATH="large-file.zip"
BUCKET_NAME="my-bucket"
OBJECT_KEY="large-file.zip"

echo "=== Using Multipart Upload (Recommended for Large Files) ==="

# The AWS CLI switches to multipart upload automatically above its
# multipart_threshold (8 MB by default) and verifies integrity itself
if aws s3 cp "${FILE_PATH}" "s3://${BUCKET_NAME}/${OBJECT_KEY}"; then
  echo "✓ Upload successful"
  echo "Multipart upload handles integrity verification automatically"
else
  echo "✗ Upload failed"
fi

# Alternative: the part size is a CLI config setting, not a per-command flag
echo -e "\n=== Adjusting Multipart Settings ==="
echo "For very large files, tune the threshold and chunk size:"
echo "aws configure set default.s3.multipart_threshold 64MB"
echo "aws configure set default.s3.multipart_chunksize 64MB"

# Note: Best practice is to let the AWS CLI handle checksums automatically
echo -e "\n=== Best Practice ==="
echo "Don't specify Content-MD5 - let the AWS CLI calculate checksums automatically"
echo "This avoids BadDigest errors from incorrect hash calculation"
```

Provider Information

This error code is specific to AWS services. For more information, refer to the official AWS documentation.
