
EntityTooLarge - Entity Too Large

Hitting an **EntityTooLarge** error means your S3 upload exceeds the maximum size allowed for that operation: a single PUT request can upload at most 5 GB, each part of a multipart upload is likewise capped at 5 GB, and a completed object can be at most 5 TB. This is a client-side (4xx) error returned when S3 validates the size of an incoming upload. It most commonly appears when a large file is uploaded as a single object; AWS recommends multipart upload for anything over 100 MB and requires it above 5 GB.
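For illustration, this is the kind of call that triggers the error - a single put-object request for a file over 5 GB (bucket and file names are placeholders):

```bash
# A single PUT is capped at 5 GB; S3 rejects a larger body with
# an EntityTooLarge error before the object is created.
aws s3api put-object \
  --bucket my-bucket \
  --key backups/big-backup.tar \
  --body big-backup.tar
```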

Common Causes

  • Policy: A presigned POST upload policy enforces a content-length-range smaller than the file, so S3 rejects the upload even though IAM otherwise allows it (see the policy sketch after this list).
  • Network: A proxy or API Gateway in front of S3 imposes its own payload limit (API Gateway caps request payloads at 10 MB).
  • Limits: The object would exceed the 5 TB maximum object size, a single PUT or an individual multipart part exceeds 5 GB, or a single-request upload is used where multipart is required.
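An easy-to-miss trigger is the content-length-range condition in a browser-based (presigned POST) upload policy: S3 returns EntityTooLarge when the uploaded file exceeds the policy's maximum, regardless of the service limits above. A minimal policy sketch - the bucket name, key prefix, and 100 MB cap are all illustrative:

```bash
# Illustrative POST upload policy: files over 100 MB (104857600 bytes)
# are rejected with EntityTooLarge at upload time, even though the
# bucket itself accepts objects up to 5 TB.
cat > post-policy.json <<'EOF'
{
  "expiration": "2030-01-01T00:00:00Z",
  "conditions": [
    {"bucket": "my-bucket"},
    ["starts-with", "$key", "uploads/"],
    ["content-length-range", 0, 104857600]
  ]
}
EOF
```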

Solutions

  1. Step 1: Diagnose - Check the file size: ls -lh FILE, or stat -f%z FILE on macOS (stat -c%s FILE on Linux), and compare it against the 5 GB single-upload and 5 TB object limits.
  2. Step 2: Diagnose - Confirm which limit applies: the object maximum is 5 TB, but any single PUT (and any individual multipart part) is capped at 5 GB.
  3. Step 3: Diagnose - Check the upload method: determine whether you are calling put-object (single request) or a multipart upload, and what threshold and part size the CLI is configured to use (see the configuration sketch after this list).
  4. Step 4: Fix - Use multipart upload for large files: aws s3 cp FILE s3://BUCKET/KEY switches to multipart automatically above the CLI's configured threshold (8 MB by default); for manual control, use the s3api create-multipart-upload flow shown in the code examples below.
  5. Step 5: Fix - For files beyond 5 TB: split the data into multiple objects, or move it with AWS Transfer Family, or use AWS Snowball for petabyte-scale migration.
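The threshold at which the CLI switches from a single PUT to multipart, and the size of each part, are ordinary CLI configuration values stored in ~/.aws/config. A sketch with illustrative sizes - note that S3 allows at most 10,000 parts per upload, so very large objects need proportionally larger parts:

```bash
# Upload files above 64 MB as multipart (the default threshold is 8 MB)
aws configure set default.s3.multipart_threshold 64MB
# Use 32 MB parts; with a 10,000-part maximum, a 5 TB object
# needs parts of roughly 525 MB or more
aws configure set default.s3.multipart_chunksize 32MB
# Upload up to 10 parts in parallel
aws configure set default.s3.max_concurrent_requests 10
```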

Code Examples

Check File Size and Use Multipart Upload

```bash
#!/bin/bash
FILE_PATH="large-file.zip"
BUCKET_NAME="my-bucket"
OBJECT_KEY="large-file.zip"

# Check file size (stat -f%z on macOS/BSD, stat -c%s on Linux)
echo "=== Checking File Size ==="
FILE_SIZE=$(stat -f%z "${FILE_PATH}" 2>/dev/null || stat -c%s "${FILE_PATH}" 2>/dev/null)
FILE_SIZE_MB=$((FILE_SIZE / 1024 / 1024))
FILE_SIZE_GB=$((FILE_SIZE / 1024 / 1024 / 1024))

echo "File size: ${FILE_SIZE} bytes (${FILE_SIZE_MB} MB, ${FILE_SIZE_GB} GB)"

# S3 limits: 5 TB max object size; single PUT capped at 5 GB
MAX_SINGLE_UPLOAD=5368709120   # 5 GB in bytes
MAX_OBJECT_SIZE=5497558138880  # 5 TB in bytes

if [ "${FILE_SIZE}" -gt "${MAX_OBJECT_SIZE}" ]; then
  echo "✗ File exceeds the 5 TB object limit - cannot upload as one object"
  echo "Consider AWS Snowball or splitting the file"
  exit 1
elif [ "${FILE_SIZE}" -gt "${MAX_SINGLE_UPLOAD}" ]; then
  echo "✓ File >5 GB - AWS CLI will use multipart upload automatically"
  echo ""
  echo "=== Uploading with Multipart (Automatic) ==="
  aws s3 cp "${FILE_PATH}" "s3://${BUCKET_NAME}/${OBJECT_KEY}"
else
  echo "✓ File <=5 GB - a single upload is allowed"
  echo ""
  echo "=== Uploading (Single or Multipart) ==="
  aws s3 cp "${FILE_PATH}" "s3://${BUCKET_NAME}/${OBJECT_KEY}"
fi
```
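Manual Multipart Upload with s3api

For the manual control mentioned in step 4, the s3api flow is create-multipart-upload, one upload-part call per chunk, then complete-multipart-upload. A minimal sketch - file and bucket names are placeholders, and error handling and cleanup (abort-multipart-upload) are omitted for brevity:

```bash
#!/bin/bash
BUCKET_NAME="my-bucket"
OBJECT_KEY="large-file.zip"

# 1. Start the upload and capture the upload ID
UPLOAD_ID=$(aws s3api create-multipart-upload \
  --bucket "${BUCKET_NAME}" --key "${OBJECT_KEY}" \
  --query UploadId --output text)

# 2. Split the file into 1 GB parts (part numbers start at 1)
split -b 1G large-file.zip part-

# 3. Upload each part, collecting its ETag
#    (--output text prints the ETag with its surrounding quotes,
#    which is exactly what the JSON part list needs)
PART_NUMBER=1
PARTS=""
for PART in part-*; do
  ETAG=$(aws s3api upload-part \
    --bucket "${BUCKET_NAME}" --key "${OBJECT_KEY}" \
    --upload-id "${UPLOAD_ID}" \
    --part-number "${PART_NUMBER}" \
    --body "${PART}" \
    --query ETag --output text)
  PARTS="${PARTS}{\"ETag\": ${ETAG}, \"PartNumber\": ${PART_NUMBER}},"
  PART_NUMBER=$((PART_NUMBER + 1))
done

# 4. Complete the upload with the collected part list
aws s3api complete-multipart-upload \
  --bucket "${BUCKET_NAME}" --key "${OBJECT_KEY}" \
  --upload-id "${UPLOAD_ID}" \
  --multipart-upload "{\"Parts\": [${PARTS%,}]}"
```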
Use AWS CLI for Automatic Multipart Upload

```bash
#!/bin/bash
# The AWS CLI switches to multipart upload automatically above its
# configured threshold (8 MB by default) - no manual part handling needed.
FILE_PATH="very-large-file.zip"
BUCKET_NAME="my-bucket"
OBJECT_KEY="very-large-file.zip"

echo "=== AWS CLI Automatic Multipart Upload ==="

# Simple upload - the CLI handles part splitting and retries
aws s3 cp "${FILE_PATH}" "s3://${BUCKET_NAME}/${OBJECT_KEY}"

# When streaming from stdin (size unknown up front), pass --expected-size
# so the CLI can pick a part size that stays under the 10,000-part limit
echo ""
echo "=== Streaming Upload with --expected-size ==="
cat "${FILE_PATH}" | aws s3 cp - "s3://${BUCKET_NAME}/${OBJECT_KEY}" \
  --expected-size "$(stat -f%z "${FILE_PATH}" 2>/dev/null || stat -c%s "${FILE_PATH}")"

# Check the uploaded object
echo ""
echo "=== Verifying Upload ==="
aws s3 ls "s3://${BUCKET_NAME}/${OBJECT_KEY}" --human-readable
```
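
Verify Object Size After Upload

As a final check, head-object returns the stored object's metadata; ContentLength should match the local file size in bytes (bucket and key are the same placeholders as above):

```bash
# ContentLength is the stored object size in bytes
aws s3api head-object \
  --bucket "my-bucket" \
  --key "very-large-file.zip" \
  --query ContentLength
```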

Provider Information

This error code is specific to AWS services. For more information, refer to the official AWS documentation.
