RequestTooLargeException
AWS RequestTooLargeException means the payload sent to a Lambda function exceeds the maximum allowed invocation payload size. Synchronous invocations are limited to 6MB and asynchronous invocations are limited to 256KB. The request is rejected before the function executes.
Last reviewed: March 25, 2026 | Editorial standard: source-backed technical guidance
What Does Request Too Large Mean?
RequestTooLargeException is a hard payload size ceiling - Lambda returns the error immediately and never executes the function. The limit applies to the entire serialized invocation payload. Applications that pass large datasets, file contents, or uncompressed JSON blobs directly in Lambda invocations will hit this limit and must redesign how data is passed.
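A quick way to reason about the ceiling is to measure the serialized event the same way you would before invoking. A minimal sketch, with the limit constants taken from the figures above:

```python
import json

# Hard invocation payload limits (bytes), per the quotas above
SYNC_LIMIT = 6 * 1024 * 1024   # 6MB, synchronous (RequestResponse)
ASYNC_LIMIT = 256 * 1024       # 256KB, asynchronous (Event)

def payload_size(event) -> int:
    """Size of the serialized event in bytes."""
    return len(json.dumps(event).encode("utf-8"))

event = {"records": ["x" * 1000] * 300}  # roughly 300KB of inline data
size = payload_size(event)
print(size <= SYNC_LIMIT, size <= ASYNC_LIMIT)  # → True False
```

The same event fits a synchronous invoke but is rejected on an asynchronous one, which is why checking the invocation type matters.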
Common Causes
- API Gateway passes a large file upload or uncompressed JSON payload directly to a Lambda function as the invocation event, exceeding the 6MB synchronous invocation limit.
- A Step Functions workflow passes accumulated state between Lambda steps as the event payload - state grows across steps until it exceeds the 256KB async invocation limit.
- An application invokes Lambda asynchronously to process a large dataset by embedding the data directly in the event payload rather than passing a reference to S3 or another storage service.
How to Fix Request Too Large
1. Replace large inline payloads with an S3 reference - upload the data to S3 and pass only the bucket name and key in the Lambda event, letting the function fetch the data directly from S3.
2. For API Gateway file uploads, configure API Gateway to store the upload in S3 using a service integration and pass the S3 object reference to Lambda rather than the file contents.
3. Compress the event payload before invocation if the data cannot be externalized - gzip compression can reduce JSON payloads significantly and keep them under the limit.
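When the data truly cannot be externalized, step 3 can be sketched as a gzip-plus-base64 envelope. The `encoding`/`body` field names are illustrative assumptions; the receiving function must reverse the transformation itself:

```python
import base64
import gzip
import json

def compress_event(data: dict) -> dict:
    """Pack a JSON-serializable payload into a gzip + base64 envelope."""
    raw = json.dumps(data).encode("utf-8")
    body = base64.b64encode(gzip.compress(raw)).decode("ascii")
    return {"encoding": "gzip_b64", "body": body}

def decompress_event(event: dict) -> dict:
    """Reverse of compress_event, run inside the Lambda handler."""
    raw = gzip.decompress(base64.b64decode(event["body"]))
    return json.loads(raw)

# Repetitive JSON compresses well; verify the round trip
data = {"rows": [{"id": i, "status": "ok"} for i in range(5000)]}
envelope = compress_event(data)
assert decompress_event(envelope) == data
```

Note that base64 adds roughly 33% overhead, so compression only helps when the gzip savings more than offset it - repetitive JSON usually qualifies, already-compressed binary data does not.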
Step-by-Step Diagnosis for Request Too Large
1. Measure the size of the payload being passed to Lambda - log the payload size before invocation and compare against the 6MB sync or 256KB async limit.
2. Identify the data source causing the oversized payload - file upload content, accumulated Step Functions state, large query results, or uncompressed JSON blobs.
3. Check whether the invocation is synchronous or asynchronous - async invocations have a much stricter 256KB limit that is easy to exceed with moderate-sized payloads.
4. Review the data flow design to identify where large datasets can be externalized to S3, DynamoDB, or another storage service instead of passing through Lambda event payloads.
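The first three steps can be folded into a preflight check run before every invoke. The hard limits are Lambda's documented quotas; the early-warning thresholds (4MB sync, 200KB async) are illustrative and should be tuned per workload:

```python
import json

# (hard limit, alert threshold) in bytes, keyed by Lambda InvocationType
LIMITS = {
    "RequestResponse": (6 * 1024 * 1024, 4 * 1024 * 1024),  # synchronous
    "Event": (256 * 1024, 200 * 1024),                      # asynchronous
}

def preflight_check(event: dict, invocation_type: str = "RequestResponse") -> int:
    """Return the payload size; warn near the limit, fail fast above it."""
    size = len(json.dumps(event).encode("utf-8"))
    limit, alert = LIMITS[invocation_type]
    if size > limit:
        raise ValueError(
            f"{size} B payload exceeds the {limit} B {invocation_type} limit"
        )
    if size > alert:
        print(f"WARNING: {size} B payload is approaching the {limit} B limit")
    return size
```

Failing in your own code before the invoke gives a clearer error than RequestTooLargeException and avoids a wasted round trip.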
Payload Size Measurement and Externalization
- Log JSON.stringify(event).length or equivalent before each Lambda invocation to measure payload size trends - add alerting when payloads approach 4MB for sync or 200KB for async to catch growth before it triggers RequestTooLargeException (example: Step Functions execution where the state object grows from 10KB to 300KB across 50 steps hits the async limit without the payload ever appearing large at any single step).
- Identify which fields in the Lambda event are largest and evaluate whether they can be replaced with a reference - file contents, base64-encoded images, large JSON arrays, and query result sets are the most common sources of oversized payloads and are all candidates for S3 externalization (example: replace event.fileContent with event.s3Key and have the function call s3.getObject to fetch the content directly).
Step Functions State Management
- For Step Functions workflows where state accumulates across steps, use the S3 pass-through pattern - have each Lambda step write large outputs to S3 and pass only the S3 reference as the step output, keeping the state machine payload small regardless of data volume (example: a data transformation pipeline where each step adds processed records to the state object should instead write each step output to S3 and accumulate only S3 keys in the state).
- Review Step Functions Map state configurations - a Map state that processes many items and returns all results inline can produce a combined output that exceeds the payload limit even when individual item outputs are small (example: a Map state processing 1000 items where each result is 300 bytes produces a 300KB combined output that exceeds the async limit).
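The pass-through pattern for a single step can be sketched as follows: write the large output to S3 and return only a small reference, so the state machine accumulates keys instead of data. The bucket name and returned field names are assumptions:

```python
import json
import uuid

def externalize_step_output(step_name, records, s3=None,
                            bucket="workflow-state-bucket"):
    """Write a step's output to S3; return a small reference for the next state."""
    if s3 is None:
        import boto3  # real client only outside tests
        s3 = boto3.client("s3")
    key = f"state/{step_name}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(records))
    return {"s3Bucket": bucket, "s3Key": key, "recordCount": len(records)}
```

Each step's output stays a few hundred bytes regardless of how large `records` grows, so the accumulated state never approaches the limit.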
Implementation Examples
```shell
# Upload file to S3 first
aws s3 cp large-dataset.json s3://my-bucket/inputs/large-dataset.json

# Invoke Lambda with S3 reference instead of file contents
aws lambda invoke \
  --function-name process-dataset \
  --payload '{"s3Bucket":"my-bucket","s3Key":"inputs/large-dataset.json"}' \
  --cli-binary-format raw-in-base64-out \
  response.json
```

```javascript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { LambdaClient, InvokeCommand } from '@aws-sdk/client-lambda';

const s3 = new S3Client({ region: 'us-east-1' });
const lambda = new LambdaClient({ region: 'us-east-1' });

async function processLargeDataset(data) {
  const key = `inputs/${Date.now()}-dataset.json`;

  // Store large data in S3
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: key,
    Body: JSON.stringify(data),
  }));

  // Pass only the S3 reference to Lambda
  const response = await lambda.send(new InvokeCommand({
    FunctionName: 'process-dataset',
    Payload: JSON.stringify({ s3Bucket: 'my-bucket', s3Key: key }),
  }));
  return response;
}
```

```python
import json
from datetime import datetime

import boto3

s3 = boto3.client('s3', region_name='us-east-1')
lambda_client = boto3.client('lambda', region_name='us-east-1')

def invoke_with_large_payload(function_name, large_data, bucket='my-bucket'):
    # Upload large data to S3
    key = f'inputs/{datetime.utcnow().isoformat()}-payload.json'
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(large_data),
    )
    # Invoke Lambda with the S3 reference only
    response = lambda_client.invoke(
        FunctionName=function_name,
        Payload=json.dumps({'s3Bucket': bucket, 's3Key': key}),
    )
    return response
```
How to Verify the Fix
- Re-invoke Lambda with the redesigned payload and confirm RequestTooLargeException no longer appears.
- Verify the function correctly retrieves externalized data from S3 or another storage service and produces the expected output.
- Measure the new payload size and confirm it has sufficient headroom below the applicable limit to accommodate future data growth.
How to Prevent Recurrence
- Establish a payload size budget for each Lambda function and enforce it in code review - document the expected payload schema and maximum size so engineers know when to externalize data.
- Add payload size assertions to Lambda integration tests that fail when the event exceeds a configurable threshold - catching oversized payloads in tests prevents RequestTooLargeException in production.
- For Step Functions workflows, use the S3 pass-through pattern by default for any step that processes or produces data larger than a few kilobytes.
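The payload size assertion for integration tests can be a one-line helper. The 200KB default budget is an assumption that leaves headroom under the 256KB async limit; set it per function:

```python
import json

DEFAULT_BUDGET = 200 * 1024  # assumed default, below the 256KB async limit

def assert_event_within_budget(event, budget=DEFAULT_BUDGET):
    """Fail the test if a sample event exceeds the function's payload budget."""
    size = len(json.dumps(event).encode("utf-8"))
    assert size <= budget, f"event is {size} B, over the {budget} B budget"
    return size
```

Running this against representative fixture events in CI turns silent payload growth into a failing test instead of a production error.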
Pro Tip
- Design Lambda functions to accept references rather than data by default - a function that receives an S3 key and fetches the object itself is more flexible, more testable, and immune to RequestTooLargeException regardless of how large the underlying data grows.
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.