CodeStorageExceededException
AWS CodeStorageExceededException is an account-level error indicating that the cumulative storage used by all Lambda deployment packages, layers, and versions has exceeded the 75GB regional quota.
Last reviewed: March 25, 2026 | Editorial standard: source-backed technical guidance
What Does Code Storage Exceeded Mean?
CodeStorageExceededException is a hard ceiling on your account's "deployment debt." Every time you publish a new Lambda version or update a layer, AWS retains that package until you explicitly delete it. In CI/CD pipelines that deploy daily, these 50MB-100MB packages accumulate rapidly. Because the 75GB limit is shared across all functions in a region, one "noisy" function with hundreds of versions can block deployments for your entire infrastructure.
Common Causes
- Unbounded Versioning: CI/CD pipelines setting PublishVersion: true on every push without deleting older, unused versions.
- Orphaned Layers: Large dependency layers (e.g., Pandas, Scikit-learn) updated frequently, with each multi-hundred-megabyte version staying in storage indefinitely.
- Large Deployment Packages: Zips bundling unnecessary dev-dependencies that inflate the total storage footprint. (Container images are stored in ECR and count against ECR quotas rather than this 75GB limit.)
- High Function Density: Hundreds of micro-functions in a single account/region, each contributing to the 75GB total.
How to Fix Code Storage Exceeded
1. Delete Old Versions: Use a script to delete all function versions except the last 3-5 and any version currently pointed to by an Alias.
2. Clean Up Unused Layers: Audit your Lambda Layers and delete versions that are no longer referenced by any active function.
3. Request a Quota Increase: If your workload genuinely requires more than 75GB, request an increase via AWS Service Quotas (though cleanup is usually the better first step).
4. Disable Auto-Publish: Turn off automatic version publishing in your deployment framework (e.g., versionFunctions: false in the Serverless Framework, or removing AutoPublishAlias from SAM templates) if you don't use Aliases for traffic shifting.
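Step 2 of the fix, finding layer versions no function still uses, reduces to a set-difference once you have the data in hand. A minimal sketch of that selection logic, assuming you have already listed the layer's versions and collected the layer-version ARNs referenced by your functions (the function name and sample ARNs here are hypothetical):

```javascript
// Given all versions of a layer and the set of layer-version ARNs still
// referenced by active functions, return the versions that are safe to delete.
function findUnreferencedLayerVersions(layerVersions, referencedArns) {
  const referenced = new Set(referencedArns);
  return layerVersions.filter(v => !referenced.has(v.LayerVersionArn));
}

// Hypothetical data, shaped like ListLayerVersions output:
const layerVersions = [
  { Version: 1, LayerVersionArn: "arn:aws:lambda:us-east-1:123:layer:deps:1" },
  { Version: 2, LayerVersionArn: "arn:aws:lambda:us-east-1:123:layer:deps:2" },
  { Version: 3, LayerVersionArn: "arn:aws:lambda:us-east-1:123:layer:deps:3" },
];
const referenced = ["arn:aws:lambda:us-east-1:123:layer:deps:3"];

console.log(findUnreferencedLayerVersions(layerVersions, referenced).map(v => v.Version));
// -> [ 1, 2 ]
```

In practice you would build the referenced set by scanning every function's Layers configuration across the region before deleting anything.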
Step-by-Step Diagnosis for Code Storage Exceeded
1. Run aws lambda get-account-settings: The AccountUsage.TotalCodeSize field shows your current consumption; compare it against AccountLimit.TotalCodeSize to confirm you are at the limit.
2. Identify the "Storage Hogs": List all functions and sort them by CodeSize x NumberOfVersions.
3. Check Aliases: Before deleting a version, ensure it is not being used by a prod or staging alias.
4. Audit Layers: Check for old layer versions that are 0.5GB+ and haven't been updated in months.
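The "storage hog" ranking in step 2 is simple arithmetic once you have each function's CodeSize and version count. A minimal sketch with hypothetical inventory data (in a real audit you would build this from ListFunctions plus ListVersionsByFunction):

```javascript
// Rank functions by total storage: per-version CodeSize x number of versions.
function rankByStorage(functions) {
  return functions
    .map(f => ({ ...f, totalBytes: f.codeSize * f.versionCount }))
    .sort((a, b) => b.totalBytes - a.totalBytes);
}

// Hypothetical inventory (codeSize in bytes):
const inventory = [
  { name: "auth-api", codeSize: 5_000_000, versionCount: 40 },   // ~200 MB total
  { name: "etl-job", codeSize: 100_000_000, versionCount: 120 }, // ~12 GB total
  { name: "cron-ping", codeSize: 1_000_000, versionCount: 10 },  // ~10 MB total
];

console.log(rankByStorage(inventory).map(f => f.name));
// -> [ 'etl-job', 'auth-api', 'cron-ping' ]
```

The top one or two entries of this ranking are usually where version pruning recovers the most space.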
The Math of Storage Exhaustion
- The Formula: (Deployment Zip Size) x (Number of Versions) = Total Consumption.
- Example: A 100MB function deployed 10 times a week adds 1GB/week. In 75 weeks, roughly a year and a half, this single function hits the 75GB limit on its own.
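Plugging numbers into the formula above makes the runway concrete. A minimal sketch, treating sizes in plain GB as the example does:

```javascript
// Weeks until one function alone consumes the 75GB regional quota.
function weeksUntilLimit(zipSizeGb, deploysPerWeek, limitGb = 75) {
  return limitGb / (zipSizeGb * deploysPerWeek);
}

// 100MB zip, 10 deploys per week -> 1GB/week of accumulation.
console.log(weeksUntilLimit(0.1, 10)); // -> 75 (weeks, ~1.5 years)
```

Running the same calculation for each "storage hog" tells you which pipelines need automated pruning first.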
Safe Deletion Strategy
- Always keep $LATEST and any version referenced by an Alias. Deleting a version that an Alias points to will break your production environment immediately.
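The rule above can be enforced mechanically before any delete call is issued. A minimal sketch of the selection, assuming you have already collected the version numbers your aliases point to (the data here is hypothetical):

```javascript
// Keep $LATEST, anything an alias references, and the newest `keep` of the
// remaining versions; everything else is eligible for deletion.
function deletableVersions(versions, aliasTargets, keep = 5) {
  const pinned = new Set(aliasTargets);
  return versions
    .filter(v => v !== "$LATEST" && !pinned.has(v))
    .sort((a, b) => Number(a) - Number(b)) // oldest first
    .slice(0, -keep); // drop the newest `keep` from the delete list
}

const versions = ["$LATEST", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"];
const aliasTargets = ["8"]; // e.g., what the prod alias currently points to

console.log(deletableVersions(versions, aliasTargets));
// -> [ '1', '2', '3', '4' ]
```

Version 8 never appears in the delete list even though it is old, because the alias pins it.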
Implementation Examples
aws lambda get-account-settings \
  --query '{Used: AccountUsage.TotalCodeSize, Limit: AccountLimit.TotalCodeSize}'

// Logic: List versions -> Exclude $LATEST -> Sort numerically (oldest first) -> Keep newest 5 -> Delete rest
const versions = await lambda.listVersionsByFunction({ FunctionName: "my-func" }).promise();
const toDelete = versions.Versions
  .filter(v => v.Version !== '$LATEST')
  .sort((a, b) => Number(a.Version) - Number(b.Version)) // numeric sort, so "10" sorts after "9"
  .slice(0, -5); // keep the newest 5
for (const v of toDelete) {
  await lambda.deleteFunction({ FunctionName: "my-func", Qualifier: v.Version }).promise();
}
How to Verify the Fix
- Run get-account-settings again and confirm AccountUsage.TotalCodeSize has dropped significantly.
- Attempt a new deployment; it should now succeed without the CodeStorageExceededException error.
- Verify that rollback capabilities (if needed) are still intact for the last few versions.
How to Prevent Recurrence
- Automated Cleanup: Add a "post-deploy" step in your pipeline that runs a cleanup script to keep only the N most recent versions.
- CloudWatch Alarms: Lambda does not publish code-storage usage as a built-in metric, so run a scheduled check of get-account-settings, publish AccountUsage.TotalCodeSize as a custom metric, and alarm at 80% (60GB) of the quota.
- Optimized Zips: Use packaging exclusions (for example, package patterns in the Serverless Framework) to strip dev-dependencies and binary assets from node_modules.
- Pro-tip: If you use container-image-based Lambdas, their storage is managed differently (ECR), and it counts toward ECR account quotas rather than this one. For zip-based Lambdas, aggressive version pruning is the primary long-term solution.
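The 80% alarm threshold can be expressed as a simple check your scheduled job runs against get-account-settings output before publishing the custom metric. A minimal sketch with hypothetical usage numbers:

```javascript
// True when code storage usage crosses the alert threshold (default 80%).
function storageAlarm(usedBytes, limitBytes, threshold = 0.8) {
  return usedBytes >= limitBytes * threshold;
}

const limit = 75 * 1024 ** 3; // quota, as reported in AccountLimit.TotalCodeSize

console.log(storageAlarm(61 * 1024 ** 3, limit)); // 61GB used -> true (past the 60GB line)
console.log(storageAlarm(40 * 1024 ** 3, limit)); // 40GB used -> false
```

Alerting at 80% leaves enough headroom to run cleanup before deployments start failing outright.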