ItemCollectionSizeLimitExceededException
AWS ItemCollectionSizeLimitExceededException means a DynamoDB item collection (all items sharing one partition key value in a table with a local secondary index) has exceeded the 10 GB size limit (HTTP 400).
Last reviewed: February 12, 2026 | Editorial standard: source-backed technical guidance
What Does Item Collection Size Limit Exceeded Exception Mean?
On tables with a local secondary index, DynamoDB rejects writes to any partition key value whose item collection has reached the hard 10 GB limit. This is a data-model ceiling, not a retryable throttle: retrying the same write keeps failing until the collection shrinks or the model changes.
Common Causes
- A local secondary index is defined on a partition key with heavy skew toward one key value.
- Too many items accumulate under the same partition key, pushing that item collection above 10 GB.
- Projected attributes and write patterns grow item-collection size faster than expected.
- The data model lacks sharding or alternate access patterns for unbounded per-key growth.
How to Fix Item Collection Size Limit Exceeded Exception
1. Identify the hot partition key values causing item-collection growth.
2. Stop or reroute writes that continue inflating the oversized item collection.
3. Redesign the data model: shard partition keys or move the access pattern to a global secondary index (GSI).
4. Do not rely on AWS Support for limit increases; the 10 GB LSI item-collection cap is a hard constraint that cannot be raised.
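The key-sharding step above can be sketched as a small helper. This is a minimal illustration, not an AWS API: the function name `shard_partition_key` and the shard count are assumptions, and readers would fan out queries over `base_key#0` through `base_key#N-1` to reassemble results.

```python
import hashlib

def shard_partition_key(base_key: str, item_id: str, shard_count: int = 10) -> str:
    """Spread one logical entity across N physical partition keys.

    Hashes a per-item attribute (not the base key itself) so that writes
    land on different shards; hashing only the base key would always
    return the same shard and defeat the purpose.
    """
    shard = int(hashlib.sha256(item_id.encode("utf-8")).hexdigest(), 16) % shard_count
    return f"{base_key}#{shard}"
```

Deterministic hashing means the same item always maps to the same shard, so conditional writes and idempotent retries still target a single key.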
Step-by-Step Diagnosis for Item Collection Size Limit Exceeded Exception
1. Confirm the table uses a local secondary index and the failing write targets that indexed partition key.
2. Use write-path logs and key distribution analysis to isolate the top partition key contributors.
3. Capture item collection metrics on write operations to estimate the per-key growth trend.
4. Validate whether a GSI or key-sharding strategy can preserve query requirements without LSI key-size pressure.
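For step 3, DynamoDB write operations return `ItemCollectionMetrics` (with an `ItemCollectionKey` and a `SizeEstimateRangeGB` pair) when the request sets `ReturnItemCollectionMetrics='SIZE'`. A minimal sketch of checking that response shape, with the helper name and the 8 GB warning threshold as assumptions:

```python
def item_collection_warning(response: dict, threshold_gb: float = 8.0):
    """Flag partition keys whose item collection is approaching the 10 GB cap.

    Expects a PutItem/UpdateItem/DeleteItem response issued with
    ReturnItemCollectionMetrics='SIZE'. Returns None when the table has no
    LSI (no metrics present) or the collection is comfortably below the
    threshold.
    """
    metrics = response.get("ItemCollectionMetrics")
    if not metrics:
        return None
    # SizeEstimateRangeGB is a [lower, upper] estimate; alert on the upper bound.
    _, upper_gb = metrics["SizeEstimateRangeGB"]
    if upper_gb >= threshold_gb:
        return {"key": metrics["ItemCollectionKey"], "upper_gb": upper_gb}
    return None
```

Feeding every write response through a check like this turns the hard failure into an early warning per hot key.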
LSI Item-Collection Growth Analysis
- Identify top partition keys by item-collection growth where an LSI is present (example: one tenant key accumulates years of events under a single partition key).
- Measure the projected attribute footprint that accelerates LSI collection growth (example: a newly projected blob attribute doubles the per-item indexed size).
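A rough per-item footprint estimate is enough for trend analysis. This sketch deliberately simplifies DynamoDB's actual sizing rules (it treats every value as a string and ignores type-specific overhead), so treat it as an approximation for comparing before/after projection changes, not a billing-accurate calculator:

```python
def rough_item_size_bytes(item: dict) -> int:
    """Approximate an item's stored size as the sum of UTF-8 attribute-name
    bytes plus value bytes. Good enough to compare the footprint of
    projecting (or dropping) an attribute across many items."""
    total = 0
    for name, value in item.items():
        total += len(name.encode("utf-8"))
        total += len(str(value).encode("utf-8"))
    return total
```

Multiplying this per-item estimate by the item count under a hot key gives a first-order view of how close that collection is to the 10 GB cap.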
Data Model Remediation Planning
- Evaluate a key-sharding strategy for unbounded entity growth (example: split `customer#123` into `customer#123#2026Q1`, `...Q2`).
- Validate the migration path from an LSI-dependent access pattern to a GSI/query redesign (example: move historical reads to a time-bucketed GSI to keep the per-key collection bounded).
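The quarter-bucketed key split mentioned above can be sketched as a pure function; the name `time_bucketed_key` and the quarter granularity are illustrative choices, not a prescribed scheme:

```python
from datetime import date

def time_bucketed_key(base_key: str, event_date: date) -> str:
    """Append a year-quarter suffix so each bucket's item collection
    stays bounded (e.g. customer#123 -> customer#123#2026Q1)."""
    quarter = (event_date.month - 1) // 3 + 1
    return f"{base_key}#{event_date.year}Q{quarter}"
```

Bucket granularity (quarter, month, week) should be chosen so that the busiest key stays comfortably under 10 GB per bucket.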
How to Verify the Fix
- Re-run previously failing writes for impacted partition keys and confirm the exception is gone.
- Verify per-key growth stays below LSI item-collection safety thresholds over time.
- Confirm downstream queries still meet latency and correctness expectations after model changes.
How to Prevent Recurrence
- Design partition keys for high cardinality and bounded per-key growth when using LSIs.
- Prefer GSI or sharded-key patterns for workloads with unbounded item growth per logical entity.
- Continuously monitor key skew and item-collection growth signals in write-heavy partitions.
Pro Tip
- Set automated alarms on rising item-collection metrics per hot key and trigger archival/sharding workflows before any key approaches the LSI limit.
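One way to drive such an alarm is to project growth from periodic size samples. This is a minimal sketch under stated assumptions: the helper name `days_until_limit` is hypothetical, and it uses a simple first-to-last slope rather than a proper regression:

```python
def days_until_limit(samples, limit_gb: float = 10.0):
    """Project how many days remain before an item collection hits the cap.

    samples: list of (day_index, size_gb) tuples, oldest first.
    Returns None when the collection is not growing (or only one
    distinct day is available).
    """
    (d0, s0), (d1, s1) = samples[0], samples[-1]
    if d1 == d0 or s1 <= s0:
        return None
    rate = (s1 - s0) / (d1 - d0)      # GB per day
    return (limit_gb - s1) / rate     # days from the latest sample
```

An alarm could fire whenever the projection drops below, say, 30 days, leaving time to shard or archive before writes start failing.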
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.