NotImplemented
AWS S3 NotImplemented (HTTP 501) occurs when a request uses a feature or header that is recognized by the S3 protocol but not implemented by the specific endpoint. It is commonly encountered on S3-compatible storage providers or legacy regional endpoints.
Last reviewed: March 28, 2026 | Editorial standard: source-backed technical guidance
What Does Not Implemented Mean?
NotImplemented is a "feature gap" signal. Unlike a 400 Bad Request, where S3 cannot parse what you sent, a 501 means S3 understands what you want but replies, in effect, "I have not built that yet." This is the primary error code returned by S3-compatible services (MinIO, Ceph) when you attempt advanced AWS operations such as Object Lock, Intelligent-Tiering, or specific checksum algorithms that their API layer does not support.
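In the AWS SDK for JavaScript v3, this surfaces as an error whose `name` is `NotImplemented`, with the HTTP 501 status exposed under `$metadata`. A minimal detector sketch follows; the error object here is a hand-built stand-in shaped like an SDK v3 service exception, not a live S3 response:

```javascript
// Returns true when an SDK-style error represents a 501 NotImplemented
// response. SDK v3 exposes the S3 error code as `name` and the HTTP
// status under `$metadata.httpStatusCode`.
function isNotImplemented(err) {
  return err?.name === "NotImplemented" ||
         err?.$metadata?.httpStatusCode === 501;
}

// Hypothetical error object for illustration:
const fakeErr = {
  name: "NotImplemented",
  message: "A header you provided implies functionality that is not implemented",
  $metadata: { httpStatusCode: 501 },
};

console.log(isNotImplemented(fakeErr)); // true
```

Checking both fields is deliberate: some compatible gateways return a bare 501 without a well-formed S3 error body, so the code name alone is not always present.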
Common Causes
- S3-Compatible Gaps: Using a self-hosted S3 service that lacks advanced features like Object Lock, Lifecycle policies, or Replication.
- Unsupported Transfer Encodings: Sending `Transfer-Encoding: gzip` or other non-standard values that S3 recognizes as HTTP but does not implement for its storage logic.
- SDK Version Mismatch: Upgrading to a modern SDK that sends new headers (e.g., `x-amz-checksum-algorithm`) to an older on-premises S3 gateway.
- Regional Limitations: Attempting to use S3 Express One Zone or specific encryption headers in a region where they are not yet deployed.
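When the target is a self-hosted service, the client configuration itself is worth checking first. Below is a hedged sketch of SDK v3 client options for a hypothetical MinIO-style endpoint; the URL and credentials are placeholders, while `forcePathStyle` is the real SDK v3 option for servers that do not support virtual-hosted bucket addressing:

```javascript
// Build client options for an S3-compatible service
// (shape matches the AWS SDK for JavaScript v3 S3Client constructor).
function compatClientConfig(endpoint) {
  return {
    endpoint,                       // e.g. a self-hosted MinIO gateway (placeholder)
    region: "us-east-1",            // many compatible services ignore this, but the SDK requires it
    forcePathStyle: true,           // path-style addressing; virtual-hosted style is often unsupported
    credentials: {
      accessKeyId: "EXAMPLE_KEY",   // placeholder credentials
      secretAccessKey: "EXAMPLE_SECRET",
    },
  };
}

const config = compatClientConfig("https://minio.internal.example:9000");
// Pass to `new S3Client(config)` once @aws-sdk/client-s3 is installed.
console.log(config.forcePathStyle); // true
```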
How to Fix Not Implemented
1. Check the "Message" Field: S3 usually tells you exactly which header caused the issue (e.g., "A header you provided implies functionality that is not implemented").
2. Fallback to Core APIs: If on a compatible service, stick to core S3 operations (PUT, GET, DELETE, LIST) and avoid bucket-level management APIs.
3. Downgrade Checksum Logic: Disable newer SDK features like `httpChecksumRequired` if the target gateway does not support them.
4. Upgrade the Provider: If using MinIO or Ceph, ensure you are running the latest version to get the most complete S3 API coverage.
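Steps 2 and 3 can be combined into a fallback wrapper: attempt the call with the advanced option, and on NotImplemented retry once with it stripped. This sketch injects the transport so the retry logic stands alone; `send` is a stand-in for `s3.send`, and the parameter names mirror `PutObjectCommand` input:

```javascript
// Try a PutObject with ChecksumAlgorithm; if the provider answers
// NotImplemented, retry once without the checksum field.
async function putWithChecksumFallback(send, params) {
  try {
    return await send(params);
  } catch (err) {
    if (err?.name !== "NotImplemented") throw err;  // unrelated failure: propagate
    const { ChecksumAlgorithm, ...core } = params;  // strip the unsupported field
    return await send(core);
  }
}

// Demo with a fake transport that rejects any request carrying a checksum:
const fakeSend = async (params) => {
  if (params.ChecksumAlgorithm) {
    const err = new Error("not implemented");
    err.name = "NotImplemented";
    throw err;
  }
  return { ok: true, sent: params };
};

putWithChecksumFallback(fakeSend, {
  Bucket: "my-bucket", Key: "test.txt", Body: "hello", ChecksumAlgorithm: "CRC32",
}).then((res) => console.log(res.ok, res.sent.ChecksumAlgorithm)); // true undefined
```

Retrying at most once, and only for this specific error name, keeps the fallback from masking genuine failures such as AccessDenied.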
Step-by-Step Diagnosis for Not Implemented
1. Run the request with `--debug` to see exactly which `x-amz-` headers are being injected by the SDK.
2. Identify the endpoint: is this `s3.amazonaws.com` or a custom URL? Compatible services are the most common source.
3. Compare the operation against the provider compatibility matrix (for example, MinIO's S3 API support).
4. Check for `Transfer-Encoding` or `Content-Encoding` headers that might be confusing the storage gateway.
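Steps 1 and 4 reduce to grepping the debug output for injected headers. The log fragment below is simulated stand-in data for real `aws s3api ... --debug` output:

```shell
# Simulated fragment of `aws s3api put-object ... --debug` output (stand-in data):
debug_log='x-amz-sdk-checksum-algorithm: CRC32
x-amz-checksum-crc32: AAAAAA==
transfer-encoding: chunked'

# List the x-amz-* headers the SDK injected, deduplicated:
printf '%s\n' "$debug_log" | grep -io 'x-amz-[a-z0-9-]*' | sort -u
# x-amz-checksum-crc32
# x-amz-sdk-checksum-algorithm
```

Any header in that list that the gateway's documentation does not mention is a candidate to disable or strip.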
The S3 Compatibility Gap
- AWS S3: Implements the full spec for supported public features.
- MinIO/Ceph: Implement core storage well but may lack complex lifecycle, locking, or replication features.
- Legacy Gateways: May support only older signature versions or a reduced subset of headers.
SDK Injection Warnings
- Modern AWS SDKs often enable newer headers and features by default (for example, checksum-related headers). If a proxy returns 501, these auto-injected features are common suspects.
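Recent AWS SDK for JavaScript v3 releases expose a `requestChecksumCalculation` client option to rein in this injection; treat the option name as version-dependent and confirm it exists in your SDK release before relying on it. A sketch of the relevant client options (the endpoint is a placeholder):

```javascript
// Client options asking the SDK to compute request checksums only when an
// operation strictly requires them, rather than on every upload.
// NOTE: `requestChecksumCalculation` exists only in newer @aws-sdk/client-s3
// releases; older versions silently ignore unknown options.
const clientOptions = {
  endpoint: "https://storage.internal.example", // placeholder on-prem gateway
  forcePathStyle: true,
  requestChecksumCalculation: "WHEN_REQUIRED",  // instead of "WHEN_SUPPORTED"
};

console.log(clientOptions.requestChecksumCalculation); // prints WHEN_REQUIRED
```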
Implementation Examples
```javascript
try {
  await s3.send(new PutObjectCommand({
    Bucket,
    Key,
    Body,
    ChecksumAlgorithm: "CRC32"
  }));
} catch (err) {
  if (err.name === 'NotImplemented') {
    console.warn("Checksum not supported by provider. Retrying without it...");
    // Fallback logic: re-send the command without ChecksumAlgorithm
  }
}
```

```shell
aws s3api put-object --bucket my-bucket --key test.txt --body test.txt --debug
# Inspect the output for "x-amz-..." headers and the 501 status
```

How to Verify the Fix
- Successfully perform a basic `PutObject` without the offending header or feature flag.
- Confirm that the target S3-compatible service's documentation marks the remaining operations as supported.
- Verify that bypassing the advanced feature lets the broader application flow complete successfully.
How to Prevent Recurrence
- Standardize on Core APIs: When building for hybrid clouds, design around the least common denominator of S3 features.
- API Feature Probing: Implement a startup capability check that tries feature-specific operations and records unsupported ones.
- SDK Configuration: Explicitly disable unsupported headers in SDK client options when targeting on-prem or compatible storage.
- Pro-tip: If you must use Object Lock or lifecycle rules on-prem, prefer provider-specific guidance or libraries instead of assuming full AWS parity.
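The feature-probing idea can be sketched as a startup check that attempts one representative call per feature and records which ones the endpoint rejects. The probe table and the fake client below are illustrative stand-ins; with a real SDK each probe would issue a cheap request such as `GetObjectLockConfiguration`:

```javascript
// Probe named feature checks against a client and return the set of
// features the endpoint reports as unimplemented.
async function probeCapabilities(client, probes) {
  const unsupported = new Set();
  for (const [feature, run] of Object.entries(probes)) {
    try {
      await run(client);
    } catch (err) {
      if (err?.name === "NotImplemented") unsupported.add(feature);
      // Other errors (e.g. AccessDenied) still prove the API route exists.
    }
  }
  return unsupported;
}

// Illustrative probe table (operation names are stand-ins):
const probes = {
  objectLock: (c) => c.call("GetObjectLockConfiguration"),
  lifecycle:  (c) => c.call("GetBucketLifecycleConfiguration"),
};

// Fake client that only implements lifecycle:
const fakeClient = {
  call: async (op) => {
    if (op === "GetObjectLockConfiguration") {
      const err = new Error("not implemented");
      err.name = "NotImplemented";
      throw err;
    }
    return {};
  },
};

probeCapabilities(fakeClient, probes)
  .then((u) => console.log([...u])); // [ 'objectLock' ]
```

Running this once at startup and caching the result lets the application disable unsupported code paths up front instead of discovering 501s mid-request.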
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.