RequestTimeTooSkewed
Amazon S3 returns RequestTimeTooSkewed when the caller's clock is too far from S3 server time for the signed request to be accepted. This usually points to clock skew rather than queue delay or presigned URL reuse.
Last reviewed: April 21, 2026 | Source-backed guidance under our editorial policy
What Does Request Time Too Skewed Mean?
This is the clock-skew branch of SigV4 time validation. The request can be otherwise well-formed, but AWS rejects it because the caller time and server time differ too much. Unlike RequestExpired, the main question is not how old the queued request is. It is whether the signer itself believes the wrong current time.
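A quick way to quantify the skew is to diff the signed x-amz-date against the server's reported time. The sketch below uses the timestamps from the log sample later on this page; the 15-minute budget constant is an assumption about the SigV4 validation window, so verify the exact value against AWS documentation.

```python
from datetime import datetime, timezone

# Assumed skew budget: SigV4 time validation is commonly described as
# allowing roughly 15 minutes of clock difference. Treat this as
# provider-defined, not guaranteed.
MAX_SKEW_SECONDS = 15 * 60

def skew_seconds(amz_date: str, server_date: str) -> float:
    """Absolute difference between the signed x-amz-date and server time.

    amz_date uses the SigV4 basic format, e.g. '20260421T085612Z'.
    server_date is an ISO-8601 UTC timestamp, e.g. '2026-04-21T09:04:18Z'.
    """
    signed = datetime.strptime(amz_date, "%Y%m%dT%H%M%SZ").replace(tzinfo=timezone.utc)
    server = datetime.fromisoformat(server_date.replace("Z", "+00:00"))
    return abs((server - signed).total_seconds())

# Timestamps from the failure-log sample on this page:
delta = skew_seconds("20260421T085612Z", "2026-04-21T09:04:18Z")
print(f"skew seconds: {delta:.0f}")  # skew seconds: 486
```

Computing the delta up front keeps the triage factual: a large value on a freshly signed request points directly at the signer clock.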
Common Causes
- Host, VM, container, or node clock drift exceeds the accepted skew window.
- NTP or cloud time synchronization is disabled, unhealthy, or blocked on one part of the fleet.
- Container image or runtime starts signing requests before time sync is complete after boot.
- Time source configuration differs across nodes, causing only one shard or region to skew.
- UTC conversion bugs format the current time incorrectly before signing.
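The last cause is easy to reproduce: a signer that formats a naive local timestamp emits an x-amz-date offset by the host's timezone. A minimal Python illustration, with hypothetical function names:

```python
from datetime import datetime, timezone

def amz_date_buggy() -> str:
    # Bug: datetime.now() is naive local time, so on a UTC+2 host the
    # generated x-amz-date runs two hours ahead of real UTC.
    return datetime.now().strftime("%Y%m%dT%H%M%SZ")

def amz_date_correct() -> str:
    # Fix: derive the SigV4 date from an aware UTC clock.
    return datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
```

On a UTC host both functions agree, which is exactly why this bug tends to surface only on one deployment path.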
How to Fix Request Time Too Skewed
1. Check UTC time on the failing node against a trusted source before rotating credentials or changing IAM policies.
2. Restore NTP or cloud time synchronization and block request signing until the clock is healthy.
3. Compare a healthy node and a failing node side by side to confirm skew is isolated to one signer path.
4. Re-sign and resend the request only after the caller clock is corrected.
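The "block signing until the clock is healthy" step can be automated by parsing the time-sync daemon's own health report. A hedged sketch that reads chronyc tracking output; the field wording can vary across chrony versions, and the one-second offset budget is an assumption:

```python
import re

# Matches the "System time" line of `chronyc tracking`, e.g.
# "System time     : 0.000133 seconds slow of NTP time"
OFFSET_RE = re.compile(r"System time\s*:\s*([\d.]+) seconds (slow|fast)")

def clock_is_healthy(tracking_output: str, budget_seconds: float = 1.0) -> bool:
    m = OFFSET_RE.search(tracking_output)
    if not m:
        return False  # unknown state: treat as unhealthy and refuse to sign
    offset = float(m.group(1))
    return offset <= budget_seconds
```

A signer wrapper could call this before each signing session and fail closed, which is safer than letting a drifted node keep producing invalid signatures.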
Step-by-Step Diagnosis for Request Time Too Skewed
1. Capture x-amz-date, local UTC time, the AWS response timestamp, host identity, and region from the failing request path.
2. Compare signer time against a trusted NTP or cloud time service on every node that can sign requests.
3. Check whether only one runtime cohort, AZ, or autoscaled node group is drifting.
4. Confirm the request succeeds immediately after time sync is restored, without changing credentials.
5. Differentiate this from RequestExpired by checking whether newly signed requests also fail instantly.
Seen in Production
- A new autoscaled worker group starts serving traffic before chronyd reaches sync, and every S3 call from that pool fails with RequestTimeTooSkewed.
- One Kubernetes node loses host time sync after suspend or hypervisor drift, so only pods scheduled there produce skew-related auth failures.
- A container signs requests using local-time formatting on one deployment path, creating future-dated x-amz-date values that S3 rejects immediately.
- An isolated VPC environment blocks outbound NTP, and long-lived instances slowly drift until S3 starts returning RequestTimeTooSkewed.
Clock Source and Drift Audit
- Check NTP or cloud time sync state on the signer itself (example: system clock is several minutes behind on one node while peers stay accurate).
- Compare wall-clock and UTC formatting behavior in the signer (example: local timezone offset leaks into SigV4 date generation).
Decision Shortcut: Clock Skew vs Aged Request
- If a newly signed request fails immediately from one node, prioritize caller clock drift before analyzing queues or presigned URL reuse.
- If the same runtime succeeds after instant re-sign on a healthy clock, stay on RequestTimeTooSkewed rather than RequestExpired.
- If failures only appear after queueing or resume/retry delays, move to RequestExpired instead of treating this as pure clock skew.
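The shortcut above can be written down as a tiny triage helper. All names and return labels here are illustrative, not an AWS API:

```python
def classify_time_error(fresh_resign_fails: bool, fails_only_after_delay: bool) -> str:
    # Hypothetical triage helper mirroring the decision shortcut.
    if fresh_resign_fails:
        # A request signed and sent immediately still fails: signer clock drift.
        return "RequestTimeTooSkewed"
    if fails_only_after_delay:
        # Only aged or queued requests fail: lifetime, not clock, is the issue.
        return "RequestExpired"
    return "investigate-other-causes"
```

Encoding the branch this way is mostly useful as runbook documentation: it forces responders to answer the two observable questions before picking a fix.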
Wrong Fix to Avoid
- Do not rotate credentials when the request timestamp itself is wrong.
- Do not only extend presigned URL TTL if a freshly signed request from the same node already fails.
- Do not let unsynchronized nodes rejoin service just because application health checks pass.
Implementation Examples

Example failure log from a skewed signer:

2026-04-21T09:04:18Z host=worker-a3 region=eu-central-1
x-amz-date=20260421T085612Z localUtc=2026-04-21T08:56:12Z serverUtc=2026-04-21T09:04:18Z
error=RequestTimeTooSkewed message="The difference between the request time and the current time is too large."

Diagnosis commands:

date -u
timedatectl status | rg 'System clock synchronized|NTP service'
chronyc tracking
aws s3 ls --debug 2>&1 | rg 'X-Amz-Date|RequestTimeTooSkewed'
ssh healthy-node 'date -u'
ssh failing-node 'date -u'

Incident Timeline
09:02 UTC
A signer node drifts away from trusted time
Signal: NTP fails, boot-time sync is incomplete, or one runtime formats UTC incorrectly before signing.
Why it matters: The first useful question is whether the signer knows the current time, not whether the request body or permissions changed.
09:04 UTC
Freshly signed requests begin failing immediately
Signal: Even requests sent right after signing are rejected because x-amz-date is already too far from server time.
Why it matters: That distinguishes clock skew from delayed dispatch or expired URLs.
09:08 UTC
Retries keep producing invalid timestamps
Signal: The same unhealthy signer re-signs each retry with its own skewed clock, so every attempt fails the same way.
Why it matters: Retry loops do not recover this incident until caller time is fixed.
09:14 UTC
Time sync is restored and new signatures succeed
Signal: The node returns to trusted UTC, requests are signed again, and failures clear without IAM or code changes.
Why it matters: That confirms the root cause was signer time health, not authorization or payload logic.
Seen in Production
Autoscaled nodes serve traffic before NTP reaches sync
Frequency: common
Example: Fresh nodes immediately produce RequestTimeTooSkewed on S3 operations during scale-out.
Fix: Add readiness gates on time-sync health and suppress traffic until the signer clock is trusted.
One node pool drifts because outbound NTP is blocked
Frequency: medium
Example: Only workloads in a restricted subnet start failing while the rest of the fleet stays healthy.
Fix: Restore time-source reachability or use the cloud provider time-sync service consistently.
Custom signer formats local time instead of UTC
Frequency: rare
Example: One deployment path emits future or past x-amz-date values because timezone conversion leaked into signing.
Fix: Normalize all signing timestamps to UTC and add contract tests around SigV4 date formatting.
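The "contract tests around SigV4 date formatting" fix can be as small as the following sketch; make_amz_date is a hypothetical helper standing in for your signer's date formatter:

```python
import re
from datetime import datetime, timedelta, timezone

AMZ_DATE_RE = re.compile(r"^\d{8}T\d{6}Z$")

def make_amz_date(now: datetime) -> str:
    # Fold any aware timestamp back to UTC before formatting the SigV4 date.
    return now.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

def test_amz_date_contract():
    # A UTC+2 wall-clock time must fold back to UTC in the signed date.
    local = datetime(2026, 4, 21, 10, 56, 12, tzinfo=timezone(timedelta(hours=2)))
    stamp = make_amz_date(local)
    assert AMZ_DATE_RE.match(stamp)
    assert stamp == "20260421T085612Z"

test_amz_date_contract()
```

Running a test like this in CI on every signing wrapper catches the local-time leak before it ever reaches a deployment path.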
Wrong Fix vs Better Fix
Rotate keys vs fix the clock
Wrong fix: Rotate access keys or STS sessions because the error appears auth-related.
Better fix: Prove the signer clock is accurate before changing any credential material.
Why this is better: Fresh credentials do not help if every new request is signed with the wrong current time.
Increase URL lifetime vs isolate skewed signers
Wrong fix: Increase presigned URL TTL or retry count without checking host time health.
Better fix: Remove skewed nodes from traffic, restore NTP, and only then re-sign requests.
Why this is better: This error is about the signer’s notion of current time, not just the allowed lifetime window.
Restart blindly vs gate on time sync
Wrong fix: Restart pods or workers and hope they recover naturally.
Better fix: Gate startup and readiness on successful time synchronization so skewed nodes never sign production traffic.
Why this is better: A restart can temporarily mask the issue, but it does not enforce ongoing signer time correctness.
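One way to express the "gate on time sync" fix in Kubernetes is an exec readiness probe. This is a hedged config sketch that assumes the container can query the host's timedatectl; adapt the command to whatever time-sync agent you actually run:

```yaml
# Sketch only: keeps a pod out of rotation until the clock reports as
# synchronized. Probe timings are illustrative, not recommendations.
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - '[ "$(timedatectl show -p NTPSynchronized --value)" = "yes" ]'
  periodSeconds: 15
  failureThreshold: 3
```

Because readiness is re-evaluated continuously, a node that later loses sync is also pulled from traffic, which a one-time startup check would miss.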
Debugging Tools
- NTP / Chrony / timedatectl health checks
- AWS CLI --debug
- Signer telemetry with host ID and x-amz-date capture
- Autoscaling and readiness-gate event logs
How to Verify the Fix
- Sign and send a fresh request from the previously failing node and confirm RequestTimeTooSkewed is cleared.
- Verify NTP or cloud time sync status stays healthy on every signer node.
- Compare signer clocks across the fleet and confirm skew remains within the accepted budget.
- Confirm the issue does not reappear during autoscaling, reboot, or failover scenarios.
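Fleet-wide skew comparison reduces to a small helper once per-node offsets are collected; the five-second budget and node names below are illustrative, and the offsets are assumed to come from chronyc, an NTP probe, or cloud time-sync telemetry:

```python
def skewed_nodes(offsets: dict[str, float], budget_seconds: float = 5.0) -> list[str]:
    # Flag any node whose absolute offset (seconds vs a trusted reference)
    # exceeds the skew budget.
    return sorted(node for node, off in offsets.items() if abs(off) > budget_seconds)

fleet = {"worker-a1": 0.02, "worker-a2": -0.4, "worker-a3": 486.0}
print(skewed_nodes(fleet))  # ['worker-a3']
```

Running this check on a schedule, rather than only during incidents, turns the verification step into an ongoing guardrail.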
How to Prevent Recurrence
- Make time synchronization a hard readiness prerequisite for any workload that signs AWS requests.
- Alert on clock skew and unsynchronized NTP state per node, not just aggregate fleet health.
- Standardize signer time libraries and UTC formatting tests across all SDK wrappers and custom signing helpers.
- Keep one trusted time source policy per environment instead of mixing host, container, and custom application clocks.
Pro Tip
- Emit signer host ID and x-amz-date in auth debug telemetry so skewed cohorts are obvious during incidents.
Provider Context
This guidance is specific to AWS services. Always validate implementation details against official provider documentation before deploying to production.