# HTTP 429 - Too Many Requests

A 429 Too Many Requests response means the client has exceeded a rate limit: too many API calls in a time window, an exhausted API quota, or throttling triggered by DDoS protection. It is a client-side (4xx) error returned when servers enforce rate limiting to prevent abuse and ensure fair resource usage. It appears most often when clients make rapid-fire requests without throttling, but also when API quotas are reached, retry loops create request storms, or traffic spikes trigger automatic rate limiting.
## Common Causes
- **Frontend:** Rapid-fire API calls without throttling (loops, event handlers firing repeatedly). Retry logic creates request storms when exponential backoff is not implemented. Multiple tabs/windows make simultaneous requests. Polling intervals are too aggressive. Request queuing or debouncing is missing.
- **Backend:** Rate limiting middleware enforces per-IP or per-user limits. An API quota system tracks usage and blocks requests when the quota is exceeded. DDoS protection triggers automatic throttling. Business logic enforces custom rate limits. Database connection pool exhaustion causes throttling.
- **Infrastructure:** Nginx rate limiting (the `limit_req` module) blocks requests. A load balancer enforces global rate limits. A WAF triggers rate limiting on suspicious patterns. CDN rate limiting protects origin servers. An API gateway enforces tier-based quotas.
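The frontend causes above mention missing debouncing and throttling. As a point of reference, here is a minimal sketch of both helpers (the function names and the `/api/search` endpoint are illustrative, not from any particular library):

```javascript
// Debounce: collapse a burst of calls into a single call after `wait` ms of quiet.
// Good for "fire after the user stops typing" handlers.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Throttle (leading edge): allow at most one call per `interval` ms.
// Good for scroll/resize handlers and polling guards.
function throttle(fn, interval) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= interval) {
      last = now;
      return fn.apply(this, args);
    }
  };
}

// Example: a search handler that hits the API at most once per second.
const search = throttle((q) => fetch(`/api/search?q=${encodeURIComponent(q)}`), 1000);
```

Either helper keeps an event handler that fires dozens of times per second from translating directly into dozens of requests per second.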
## Solutions
1. **Diagnose (client):** Check the DevTools Network tab and count requests per second. Look for a `Retry-After` header in the 429 response. Review whether multiple components are making duplicate requests. Check request timing patterns.
2. **Diagnose (server):** Server logs show the rate limit configuration and current usage. Review rate limiting middleware settings. Check API quota usage in the provider dashboard. Examine which endpoint or IP triggered the limit.
3. **Fix (client-side):** Implement exponential backoff with jitter for retries. Use request queuing to serialize rapid requests. Add debouncing/throttling to event handlers. Respect `Retry-After` header values. Reduce polling frequency.
4. **Fix (server-side):** Return a `Retry-After` header with the wait time. Implement sliding window or token bucket rate limiting. Provide rate limit headers (`X-RateLimit-Limit`, `X-RateLimit-Remaining`). Log rate limit hits for monitoring.
5. **Fix (infrastructure):** Adjust Nginx `limit_req` zone sizes and burst values. Review load balancer rate limit settings. Configure WAF rate limiting thresholds. Set appropriate API gateway quotas per tier.
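Step 4 mentions token bucket rate limiting. Here is a minimal in-process sketch of the idea (the `TokenBucket` class and `allow` helper are illustrative, not tied to any library; production systems usually keep this state in a shared store such as Redis):

```javascript
// Token bucket: holds at most `capacity` tokens, refilled continuously at
// `refillPerSec` tokens per second. Each request consumes one token; a
// request that finds no token is rejected (the caller would respond 429).
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryConsume() {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }

  // Seconds until the next token is available (usable as a Retry-After value).
  secondsUntilNextToken() {
    if (this.tokens >= 1) return 0;
    return Math.ceil((1 - this.tokens) / this.refillPerSec);
  }
}

// One bucket per client key (e.g. per IP): burst of 10, 2 requests/sec sustained.
const buckets = new Map();
function allow(clientKey) {
  if (!buckets.has(clientKey)) buckets.set(clientKey, new TokenBucket(10, 2));
  return buckets.get(clientKey).tryConsume();
}
```

The capacity sets how large a burst is tolerated, while the refill rate sets the sustained throughput; this is why a client can make a handful of rapid requests successfully and only then start seeing 429s.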
## Code Examples
### Fetch API: Exponential Backoff with Retry-After
```javascript
// Client-side: handle 429 with exponential backoff and Retry-After
async function fetchWithRateLimitHandling(url, options = {}, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Prefer Retry-After (in seconds; it may also be an HTTP date, which
      // this sketch does not handle), then X-RateLimit-Reset, then fall back
      // to exponential backoff.
      const retryAfter = response.headers.get('Retry-After');
      const rateLimitReset = response.headers.get('X-RateLimit-Reset');

      let delay;
      if (retryAfter && !Number.isNaN(parseInt(retryAfter, 10))) {
        delay = parseInt(retryAfter, 10) * 1000; // Convert seconds to milliseconds
      } else if (rateLimitReset) {
        // Assumes a Unix timestamp in seconds
        delay = Math.max(0, parseInt(rateLimitReset, 10) * 1000 - Date.now());
      } else {
        // Exponential backoff with jitter
        delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      }

      if (attempt < maxRetries - 1) {
        console.log(`Rate limited, retrying in ${delay}ms (attempt ${attempt + 1}/${maxRetries})`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw new Error('Rate limit exceeded after maximum retries');
    }

    return response;
  }
}

// Request queue to prevent rate limit storms
class RequestQueue {
  constructor(maxConcurrent = 3, minDelay = 100) {
    this.queue = [];
    this.processing = 0;
    this.maxConcurrent = maxConcurrent;
    this.minDelay = minDelay;
    this.lastRequestTime = 0;
  }

  async add(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  process() {
    if (this.processing >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    // Reserve a slot and take the job immediately, so overlapping calls to
    // process() cannot exceed maxConcurrent while a timer is still pending.
    this.processing++;
    const { fn, resolve, reject } = this.queue.shift();

    const delay = Math.max(0, this.minDelay - (Date.now() - this.lastRequestTime));
    this.lastRequestTime = Date.now() + delay;

    setTimeout(async () => {
      try {
        resolve(await fn());
      } catch (error) {
        reject(error);
      } finally {
        this.processing--;
        this.process();
      }
    }, delay);
  }
}

// Usage
const queue = new RequestQueue(3, 200); // Max 3 concurrent, 200ms between requests
queue.add(() => fetchWithRateLimitHandling('/api/endpoint'));
```

### Express.js: Rate Limiting Middleware
```javascript
// Server-side: implement rate limiting with Retry-After (express-rate-limit)
const express = require('express');
const rateLimit = require('express-rate-limit');
const app = express();

// Per-IP rate limiting
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per windowMs
  message: 'Too many requests from this IP, please try again later.',
  standardHeaders: true, // Return rate limit info in `RateLimit-*` headers
  legacyHeaders: false, // Disable automatic `X-RateLimit-*` headers (set manually below)
  handler: (req, res) => {
    // express-rate-limit exposes resetTime as a Date
    const resetTime = req.rateLimit.resetTime ?? new Date(Date.now() + 15 * 60 * 1000);
    const retryAfter = Math.max(1, Math.ceil((resetTime.getTime() - Date.now()) / 1000));

    res.status(429)
      .set('Retry-After', retryAfter.toString())
      .set('X-RateLimit-Limit', req.rateLimit.limit.toString())
      .set('X-RateLimit-Remaining', req.rateLimit.remaining.toString())
      .set('X-RateLimit-Reset', resetTime.toISOString())
      .json({
        error: 'Too Many Requests',
        message: 'Rate limit exceeded',
        retryAfter,
        resetTime: resetTime.toISOString(),
      });
  },
});

// Per-user rate limiting (requires authentication)
const userLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 10,
  keyGenerator: (req) => req.user?.id || req.ip, // Use user ID if authenticated
  skip: (req) => !req.user, // Skip if not authenticated
});

// Apply rate limiting
app.use('/api/', apiLimiter);
app.post('/api/users', userLimiter, (req, res) => {
  res.json({ success: true });
});
```

### Nginx: Rate Limiting Configuration
```nginx
# Nginx: configure rate limiting with a custom 429 response
http {
    # Define rate limit zones (keyed by client IP)
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;

    server {
        listen 80;
        server_name api.example.com;

        # General API rate limiting
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;

            # Custom 429 response with Retry-After
            error_page 429 @rate_limit;

            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        # Stricter rate limiting for login
        location /api/auth/login {
            limit_req zone=login_limit burst=3 nodelay;
            limit_req_status 429;
            error_page 429 @rate_limit;

            proxy_pass http://backend;
        }

        # Custom 429 handler with Retry-After header
        # (default_type already sets Content-Type for the returned body)
        location @rate_limit {
            default_type application/json;
            add_header Retry-After 60 always;
            return 429 '{"error":"Too Many Requests","message":"Rate limit exceeded. Please try again later."}';
        }
    }
}
```