Rate Limits

The PageGun API implements rate limiting to ensure fair usage and maintain service quality for all users. This guide covers the limits for each plan, the rate limit headers, and best practices for handling rate limits in your application.

Overview

Rate limits are enforced per API key and reset at the beginning of each time window. Limits apply to all API endpoints unless specified otherwise.

Current Limits by Plan

Plan         Requests/Minute   Requests/Hour   Burst Limit   Monthly Requests
Free         60                1,000           10            10,000
Pro          300               10,000          50            100,000
Business     1,000             50,000          100           500,000
Enterprise   Custom            Custom          Custom        Custom

Limit Definitions

  • Requests/Minute: Rolling window of 60 seconds
  • Requests/Hour: Rolling window of 3600 seconds
  • Burst Limit: Maximum requests in a 10-second window
  • Monthly Requests: Total requests per billing cycle
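
A rolling window means the limit is evaluated against the trailing 60 (or 3600) seconds at the moment of each request, not against fixed clock boundaries. A minimal sketch of that bookkeeping (a hypothetical helper, not part of any PageGun SDK):

```javascript
// Sketch: a rolling-window counter. Timestamps older than the window
// are dropped before each check, so the limit always applies to the
// trailing windowMs milliseconds.
class RollingWindowCounter {
  constructor(limit, windowMs) {
    this.limit = limit;       // e.g. 300 for the Pro per-minute limit
    this.windowMs = windowMs; // e.g. 60000 for a one-minute window
    this.timestamps = [];
  }

  // Returns true and records the request if it fits in the window.
  tryAcquire(now = Date.now()) {
    const cutoff = now - this.windowMs;
    this.timestamps = this.timestamps.filter(t => t > cutoff);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Because the window rolls, capacity frees up gradually as old requests age out rather than all at once at the top of the minute.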

Rate Limit Headers

Every API response includes rate limit information in the headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 245
X-RateLimit-Reset: 1640995200
X-RateLimit-Burst-Limit: 50
X-RateLimit-Burst-Remaining: 35
X-RateLimit-Retry-After: 45

Header Descriptions

Header                         Description
X-RateLimit-Limit              Maximum requests allowed per minute
X-RateLimit-Remaining          Requests remaining in the current minute window
X-RateLimit-Reset              Unix timestamp when the current window resets
X-RateLimit-Burst-Limit        Maximum requests allowed in a 10-second burst
X-RateLimit-Burst-Remaining    Burst requests remaining in the current 10-second window
X-RateLimit-Retry-After        Seconds to wait before retrying (only when rate limited)
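
The headers above can be collected into a plain object in one place so the rest of your code never touches raw header strings. A small sketch (the helper name is ours; `headers` is anything with a `get()` method, such as fetch's `Headers`):

```javascript
// Sketch: read the rate limit headers into a typed object.
// Missing headers (e.g. Retry-After on a 200 response) come back as null.
const readRateLimitHeaders = (headers) => {
  const num = (name) => {
    const value = headers.get(name);
    return value == null ? null : parseInt(value, 10);
  };
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    reset: num('X-RateLimit-Reset'),             // Unix timestamp (seconds)
    burstLimit: num('X-RateLimit-Burst-Limit'),
    burstRemaining: num('X-RateLimit-Burst-Remaining'),
    retryAfter: num('X-RateLimit-Retry-After'),  // null unless rate limited
  };
};
```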

Rate Limit Response

When you exceed the rate limit, the API returns HTTP 429 Too Many Requests:

{
  "statusCode": 429,
  "name": "rate_limit_exceeded",
  "message": "Too many requests. Please try again later.",
  "retry_after": 45,
  "limit": 300,
  "window": "minute"
}

The response includes:

  • retry_after: Seconds to wait before next request
  • limit: Your current rate limit
  • window: Time window for the limit
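
A minimal sketch of turning that body into a wait time (helper name and fallback value are ours; the field name `retry_after` is from the list above):

```javascript
// Sketch: derive how long to wait before retrying from a 429 body.
// Falls back to 60 seconds if retry_after is missing or malformed.
const waitFrom429 = (body, fallbackSeconds = 60) => {
  const seconds = Number.isFinite(body.retry_after)
    ? body.retry_after
    : fallbackSeconds;
  return seconds * 1000; // milliseconds, ready for setTimeout
};
```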

Endpoint-Specific Limits

Some endpoints have special rate limits:

Content Creation Endpoints

Write-heavy content endpoints have their own per-endpoint limits:

Endpoint          Free     Pro       Business   Enterprise
POST /pages       20/min   100/min   300/min    Custom
PUT /pages/:id    30/min   150/min   450/min    Custom
POST /projects    5/min    25/min    100/min    Custom

Public Rendering Endpoints

More generous limits for public-facing content:

Endpoint         All Plans
GET /render/*    1,000/min
GET /public/*    1,000/min

Best Practices

1. Respect Rate Limit Headers

Always check the rate limit headers and implement proper backoff:

// Thrown on 429 so callers can inspect retryAfter.
class RateLimitError extends Error {
  constructor(message, retryAfter) {
    super(message);
    this.retryAfter = retryAfter;
  }
}

const makeAPIRequest = async (url, options) => {
  const response = await fetch(url, options);

  // Check remaining requests
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const resetTime = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

  if (remaining < 10) {
    const waitTime = (resetTime * 1000) - Date.now();
    console.warn(`Rate limit low (${remaining} remaining), reset in ${waitTime}ms`);
  }

  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('X-RateLimit-Retry-After'), 10);
    throw new RateLimitError(`Rate limited. Retry after ${retryAfter}s`, retryAfter);
  }

  return response;
};

2. Implement Exponential Backoff

Handle rate limiting gracefully with exponential backoff:

class APIClient {
  async requestWithRetry(url, options, maxRetries = 3) {
    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        const response = await fetch(url, options);

        if (response.status === 429) {
          const retryAfter = parseInt(response.headers.get('X-RateLimit-Retry-After'), 10) || 60;
          const backoffDelay = Math.min(retryAfter * 1000, Math.pow(2, attempt) * 1000);
          console.log(`Rate limited. Waiting ${backoffDelay}ms before retry ${attempt}/${maxRetries}`);
          await this.delay(backoffDelay);
          continue;
        }

        return response;
      } catch (error) {
        if (attempt === maxRetries) throw error;
        await this.delay(Math.pow(2, attempt) * 1000);
      }
    }

    // Every attempt was rate limited
    throw new Error(`Still rate limited after ${maxRetries} retries`);
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

3. Cache Responses

Reduce API calls by caching responses:

class CachedAPIClient {
  constructor() {
    this.cache = new Map();
    this.cacheTTL = 5 * 60 * 1000; // 5 minutes
  }

  async get(url, options = {}) {
    const cacheKey = `${url}${JSON.stringify(options)}`;
    const cached = this.cache.get(cacheKey);

    // Return cached response if still valid
    if (cached && Date.now() - cached.timestamp < this.cacheTTL) {
      return cached.data;
    }

    // Make API request
    const response = await this.makeRequest(url, options);
    const data = await response.json();

    // Cache the response
    this.cache.set(cacheKey, { data, timestamp: Date.now() });

    return data;
  }

  makeRequest(url, options) {
    return fetch(url, options);
  }
}

4. Batch Operations

Combine multiple operations when possible:

// Instead of multiple individual requests
const individualPages = await Promise.all([
  api.get('/pages/page_1'),
  api.get('/pages/page_2'),
  api.get('/pages/page_3')
]); // 3 API calls

// Use the list endpoint with filtering
const pages = await api.get('/pages', {
  project_id: 'proj_123',
  id: 'page_1,page_2,page_3'
}); // 1 API call

5. Monitor Usage

Track your API usage to stay within limits:

class RateLimitMonitor {
  constructor(limits = { minute: 300, hour: 10000 }) { // defaults match the Pro plan
    this.limits = limits;
    this.requestCounts = {
      minute: { count: 0, resetTime: Date.now() + 60000 },
      hour: { count: 0, resetTime: Date.now() + 3600000 }
    };
  }

  beforeRequest() {
    this.updateCounts();

    if (this.requestCounts.minute.count >= this.limits.minute) {
      throw new Error('Minute rate limit would be exceeded');
    }
    if (this.requestCounts.hour.count >= this.limits.hour) {
      throw new Error('Hour rate limit would be exceeded');
    }

    this.requestCounts.minute.count++;
    this.requestCounts.hour.count++;
  }

  updateCounts() {
    const now = Date.now();
    if (now >= this.requestCounts.minute.resetTime) {
      this.requestCounts.minute = { count: 0, resetTime: now + 60000 };
    }
    if (now >= this.requestCounts.hour.resetTime) {
      this.requestCounts.hour = { count: 0, resetTime: now + 3600000 };
    }
  }
}

Rate Limiting Strategies

For High-Volume Applications

  1. Distribute Load: Use multiple API keys if you have multiple projects
  2. Queue Requests: Implement a request queue to smooth out bursts
  3. Prioritize Critical Operations: Handle important requests first
  4. Use Webhooks: Reduce polling by using webhook notifications
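
If you distribute load across multiple API keys (point 1 above), a simple round-robin rotation keeps any single key from absorbing the whole load. A minimal sketch (the class and key values are hypothetical):

```javascript
// Sketch: rotate requests across several API keys round-robin,
// so each key's independent rate limit is consumed evenly.
class KeyRotator {
  constructor(apiKeys) {
    this.apiKeys = apiKeys; // e.g. one key per project
    this.index = 0;
  }

  // Returns the next key in rotation.
  next() {
    const key = this.apiKeys[this.index];
    this.index = (this.index + 1) % this.apiKeys.length;
    return key;
  }
}
```

Remember that keys should map to distinct projects; rotation is a load-smoothing aid, not a way around per-account limits.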

For Real-Time Applications

class RealTimeAPIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.requestQueue = [];
    this.isProcessingQueue = false;
    this.rateLimitDelay = 0;
  }

  async queueRequest(url, options) {
    return new Promise((resolve, reject) => {
      this.requestQueue.push({ url, options, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.isProcessingQueue) return;
    this.isProcessingQueue = true;

    while (this.requestQueue.length > 0) {
      const { url, options, resolve, reject } = this.requestQueue.shift();

      try {
        // Wait if we're rate limited
        if (this.rateLimitDelay > 0) {
          await this.delay(this.rateLimitDelay);
          this.rateLimitDelay = 0;
        }

        const response = await fetch(url, options);

        if (response.status === 429) {
          this.rateLimitDelay = parseInt(response.headers.get('X-RateLimit-Retry-After'), 10) * 1000;
          // Re-queue the request
          this.requestQueue.unshift({ url, options, resolve, reject });
          continue;
        }

        resolve(response);
      } catch (error) {
        reject(error);
      }
    }

    this.isProcessingQueue = false;
  }

  delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}

Monitoring and Alerts

Track Usage Programmatically

const trackUsage = (response) => {
  const usage = {
    timestamp: Date.now(),
    minute_remaining: parseInt(response.headers.get('X-RateLimit-Remaining'), 10),
    minute_limit: parseInt(response.headers.get('X-RateLimit-Limit'), 10),
    reset_time: parseInt(response.headers.get('X-RateLimit-Reset'), 10)
  };

  // Log to monitoring service
  console.log('Rate limit usage:', usage);

  // Alert if usage is high
  const usagePercent = ((usage.minute_limit - usage.minute_remaining) / usage.minute_limit) * 100;
  if (usagePercent > 80) {
    console.warn('High API usage detected:', usagePercent + '%');
  }
};

Set Up Alerts

Monitor your application for rate limiting:

const setupRateLimitAlerts = () => {
  // Monitor for 429 responses (emit this event from your request layer)
  process.on('rateLimitExceeded', (error) => {
    // Send alert to monitoring service
    console.error('Rate limit exceeded:', error);

    // Could integrate with Slack, email, etc.
    notifyTeam('API rate limit exceeded', error); // notifyTeam: your alerting hook
  });

  // Monitor API usage patterns
  setInterval(() => {
    const stats = getAPIStats(); // getAPIStats: your metrics source
    if (stats.errorRate > 0.1) {
      console.warn('High API error rate detected');
    }
  }, 60000); // Check every minute
};

Plan Upgrade Considerations

When to Upgrade

Consider upgrading your plan if:

  • ✅ Regularly hitting rate limits
  • ✅ Need higher burst capacity
  • ✅ Building production applications
  • ✅ Expecting traffic growth

Free to Pro Upgrade Benefits

  • 5x higher rate limits (60 → 300 requests/minute)
  • 10x higher hourly limits (1,000 → 10,000 requests/hour)
  • 5x higher burst limits (10 → 50 requests/10s)
  • Priority support for rate limit issues

Enterprise Custom Limits

Enterprise plans offer:

  • Custom rate limits based on your needs
  • Dedicated capacity for consistent performance
  • SLA guarantees on API availability
  • Direct support for optimization

FAQ

What happens if I exceed my monthly limit?

Requests will be rejected with HTTP 429 until your next billing cycle. Consider upgrading to a higher plan or implementing more aggressive caching.

Do rate limits reset immediately at billing renewal?

No, rate limits use rolling windows:

  • Minute limits: Reset every 60 seconds
  • Hour limits: Reset every 3600 seconds
  • Monthly limits: Reset on billing date

Can I get temporary limit increases?

Yes, contact support for temporary increases during expected high-traffic events. This is typically available for Pro and Business plans.

Do different endpoints share the same rate limit?

Most endpoints share the global rate limit, but some endpoints (like public rendering) have separate, higher limits.

How do rate limits work with multiple API keys?

Each API key has independent rate limits. You can use multiple API keys to scale your application, but ensure they're associated with appropriate projects.

Status and Monitoring

Monitor current API status and any rate limiting issues at status.pagegun.com.

For rate limit optimization assistance, contact support with your usage patterns and we'll help optimize your implementation.

© 2026 PageGun. All rights reserved.