# Rate Limits and Fair Use
Orsunpay implements rate limiting to ensure fair resource usage and maintain optimal performance for all users.
## Rate Limit Overview

### Standard Limits
| Environment | Requests per Minute | Burst Limit | Concurrent Requests |
|---|---|---|---|
| Sandbox | 1,000 | 100/second | 50 |
| Production | 5,000 | 500/second | 200 |
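The per-minute limit and the burst limit work together: the burst ceiling caps short spikes while the per-minute budget caps sustained volume. One way to stay under both from the client side is a token bucket. The sketch below is a hypothetical client-side throttle, not part of the Orsunpay SDK, and the server's exact algorithm is not documented here:

```javascript
// Client-side token bucket mirroring the documented production limits.
// Capacity models the burst ceiling; the refill rate models the
// sustained per-minute budget. Hypothetical helper, not SDK code.
class TokenBucket {
  constructor(ratePerMinute, burstPerSecond) {
    this.capacity = burstPerSecond;           // max tokens available at once
    this.tokens = burstPerSecond;             // start with a full bucket
    this.refillPerMs = ratePerMinute / 60000; // steady refill rate
    this.lastRefill = Date.now();
  }

  tryConsume() {
    // Refill based on elapsed time, capped at the burst ceiling
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.lastRefill) * this.refillPerMs
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // safe to send the request now
    }
    return false;  // hold the request until tokens refill
  }
}

// Production tier: 5,000 requests/minute with a 500/second burst ceiling
const bucket = new TokenBucket(5000, 500);
```

Check `tryConsume()` before each request and delay when it returns `false`.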
### Endpoint-Specific Limits
| Endpoint Category | Rate Limit | Reset Window |
|---|---|---|
| Authentication | 100/hour | 1 hour |
| Transaction Creation | As per plan | 1 minute |
| Transaction Retrieval | 2x standard limit | 1 minute |
| List Operations | Standard limit | 1 minute |
| Webhook Testing | 50/hour | 1 hour |
## Rate Limit Headers

All API responses include rate-limiting information in headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 4999
X-RateLimit-Reset: 1609459200
X-RateLimit-Window: 60
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the current window |
| `X-RateLimit-Remaining` | Remaining requests in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the rate limit resets |
| `X-RateLimit-Window` | Rate limit window in seconds |
## Rate Limit Exceeded Response
When you exceed rate limits, you’ll receive a 429 status code:
```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please retry after some time.",
    "details": {
      "limit": 5000,
      "window": "1 minute",
      "retryAfter": 60
    }
  }
}
```
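When calling the API with `fetch`, you can detect this response and extract the wait time before retrying. The sketch below assumes the error body shown above; `callWithRetryAfter` is an illustrative helper, not an SDK function:

```javascript
// Detect a 429 response and extract the recommended wait time.
// Assumes the rate_limit_exceeded error body documented above.
async function callWithRetryAfter(url, options) {
  const response = await fetch(url, options);
  if (response.status === 429) {
    const body = await response.json();
    // Prefer the Retry-After header; fall back to the error body
    const retryAfter =
      parseInt(response.headers.get('retry-after'), 10) ||
      body.error.details.retryAfter;
    return { rateLimited: true, retryAfter };
  }
  return { rateLimited: false, response };
}
```

Feed the returned `retryAfter` into the backoff strategy shown below rather than retrying immediately.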
## Best Practices
### 1. Implement Exponential Backoff

```javascript
async function apiCallWithBackoff(apiCall, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await apiCall();
    } catch (error) {
      // Only retry rate-limit errors, and only while attempts remain
      if (error.status !== 429 || attempt === maxRetries - 1) {
        throw error;
      }
      // Honor the server's Retry-After, but never wait less than
      // the exponential step for this attempt
      const retryAfter = parseInt(error.headers['retry-after'], 10) || 60;
      const backoffDelay = Math.max(retryAfter * 1000, Math.pow(2, attempt) * 1000);
      console.log(`Rate limited. Retrying after ${backoffDelay}ms`);
      await new Promise(resolve => setTimeout(resolve, backoffDelay));
    }
  }
}
```
### 2. Monitor Rate Limit Headers

```javascript
function checkRateLimit(response) {
  // fetch responses expose headers via Headers.get()
  const remaining = parseInt(response.headers.get('x-ratelimit-remaining'), 10);
  const limit = parseInt(response.headers.get('x-ratelimit-limit'), 10);
  const resetTime = parseInt(response.headers.get('x-ratelimit-reset'), 10);

  // Warn when fewer than 10% of requests remain in the window
  if (remaining < limit * 0.1) {
    console.warn(`Approaching rate limit: ${remaining}/${limit} remaining`);
    // Pause until the window resets when nearly exhausted
    const timeToReset = (resetTime * 1000) - Date.now();
    if (timeToReset > 0 && remaining < 10) {
      console.log(`Pausing requests for ${timeToReset}ms`);
      return new Promise(resolve => setTimeout(resolve, timeToReset));
    }
  }
  return Promise.resolve();
}

// Usage
const response = await fetch('/api/transactions', options);
await checkRateLimit(response);
const data = await response.json();
```
### 3. Implement Request Queuing

```javascript
class RequestQueue {
  constructor(rateLimit = 5000, window = 60000) {
    this.rateLimit = rateLimit;
    this.window = window;
    this.queue = [];
    this.processing = false;
    this.requestCount = 0;
    this.windowStart = Date.now();
  }

  async enqueue(apiCall) {
    return new Promise((resolve, reject) => {
      this.queue.push({ apiCall, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      // Reset the window if it has elapsed
      const now = Date.now();
      if (now - this.windowStart >= this.window) {
        this.requestCount = 0;
        this.windowStart = now;
      }

      // Wait for the window to reset if the budget is spent
      if (this.requestCount >= this.rateLimit) {
        const waitTime = this.window - (now - this.windowStart);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      const { apiCall, resolve, reject } = this.queue.shift();
      try {
        this.requestCount++;
        const result = await apiCall();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Small delay to avoid overwhelming the server
      await new Promise(resolve => setTimeout(resolve, 10));
    }

    this.processing = false;
  }
}

// Usage
const queue = new RequestQueue(5000, 60000); // 5,000 requests per minute
const transaction = await queue.enqueue(() =>
  orsunpay.transactions.create(transactionData)
);
```
### 4. Batch Operations

```javascript
async function batchTransactionLookups(transactionIds) {
  const batchSize = 50; // Process in smaller batches
  const results = [];

  for (let i = 0; i < transactionIds.length; i += batchSize) {
    const batch = transactionIds.slice(i, i + batchSize);
    const batchPromises = batch.map(id =>
      orsunpay.transactions.retrieve(id)
    );

    try {
      const batchResults = await Promise.all(batchPromises);
      results.push(...batchResults);
    } catch (error) {
      console.error(`Batch ${i}-${i + batchSize} failed:`, error);
      // Handle partial failures as needed
    }

    // Delay between batches to respect rate limits
    if (i + batchSize < transactionIds.length) {
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }

  return results;
}
```
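Note that each batch above fires all of its requests at once with `Promise.all`, which sits exactly at the sandbox's 50-concurrent-request cap. A small concurrency limiter keeps in-flight requests safely below the "Concurrent Requests" ceiling. `createLimiter` is a hypothetical helper, not part of the Orsunpay SDK:

```javascript
// Minimal concurrency limiter: caps the number of in-flight promises
// so you stay under the documented "Concurrent Requests" ceiling.
// Hypothetical helper, not SDK code.
function createLimiter(maxConcurrent) {
  let active = 0;
  const waiting = [];

  const next = () => {
    if (active >= maxConcurrent || waiting.length === 0) return;
    active++;
    const { task, resolve, reject } = waiting.shift();
    task()
      .then(resolve, reject)
      .finally(() => { active--; next(); }); // free a slot, pull next task
  };

  return task => new Promise((resolve, reject) => {
    waiting.push({ task, resolve, reject });
    next();
  });
}

// Sandbox allows 50 concurrent requests; stay comfortably below the cap
const limit = createLimiter(25);
```

Wrap each lookup as `limit(() => orsunpay.transactions.retrieve(id))` instead of calling the SDK directly inside `Promise.all`.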
## Fair Use Policy

### Acceptable Use
✅ Allowed:
- Normal business operations and transaction processing
- Reasonable testing and development activities
- Periodic data synchronization and reconciliation
- Customer support and investigation activities
- Legitimate reporting and analytics
### Prohibited Use
❌ Not Allowed:
- Excessive API polling or unnecessary requests
- Automated data scraping or harvesting
- Load testing on production endpoints without approval
- Sharing API keys or credentials
- Circumventing rate limits through multiple accounts
### Monitoring and Enforcement
We monitor API usage patterns and may take action for:
- Consistently exceeding rate limits
- Unusual traffic patterns that affect service performance
- Suspected abuse or misuse of the API
- Violations of the fair use policy
## Enterprise Rate Limits
Contact our sales team if your integration needs more than the standard limits.

### Custom Rate Limits
- Higher request volumes: Up to 50,000 requests/minute
- Dedicated resources: Isolated processing capacity
- Priority queuing: Faster processing for critical requests
- Custom burst limits: Higher short-term request capacity
### SLA Guarantees
- 99.9% uptime SLA: Service level agreement
- Response time guarantees: Performance commitments
- Priority support: Faster issue resolution
- Dedicated account management: Personal support contact
## Troubleshooting Rate Limits

### Common Issues
| Issue | Cause | Solution |
|---|---|---|
| Frequent 429 errors | Too many requests | Implement backoff strategy |
| Inconsistent limits | Shared API keys | Use separate keys per service |
| Burst limit exceeded | Too many concurrent requests | Implement request queuing |
| Webhook test failures | Webhook testing limits | Reduce test frequency |
### Debugging Your Usage

A simple logger helps you track request patterns and spot when you are approaching your limits:

```javascript
class RateLimitDebugger {
  constructor() {
    this.requests = [];
    this.startTime = Date.now();
  }

  logRequest(response) {
    const now = Date.now();
    this.requests.push({
      timestamp: now,
      status: response.status,
      // fetch responses expose headers via Headers.get()
      remaining: response.headers.get('x-ratelimit-remaining'),
      limit: response.headers.get('x-ratelimit-limit'),
      reset: response.headers.get('x-ratelimit-reset')
    });
    // Keep only the last 100 requests
    if (this.requests.length > 100) {
      this.requests.shift();
    }
  }

  getStats() {
    const now = Date.now();
    const lastMinute = this.requests.filter(
      r => now - r.timestamp < 60000
    );
    return {
      totalRequests: this.requests.length,
      requestsLastMinute: lastMinute.length,
      rateLimitErrors: this.requests.filter(r => r.status === 429).length,
      averageRemaining: this.requests
        .map(r => parseInt(r.remaining, 10))
        .reduce((sum, value, _, arr) => sum + value / arr.length, 0)
    };
  }
}

// Usage (`debugger` is a reserved word in JavaScript, so pick another name)
const rateLimitDebugger = new RateLimitDebugger();

// Log each API response
const response = await fetch('/api/transactions');
rateLimitDebugger.logRequest(response);

// Get usage statistics
console.log(rateLimitDebugger.getStats());
```
## Requesting a Limit Increase

If you consistently need higher rate limits or encounter issues:
- Email: [email protected]
- Subject: Rate Limit Increase Request
- Include:
- Current usage patterns
- Business justification
- Expected request volume
- Integration timeline
Monitor your rate limit usage proactively. Implement proper backoff strategies and request queuing to avoid disrupting your payment flows.
Attempting to circumvent rate limits through multiple API keys or accounts may result in account suspension.