# API Rate Limiting for Google Antigravity
Protecting your APIs from abuse is critical for production applications. This guide covers rate limiting strategies optimized for Google Antigravity and Gemini 3 development.
## Redis-Based Rate Limiter
Implement distributed rate limiting with Redis:
```typescript
// lib/rate-limit/redis-limiter.ts
import { Redis } from "@upstash/redis";
interface RateLimitConfig {
  interval: number; // window length in seconds
  maxRequests: number;
}

interface RateLimitResult {
  success: boolean;
  remaining: number;
  reset: number; // unix timestamp (seconds) when the limit resets
  limit: number;
}

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_URL!,
  token: process.env.UPSTASH_REDIS_TOKEN!,
});

export async function rateLimit(
  identifier: string,
  config: RateLimitConfig
): Promise<RateLimitResult> {
  const key = `rate_limit:${identifier}`;
  const now = Math.floor(Date.now() / 1000);
  const windowStart = now - config.interval;

  // Use a Redis sorted set as a sliding window: scores are timestamps
  const pipeline = redis.pipeline();
  // Drop entries older than the window
  pipeline.zremrangebyscore(key, 0, windowStart);
  // Count requests still inside the window (before adding this one)
  pipeline.zcard(key);
  // Record this request; the random suffix keeps members unique
  pipeline.zadd(key, { score: now, member: `${now}:${Math.random()}` });
  // Expire the key so idle clients don't leak memory
  pipeline.expire(key, config.interval);

  const results = await pipeline.exec();
  const requestCount = (results[1] as number) ?? 0;

  // Note: rejected requests are still recorded above, so clients that
  // keep retrying a 429 extend their own lockout.
  const success = requestCount < config.maxRequests;
  const remaining = Math.max(0, config.maxRequests - requestCount - 1);
  const reset = now + config.interval;

  return {
    success,
    remaining,
    reset,
    limit: config.maxRequests,
  };
}

// Token bucket implementation: allows bursts up to `capacity`,
// refilling at `refillRate` tokens per second
export async function tokenBucket(
  identifier: string,
  config: { capacity: number; refillRate: number }
): Promise<RateLimitResult> {
  const key = `bucket:${identifier}`;
  const now = Date.now();

  const bucket = await redis.get<{
    tokens: number;
    lastRefill: number;
  }>(key);

  let tokens = config.capacity;
  let lastRefill = now;
  if (bucket) {
    const elapsed = (now - bucket.lastRefill) / 1000;
    const refill = Math.floor(elapsed * config.refillRate);
    tokens = Math.min(config.capacity, bucket.tokens + refill);
    // Advance lastRefill only by the whole tokens actually credited,
    // so fractional refill progress carries over to the next call
    lastRefill = bucket.lastRefill + refill * (1000 / config.refillRate);
  }

  const success = tokens > 0;
  if (success) tokens--;

  // Persist the computed lastRefill (not `now`); storing `now` would
  // silently discard fractional refill time on every request
  await redis.set(key, { tokens, lastRefill }, { ex: 3600 });

  return {
    success,
    remaining: tokens,
    // When empty, the next token arrives after 1/refillRate seconds
    reset: Math.floor(now / 1000) + (success ? 0 : Math.ceil(1 / config.refillRate)),
    limit: config.capacity,
  };
}
```
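The refill arithmetic in `tokenBucket` is easy to get subtly wrong, so it is worth checking in isolation. A minimal pure sketch (the `refillTokens` helper is hypothetical, not part of the limiter above):

```typescript
interface BucketState {
  tokens: number;
  lastRefill: number; // ms timestamp
}

// Pure refill calculation mirroring the token bucket above:
// credit whole tokens only, and advance lastRefill by exactly the
// time those tokens represent, so fractional progress is preserved.
function refillTokens(
  state: BucketState,
  now: number,
  capacity: number,
  refillRate: number // tokens per second
): BucketState {
  const elapsed = (now - state.lastRefill) / 1000;
  const refill = Math.floor(elapsed * refillRate);
  const tokens = Math.min(capacity, state.tokens + refill);
  const lastRefill = state.lastRefill + refill * (1000 / refillRate);
  return { tokens, lastRefill };
}

// After 2.7s at 2 tokens/sec: 5 whole tokens credited, and the
// leftover 200ms carries over because lastRefill advances to 2500.
const next = refillTokens({ tokens: 0, lastRefill: 0 }, 2700, 10, 2);
// next.tokens === 5, next.lastRefill === 2500
```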
## Middleware Implementation
Create reusable rate limiting middleware:
```typescript
// middleware/rate-limit.ts
import { NextRequest, NextResponse } from "next/server";
import { rateLimit } from "@/lib/rate-limit/redis-limiter";
interface RateLimitOptions {
  interval?: number;
  maxRequests?: number;
  keyGenerator?: (req: NextRequest) => string;
}

function clientIp(req: NextRequest): string {
  // NextRequest has no reliable `ip` field across deployments; fall
  // back to the X-Forwarded-For header set by the fronting proxy
  return req.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "anonymous";
}

export function withRateLimit(options: RateLimitOptions = {}) {
  const {
    interval = 60,
    maxRequests = 100,
    keyGenerator = clientIp,
  } = options;

  return async function rateLimitMiddleware(
    req: NextRequest,
    handler: () => Promise<NextResponse>
  ): Promise<NextResponse> {
    const identifier = keyGenerator(req);
    const result = await rateLimit(identifier, { interval, maxRequests });

    if (!result.success) {
      const retryAfter = result.reset - Math.floor(Date.now() / 1000);
      return NextResponse.json(
        { error: "Too many requests", retryAfter },
        {
          status: 429,
          headers: {
            "X-RateLimit-Limit": result.limit.toString(),
            "X-RateLimit-Remaining": "0",
            "X-RateLimit-Reset": result.reset.toString(),
            "Retry-After": retryAfter.toString(),
          },
        }
      );
    }

    const response = await handler();
    response.headers.set("X-RateLimit-Limit", result.limit.toString());
    response.headers.set("X-RateLimit-Remaining", result.remaining.toString());
    response.headers.set("X-RateLimit-Reset", result.reset.toString());
    return response;
  };
}

// Usage in an API route: create the limiter once at module scope,
// not inside the handler, so it isn't rebuilt on every request
const rateLimiter = withRateLimit({
  interval: 60,
  maxRequests: 30,
  keyGenerator: (req) => {
    // Key authenticated callers by token prefix, everyone else by IP
    const authHeader = req.headers.get("authorization");
    if (authHeader) {
      return `auth:${authHeader.slice(0, 20)}`;
    }
    return `ip:${clientIp(req)}`;
  },
});

export async function GET(req: NextRequest) {
  return rateLimiter(req, async () => {
    return NextResponse.json({ data: "success" });
  });
}
```
## In-Memory Rate Limiter
For simpler deployments without Redis:
```typescript
// lib/rate-limit/memory-limiter.ts
interface WindowEntry {
  count: number;
  resetAt: number; // ms timestamp when this window expires
}

const windows = new Map<string, WindowEntry>();

export function memoryRateLimit(
  identifier: string,
  config: { interval: number; maxRequests: number }
): { success: boolean; remaining: number } {
  const now = Date.now();
  let entry = windows.get(identifier);

  // Start a fresh window if none exists or the old one expired
  if (!entry || now > entry.resetAt) {
    entry = {
      count: 0,
      resetAt: now + config.interval * 1000,
    };
  }

  entry.count++;
  windows.set(identifier, entry);

  // Opportunistic cleanup: ~1% of calls sweep expired windows
  if (Math.random() < 0.01) {
    for (const [k, v] of windows.entries()) {
      if (now > v.resetAt) windows.delete(k);
    }
  }

  return {
    success: entry.count <= config.maxRequests,
    remaining: Math.max(0, config.maxRequests - entry.count),
  };
}
```
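One caveat of the fixed-window approach: the counter resets all at once, so a client can spend the full budget just before a window boundary and again just after it, passing roughly double the nominal limit in a short span. A small self-contained sketch of the worst case (the `makeWindow` helper is illustrative, not part of the limiter above):

```typescript
// Minimal fixed-window counter with the same shape as memoryRateLimit,
// reduced to a closure so the boundary behavior is easy to exercise.
function makeWindow(intervalMs: number, max: number) {
  let count = 0;
  let resetAt = 0;
  return (now: number): boolean => {
    if (now >= resetAt) {
      count = 0;
      resetAt = now + intervalMs;
    }
    count++;
    return count <= max;
  };
}

const allow = makeWindow(1000, 5);
let passed = 0;
// First request at t=0 opens a window ending at t=1000. Four more at
// t=998 exhaust the budget, then the window resets at t=1000 and five
// more pass: 9 requests within 2ms, against a nominal 5-per-second cap.
for (const t of [0, 998, 998, 998, 998, 1000, 1000, 1000, 1000, 1000]) {
  if (allow(t)) passed++;
}
// passed === 10
```

The Redis sliding-window limiter above avoids this by expiring entries individually rather than resetting the whole window at once.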
## Best Practices
When implementing rate limiting in Antigravity projects:

- Use Redis for distributed systems; in-memory limiters only count requests within a single process.
- Apply graduated limits by user tier.
- Add rate limit headers (`X-RateLimit-*`, `Retry-After`) to every response.
- Log rate limit violations and return clear error messages.
- Prefer token buckets where short bursts are legitimate traffic.
- Monitor rate limit metrics for capacity planning.
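Graduated limits by user tier can be as simple as a lookup table that feeds the limiter config. A hedged sketch (the tier names and numbers are illustrative, not prescriptive):

```typescript
type Tier = "free" | "pro" | "enterprise";

interface TierLimit {
  interval: number; // seconds
  maxRequests: number;
}

// Illustrative per-tier limits; tune these from your capacity metrics
const TIER_LIMITS: Record<Tier, TierLimit> = {
  free: { interval: 60, maxRequests: 30 },
  pro: { interval: 60, maxRequests: 300 },
  enterprise: { interval: 60, maxRequests: 3000 },
};

function limitsForTier(tier: string): TierLimit {
  // Unknown or missing tiers fall back to the most restrictive limit
  return TIER_LIMITS[tier as Tier] ?? TIER_LIMITS.free;
}
```

The result plugs directly into `rateLimit(identifier, limitsForTier(user.tier))`, keeping tier policy in one place.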