Google Antigravity Directory

The #1 directory for Google Antigravity prompts, rules, workflows & MCP servers. Optimized for Gemini 3 agentic development.

© 2026 Antigravity AI Directory. All rights reserved.


This website is not affiliated with, endorsed by, or associated with Google LLC. "Google" and "Gemini" are trademarks of Google LLC.


API Rate Limiting Strategies

Implement rate limiting for APIs to prevent abuse

Tags: API · Security · Rate Limiting · Backend
by Community
File: `.antigravity`
# API Rate Limiting Strategies for Google Antigravity

Implement robust API rate limiting in your Google Antigravity projects to protect your services from abuse and ensure fair resource allocation. This guide covers token bucket, sliding window, and distributed rate limiting patterns.

## Token Bucket Algorithm

Implement flexible token bucket rate limiting:

```typescript
// src/lib/rate-limit/tokenBucket.ts
interface TokenBucket {
  tokens: number;
  lastRefill: number;
}

interface TokenBucketConfig {
  capacity: number;      // Maximum tokens
  refillRate: number;    // Tokens per second
  refillInterval: number; // Milliseconds between refills
}

class TokenBucketLimiter {
  private buckets: Map<string, TokenBucket> = new Map();
  private config: TokenBucketConfig;
  
  constructor(config: TokenBucketConfig) {
    this.config = config;
  }
  
  async consume(key: string, tokens: number = 1): Promise<{
    allowed: boolean;
    remaining: number;
    retryAfter?: number;
  }> {
    const now = Date.now();
    let bucket = this.buckets.get(key);
    
    if (!bucket) {
      bucket = { tokens: this.config.capacity, lastRefill: now };
      this.buckets.set(key, bucket);
    }
    
    // Refill in whole intervals; advance lastRefill only by the intervals
    // credited, otherwise frequent calls reset the clock before an interval
    // completes and the bucket never refills
    const elapsed = now - bucket.lastRefill;
    const intervals = Math.floor(elapsed / this.config.refillInterval);
    if (intervals > 0) {
      bucket.tokens = Math.min(
        this.config.capacity,
        bucket.tokens + intervals * this.config.refillRate
      );
      bucket.lastRefill += intervals * this.config.refillInterval;
    }
    
    if (bucket.tokens >= tokens) {
      bucket.tokens -= tokens;
      return { allowed: true, remaining: bucket.tokens };
    }
    
    const tokensNeeded = tokens - bucket.tokens;
    const refillsNeeded = Math.ceil(tokensNeeded / this.config.refillRate);
    const retryAfter = refillsNeeded * this.config.refillInterval;
    
    return {
      allowed: false,
      remaining: bucket.tokens,
      retryAfter: Math.ceil(retryAfter / 1000),
    };
  }
}

// Create limiter instances for different tiers
export const apiLimiter = new TokenBucketLimiter({
  capacity: 100,
  refillRate: 10,
  refillInterval: 1000,
});

export const authLimiter = new TokenBucketLimiter({
  capacity: 5,
  refillRate: 1,
  refillInterval: 60000,
});
```

## Redis-Based Sliding Window

Implement distributed rate limiting with Redis:

```typescript
// src/lib/rate-limit/slidingWindow.ts
import { Redis } from "ioredis";

interface SlidingWindowConfig {
  windowMs: number;
  maxRequests: number;
}

export class SlidingWindowLimiter {
  private redis: Redis;
  private config: SlidingWindowConfig;
  
  constructor(redis: Redis, config: SlidingWindowConfig) {
    this.redis = redis;
    this.config = config;
  }
  
  async check(key: string): Promise<{
    allowed: boolean;
    remaining: number;
    resetAt: number;
  }> {
    const now = Date.now();
    const windowStart = now - this.config.windowMs;
    const redisKey = `ratelimit:${key}`;
    
    // Use Redis transaction for atomic operations
    const pipeline = this.redis.pipeline();
    
    // Remove old entries outside the window
    pipeline.zremrangebyscore(redisKey, 0, windowStart);
    
    // Count current requests in window
    pipeline.zcard(redisKey);
    
    // Add current request with a unique member so it can be removed on rejection
    const member = `${now}-${Math.random()}`;
    pipeline.zadd(redisKey, now, member);
    
    // Set expiry on the key
    pipeline.pexpire(redisKey, this.config.windowMs);
    
    const results = await pipeline.exec();
    const currentCount = (results?.[1]?.[1] as number) || 0;
    
    const allowed = currentCount < this.config.maxRequests;
    const remaining = Math.max(0, this.config.maxRequests - currentCount - 1);
    const resetAt = now + this.config.windowMs;
    
    if (!allowed) {
      // Remove exactly the member we added; zremrangebyscore(now, now) could
      // also delete concurrent requests that landed in the same millisecond
      await this.redis.zrem(redisKey, member);
    }
    
    return { allowed, remaining, resetAt };
  }
}

// Create limiters for different endpoints
export function createEndpointLimiter(redis: Redis, endpoint: string, config: SlidingWindowConfig) {
  const limiter = new SlidingWindowLimiter(redis, config);
  
  return async (identifier: string) => {
    const key = `${endpoint}:${identifier}`;
    return limiter.check(key);
  };
}
```

## Next.js Middleware Integration

Apply rate limiting at the edge:

```typescript
// src/middleware.ts
import { NextRequest, NextResponse } from "next/server";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_URL!,
  token: process.env.UPSTASH_REDIS_TOKEN!,
});

// Different rate limits for different endpoints
const limiters = {
  api: new Ratelimit({
    redis,
    limiter: Ratelimit.slidingWindow(100, "1 m"),
    analytics: true,
    prefix: "ratelimit:api",
  }),
  auth: new Ratelimit({
    redis,
    limiter: Ratelimit.fixedWindow(5, "15 m"),
    analytics: true,
    prefix: "ratelimit:auth",
  }),
  upload: new Ratelimit({
    redis,
    limiter: Ratelimit.tokenBucket(10, "1 h", 5),
    analytics: true,
    prefix: "ratelimit:upload",
  }),
};

export async function middleware(request: NextRequest) {
  const ip = request.headers.get("x-forwarded-for")?.split(",")[0] ?? 
             request.headers.get("x-real-ip") ?? 
             "127.0.0.1";
  
  const path = request.nextUrl.pathname;
  
  // Select appropriate limiter
  let limiter = limiters.api;
  let identifier = ip;
  
  if (path.startsWith("/api/auth")) {
    limiter = limiters.auth;
    identifier = `${ip}:${request.nextUrl.pathname}`;
  } else if (path.startsWith("/api/upload")) {
    limiter = limiters.upload;
    // Only trust x-user-id when it is set by your own auth middleware or a
    // trusted proxy; a client-supplied header is trivially spoofable
    const userId = request.headers.get("x-user-id");
    identifier = userId || ip;
  }
  
  const { success, limit, remaining, reset } = await limiter.limit(identifier);
  
  const response = success
    ? NextResponse.next()
    : NextResponse.json(
        { error: "Too many requests", retryAfter: Math.ceil((reset - Date.now()) / 1000) },
        { status: 429 }
      );
  
  // Add rate limit headers
  response.headers.set("X-RateLimit-Limit", limit.toString());
  response.headers.set("X-RateLimit-Remaining", remaining.toString());
  response.headers.set("X-RateLimit-Reset", reset.toString());
  
  if (!success) {
    response.headers.set("Retry-After", Math.ceil((reset - Date.now()) / 1000).toString());
  }
  
  return response;
}

export const config = {
  matcher: ["/api/:path*"],
};
```

## Tiered Rate Limiting

Implement user-based rate limit tiers:

```typescript
// src/lib/rate-limit/tiered.ts
import { Redis } from "ioredis";
interface RateLimitTier {
  name: string;
  requestsPerMinute: number;
  requestsPerDay: number;
  burstLimit: number;
}

const tiers: Record<string, RateLimitTier> = {
  free: {
    name: "Free",
    requestsPerMinute: 20,
    requestsPerDay: 1000,
    burstLimit: 5,
  },
  pro: {
    name: "Pro",
    requestsPerMinute: 100,
    requestsPerDay: 10000,
    burstLimit: 20,
  },
  enterprise: {
    name: "Enterprise",
    requestsPerMinute: 1000,
    requestsPerDay: 100000,
    burstLimit: 100,
  },
};

export async function checkTieredRateLimit(
  userId: string,
  userTier: string,
  redis: Redis
): Promise<{ allowed: boolean; tier: RateLimitTier; usage: { minute: number; day: number } }> {
  const tier = tiers[userTier] || tiers.free;
  const now = Date.now();
  const minuteKey = `ratelimit:${userId}:minute:${Math.floor(now / 60000)}`;
  const dayKey = `ratelimit:${userId}:day:${Math.floor(now / 86400000)}`;
  
  const pipeline = redis.pipeline();
  pipeline.incr(minuteKey);
  pipeline.expire(minuteKey, 60);
  pipeline.incr(dayKey);
  pipeline.expire(dayKey, 86400);
  
  const results = await pipeline.exec();
  const minuteCount = (results?.[0]?.[1] as number) || 0;
  const dayCount = (results?.[2]?.[1] as number) || 0;
  
  // Note: counters are incremented before the check, so rejected requests still
  // consume quota; move the check into a Lua script if that matters for you
  const allowed = minuteCount <= tier.requestsPerMinute && dayCount <= tier.requestsPerDay;
  
  return {
    allowed,
    tier,
    usage: { minute: minuteCount, day: dayCount },
  };
}
```

Google Antigravity generates comprehensive rate limiting solutions that protect your APIs from abuse while providing fair access based on user tiers and usage patterns.

When to Use This Prompt

This API prompt is ideal for developers working on:

  • API applications requiring modern best practices and optimal performance
  • Projects that need production-ready API code with proper error handling
  • Teams looking to standardize their API development workflow
  • Developers wanting to learn industry-standard API patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams looking to maintain consistency across their API implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the API code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this API prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For API projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
