API Rate Limiting for Antigravity

Implement robust rate limiting for API endpoints using Redis, in-memory stores, and middleware patterns for production applications.

API · Rate Limiting · Redis · Security · TypeScript
by Antigravity Team
# API Rate Limiting for Google Antigravity

Protecting your APIs from abuse is critical for production applications. This guide covers rate limiting strategies optimized for Google Antigravity and Gemini 3 development.

## Redis-Based Rate Limiter

Implement distributed rate limiting with Redis:

```typescript
// lib/rate-limit/redis-limiter.ts
import { Redis } from "@upstash/redis";

interface RateLimitConfig {
  interval: number; // in seconds
  maxRequests: number;
}

interface RateLimitResult {
  success: boolean;
  remaining: number;
  reset: number;
  limit: number;
}

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_URL!,
  token: process.env.UPSTASH_REDIS_TOKEN!,
});

export async function rateLimit(
  identifier: string,
  config: RateLimitConfig
): Promise<RateLimitResult> {
  const key = `rate_limit:${identifier}`;
  const now = Math.floor(Date.now() / 1000);
  const windowStart = now - config.interval;

  // Use Redis sorted set for sliding window
  const pipeline = redis.pipeline();
  
  // Remove old entries
  pipeline.zremrangebyscore(key, 0, windowStart);
  
  // Count current requests
  pipeline.zcard(key);
  
  // Add current request
  pipeline.zadd(key, { score: now, member: `${now}:${Math.random()}` });
  
  // Set expiry
  pipeline.expire(key, config.interval);

  const results = await pipeline.exec();
  const requestCount = (results[1] as number) || 0;

  const success = requestCount < config.maxRequests;
  const remaining = Math.max(0, config.maxRequests - requestCount - 1);
  const reset = now + config.interval;

  return {
    success,
    remaining,
    reset,
    limit: config.maxRequests,
  };
}

// Token bucket implementation
export async function tokenBucket(
  identifier: string,
  config: { capacity: number; refillRate: number }
): Promise<RateLimitResult> {
  const key = `bucket:${identifier}`;
  const now = Date.now();

  const bucket = await redis.get<{
    tokens: number;
    lastRefill: number;
  }>(key);

  let tokens = config.capacity;
  let lastRefill = now;

  if (bucket) {
    const elapsed = (now - bucket.lastRefill) / 1000;
    const refill = Math.floor(elapsed * config.refillRate);
    tokens = Math.min(config.capacity, bucket.tokens + refill);
    lastRefill = bucket.lastRefill + refill * (1000 / config.refillRate);
  }

  const success = tokens > 0;
  if (success) tokens--;

  // Persist the advanced lastRefill so fractional refill progress is not lost
  await redis.set(key, { tokens, lastRefill }, { ex: 3600 });

  return {
    success,
    remaining: tokens,
    reset: Math.floor(now / 1000) + Math.max(0, Math.ceil((1 - tokens) / config.refillRate)),
    limit: config.capacity,
  };
}
```
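
The sliding-window `rateLimit` helper is wired into middleware in the next section, but `tokenBucket` is not used elsewhere in this guide. Below is a minimal sketch of calling it directly from a route handler, assuming a hypothetical `app/api/generate/route.ts` in a Next.js App Router project:

```typescript
// app/api/generate/route.ts -- hypothetical route, shown only to illustrate tokenBucket
import { NextRequest, NextResponse } from "next/server";
import { tokenBucket } from "@/lib/rate-limit/redis-limiter";

export async function POST(req: NextRequest) {
  // Allow bursts of up to 10 requests, refilling one token every 2 seconds
  const result = await tokenBucket(req.ip ?? "anonymous", {
    capacity: 10,
    refillRate: 0.5,
  });

  if (!result.success) {
    const retryAfter = result.reset - Math.floor(Date.now() / 1000);
    return NextResponse.json(
      { error: "Too many requests", retryAfter },
      { status: 429, headers: { "Retry-After": retryAfter.toString() } }
    );
  }

  return NextResponse.json({ remaining: result.remaining });
}
```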

## Middleware Implementation

Create reusable rate limiting middleware:

```typescript
// middleware/rate-limit.ts
import { NextRequest, NextResponse } from "next/server";
import { rateLimit } from "@/lib/rate-limit/redis-limiter";

interface RateLimitOptions {
  interval?: number;
  maxRequests?: number;
  keyGenerator?: (req: NextRequest) => string;
}

export function withRateLimit(options: RateLimitOptions = {}) {
  const {
    interval = 60,
    maxRequests = 100,
    keyGenerator = (req) => req.ip || "anonymous",
  } = options;

  return async function rateLimitMiddleware(
    req: NextRequest,
    handler: () => Promise<NextResponse>
  ): Promise<NextResponse> {
    const identifier = keyGenerator(req);
    
    const result = await rateLimit(identifier, {
      interval,
      maxRequests,
    });

    if (!result.success) {
      return NextResponse.json(
        {
          error: "Too many requests",
          retryAfter: result.reset - Math.floor(Date.now() / 1000),
        },
        {
          status: 429,
          headers: {
            "X-RateLimit-Limit": result.limit.toString(),
            "X-RateLimit-Remaining": "0",
            "X-RateLimit-Reset": result.reset.toString(),
            "Retry-After": (result.reset - Math.floor(Date.now() / 1000)).toString(),
          },
        }
      );
    }

    const response = await handler();
    
    response.headers.set("X-RateLimit-Limit", result.limit.toString());
    response.headers.set("X-RateLimit-Remaining", result.remaining.toString());
    response.headers.set("X-RateLimit-Reset", result.reset.toString());

    return response;
  };
}

// Usage in API route
export async function GET(req: NextRequest) {
  const rateLimiter = withRateLimit({
    interval: 60,
    maxRequests: 30,
    keyGenerator: (req) => {
      const authHeader = req.headers.get("authorization");
      if (authHeader) {
        return `auth:${authHeader.slice(0, 20)}`;
      }
      return `ip:${req.ip}`;
    },
  });

  return rateLimiter(req, async () => {
    return NextResponse.json({ data: "success" });
  });
}
```

## In-Memory Rate Limiter

For simpler deployments without Redis:

```typescript
// lib/rate-limit/memory-limiter.ts
interface WindowEntry {
  count: number;
  resetAt: number;
}

const windows = new Map<string, WindowEntry>();

export function memoryRateLimit(
  identifier: string,
  config: { interval: number; maxRequests: number }
): { success: boolean; remaining: number } {
  const now = Date.now();
  const key = identifier;
  
  let entry = windows.get(key);
  
  if (!entry || now > entry.resetAt) {
    entry = {
      count: 0,
      resetAt: now + config.interval * 1000,
    };
  }
  
  entry.count++;
  windows.set(key, entry);
  
  // Cleanup old entries periodically
  if (Math.random() < 0.01) {
    for (const [k, v] of windows.entries()) {
      if (now > v.resetAt) windows.delete(k);
    }
  }
  
  return {
    success: entry.count <= config.maxRequests,
    remaining: Math.max(0, config.maxRequests - entry.count),
  };
}
```
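
A short sketch of guarding a route with the in-memory limiter, assuming a hypothetical `app/api/search/route.ts`. Because the `Map` lives in process memory, counts are per instance and reset on every deploy, which is why this variant suits simpler, single-instance setups:

```typescript
// app/api/search/route.ts -- hypothetical route, shown only to illustrate memoryRateLimit
import { NextRequest, NextResponse } from "next/server";
import { memoryRateLimit } from "@/lib/rate-limit/memory-limiter";

export async function GET(req: NextRequest) {
  const { success, remaining } = memoryRateLimit(req.ip ?? "anonymous", {
    interval: 60, // seconds
    maxRequests: 20,
  });

  if (!success) {
    return NextResponse.json({ error: "Too many requests" }, { status: 429 });
  }

  const response = NextResponse.json({ data: "ok" });
  response.headers.set("X-RateLimit-Remaining", remaining.toString());
  return response;
}
```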

## Best Practices

When implementing rate limiting in Antigravity projects:

- Use Redis for distributed systems
- Implement graduated limits by user tier (a sketch follows this list)
- Add rate limit headers to responses
- Log rate limit violations
- Provide clear error messages
- Consider token buckets for burst handling
- Monitor rate limit metrics for capacity planning
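
One possible shape for graduated, per-tier limits, building on the `withRateLimit` middleware above. The tier names, the limits, and the `x-api-tier` header used to resolve the tier are illustrative assumptions, not part of the original prompt:

```typescript
// lib/rate-limit/tiers.ts -- illustrative sketch, not part of the original prompt
import { NextRequest } from "next/server";
import { withRateLimit } from "@/middleware/rate-limit";

type Tier = "free" | "pro" | "enterprise";

// Assumed per-tier budgets (requests per 60-second window)
const TIER_LIMITS: Record<Tier, number> = {
  free: 30,
  pro: 300,
  enterprise: 3000,
};

// Hypothetical tier lookup; in a real app this would come from your
// auth layer or API-key store rather than a client-supplied header.
function resolveTier(req: NextRequest): Tier {
  const header = req.headers.get("x-api-tier");
  return header === "pro" || header === "enterprise" ? header : "free";
}

export function withTieredRateLimit(req: NextRequest) {
  const tier = resolveTier(req);
  return withRateLimit({
    interval: 60,
    maxRequests: TIER_LIMITS[tier],
    // Scope the key by tier so plan upgrades take effect immediately
    keyGenerator: (r) => `${tier}:${r.headers.get("authorization") ?? r.ip ?? "anonymous"}`,
  });
}
```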

When to Use This Prompt

This API prompt is ideal for developers working on:

  • API applications requiring modern best practices and optimal performance
  • Projects that need production-ready API code with proper error handling
  • Teams looking to standardize their API development workflow
  • Developers wanting to learn industry-standard API patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams looking to maintain consistency across their API implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness
💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the API code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this API prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For API projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
