OpenAI API Integration with Next.js Complete Guide

Build AI-powered applications with OpenAI GPT models. Learn streaming responses, function calling, embeddings, moderation, rate limiting, and production-ready AI integration patterns with Next.js.

Tags: openai, gpt, ai, streaming, embeddings, nextjs, typescript, function-calling
by AntigravityAI
# OpenAI API Integration with Next.js

Build intelligent AI-powered applications using OpenAI GPT models with Next.js. This guide covers streaming, function calling, embeddings, and production patterns.
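
Before you begin, install the official `openai` npm package and set `OPENAI_API_KEY` in `.env.local`; the client below reads the key from the environment.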

## Setting Up OpenAI Client

### Configuration and Types

```typescript
// lib/openai.ts
import OpenAI from "openai";

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export interface ChatCompletionOptions {
  messages: ChatMessage[];
  model?: string;
  temperature?: number;
  maxTokens?: number;
  stream?: boolean;
}

// Rate limiting helper. Note: this Map lives in a single server instance
// and resets on restart; use Redis or similar for multi-instance production.
const rateLimiter = new Map<string, { count: number; resetTime: number }>();

export function checkRateLimit(userId: string, limit = 20): boolean {
  const now = Date.now();
  const windowMs = 60 * 1000; // 1 minute
  
  const userLimit = rateLimiter.get(userId);
  
  if (!userLimit || now > userLimit.resetTime) {
    rateLimiter.set(userId, { count: 1, resetTime: now + windowMs });
    return true;
  }
  
  if (userLimit.count >= limit) {
    return false;
  }
  
  userLimit.count++;
  return true;
}
```
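
The `ChatCompletionOptions` interface above isn't consumed by the snippets that follow, so here is a minimal non-streaming helper built on it. The defaults (model name, temperature, token cap) are illustrative assumptions, not requirements:

```typescript
// lib/openai.ts (continued)
// Minimal non-streaming wrapper around the client using ChatCompletionOptions.
export async function createChatCompletion({
  messages,
  model = "gpt-4-turbo-preview",
  temperature = 0.7,
  maxTokens = 1000,
}: ChatCompletionOptions): Promise<string> {
  const response = await openai.chat.completions.create({
    model,
    messages,
    temperature,
    max_tokens: maxTokens,
  });

  // Return just the assistant text, or an empty string if nothing came back
  return response.choices[0]?.message?.content ?? "";
}
```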

### Streaming Chat Completions

```typescript
// app/api/chat/route.ts
import { NextRequest } from "next/server";
import { openai, checkRateLimit } from "@/lib/openai";
import { getServerSession } from "next-auth";
import { authOptions } from "@/lib/auth";

// Note: the edge runtime is stateless and horizontally scaled, so the
// in-memory rate limiter above only limits per isolate, and getServerSession
// support on the edge depends on your next-auth version.
export const runtime = "edge";

export async function POST(req: NextRequest) {
  const session = await getServerSession(authOptions);
  if (!session?.user?.id) {
    return new Response("Unauthorized", { status: 401 });
  }

  if (!checkRateLimit(session.user.id)) {
    return new Response("Rate limit exceeded", { status: 429 });
  }

  const { messages, model = "gpt-4-turbo-preview" } = await req.json();

  const response = await openai.chat.completions.create({
    model,
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant. Be concise and accurate.",
      },
      ...messages,
    ],
    temperature: 0.7,
    max_tokens: 2000,
    stream: true,
  });

  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of response) {
          const content = chunk.choices[0]?.delta?.content || "";
          if (content) {
            controller.enqueue(encoder.encode(`data: ${JSON.stringify({ content })}\n\n`));
          }
        }
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      } catch (error) {
        // Surface upstream errors so the client's reader rejects instead of hanging
        controller.error(error);
      }
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```
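
On the client, an in-flight stream can be cancelled with a standard `AbortController`. A sketch (the endpoint path matches the route above):

```typescript
// Sketch: cancelling an in-flight streaming request from the client.
const controller = new AbortController();

async function startChat(messages: { role: string; content: string }[]) {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
    signal: controller.signal,
  });
  return response;
}

// Later, e.g. when the user clicks a "Stop" button:
// controller.abort();
```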

### Function Calling

```typescript
// lib/ai-functions.ts
import { openai, type ChatMessage } from "./openai";

const functions = [
  {
    name: "get_weather",
    description: "Get current weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string", description: "City name" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location"],
    },
  },
  {
    name: "search_products",
    description: "Search for products in the catalog",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search query" },
        category: { type: "string" },
        maxPrice: { type: "number" },
      },
      required: ["query"],
    },
  },
];

// fetchWeather and searchProducts are app-specific helpers assumed to be
// implemented (or imported) elsewhere.
async function executeFunction(name: string, args: Record<string, unknown>) {
  switch (name) {
    case "get_weather":
      return await fetchWeather(args.location as string);
    case "search_products":
      return await searchProducts(args);
    default:
      throw new Error(`Unknown function: ${name}`);
  }
}

export async function chatWithFunctions(messages: ChatMessage[]) {
  const response = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages,
    functions,
    function_call: "auto",
  });

  const message = response.choices[0].message;

  if (message.function_call) {
    const functionName = message.function_call.name;
    const functionArgs = JSON.parse(message.function_call.arguments);
    const functionResult = await executeFunction(functionName, functionArgs);

    // Continue conversation with function result
    return openai.chat.completions.create({
      model: "gpt-4-turbo-preview",
      messages: [
        ...messages,
        message,
        {
          role: "function",
          name: functionName,
          content: JSON.stringify(functionResult),
        },
      ],
    });
  }

  return response;
}
```
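
Note that OpenAI has since deprecated the `functions`/`function_call` parameters in favor of the `tools` API, which also supports parallel calls. A minimal sketch of the same flow using `tools`, reusing the `functions` array and `executeFunction` from above:

```typescript
// Same flow with the newer "tools" API (supports parallel tool calls).
const response = await openai.chat.completions.create({
  model: "gpt-4-turbo-preview",
  messages,
  tools: functions.map((fn) => ({ type: "function" as const, function: fn })),
  tool_choice: "auto",
});

const toolCalls = response.choices[0].message.tool_calls;
if (toolCalls) {
  // Each result is fed back as a "tool" message keyed by tool_call_id
  const toolMessages = await Promise.all(
    toolCalls.map(async (call) => ({
      role: "tool" as const,
      tool_call_id: call.id,
      content: JSON.stringify(
        await executeFunction(call.function.name, JSON.parse(call.function.arguments))
      ),
    }))
  );
  // Continue with [...messages, response.choices[0].message, ...toolMessages]
}
```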

### Embeddings and Semantic Search

```typescript
// lib/embeddings.ts
import { openai } from "./openai";
import { db } from "./db";

export async function generateEmbedding(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text.replace(/\n/g, " ").trim(),
  });

  return response.data[0].embedding;
}

export async function semanticSearch(query: string, limit = 5) {
  const queryEmbedding = await generateEmbedding(query);

  // Similarity search with pgvector (cosine distance via the <=> operator).
  // Assumes a pgvector `embedding` column on `documents`; recent pgvector
  // versions can cast a numeric array parameter to `vector`.
  const results = await db.$queryRaw`
    SELECT id, title, content,
           1 - (embedding <=> ${queryEmbedding}::vector) as similarity
    FROM documents
    WHERE 1 - (embedding <=> ${queryEmbedding}::vector) > 0.7
    ORDER BY similarity DESC
    LIMIT ${limit}
  `;

  return results;
}

export async function indexDocument(doc: { id: string; title: string; content: string }) {
  const embedding = await generateEmbedding(`${doc.title}\n${doc.content}`);
  
  // Assumes `embedding` is writable through the Prisma client; if it is
  // declared as Unsupported("vector") in the schema, use raw SQL instead.
  await db.document.upsert({
    where: { id: doc.id },
    update: { embedding },
    create: { ...doc, embedding },
  });
}
```
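
When indexing many documents, the embeddings endpoint accepts an array of inputs and returns one vector per input, which cuts round trips. A batch variant of the helper above (sorting by `index` to preserve input order):

```typescript
// Batch embedding helper: one API call for many texts.
export async function generateEmbeddings(texts: string[]): Promise<number[][]> {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: texts.map((t) => t.replace(/\n/g, " ").trim()),
  });

  // The API tags each embedding with its input index; sort to be safe.
  return response.data
    .sort((a, b) => a.index - b.index)
    .map((d) => d.embedding);
}
```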

### Content Moderation

```typescript
// lib/moderation.ts
import { openai } from "./openai";

export interface ModerationResult {
  flagged: boolean;
  categories: Record<string, boolean>;
  scores: Record<string, number>;
}

export async function moderateContent(content: string): Promise<ModerationResult> {
  const response = await openai.moderations.create({
    input: content,
  });

  const result = response.results[0];
  
  return {
    flagged: result.flagged,
    categories: result.categories,
    scores: result.category_scores,
  };
}

// Middleware for content moderation
export async function validateUserInput(input: string): Promise<{ valid: boolean; reason?: string }> {
  const moderation = await moderateContent(input);
  
  if (moderation.flagged) {
    const flaggedCategories = Object.entries(moderation.categories)
      .filter(([_, flagged]) => flagged)
      .map(([category]) => category);
    
    return {
      valid: false,
      reason: `Content flagged for: ${flaggedCategories.join(", ")}`,
    };
  }
  
  return { valid: true };
}
```
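
In the chat route above, `validateUserInput` would typically run before the completion request. A sketch of the wiring:

```typescript
// Inside the POST handler, after parsing the request body:
const lastUserMessage = messages[messages.length - 1]?.content ?? "";
const check = await validateUserInput(lastUserMessage);

if (!check.valid) {
  // Reject flagged input before it ever reaches the model
  return new Response(check.reason, { status: 400 });
}
```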

### React Hook for Streaming Chat

```typescript
// hooks/useChat.ts
"use client";

import { useState, useCallback } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

export function useChat() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isLoading, setIsLoading] = useState(false);

  const sendMessage = useCallback(async (content: string) => {
    const userMessage: Message = { role: "user", content };
    setMessages((prev) => [...prev, userMessage]);
    setIsLoading(true);

    try {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          messages: [...messages, userMessage],
        }),
      });

      if (!response.ok || !response.body) {
        throw new Error(`Chat request failed: ${response.status}`);
      }

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let assistantMessage = "";
      let buffer = "";

      setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // SSE events can be split across network chunks, so buffer partial
        // data and only process complete "\n\n"-terminated events.
        buffer += decoder.decode(value, { stream: true });
        const events = buffer.split("\n\n");
        buffer = events.pop() ?? "";

        for (const event of events) {
          if (!event.startsWith("data: ")) continue;
          const data = event.slice(6);
          if (data === "[DONE]") continue;

          const parsed = JSON.parse(data);
          assistantMessage += parsed.content;

          setMessages((prev) => {
            const updated = [...prev];
            updated[updated.length - 1] = { role: "assistant", content: assistantMessage };
            return updated;
          });
        }
      }
    } catch (error) {
      console.error("Chat error:", error);
    } finally {
      setIsLoading(false);
    }
  }, [messages]);

  return { messages, sendMessage, isLoading };
}
```
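
A minimal component wired to the hook might look like this (component name and markup are illustrative):

```tsx
// components/ChatBox.tsx (illustrative)
"use client";

import { useState } from "react";
import { useChat } from "@/hooks/useChat";

export function ChatBox() {
  const { messages, sendMessage, isLoading } = useChat();
  const [input, setInput] = useState("");

  return (
    <div>
      {messages.map((m, i) => (
        <p key={i}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          if (!input.trim() || isLoading) return;
          sendMessage(input);
          setInput("");
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
        <button type="submit" disabled={isLoading}>
          Send
        </button>
      </form>
    </div>
  );
}
```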

This OpenAI integration provides streaming responses, function calling, semantic search with embeddings, content moderation, and basic rate limiting.

When to Use This Prompt

This OpenAI prompt is ideal for developers working on:

  • OpenAI-powered applications that require modern best practices and solid performance
  • Projects that need production-ready OpenAI integration code with proper error handling
  • Teams looking to standardize their OpenAI development workflow
  • Developers wanting to learn industry-standard OpenAI integration patterns and techniques

This prompt can save hours of manual coding and helps ensure best practices are followed from the start. It is particularly valuable for teams that want to keep their OpenAI integrations consistent.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the generated code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this OpenAI prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works well with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For OpenAI projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
