OpenAI API Integration Patterns

Integrate OpenAI APIs into Google Antigravity applications with streaming, function calling, and cost-optimization strategies.

openai · ai · gpt · llm · streaming
by antigravity-team
⭐ 0 stars
.antigravity
# OpenAI API Integration Patterns

Build AI-powered features in your Google Antigravity applications with OpenAI APIs. This guide covers chat completions, streaming, function calling, and cost optimization.

## OpenAI Client Setup

Configure the OpenAI client with retries, a request timeout, and per-user rate limiting:

```typescript
// lib/openai/client.ts
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  maxRetries: 3, // retry transient API failures automatically
  timeout: 30000, // request timeout in milliseconds (30 s)
});

export default openai;

// lib/openai/rate-limiter.ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

export const openaiRateLimiter = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(10, "1 m"), // 10 requests per minute
  analytics: true,
  prefix: "openai",
});
```

## Chat Completion API

Implement chat completions with proper typing:

```typescript
// lib/openai/chat.ts
import openai from "./client";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

export interface ChatOptions {
  model?: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
}

export async function generateChatCompletion(
  messages: ChatCompletionMessageParam[],
  options: ChatOptions = {}
): Promise<string> {
  const {
    model = "gpt-4-turbo-preview",
    temperature = 0.7,
    maxTokens = 1000,
    systemPrompt,
  } = options;

  const allMessages: ChatCompletionMessageParam[] = systemPrompt
    ? [{ role: "system", content: systemPrompt }, ...messages]
    : messages;

  const completion = await openai.chat.completions.create({
    model,
    messages: allMessages,
    temperature,
    max_tokens: maxTokens,
  });

  return completion.choices[0]?.message?.content || "";
}
```
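
For example, a one-off call from a server action or route handler might look like this (the `summarize` helper and its prompt text are illustrative, not part of this guide):

```typescript
// Example usage from any server-side code path
import { generateChatCompletion } from "@/lib/openai/chat";

export async function summarize(text: string) {
  return generateChatCompletion(
    [{ role: "user", content: `Summarize:\n${text}` }],
    { systemPrompt: "You are a concise technical writer.", maxTokens: 300 }
  );
}
```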

## Streaming Response

Implement streaming for real-time responses:

```typescript
// app/api/chat/route.ts
import { NextRequest } from "next/server";
import openai from "@/lib/openai/client";
import { openaiRateLimiter } from "@/lib/openai/rate-limiter";
import { createClient } from "@/lib/supabase/server";

export async function POST(request: NextRequest) {
  const supabase = await createClient();
  const { data: { user } } = await supabase.auth.getUser();

  if (!user) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Rate limiting
  const { success } = await openaiRateLimiter.limit(user.id);
  if (!success) {
    return new Response("Rate limit exceeded", { status: 429 });
  }

  const { messages, systemPrompt } = await request.json();

  const stream = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [
      { role: "system", content: systemPrompt || "You are a helpful assistant." },
      ...messages,
    ],
    stream: true,
    max_tokens: 2000,
  });

  const encoder = new TextEncoder();

  const readableStream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of stream) {
          const content = chunk.choices[0]?.delta?.content || "";
          if (content) {
            controller.enqueue(encoder.encode(`data: ${JSON.stringify({ content })}\n\n`));
          }
        }
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      } catch (err) {
        // Propagate upstream API failures instead of leaving the stream open
        controller.error(err);
      }
    },
  });

  return new Response(readableStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```

## React Streaming Hook

Consume streaming responses in React:

```typescript
// hooks/useChat.ts
import { useState, useCallback } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

export function useChat(systemPrompt?: string) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const sendMessage = useCallback(async (content: string) => {
    const userMessage: Message = { role: "user", content };
    setMessages((prev) => [...prev, userMessage]);
    setIsLoading(true);
    setError(null);

    try {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          messages: [...messages, userMessage],
          systemPrompt,
        }),
      });

      if (!response.ok) throw new Error("Failed to send message");

      const reader = response.body?.getReader();
      if (!reader) throw new Error("Response body is not readable");

      const decoder = new TextDecoder();
      let assistantContent = "";

      // Append an empty assistant message and fill it in as chunks arrive
      setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // `stream: true` keeps multi-byte characters intact across chunks
        const chunk = decoder.decode(value, { stream: true });
        const lines = chunk.split("\n").filter((line) => line.startsWith("data: "));

        for (const line of lines) {
          const data = line.slice(6);
          if (data === "[DONE]") break;

          try {
            const { content } = JSON.parse(data);
            assistantContent += content;
            setMessages((prev) => [
              ...prev.slice(0, -1),
              { role: "assistant", content: assistantContent },
            ]);
          } catch {
            // Ignore events split across chunk boundaries
          }
        }
      }
    } catch (err) {
      setError((err as Error).message);
    } finally {
      setIsLoading(false);
    }
  }, [messages, systemPrompt]);

  return { messages, sendMessage, isLoading, error };
}
```
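
A minimal component wiring for the hook might look like this (the `ChatBox` component name and markup are illustrative):

```typescript
// components/ChatBox.tsx — minimal usage sketch for useChat
"use client";
import { useState } from "react";
import { useChat } from "@/hooks/useChat";

export function ChatBox() {
  const { messages, sendMessage, isLoading, error } = useChat("You are a helpful assistant.");
  const [input, setInput] = useState("");

  return (
    <div>
      {messages.map((m, i) => (
        <p key={i}><strong>{m.role}:</strong> {m.content}</p>
      ))}
      {error && <p role="alert">{error}</p>}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage(input);
          setInput("");
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} disabled={isLoading} />
        <button type="submit" disabled={isLoading}>Send</button>
      </form>
    </div>
  );
}
```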

## Function Calling

Implement function calling through the `tools` API for structured outputs:

```typescript
// lib/openai/functions.ts
import openai from "./client";
import type { ChatCompletionTool } from "openai/resources/chat/completions";

// Tool definitions using the current `tools` API (the older top-level
// `functions` parameter is deprecated)
const tools: ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "search_prompts",
      description: "Search for prompts in the directory",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
          category: { type: "string", enum: ["react", "nextjs", "typescript", "python"] },
          limit: { type: "number", description: "Max results" },
        },
        required: ["query"],
      },
    },
  },
];

export async function processWithFunctions(userMessage: string) {
  const response = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [{ role: "user", content: userMessage }],
    tools,
    tool_choice: "auto",
  });

  const message = response.choices[0].message;
  const call = message.tool_calls?.[0];

  if (call && call.type === "function") {
    const args = JSON.parse(call.function.arguments);
    // Hand the parsed call back to the caller to execute
    return { functionCall: call.function.name, args };
  }

  return { content: message.content };
}
```
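
Executing the returned call is left to the caller. A dispatcher might look like the sketch below; `searchPrompts` and `answerWithTools` are hypothetical names, and the database query is omitted:

```typescript
// Hypothetical dispatcher for the tool calls above
import { processWithFunctions } from "@/lib/openai/functions";

async function searchPrompts(args: { query: string; category?: string; limit?: number }) {
  // ...query your data store here; implementation omitted
  return [];
}

export async function answerWithTools(userMessage: string) {
  const result = await processWithFunctions(userMessage);

  if ("functionCall" in result && result.functionCall === "search_prompts") {
    return searchPrompts(result.args);
  }
  return result.content;
}
```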

## Best Practices

1. **Rate Limiting**: Enforce per-user rate limits (as in the Upstash setup above) to control costs
2. **Streaming**: Use streaming for a better UX on long responses
3. **Error Handling**: Handle API errors gracefully; the client's `maxRetries` covers transient failures
4. **Token Counting**: Track token usage for cost monitoring (see the sketch after this list)
5. **Caching**: Cache responses for repeated queries (see the sketch after this list)
6. **Model Selection**: Match the model to the task; smaller models are often enough for simple ones
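
The last two items can be wired up with the same Upstash Redis client used for rate limiting. A minimal sketch, assuming that setup; the key names (`usage:*`, `openai:cache:*`) and the `trackedCompletion`/`cachedCompletion` helpers are illustrative, not part of any library API:

```typescript
// lib/openai/usage.ts — token-tracking and caching sketches
import { createHash } from "crypto";
import { Redis } from "@upstash/redis";
import openai from "./client";
import { generateChatCompletion } from "./chat";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

// Track per-user token consumption. Non-streaming responses report exact
// counts in `completion.usage`, so we persist a running total in Redis.
export async function trackedCompletion(
  userId: string,
  messages: ChatCompletionMessageParam[]
): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages,
  });

  await redis.incrby(`usage:${userId}`, completion.usage?.total_tokens ?? 0);
  return completion.choices[0]?.message?.content || "";
}

// Cache completions for repeated queries, keyed on a hash of the messages
export async function cachedCompletion(
  messages: ChatCompletionMessageParam[],
  ttlSeconds = 3600
): Promise<string> {
  const key =
    "openai:cache:" +
    createHash("sha256").update(JSON.stringify(messages)).digest("hex");

  const cached = await redis.get<string>(key);
  if (cached) return cached;

  const fresh = await generateChatCompletion(messages);
  await redis.set(key, fresh, { ex: ttlSeconds });
  return fresh;
}
```

Hashing the full message array keeps the cache key deterministic; skip caching for prompts where you rely on non-zero temperature to produce varied output.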

When to Use This Prompt

This OpenAI prompt is ideal for developers working on:

  • OpenAI-powered applications requiring modern best practices and optimal performance
  • Projects that need production-ready OpenAI integration code with proper error handling
  • Teams looking to standardize their OpenAI development workflow
  • Developers wanting to learn industry-standard OpenAI patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams looking to maintain consistency across their OpenAI implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the OpenAI integration code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this OpenAI prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For OpenAI projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
