# OpenAI API Integration Patterns
Build AI-powered features in your Google Antigravity applications with OpenAI APIs. This guide covers chat completions, streaming, function calling, and cost optimization.
## OpenAI Client Setup
Configure the OpenAI client with retries and timeouts, plus a per-user rate limiter:
```typescript
// lib/openai/client.ts
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  maxRetries: 3, // SDK retries transient failures (connection errors, 429s, 5xxs)
  timeout: 30000, // abort requests that take longer than 30 seconds
});

export default openai;

// lib/openai/rate-limiter.ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

export const openaiRateLimiter = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(10, "1 m"), // 10 requests per minute per key
  analytics: true,
  prefix: "openai",
});
```
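The SDK retries transient failures automatically, but surfacing readable errors is still up to you. A minimal sketch, assuming you want to map SDK errors to user-facing messages (the `safeCall` helper and its wording are illustrative, not part of the SDK):

```typescript
// lib/openai/safe-call.ts — illustrative helper, not part of the SDK
import OpenAI from "openai";

// Wraps an OpenAI call and converts SDK errors into readable messages
export async function safeCall<T>(fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    if (err instanceof OpenAI.APIError) {
      // APIError carries the HTTP status of the failed request
      if (err.status === 429) throw new Error("OpenAI rate limit hit; try again shortly");
      if (err.status === 401) throw new Error("Invalid OpenAI API key");
      throw new Error(`OpenAI API error (${err.status}): ${err.message}`);
    }
    throw err;
  }
}
```

Wrap any call site, e.g. `safeCall(() => openai.chat.completions.create(...))`.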
## Chat Completion API
Implement chat completions with proper typing:
```typescript
// lib/openai/chat.ts
import openai from "./client";
import { ChatCompletionMessageParam } from "openai/resources";

export interface ChatOptions {
  model?: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
}

export async function generateChatCompletion(
  messages: ChatCompletionMessageParam[],
  options: ChatOptions = {}
): Promise<string> {
  const {
    model = "gpt-4-turbo-preview",
    temperature = 0.7,
    maxTokens = 1000,
    systemPrompt,
  } = options;

  // Prepend the system prompt, if one was provided
  const allMessages: ChatCompletionMessageParam[] = systemPrompt
    ? [{ role: "system", content: systemPrompt }, ...messages]
    : messages;

  const completion = await openai.chat.completions.create({
    model,
    messages: allMessages,
    temperature,
    max_tokens: maxTokens,
  });

  return completion.choices[0]?.message?.content || "";
}
```
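As a usage example, a server-side helper might call it like this (the summarization scenario is illustrative):

```typescript
// Example usage (illustrative): summarize user-provided text
import { generateChatCompletion } from "@/lib/openai/chat";

export async function summarize(text: string) {
  return generateChatCompletion(
    [{ role: "user", content: text }],
    {
      systemPrompt: "Summarize the user's text in two sentences.",
      temperature: 0.3, // lower temperature for more deterministic output
      maxTokens: 200,
    }
  );
}
```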
## Streaming Response
Implement streaming for real-time responses:
```typescript
// app/api/chat/route.ts
import { NextRequest } from "next/server";
import openai from "@/lib/openai/client";
import { openaiRateLimiter } from "@/lib/openai/rate-limiter";
import { createClient } from "@/lib/supabase/server";

export async function POST(request: NextRequest) {
  const supabase = await createClient();
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Per-user rate limiting
  const { success } = await openaiRateLimiter.limit(user.id);
  if (!success) {
    return new Response("Rate limit exceeded", { status: 429 });
  }

  const { messages, systemPrompt } = await request.json();

  const stream = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [
      { role: "system", content: systemPrompt || "You are a helpful assistant." },
      ...messages,
    ],
    stream: true,
    max_tokens: 2000,
  });

  // Re-emit the OpenAI stream as server-sent events
  const encoder = new TextEncoder();
  const readableStream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of stream) {
          const content = chunk.choices[0]?.delta?.content || "";
          if (content) {
            controller.enqueue(encoder.encode(`data: ${JSON.stringify({ content })}\n\n`));
          }
        }
        controller.enqueue(encoder.encode("data: [DONE]\n\n"));
        controller.close();
      } catch (err) {
        // Propagate upstream errors so the client's read() rejects
        controller.error(err);
      }
    },
  });

  return new Response(readableStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```
## React Streaming Hook
Consume streaming responses in React:
```typescript
// hooks/useChat.ts
import { useState, useCallback } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

export function useChat(systemPrompt?: string) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  const sendMessage = useCallback(async (content: string) => {
    const userMessage: Message = { role: "user", content };
    setMessages((prev) => [...prev, userMessage]);
    setIsLoading(true);
    setError(null);

    try {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          messages: [...messages, userMessage],
          systemPrompt,
        }),
      });
      if (!response.ok) throw new Error("Failed to send message");

      const reader = response.body?.getReader();
      if (!reader) throw new Error("Response has no body");
      const decoder = new TextDecoder();
      let assistantContent = "";
      let buffer = "";

      // Placeholder assistant message, updated in place as tokens arrive
      setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // An SSE event can be split across reads, so buffer partial lines
        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split("\n");
        buffer = lines.pop() ?? ""; // keep the last (possibly incomplete) line

        for (const line of lines) {
          if (!line.startsWith("data: ")) continue;
          const data = line.slice(6);
          if (data === "[DONE]") continue;
          try {
            const { content } = JSON.parse(data);
            assistantContent += content;
            setMessages((prev) => [
              ...prev.slice(0, -1),
              { role: "assistant", content: assistantContent },
            ]);
          } catch {
            // Ignore malformed fragments
          }
        }
      }
    } catch (err) {
      setError((err as Error).message);
    } finally {
      setIsLoading(false);
    }
  }, [messages, systemPrompt]);

  return { messages, sendMessage, isLoading, error };
}
```
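A component can then wire the hook to a simple form. A minimal sketch (the `ChatBox` markup is illustrative):

```tsx
// components/ChatBox.tsx — illustrative usage of useChat
"use client";
import { useState } from "react";
import { useChat } from "@/hooks/useChat";

export function ChatBox() {
  const { messages, sendMessage, isLoading, error } = useChat("You are a helpful assistant.");
  const [input, setInput] = useState("");

  return (
    <div>
      {messages.map((m, i) => (
        <p key={i}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      {error && <p role="alert">{error}</p>}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          if (!input.trim()) return;
          sendMessage(input);
          setInput("");
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} disabled={isLoading} />
        <button type="submit" disabled={isLoading}>Send</button>
      </form>
    </div>
  );
}
```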
## Function Calling
Implement function calling with the `tools` API (the older `functions`/`function_call` parameters are deprecated) for structured outputs:
```typescript
// lib/openai/functions.ts
import openai from "./client";
import { ChatCompletionTool } from "openai/resources";

const tools: ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "search_prompts",
      description: "Search for prompts in the directory",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
          category: { type: "string", enum: ["react", "nextjs", "typescript", "python"] },
          limit: { type: "number", description: "Max results" },
        },
        required: ["query"],
      },
    },
  },
];

export async function processWithFunctions(userMessage: string) {
  const response = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [{ role: "user", content: userMessage }],
    tools,
    tool_choice: "auto", // let the model decide whether to call a tool
  });

  const message = response.choices[0].message;
  const toolCall = message.tool_calls?.[0];

  if (toolCall) {
    // Arguments arrive as a JSON string and must be parsed
    const args = JSON.parse(toolCall.function.arguments);
    return { functionCall: toolCall.function.name, args };
  }
  return { content: message.content };
}
```
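The returned call still has to be executed by your own code. A minimal dispatch sketch (the `searchPrompts` implementation is hypothetical):

```typescript
// Illustrative dispatcher: routes a returned tool call to a real implementation
import { processWithFunctions } from "@/lib/openai/functions";

// Hypothetical implementation backed by your own data layer
async function searchPrompts(args: { query: string; category?: string; limit?: number }) {
  // ...query your database here...
  return [];
}

export async function answer(userMessage: string) {
  const result = await processWithFunctions(userMessage);
  if ("functionCall" in result && result.functionCall === "search_prompts") {
    return searchPrompts(result.args);
  }
  return result.content;
}
```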
## Best Practices
1. **Rate Limiting**: Implement per-user rate limits to control costs
2. **Streaming**: Use streaming for better UX on long responses
3. **Error Handling**: Handle API errors gracefully with retries
4. **Token Counting**: Track token usage for cost monitoring
5. **Caching**: Cache responses for repeated queries (items 4 and 5 are sketched after this list)
6. **Model Selection**: Use appropriate models for different tasks, reserving the most capable (and most expensive) models for work that needs them
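As a starting point for items 4 and 5, here is a hedged sketch that logs token usage and caches completions in Upstash Redis (the cache key scheme and one-hour TTL are assumptions):

```typescript
// lib/openai/cached-chat.ts — illustrative token logging + caching sketch
import { createHash } from "crypto";
import { Redis } from "@upstash/redis";
import openai from "./client";

const redis = Redis.fromEnv();

export async function cachedCompletion(prompt: string): Promise<string> {
  // Cache key derived from the prompt; identical prompts reuse the cached answer
  const key = "chat:" + createHash("sha256").update(prompt).digest("hex");
  const cached = await redis.get<string>(key);
  if (cached) return cached;

  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [{ role: "user", content: prompt }],
  });

  // `usage` reports prompt/completion token counts for cost monitoring
  console.log("tokens used:", completion.usage?.total_tokens);

  const content = completion.choices[0]?.message?.content ?? "";
  await redis.set(key, content, { ex: 3600 }); // assumed one-hour TTL
  return content;
}
```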