# OpenAI API Integration for Google Antigravity
Integrate OpenAI APIs with Google Antigravity's Gemini 3 engine. This guide covers chat completions, streaming responses, function calling, and production patterns.
## OpenAI Client Setup
```typescript
// lib/openai.ts
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  timeout: 30000, // 30-second request timeout
  maxRetries: 3,
});

export { openai };

// Types
export interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'function';
  content: string;
  name?: string;
  function_call?: {
    name: string;
    arguments: string;
  };
}

export interface ChatOptions {
  model?: string;
  temperature?: number;
  maxTokens?: number;
  functions?: OpenAI.Chat.ChatCompletionCreateParams.Function[];
  stream?: boolean;
}
```
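One gap in the setup above: the client accepts an undefined `apiKey` and only fails on the first request. A fail-fast helper makes misconfiguration obvious at startup. This is a sketch, not part of the OpenAI SDK; `requireEnv` is our own name:

```typescript
// Fail fast on missing environment variables instead of deferring
// the error to the first API call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage when constructing the client:
// const openai = new OpenAI({ apiKey: requireEnv('OPENAI_API_KEY') });
```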
## Chat Completion Service
```typescript
// services/ai.ts
import { openai, ChatMessage, ChatOptions } from '@/lib/openai';
import OpenAI from 'openai';

const DEFAULT_MODEL = 'gpt-4-turbo-preview';
const DEFAULT_TEMPERATURE = 0.7;
const DEFAULT_MAX_TOKENS = 2000;

export class AIService {
  async chat(
    messages: ChatMessage[],
    options: ChatOptions = {}
  ): Promise<string> {
    const {
      model = DEFAULT_MODEL,
      temperature = DEFAULT_TEMPERATURE,
      maxTokens = DEFAULT_MAX_TOKENS,
      functions,
    } = options;

    const response = await openai.chat.completions.create({
      model,
      messages,
      temperature,
      max_tokens: maxTokens,
      ...(functions && { functions, function_call: 'auto' }),
    });

    const message = response.choices[0].message;

    // Handle function calls
    if (message.function_call) {
      const functionResult = await this.executeFunction(
        message.function_call.name,
        JSON.parse(message.function_call.arguments)
      );

      // Continue conversation with function result
      return this.chat(
        [
          ...messages,
          message as ChatMessage,
          {
            role: 'function',
            name: message.function_call.name,
            content: JSON.stringify(functionResult),
          },
        ],
        options
      );
    }

    return message.content || '';
  }

  async *chatStream(
    messages: ChatMessage[],
    options: ChatOptions = {}
  ): AsyncGenerator<string> {
    const {
      model = DEFAULT_MODEL,
      temperature = DEFAULT_TEMPERATURE,
      maxTokens = DEFAULT_MAX_TOKENS,
    } = options;

    const stream = await openai.chat.completions.create({
      model,
      messages,
      temperature,
      max_tokens: maxTokens,
      stream: true,
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) {
        yield content;
      }
    }
  }

  private async executeFunction(
    name: string,
    args: Record<string, any>
  ): Promise<any> {
    switch (name) {
      case 'get_weather':
        return this.getWeather(args.location);
      case 'search_products':
        return this.searchProducts(args.query, args.category);
      case 'create_task':
        return this.createTask(args.title, args.description);
      default:
        throw new Error(`Unknown function: ${name}`);
    }
  }

  private async getWeather(location: string) {
    // Implementation
    return { location, temperature: 72, condition: 'sunny' };
  }

  private async searchProducts(query: string, category?: string) {
    // Implementation
    return [{ id: '1', name: 'Product', price: 29.99 }];
  }

  private async createTask(title: string, description: string) {
    // Implementation
    return { id: '1', title, description, status: 'pending' };
  }
}

export const aiService = new AIService();
```
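The service caps the response at `DEFAULT_MAX_TOKENS`, but long conversations can also overflow the model's input window. A minimal sketch of history truncation follows; the ~4-characters-per-token estimate is only a heuristic (use a real tokenizer such as tiktoken for accuracy), and `truncateHistory` is our own helper, not part of the service above. The `ChatMessage` shape is restated here so the snippet stands alone:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'function';
  content: string;
}

// Rough token estimate: ~4 characters per token (heuristic only).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep system messages plus the most recent messages that fit the budget.
function truncateHistory(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  let budget = maxTokens - system.reduce((n, m) => n + estimateTokens(m.content), 0);
  const kept: ChatMessage[] = [];
  // Walk backwards so the newest messages survive.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

Calling this before each `chat()` invocation keeps the request under a fixed budget while preserving the system prompt.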
## Streaming API Route
```typescript
// app/api/chat/route.ts
import { NextRequest } from 'next/server';
import { openai } from '@/lib/openai';
import { auth } from '@/lib/auth';

export const runtime = 'edge';

export async function POST(request: NextRequest) {
  const session = await auth();
  if (!session) {
    return new Response('Unauthorized', { status: 401 });
  }

  const { messages, model = 'gpt-4-turbo-preview' } = await request.json();

  try {
    const stream = await openai.chat.completions.create({
      model,
      messages,
      stream: true,
    });

    // Create a readable stream
    const encoder = new TextEncoder();
    const readable = new ReadableStream({
      async start(controller) {
        for await (const chunk of stream) {
          const content = chunk.choices[0]?.delta?.content;
          if (content) {
            controller.enqueue(encoder.encode(`data: ${JSON.stringify({ content })}\n\n`));
          }
        }
        controller.enqueue(encoder.encode('data: [DONE]\n\n'));
        controller.close();
      },
    });

    return new Response(readable, {
      headers: {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
      },
    });
  } catch (error) {
    console.error('OpenAI error:', error);
    return new Response(
      JSON.stringify({ error: 'Failed to generate response' }),
      { status: 500 }
    );
  }
}
```
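The route emits `data: {json}\n\n` events terminated by `data: [DONE]`. On the client, those events can be consumed with `fetch` and a stream reader. The sketch below matches that event format; `parseSSE` and `streamChat` are our own names, and a production client should also handle reconnects and aborts:

```typescript
// Parse complete `data: ...\n\n` events out of a buffer.
// Returns the extracted content tokens and any trailing partial event.
function parseSSE(buffer: string): { tokens: string[]; rest: string } {
  const tokens: string[] = [];
  const events = buffer.split('\n\n');
  const rest = events.pop() ?? ''; // last piece may be incomplete
  for (const event of events) {
    const line = event.trim();
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6);
    if (data === '[DONE]') continue;
    const parsed = JSON.parse(data) as { content?: string };
    if (parsed.content) tokens.push(parsed.content);
  }
  return { tokens, rest };
}

// Stream the assistant reply, invoking onToken for each content delta.
async function streamChat(messages: unknown[], onToken: (t: string) => void) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  });
  if (!res.ok || !res.body) throw new Error(`Chat request failed: ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const { tokens, rest } = parseSSE(buffer);
    buffer = rest;
    tokens.forEach(onToken);
  }
}
```

Keeping the unparsed remainder in `buffer` matters: network chunks can split an event mid-JSON, so only complete `\n\n`-terminated events are parsed.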
## Function Calling Definitions
```typescript
// lib/openai/functions.ts
import OpenAI from 'openai';

export const chatFunctions: OpenAI.Chat.ChatCompletionCreateParams.Function[] = [
  {
    name: 'get_weather',
    description: 'Get the current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'The city and state, e.g. San Francisco, CA',
        },
        unit: {
          type: 'string',
          enum: ['celsius', 'fahrenheit'],
          description: 'Temperature unit',
        },
      },
      required: ['location'],
    },
  },
  {
    name: 'search_products',
    description: 'Search for products in the catalog',
    parameters: {
      type: 'object',
      properties: {
        query: {
          type: 'string',
          description: 'Search query',
        },
        category: {
          type: 'string',
          description: 'Product category to filter by',
        },
        maxPrice: {
          type: 'number',
          description: 'Maximum price filter',
        },
      },
      required: ['query'],
    },
  },
  {
    name: 'create_task',
    description: 'Create a new task in the task management system',
    parameters: {
      type: 'object',
      properties: {
        title: {
          type: 'string',
          description: 'Task title',
        },
        description: {
          type: 'string',
          description: 'Task description',
        },
        priority: {
          type: 'string',
          enum: ['low', 'medium', 'high'],
          description: 'Task priority',
        },
        dueDate: {
          type: 'string',
          format: 'date',
          description: 'Due date in YYYY-MM-DD format',
        },
      },
      required: ['title'],
    },
  },
];
```
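Because `function_call.arguments` is a model-generated JSON string, the parsed arguments are worth validating against the schema's `required` list before dispatching. The sketch below checks only required and unknown keys; a full solution would use a JSON Schema validator such as Ajv, and `validateArgs` is our own helper name:

```typescript
// A minimal slice of the function-definition shape used above.
interface FunctionSchema {
  name: string;
  parameters: {
    type: 'object';
    properties: Record<string, { type: string }>;
    required?: string[];
  };
}

// Check that every required property is present and reject unknown keys.
function validateArgs(
  schema: FunctionSchema,
  args: Record<string, unknown>
): { ok: true } | { ok: false; error: string } {
  for (const key of schema.parameters.required ?? []) {
    if (!(key in args)) {
      return { ok: false, error: `Missing required argument: ${key}` };
    }
  }
  for (const key of Object.keys(args)) {
    if (!(key in schema.parameters.properties)) {
      return { ok: false, error: `Unknown argument: ${key}` };
    }
  }
  return { ok: true };
}
```

On failure, returning the error string back to the model as the function result often lets it correct its own call on the next turn.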
## React Chat Component
```typescript
// components/AIChat.tsx
'use client';

import { useState, useRef, useCallback } from 'react';
import { useChat } from 'ai/react';

export function AIChat() {
  // Note: useChat speaks the Vercel AI SDK's stream protocol. If the
  // /api/chat route emits raw SSE (as above), adapt the route (e.g. with
  // OpenAIStream + StreamingTextResponse) or parse the SSE manually.
  const { messages, input, handleInputChange, handleSubmit, isLoading, error } = useChat({
    api: '/api/chat',
    onError: (error) => {
      console.error('Chat error:', error);
    },
  });

  return (
    <div className="flex flex-col h-full max-w-2xl mx-auto">
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'}`}
          >
            <div
              className={`max-w-[80%] px-4 py-2 rounded-lg ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-100 text-gray-900'
              }`}
            >
              <div className="prose prose-sm">{message.content}</div>
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 px-4 py-2 rounded-lg">
              <div className="flex space-x-2">
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce" />
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce delay-100" />
                <div className="w-2 h-2 bg-gray-400 rounded-full animate-bounce delay-200" />
              </div>
            </div>
          </div>
        )}
      </div>
      {error && (
        <div className="px-4 py-2 bg-red-50 text-red-600 text-sm">
          {error.message}
        </div>
      )}
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            type="text"
            value={input}
            onChange={handleInputChange}
            placeholder="Ask anything..."
            className="flex-1 px-4 py-2 border rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500"
          />
          <button
            type="submit"
            disabled={isLoading || !input.trim()}
            className="px-6 py-2 bg-blue-500 text-white rounded-lg disabled:opacity-50"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  );
}
```
## Best Practices
When integrating OpenAI APIs, follow these patterns:

- Implement streaming for better UX.
- Use function calling for structured outputs.
- Add retry logic for resilience.
- Cache responses when appropriate.
- Monitor token usage and costs.
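The retry advice can be made concrete. The OpenAI client already retries via `maxRetries`, but a generic wrapper is useful for surrounding work such as function execution. This is a sketch; `backoffDelay` and `withRetries` are our own names, and production code would usually add random jitter to the delays:

```typescript
// Exponential backoff: base * 2^attempt, capped.
function backoffDelay(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry an async operation, backing off between attempts.
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) await sleep(backoffDelay(attempt));
    }
  }
  throw lastError;
}
```

A hypothetical usage, given the service defined earlier, would be `withRetries(() => aiService.chat(messages))`.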