# OpenAI Assistants API Patterns for Google Antigravity
Build intelligent AI assistants with OpenAI Assistants API in your Google Antigravity IDE projects. This comprehensive guide covers assistant creation, threads, tool usage, and file handling optimized for Gemini 3 agentic development.
## Assistant Configuration
Set up the OpenAI client and create assistants:
```typescript
// lib/openai.ts
import OpenAI from 'openai';

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Assistant configuration
export const ASSISTANT_CONFIG: OpenAI.Beta.AssistantCreateParams = {
  name: 'Antigravity Code Assistant',
  instructions: `You are an expert coding assistant for Google Antigravity IDE.
Your role is to help developers write better code, debug issues, and learn best practices.

Guidelines:
- Provide clear, concise explanations
- Include code examples with proper TypeScript types
- Reference best practices and design patterns
- Suggest improvements when appropriate
- Be honest about limitations`,
  model: 'gpt-4-turbo-preview',
  tools: [
    { type: 'code_interpreter' },
    // 'retrieval' was renamed to 'file_search' in Assistants API v2
    { type: 'file_search' },
    {
      type: 'function',
      function: {
        name: 'search_prompts',
        description: 'Search the Antigravity prompt library for relevant prompts',
        parameters: {
          type: 'object',
          properties: {
            query: {
              type: 'string',
              description: 'The search query',
            },
            category: {
              type: 'string',
              description: 'Optional category filter',
            },
          },
          required: ['query'],
        },
      },
    },
  ],
};

// Create the assistant once and reuse it across requests
export async function getOrCreateAssistant(): Promise<OpenAI.Beta.Assistant> {
  const assistants = await openai.beta.assistants.list({ limit: 100 });
  const existing = assistants.data.find(
    (a) => a.name === ASSISTANT_CONFIG.name
  );
  if (existing) {
    return existing;
  }
  return openai.beta.assistants.create(ASSISTANT_CONFIG);
}
```
## Thread Management
Handle conversation threads:
```typescript
// lib/threads.ts
import { openai } from './openai';

// Create a new thread
export async function createThread(metadata?: Record<string, string>) {
  return openai.beta.threads.create({
    metadata,
  });
}

// Add a message to a thread
export async function addMessage(
  threadId: string,
  content: string,
  fileIds?: string[]
) {
  return openai.beta.threads.messages.create(threadId, {
    role: 'user',
    content,
    attachments: fileIds?.map((fileId) => ({
      file_id: fileId,
      tools: [{ type: 'code_interpreter' }],
    })),
  });
}

// Get thread messages in chronological order
export async function getMessages(threadId: string, limit: number = 20) {
  const response = await openai.beta.threads.messages.list(threadId, {
    limit,
    order: 'desc',
  });
  return response.data.reverse();
}

// Delete a thread
export async function deleteThread(threadId: string) {
  return openai.beta.threads.del(threadId);
}
```
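Messages returned by `getMessages` carry a content *array* rather than a plain string: each part is tagged with a type, and text parts hold their value under `text.value`. A small helper for flattening a message's text parts can keep UI code clean; the interfaces below are a minimal, hand-written slice of the Assistants API message shape, not imported SDK types:

```typescript
// Minimal slice of the Assistants API message content shape (illustrative).
interface TextPart {
  type: 'text';
  text: { value: string };
}
interface ImagePart {
  type: 'image_file';
  image_file: { file_id: string };
}
type ContentPart = TextPart | ImagePart;

// Concatenate all text parts of a message into a single string,
// skipping non-text parts such as generated images.
export function messageText(content: ContentPart[]): string {
  return content
    .filter((part): part is TextPart => part.type === 'text')
    .map((part) => part.text.value)
    .join('\n');
}
```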
## Running Assistants
Execute assistant runs with streaming:
```typescript
// lib/runs.ts
import OpenAI from 'openai';
import { openai, getOrCreateAssistant } from './openai';
import { searchPrompts } from './prompts';

interface RunOptions {
  threadId: string;
  onMessage?: (message: string) => void;
  onToolCall?: (name: string, args: unknown) => void;
  onComplete?: (messages: OpenAI.Beta.Threads.Message[]) => void;
  onError?: (error: Error) => void;
}

export async function runAssistant({
  threadId,
  onMessage,
  onToolCall,
  onComplete,
  onError,
}: RunOptions) {
  try {
    const assistant = await getOrCreateAssistant();
    const stream = openai.beta.threads.runs.stream(threadId, {
      assistant_id: assistant.id,
    });

    // Handle streaming events
    stream
      .on('textDelta', (delta) => {
        if (delta.value) {
          onMessage?.(delta.value);
        }
      })
      .on('toolCallCreated', (toolCall) => {
        onToolCall?.(toolCall.type, toolCall);
      })
      .on('event', async (event) => {
        // Submit outputs for tool calls that require action
        if (
          event.event === 'thread.run.requires_action' &&
          event.data.required_action?.type === 'submit_tool_outputs'
        ) {
          const toolCalls =
            event.data.required_action.submit_tool_outputs.tool_calls;
          const toolOutputs = await processToolCalls(toolCalls);
          await openai.beta.threads.runs.submitToolOutputsStream(
            threadId,
            event.data.id,
            { tool_outputs: toolOutputs }
          );
        }
      })
      .on('error', (error) => {
        onError?.(error);
      });

    // Wait for completion
    const finalRun = await stream.finalRun();
    if (finalRun.status === 'completed') {
      const messages = await openai.beta.threads.messages.list(threadId);
      onComplete?.(messages.data);
    }
    return finalRun;
  } catch (error) {
    onError?.(error as Error);
    throw error;
  }
}

// Resolve each function tool call to an output string
async function processToolCalls(
  toolCalls: OpenAI.Beta.Threads.Runs.RequiredActionFunctionToolCall[]
): Promise<OpenAI.Beta.Threads.Runs.RunSubmitToolOutputsParams.ToolOutput[]> {
  return Promise.all(
    toolCalls.map(async (toolCall) => {
      const args = JSON.parse(toolCall.function.arguments);
      let output: string;
      switch (toolCall.function.name) {
        case 'search_prompts': {
          const results = await searchPrompts(args.query, args.category);
          output = JSON.stringify(results);
          break;
        }
        default:
          output = JSON.stringify({ error: 'Unknown function' });
      }
      return {
        tool_call_id: toolCall.id,
        output,
      };
    })
  );
}
```
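As you register more functions, the `switch` in `processToolCalls` tends to sprawl. One alternative is a name-to-handler registry; the sketch below uses synchronous handlers for brevity (real handlers like `searchPrompts` would usually be async), and the registry contents are illustrative, not part of the SDK:

```typescript
// A function-name → handler registry as an alternative to a switch statement.
// Each handler takes parsed arguments and returns a JSON string to submit
// as the tool output.
type ToolHandler = (args: Record<string, unknown>) => string;

const toolRegistry: Record<string, ToolHandler> = {
  // Illustrative stand-in for a real prompt search
  search_prompts: (args) =>
    JSON.stringify({ query: args.query, results: [] }),
};

// Resolve one tool call's raw JSON arguments to an output string.
export function dispatchTool(name: string, rawArgs: string): string {
  const handler = toolRegistry[name];
  if (!handler) {
    return JSON.stringify({ error: `Unknown function: ${name}` });
  }
  return handler(JSON.parse(rawArgs));
}
```

Adding a new tool then becomes a one-line registry entry instead of a new `case` block.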
## File Handling
Upload and manage files for assistants:
```typescript
// lib/files.ts
import { toFile } from 'openai';
import { openai } from './openai';

// Upload a file for the assistant
export async function uploadFile(
  file: File | Buffer,
  filename: string,
  purpose: 'assistants' | 'vision' = 'assistants'
) {
  // toFile converts a Buffer into the SDK's uploadable file shape
  const fileObject = file instanceof Buffer
    ? await toFile(file, filename)
    : file;
  return openai.files.create({
    file: fileObject,
    purpose,
  });
}

// Upload code files for analysis
export async function uploadCodeFiles(files: { name: string; content: string }[]) {
  const uploadedFiles = await Promise.all(
    files.map(async (file) => {
      const buffer = Buffer.from(file.content, 'utf-8');
      return uploadFile(buffer, file.name);
    })
  );
  return uploadedFiles.map((f) => f.id);
}

// Delete a file
export async function deleteFile(fileId: string) {
  return openai.files.del(fileId);
}

// List files
export async function listFiles(purpose?: 'assistants' | 'vision') {
  return openai.files.list({ purpose });
}

// Retrieve file content
export async function getFileContent(fileId: string) {
  const response = await openai.files.content(fileId);
  return response.text();
}
```
## API Route Integration
Create API routes for the assistant:
```typescript
// app/api/assistant/chat/route.ts
import { NextRequest } from 'next/server';
import { createThread, addMessage, getMessages } from '@/lib/threads';
import { runAssistant } from '@/lib/runs';

export async function POST(request: NextRequest) {
  const { message, threadId: existingThreadId } = await request.json();

  // Create or use existing thread
  const threadId = existingThreadId || (await createThread()).id;

  // Add user message
  await addMessage(threadId, message);

  // Create streaming response; each SSE event must end with a blank line
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      await runAssistant({
        threadId,
        onMessage: (text) => {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ type: 'text', content: text })}\n\n`)
          );
        },
        onToolCall: (name, args) => {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ type: 'tool', name, args })}\n\n`)
          );
        },
        onComplete: () => {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ type: 'done', threadId })}\n\n`)
          );
          controller.close();
        },
        onError: (error) => {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ type: 'error', message: error.message })}\n\n`)
          );
          controller.close();
        },
      });
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}

// GET messages from a thread
export async function GET(request: NextRequest) {
  const threadId = request.nextUrl.searchParams.get('threadId');
  if (!threadId) {
    return Response.json({ error: 'Thread ID required' }, { status: 400 });
  }
  const messages = await getMessages(threadId);
  return Response.json({ messages });
}
```
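On the client, the stream from this route can be consumed by splitting the buffered response on blank lines and JSON-parsing each `data:` field. A minimal parser sketch follows; the `AssistantEvent` union mirrors the event objects the route handler emits, and the helper names are our own:

```typescript
// Event shapes emitted by the /api/assistant/chat route (illustrative union).
type AssistantEvent =
  | { type: 'text'; content: string }
  | { type: 'tool'; name: string; args: unknown }
  | { type: 'done'; threadId: string }
  | { type: 'error'; message: string };

// Parse a raw SSE buffer into event objects. Events are separated by a
// blank line; each carries a single `data:` field with a JSON payload.
export function parseSSE(buffer: string): AssistantEvent[] {
  return buffer
    .split('\n\n')
    .map((block) => block.trim())
    .filter((block) => block.startsWith('data: '))
    .map((block) => JSON.parse(block.slice('data: '.length)));
}

// Reassemble streamed text deltas into the full assistant reply.
export function collectText(events: AssistantEvent[]): string {
  return events
    .filter((e): e is Extract<AssistantEvent, { type: 'text' }> => e.type === 'text')
    .map((e) => e.content)
    .join('');
}
```

In a real client you would feed chunks from `response.body.getReader()` into a rolling buffer and parse only completed events, but the splitting logic is the same.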
## Best Practices
1. **Reuse assistants** instead of creating new ones for each user
2. **Use streaming** for responsive user experiences
3. **Implement tool functions** for extended capabilities
4. **Handle errors gracefully** with proper fallbacks
5. **Clean up old threads** to manage storage
6. **Use file attachments** for code analysis tasks
7. **Monitor usage** to control API costs
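For practice 5, one approach is to stamp each thread with a `createdAt` metadata entry at creation time and periodically delete threads older than a retention window. The helper below selects deletion candidates; the `createdAt` key and the retention policy are conventions we're assuming, not part of the API:

```typescript
// Shape we assume for threads stamped with a createdAt ISO timestamp
// in their metadata at creation time (an application convention).
interface TrackedThread {
  id: string;
  metadata: { createdAt?: string };
}

// Return the IDs of threads older than maxAgeMs, as candidates for deletion.
// Threads without a parseable timestamp are kept, to err on the safe side.
export function findStaleThreads(
  threads: TrackedThread[],
  maxAgeMs: number,
  now: number = Date.now()
): string[] {
  return threads
    .filter((t) => {
      const created = t.metadata.createdAt
        ? Date.parse(t.metadata.createdAt)
        : NaN;
      return !Number.isNaN(created) && now - created > maxAgeMs;
    })
    .map((t) => t.id);
}
```

The returned IDs can then be passed one by one to the `deleteThread` helper from `lib/threads.ts`.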