# AI Integration Patterns for Google Antigravity
Integrate AI models and LLMs effectively into your applications with Google Antigravity IDE.
## Vercel AI SDK Setup
```typescript
// app/api/chat/route.ts
import { streamText, convertToCoreMessages } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages, model = "gpt-4" } = await req.json();

  // Route claude-* model names to Anthropic; everything else goes to OpenAI.
  const provider = model.startsWith("claude") ? anthropic : openai;

  const result = await streamText({
    model: provider(model),
    messages: convertToCoreMessages(messages),
    system: `You are a helpful coding assistant.
Provide clear, concise answers with code examples when appropriate.
Use markdown formatting for better readability.`,
    temperature: 0.7,
    maxTokens: 2000
  });

  return result.toDataStreamResponse();
}
```
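Note that the route forwards the `model` string straight from the request body to a provider. In production you may want to restrict it to a known allow-list first. A minimal sketch, assuming you maintain the list yourself (the `resolveModel` helper and its model names are illustrative, not part of the AI SDK):

```typescript
// lib/ai/models.ts (hypothetical allow-list; adjust to the models your API keys cover)
const MODEL_ALLOWLIST = new Set([
  "gpt-4",
  "gpt-4-turbo",
  "claude-3-5-sonnet-20241022"
]);

export function resolveModel(requested: unknown): string {
  // Fall back to a safe default instead of forwarding arbitrary client strings.
  return typeof requested === "string" && MODEL_ALLOWLIST.has(requested)
    ? requested
    : "gpt-4";
}
```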
## React Chat Component
```typescript
// components/Chat.tsx
"use client";
import { useChat } from "ai/react";
import { useRef, useEffect } from "react";

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, error } = useChat({
    api: "/api/chat",
    onError: (error) => {
      console.error("Chat error:", error);
    }
  });

  const messagesEndRef = useRef<HTMLDivElement>(null);
  useEffect(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);

  return (
    <div className="flex flex-col h-[600px]">
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.role === "user" ? "justify-end" : "justify-start"}`}
          >
            <div
              className={`max-w-[80%] rounded-lg p-4 ${
                message.role === "user"
                  ? "bg-blue-500 text-white"
                  : "bg-gray-100 text-gray-900"
              }`}
            >
              <div className="prose prose-sm">
                {message.content}
              </div>
            </div>
          </div>
        ))}
        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 rounded-lg p-4">
              <span className="animate-pulse">Thinking...</span>
            </div>
          </div>
        )}
        <div ref={messagesEndRef} />
      </div>
      {error && (
        <div className="p-2 bg-red-100 text-red-600 text-sm">
          Error: {error.message}
        </div>
      )}
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Ask a question..."
            className="flex-1 p-2 border rounded-lg"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={isLoading || !input.trim()}
            className="px-4 py-2 bg-blue-500 text-white rounded-lg disabled:opacity-50"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  );
}
```
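The system prompt asks the model to reply in markdown, but the component above renders `message.content` as plain text. If you want rendered markdown, one option is to swap in a small wrapper component. A sketch, assuming the third-party `react-markdown` package is installed (it is not part of the AI SDK):

```typescript
// components/MessageContent.tsx (optional; requires the react-markdown package)
import ReactMarkdown from "react-markdown";

export function MessageContent({ content }: { content: string }) {
  // Renders the assistant's markdown (code fences, lists, links) as HTML.
  return (
    <div className="prose prose-sm">
      <ReactMarkdown>{content}</ReactMarkdown>
    </div>
  );
}
```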
## Structured Output Generation
```typescript
// lib/ai/structured.ts
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const productSchema = z.object({
  name: z.string().describe("Product name"),
  description: z.string().describe("Product description"),
  price: z.number().positive().describe("Price in USD"),
  category: z.enum(["electronics", "clothing", "home", "sports"]),
  tags: z.array(z.string()).describe("Relevant tags for search"),
  specifications: z.record(z.string()).describe("Technical specifications")
});

export async function generateProductListing(prompt: string) {
  const { object } = await generateObject({
    model: openai("gpt-4-turbo"),
    schema: productSchema,
    prompt: `Generate a product listing based on: ${prompt}`
  });
  return object;
}

// Code generation
const codeSchema = z.object({
  language: z.string(),
  code: z.string(),
  explanation: z.string(),
  dependencies: z.array(z.string())
});

export async function generateCode(task: string, language: string) {
  const { object } = await generateObject({
    model: openai("gpt-4-turbo"),
    schema: codeSchema,
    prompt: `Generate ${language} code to: ${task}.
Include necessary imports and a brief explanation.`
  });
  return object;
}
```
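Because `generateObject` validates the model's output against the Zod schema, the returned objects are fully typed; there is no manual JSON parsing. A quick usage sketch (the example prompts are arbitrary):

```typescript
// Example usage: both results are typed from the schemas above.
const listing = await generateProductListing("a lightweight trail-running shoe");
console.log(listing.name, listing.price, listing.category);

const snippet = await generateCode("debounce a function", "typescript");
console.log(snippet.code);
console.log("Dependencies:", snippet.dependencies.join(", "));
```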
## RAG Implementation
```typescript
// lib/ai/rag.ts
import { embed, embedMany, streamText, type CoreMessage } from "ai";
import { openai } from "@ai-sdk/openai";
import { Index } from "@upstash/vector";

const vectorIndex = new Index({
  url: process.env.UPSTASH_VECTOR_URL!,
  token: process.env.UPSTASH_VECTOR_TOKEN!
});

// Ingest documents: embed each document and upsert it into the vector index.
export async function ingestDocuments(
  documents: { id: string; content: string; metadata: Record<string, unknown> }[]
) {
  const { embeddings } = await embedMany({
    model: openai.embedding("text-embedding-3-small"),
    values: documents.map(d => d.content)
  });

  // embedMany preserves input order, so embeddings[i] belongs to documents[i].
  const vectors = documents.map((doc, i) => ({
    id: doc.id,
    vector: embeddings[i],
    metadata: { ...doc.metadata, content: doc.content }
  }));

  await vectorIndex.upsert(vectors);
}

// Query with RAG: embed the query and pull the top-K most similar chunks.
export async function queryWithContext(query: string, topK = 5) {
  const { embedding } = await embed({
    model: openai.embedding("text-embedding-3-small"),
    value: query
  });

  const results = await vectorIndex.query({
    vector: embedding,
    topK,
    includeMetadata: true
  });

  return results
    .map(r => r.metadata?.content)
    .filter(Boolean)
    .join("\n\n");
}

// RAG-enhanced chat: prepend the retrieved context to the system prompt.
export async function chatWithRAG(messages: CoreMessage[], query: string) {
  const context = await queryWithContext(query);
  const systemPrompt = `Use the following context to answer questions:
${context}

If the context does not contain relevant information, say so.`;

  return streamText({
    model: openai("gpt-4-turbo"),
    system: systemPrompt,
    messages
  });
}
```
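`ingestDocuments` assumes your documents are already split into embedding-sized chunks. A naive paragraph-based chunker is sketched below; the character threshold is arbitrary, and real pipelines usually use token-aware splitting instead:

```typescript
// Hypothetical helper: split raw text into rough chunks before ingestion.
export function chunkText(text: string, maxChars = 1500): string[] {
  const paragraphs = text.split(/\n{2,}/);
  const chunks: string[] = [];
  let current = "";
  for (const p of paragraphs) {
    // Start a new chunk when adding this paragraph would exceed the limit.
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current);
      current = p;
    } else {
      current = current ? `${current}\n\n${p}` : p;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```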
## Best Practices
1. **Stream responses** for better UX
2. **Use structured outputs** for reliable parsing
3. **Implement RAG** for domain-specific knowledge
4. **Cache embeddings** to reduce costs
5. **Handle rate limits** with retries (see the sketch after this list)
6. **Monitor token usage** and costs
7. **Validate AI outputs** before using them
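For point 5, a simple exponential-backoff wrapper is often enough. A sketch, assuming you are fine retrying on any error (a production version would inspect the error and retry only on rate-limit responses):

```typescript
// Hypothetical retry helper with exponential backoff for rate-limited calls.
export async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off 1s, 2s, 4s, ... before the next attempt.
      const delay = 1000 * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage: const listing = await withRetries(() => generateProductListing(prompt));
```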
Google Antigravity integrates seamlessly with AI models to enhance your development workflow.