Google Antigravity Directory

The #1 directory for Google Antigravity prompts, rules, workflows & MCP servers. Optimized for Gemini 3 agentic development.


© 2026 Antigravity AI Directory. All rights reserved.


This website is not affiliated with, endorsed by, or associated with Google LLC. "Google" and "Gemini" are trademarks of Google LLC.


Vercel AI SDK Patterns

Building AI applications with the Vercel AI SDK, including streaming, tool calling, and multi-provider support

Vercel AI SDK · AI · Streaming · React
by Antigravity Team
.antigravity
# Vercel AI SDK Patterns for Google Antigravity

Build AI applications with Vercel AI SDK using Google Antigravity's Gemini 3 engine. This guide covers streaming UI, tool calling, multi-provider support, and generative UI patterns.
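The examples below assume the core SDK and the provider packages are installed (package names as published on npm):

```shell
npm install ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google zod
```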

## SDK Setup

```typescript
// lib/ai/providers.ts
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { createGoogleGenerativeAI } from '@ai-sdk/google';

export const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

export const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_AI_API_KEY,
});

// Model configurations
export const models = {
  'gpt-4-turbo': openai('gpt-4-turbo-preview'),
  'gpt-3.5-turbo': openai('gpt-3.5-turbo'),
  'claude-3-opus': anthropic('claude-3-opus-20240229'),
  'claude-3-sonnet': anthropic('claude-3-sonnet-20240229'),
  'gemini-pro': google('gemini-pro'),
} as const;

export type ModelId = keyof typeof models;
```
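Client-supplied model ids should be validated before they are used to index this registry. A minimal sketch of a lookup with a safe fallback — `resolveModelId` and its default are illustrative names, not part of the SDK:

```typescript
// Hypothetical helper: validate an untrusted model id against the registry
// keys, falling back to a known-good default instead of crashing at runtime.
const MODEL_IDS = [
  'gpt-4-turbo',
  'gpt-3.5-turbo',
  'claude-3-opus',
  'claude-3-sonnet',
  'gemini-pro',
] as const;

type KnownModelId = (typeof MODEL_IDS)[number];

function resolveModelId(
  requested: unknown,
  fallback: KnownModelId = 'gpt-4-turbo',
): KnownModelId {
  return typeof requested === 'string' &&
    (MODEL_IDS as readonly string[]).includes(requested)
    ? (requested as KnownModelId)
    : fallback;
}

console.log(resolveModelId('gemini-pro')); // → gemini-pro
console.log(resolveModelId('not-a-model')); // → gpt-4-turbo
```

In a route handler you would call this on the parsed request body before touching `models`.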

## Streaming Chat Route

```typescript
// app/api/chat/route.ts
import { streamText } from 'ai';
import { models, type ModelId } from '@/lib/ai/providers';
import { auth } from '@/lib/auth';

export const runtime = 'edge';

export async function POST(req: Request) {
  const session = await auth();
  if (!session) {
    return new Response('Unauthorized', { status: 401 });
  }

  const { messages, model = 'gpt-4-turbo' } = await req.json();

  // Reject unknown model ids instead of indexing the registry with an unchecked cast
  if (!(model in models)) {
    return new Response(`Unknown model: ${model}`, { status: 400 });
  }

  const result = await streamText({
    model: models[model as ModelId],
    messages,
    system: 'You are a helpful AI assistant. Be concise and accurate.',
    temperature: 0.7,
    maxTokens: 2000,
  });

  // toDataStreamResponse() emits the stream protocol that useChat expects
  return result.toDataStreamResponse();
}
```
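A text stream can also be consumed without `useChat` by reading the response body directly. A sketch of such a reader, demonstrated here against a locally constructed stream so it runs standalone (the stub stands in for a real `fetch('/api/chat', ...)` response body):

```typescript
// Read a streaming text body chunk by chunk, decoding UTF-8 incrementally.
async function readTextStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters split across chunks intact
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode();
}

// Stub stream standing in for `(await fetch('/api/chat', ...)).body`:
const stub = new ReadableStream<Uint8Array>({
  start(controller) {
    for (const chunk of ['Hello, ', 'world!']) {
      controller.enqueue(new TextEncoder().encode(chunk));
    }
    controller.close();
  },
});

readTextStream(stub).then((text) => console.log(text)); // → Hello, world!
```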

## Tool Calling

```typescript
// app/api/chat-with-tools/route.ts
import { streamText, tool } from 'ai';
import { z } from 'zod';
import { models } from '@/lib/ai/providers';
// Hypothetical app-specific helpers; implement these against your own services
import { fetchWeather, searchCatalog, saveTask } from '@/lib/services';

export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: models['gpt-4-turbo'],
    messages,
    tools: {
      getWeather: tool({
        description: 'Get the current weather for a location',
        parameters: z.object({
          location: z.string().describe('City and state, e.g. San Francisco, CA'),
          unit: z.enum(['celsius', 'fahrenheit']).optional(),
        }),
        execute: async ({ location, unit = 'fahrenheit' }) => {
          const weather = await fetchWeather(location);
          return {
            location,
            temperature: unit === 'celsius' ? weather.tempC : weather.tempF,
            unit,
            condition: weather.condition,
          };
        },
      }),

      searchProducts: tool({
        description: 'Search for products in the catalog',
        parameters: z.object({
          query: z.string().describe('Search query'),
          category: z.string().optional(),
          maxPrice: z.number().optional(),
        }),
        // Helper is named differently from the tool to avoid confusion
        execute: async ({ query, category, maxPrice }) => {
          const products = await searchCatalog({ query, category, maxPrice });
          return products.slice(0, 5);
        },
      }),

      createTask: tool({
        description: 'Create a new task',
        parameters: z.object({
          title: z.string(),
          description: z.string().optional(),
          priority: z.enum(['low', 'medium', 'high']).default('medium'),
          dueDate: z.string().optional(),
        }),
        execute: async (params) => {
          const task = await saveTask(params);
          return { success: true, taskId: task.id };
        },
      }),
    },
    maxSteps: 5, // Allow multiple tool-call round trips
  });

  return result.toDataStreamResponse();
}
```
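`maxSteps` lets the model call a tool, observe its result, and continue, up to the step limit. Conceptually the SDK runs a loop like the following simplified sketch (this is an illustration of the semantics, not the SDK's actual implementation):

```typescript
// Simplified agent loop illustrating maxSteps semantics (not SDK code).
type ModelTurn =
  | { type: 'tool-call'; name: string; args: unknown }
  | { type: 'text'; text: string };

async function runAgentLoop(
  callModel: (history: unknown[]) => Promise<ModelTurn>,
  tools: Record<string, (args: unknown) => Promise<unknown>>,
  maxSteps: number,
): Promise<string> {
  const history: unknown[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const turn = await callModel(history);
    if (turn.type === 'text') return turn.text; // a plain text reply ends the loop
    const toolFn = tools[turn.name];
    if (!toolFn) throw new Error(`Unknown tool: ${turn.name}`);
    // Execute the requested tool and feed the result back to the model
    const result = await toolFn(turn.args);
    history.push({ toolCall: turn.name, result });
  }
  return '[max steps reached]';
}
```

Each tool result becomes part of the conversation the model sees on its next turn, which is how a single user question can trigger a weather lookup followed by a natural-language answer.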

## Generative UI with RSC

```tsx
// app/actions.tsx
'use server';

import { createStreamableUI, createStreamableValue } from 'ai/rsc';
import { streamText } from 'ai';
import { z } from 'zod';
import { models } from '@/lib/ai/providers';
import { ProductCard } from '@/components/ProductCard';
import { WeatherCard } from '@/components/WeatherCard';
import { LoadingSpinner } from '@/components/LoadingSpinner';
// fetchWeather and getProducts are hypothetical app-specific data helpers

export async function chat(messages: any[]) {
  const ui = createStreamableUI(<LoadingSpinner />);
  const textStream = createStreamableValue('');

  (async () => {
    const result = await streamText({
      model: models['gpt-4-turbo'],
      messages,
      tools: {
        showWeather: {
          description: 'Show weather for a location',
          parameters: z.object({
            location: z.string(),
          }),
          execute: async ({ location }) => {
            const weather = await fetchWeather(location);

            ui.update(
              <WeatherCard
                location={location}
                temperature={weather.temperature}
                condition={weather.condition}
                icon={weather.icon}
              />
            );

            return `Weather displayed for ${location}`;
          },
        },

        showProducts: {
          description: 'Show product recommendations',
          parameters: z.object({
            category: z.string(),
          }),
          execute: async ({ category }) => {
            const products = await getProducts(category);

            ui.update(
              <div className="grid grid-cols-2 gap-4">
                {products.map((product) => (
                  <ProductCard key={product.id} product={product} />
                ))}
              </div>
            );

            return `Showing ${products.length} products in ${category}`;
          },
        },
      },
    });

    let fullText = '';
    for await (const chunk of result.textStream) {
      // Accumulate chunks so the UI shows the full text so far, not just
      // the latest token
      fullText += chunk;
      textStream.update(fullText);
      ui.update(
        <div className="prose">
          <p>{fullText}</p>
        </div>
      );
    }

    // Close both streams only after the model output is exhausted
    textStream.done(fullText);
    ui.done();
  })();

  return {
    ui: ui.value,
    text: textStream.value,
  };
}
```

## useChat Hook

```tsx
// components/Chat.tsx
'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';
import type { ModelId } from '@/lib/ai/providers';

export function Chat() {
  const [model, setModel] = useState<ModelId>('gpt-4-turbo');

  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    isLoading,
    error,
    reload,
    stop,
  } = useChat({
    api: '/api/chat',
    body: { model },
    onFinish: (message) => {
      console.log('Message completed:', message);
    },
    onError: (error) => {
      console.error('Chat error:', error);
    },
  });

  return (
    <div className="flex flex-col h-screen">
      <header className="p-4 border-b flex justify-between items-center">
        <h1 className="text-lg font-semibold">AI Chat</h1>
        <select
          value={model}
          onChange={(e) => setModel(e.target.value as ModelId)}
          className="px-3 py-1 border rounded"
        >
          <option value="gpt-4-turbo">GPT-4 Turbo</option>
          <option value="gpt-3.5-turbo">GPT-3.5 Turbo</option>
          <option value="claude-3-opus">Claude 3 Opus</option>
          <option value="claude-3-sonnet">Claude 3 Sonnet</option>
          <option value="gemini-pro">Gemini Pro</option>
        </select>
      </header>

      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'}`}
          >
            <div
              className={`max-w-[80%] px-4 py-2 rounded-lg ${
                message.role === 'user'
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-100'
              }`}
            >
              {message.content}
            </div>
          </div>
        ))}

        {isLoading && (
          <div className="flex justify-start">
            <div className="bg-gray-100 px-4 py-2 rounded-lg animate-pulse">
              Thinking...
            </div>
          </div>
        )}
      </div>

      {error && (
        <div className="px-4 py-2 bg-red-50 text-red-600">
          Error: {error.message}
          <button onClick={reload} className="ml-2 underline">
            Retry
          </button>
        </div>
      )}

      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder="Type your message..."
            className="flex-1 px-4 py-2 border rounded-lg"
          />
          {isLoading ? (
            <button type="button" onClick={stop} className="px-4 py-2 bg-red-500 text-white rounded-lg">
              Stop
            </button>
          ) : (
            <button type="submit" className="px-4 py-2 bg-blue-500 text-white rounded-lg">
              Send
            </button>
          )}
        </div>
      </form>
    </div>
  );
}
```

## Best Practices

Google Antigravity's Gemini 3 engine recommends these Vercel AI SDK patterns:

- Use streaming for real-time responses.
- Implement tool calling for structured actions.
- Support multiple AI providers for flexibility.
- Use generative UI for rich interactions.
- Handle errors gracefully with retry options.
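For the last point, transient provider errors (rate limits, timeouts) are usually worth retrying with exponential backoff before surfacing them to the user. A generic sketch — `withRetry` is an illustrative helper, not an SDK export:

```typescript
// Retry an async operation with exponential backoff: 250ms, 500ms, 1000ms, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break; // out of attempts; rethrow below
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage sketch: wrap a model call that may hit a transient rate limit.
// const { text } = await withRetry(() => generateText({ model, prompt }));
```

In production you would typically retry only on retryable status codes (429, 5xx) rather than on every error.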

When to Use This Prompt

This Vercel AI SDK prompt is ideal for developers working on:

  • Vercel AI SDK applications requiring modern best practices and optimal performance
  • Projects that need production-ready Vercel AI SDK code with proper error handling
  • Teams looking to standardize their Vercel AI SDK development workflow
  • Developers wanting to learn industry-standard Vercel AI SDK patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams looking to maintain consistency across their Vercel AI SDK implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the Vercel AI SDK code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this Vercel AI SDK prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For Vercel AI SDK projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
