
Vector Database Patterns

Building AI applications with vector databases including embedding storage, similarity search, and RAG

Vector DatabaseEmbeddingsRAGAI
by Antigravity Team
.antigravity
# Vector Database Patterns for Google Antigravity

Master vector databases with Google Antigravity's Gemini 3 engine. This guide covers embedding generation, similarity search, hybrid queries, and production patterns.
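Similarity search ultimately reduces to comparing embedding vectors, most commonly by cosine similarity. Vector databases compute this server-side, but a minimal standalone implementation is useful for intuition and for debugging locally:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real-valued vectors.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Vectors must have the same dimension');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Parallel vectors score 1, orthogonal vectors score 0 — which is why embedding models that place related texts in similar directions make "nearest by cosine" a good proxy for "semantically related".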

## Pinecone Integration

```typescript
// lib/vectordb/pinecone.ts
import { Pinecone } from '@pinecone-database/pinecone';
import { OpenAIEmbeddings } from '@langchain/openai';

const pinecone = new Pinecone({
  apiKey: process.env.PINECONE_API_KEY!,
});

const embeddings = new OpenAIEmbeddings({
  modelName: 'text-embedding-3-small',
  dimensions: 1536,
});

const index = pinecone.index(process.env.PINECONE_INDEX!);

interface Document {
  id: string;
  content: string;
  metadata: Record<string, any>;
}

export class PineconeService {
  async upsertDocuments(documents: Document[]): Promise<void> {
    const vectors = await Promise.all(
      documents.map(async (doc) => {
        const embedding = await embeddings.embedQuery(doc.content);
        return {
          id: doc.id,
          values: embedding,
          metadata: {
            ...doc.metadata,
            content: doc.content.slice(0, 1000), // Store truncated content
          },
        };
      })
    );

    // Upsert in batches of 100
    const batchSize = 100;
    for (let i = 0; i < vectors.length; i += batchSize) {
      const batch = vectors.slice(i, i + batchSize);
      await index.upsert(batch);
    }
  }

  async search(
    query: string,
    options: {
      topK?: number;
      filter?: Record<string, any>;
      namespace?: string;
    } = {}
  ): Promise<Array<{ id: string; score: number; metadata: any }>> {
    const { topK = 10, filter, namespace } = options;

    const queryEmbedding = await embeddings.embedQuery(query);

    // In current SDK versions the namespace is selected on the index
    // object, not passed in the query options
    const target = namespace ? index.namespace(namespace) : index;

    const results = await target.query({
      vector: queryEmbedding,
      topK,
      filter,
      includeMetadata: true,
    });

    return results.matches.map((match) => ({
      id: match.id,
      score: match.score || 0,
      metadata: match.metadata,
    }));
  }

  async deleteByIds(ids: string[]): Promise<void> {
    await index.deleteMany(ids);
  }

  async deleteByFilter(filter: Record<string, any>): Promise<void> {
    // deleteMany accepts the metadata filter object directly
    await index.deleteMany(filter);
  }
}

export const pineconeService = new PineconeService();
```
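Metadata filters in Pinecone use a Mongo-style operator syntax (`$eq`, `$ne`, `$gt`, `$gte`, `$lt`, `$lte`, `$in`, `$nin`). A quick sketch of a scoped search against the service above — the field names (`category`, `year`, `tags`) are illustrative, not part of the original example:

```typescript
// Illustrative metadata filter — field names are hypothetical
const filter = {
  category: { $eq: 'documentation' },
  year: { $gte: 2023 },
  tags: { $in: ['typescript', 'vectors'] },
};

// Usage sketch (requires a configured index and API key):
// const results = await pineconeService.search('how do I batch upserts?', {
//   topK: 5,
//   filter,
// });
```

Filtering at query time keeps the search scoped without maintaining a separate index per tenant or category.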

## Supabase pgvector

```typescript
// lib/vectordb/supabase.ts
import { createClient } from '@supabase/supabase-js';
import { OpenAIEmbeddings } from '@langchain/openai';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

const embeddings = new OpenAIEmbeddings({
  modelName: 'text-embedding-3-small',
});

interface DocumentInput {
  content: string;
  metadata?: Record<string, any>;
}

export class SupabaseVectorService {
  private tableName = 'documents';

  async createTable(): Promise<void> {
    // Run this SQL in Supabase dashboard
    const sql = `
      CREATE EXTENSION IF NOT EXISTS vector;
      
      CREATE TABLE IF NOT EXISTS documents (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        content TEXT NOT NULL,
        metadata JSONB,
        embedding vector(1536),
        created_at TIMESTAMPTZ DEFAULT NOW()
      );
      
      CREATE INDEX IF NOT EXISTS documents_embedding_idx 
      ON documents USING ivfflat (embedding vector_cosine_ops)
      WITH (lists = 100);
    `;
    console.log('Run this SQL in Supabase:', sql);
  }

  async insert(documents: DocumentInput[]): Promise<string[]> {
    const ids: string[] = [];

    for (const doc of documents) {
      const embedding = await embeddings.embedQuery(doc.content);

      const { data, error } = await supabase
        .from(this.tableName)
        .insert({
          content: doc.content,
          metadata: doc.metadata || {},
          embedding: embedding,
        })
        .select('id')
        .single();

      if (error) throw error;
      ids.push(data.id);
    }

    return ids;
  }

  async search(
    query: string,
    options: {
      limit?: number;
      threshold?: number;
      filter?: Record<string, any>;
    } = {}
  ): Promise<Array<{ id: string; content: string; similarity: number; metadata: any }>> {
    const { limit = 10, threshold = 0.7, filter } = options;

    const queryEmbedding = await embeddings.embedQuery(query);

    // Use RPC function for similarity search
    const { data, error } = await supabase.rpc('match_documents', {
      query_embedding: queryEmbedding,
      match_threshold: threshold,
      match_count: limit,
      filter_metadata: filter || {},
    });

    if (error) throw error;

    return data.map((row: any) => ({
      id: row.id,
      content: row.content,
      similarity: row.similarity,
      metadata: row.metadata,
    }));
  }

  async hybridSearch(
    query: string,
    options: {
      limit?: number;
      semanticWeight?: number;
      keywordWeight?: number;
    } = {}
  ): Promise<any[]> {
    const { limit = 10, semanticWeight = 0.7, keywordWeight = 0.3 } = options;

    const queryEmbedding = await embeddings.embedQuery(query);

    const { data, error } = await supabase.rpc('hybrid_search', {
      query_text: query,
      query_embedding: queryEmbedding,
      match_count: limit,
      semantic_weight: semanticWeight,
      keyword_weight: keywordWeight,
    });

    if (error) throw error;
    return data;
  }
}

export const supabaseVectorService = new SupabaseVectorService();
```
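The `search` method above calls a `match_documents` RPC that the snippet never defines. One plausible definition, following the common Supabase/pgvector recipe — the parameter names mirror the TypeScript call, and the table name, column names, and dimensions are assumptions you should adapt to your schema:

```sql
-- Assumed shape of the RPC invoked by SupabaseVectorService.search.
-- <=> is pgvector's cosine distance operator; similarity = 1 - distance.
CREATE OR REPLACE FUNCTION match_documents (
  query_embedding vector(1536),
  match_threshold float,
  match_count int,
  filter_metadata jsonb DEFAULT '{}'
) RETURNS TABLE (id uuid, content text, metadata jsonb, similarity float)
LANGUAGE sql STABLE AS $$
  SELECT
    documents.id,
    documents.content,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) AS similarity
  FROM documents
  WHERE documents.metadata @> filter_metadata
    AND 1 - (documents.embedding <=> query_embedding) > match_threshold
  ORDER BY documents.embedding <=> query_embedding
  LIMIT match_count;
$$;
```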

## RAG Implementation

```typescript
// lib/rag/service.ts
import { pineconeService } from '../vectordb/pinecone';
import { ChatOpenAI } from '@langchain/openai';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const llm = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  temperature: 0.7,
});

export class RAGService {
  async query(
    question: string,
    options: {
      namespace?: string;
      topK?: number;
      systemPrompt?: string;
    } = {}
  ): Promise<{ answer: string; sources: any[] }> {
    const { namespace, topK = 5, systemPrompt } = options;

    // Retrieve relevant documents
    const results = await pineconeService.search(question, {
      topK,
      namespace,
    });

    // Build context from results
    const context = results
      .map((r, i) => `[${i + 1}] ${r.metadata.content}`)
      .join('\n\n');

    // Generate response
    const prompt = ChatPromptTemplate.fromMessages([
      [
        'system',
        systemPrompt ||
          `You are a helpful assistant. Answer questions based on the provided context.
          If the context doesn't contain relevant information, say so.
          Cite sources using [1], [2], etc.`,
      ],
      [
        'human',
        `Context:
{context}

Question: {question}

Answer:`,
      ],
    ]);

    const chain = prompt.pipe(llm).pipe(new StringOutputParser());

    const answer = await chain.invoke({ context, question });

    return {
      answer,
      sources: results.map((r) => ({
        id: r.id,
        score: r.score,
        metadata: r.metadata,
      })),
    };
  }

  async ingestDocument(
    content: string,
    metadata: Record<string, any>
  ): Promise<void> {
    // Split content into chunks
    const chunks = this.splitIntoChunks(content, 500, 50);

    const documents = chunks.map((chunk, i) => ({
      id: `${metadata.documentId}-chunk-${i}`,
      content: chunk,
      metadata: {
        ...metadata,
        chunkIndex: i,
        totalChunks: chunks.length,
      },
    }));

    await pineconeService.upsertDocuments(documents);
  }

  private splitIntoChunks(
    text: string,
    chunkSize: number,
    overlap: number
  ): string[] {
    const chunks: string[] = [];
    let start = 0;

    while (start < text.length) {
      const end = Math.min(start + chunkSize, text.length);
      chunks.push(text.slice(start, end));
      if (end === text.length) break; // final chunk emitted; stepping back by overlap here would loop forever
      start = end - overlap;
    }

    return chunks;
  }
}

export const ragService = new RAGService();
```
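Fixed-size chunking with overlap is easy to get subtly wrong at the tail of the text, so it is worth exercising the logic standalone. A self-contained version of the same strategy, terminating explicitly once the final chunk is emitted:

```typescript
// Standalone fixed-size chunker with overlap, mirroring RAGService.splitIntoChunks.
export function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize');
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break; // final chunk emitted; stop
    start = end - overlap;
  }
  return chunks;
}
```

With `chunkSize` 50 and `overlap` 10, a 120-character input yields three chunks, each starting with the last 10 characters of the previous one — the overlap keeps sentences that straddle a boundary retrievable from both sides.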

## Best Practices

Google Antigravity's Gemini 3 engine recommends these vector database patterns:

- Choose appropriate embedding dimensions for your use case
- Implement chunking strategies for long documents
- Use hybrid search for better results
- Add metadata filtering for scoped searches
- Monitor and optimize index performance
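When the database cannot run hybrid queries natively, semantic and keyword result lists can be fused client-side. A minimal sketch using min-max normalization and a weighted sum — the default weights mirror `hybridSearch` above, and reciprocal rank fusion is a common alternative:

```typescript
interface Scored { id: string; score: number }

// Fuse two ranked lists: min-max normalize each list's scores to [0, 1],
// then combine them with a weighted sum and sort descending.
export function fuseScores(
  semantic: Scored[],
  keyword: Scored[],
  semanticWeight = 0.7,
  keywordWeight = 0.3,
): Scored[] {
  const normalize = (results: Scored[]): Map<string, number> => {
    const scores = results.map((r) => r.score);
    const min = Math.min(...scores);
    const range = Math.max(...scores) - min || 1; // avoid division by zero
    return new Map(results.map((r) => [r.id, (r.score - min) / range]));
  };
  const sem = normalize(semantic);
  const kw = normalize(keyword);
  const ids = new Set([...sem.keys(), ...kw.keys()]);
  return Array.from(ids)
    .map((id) => ({
      id,
      score: semanticWeight * (sem.get(id) ?? 0) + keywordWeight * (kw.get(id) ?? 0),
    }))
    .sort((a, b) => b.score - a.score);
}
```

Normalizing before combining matters because cosine similarities and keyword scores (e.g. BM25) live on different scales; a raw weighted sum would let one signal dominate.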

When to Use This Prompt

This Vector Database prompt is ideal for developers working on:

  • Vector Database applications requiring modern best practices and optimal performance
  • Projects that need production-ready Vector Database code with proper error handling
  • Teams looking to standardize their vector database development workflow
  • Developers wanting to learn industry-standard Vector Database patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams looking to maintain consistency across their vector database implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the Vector Database code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this Vector Database prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For Vector Database projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
