Google Antigravity Directory

The #1 directory for Google Antigravity prompts, rules, workflows & MCP servers. Optimized for Gemini 3 agentic development.


© 2026 Antigravity AI Directory. All rights reserved.


This website is not affiliated with, endorsed by, or associated with Google LLC. "Google" and "Gemini" are trademarks of Google LLC.


LangChain Development Patterns

Building LLM applications with LangChain including chains, agents, and RAG implementations

LangChainAIRAGAgents
by Antigravity Team
.antigravity
# LangChain Development Patterns for Google Antigravity

Build sophisticated LLM applications with LangChain using Google Antigravity's Gemini 3 engine. This guide covers chains, agents, RAG, and memory patterns.

## LangChain Setup

```typescript
// lib/langchain/config.ts
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { ChatAnthropic } from '@langchain/anthropic';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';

export const chatModel = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  temperature: 0.7,
  maxTokens: 2000,
});

export const claudeModel = new ChatAnthropic({
  modelName: 'claude-3-opus-20240229',
  temperature: 0.7,
  maxTokens: 2000,
});

export const embeddings = new OpenAIEmbeddings({
  modelName: 'text-embedding-3-small',
});

export async function createVectorStore(documents: string[]) {
  return MemoryVectorStore.fromTexts(
    documents,
    documents.map((_, i) => ({ id: i })),
    embeddings
  );
}
```
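The vector store created above ranks stored texts by how similar their embeddings are to the query embedding. The core operation is cosine similarity, sketched here as a plain function (illustrative only — `MemoryVectorStore` handles this internally):

```typescript
// Cosine similarity: how a vector store ranks document embeddings against a
// query embedding (1 = same direction, 0 = orthogonal).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Retrieval then amounts to embedding the query, scoring it against every stored embedding, and returning the top-k documents.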

## Chain Patterns

```typescript
// lib/langchain/chains.ts
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { RunnableSequence, RunnablePassthrough } from '@langchain/core/runnables';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { chatModel } from './config';

// Simple conversation chain
export const conversationChain = RunnableSequence.from([
  ChatPromptTemplate.fromMessages([
    ['system', 'You are a helpful assistant. Be concise and accurate.'],
    new MessagesPlaceholder('history'),
    ['human', '{input}'],
  ]),
  chatModel,
  new StringOutputParser(),
]);

// Summarization chain
export const summarizeChain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(`
    Summarize the following text in 3-5 bullet points:
    
    {text}
    
    Summary:
  `),
  chatModel,
  new StringOutputParser(),
]);

// Translation chain
export const translateChain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(`
    Translate the following text to {language}:
    
    {text}
    
    Translation:
  `),
  chatModel,
  new StringOutputParser(),
]);

// Code explanation chain — the code is passed as a labeled block rather than
// a nested markdown fence, which would prematurely close this code block
export const codeExplainChain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(`
    Explain the following code in simple terms:

    Code ({language}):
    {code}

    Explanation:
  `),
  chatModel,
  new StringOutputParser(),
]);

// Multi-step chain with context; the chain takes the query string as input.
// searchKnowledgeBase is a placeholder for your own retrieval function.
export const researchChain = RunnableSequence.from([
  {
    query: new RunnablePassthrough<string>(),
    context: async (query: string) => {
      const results = await searchKnowledgeBase(query);
      return results.map((r) => r.content).join('\n\n');
    },
  },
  ChatPromptTemplate.fromTemplate(`
    Based on the following context, answer the question.
    
    Context:
    {context}
    
    Question: {query}
    
    Answer:
  `),
  chatModel,
  new StringOutputParser(),
]);
```
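Conceptually, each `RunnableSequence` above is just an async pipeline where one step's output feeds the next. A dependency-free sketch of that idea (`formatPrompt` and `parseOutput` are illustrative stand-ins for the prompt template and output parser, not LangChain APIs):

```typescript
// A chain step is any function, sync or async, from input to output.
type Step<I, O> = (input: I) => Promise<O> | O;

// Compose two steps into one: run f, feed its result into g.
function sequence<A, B, C>(f: Step<A, B>, g: Step<B, C>): Step<A, C> {
  return async (input: A) => g(await f(input));
}

// Stand-in for ChatPromptTemplate.fromTemplate(...).format(...)
const formatPrompt: Step<{ text: string }, string> = ({ text }) =>
  `Summarize the following text in 3-5 bullet points:\n\n${text}\n\nSummary:`;

// Stand-in for StringOutputParser
const parseOutput: Step<string, string> = (raw) => raw.trim();

// Equivalent in spirit to RunnableSequence.from([prompt, model, parser]),
// minus the model call in the middle.
const promptThenParse = sequence(formatPrompt, parseOutput);
```

The real `RunnableSequence` adds batching, streaming, and tracing on top of this composition idea.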

## RAG Implementation

```typescript
// lib/langchain/rag.ts
import { Document } from '@langchain/core/documents';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { SupabaseVectorStore } from '@langchain/community/vectorstores/supabase';
import { createClient } from '@supabase/supabase-js';
import { embeddings, chatModel } from './config';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { RunnableSequence } from '@langchain/core/runnables';
import { StringOutputParser } from '@langchain/core/output_parsers';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

export class RAGService {
  private vectorStore: SupabaseVectorStore;
  private textSplitter: RecursiveCharacterTextSplitter;

  constructor() {
    this.vectorStore = new SupabaseVectorStore(embeddings, {
      client: supabase,
      tableName: 'documents',
      queryName: 'match_documents',
    });

    this.textSplitter = new RecursiveCharacterTextSplitter({
      chunkSize: 1000,
      chunkOverlap: 200,
    });
  }

  async ingestDocuments(documents: { content: string; metadata: Record<string, any> }[]) {
    const docs: Document[] = [];

    for (const doc of documents) {
      const chunks = await this.textSplitter.splitText(doc.content);

      for (const chunk of chunks) {
        docs.push(new Document({
          pageContent: chunk,
          metadata: doc.metadata,
        }));
      }
    }

    await this.vectorStore.addDocuments(docs);

    return { ingested: docs.length };
  }

  async query(question: string, k = 4): Promise<string> {
    // Retrieve relevant documents
    const retriever = this.vectorStore.asRetriever({ k });
    const relevantDocs = await retriever.invoke(question);

    // Build context from documents
    const context = relevantDocs
      .map((doc) => doc.pageContent)
      .join('\n\n');

    // Generate response using RAG chain
    const ragChain = RunnableSequence.from([
      ChatPromptTemplate.fromTemplate(`
        You are a helpful assistant. Answer the question based on the following context.
        If you cannot find the answer in the context, say so.
        
        Context:
        {context}
        
        Question: {question}
        
        Answer:
      `),
      chatModel,
      new StringOutputParser(),
    ]);

    return ragChain.invoke({ context, question });
  }

  async similaritySearch(query: string, k = 4): Promise<Document[]> {
    return this.vectorStore.similaritySearch(query, k);
  }
}

export const ragService = new RAGService();
```
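The `chunkSize`/`chunkOverlap` settings above can be illustrated with a simplified fixed-stride splitter (a sketch only — the real `RecursiveCharacterTextSplitter` also prefers splitting on separators like paragraphs and sentences):

```typescript
// Simplified fixed-size splitter: each chunk is `size` characters, and
// consecutive chunks share `overlap` characters, so content cut at a chunk
// boundary still appears intact in at least one chunk.
function splitWithOverlap(text: string, size: number, overlap: number): string[] {
  if (overlap >= size) throw new Error('overlap must be smaller than size');
  const chunks: string[] = [];
  const stride = size - overlap; // how far each chunk advances
  for (let start = 0; start < text.length; start += stride) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}
```

With `chunkSize: 1000` and `chunkOverlap: 200`, each chunk advances 800 characters past the previous one.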

## Agent Implementation

```typescript
// lib/langchain/agent.ts
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { DynamicStructuredTool } from '@langchain/core/tools';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { z } from 'zod';

const model = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  temperature: 0,
});

// Define tools
const searchTool = new DynamicStructuredTool({
  name: 'search',
  description: 'Search for information on a topic',
  schema: z.object({
    query: z.string().describe('The search query'),
  }),
  func: async ({ query }) => {
    // searchWeb is a placeholder — wire up your own search API here
    const results = await searchWeb(query);
    return JSON.stringify(results);
  },
});

const calculatorTool = new DynamicStructuredTool({
  name: 'calculator',
  description: 'Perform mathematical calculations',
  schema: z.object({
    expression: z.string().describe('Mathematical expression to evaluate'),
  }),
  func: async ({ expression }) => {
    try {
      // NOTE: eval on model-supplied input is an injection risk; use a math
      // parser (e.g. mathjs) in production
      const result = eval(expression);
      return `Result: ${result}`;
    } catch {
      return 'Invalid expression';
    }
  },
});

const createTaskTool = new DynamicStructuredTool({
  name: 'createTask',
  description: 'Create a new task in the task management system',
  schema: z.object({
    title: z.string(),
    description: z.string().optional(),
    priority: z.enum(['low', 'medium', 'high']).default('medium'),
  }),
  func: async ({ title, description, priority }) => {
    // createTask is a placeholder for your task-management API client
    const task = await createTask({ title, description, priority });
    return `Task created with ID: ${task.id}`;
  },
});

const tools = [searchTool, calculatorTool, createTaskTool];

// Create agent
const prompt = ChatPromptTemplate.fromMessages([
  ['system', `You are a helpful assistant with access to various tools.
    Use the tools when needed to help answer questions.
    Always explain your reasoning.`],
  new MessagesPlaceholder('chat_history'),
  ['human', '{input}'],
  new MessagesPlaceholder('agent_scratchpad'),
]);

export async function createAgent() {
  const agent = await createOpenAIFunctionsAgent({
    llm: model,
    tools,
    prompt,
  });

  return new AgentExecutor({
    agent,
    tools,
    verbose: process.env.NODE_ENV === 'development',
    maxIterations: 5,
  });
}

// Usage
export async function runAgent(input: string, chatHistory: any[] = []) {
  const agentExecutor = await createAgent();

  const result = await agentExecutor.invoke({
    input,
    chat_history: chatHistory,
  });

  return result.output;
}
```
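The `eval`-based calculator tool above is a demo shortcut: `eval` on model-supplied strings can execute arbitrary JavaScript. A minimal sketch of a safer tool body, which whitelists arithmetic characters before evaluating (a dedicated math parser like mathjs is still preferable in production):

```typescript
// Evaluate simple arithmetic without handing raw model output to eval.
// Anything beyond digits, whitespace, and + - * / ( ) . is rejected, which
// rules out identifiers, property access on names, and function calls.
function safeCalculate(expression: string): number {
  if (!/^[\d\s+\-*/().]+$/.test(expression)) {
    throw new Error('Expression contains disallowed characters');
  }
  // Function still executes JS, but the whitelist above limits the input to
  // arithmetic tokens only.
  return new Function(`"use strict"; return (${expression});`)() as number;
}
```

Dropping this into the calculator tool's `func` keeps the same interface while closing the obvious injection path.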

## Best Practices

Google Antigravity's Gemini 3 engine recommends these LangChain patterns:

  • Use chains for structured workflows
  • Implement RAG for knowledge-based Q&A
  • Create agents for dynamic tool usage
  • Add memory for conversational context
  • Use streaming for real-time responses
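Of these, memory is the only pattern not shown in code above. At its simplest, conversational memory is a trimmed message buffer fed into the chain's `history` placeholder — a minimal sketch, independent of LangChain's own memory classes:

```typescript
type ChatMessage = { role: 'human' | 'ai'; content: string };

// Keep only the most recent `maxMessages` turns so the rendered prompt stays
// within the model's context window.
class ConversationBuffer {
  private messages: ChatMessage[] = [];

  constructor(private maxMessages = 20) {}

  add(role: ChatMessage['role'], content: string): void {
    this.messages.push({ role, content });
    if (this.messages.length > this.maxMessages) {
      // Drop the oldest messages, keep the newest maxMessages
      this.messages = this.messages.slice(-this.maxMessages);
    }
  }

  get history(): ChatMessage[] {
    return [...this.messages];
  }
}
```

After each turn, `add` the human input and the model's reply, then pass `buffer.history` as the `history` value when invoking `conversationChain`. LangChain's own memory utilities add persistence and token-based trimming on top of this idea.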

When to Use This Prompt

This LangChain prompt is ideal for developers working on:

  • LangChain applications requiring modern best practices and optimal performance
  • Projects that need production-ready LangChain code with proper error handling
  • Teams looking to standardize their LangChain development workflow
  • Developers wanting to learn industry-standard LangChain patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams looking to maintain consistency across their LangChain implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the LangChain code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this LangChain prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For LangChain projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
