# LangChain Development Patterns for Google Antigravity
Build sophisticated LLM applications with LangChain using Google Antigravity's Gemini 3 engine. This guide covers chains, agents, RAG, and memory patterns.
## LangChain Setup
```typescript
// lib/langchain/config.ts
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { ChatAnthropic } from '@langchain/anthropic';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';

export const chatModel = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  temperature: 0.7,
  maxTokens: 2000,
});

export const claudeModel = new ChatAnthropic({
  modelName: 'claude-3-opus-20240229',
  temperature: 0.7,
  maxTokens: 2000,
});

export const embeddings = new OpenAIEmbeddings({
  modelName: 'text-embedding-3-small',
});

export async function createVectorStore(documents: string[]) {
  return MemoryVectorStore.fromTexts(
    documents,
    documents.map((_, i) => ({ id: i })),
    embeddings,
  );
}
```
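For intuition, `MemoryVectorStore`'s similarity search boils down to ranking stored embeddings by cosine similarity against the query embedding. A minimal self-contained sketch of that ranking (plain number arrays stand in for real embedding vectors; no LangChain APIs involved):

```typescript
// Cosine similarity: the core ranking metric behind most vector stores.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate vectors against a query vector, highest similarity first.
function rankBySimilarity(
  query: number[],
  docs: { id: number; vector: number[] }[],
): { id: number; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vector) }))
    .sort((x, y) => y.score - x.score);
}
```

A real vector store does the same thing at scale, with an index (e.g. HNSW or pgvector) replacing the linear scan.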
## Chain Patterns
```typescript
// lib/langchain/chains.ts
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { RunnableSequence, RunnablePassthrough } from '@langchain/core/runnables';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { chatModel } from './config';

// Simple conversation chain
export const conversationChain = RunnableSequence.from([
  ChatPromptTemplate.fromMessages([
    ['system', 'You are a helpful assistant. Be concise and accurate.'],
    new MessagesPlaceholder('history'),
    ['human', '{input}'],
  ]),
  chatModel,
  new StringOutputParser(),
]);

// Summarization chain
export const summarizeChain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(`
Summarize the following text in 3-5 bullet points:

{text}

Summary:
`),
  chatModel,
  new StringOutputParser(),
]);

// Translation chain
export const translateChain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(`
Translate the following text to {language}:

{text}

Translation:
`),
  chatModel,
  new StringOutputParser(),
]);

// Code explanation chain.
// Note: avoid raw backtick fences inside the template -- a backtick would
// terminate the template literal (and break the surrounding Markdown fence).
export const codeExplainChain = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate(`
Explain the following {language} code in simple terms:

{code}

Explanation:
`),
  chatModel,
  new StringOutputParser(),
]);

// Multi-step chain with context. The chain takes a plain string query;
// the object map fans it out to the prompt's {query} slot and to a
// retrieval step that fills {context}.
export const researchChain = RunnableSequence.from([
  {
    query: new RunnablePassthrough(),
    context: async (query: string) => {
      // searchKnowledgeBase is an app-specific retrieval helper
      // assumed to be defined elsewhere in your project.
      const results = await searchKnowledgeBase(query);
      return results.map((r) => r.content).join('\n\n');
    },
  },
  ChatPromptTemplate.fromTemplate(`
Based on the following context, answer the question.

Context:
{context}

Question: {query}

Answer:
`),
  chatModel,
  new StringOutputParser(),
]);
```
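Conceptually, `RunnableSequence.from` just pipes each step's output into the next step's input. A toy sketch of that composition idea (the `Runnable` type, `formatPrompt`, and `fakeModel` here are illustrative stand-ins, not LangChain's actual interfaces):

```typescript
// A toy "runnable": any async function from input to output.
type Runnable<I, O> = (input: I) => Promise<O>;

// Compose two steps left to right, like RunnableSequence.from([...]).
function sequence<A, B, C>(
  first: Runnable<A, B>,
  second: Runnable<B, C>,
): Runnable<A, C> {
  return async (input) => second(await first(input));
}

// Example: a "prompt template" step feeding a stand-in "model" step.
const formatPrompt: Runnable<{ text: string }, string> =
  async ({ text }) => `Summarize: ${text}`;
const fakeModel: Runnable<string, string> =
  async (prompt) => `[summary of "${prompt.slice(11)}"]`;

const toyChain = sequence(formatPrompt, fakeModel);
```

The real abstraction adds batching, streaming, and tracing on top, but the data flow is exactly this pipe.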
## RAG Implementation
```typescript
// lib/langchain/rag.ts
import { Document } from '@langchain/core/documents';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { SupabaseVectorStore } from '@langchain/community/vectorstores/supabase';
import { createClient } from '@supabase/supabase-js';
import { embeddings, chatModel } from './config';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { RunnableSequence } from '@langchain/core/runnables';
import { StringOutputParser } from '@langchain/core/output_parsers';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!,
);

export class RAGService {
  private vectorStore: SupabaseVectorStore;
  private textSplitter: RecursiveCharacterTextSplitter;

  constructor() {
    this.vectorStore = new SupabaseVectorStore(embeddings, {
      client: supabase,
      tableName: 'documents',
      queryName: 'match_documents',
    });
    this.textSplitter = new RecursiveCharacterTextSplitter({
      chunkSize: 1000,
      chunkOverlap: 200,
    });
  }

  async ingestDocuments(documents: { content: string; metadata: Record<string, any> }[]) {
    const docs: Document[] = [];
    for (const doc of documents) {
      const chunks = await this.textSplitter.splitText(doc.content);
      for (const chunk of chunks) {
        docs.push(new Document({
          pageContent: chunk,
          metadata: doc.metadata,
        }));
      }
    }
    await this.vectorStore.addDocuments(docs);
    return { ingested: docs.length };
  }

  async query(question: string, k = 4): Promise<string> {
    // Retrieve relevant documents
    const retriever = this.vectorStore.asRetriever({ k });
    const relevantDocs = await retriever.invoke(question);

    // Build context from documents
    const context = relevantDocs
      .map((doc) => doc.pageContent)
      .join('\n\n');

    // Generate response using RAG chain
    const ragChain = RunnableSequence.from([
      ChatPromptTemplate.fromTemplate(`
You are a helpful assistant. Answer the question based on the following context.
If you cannot find the answer in the context, say so.

Context:
{context}

Question: {question}

Answer:
`),
      chatModel,
      new StringOutputParser(),
    ]);
    return ragChain.invoke({ context, question });
  }

  async similaritySearch(query: string, k = 4): Promise<Document[]> {
    return this.vectorStore.similaritySearch(query, k);
  }
}

export const ragService = new RAGService();
```
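The `RecursiveCharacterTextSplitter` above emits overlapping chunks so that text cut at a boundary still appears whole in at least one chunk. A simplified character-level sketch of the idea (the real splitter also prefers paragraph and sentence boundaries, which this deliberately ignores):

```typescript
// Simplified chunker: fixed-size windows that advance by
// (chunkSize - chunkOverlap) characters each step.
function splitWithOverlap(
  text: string,
  chunkSize: number,
  chunkOverlap: number,
): string[] {
  if (chunkOverlap >= chunkSize) {
    throw new Error('overlap must be smaller than chunk size');
  }
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

With `chunkSize: 1000, chunkOverlap: 200` as configured above, consecutive chunks share their last/first 200 characters, which is why retrieval rarely loses a sentence to a chunk boundary.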
## Agent Implementation
```typescript
// lib/langchain/agent.ts
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { DynamicStructuredTool } from '@langchain/core/tools';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { z } from 'zod';

const model = new ChatOpenAI({
  modelName: 'gpt-4-turbo-preview',
  temperature: 0,
});

// Define tools
const searchTool = new DynamicStructuredTool({
  name: 'search',
  description: 'Search for information on a topic',
  schema: z.object({
    query: z.string().describe('The search query'),
  }),
  func: async ({ query }) => {
    // searchWeb is an app-specific helper assumed to exist elsewhere.
    const results = await searchWeb(query);
    return JSON.stringify(results);
  },
});

const calculatorTool = new DynamicStructuredTool({
  name: 'calculator',
  description: 'Perform mathematical calculations',
  schema: z.object({
    expression: z.string().describe('Mathematical expression to evaluate'),
  }),
  func: async ({ expression }) => {
    // WARNING: eval() executes arbitrary code. Never run it on
    // model-generated input in production; substitute a proper
    // arithmetic parser (e.g. mathjs) instead.
    try {
      const result = eval(expression);
      return `Result: ${result}`;
    } catch {
      return 'Invalid expression';
    }
  },
});

const createTaskTool = new DynamicStructuredTool({
  name: 'createTask',
  description: 'Create a new task in the task management system',
  schema: z.object({
    title: z.string(),
    description: z.string().optional(),
    priority: z.enum(['low', 'medium', 'high']).default('medium'),
  }),
  func: async ({ title, description, priority }) => {
    // createTask is an app-specific helper assumed to exist elsewhere.
    const task = await createTask({ title, description, priority });
    return `Task created with ID: ${task.id}`;
  },
});

const tools = [searchTool, calculatorTool, createTaskTool];

// Create agent
const prompt = ChatPromptTemplate.fromMessages([
  ['system', `You are a helpful assistant with access to various tools.
Use the tools when needed to help answer questions.
Always explain your reasoning.`],
  new MessagesPlaceholder('chat_history'),
  ['human', '{input}'],
  new MessagesPlaceholder('agent_scratchpad'),
]);

export async function createAgent() {
  const agent = await createOpenAIFunctionsAgent({
    llm: model,
    tools,
    prompt,
  });
  return new AgentExecutor({
    agent,
    tools,
    verbose: process.env.NODE_ENV === 'development',
    maxIterations: 5,
  });
}

// Usage
export async function runAgent(input: string, chatHistory: any[] = []) {
  const agentExecutor = await createAgent();
  const result = await agentExecutor.invoke({
    input,
    chat_history: chatHistory,
  });
  return result.output;
}
```
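The calculator tool above relies on `eval`, which executes arbitrary code and should never see model-generated input in production. One safer option is a tiny recursive-descent parser; this sketch handles only `+`, `-`, `*`, `/`, and parentheses, and is an illustration rather than a hardened implementation:

```typescript
// Minimal safe arithmetic evaluator: +, -, *, /, parentheses, decimals.
// A stand-in for eval() in the calculator tool; throws on anything else.
function evaluate(expression: string): number {
  const tokens = expression.match(/\d+(\.\d+)?|[+\-*/()]/g) ?? [];
  // Reject empty input or any character the tokenizer did not consume.
  if (tokens.length === 0 || tokens.join('') !== expression.replace(/\s+/g, '')) {
    throw new Error('Invalid expression');
  }
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function parseExpr(): number { // handles + and -
    let value = parseTerm();
    while (peek() === '+' || peek() === '-') {
      value = next() === '+' ? value + parseTerm() : value - parseTerm();
    }
    return value;
  }
  function parseTerm(): number { // handles * and /
    let value = parseFactor();
    while (peek() === '*' || peek() === '/') {
      value = next() === '*' ? value * parseFactor() : value / parseFactor();
    }
    return value;
  }
  function parseFactor(): number { // numbers, unary minus, parentheses
    if (peek() === '-') { next(); return -parseFactor(); }
    if (peek() === '(') {
      next();
      const value = parseExpr();
      if (next() !== ')') throw new Error('Invalid expression');
      return value;
    }
    const token = next();
    if (token === undefined || !/^\d/.test(token)) throw new Error('Invalid expression');
    return parseFloat(token);
  }

  const result = parseExpr();
  if (pos !== tokens.length) throw new Error('Invalid expression');
  return result;
}
```

Inside the tool's `func`, you would call `evaluate(expression)` in place of `eval(expression)` and keep the existing try/catch for the error message.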
## Best Practices
Google Antigravity's Gemini 3 engine recommends these LangChain patterns:

- Use chains for structured, predictable workflows.
- Implement RAG for knowledge-grounded Q&A.
- Create agents when the model must choose tools dynamically.
- Add memory to preserve conversational context.
- Use streaming for real-time responses.
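On the streaming point: LangChain runnables expose a `.stream()` method that yields chunks as an async iterable you can `for await` over. This toy sketch mimics that shape with a fake token stream (no LangChain APIs involved; `fakeTokenStream` is an illustrative stand-in for a model's streamed output):

```typescript
// Toy token stream: an async generator yielding chunks one at a time,
// the same shape chain.stream() returns.
async function* fakeTokenStream(answer: string): AsyncGenerator<string> {
  for (const token of answer.split(' ')) {
    yield token + ' ';
  }
}

// Consume the stream incrementally; in a UI you would append each
// chunk to the page as it arrives instead of accumulating a string.
async function renderStream(stream: AsyncIterable<string>): Promise<string> {
  let rendered = '';
  for await (const chunk of stream) {
    rendered += chunk;
  }
  return rendered;
}
```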