LLM orchestration framework by deepset.
## Haystack MCP Server: Production-Ready RAG Framework

The **Haystack MCP Server** integrates Haystack's modular RAG framework into Google Antigravity. This enterprise-grade framework lets you build custom retrieval-augmented generation pipelines with components for every step of the NLP workflow.

### Why Haystack MCP?

Haystack provides production-grade RAG infrastructure:

- **Modular Design**: Mix and match components
- **Model Agnostic**: Works with any LLM provider
- **Production Ready**: Battle-tested at enterprise scale
- **Rich Ecosystem**: Integrations for every use case
- **Antigravity Native**: AI-assisted pipeline building

### Key Features

#### 1. RAG Pipeline Construction

```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.embedders import OpenAITextEmbedder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()

# Example Jinja2 prompt template consumed by the PromptBuilder
template = """Answer the question using the documents below.
{% for doc in documents %}{{ doc.content }}{% endfor %}
Question: {{ question }}
Answer:"""

# Build a complete RAG pipeline
rag = Pipeline()
rag.add_component("embedder", OpenAITextEmbedder())
rag.add_component("retriever", InMemoryEmbeddingRetriever(document_store))
rag.add_component("prompt", PromptBuilder(template=template))
rag.add_component("generator", OpenAIGenerator())

# Wire the query embedding into the retriever, the retrieved documents
# into the prompt, and the rendered prompt into the generator
rag.connect("embedder.embedding", "retriever.query_embedding")
rag.connect("retriever.documents", "prompt.documents")
rag.connect("prompt", "generator")
```

#### 2. Document Processing

```python
from haystack.components.converters import PyPDFToDocument
from haystack.components.embedders import OpenAIDocumentEmbedder
from haystack.components.preprocessors import DocumentCleaner, DocumentSplitter
from haystack.components.writers import DocumentWriter

# Document ingestion pipeline: convert, clean, split, embed, write
indexing = Pipeline()
indexing.add_component("converter", PyPDFToDocument())
indexing.add_component("cleaner", DocumentCleaner())
indexing.add_component("splitter", DocumentSplitter(
    split_by="sentence",
    split_length=3
))
indexing.add_component("embedder", OpenAIDocumentEmbedder())
indexing.add_component("writer", DocumentWriter(document_store))

indexing.connect("converter", "cleaner")
indexing.connect("cleaner", "splitter")
indexing.connect("splitter", "embedder")
indexing.connect("embedder", "writer")
```

#### 3. Evaluation Components

```python
from haystack.components.evaluators import SASEvaluator, ContextRelevanceEvaluator

# Evaluate RAG quality
evaluator = Pipeline()
evaluator.add_component("sas", SASEvaluator())
evaluator.add_component("relevance", ContextRelevanceEvaluator())

results = evaluator.run({
    "sas": {"predicted_answers": predictions, "ground_truth_answers": truth},
    "relevance": {"questions": questions, "contexts": contexts}
})
```

### Configuration

```json
{
  "mcpServers": {
    "haystack": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-haystack"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "HAYSTACK_TELEMETRY_ENABLED": "False"
      }
    }
  }
}
```

### Use Cases

**Custom RAG Systems**: Build sophisticated question-answering systems over your private data.

**Document Processing**: Create automated pipelines for ingesting and indexing large document collections.

**Evaluation Frameworks**: Systematically evaluate and improve RAG system performance.

The Haystack MCP Server enables building production-grade RAG systems within Antigravity.
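
To make the query side concrete, here is a minimal sketch of invoking the RAG pipeline assembled above. It assumes the document store has already been populated with embedded documents and uses the Jinja2 template from the construction example; the question text is purely illustrative.

```python
# Minimal sketch: querying the RAG pipeline built above (question is illustrative)
question = "What does the quarterly report say about revenue growth?"

result = rag.run({
    "embedder": {"text": question},    # embed the query text
    "prompt": {"question": question},  # fill the {{ question }} template variable
})

# The generator is the pipeline's leaf component, so its replies appear in the output
print(result["generator"]["replies"][0])
```

Each top-level key in the `run()` payload addresses a named component; the retrieved documents flow into the prompt automatically through the connections declared earlier.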
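
The indexing pipeline can be exercised the same way. This sketch assumes a local PDF path (hypothetical) and the same `document_store` shared with the query pipeline.

```python
# Minimal sketch: ingesting a PDF through the indexing pipeline (path is hypothetical)
indexing.run({"converter": {"sources": ["docs/handbook.pdf"]}})

# The shared in-memory store now holds the cleaned, split, embedded chunks
print(f"Indexed documents: {document_store.count_documents()}")
```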