Big data serving engine for search, recommendation, and AI.
## Vespa MCP Server: Big Data Serving Engine

The **Vespa MCP Server** integrates Yahoo's Vespa search engine into Google Antigravity. Vespa combines search, recommendation, and AI model serving in one platform, handling billions of documents with millisecond query latency.

### Why Vespa MCP?

Vespa excels at large-scale AI serving:

- **Hybrid Search**: Vector and keyword retrieval in a single query
- **Real-Time**: Update and query simultaneously
- **ML Serving**: Deploy ranking models alongside the data
- **Massive Scale**: Billions of documents at low latency
- **Open Source**: Battle-tested by Yahoo/Verizon

### Key Features

#### 1. Hybrid Search

```python
from vespa.application import Vespa

app = Vespa(url="http://localhost:8080")

# query_embedding is the vector produced by your embedding model.
# Hybrid query: keyword recall via userQuery(), vector scoring via the
# query(embedding) input consumed by the "hybrid" rank profile.
response = app.query(
    body={
        "yql": "select * from sources * where userQuery()",
        "query": "machine learning best practices",
        "ranking": "hybrid",
        "input.query(embedding)": query_embedding,
    }
)

for hit in response.hits:
    print(f"{hit['relevance']}: {hit['fields']['title']}")
```

#### 2. Real-Time Updates

```python
import time

# Feed a document; Vespa makes it searchable immediately
app.feed_data_point(
    schema="articles",
    data_id="doc-123",
    fields={
        "title": "Understanding Vector Search",
        "content": "Vector search enables semantic...",
        "embedding": document_embedding,  # vector from your embedding model
        "timestamp": int(time.time()),    # epoch seconds, matches the long field
    },
)

# Immediately queryable; @recent is filled in by YQL parameter substitution
one_hour_ago = int(time.time()) - 3600
response = app.query(
    body={
        "yql": "select * from articles where timestamp > @recent",
        "recent": one_hour_ago,
    }
)
```

#### 3. ML Model Serving

```
# schemas/articles.sd - schema and ranking profile deployed alongside the data
schema articles {
    document articles {
        field title type string {
            indexing: summary | index
        }
        field content type string {
            indexing: summary | index
        }
        field timestamp type long {
            indexing: summary | attribute
        }
        field embedding type tensor<float>(d0[768]) {
            indexing: summary | attribute | index
            attribute {
                distance-metric: angular
            }
        }
    }
    rank-profile semantic {
        inputs {
            query(embedding) tensor<float>(d0[768])
        }
        first-phase {
            expression: closeness(field, embedding)
        }
    }
}
```

### Configuration

```json
{
  "mcpServers": {
    "vespa": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-vespa"],
      "env": {
        "VESPA_URL": "http://localhost:8080",
        "VESPA_CERT": "/path/to/cert.pem"
      }
    }
  }
}
```

### Use Cases

**Enterprise Search**: Build search over billions of documents with hybrid ranking.

**Recommendations**: Serve personalized recommendations with real-time updates.

**RAG at Scale**: Production-grade retrieval for large-scale RAG applications; a minimal retrieval sketch follows below.

The Vespa MCP Server enables enterprise-scale search and AI serving in Antigravity.
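To make the **RAG at Scale** use case concrete, here is a minimal retrieval sketch using pyvespa against the `articles` schema and `semantic` rank profile shown above. The `embed()` helper is a hypothetical stand-in for whatever embedding model produces your 768-dimensional query vectors, and `retrieve_context()` is an illustrative name, not part of Vespa or the MCP server.

```python
from vespa.application import Vespa

app = Vespa(url="http://localhost:8080")


def embed(text: str) -> list[float]:
    # Hypothetical stand-in: replace with a real embedding model whose output
    # matches the tensor<float>(d0[768]) type declared in the schema.
    return [0.0] * 768


def retrieve_context(question: str, top_k: int = 5) -> str:
    """Fetch the top-k passages from Vespa to ground an LLM prompt."""
    response = app.query(
        body={
            # Approximate nearest-neighbor search over the embedding field,
            # scored by the closeness expression in the `semantic` profile
            "yql": (
                "select title, content from articles "
                "where {targetHits: 100}nearestNeighbor(embedding, embedding)"
            ),
            "ranking": "semantic",
            "input.query(embedding)": embed(question),
            "hits": top_k,
        }
    )
    # Concatenate the retrieved passages into a single context block
    return "\n\n".join(
        f"{hit['fields']['title']}: {hit['fields']['content']}"
        for hit in response.hits
    )


question = "What is vector search?"
prompt = f"Answer using only this context:\n\n{retrieve_context(question)}\n\nQuestion: {question}"
```

The resulting prompt can then be handed to whichever model the Antigravity agent is driving.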