Run local LLMs with Ollama
Local inference with Ollama. No API key is required, and all processing stays on your machine for privacy. Add the following to your MCP client configuration:
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": [
        "-y",
        "ollama-mcp"
      ],
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}
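Before connecting a client, you can confirm that the Ollama server is reachable at the configured host. A quick check, assuming the Ollama CLI is installed and the server is running locally (llama3 is an example model name, not a requirement):

# List the models available to the local Ollama server
curl http://localhost:11434/api/tags

# Pull a model before first use
ollama pull llama3

If the curl request fails, start the server with `ollama serve` or adjust OLLAMA_HOST to match where your Ollama instance is listening.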