Google Antigravity Directory

The #1 directory for Google Antigravity prompts, rules, workflows & MCP servers. Optimized for Gemini 3 agentic development.


© 2026 Antigravity AI Directory. All rights reserved.


This website is not affiliated with, endorsed by, or associated with Google LLC. "Google" and "Gemini" are trademarks of Google LLC.


RSS Crawler MCP Server

RSS feed crawling with SQLite storage and Firecrawl integration

Tags: rss, crawler, sqlite, firecrawl

About

## RSS Crawler MCP Server: Deep Feed Discovery

The **RSS Crawler MCP Server** integrates RSS/Atom feed discovery and crawling into Google Antigravity, enabling automatic feed detection, website crawling, and content indexing directly from your development environment.

### Why RSS Crawler MCP?

- **Auto-Discovery**: Automatically find RSS/Atom feeds on any website
- **Deep Crawling**: Discover feeds across entire domains and subdomains
- **Link Extraction**: Extract and follow links to discover related content
- **Sitemap Parsing**: Parse sitemaps to discover all available feeds
- **Rate Limiting**: Respectful crawling with configurable rate limits

### Key Features

#### 1. Feed Discovery

```python
# Discover feeds on a website
feeds = await mcp.discover_feeds(
    url="https://example.com",
    follow_links=True,
    max_depth=2
)

for feed in feeds:
    print(f"Found: {feed['url']}")
    print(f"  Type: {feed['type']}")  # rss, atom, json-feed
    print(f"  Title: {feed['title']}")
```

#### 2. Domain Crawling

```python
# Crawl an entire domain for feeds
results = await mcp.crawl_domain(
    domain="example.com",
    include_subdomains=True,
    max_pages=1000,
    respect_robots=True
)

print(f"Pages crawled: {results['pages_crawled']}")
print(f"Feeds found: {len(results['feeds'])}")
for feed in results["feeds"]:
    print(f"  - {feed['url']}")
```

#### 3. Sitemap Processing

```python
# Parse a sitemap for content
pages = await mcp.parse_sitemap(
    sitemap_url="https://example.com/sitemap.xml"
)

# Find feeds from sitemap pages
feeds = await mcp.discover_from_sitemap(
    sitemap_url="https://example.com/sitemap.xml",
    check_feed_links=True
)
```

#### 4. Batch Discovery

```python
# Discover feeds from multiple sites
sites = ["https://site1.com", "https://site2.com", "https://site3.com"]

results = await mcp.batch_discover(
    urls=sites,
    concurrent=5
)

for site, feeds in results.items():
    print(f"{site}: {len(feeds)} feeds found")
```

### Configuration

```json
{
  "mcpServers": {
    "rss-crawler": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-rss-crawler"],
      "env": {
        "CRAWLER_USER_AGENT": "RSSCrawler/1.0",
        "CRAWLER_RATE_LIMIT": "1",
        "CRAWLER_MAX_CONCURRENT": "5",
        "CRAWLER_TIMEOUT": "30000"
      }
    }
  }
}
```

### Use Cases

**Feed Database**: Build a comprehensive database of RSS feeds across specific industries.

**Content Discovery**: Find new content sources for aggregation and monitoring.

**Competitive Analysis**: Discover all content channels from competitor websites.

**SEO Research**: Analyze content structure and publishing patterns across domains.

The RSS Crawler MCP enables feed discovery and content crawling within your development environment.
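Across deep crawls and batch runs, the same feed often surfaces under several URL variants (trailing slashes, fragments, mixed-case hosts). Below is a minimal normalization sketch for collapsing such duplicates before storing feeds; the `normalize_feed_url` helper and its rules are illustrative, not part of the server's API:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_feed_url(url: str) -> str:
    """Collapse common URL variants of the same feed into one canonical key.

    Illustrative rules only: lower-case the scheme and host, drop the
    fragment, and strip trailing slashes from the path.
    """
    parts = urlsplit(url.strip())
    path = parts.path.rstrip("/") or "/"
    return urlunsplit(
        (parts.scheme.lower(), parts.netloc.lower(), path, parts.query, "")
    )

discovered = [
    "https://Example.com/feed.xml",
    "https://example.com/feed.xml/",
    "https://example.com/feed.xml#latest",
]
unique = {normalize_feed_url(u) for u in discovered}
print(unique)  # all three variants collapse to one URL
```

Deduplicating on a normalized key keeps a feed database from accumulating near-identical rows as crawl coverage grows.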

Installation

Configuration
{
  "mcpServers": {
    "rss-crawler": {
      "command": "npx",
      "args": [
        "-y",
        "rss-crawler-mcp"
      ]
    }
  }
}

How to Use

  1. Stores feeds in a SQLite database
  2. Firecrawl integration for full content
  3. Search and filter capabilities
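Because feeds are persisted to SQLite, the store can be searched and filtered with ordinary SQL. A minimal sketch; the `feeds` table name and its columns are hypothetical stand-ins, since the server's actual schema may differ:

```python
import sqlite3

# Hypothetical schema for illustration -- the real rss-crawler-mcp layout
# may differ. The point is the kind of queries SQLite storage enables.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE feeds (
        url TEXT PRIMARY KEY,
        title TEXT,
        feed_type TEXT,       -- rss, atom, json-feed
        discovered_at TEXT
    )"""
)
conn.executemany(
    "INSERT INTO feeds VALUES (?, ?, ?, ?)",
    [
        ("https://example.com/feed.xml", "Example Blog", "rss", "2025-01-01"),
        ("https://example.com/atom.xml", "Example News", "atom", "2025-01-02"),
    ],
)

# Filter stored feeds by type, newest first
rows = conn.execute(
    "SELECT url, title FROM feeds WHERE feed_type = ? "
    "ORDER BY discovered_at DESC",
    ("rss",),
).fetchall()
for url, title in rows:
    print(f"{title}: {url}")
```

The same pattern extends to full-text search over Firecrawl-fetched content, e.g. via SQLite's FTS5 extension.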

Related MCP Servers

🧰

Toolhouse MCP

Universal AI tool platform that equips your AI with production-ready capabilities. Execute code, browse the web, manage files, send emails, and more through a unified MCP interface.

🔨

Smithery Registry MCP

The MCP server registry and discovery platform. Browse, search, and install MCP servers from the community. Find the perfect integrations for your AI development workflow.

🔍

MCP Inspector

Official debugging and testing tool for MCP servers. Inspect server capabilities, test tool calls, validate responses, and debug protocol communication in real-time.

← Back to All MCP Servers