PostgreSQL Performance Optimization Complete Guide

Master PostgreSQL performance tuning with indexing strategies, query optimization, connection pooling, partitioning, and monitoring. Learn to handle millions of rows efficiently.

Tags: postgresql, database, performance, indexing, optimization, sql, backend
by AntigravityAI

# PostgreSQL Performance Optimization Guide

Optimize PostgreSQL for high-performance applications handling millions of rows with proper indexing, query optimization, and database tuning.

## Indexing Strategies

### Index Types and Usage

```sql
-- B-tree index (default, best for equality and range queries)
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_created ON orders(created_at DESC);

-- Partial index (smaller, faster for filtered queries)
CREATE INDEX idx_active_users ON users(email) 
WHERE status = 'active';

CREATE INDEX idx_recent_orders ON orders(user_id, created_at) 
WHERE created_at > NOW() - INTERVAL '30 days';

-- Composite index (order matters!)
CREATE INDEX idx_orders_user_status ON orders(user_id, status, created_at DESC);

-- Covering index (includes all needed columns)
CREATE INDEX idx_products_search ON products(category_id, price) 
INCLUDE (name, description);

-- GIN index for JSONB and arrays
CREATE INDEX idx_products_metadata ON products USING GIN(metadata);
CREATE INDEX idx_posts_tags ON posts USING GIN(tags);

-- GiST index for geometric data; GIN (on a tsvector) is the usual choice for full-text search
CREATE INDEX idx_locations_coords ON locations USING GIST(coordinates);
CREATE INDEX idx_articles_search ON articles USING GIN(to_tsvector('english', title || ' ' || content));

-- BRIN index (block range, excellent for time-series data)
CREATE INDEX idx_events_timestamp ON events USING BRIN(timestamp);
```
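
On busy production tables, a plain CREATE INDEX holds a lock that blocks writes for the duration of the build. A minimal sketch using CONCURRENTLY (table and column names here are illustrative, not from the schema above):

```sql
-- Build the index without blocking writes (must run outside a transaction block).
-- CONCURRENTLY is slower and, if interrupted, leaves an INVALID index behind that should be dropped.
CREATE INDEX CONCURRENTLY idx_orders_customer_email ON orders (customer_email);

-- Verify the index finished in a valid state before relying on it
SELECT indexrelid::regclass AS index, indisvalid
FROM pg_index
WHERE indexrelid = 'idx_orders_customer_email'::regclass;
```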

### Index Analysis

```sql
-- Find unused indexes
SELECT 
    schemaname || '.' || relname AS table,
    indexrelname AS index,
    pg_size_pretty(pg_relation_size(i.indexrelid)) AS size,
    idx_scan AS scans
FROM pg_stat_user_indexes i
JOIN pg_index USING (indexrelid)
WHERE idx_scan < 50
AND NOT indisunique
ORDER BY pg_relation_size(i.indexrelid) DESC;

-- Find missing indexes
SELECT 
    relname AS table,
    seq_scan,
    seq_tup_read,
    idx_scan,
    n_live_tup AS estimated_rows
FROM pg_stat_user_tables
WHERE seq_scan > 0
AND n_live_tup > 10000
ORDER BY seq_tup_read DESC
LIMIT 20;
```
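
Treat these reports as hints rather than verdicts: idx_scan counters reset whenever statistics are reset and do not reflect index usage on replicas. If you do decide to drop an index, DROP INDEX CONCURRENTLY avoids taking an exclusive lock on the table (the index name below is illustrative):

```sql
-- Drop a confirmed-unused index without blocking concurrent queries
DROP INDEX CONCURRENTLY IF EXISTS idx_users_legacy_lookup;
```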

## Query Optimization

### EXPLAIN ANALYZE

```sql
-- Analyze query execution
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.name, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.created_at > '2024-01-01'
GROUP BY u.id
ORDER BY order_count DESC
LIMIT 100;

-- Key metrics to watch:
-- - Seq Scan on large tables (needs index)
-- - High "rows removed by filter" (needs partial index)
-- - Nested Loop with high row counts (consider hash join)
-- - Sort with high memory (increase work_mem)
```
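
If the plan shows a sort or hash spilling to disk, work_mem can be raised for a single session to confirm the fix before changing it server-wide. A sketch (the value and the query columns are only examples):

```sql
-- Raise work_mem for this session only, then re-check the plan
SET work_mem = '128MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT user_id, SUM(total) AS revenue
FROM orders
GROUP BY user_id
ORDER BY revenue DESC;

-- Revert to the server default
RESET work_mem;
```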

### Common Query Patterns

```sql
-- Pagination with keyset (faster than OFFSET)
SELECT * FROM posts
WHERE (created_at, id) < ($last_created_at, $last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Efficient COUNT for large tables
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'orders';

-- Or use COUNT with covering index
SELECT COUNT(*) FROM orders WHERE status = 'pending';

-- Batch updates (avoid locking entire table)
WITH batch AS (
    SELECT id FROM orders
    WHERE status = 'processing'
    AND updated_at < NOW() - INTERVAL '1 hour'
    LIMIT 1000
    FOR UPDATE SKIP LOCKED
)
UPDATE orders SET status = 'failed'
WHERE id IN (SELECT id FROM batch);

-- Upsert with conflict handling
INSERT INTO user_stats (user_id, views, updated_at)
VALUES ($1, 1, NOW())
ON CONFLICT (user_id) DO UPDATE SET
    views = user_stats.views + 1,
    updated_at = NOW();
```

## Connection Pooling

### PgBouncer Configuration

```ini
; pgbouncer.ini
[databases]
myapp = host=localhost port=5432 dbname=myapp

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt

; Pool settings
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3

; Connection limits
max_db_connections = 50
max_user_connections = 50

; Timeouts
server_idle_timeout = 600
client_idle_timeout = 0
query_timeout = 30
```
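
When sizing default_pool_size and max_db_connections, it helps to see how many server connections the application actually holds. A quick check run against PostgreSQL itself (not through PgBouncer):

```sql
-- Current connections grouped by database and state
SELECT datname, state, count(*)
FROM pg_stat_activity
GROUP BY datname, state
ORDER BY count(*) DESC;

-- Compare against the server-wide limit
SHOW max_connections;
```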

### Prisma with Connection Pooling

```typescript
// lib/db.ts
import { PrismaClient } from "@prisma/client";

const globalForPrisma = globalThis as unknown as {
  prisma: PrismaClient | undefined;
};

export const prisma = globalForPrisma.prisma ?? new PrismaClient({
  datasources: {
    db: {
      url: process.env.DATABASE_URL + "?connection_limit=10&pool_timeout=30",
    },
  },
  log: process.env.NODE_ENV === "development" 
    ? ["query", "error", "warn"] 
    : ["error"],
});

if (process.env.NODE_ENV !== "production") {
  globalForPrisma.prisma = prisma;
}
```

## Table Partitioning

### Range Partitioning for Time-Series

```sql
-- Create partitioned table
CREATE TABLE events (
    id BIGSERIAL,
    event_type VARCHAR(50),
    payload JSONB,
    created_at TIMESTAMPTZ NOT NULL,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- Create monthly partitions
CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Auto-create partitions with pg_partman
CREATE EXTENSION pg_partman;
SELECT partman.create_parent(
    p_parent_table := 'public.events',
    p_control := 'created_at',
    p_type := 'native',
    p_interval := 'monthly',
    p_premake := 3
);
```
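
To confirm that queries actually benefit from partitioning, check that the planner prunes partitions outside the requested range. A sketch against the events table defined above:

```sql
-- Only the partitions overlapping the date range should appear in the plan
EXPLAIN (COSTS OFF)
SELECT count(*)
FROM events
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01';

-- Partition pruning is on by default; verify it has not been disabled
SHOW enable_partition_pruning;
```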

## Performance Tuning

### postgresql.conf Settings

```ini
# Memory (adjust based on available RAM)
shared_buffers = 4GB                 # 25% of RAM
effective_cache_size = 12GB          # 75% of RAM
work_mem = 256MB                     # Per operation memory
maintenance_work_mem = 1GB           # For VACUUM, CREATE INDEX

# Parallelism
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_parallel_maintenance_workers = 4

# Write Ahead Log
wal_buffers = 64MB
checkpoint_completion_target = 0.9
max_wal_size = 4GB
min_wal_size = 1GB

# Query Planner
random_page_cost = 1.1               # For SSD storage
effective_io_concurrency = 200       # For SSD storage
default_statistics_target = 100
```
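
Most of these parameters can also be changed without editing postgresql.conf by hand: ALTER SYSTEM writes to postgresql.auto.conf, and many settings only need a configuration reload, though shared_buffers and max_worker_processes still require a restart. A sketch:

```sql
-- Persist new settings (written to postgresql.auto.conf)
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET work_mem = '256MB';

-- Apply reloadable settings without restarting
SELECT pg_reload_conf();

-- Check effective values and whether a restart is still pending
SELECT name, setting, pending_restart
FROM pg_settings
WHERE name IN ('random_page_cost', 'work_mem', 'shared_buffers');
```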

## Monitoring Queries

```sql
-- Active queries and locks
SELECT pid, age(clock_timestamp(), query_start), usename, query, state
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;

-- Table bloat estimation
SELECT 
    schemaname, relname AS tablename,
    pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
    n_dead_tup,
    n_live_tup,
    round(n_dead_tup * 100.0 / nullif(n_live_tup, 0), 2) as dead_ratio
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```
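
For ongoing query monitoring, pg_stat_statements aggregates statistics per normalized query. It must be listed in shared_preload_libraries (which requires a restart) before the extension can be created; the column names below assume PostgreSQL 13 or newer (older versions use total_time / mean_time):

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top queries by total execution time
SELECT query,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```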

This PostgreSQL optimization guide covers indexing, query tuning, connection pooling, partitioning, and monitoring for production-scale databases.

When to Use This Prompt

This PostgreSQL prompt is ideal for developers working on:

  • PostgreSQL applications that require modern best practices and optimal performance
  • Projects that need production-ready PostgreSQL code with proper error handling
  • Teams looking to standardize their PostgreSQL development workflow
  • Developers wanting to learn industry-standard PostgreSQL patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It is particularly valuable for teams that want to maintain consistency across their PostgreSQL implementations.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness

💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the PostgreSQL code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this PostgreSQL prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For PostgreSQL projects, consider mentioning your PostgreSQL version, coding style, and any specific libraries or extensions you're using.
