# PostgreSQL Index Optimization Guide
Optimize database performance in your Google Antigravity applications with strategic PostgreSQL indexing. This guide covers index types, query analysis, and performance tuning patterns.
## Understanding Index Types
PostgreSQL offers various index types for different use cases:
```sql
-- B-tree indexes (default) - equality and range queries
CREATE INDEX idx_prompts_created_at ON prompts(created_at DESC);
CREATE INDEX idx_prompts_slug ON prompts(slug);
-- Partial indexes - index only relevant rows
CREATE INDEX idx_approved_prompts ON prompts(created_at DESC)
WHERE is_approved = true;
-- Composite indexes - multiple columns
CREATE INDEX idx_prompts_user_date ON prompts(user_id, created_at DESC);
-- GIN indexes - array and JSONB columns
CREATE INDEX idx_prompts_tags ON prompts USING GIN(tags);
CREATE INDEX idx_prompts_metadata ON prompts USING GIN(metadata jsonb_path_ops);
-- GiST indexes - full-text search (GIN is usually faster to query; GiST is cheaper to keep updated)
CREATE INDEX idx_prompts_search ON prompts USING GiST(
to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, ''))
);
-- BRIN indexes - large tables with natural ordering
CREATE INDEX idx_events_timestamp ON events USING BRIN(created_at)
WITH (pages_per_range = 128);
```
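An index is only used when the query's predicates match it: a GIN array index is picked for array operators such as `@>`, a partial index only when the query implies its `WHERE` condition, and an expression index only when the query repeats the indexed expression verbatim. A minimal sketch of query shapes that can use each index above (the slug, tag, JSONB key, and search terms are made-up example values):
```sql
-- B-tree: equality on slug
SELECT * FROM prompts WHERE slug = 'react-hooks-guide';

-- Partial B-tree: the predicate must imply is_approved = true
SELECT * FROM prompts
WHERE is_approved = true
ORDER BY created_at DESC
LIMIT 20;

-- GIN on tags: array containment (@>), not 'x' = ANY(tags)
SELECT * FROM prompts WHERE tags @> ARRAY['react'];

-- GIN with jsonb_path_ops: JSONB containment
SELECT * FROM prompts WHERE metadata @> '{"model": "gpt-4"}';

-- Expression index: repeat the exact indexed expression
SELECT * FROM prompts
WHERE to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, ''))
      @@ websearch_to_tsquery('english', 'react hooks');

-- BRIN: range scan over the naturally ordered column
SELECT count(*) FROM events
WHERE created_at >= now() - interval '1 day';
```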
## Query Analysis with EXPLAIN
Analyze query performance to identify indexing opportunities:
```sql
-- Analyze query execution plan
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM prompts
WHERE is_approved = true
AND tags @> ARRAY['react']  -- @> can use the GIN index; 'react' = ANY(tags) cannot
ORDER BY created_at DESC
LIMIT 20;
-- Check index usage statistics
SELECT
schemaname,
relname AS table_name,
indexrelname AS index_name,
idx_scan AS times_used,
idx_tup_read AS tuples_read,
idx_tup_fetch AS tuples_fetched
FROM pg_stat_user_indexes
WHERE schemaname = 'public'
ORDER BY idx_scan DESC;
-- Find unused indexes
SELECT
schemaname || '.' || relname AS table,
indexrelname AS index,
pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
idx_scan AS index_scans
FROM pg_stat_user_indexes ui
JOIN pg_index i ON ui.indexrelid = i.indexrelid
WHERE NOT i.indisunique
AND idx_scan < 50
AND pg_relation_size(i.indexrelid) > 5 * 1024 * 1024
ORDER BY pg_relation_size(i.indexrelid) DESC;
```
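`EXPLAIN` answers why a particular query is slow; to decide which queries to look at first, the `pg_stat_statements` extension (available on many managed Postgres offerings, otherwise it needs `shared_preload_libraries` plus `CREATE EXTENSION`) aggregates timing per normalized query. A sketch assuming PostgreSQL 13+ column names (older versions use `total_time`/`mean_time`):
```sql
-- Top queries by cumulative execution time
SELECT
  round(total_exec_time::numeric, 1) AS total_ms,
  round(mean_exec_time::numeric, 2) AS mean_ms,
  calls,
  rows,
  left(query, 120) AS query_preview
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```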
## Supabase Migration for Indexes
Create indexes through Supabase migrations. Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block, so if your migration runner wraps each migration in a transaction (many do, including the Supabase CLI by default), these statements need to be applied outside of one:
```sql
-- supabase/migrations/20240101000000_add_performance_indexes.sql
-- Performance indexes for prompts table
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_prompts_approved_date
ON prompts(created_at DESC)
WHERE is_approved = true;
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_prompts_tags_gin
ON prompts USING GIN(tags);
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_prompts_search_vector
ON prompts USING GIN(
  to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, ''))
);
-- Covering index to avoid heap lookups (INCLUDE columns count toward the
-- B-tree row-size limit, so very long description values may not fit)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_prompts_list_covering
ON prompts(created_at DESC)
INCLUDE (id, slug, title, description, tags, star_count)
WHERE is_approved = true;
```
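On a large table a concurrent build can run for a while; from PostgreSQL 12 onward its progress can be watched from another session via `pg_stat_progress_create_index`. A quick sketch:
```sql
-- Watch a running CREATE INDEX [CONCURRENTLY] from another session (PostgreSQL 12+)
SELECT
  c.relname AS index_name,
  p.phase,
  p.blocks_done,
  p.blocks_total,
  p.tuples_done,
  p.tuples_total
FROM pg_stat_progress_create_index p
LEFT JOIN pg_class c ON c.oid = p.index_relid;
```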
## Query Optimization Patterns
Optimize queries to take advantage of indexes:
```typescript
// lib/optimized-queries.ts
import { createClient } from "@supabase/supabase-js";
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
// Use covering index - avoids table lookup
export async function getPromptsList(page = 1, limit = 20) {
const offset = (page - 1) * limit;
const { data, error } = await supabase
.from("prompts")
.select("id, slug, title, description, tags, star_count")
.eq("is_approved", true)
.order("created_at", { ascending: false })
.range(offset, offset + limit - 1);
return { data, error };
}
// Use GIN index for array contains
export async function getPromptsByTag(tag: string) {
const { data, error } = await supabase
.from("prompts")
.select("*")
.eq("is_approved", true)
.contains("tags", [tag])
.order("created_at", { ascending: false })
.limit(50);
return { data, error };
}
```
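For reference, `.contains("tags", [tag])` translates to the array containment operator `@>`, which is exactly what the GIN index on `tags` supports. A rough SQL equivalent of `getPromptsByTag` can be run under `EXPLAIN` to confirm the index is actually chosen ('react' is just an example value):
```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM prompts
WHERE is_approved = true
  AND tags @> ARRAY['react']
ORDER BY created_at DESC
LIMIT 50;
```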
## Index Maintenance
Maintain indexes for optimal performance:
```sql
-- Rebuild an index to reduce bloat (REINDEX ... CONCURRENTLY requires PostgreSQL 12+)
REINDEX INDEX CONCURRENTLY idx_prompts_tags_gin;
-- Analyze table statistics
ANALYZE prompts;
-- Compare index size to table size (a rough bloat signal)
SELECT
  n.nspname || '.' || t.relname AS table_name,
  i.relname AS index_name,
  round(100.0 * pg_relation_size(x.indexrelid) /
        nullif(pg_relation_size(x.indrelid), 0)) AS index_to_table_pct,
  pg_size_pretty(pg_relation_size(x.indexrelid)) AS index_size,
  pg_size_pretty(pg_relation_size(x.indrelid)) AS table_size
FROM pg_index x
JOIN pg_class i ON i.oid = x.indexrelid
JOIN pg_class t ON t.oid = x.indrelid
JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE n.nspname = 'public'
ORDER BY pg_relation_size(x.indexrelid) DESC;
```
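`ANALYZE` matters most when autovacuum has fallen behind; `pg_stat_user_tables` shows when statistics were last refreshed and how many dead tuples have accumulated. A quick check:
```sql
-- Last statistics refresh and dead-tuple counts per table
SELECT
  relname,
  n_live_tup,
  n_dead_tup,
  last_analyze,
  last_autoanalyze,
  last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY n_dead_tup DESC;
```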
## Best Practices
1. **Index Selectivity**: Create indexes on columns with high selectivity (see the pg_stats sketch after this list)
2. **Composite Order**: Order composite-index columns by access pattern: equality-filtered columns first, then the range or sort column, as in `(user_id, created_at DESC)` above
3. **Partial Indexes**: Use WHERE clauses to index only relevant rows
4. **Covering Indexes**: Include frequently selected columns to avoid table lookups
5. **Concurrent Creation**: Use CONCURRENTLY to avoid locking tables
6. **Regular Maintenance**: Analyze and reindex periodically to prevent bloat
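To gauge selectivity before creating an index (practice 1), the planner's own statistics in `pg_stats` give an estimated distinct count and null fraction per column. A sketch for the `prompts` table:
```sql
-- n_distinct > 0 is an estimated distinct count; a negative value is a fraction
-- of the row count (-1 means every value is unique)
SELECT
  tablename,
  attname,
  n_distinct,
  null_frac
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename = 'prompts'
ORDER BY attname;
```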