# Redis Caching Strategies for Google Antigravity
Implement high-performance caching with Redis in your Google Antigravity projects. This guide covers caching patterns, cache invalidation, and distributed caching for scalable applications.
## Redis Client Setup
Configure Redis with connection pooling:
```typescript
// src/lib/redis.ts
import Redis from "ioredis";

const redisConfig = {
  host: process.env.REDIS_HOST || "localhost",
  port: parseInt(process.env.REDIS_PORT || "6379", 10),
  password: process.env.REDIS_PASSWORD,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  connectTimeout: 10000,
  lazyConnect: true,
};

class RedisClient {
  private static instance: Redis;

  static getInstance(): Redis {
    if (!this.instance) {
      this.instance = new Redis(redisConfig);

      this.instance.on("connect", () => {
        console.log("Redis connected successfully");
      });

      this.instance.on("error", (error) => {
        console.error("Redis connection error:", error);
      });
    }
    return this.instance;
  }
}

export const redis = RedisClient.getInstance();
```
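ioredis also accepts a `retryStrategy` callback that controls reconnection timing: return a delay in milliseconds, or `null` to stop retrying. A minimal capped exponential backoff sketch (the constants here are assumptions, tune them for your deployment):

```typescript
// Hypothetical reconnection backoff for the config above: 100 ms, 200 ms,
// 400 ms, ... capped at 5 seconds. Pass it as `retryStrategy` in redisConfig.
export function retryStrategy(times: number): number {
  return Math.min(50 * 2 ** times, 5000);
}
```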
## Cache-Aside Pattern
Implement lazy loading cache strategy:
```typescript
// src/services/cacheService.ts
import { redis } from "@/lib/redis";

interface CacheOptions {
  ttl?: number;
  prefix?: string;
}

export class CacheService {
  private defaultTTL = 3600; // 1 hour
  private prefix: string;

  constructor(prefix = "app") {
    this.prefix = prefix;
  }

  private getKey(key: string): string {
    return `${this.prefix}:${key}`;
  }

  async get<T>(key: string): Promise<T | null> {
    const data = await redis.get(this.getKey(key));
    if (!data) return null;
    try {
      return JSON.parse(data) as T;
    } catch {
      return data as unknown as T;
    }
  }

  async set<T>(key: string, value: T, options?: CacheOptions): Promise<void> {
    const ttl = options?.ttl || this.defaultTTL;
    const serialized = typeof value === "string" ? value : JSON.stringify(value);
    await redis.setex(this.getKey(key), ttl, serialized);
  }

  async getOrSet<T>(
    key: string,
    fetcher: () => Promise<T>,
    options?: CacheOptions
  ): Promise<T> {
    const cached = await this.get<T>(key);
    if (cached !== null) {
      return cached;
    }
    const fresh = await fetcher();
    await this.set(key, fresh, options);
    return fresh;
  }

  async invalidate(pattern: string): Promise<number> {
    // Caution: KEYS is O(N) and blocks the server; prefer SCAN in production.
    const keys = await redis.keys(this.getKey(pattern));
    if (keys.length === 0) return 0;
    return redis.del(...keys);
  }

  async invalidateByTags(tags: string[]): Promise<void> {
    const pipeline = redis.pipeline();
    for (const tag of tags) {
      const members = await redis.smembers(`tag:${tag}`);
      if (members.length > 0) {
        pipeline.del(...members);
      }
      pipeline.del(`tag:${tag}`);
    }
    await pipeline.exec();
  }
}

export const cache = new CacheService();
```
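`invalidateByTags` deletes the members of each `tag:<tag>` set, but nothing in `CacheService` ever populates those sets. A hypothetical companion helper (the `TagStore` interface and `registerTags` name are assumptions, not part of the service above) that registers a cache key under its tags; any client exposing ioredis-style `sadd` and `expire` works:

```typescript
// Minimal interface mirroring the ioredis commands this helper needs.
interface TagStore {
  sadd(key: string, ...members: string[]): Promise<number>;
  expire(key: string, seconds: number): Promise<number>;
}

// Add the cache key to each tag's set so invalidateByTags has members to
// delete. The expire call keeps tag sets from growing forever if tag-based
// invalidation never runs for that tag.
export async function registerTags(
  store: TagStore,
  cacheKey: string,
  tags: string[],
  ttl = 3600
): Promise<void> {
  for (const tag of tags) {
    await store.sadd(`tag:${tag}`, cacheKey);
    await store.expire(`tag:${tag}`, ttl);
  }
}
```

Call it alongside `cache.set`, e.g. `registerTags(redis, "app:user:1", ["users"])`, so a later `invalidateByTags(["users"])` actually finds the key.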
## Write-Through Cache
Implement synchronized cache updates:
```typescript
// src/repositories/userRepository.ts
import { cache } from "@/services/cacheService";
import { db } from "@/lib/database";

interface User {
  id: string;
  email: string;
  name: string;
  updatedAt: Date;
}

export class UserRepository {
  private cacheKey(id: string): string {
    return `user:${id}`;
  }

  async findById(id: string): Promise<User | null> {
    return cache.getOrSet(
      this.cacheKey(id),
      async () => {
        const user = await db.user.findUnique({ where: { id } });
        return user;
      },
      { ttl: 1800 }
    );
  }

  async update(id: string, data: Partial<User>): Promise<User> {
    // Update the database first
    const updated = await db.user.update({
      where: { id },
      data: { ...data, updatedAt: new Date() },
    });

    // Then update the cache immediately (write-through)
    await cache.set(this.cacheKey(id), updated, { ttl: 1800 });
    return updated;
  }

  async delete(id: string): Promise<void> {
    await db.user.delete({ where: { id } });
    await cache.invalidate(this.cacheKey(id));
  }
}
```
## Distributed Locking
Prevent cache stampede with Redis locks:
```typescript
// src/services/lockService.ts
import { redis } from "@/lib/redis";
import { randomUUID } from "crypto";

export class LockService {
  async acquireLock(
    resource: string,
    ttlMs: number = 10000
  ): Promise<string | null> {
    const lockKey = `lock:${resource}`;
    const lockValue = randomUUID();
    // SET with NX + PX: succeeds only if the key does not already exist,
    // and expires automatically after ttlMs so a crashed holder cannot
    // leave the lock stuck forever.
    const acquired = await redis.set(lockKey, lockValue, "PX", ttlMs, "NX");
    return acquired === "OK" ? lockValue : null;
  }

  async releaseLock(resource: string, lockValue: string): Promise<boolean> {
    // The Lua script makes check-and-delete atomic, so we never delete a
    // lock that another process has acquired after ours expired.
    const script = `
      if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
      else
        return 0
      end
    `;
    const result = await redis.eval(script, 1, `lock:${resource}`, lockValue);
    return result === 1;
  }

  async withLock<T>(
    resource: string,
    fn: () => Promise<T>,
    ttlMs: number = 10000
  ): Promise<T> {
    const lockValue = await this.acquireLock(resource, ttlMs);
    if (!lockValue) {
      throw new Error(`Could not acquire lock for ${resource}`);
    }
    try {
      return await fn();
    } finally {
      await this.releaseLock(resource, lockValue);
    }
  }
}
```
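The Redis lock coordinates across processes; within a single Node.js process, an in-memory single-flight map (a sketch, not part of `LockService`) already collapses concurrent misses for the same key into one fetch, avoiding lock round-trips entirely:

```typescript
// Concurrent callers for the same key share one in-flight promise instead
// of each invoking the fetcher. The entry is removed once the fetch settles,
// so the next miss triggers a fresh fetch.
const inflight = new Map<string, Promise<unknown>>();

export function singleFlight<T>(
  key: string,
  fetcher: () => Promise<T>
): Promise<T> {
  const existing = inflight.get(key);
  if (existing) return existing as Promise<T>;

  const pending = fetcher().finally(() => inflight.delete(key));
  inflight.set(key, pending);
  return pending;
}
```

In practice the two layers compose: single-flight deduplicates within a process, the Redis lock deduplicates across processes.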
## Cache Warming
Pre-populate cache for optimal performance:
```typescript
// src/services/cacheWarmer.ts
import { cache } from "@/services/cacheService";
import { db } from "@/lib/database";

export async function warmCache(): Promise<void> {
  console.log("Starting cache warming...");

  // Warm the 100 most-viewed active products
  const popularProducts = await db.product.findMany({
    where: { isActive: true },
    orderBy: { viewCount: "desc" },
    take: 100,
  });

  await Promise.all(
    popularProducts.map((product) =>
      cache.set(`product:${product.id}`, product, { ttl: 3600 })
    )
  );

  console.log(`Warmed ${popularProducts.length} products`);
}
```
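Warming 100 products with the same 3600-second TTL makes them all expire at the same instant, recreating the thundering herd the cache was meant to prevent. A common mitigation (a sketch; `jitteredTTL` is not part of the services above) is to add random jitter to each TTL:

```typescript
// Spread expirations over baseTTL .. baseTTL * (1 + spread) seconds so
// warmed keys do not all expire at once. E.g. pass jitteredTTL(3600) as
// the ttl option instead of a fixed 3600.
export function jitteredTTL(baseTTL: number, spread = 0.1): number {
  return baseTTL + Math.floor(baseTTL * spread * Math.random());
}
```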
Google Antigravity generates optimized caching code that improves application performance while maintaining data consistency across distributed systems.