
#Redis

Joey Montes @jrm · Mar 19
Fetching a user's feed (complete with tags, media, author details, and like counts) is an expensive relational query. To hit our target of sub-100ms API response times, we cannot run that query against the SQL database on every page load. We have implemented a distributed, in-memory caching layer using Redis: when a feed is requested, the system first checks the Redis cache; on a miss, it queries the database, formats the JSON payload, and stores it in Redis with a Time-To-Live (TTL) expiration. On top of the TTL, we use event-driven cache invalidation: when a new post is created, the affected cache keys are flushed, so users see fresh content immediately rather than waiting for the TTL to expire.
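The cache-aside flow described above (check Redis, fall back to SQL on a miss, write back with a TTL, flush keys on new posts) can be sketched roughly as follows. This is an illustrative sketch, not the actual Roar code: `FakeRedis` is a tiny in-memory stand-in for a real Redis client (exposing the same `get`/`setex`/`delete` calls, so it runs without a server), and `fetch_feed_from_db`, `on_post_created`, the `feed:{user_id}` key scheme, and the 60-second TTL are all hypothetical names and values chosen for the example.

```python
import json
import time


class FakeRedis:
    """Minimal in-memory stand-in for a Redis client.

    Mimics the get/setex/delete subset used below so the sketch runs
    without a server; in a real deployment you would use redis.Redis().
    """

    def __init__(self):
        self._store = {}  # key -> (expires_at, serialized value)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires_at, value = item
        if time.monotonic() >= expires_at:  # honor the TTL
            del self._store[key]
            return None
        return value

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (time.monotonic() + ttl_seconds, value)

    def delete(self, key):
        self._store.pop(key, None)


cache = FakeRedis()
FEED_TTL_SECONDS = 60  # hypothetical TTL; tune to your freshness needs


def fetch_feed_from_db(user_id):
    # Stand-in for the expensive relational query joining tags, media,
    # author details, and like counts.
    return [{"post_id": 1, "author": "jrm", "likes": 3, "tags": ["redis"]}]


def get_feed(user_id):
    """Cache-aside read: try Redis first, fall back to SQL on a miss."""
    key = f"feed:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    feed = fetch_feed_from_db(user_id)  # cache miss: run the query once
    cache.setex(key, FEED_TTL_SECONDS, json.dumps(feed))
    return feed


def on_post_created(author_id, follower_ids):
    """Event-driven invalidation: flush only the feed keys affected."""
    for uid in [author_id, *follower_ids]:
        cache.delete(f"feed:{uid}")
```

Keeping a TTL even with event-driven invalidation is a deliberate safety net: if an invalidation event is ever lost, stale entries still age out on their own.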
