Building a High-Performance Guestbook with Fan-Out Caching
- Redis
- Next.js
- System Design
- Caching
- TypeScript
How I built a Reddit-style comment system with threaded replies, real-time voting, and multi-dimensional sorting using Redis fan-out architecture.
When I decided to add a guestbook to my portfolio, I didn't want just a simple form that appends messages to a database. I wanted something engaging—threaded replies, upvotes/downvotes, and the ability to sort by recency, popularity, or engagement. What started as a "quick feature" turned into an interesting system design challenge.
The Problem
Traditional comment systems face several challenges:
- Sorting flexibility: Users want to see "latest", "most popular", or "most discussed" content
- Threaded replies: Comments can have nested replies (like Reddit)
- Real-time counts: Vote counts and reply counts should update instantly
- Performance: You can't run `COUNT(*)` queries on every page load
The naive approach—invalidating cache on every write—doesn't scale. With 50 entries per page and 3 sorting dimensions, that's a lot of cache churn.
The Solution: Fan-Out Architecture
Instead of cache invalidation, I implemented a fan-out architecture where writes propagate to multiple data structures simultaneously. Here's the key insight:
Don't delete old cache. Update rankings atomically.
Redis Sorted Sets as Rankings
Redis sorted sets (ZSET) are perfect for this. Each entry gets a score, and Redis maintains sorted order automatically.
gb:r:recent → entries sorted by timestamp
gb:r:popular → entries sorted by (upvotes - downvotes)
gb:r:engaged → entries sorted by reply_count
When a user upvotes an entry, instead of invalidating the "popular" cache:
// Old way (bad)
await redis.del("guestbook:popular:page:1");
await redis.del("guestbook:popular:page:2");
// ... delete ALL pages
// New way (good)
await redis.zadd("gb:r:popular", newScore, entryId);
// That's it! The ranking updates automatically
System Design
Here's the overall architecture:
flowchart TB
subgraph Client["Client Layer"]
UI[GuestbookList]
Sort[Sort Dropdown]
Item[GuestbookItem]
end
subgraph Server["Server Actions"]
Fetch[fetchGuestbookEntries]
Submit[submitGuestbookEntry]
Vote[voteEntry]
Replies[fetchMoreReplies]
end
subgraph Cache["Cache Manager"]
Hydrate[Hydration]
FanOut[Fan-out Writer]
Hierarchy[Hierarchy Builder]
end
subgraph Storage["Storage Layer"]
Redis[(Redis)]
DB[(Turso SQLite)]
end
UI --> Fetch
Sort --> Fetch
Item --> Vote
Item --> Replies
Fetch --> Cache
Submit --> FanOut
Vote --> FanOut
Cache --> Redis
Cache --> DB
FanOut --> Redis
The Fan-Out Flow
When a User Posts a New Entry
sequenceDiagram
participant U as User
participant S as Server Action
participant DB as Database
participant R as Redis
U->>S: submitGuestbookEntry()
S->>DB: INSERT INTO guestbook
DB-->>S: newEntry (id: 42)
par Fan-out to all rankings
S->>R: ZADD gb:r:recent {timestamp} 42
S->>R: ZADD gb:r:popular 0 42
S->>R: ZADD gb:r:engaged 0 42
end
S->>R: HSET gb:e:42 {...entryData}
alt Has parent
S->>R: ZADD gb:c:{parentId} {timestamp} 42
S->>DB: UPDATE guestbook SET replies_count++
S->>R: ZADD gb:r:engaged {newCount} {parentId}
end
S-->>U: { success: true }
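In code, the cache side of that flow reduces to a handful of pipelined commands. Here's a minimal sketch assuming an ioredis client and the key names above; fanOutNewEntry and its exact field set are illustrative, not the real submitGuestbookEntry:
// Minimal sketch of the fan-out step (ioredis assumed); the database INSERT
// happens before this and returns the new entry.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

async function fanOutNewEntry(entry: {
  id: number;
  content: string;
  createdAt: number; // unix timestamp
  parentId?: number;
}) {
  const pipeline = redis.pipeline();

  // Fan out to all three rankings in a single round trip
  pipeline.zadd("gb:r:recent", entry.createdAt, entry.id);
  pipeline.zadd("gb:r:popular", 0, entry.id); // new entries start at score 0
  pipeline.zadd("gb:r:engaged", 0, entry.id); // and with zero replies

  // Store the entry hash (compressed field names, see "Memory Optimization" below)
  pipeline.hset(`gb:e:${entry.id}`, {
    i: entry.id,
    c: entry.content,
    ca: entry.createdAt,
    p: entry.parentId ?? "",
    s: 0,
    rc: 0,
  });

  if (entry.parentId) {
    // Register the reply under its parent and bump the parent's engagement rank
    // (equivalent to the ZADD-with-new-count in the diagram above)
    pipeline.zadd(`gb:c:${entry.parentId}`, entry.createdAt, entry.id);
    pipeline.zincrby("gb:r:engaged", 1, entry.parentId);
  }

  await pipeline.exec();
}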
When a User Votes
sequenceDiagram
participant U as User
participant S as Server Action
participant DB as Database
participant R as Redis
U->>S: voteEntry(42, "up")
S->>DB: Get current score
S->>DB: INSERT/UPDATE vote
S->>DB: UPDATE score
par Cache updates
S->>R: ZADD gb:r:popular {newScore} 42
S->>R: HSET gb:e:42 s {newScore}
S->>R: HSET gb:uv:{userId} 42 "up"
end
S-->>U: { success: true, newScore }
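The vote path follows the same pattern: the database stays the source of truth for the score, and the cache is updated in place rather than invalidated. In this sketch, persistVote is a hypothetical stand-in for the Drizzle insert/update that returns the new score:
// `redis` is the ioredis client from the previous sketch.
// Hypothetical DB helper: records the vote and returns the recomputed score.
declare function persistVote(
  entryId: number,
  userId: string,
  direction: "up" | "down",
): Promise<number>;

async function voteEntry(entryId: number, userId: string, direction: "up" | "down") {
  // 1. Persist the vote and compute the new score in the database
  const newScore = await persistVote(entryId, userId, direction);

  // 2. Atomic cache updates: nothing to invalidate, nothing to refetch
  const pipeline = redis.pipeline();
  pipeline.zadd("gb:r:popular", newScore, entryId); // re-rank in "popular"
  pipeline.hset(`gb:e:${entryId}`, { s: newScore }); // keep the entry hash in sync
  pipeline.hset(`gb:uv:${userId}`, entryId, direction); // remember this user's vote
  await pipeline.exec();

  return { success: true, newScore };
}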
When Fetching Entries
sequenceDiagram
participant U as User
participant S as Server
participant C as Cache Manager
participant R as Redis
participant DB as Database
U->>S: fetchGuestbookEntries("popular", "desc", 0)
S->>C: getGuestbookEntriesPaginated()
C->>R: GET gb:h:popular (hydrated?)
alt Not hydrated
C->>DB: SELECT * ORDER BY score LIMIT 500
C->>R: ZADD gb:r:popular (batch)
C->>R: HSET gb:e:* (batch)
C->>R: SET gb:h:popular "1"
end
C->>R: ZREVRANGE gb:r:popular 0 149
R-->>C: [42, 37, 19, 45, ...]
loop For each entry
C->>R: HGETALL gb:e:{id}
alt Has ancestors
C->>C: Build hierarchy path
end
end
C-->>S: { entries, nextCursor, hasMore }
S-->>U: Paginated response
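Sketched in code, the read path looks roughly like this, assuming the same client and keys. hydratePopularFromDb is a hypothetical helper covering the "not hydrated" branch (bulk-loading the top 500 rows from SQLite into the ranking and entry hashes):
const PAGE_SIZE = 50;

// Hypothetical hydration helper: one SQL query, then batched ZADD + HSET
declare function hydratePopularFromDb(): Promise<void>;

async function getPopularPage(cursor: number) {
  // Lazy hydration: only hit the database the first time this ranking is read
  if (!(await redis.get("gb:h:popular"))) {
    await hydratePopularFromDb();
    await redis.set("gb:h:popular", "1");
  }

  // Ranking lookup returns IDs only, highest score first
  const ids = await redis.zrevrange("gb:r:popular", cursor, cursor + PAGE_SIZE - 1);

  // Batch-load the entry hashes in one round trip
  const pipeline = redis.pipeline();
  for (const id of ids) pipeline.hgetall(`gb:e:${id}`);
  const results = await pipeline.exec();

  return {
    entries: (results ?? []).map(([, hash]) => hash),
    nextCursor: cursor + ids.length,
    hasMore: ids.length === PAGE_SIZE,
  };
}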
Handling Threaded Replies
The trickiest part was displaying threaded replies correctly when sorting by non-recency metrics. If a deeply nested reply becomes "most popular", how do we show it?
The Ancestor Path Approach
Each entry stores its full ancestor path:
-- Entry 42 is a reply to 19, which is a reply to 5
id: 42
parent_id: 19
ancestor_ids: "5,19" -- Full path from root
When entry 42 ranks #1 in "popular", we:
- Parse the path: `[5, 19]`
- Load all ancestors from cache
- Build the tree: `Root(5) → Child(19) → Highlighted(42)`
- Collapse other siblings with a badge: `[+3 more]`
// Simplified hierarchy building
async function buildHierarchy(entry, allEntries, userVotes) {
if (!entry.ancestorIds) {
// Root entry - show directly
return { ...entry, highlighted: true };
}
const path = entry.ancestorIds.split(",").map(Number);
const root = await loadEntry(path[0]);
// Recursively build path to highlighted entry
return buildPathToEntry(root, path.slice(1), entry.id);
}
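buildPathToEntry is where the collapsing happens. One possible shape for it, again a sketch rather than the exact implementation: walk down the ancestor chain, keep only the branch that leads to the highlighted entry, and count how many siblings get hidden at each level.
type Entry = {
  id: number;
  replies?: Entry[];
  highlighted?: boolean;
  hiddenCount?: number;
  [key: string]: unknown;
};

// Hypothetical helper: reads gb:e:{id} from Redis and maps it to an Entry
declare function loadEntry(id: number): Promise<Entry>;

async function buildPathToEntry(
  node: Entry,
  remainingPath: number[],
  highlightedId: number,
): Promise<Entry> {
  // The only child shown at this level is the next step on the path
  // (or the highlighted entry itself, once the path is exhausted)
  const visibleChildId = remainingPath.length > 0 ? remainingPath[0] : highlightedId;
  const child = await loadEntry(visibleChildId);

  const visibleChild =
    remainingPath.length > 0
      ? await buildPathToEntry(child, remainingPath.slice(1), highlightedId)
      : { ...child, highlighted: true };

  // Everything else under this node collapses behind the "+N more" badge
  const totalChildren = await redis.zcard(`gb:c:${node.id}`);

  return { ...node, replies: [visibleChild], hiddenCount: Math.max(totalChildren - 1, 0) };
}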
The Badge for Hidden Replies
When siblings are collapsed, we show a clickable badge:
{hiddenCount > 0 && (
  <Badge
    onClick={handleLoadMoreReplies}
    className="cursor-pointer bg-orange-500 text-white"
  >
    +{hiddenCount} more
  </Badge>
)}
Clicking loads 5 more replies at a time, cached separately:
// gb:c:{parentId} = sorted set of child IDs
await redis.zadd(`gb:c:${parentId}`, timestamp, childId);
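fetchMoreReplies then pages through that per-parent sorted set, five at a time. A hedged sketch, reusing the client and field shapes assumed in the earlier snippets:
const REPLY_BATCH = 5;

async function fetchMoreReplies(parentId: number, offset: number) {
  // Children are scored by timestamp, so ZRANGE returns them oldest-first
  const childIds = await redis.zrange(
    `gb:c:${parentId}`,
    offset,
    offset + REPLY_BATCH - 1,
  );

  // Load the reply hashes in one pipelined round trip
  const pipeline = redis.pipeline();
  for (const id of childIds) pipeline.hgetall(`gb:e:${id}`);
  const results = await pipeline.exec();

  return {
    replies: (results ?? []).map(([, hash]) => hash),
    nextOffset: offset + childIds.length,
    hasMore: childIds.length === REPLY_BATCH,
  };
}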
Memory Optimization
With potentially thousands of entries, memory matters. I used compressed keys:
// Instead of verbose keys
{ id, content, createdAt, isAnonymous, parentId, ... }
// Use single-letter keys
{ i, c, ca, a, p, ap, uv, dv, s, rc, ud }
Each entry uses ~1-2 KB. With 500 entries per category and 3 categories, total cache is ~3-4 MB.
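As an illustration, the translation between the verbose shape and the compressed hash fields is a thin mapping layer. The field-to-letter mapping below is inferred from the lists above (with `ap`, the ancestor path, and `ud` omitted for brevity), so treat it as an assumption rather than the exact schema:
// Illustrative mapping to the single-letter hash fields (assumed, not exact)
type GuestbookEntry = {
  id: number;
  content: string;
  createdAt: number;
  isAnonymous: boolean;
  parentId: number | null;
  upvotes: number;
  downvotes: number;
  score: number;
  replyCount: number;
};

// Written to Redis as HSET gb:e:{id}
function compress(e: GuestbookEntry) {
  return {
    i: e.id,
    c: e.content,
    ca: e.createdAt,
    a: e.isAnonymous ? 1 : 0,
    p: e.parentId ?? "",
    uv: e.upvotes,
    dv: e.downvotes,
    s: e.score,
    rc: e.replyCount,
  };
}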
Performance Results
| Operation  | Before                        | After                 |
| ---------- | ----------------------------- | --------------------- |
| Page fetch | ~200ms (DB queries)           | ~10ms (cached)        |
| Vote       | ~150ms (invalidate + refetch) | ~20ms (atomic update) |
| New entry  | ~100ms                        | ~50ms                 |
The key win: No cache invalidation on writes. Rankings update atomically via ZADD.
Technologies Used
- Next.js 16 with Server Actions
- Redis (ioredis) for caching
- Turso (SQLite) for persistence
- Drizzle ORM for type-safe queries
- TypeScript throughout
Lessons Learned
- Fan-out beats invalidation for multi-dimensional sorting
- Denormalize wisely - `ancestor_ids` saves recursive queries
- Lazy hydration - only load categories when accessed
- Optimistic UI - update locally, revert on error
- Compressed keys - single letters save memory at scale
What's Next
- Real-time updates via WebSocket
- Rate limiting for votes
- Analytics dashboard
- Moderation tools
Conclusion
Building this guestbook taught me that sometimes the "simple" features require the most thought. The fan-out architecture eliminated cache invalidation headaches and made sorting feel instant. If you're building a comment system with multiple sort dimensions, consider this approach—it scales beautifully.
The full implementation is available in the GitHub repository. Feel free to explore the code and reach out with questions!
Thanks for reading! If you found this useful, consider signing my guestbook and testing out the sorting features yourself. 😄