Cache stampede solution: Locking
Jun 15, 2024
Please familiarize yourself with the cache stampede problem here. This post describes a locking mechanism to avoid it.
Locking Mechanism
Implement a locking mechanism to ensure that only one request at a time can fetch data from the backend and populate the cache. When a cache miss occurs:
- The request attempts to acquire a lock for that cache key.
- If the lock is acquired, the request fetches the data from the backend or database and updates the cache.
- If the lock cannot be acquired, the request waits for the lock to be released and then uses the newly cached data.
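The steps above can be illustrated without Redis at all. The following self-contained Go sketch (not from the original post; `lockingCache` and its fields are hypothetical names) uses an in-process map as the cache and one mutex per key as the lock store, and counts how often the backend is actually hit under concurrency:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// lockingCache is an in-process stand-in for the Redis setup described
// below: data plays the role of the cache, locks holds one mutex per key.
type lockingCache struct {
	mu    sync.Mutex
	data  map[string]string
	locks map[string]*sync.Mutex
}

func newLockingCache() *lockingCache {
	return &lockingCache{data: map[string]string{}, locks: map[string]*sync.Mutex{}}
}

// get follows the steps above: on a miss it acquires the per-key lock; the
// winner calls fetch and populates the cache, while waiters block on the
// lock and then reuse the newly cached value.
func (c *lockingCache) get(key string, fetch func() string) string {
	c.mu.Lock()
	if v, ok := c.data[key]; ok {
		c.mu.Unlock()
		return v
	}
	lock, ok := c.locks[key]
	if !ok {
		lock = &sync.Mutex{}
		c.locks[key] = lock
	}
	c.mu.Unlock()

	lock.Lock() // waiters block here until the winner releases the lock
	defer lock.Unlock()

	c.mu.Lock()
	if v, ok := c.data[key]; ok { // cache was populated while we waited
		c.mu.Unlock()
		return v
	}
	c.mu.Unlock()

	v := fetch() // only one request per key reaches the backend
	c.mu.Lock()
	c.data[key] = v
	c.mu.Unlock()
	return v
}

// demo runs 10 concurrent requests for the same key and returns how many
// times the backend was actually hit.
func demo() int32 {
	c := newLockingCache()
	var fetches int32
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.get("user:42", func() string {
				atomic.AddInt32(&fetches, 1)
				return "profile-data"
			})
		}()
	}
	wg.Wait()
	return atomic.LoadInt32(&fetches)
}

func main() {
	fmt.Println("backend fetches:", demo()) // prints: backend fetches: 1
}
```

Despite ten concurrent requests for the same key, the backend is hit exactly once; the other nine requests wait on the lock and then serve the newly cached value.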
Code example using Go and Redis (using the go-redis and google/uuid packages):
import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
	"github.com/google/uuid"
)

const (
	lockTimeout       = 30 * time.Second
	lockRetryInterval = 100 * time.Millisecond
)

// releaseScript deletes the lock only if it still holds our token, so a
// request whose lock has expired cannot delete a lock now owned by another.
var releaseScript = redis.NewScript(`
if redis.call("GET", KEYS[1]) == ARGV[1] then
	return redis.call("DEL", KEYS[1])
end
return 0`)

func getCacheWithLock(ctx context.Context, client *redis.Client, key string, valueFactory func() (interface{}, error)) (interface{}, error) {
	lockKey := "lock:" + key
	lockValue := uuid.New().String()
	for {
		// Try to acquire the lock; SETNX succeeds for exactly one request.
		ok, err := client.SetNX(ctx, lockKey, lockValue, lockTimeout).Result()
		if err != nil {
			return nil, err
		}
		if ok {
			// Release the lock on exit, but only if we still own it.
			defer releaseScript.Run(ctx, client, []string{lockKey}, lockValue)
			break
		}
		// Lock is held by another request; wait and retry.
		time.Sleep(lockRetryInterval)
	}
	// Check the cache: a request that waited will usually find it populated.
	value, err := client.Get(ctx, key).Result()
	if err == redis.Nil {
		// Cache miss: fetch a fresh value from the backend.
		newValue, err := valueFactory()
		if err != nil {
			return nil, err
		}
		// Update the cache with the new value.
		if err := client.Set(ctx, key, newValue, 0).Err(); err != nil {
			return nil, err
		}
		return newValue, nil
	} else if err != nil {
		return nil, err
	}
	return value, nil
}
Comments, claps and feedback are welcome. Happy coding 👨‍💻