Introduction

In the architecture of a typical web application, a database often serves as the primary data store while a distributed cache like Redis supplements it for better performance. While this setup already offers a level of efficiency, there’s room for further optimization. Enter the local LRU (Least Recently Used) cache: an in-process cache that serves hot keys without even a network round trip.

Using a local LRU cache allows us to reduce the load on Redis and improve the speed and responsiveness of the application even more. This article will explore how you can implement a local LRU cache in Go to boost your web app’s performance.

Implementation

We’ll be leveraging a fantastic library by HashiCorp known as golang-lru to implement our local LRU cache. To showcase how this works, let’s build a simple web application using the Gin framework. Our app will feature a single handler that retrieves user data based on a given user ID.

This example will illustrate how a local LRU cache can further enhance performance, even when you already have a Redis cache supporting your database.

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func StartServer() {
    router := gin.Default()

    router.GET("/get-user", GetUser())
    router.Run(":8080")
}

func GetUser() gin.HandlerFunc {
    return func(c *gin.Context) {
        // c.GetQuery returns the value along with whether the parameter was present
        userId, ok := c.GetQuery("user_id")
        if !ok {
            c.JSON(http.StatusBadRequest, gin.H{"error": "user_id is required"})
            return
        }

        // Find in Redis and return
        ...

        // If not found in Redis, get it from the database and return
        ...
    }
}

func main() {
    StartServer()
}
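
The Redis and database steps are elided above, but to make the flow concrete, here is one possible sketch of that lookup path. It assumes the go-redis v9 client; fetchUserFromDB is a hypothetical helper standing in for your database layer, and the "user:" key prefix is just an illustrative convention.

import (
    "context"
    "encoding/json"

    "github.com/redis/go-redis/v9"
)

// findUser tries Redis first and falls back to the database on a miss.
func findUser(ctx context.Context, rdb *redis.Client, userId string) (*types.User, error) {
    raw, err := rdb.Get(ctx, "user:"+userId).Result()
    if err == nil {
        var u types.User
        if jsonErr := json.Unmarshal([]byte(raw), &u); jsonErr == nil {
            return &u, nil // found in Redis
        }
    } else if err != redis.Nil {
        return nil, err // a real Redis error, not just a missing key
    }
    // Not in Redis (or the cached value was unreadable): load from the
    // database via the hypothetical helper
    return fetchUserFromDB(ctx, userId)
}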

The golang-lru library offers two main mechanisms for caching data:

  1. Size-Based: Cache limited by the number of stored items.
  2. Size-Based with TTL (Time-To-Live): Cache limited by both the number of stored items and a time-based key expiration.

Let’s start by exploring an example that employs the Size-Based LRU approach.

// lru here refers to the v2 package: github.com/hashicorp/golang-lru/v2
l, _ := lru.New[string, *types.User](128) // the error is non-nil only for a non-positive size
l.Add(userId, &types.User{...})
value, ok := l.Get(userId)
if ok {
    // Key found in the cache
}

Once your cache hits the 128-item limit, the LRU mechanism will kick in, evicting the least recently used keys to make room for new ones.
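
To see the eviction order concretely, here’s a tiny sketch with a capacity of just 2:

c, _ := lru.New[string, int](2)
c.Add("a", 1)
c.Add("b", 2)
c.Get("a")    // touch "a" so it becomes the most recently used key
c.Add("c", 3) // over capacity: "b", the least recently used key, is evicted
fmt.Println(c.Contains("b")) // false
fmt.Println(c.Contains("a")) // true

Note that the cache is thread-safe, so a single instance can be shared by all of your handler goroutines.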

For TTL-based caching, you can also set a maximum key count just like in the size-based approach. This means your cache will not only expire keys based on time but also maintain a maximum count.
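
Here’s a minimal sketch of the expiration behavior in isolation, using a deliberately short one-second TTL:

cache := expirable.NewLRU[string, string](128, nil, time.Second)
cache.Add("session", "abc123")
v, ok := cache.Get("session")
fmt.Println(v, ok) // prints "abc123 true": the key is still fresh
time.Sleep(2 * time.Second)
v, ok = cache.Get("session")
fmt.Println(v, ok) // prints " false": the key has expired and the zero value is returned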

Now, let’s enhance our previous web application by implementing a TTL-based LRU cache. This will give us the best of both worlds: time-based expiration and size limitations.

import (
    ...
    "github.com/hashicorp/golang-lru/v2/expirable"
)

func GetLRUCache() *expirable.LRU[string, *types.User] {
    // Create an LRU cache with a TTL of 10 minutes. The second argument (nil)
    // can be an eviction callback, called whenever a key is evicted; you can
    // use it to log evictions or re-populate the data.
    // Note: expirable.NewLRU already returns a pointer, so we return it as-is.
    return expirable.NewLRU[string, *types.User](128, nil, time.Minute*10)
}

func StartServer() {
    cache := GetLRUCache()
    router := gin.Default()

    router.GET("/get-user", GetUser(cache))
    router.Run(":8080")
}

func GetUser(cache *expirable.LRU[string, *types.User]) gin.HandlerFunc {
    return func(c *gin.Context) {
        userId, ok := c.GetQuery("user_id")
        if !ok {
            c.JSON(http.StatusBadRequest, gin.H{"error": "user_id is required"})
            return
        }

        user, ok := cache.Get(userId)
        if ok {
            // Found the user in the local LRU cache; respond without
            // touching Redis or the database
            c.JSON(http.StatusOK, user)
            return
        }

        // Find in Redis
        ...
        // If not found in Redis, get it from the database
        ...

        // Store the user (obtained from either Redis or the database) in the
        // local cache so subsequent requests can skip both lookups
        cache.Add(userId, user)
    }
}

func main() {
    StartServer()
}
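
As noted in GetLRUCache, the second argument to expirable.NewLRU can be an eviction callback instead of nil. As a sketch, here’s a variant that simply logs evictions; the function name and log message are illustrative, and the standard log package is assumed:

func GetLRUCacheWithLogging() *expirable.LRU[string, *types.User] {
    // The callback fires whenever a key is evicted, whether by the
    // size limit or by TTL expiry
    onEvict := func(key string, value *types.User) {
        log.Printf("evicted user %s from the local cache", key)
    }
    return expirable.NewLRU[string, *types.User](128, onEvict, time.Minute*10)
}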

Conclusion

We’ve journeyed through the ins and outs of implementing local LRU caching in a Go-based web application. From size-based to TTL-based caching mechanisms, the golang-lru library provides a flexible and efficient way to optimize your app’s performance. Coupled with a distributed cache like Redis, a local LRU cache can act as an additional layer of optimization, reducing load and improving responsiveness.

It’s crucial to note that the choice of cache parameters, like maximum size and TTL settings, depends largely on the specific needs and traffic patterns of your application. There’s no one-size-fits-all solution, so tailor your caching strategy to your own use case.

Thank you for reading, and feel free to share your thoughts or questions in the comments section below. Happy caching!