Why Redis Is Everywhere?
From caching and queues to leaderboards and streams, Redis keeps things moving.
Welcome to Hello Engineer, your weekly guide to becoming a better software engineer! No fluff - pure engineering insights.
You can also check out: CAP Theorem Explained!
Don’t forget to check out the latest job openings at the end of the article!
What is Redis?
Ever wondered how apps like Amazon or Twitter retrieve data so fast? ⚡
Every time you scroll, search, or shop, there’s a behind-the-scenes hero making sure things run blazingly fast.
That hero? Redis.
Redis is a “data structure store”, but not your typical one. It’s written in C, runs in-memory, and is single-threaded 😱 and that’s exactly what makes it blazingly fast.
Redis Cache is like a super-speedy assistant that stores data in memory (RAM), so your apps don’t have to wait around querying a slower database.
It’s open-source, easy to work with, and runs quietly in the background but the impact it makes is massive.
The story behind Redis: Back in 2009, when platforms like Twitter were scaling like crazy, traditional databases just couldn’t keep up. Redis was built to solve this, based on the bold idea that a cache can be more than just a temporary store: it can be fast and reliable.
NOTE:
Redis is fast, but speed comes with trade-offs. ⚠️
One thing to keep in mind: Redis isn’t built for durability out of the box.
Since it’s an in-memory store, if the server crashes, your data could vanish. Redis does offer ways to reduce this risk, like the Append-Only File (AOF), but it’s not the same level of durability you’d get from traditional databases that write every commit to disk.
This isn’t a bug; it’s a design choice by the Redis team to prioritize blazing speed over guaranteed persistence.
But if you do need stronger durability, options like AWS MemoryDB exist. They’re based on Redis but trade a bit of speed for more safety.
📌 So, ask yourself: Do I need ultra-low latency, or do I need data to survive a reboot?
Your answer will guide the tool you pick.
Data Structure Support
Redis isn't just a cache; it's a powerful playground of data structures. At its core, it is a key-value store, but what you store in those keys can be surprisingly diverse.
Here are some of the fundamental data structures Redis supports:
Strings – The simplest type, great for basic caching.
Hashes – Think of them like lightweight JSON objects.
Lists – For pushing/popping elements, like a queue or stack.
Sets – Unique unordered elements.
Sorted Sets – Ordered with scores — great for things like leaderboards or priority queues.
Bloom Filters – For fast probabilistic checks (e.g., has this been seen before?).
Geospatial Indexes – Perfect for location-based queries.
Time Series – Ideal for logging or tracking values over time.
And Redis goes beyond just storing data. It also supports communication patterns like:
Pub/Sub — For real-time messaging.
Streams — For ordered message processing, somewhat like Kafka-lite.
Whether you're building a real-time leaderboard, a geolocation app, or a chat system, Redis has something built-in to help you out, all while staying fast and easy to reason about.
Commands
One of the best things about Redis? You don’t need to learn a complex query language to use it. Redis uses a simple, custom string-based protocol that feels super natural — especially if you’ve used data structures in any programming language.
You can literally connect to a Redis instance and run commands like these straight from the CLI:
SET foo 1 # Store key 'foo' with value 1
GET foo # Returns 1
INCR foo # Increments 'foo', returns 2
XADD mystream * name Sara surname OConnor # Adds an entry to a stream
And the best part? Commands are grouped by data structure, and they make sense.
For example, Redis Sets come with:
SADD – Add an element
SCARD – Get total number of elements
SMEMBERS – List all elements
SISMEMBER – Check if an element exists
It’s like working with a Set in your favorite language, just faster and persistent across networked systems.
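To make that last point concrete, here’s a rough pure-Python analogue (no Redis server involved, purely illustrative): the set commands above map almost one-to-one onto a built-in set.

```python
# Pure-Python analogue of the Redis set commands above (illustrative only).
tags = set()

tags.add("redis")        # SADD tags redis
tags.add("cache")        # SADD tags cache
tags.add("redis")        # duplicate adds are ignored, just like SADD

print(len(tags))         # SCARD tags           -> 2
print(sorted(tags))      # SMEMBERS tags        -> ['cache', 'redis']
print("redis" in tags)   # SISMEMBER tags redis -> True
```

The difference, of course, is that the Redis version lives on a server every instance of your app can reach.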
Performance
Redis is fast, like really fast. It can handle over 100k writes per second, and read latency is usually in the microsecond range.
Because of this, you can get away with patterns that would usually be a bad idea in traditional databases. For example, running 100 separate SQL queries to build a list is something you'd avoid with a relational DB. But with Redis, that kind of overhead is much lower. It’s still better to keep things efficient, but if you do need to make a lot of small calls, Redis can handle it.
This is mainly because Redis keeps everything in memory. That makes it great for performance-heavy use cases. It’s not the right fit for everything, especially if you need strong durability, but in the right scenarios, it works really well.
Infrastructure Configuration
Redis can be deployed in different ways depending on your needs: as a single node, with a high-availability replica, or as a cluster.
When using Redis in cluster mode, each client caches a set of hash slots. These slots map keys to specific nodes, so the client knows exactly which node to contact for a given key. This direct mapping helps Redis maintain its performance edge.
Redis nodes also talk to each other using a gossip protocol, so if a client ends up asking the wrong node for a key, it can be redirected. Ideally, though, you want to hit the correct node on the first try.
Compared to most distributed databases, Redis clustering is quite minimal. It doesn’t abstract away scalability problems; instead, it gives you just enough tools to build your own solutions.
For example, Redis assumes that all the data needed for a request lives on one node. So if your data is spread out or your keys are poorly designed, things can fall apart.
That’s why key structure and partitioning strategy play a huge role in scaling Redis effectively. It’s less about what Redis gives you out of the box, and more about how you build on top of it.
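As a sketch of how that key-to-node mapping works: real Redis Cluster computes CRC16(key) mod 16384 to pick a slot, and honors “hash tags” ({...}) so related keys can be forced onto the same node. The version below is a simplified stand-in; it uses zlib.crc32 instead of Redis’ CRC16, so the slot numbers won’t match a real cluster, but the routing idea is the same.

```python
import zlib

NUM_SLOTS = 16384  # Redis Cluster's fixed number of hash slots

def key_slot(key: str) -> int:
    """Map a key to a cluster slot (simplified sketch).

    Real Redis uses CRC16 (XMODEM) mod 16384; zlib.crc32 stands in here,
    so results won't match an actual cluster. Hash tags ({...}) are
    honored: only the tagged substring is hashed, which is how you force
    related keys onto the same node."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]   # hash only what's inside {...}
    return zlib.crc32(key.encode()) % NUM_SLOTS

# Keys sharing a hash tag land in the same slot, so a multi-key
# operation on them can be served by a single node:
assert key_slot("user:{42}:profile") == key_slot("user:{42}:settings")
```

This is exactly why key design matters: if two keys you need together hash to different slots, no single node can serve the request.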
Capabilities
Redis as a Cache
One of the most common ways Redis is used is as a cache. In this case, the keys and values in Redis directly represent the keys and values we want to store in the cache.
Redis makes it easy to scale: it can distribute this key-value map across multiple nodes in a cluster, so if you need more capacity, you just add more nodes.
When caching with Redis, it's common to set a TTL (time to live) on each key. Once the TTL expires, Redis ensures that the key is no longer accessible. It also uses TTLs to decide which keys to evict, which helps keep memory usage under control — especially when you're trying to cache more data than your memory can hold.
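Here’s a minimal cache-aside sketch showing the TTL idea. It’s pure Python with a dict standing in for Redis (in a real setup, set/get would be Redis calls like SET key value EX 60), and it mimics Redis’ lazy expiration: an expired key is simply treated as a miss.

```python
import time

class TTLCache:
    """Minimal stand-in for a Redis cache with per-key TTLs (illustrative only)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # expired keys read as misses
            del self._store[key]
            return None
        return value

cache = TTLCache()

def get_profile(user_id, load_from_db):
    """Cache-aside: try the cache first, fall back to the database on a miss."""
    key = f"profile:{user_id}"
    profile = cache.get(key)
    if profile is None:
        profile = load_from_db(user_id)          # slow path: hit the real DB
        cache.set(key, profile, ttl_seconds=60)  # cache it for the next reader
    return profile
```

The second read for the same user within the TTL never touches the database, which is the whole point of the pattern.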
That said, using Redis as a cache doesn’t solve everything. One common issue is the “hot key” problem, where one key gets a lot more traffic than others. This isn’t unique to Redis though; tools like Memcached and even large-scale databases like DynamoDB face similar challenges.
Official Doc: Redis as a Cache
Redis as a Distributed Lock
Sometimes, you’ll need to ensure consistency when multiple users are interacting with your system. For example, when designing a system like Ticketmaster, you need to make sure the same ticket isn’t sold twice. Or in something like Uber, you have to make sure two drivers aren’t assigned the same ride.
Most databases, including Redis, provide some level of consistency. If your main database already handles it well, it’s usually better to rely on that instead of introducing a distributed lock. Distributed locks can add complexity, and interviewers often dig into edge cases when you use them.
That said, Redis does support basic locking mechanisms. A simple distributed lock might use the INCR command with a TTL. You try to acquire the lock by calling INCR. If the result is 1, you own the lock and can proceed. If it’s more than 1, someone else already has it, so you wait and try again later. Once you’re done, you DEL the key to release the lock.
For more advanced use cases, Redis also supports the Redlock algorithm, a more robust way to handle distributed locking across multiple nodes.
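The INCR-based flow described above can be sketched like this. A tiny in-memory FakeRedis class stands in for the server (implementing only the two commands the lock needs); real code would issue the same commands through a Redis client.

```python
import time

class FakeRedis:
    """In-memory stand-in implementing just INCR (with TTL) and DEL."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def incr(self, key, ttl_seconds=None):
        value, expires_at = self._data.get(key, (0, None))
        if expires_at is not None and time.monotonic() >= expires_at:
            value, expires_at = 0, None  # lock expired, start fresh
        value += 1
        if value == 1 and ttl_seconds is not None:
            expires_at = time.monotonic() + ttl_seconds
        self._data[key] = (value, expires_at)
        return value

    def delete(self, key):
        self._data.pop(key, None)

def try_acquire(r, lock_key, ttl_seconds=10):
    """INCR returning 1 means we own the lock; anything higher means contention."""
    return r.incr(lock_key, ttl_seconds) == 1

def release(r, lock_key):
    r.delete(lock_key)

r = FakeRedis()
assert try_acquire(r, "lock:ticket:42")      # first caller wins
assert not try_acquire(r, "lock:ticket:42")  # second caller must wait
release(r, "lock:ticket:42")
assert try_acquire(r, "lock:ticket:42")      # free again after release
```

Note the sharp edges even in this toy: a crashed owner holds the lock until the TTL expires, and a careless DEL can release a lock you no longer own. Those edge cases are exactly why Redlock and fencing-token schemes exist.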
Read more: Redis Official Doc for Distributed Lock
Redis for Ranking
Redis' sorted sets are great for keeping data in order, which makes them a perfect fit for leaderboard-style use cases. Since inserts and reads are fast, even at scale, they can handle scenarios where a traditional SQL database might struggle.
Let’s say we’re building a feature where users search for cricket-related topics, and we want to show the most liked posts for that keyword.
For example, when someone searches for "kohli", we can maintain a sorted set of posts containing that keyword, ranked by the number of likes.
ZADD kohli_posts 320 "PostId123" # A post about Kohli's century
ZADD kohli_posts 75 "PostId456" # A post about Kohli's training session
ZREMRANGEBYRANK kohli_posts 0 -6 # Keep only the top 5 posts
This gives us a quick way to fetch the most engaging content related to a specific keyword, and we can periodically clean up the lower-ranked posts to keep things efficient.
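As a rough model of what those commands do (pure Python, no server; a dict of member → score stands in for the sorted set):

```python
# Pure-Python analogue of the sorted-set leaderboard above (illustrative only).
posts = {}

def zadd(sorted_set, score, member):
    sorted_set[member] = score  # ZADD semantics: insert or update the score

def top_n(sorted_set, n):
    """ZREVRANGE-style read: members with the highest scores first."""
    return sorted(sorted_set, key=sorted_set.get, reverse=True)[:n]

def keep_top_n(sorted_set, n):
    """ZREMRANGEBYRANK-style cleanup: drop everything below the top n."""
    for member in top_n(sorted_set, len(sorted_set))[n:]:
        del sorted_set[member]

zadd(posts, 320, "PostId123")
zadd(posts, 75, "PostId456")
zadd(posts, 510, "PostId789")

print(top_n(posts, 2))   # ['PostId789', 'PostId123']
keep_top_n(posts, 2)     # only the two most-liked posts survive
```

The real sorted set does the same thing, but keeps members ordered incrementally (O(log N) per insert) instead of re-sorting on every read.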
Redis for Sliding Window Rate Limiting
Instead of traditional fixed window rate limiting, Redis can implement sliding window rate limiting using sorted sets. Each request adds a timestamp to the sorted set, and the system checks how many requests exist in the last N seconds. For example, if an API allows 100 requests per 60 seconds, Redis can dynamically reject requests once that threshold is crossed. This approach ensures smooth limiting, avoids sudden spikes, and is much fairer to end users.
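The sorted-set recipe above maps to three commands per request: ZADD the timestamp, ZREMRANGEBYSCORE to drop entries older than the window, and ZCARD to count what’s left. Here’s that logic in pure Python (a list of timestamps stands in for the sorted set; single-process and illustrative only):

```python
import time

class SlidingWindowLimiter:
    """Sliding-window rate limiter mirroring the Redis sorted-set pattern."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = []  # stand-in for the sorted set of request times

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        cutoff = now - self.window
        # ZREMRANGEBYSCORE: discard requests that slid out of the window
        self.timestamps = [t for t in self.timestamps if t > cutoff]
        if len(self.timestamps) >= self.limit:  # ZCARD check against the limit
            return False
        self.timestamps.append(now)             # ZADD this request's timestamp
        return True

limiter = SlidingWindowLimiter(limit=3, window_seconds=60)
assert all(limiter.allow(now=t) for t in (0, 1, 2))  # first 3 requests pass
assert not limiter.allow(now=3)                      # 4th inside the window: rejected
assert limiter.allow(now=61)                         # window slid forward: allowed
```

In Redis, the three commands would typically be wrapped in a MULTI/EXEC transaction or a Lua script so the check-and-add is atomic across clients.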
Redis for Proximity Search
Redis also supports geospatial indexes out of the box using commands like GEOADD and GEORADIUS. These are useful when you need to store locations and perform quick proximity searches.
Here’s how it works:
GEOADD nearby_restaurants 77.5946 12.9716 "PizzaPlace" # Add a restaurant at a specific location
GEORADIUS nearby_restaurants 77.5946 12.9716 5 km # Find restaurants within 5 km
In this example, we’re storing the location of restaurants and then searching for places near a user’s location. This is useful for features like showing nearby food outlets in a food delivery app.
The GEORADIUS command runs in O(N + log(M)) time, where N is the number of elements within the radius and M is the total number of elements in the index.
Redis for Event Sourcing
Redis streams are append-only logs, somewhat like Kafka topics. They let you add data in a durable way and provide a built-in mechanism for distributing that data to multiple consumers.
Redis supports this with two key features:
XADD to push new items to the stream
Consumer groups (XREADGROUP, XCLAIM) to read and manage those items
Let’s take a work queue as an example. Imagine you're queuing up background jobs like sending emails. You push each job into the stream using XADD. Now, you have a group of workers (in a consumer group) processing these jobs. Redis tracks what each worker has acknowledged and what’s still pending. If a worker crashes midway, Redis doesn’t lose track. Another worker can use XCLAIM to pick up the unacknowledged message and reprocess it.
This setup gives you reliability and fault tolerance without needing a separate queuing system.
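A toy model of that XADD / XREADGROUP / XACK / XCLAIM flow (single-process and in-memory, so only a sketch; real code would drive an actual stream through a Redis client):

```python
import itertools

class MiniStream:
    """Toy model of a Redis stream with one consumer group."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.entries = {}  # id -> payload (the stream itself)
        self.pending = {}  # id -> consumer (delivered but not yet acked)

    def xadd(self, payload):
        entry_id = next(self._ids)
        self.entries[entry_id] = payload
        return entry_id

    def xreadgroup(self, consumer):
        """Deliver the next never-delivered entry and record it as pending."""
        for entry_id in self.entries:
            if entry_id not in self.pending:
                self.pending[entry_id] = consumer
                return entry_id, self.entries[entry_id]
        return None

    def xack(self, entry_id):
        """Consumer finished the job: remove it from the pending list."""
        self.pending.pop(entry_id, None)
        self.entries.pop(entry_id, None)

    def xclaim(self, entry_id, new_consumer):
        """Reassign a stuck pending entry (e.g. its worker crashed)."""
        self.pending[entry_id] = new_consumer
        return self.entries[entry_id]

stream = MiniStream()
job = stream.xadd({"task": "send_email", "to": "user@example.com"})
entry_id, payload = stream.xreadgroup("worker-1")  # worker-1 picks up the job
# ...worker-1 crashes before acking; worker-2 claims and finishes it:
stream.xclaim(entry_id, "worker-2")
stream.xack(entry_id)
```

Notice that once an entry is pending, xreadgroup won’t redeliver it; another worker has to xclaim it explicitly. That matches real stream semantics and is what makes crash recovery work.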
Redis for Write Coalescing (Debouncing)
In high-frequency write scenarios, it’s inefficient to hit your database on every update. Redis can act as a buffer to hold temporary state and update the primary database periodically. For example, in a collaborative document editor, where users type rapidly, every keystroke doesn’t need to be stored in the DB. Instead, Redis can store the intermediate state and sync it every few seconds. This reduces backend load and improves responsiveness, without losing any data.
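The coalescing idea can be sketched like this. A dict stands in for the Redis buffer, the flush interval and callback names are made up for illustration, and time is passed in explicitly so the example is deterministic (real code would use a clock and a periodic flush job):

```python
class WriteCoalescer:
    """Debounce sketch: keep only the latest state per document, flush
    to the database at most once per interval. Illustrative only."""

    def __init__(self, flush_to_db, interval_seconds=5.0):
        self.flush_to_db = flush_to_db    # callback that persists one doc
        self.interval = interval_seconds
        self.buffer = {}                  # doc_id -> latest content (the "Redis" part)
        self.last_flush = 0.0

    def write(self, doc_id, content, now):
        self.buffer[doc_id] = content     # later keystrokes overwrite earlier ones
        if now - self.last_flush >= self.interval:
            self.flush(now)

    def flush(self, now):
        for doc_id, content in self.buffer.items():
            self.flush_to_db(doc_id, content)
        self.buffer.clear()
        self.last_flush = now

db_writes = []
coalescer = WriteCoalescer(lambda d, c: db_writes.append((d, c)), interval_seconds=5)

# Four rapid keystrokes on one document collapse into a single DB write:
for t, text in [(1, "H"), (2, "He"), (3, "Hel"), (6, "Hello")]:
    coalescer.write("doc1", text, now=t)

print(db_writes)  # [('doc1', 'Hello')]
```

Four writes in, one write out: the database only ever sees the latest state per flush interval.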
Hot Key Issue
One common problem you might run into when using Redis at scale is the "hot key" issue. This happens when traffic is heavily focused on just one or a few keys, causing uneven load across your Redis cluster.
Let’s say you're building a social media platform and using Redis to cache user profile data. You’ve distributed user keys evenly across a 50-node Redis cluster. Everything’s running smoothly.
Now imagine a celebrity joins the platform and millions of users start viewing their profile at the same time. Suddenly, the Redis node storing that celebrity’s profile data is handling way more traffic than any other node.
Unless your cluster is overprovisioned with lots of spare capacity, that node will likely become a bottleneck and start failing, even though the rest of the cluster is underutilized.
Here are a few ways to handle this:
Client-side caching: Cache the celebrity’s profile data on the frontend or at the edge so you don’t hit Redis every time.
Key duplication: Store the same profile under multiple keys and randomly pick one for reads, distributing load across multiple nodes.
Read replicas: Add replicas for the hotspot node and load-balance requests across them dynamically.
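The key duplication idea from the list above is simple enough to sketch directly. Writes fan out to several suffixed copies of the key (each copy hashing to a likely-different slot, and so a different node), and reads pick one copy at random. The copy count and key names here are illustrative, and a plain dict stands in for the cluster’s keyspace:

```python
import random

N_COPIES = 5  # assumption: how many duplicates to spread the hot key across

def write_hot_key(store, key, value):
    """Write the value under several suffixed copies of the key."""
    for i in range(N_COPIES):
        store[f"{key}:{i}"] = value

def read_hot_key(store, key):
    """Pick one copy at random; every copy holds the same value,
    so reads spread evenly across the nodes holding them."""
    return store[f"{key}:{random.randrange(N_COPIES)}"]

store = {}  # stand-in for a Redis cluster's keyspace
write_hot_key(store, "profile:celebrity", {"name": "Star", "followers": 90_000_000})
assert read_hot_key(store, "profile:celebrity")["name"] == "Star"
```

The trade-off: writes cost N_COPIES times as much, and updating all copies is no longer atomic, which is why this works best for read-heavy, rarely-updated data like a profile page.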
For system design interviews, what matters most is that you identify the hot key problem early and proactively suggest mitigation strategies. It shows you’re thinking beyond the happy path.
Wrapping Up!
By now, you should have a clear understanding of Redis: what it is, how it works, where it fits in, and why it’s used.
Its speed and flexibility make it a top choice for many large-scale systems. From caching and real-time analytics to queues and pub/sub, Redis quietly powers a lot of what makes your apps feel fast and responsive.
Loved this deep dive? Hit a like ❤️
For more simple explanations, useful insights on coding, system design, and tech trends, Subscribe To My Newsletter! 🚀
If you have any questions or suggestions, leave a comment.
See you next week with more exciting content!
Here's Something Extra for You: Exciting Job Openings 🚀
Member of Technical Staff - 3 Distributed systems, Nutanix : Link
Software Engineer (IC2), Oracle : Link
Software Engineer 3, Google: Link
Software Engineer, GenAI, Google: Link
Software Engineer 2, Amazon : Link
Software Engineer, Coinbase: Link
Software Engineer, Gojek : Link