04 Sep 2013
The Joys of Redis
“In our (admittedly limited) experience, Redis is so fast that the slowest part of a cache lookup is the time spent reading and writing bytes to the network” – stackoverflow.com
Can Databases Be Exciting To Work With?
It’s very rare that a project can cause an engineer to get excited about the prospect of working with a database they’ve never worked with previously, especially when it’s a relational one. That mainly boils down to the fact that the majority of them are clunky monstrosities that are painfully slow and cause us to grimace at the thought of having to integrate them into our applications, not to mention having to piece together gnarly and over-engineered SQL statements.
But what if you were presented with a NoSQL database that was not only quick[1] but also uncomplicated, robust, fun to work with and simply did its job (and did it very well indeed)? Then maybe you might just be forgiven for getting excited about working with databases. Seriously. No joking. If you don’t believe me read on.
Enter my new friend in the application development world – Redis.
Choosing Redis
A recently completed project for a major telecommunications customer raised a significant business requirement to process, store and serve 17.2 million records on a daily basis and update a further 2.1 million records every 30 minutes. In addition to this, that data needed to be seamlessly synchronized across 10 web server nodes while at the same time asynchronously continuing to serve any clients (web tier) without any major performance hits. Finally, this data then needed to be delivered with minimal latency i.e. sub-millisecond times, and be able to handle about 5K requests per second! That’s no mean feat by any stretch of the imagination.
We turned to Redis, an open source key-value data store, to help us realise these requirements for several reasons:
- It is quick
- It is robust
- It has asynchronous replication
- It is scalable
- It has a relatively mature suite of APIs[2] to build applications
- It is open source and BSD licensed
It simply ticked all of the boxes for us.
The Über-Basics
There are plenty of good articles[3] on the web which explore Redis in great detail and get down and dirty with its internal mechanisms. This blog entry is not intended to do the same. Instead, it is designed to give a brief introduction to Redis, how it works at a basic level (but includes the more complex topic of replication) and how we used it to fulfill the business requirements of this particular project. It also describes some pitfalls we encountered which can hopefully be sidestepped by anyone else thinking of harnessing Redis in any upcoming (or even current) projects.
It’s Not Just A Cache
At its core, Redis is really not much more than a glorified hashmap. And, as engineers (I’m assuming the majority of the reading audience are software engineers of some sort or another), we all know how hashmaps work and how they are designed for speed and efficiency. Redis stores all of its data set in memory but it’s not just a cache. I’ll repeat that – Redis stores all of its data set in memory but it’s not just a cache.
The reason for the somewhat pedantic repetition is twofold.
- Redis stores all of its data set in memory – there is no kind of mixed mode available. We investigated that topic in vain. Storing some parts of the data set in memory and other parts on disk is just not possible with the current version. It’s all or nothing when it comes to using Redis, and in fact that’s where Redis’ strength is forged – you know exactly what you are getting with it. No surprises. No WTF moments. Nothing is made complicated.
- But it’s not just a cache – Redis can model data structures, e.g. lists, queues, sorted sets etc. (a quick example follows this list). It is also possible to modify a value once it has been ‘SET’. Finally, it can be persisted to and restored from disk, making it a suitable candidate for any project which requires that type of functionality (i.e. persistence).
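To make the “not just a cache” point concrete, here is a quick (purely illustrative) redis-cli session using a list and a sorted set – the keys and values are made up:

redis 127.0.0.1:6379> lpush recent:logins "alice" "bob"
(integer) 2
redis 127.0.0.1:6379> lrange recent:logins 0 -1
1) "bob"
2) "alice"
redis 127.0.0.1:6379> zadd leaderboard 100 "alice" 250 "bob"
(integer) 2
redis 127.0.0.1:6379> zrange leaderboard 0 -1 withscores
1) "alice"
2) "100"
3) "bob"
4) "250"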
Keys & Values
In essence, Redis only works with ‘Keys’ and ‘Values’, just like any other map structure. You set the key with its value, you ask for the value back using the key and you’re done. Easy. The following very simple example should be enough to demonstrate Redis working at its most primitive level.
redis 127.0.0.1:6379> set mykey somevalue
OK
redis 127.0.0.1:6379> get mykey
"somevalue"
redis 127.0.0.1:6379> set mykey "a new value"
OK
redis 127.0.0.1:6379> get mykey
"a new value"
redis 127.0.0.1:6379> del mykey
(integer) 1
redis 127.0.0.1:6379> get mykey
(nil)
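The same operations look just as simple from Java. Here is a minimal sketch using Jedis (the client mentioned in the footnotes); the host and key names are only examples:

import redis.clients.jedis.Jedis;

public class BasicExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        jedis.set("mykey", "somevalue");        // SET mykey somevalue
        System.out.println(jedis.get("mykey")); // "somevalue"
        jedis.del("mykey");                     // DEL mykey
        System.out.println(jedis.get("mykey")); // null – the key no longer exists
        jedis.disconnect();
    }
}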
Redis & Enterprise Solutions
When we did some digging around online, we were pleased (and excited) to discover that Redis appears to have snagged itself some big players in the industry that use it for their enterprise solutions. Some of the biggest include[4]:
- Github
- Stackoverflow
- Tumblr
- logstash
Redis clearly fits the mold of a perfectly good data store for any type of enterprise architecture. As already mentioned, Redis is scalable and it offers a robust replication/redundancy functionality straight out of the box. But it is the speed at which it operates and performs that separates it from all the others. It is second to none.
From the very outset of the project, the team were acutely aware that the solution needed to handle a massive amount of throughput generated from ~50 high profile/traffic client sites whilst concurrently performing updates on the data set without impacting them.
As an added level of complexity (there’s always an “added level of complexity”), it also had to handle a “flush and push” of those 17.2 million records every day, which meant deleting all of the data (the “flush”) and rebuilding it from scratch (the “push”). Flushing the database and rebuilding it with that many updates had to be accomplished as fast as possible for obvious reasons. This relatively[5] seamless process was realised in an average time of 29 minutes. That’s approximately 590K updates a minute, or almost 10K a second.
However, as I am about to show, this figure is not indicative of Redis’ true speed. It is faster than that. Actually, it is a lot faster! By running some tests with the built-in benchmark utility that ships with Redis, we were able to estimate that this mass update could have been performed in about 5 minutes. Take a moment to let that figure sink in. Yes, you read it correctly. Over 17 million data set updates in about 5 minutes[6]. Now we’re talking.
We ran the benchmark tool on one of our servers with the following parameters set:
- 99 parallel client connections
- keys randomly generated from a range of 0–50K
- 1 million requests
- a payload of 100 bytes (the average payload of the real data)
redis-benchmark -h [removed] -c 99 -r 50000 -n 1000000 -d 100

====== SET ======
1000000 requests completed in 18.07 seconds
99 parallel clients
100 bytes payload
keep alive: 1

0.23% <= 1 milliseconds
86.90% <= 2 milliseconds
98.71% <= 3 milliseconds
99.85% <= 4 milliseconds
100.00% <= 5 milliseconds
55340.34 requests per second

====== GET ======
1000000 requests completed in 11.90 seconds
99 parallel clients
100 bytes payload
keep alive: 1

97.96% <= 1 milliseconds
99.69% <= 2 milliseconds
99.79% <= 3 milliseconds
99.96% <= 4 milliseconds
99.99% <= 5 milliseconds
100.00% <= 5 milliseconds
84012.44 requests per second
So why was Redis not performing at these speeds for our application i.e. ~55K requests per second? Well, the answer was really quite simple.
Unfortunately, the project had some constraints as a result of other business requirements. The data to be pushed out to our Redis nodes needed to be retrieved from an Oracle database first, and this caused a significant bottleneck in the update process. As the application was Java based, JDBC was used to fetch this data, which crippled our update times. However, we were unable to remove this constraint and had to settle for an average time of 29 minutes per “flush and push” update.
Aside from this daily run, the application also needed to update 2.1 million records (insert/update and delete) every 30 minutes. This update takes an average of 100 seconds to push out across the whole stack (10 nodes) in production. [Even though this is still fast(ish), I know that this figure doesn’t add up when compared with the benchmark results. This is because we deliberately don’t use batch inserts or pipelining for this part of the update. A CSV file is processed line by line, and each line is determined to be either a SET or a DEL and executed in that exact sequence due to a requirement of the project.]
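As a rough illustration of that strictly sequential processing, here is a minimal Jedis sketch; the CSV format, file name and host name are hypothetical, not the project’s actual code:

import redis.clients.jedis.Jedis;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ThirtyMinuteUpdate {
    public static void main(String[] args) throws IOException {
        // Hypothetical CSV format: "SET,key,value" or "DEL,key"
        Jedis jedis = new Jedis("node01", 6379);
        BufferedReader reader = new BufferedReader(new FileReader("updates.csv"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split(",", 3);
                if ("DEL".equals(parts[0])) {
                    jedis.del(parts[1]);           // remove the record
                } else {
                    jedis.set(parts[1], parts[2]); // insert or update the record
                }
            }
        } finally {
            reader.close();
            jedis.disconnect();
        }
    }
}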
As a small side note (it’s unfortunately out of scope for this blog), Redis is now emerging on the cloud and being offered as an enterprise solution by a growing number of vendors, e.g. Redis Cloud. They have clearly realised its potential and Redis seems destined for NoSQL greatness.
Replication
One of the project requirements (and by far the most challenging) was to seamlessly replicate the data set across all 10 nodes. Several approaches were discussed and analysed to tackle this particular requirement.
The most obvious solution was to use a cluster configuration (master<->slave) and harness Redis’ built in asynchronous replication functionality. Although we researched replication in some detail, we were initially reluctant to go down this route for 2 main reasons:
- Previous experiences with other database cluster configurations had knocked our confidence when it came to working in that type of environment. They had always proved to be a headache to set up and maintain, and dealing with issues like corrupt masters was always a struggle.
- We wanted to preserve horizontal scalability, but introducing a cluster configuration would have removed that flexibility.
As we were soon to find out, by not putting our trust in Redis and harnessing what it had to offer, we were going to be shooting ourselves in the foot.
Replication Take 1 – Fake It Using The API (The Wrong Way)
Our first attempt to replicate the data across all 10 nodes was (putting a positive spin on it) an interesting learning experience. We came up with an idea that we could somehow implement our own on-the-fly cluster configuration using the API, hooking into the ‘SLAVEOF’ command. The general workflow would be:
- Perform the updates on just one of the nodes (it didn’t matter which one but for brevity’s sake let’s say it was node 01)
- After updating node 01 each of the other 9 nodes would be flicked over (one at a time) to be slaves of node 01, thus making node 01 act like a surrogate master for the duration of the update.
- When the replication was finished on the slave it would then be flicked back over to being a master, and the same process would be repeated on the next node in the stack, and so on (a rough sketch of this follows the list).
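Sketched in code, and assuming the Jedis client, the idea looked something like the following; the host names and the sync-polling helper are hypothetical:

import redis.clients.jedis.Jedis;

public class SurrogateMasterReplication {

    private static final String MASTER_HOST = "node01";
    private static final int REDIS_PORT = 6379;
    private static final String[] SLAVE_HOSTS = {"node02", "node03", /* ... */ "node10"};

    public static void replicate() {
        for (String host : SLAVE_HOSTS) {
            Jedis node = new Jedis(host, REDIS_PORT);
            node.slaveof(MASTER_HOST, REDIS_PORT); // triggers a FULL resync from node01
            waitForSyncToComplete(node);           // hypothetical helper, e.g. polling INFO replication
            node.slaveofNoOne();                   // flick the node back to being a master
            node.disconnect();
        }
    }

    // Hypothetical helper: block until the slave reports the sync has finished.
    private static void waitForSyncToComplete(Jedis node) { /* omitted */ }
}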
By following this approach we would remove the need for a real cluster configuration and preserve horizontal scalability. However, here are some of the pitfalls[7] we immediately fell headfirst into:
- When the link is established between a master and a slave, a full synchronization is performed between them. That means that the full data set is sent over the network from the master to the slave. And we were invoking this every 30 minutes (we’ve just stumbled into the pit and we’re in free fall).
- Updating the 2.1 million records was taking anywhere between 25-30 minutes when it should have taken only a fraction of that time. And this was without the Oracle bottleneck of the daily update (we were reading straight from a file and pushing directly to Redis). Our attempt at on-the-fly replication was starting to buckle and cracks started to appear (we’re now hurtling full speed toward the bottom of the pit).
- When the link is established between a master and a slave, the master forks and performs a background (asynchronous) save of its data set to disk. Our data set was hitting about 1.8GB and it was more or less continuously being forced out to disk as a result of the point directly preceding this one (we’ve now hit the bottom of the pit at a gazillion miles per hour and disintegrated into a small puff of dust).
Needless to say, after initial testing and seeing the results, this first approach was scrapped and we swore an oath that it would never be spoken of again (with the exception of writing about it in this blog post).
Replication Take 2 – Taking A Leap Of Faith (The Right Way)
Take 1 was considered a miserable failure on our part and by no means the fault of Redis. Redis did exactly what it said it would do but we had brazenly abused its functionality and power. So, we went back to the drawing board and after countless discussions it was finally decided to take a leap of faith and put our trust in Redis’ replication functionality.
We dove right in and started testing a proper cluster configuration which was a breeze to set up:
- We added just one line of configuration to each of the Redis config files for all the nodes that we wanted to be slaves (i.e. "slaveof node01 6379" – see the snippet after this list).
- With bated breath, we did our first test: we pushed 17.2 million records out to node 01 and monitored how Redis handled replication at that volume. It didn’t miss a beat. We tried several times to break it and trip it up. We threw in unexpected scenarios that we thought must cause it to fall over. But each and every time Redis just shrugged off our attempts and laughed in our faces. It was proving to be bulletproof.
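For reference, a slave’s configuration and a quick health check looked something like this – the host name is illustrative and the output below is trimmed to the interesting fields:

# in redis.conf on each slave node
slaveof node01 6379

# verify replication status on the slave
redis-cli info replication
# role:slave
# master_host:node01
# master_link_status:up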
It was like poetry in motion. So elegant. So fast. So EASY.
When we took a closer look at what was actually happening during the replication process (using the ‘MONITOR’ command) it was easy to see how Redis behaves when it’s replicating. For each write command it receives, the master simply forwards that command on to all of the connected slaves. For example, if 100K SET commands are received by the master, we can see the exact same 100K commands being forwarded on to all of the slaves and processed by them.
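To give a flavour of it, a MONITOR session on one of the slaves during an update looks something like this – the timestamps, keys and values below are entirely made up for illustration:

redis 127.0.0.1:6379> monitor
OK
1378264801.442546 [0 node01:6379] "SET" "record:1001" "payload"
1378264801.442911 [0 node01:6379] "SET" "record:1002" "payload"
1378264801.443310 [0 node01:6379] "DEL" "record:1003"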
We ended up settling on a cluster configuration at its most basic level, that is to say, just 1 master and 9 slaves. However, there is nothing to stop your design entering into a more complex setup like a graph structure, i.e. slaves of slaves. In addition, we also looked at tuning the replication settings, but the default setup worked perfectly fine for us.
Pipelining
Another worthwhile topic to quickly touch on is that of ‘pipelining’, an important feature of Redis to understand and to be aware of. Although this technique has been around for some time and is used elsewhere in the industry, it still remains (surprisingly) unknown to many engineers. We decided to use pipelining in our project and it proved to be a wise move as it drastically improved our performance.
The Redis documentation describes pipelining in depth and you can read plenty of more detailed articles about the topic online. But at a very high level:
- Redis is a TCP server using a basic request/response protocol. In other words, (without pipelining) a client sends a request to the server, reads from the socket and waits for the server’s response, e.g. “OK” to acknowledge that the command was processed. The server processes the request and sends the response back. It should be obvious that each of these round trips incurs a performance hit.
- With pipelining enabled, the client can send a batch of commands to the server without waiting for each individual reply, which allows the server to continue serving incoming commands. All of the responses are then read back in one single step.
With this approach we were able to considerably speed up our Redis processing times. The only drawback to pipelining is that if you are interested in the responses from the server (i.e. ensuring all commands were executed successfully) you will need to write some boilerplate code to iterate over the response list and marry up the requests that were sent to the responses that were received. This was somewhat finicky to implement.
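A minimal sketch of how this can look with Jedis is shown below; the key names and the record map are illustrative, and in practice you would sync in batches rather than queue millions of commands at once:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PipelinedPush {
    public static void push(Jedis jedis, Map<String, String> records) {
        Pipeline pipeline = jedis.pipelined();
        List<String> keys = new ArrayList<String>();
        List<Response<String>> replies = new ArrayList<Response<String>>();

        for (Map.Entry<String, String> record : records.entrySet()) {
            keys.add(record.getKey());
            replies.add(pipeline.set(record.getKey(), record.getValue()));
        }

        pipeline.sync(); // send everything, then read all the responses in one step

        // Marry each request back up to its response and flag any failures.
        for (int i = 0; i < replies.size(); i++) {
            if (!"OK".equals(replies.get(i).get())) {
                System.err.println("SET failed for key " + keys.get(i));
            }
        }
    }
}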
Although we did take the time to write the code to handle these bulk responses, we noticed however that Redis never once failed in processing a request we sent to it. Neither in testing nor in production did we ever witness an unsuccessful command e.g. SET, DEL, FLUSHDB etc. Redis just seems to work every time.
I’d like to mention at this point that using the Redis protocol for mass insertion was also considered a viable option for speeding up insertion times. However, our research on this topic uncovered that most people who used this technique were inserting keys in the billions. We were only in millions territory. We did have mass insertion lined up as the next approach to test had pipelining not been satisfactory in terms of time. But as it turned out, we were more than happy with the results from pipelining.
Conclusion
Redis is like that super reliable friend that everyone wants to have. You know the one. The one that never lets you down. The uncomplicated one. The one that never lies to you and always gives you sound advice. The one that always turns up to the party bang on time, is perfectly dressed for the occasion but in the blink of an eye is able to bust out a dazzling array of moves on the dance floor at breakneck[8] speeds but never breaks a sweat doing it. And the one that does this day after day without ever complaining.
Redis is solid as a rock. It makes even novice users look good. It handles big data with ease and never seems to fall over, no matter what you throw at it. It does exactly what it says on the tin and then some. Replication is a breeze to set up and manage, whereas it is an absolute headache in most other environments.
Excited yet? Try it out and you will be.
Footnotes:
[1] When you google Redis, 9 times out of 10 you will see it referred to as “blazingly quick” or “lightning fast” and so on. It is. But I’ve tried to refrain from using such overused adjectives in this blog.
[2] Jedis was the Java API which we used.
[3] See http://pauladamsmith.com/articles/redis-under-the-hood.html for an excellent read.
[4] For a comprehensive insight into how these companies use Redis see http://blog.togo.io/redisphere/redis-roundup-what-companies-use-redis/
[5] While the database is being rebuilt there is a brief period where the data set is incomplete and thus no data may be returned for some keys.
[6] (18.07 seconds per million SETs × 17.2 million records) / 60 ≈ 5.2 minutes.
[7] I use “pitfalls” but in fact all of this behavior is clearly documented on the Redis site. However, we chose to ignore it, believing our proposed solution would work regardless.
[8] Finally succumbed to using an adjective to describe the speed of Redis.
Redsmin (redis gui)
Posted at 18:44h, 04 September
Great introduction to Redis, this article will be featured in our next RedisWeekly, thanks! https://redsmin.com/redisweekly
Brendan Malloy
Posted at 12:16h, 10 September
Nice post. I will say one more joy is using redis for development. It is too easy to just put test data in a key and use that to develop an app. I find it much easier than setting up tables in a rdbms.
I use redis exclusively for my Trello reporting app reportsfortrello.com. I turn my objects into json and save the string to redis. It is so nice and the speed…the speed!
Anonymous Coward
Posted at 18:08h, 12 September
From what I can see, Redis supports no form of masterless replication, or any type of infrastructure with multiple writable nodes. For some apps, this is a significant shortcoming.
Josiah Carlson
Posted at 02:51h, 13 December
Not every database solves every problem, and indeed, Redis does not support fully masterless replication.
That said, client-side sharding is available (for multiple write nodes), Redis cluster (with multiple masters) is in-progress (considered alpha/beta quality), and the mentioned Redis Cloud hosting has their own auto-sharding solution that is pretty solid and available right now (which does multi-master sharding and replication).
Pingback:Tech Stories To Read This Week – October 15 Edition | iRomin
Posted at 08:32h, 15 October
[…] The Joy of Redis: A great set of points on what makes Redis interesting to work with. […]
Will
Posted at 08:48h, 08 February
How much memory did the server have (or need) that held the 17.2 million records in memory?
Graham Polley
Posted at 09:41h, 13 February
Hi Will,
We were using about 4g on each node for 17mill records.
Cheers,
Graham
Pingback:License to Queue | Shine Technologies
Posted at 11:03h, 19 December
[…] a solution presented itself quite quickly: Redis. We were already using it on other parts of the project and, while you might know it best as an in-memory key store, it can also be used as a queuing […]