Increasing performance of a Redis Docker container in Amazon Web Services

Today we present our experience accelerating communications between containers, in this particular case Docker containers on Amazon Web Services (AWS). Our technology, implemented in our product Speedus, freely available on our website, increases performance without scaling up (adding resources to your servers) or scaling out (adding more servers). Thus, by optimizing communications, you end up with much more performance at the same infrastructure cost.

Accelerating Dockerized Redis

Next we present our experience accelerating a Dockerized Redis on AWS. We selected a c3.8xlarge instance, with two Intel Xeon E5-2680 v2 (Ivy Bridge) sockets (running at 2.7-3.5 GHz), 32 (hyperthreaded) cores and 60 GB of ECC memory (for error detection and correction).

We launched two Docker containers, a Redis server and the Redis built-in benchmark client, and tested two scenarios: using the default Docker networking stack (baseline results), and using Speedus to bypass TCP/IP and system calls, thus accelerating data transfers by up to 20 times, in both latency and throughput. And the best news is that Speedus is fully non-intrusive: you do not need to change a single line of code, neither in Docker nor in Redis, and you do not need any changes to your infrastructure either.

Here you have the results:

PAYLOAD 128 bytes (requests per second)

Pipeline   Baseline     Speedus      Gain      Baseline     Speedus      Gain
1          71,090.05    268,576.56   277.8%    71,039.55    242,130.77   240.8%
2          132,158.59   418,410.03   216.6%    141,376.06   463,678.50   228.0%
4          240,384.61   518,134.72   115.5%    244,498.78   601,202.38   145.9%

It is clear that Speedus significantly increases the number of requests per second, by up to 278%. The gain is especially important for a pipeline depth of 1, when you are not trading off latency for throughput. Thus, with Speedus you can achieve high throughput (approx. 250 kops/second) at the lowest latency, or more than double the performance for a pipeline depth of 4.

By removing the TCP/IP socket overhead, your system will be more reliable, other operations that involve a syscall will run faster, and you will need less infrastructure to boost your Redis-based applications. Thus, you can concentrate on your business while we take care of making the most of your infrastructure.

How to test it

Docker allows us to encapsulate this kind of test, making it easier to replicate.

First, we will test Redis performance using the built-in benchmark provided by the official Redis image. The default steps for running such benchmark are pretty straightforward:
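A minimal sketch of such a run might look like the following; the container name and benchmark flags are illustrative, and the two containers share the server's network stack so the client can reach Redis on localhost:

```shell
# Start the Redis server in a container (name is illustrative)
docker run -d --name redis-server redis

# Run the built-in benchmark from a second container that shares
# the server's network stack; -d sets the payload size in bytes,
# -P sets the pipeline depth
docker run --rm --net container:redis-server redis \
    redis-benchmark -d 128 -P 1
```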

Note that it is essential to share the same network stack between the two containers to maximize performance.

Now, the next step in our search to improve communication performance involves using Speedus. Conveniently for these tests, we already have a Redis image bundled with Speedus in the Docker Hub, so let’s use it:
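A sketch of the Speedus-enabled run follows; the image name `speedus/redis` is an assumption for illustration (check the Docker Hub for the actual image), and the only change from the baseline commands is the `speedus` prefix:

```shell
# Start the Redis server through Speedus (image name is illustrative)
docker run -d --name redis-server speedus/redis speedus redis-server

# Run the benchmark, also wrapped with the speedus launcher
docker run --rm --net container:redis-server speedus/redis \
    speedus redis-benchmark -d 128 -P 1
```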

As you can see, the only difference is that we prepend ‘speedus’ to the Redis executables, which preloads our native libraries into the Redis processes in order to intercept and bypass TCP/IP socket calls.

However, to get the most out of Speedus, you must add an extra parameter telling the client to reuse the server’s private IPC namespace:
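This can be done with Docker’s `--ipc` flag, which joins the client container to the server’s IPC namespace so both processes can communicate over shared memory (image and container names are illustrative, as above):

```shell
# Run the benchmark client sharing both the network stack and the
# IPC namespace of the server container
docker run --rm \
    --net container:redis-server \
    --ipc container:redis-server \
    speedus/redis speedus redis-benchmark -d 128 -P 1
```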

Final Words

Speedus is a non-intrusive technology for improving communications between your containers. In this particular case, with Redis, the benefit is almost 3 times more performance. You can get Speedus from our website and from the Docker Hub. Let us know how your experience goes at