Optimizing communications between Docker containers

Here at Torusware we all love Docker. It has really made our lives easier, allowing us to try new software in no time and to distribute our own software as preinstalled images, and it plays very nicely with Jenkins in our continuous integration system.

However, the rapid increase in the number of containers per server is putting much more pressure on intra-host networking. We have extensively measured latencies and transfer rates between dockerized applications and the same applications running natively, and Docker does a good job in this respect, but it still falls short for those who are really picky about communications performance.

Luckily for that last group of people, we have a solution. Since dockerized apps still use sockets and the TCP/IP stack for intra-host connections, we will show you how to bypass that stack and leverage IPC mechanisms to boost transfers between containers. And you can do it transparently, without changing the code of your applications.

Does that sound good? In the following examples we will show you how we do this with well-known applications.

Use Case #1: Redis


First, we will test Redis performance using the built-in benchmark provided by the official Redis image. The default steps for running this benchmark are pretty straightforward:
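For reference, here is a minimal sketch of those steps, assuming the official redis image from Docker Hub (container names are illustrative):

    # Start the Redis server from the official image
    docker run -d --name redis-server redis

    # Run redis-benchmark from a second container sharing the server's network stack
    # (e.g. 128-byte payloads and a pipeline of 2 requests, as in the results below)
    docker run -it --rm --net=container:redis-server redis \
        redis-benchmark -h 127.0.0.1 -d 128 -P 2 -t set,get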

Note that it is essential that both containers share the same network stack in order to get the best performance between them.

Now, the next step in our quest to improve communication performance involves using Speedus Plug&Run Lite**, the free version in our product portfolio. Conveniently for these tests, we already have a Redis image bundled with Speedus on Docker Hub, so let's use it:
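The equivalent run with the Speedus-bundled image looks roughly like this (the image name below is illustrative; check Docker Hub for the actual one):

    # Same commands as before, but launching the Redis binaries through 'speedus'
    docker run -d --name redis-server torusware/speedus-redis speedus redis-server

    docker run -it --rm --net=container:redis-server torusware/speedus-redis \
        speedus redis-benchmark -h 127.0.0.1 -d 128 -P 2 -t set,get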

As you can see, the only difference is that we prepend ‘speedus’ to the Redis executables, which preloads our native libraries so that TCP/IP socket calls are intercepted and bypassed.

However, there is still room for improvement. We can employ Speedus Extreme Performance** (Speedus EP) to leverage the full potential of IPC mechanisms. Currently we offer Speedus EP under a subscription model, but you can request an evaluation version from us, which can be easily deployed in a Redis image.

Now, assuming you are using a Redis image with Speedus EP** inside, you must add an extra parameter so that the client reuses the server's private IPC namespace:
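A sketch of what this looks like, assuming the extra parameter is Docker's --ipc option and using an illustrative image name:

    # Server container as before, but based on a Speedus EP image
    docker run -d --name redis-server speedus-ep-redis speedus redis-server

    # The client shares both the network stack and the private IPC namespace of the server
    docker run -it --rm --net=container:redis-server --ipc=container:redis-server \
        speedus-ep-redis speedus redis-benchmark -h 127.0.0.1 -d 128 -P 2 -t set,get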

We have run the previous configurations on one of our test machines (Xeon E5-2643 v2, GNU/Linux kernel 3.19, Docker 1.5), and here is what the results look like:

redis-benchmark results (req/s, PIPELINE=2):

|                | SET 128B  | % speedup | GET 128B  | % speedup | SET 4096B | % speedup | GET 4096B | % speedup |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| Baseline       | 259201.66 |           | 277315.59 |           | 198412.70 |           | 209030.11 |           |
| Speedus Lite** | 426985.47 | 64.73     | 469924.81 | 69.45     | 305997.56 | 54.22     | 332225.91 | 58.94     |
| Speedus EP**   | 717360.06 | 176.76    | 815660.69 | 194.13    | 414593.72 | 108.96    | 460405.16 | 120.26    |

Use Case #2: ZeroMQ


Unfortunately, we did not find any preconfigured ZeroMQ image on Docker Hub that suited our needs, so we built one from scratch using a centos6 base image and ZeroMQ 4.1.0 (we will publish it on the Hub, we promise!). In this case, we are going to run a quick latency test between two containers using the performance benchmarks bundled with ZeroMQ.

First, here is how we run the ZeroMQ benchmark with the default configuration:
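A sketch of that run, assuming our image wraps ZeroMQ's standard local_lat/remote_lat tools in the local_lat.sh and remote_lat.sh scripts (image name and arguments are illustrative):

    # Echo side: binds on port 5555, 128-byte messages, 100000 round trips
    docker run -d --name zmq-local zmq-centos6 ./local_lat.sh tcp://*:5555 128 100000

    # Measuring side: shares the echo side's network stack and prints the average latency
    docker run -it --rm --net=container:zmq-local zmq-centos6 \
        ./remote_lat.sh tcp://127.0.0.1:5555 128 100000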

Second, we explicitly plug in Speedus Lite** in both local_lat.sh and remote_lat.sh:
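In essence, this amounts to prefixing the ZeroMQ benchmark binaries inside each script with the speedus launcher, roughly as follows (a sketch; the actual scripts may differ):

    # local_lat.sh: run the echo side under Speedus
    speedus local_lat "$@"

    # remote_lat.sh: run the measuring side under Speedus
    speedus remote_lat "$@"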

Last but not least, we specify the '--ipc' option and plug in Speedus Extreme Performance**:
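A sketch of the Speedus EP run, again with illustrative names; the measuring side reuses the echo side's private IPC namespace through Docker's --ipc option:

    docker run -d --name zmq-local speedus-ep-zmq ./local_lat.sh tcp://*:5555 128 100000

    docker run -it --rm --net=container:zmq-local --ipc=container:zmq-local \
        speedus-ep-zmq ./remote_lat.sh tcp://127.0.0.1:5555 128 100000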

As you can see, these steps are similar to the ones we followed with Redis, so it should be straightforward to apply these tips to any other application. In this case, we obtained the following results:

ZeroMQ latency results:

|                | 128B latency (us) | % speedup | 1024B latency (us) | % speedup |
|----------------|-------------------|-----------|--------------------|-----------|
| Baseline       | 18.803            |           | 18.976             |           |
| Speedus Lite** | 9.083             | 207.01    | 9.418              | 201.49    |
| Speedus EP**   | 4.234             | 444.10    | 6.138              | 309.16    |

Although this test is far from exhaustive, if you are interested in more numbers, in this post you can see the results we achieved running ZeroMQ in a non-virtualized environment, which are quite similar to those we just obtained with Docker.

Final words

Out-of-the-box communication between Docker containers is fine, but if you want best-in-class performance then Speedus is what you are looking for. You can grab Speedus for free from our website and from Docker Hub, and do not hesitate to let us know about your experience at info@torusware.com.

NOTE

** We no longer offer separate Lite and Extreme Performance versions. The former EP version is now available for free as Speedus, while the paid tier adds full support and more customization, among other changes.