By default, Docker runs your container as the root user (uid=0). Although Docker isolates the container filesystem to protect the host, running processes as root is unnecessary and increases the attack surface. It can also leave files written to mounted volumes owned by root, which can mess up permissions on your Docker host's filesystem.
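A common fix is to create and switch to an unprivileged user inside the image. A minimal sketch (the user/group name `app` and the base image tag are arbitrary examples):

```dockerfile
FROM alpine:3.18
# Create an unprivileged user and group (names are examples)
RUN addgroup -S app && adduser -S -G app app
# Everything after this line, including the container process, runs as "app"
USER app
CMD ["id"]
```

Files the process writes to bind-mounted volumes will then be owned by that user's uid instead of root.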
Alpine Linux has become the most popular base image for Docker images because it is lightweight and ships with the handy apk package manager. But sometimes you build an image that downloads a binary and then cannot execute it. You see something like this:
/entrypoint.sh: line ***: [your binary]: not found
The problem is that the binary was built against shared libraries, so it cannot run without its shared-library dependencies. To find out which libraries are missing, use:
$ ldd [your binary path]
Here is a sample result:
/usr/local/bin # ldd hugo
        /lib64/ld-linux-x86-64.so.2 (0x7fa852f2a000)
        libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7fa852f2a000)
        Error loading shared library libstdc++.so.6: No such file or directory (needed by hugo)
        libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7fa852f2a000)
        libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fa852f2a000)
        Error loading shared library libgcc_s.so.1: No such file or directory (needed by hugo)
        libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fa852f2a000)
So we need to install libstdc++ and libc6-compat before running the binary.
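In a Dockerfile this is a single apk instruction (a sketch assuming an Alpine base image; the tag is an example):

```dockerfile
FROM alpine:3.18
# Install the C++ runtime and the glibc compatibility layer the binary links against
RUN apk add --no-cache libstdc++ libc6-compat
```

`--no-cache` avoids keeping the apk index in the image, keeping it small.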
As a software engineer, you know that automated CI testing is one of the keys to improving the software release life cycle.
But sometimes reality is not as good as you would like: CI test builds are slow (3-10 minutes per build), and that slows down the whole release cycle. You dig into your build logs to find the cause, and there it is: it is mostly the DATABASE service (MySQL, Postgres, MongoDB, …).
Let me summarize the stages your database goes through in a testing build:
First, it initializes its data, loads its config, and starts listening for connections (takes around 10-45 seconds).
Second, you import your testing database into the server (including schemas and seed data), which takes around 20-60 seconds.
Then, for each test case, it needs to clear all data and re-import the fixture data (takes around 30-120 seconds in total).
So how can we make these servers run as fast as key-value stores like Redis and Memcached do? The main difference is MEMORY! What if we put all the data in memory?
We all know that RAM, with roughly 150 times lower latency, is technically much faster than SSDs and HDDs. And as a matter of fact, Linux supports a lot of filesystems, notably tmpfs, which lets you mount part of your RAM as a regular filesystem.
However, nothing is perfect, and tmpfs is no exception: it is a poor choice for persistent data. But persistence is exactly what a testing database does not need; it only needs speed, so tmpfs fits perfectly.
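To see tmpfs in action outside of Docker, you can mount one by hand (a sketch; the mount point `/mnt/ramdisk` and the 512 MB size are examples, and the commands need root):

```shell
# Create a mount point and back it with a 512 MB RAM-based filesystem
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
# Anything written under /mnt/ramdisk now lives in memory
# and disappears on unmount or reboot
```

That throwaway behavior is exactly what we want for test-only database files.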
That's my idea; now I will try it in my CI environment (I use DroneCI with Docker). Since version 0.8, DroneCI supports running service containers with a tmpfs mount.
So I just add these lines to my drone config:
services:
  testdatabase:
    image: mysql:5.7
    # Add the two lines below to boost your database container
    tmpfs:
      - /var/lib/mysql
    environment:
      - MYSQL_DATABASE=testdb
      - MYSQL_ROOT_PASSWORD=passwd
The MySQL service initializes in 3 seconds instead of 25 seconds.
Importing the testing database with the mysql client takes under 1 second instead of 17 seconds.
My test cases run 20-30% faster (I have only a few test cases that use the database).
The original MySQL Docker image uses a script to generate SSL certificates for the service. Sometimes we don't really need them (e.g. when connecting over a Docker network link, or when we just need a fast database service for automated tests).
We can reduce the init time by removing that script from the original Docker image:
# Remove mysql_ssl_rsa_setup to skip SSL certificate setup
RUN rm -f /usr/bin/mysql_ssl_rsa_setup
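For context, a complete minimal Dockerfile for such an image could look like this (the tag and image name are examples):

```dockerfile
# Start from the official image and strip the SSL setup helper
FROM mysql:5.7
# Remove mysql_ssl_rsa_setup so the entrypoint skips certificate generation
RUN rm -f /usr/bin/mysql_ssl_rsa_setup
```

Build it with something like `docker build -t mysql-nossl .` and use it in place of `mysql:5.7` in your CI config.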
Docker is a great tool for managing Linux containers. It brings DevOps to the next level, from development to production. And of course, before deploying anything to production, software should be tested carefully and automatically.
That's where Drone, a lightweight CI server built on top of Go and Docker, helps us solve testing problems in a simple and fast way.
This guide assumes you already have Docker and the Docker Compose tool installed. And of course, root permission ;)
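As a starting point, a minimal docker-compose.yml for a Drone 0.8 server plus agent might look like the sketch below. The hostname, OAuth client values, and shared secret are placeholders you must replace with your own (this example assumes GitHub as the source-control provider):

```yaml
version: '2'

services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - "80:8000"
    volumes:
      - ./drone-data:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_HOST=http://drone.example.com     # placeholder hostname
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=replace-me          # your GitHub OAuth client id
      - DRONE_GITHUB_SECRET=replace-me          # your GitHub OAuth secret
      - DRONE_SECRET=replace-me                 # shared secret for agents

  drone-agent:
    image: drone/agent:0.8
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=replace-me                 # must match the server's secret
```

Run `docker-compose up -d` and Drone becomes reachable on port 80; the agent talks to the server over the compose network and runs builds via the host's Docker socket.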