VPS Dedicated CPU showdown: Linode vs DigitalOcean

This February, Linode released new Dedicated CPU VPS plans, meaning the vCPUs are dedicated to your instance on the KVM host instead of being shared with neighbors. So I ran this benchmark to see how they compare with DigitalOcean (DO) on an equivalent plan.

Price

Linode’s price is 25% lower than DO’s.

Linode price

[Screenshot: Linode dedicated CPU pricing]

DigitalOcean price

[Screenshot: DigitalOcean dedicated CPU pricing]

Boot time

The Linode VPS boots a bit more slowly than the DO one, and it runs more processes after boot (115 vs 96).
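
If you want to check these numbers on your own VPS, here is a quick sketch of one way to do it (assuming a systemd-based image such as Ubuntu 18.04; the exact commands I used may have differed):

# How long the last boot took, as reported by systemd
systemd-analyze

# Rough count of processes running right after boot
ps -e --no-headers | wc -l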

Benchmark

I used the nench script (https://github.com/n-st/nench) to benchmark the smallest plan on each provider (2 vCPUs and 4 GB RAM), and I set vm.swappiness = 0 so the kernel avoids swapping during the tests.
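
The setup on each VPS looked roughly like this. This is a minimal sketch, assuming a stock Ubuntu image with curl available and that the script still lives at nench.sh on the repository's master branch; the exact invocation may differ:

# Discourage the kernel from swapping during the tests
sudo sysctl -w vm.swappiness=0

# Download and run the nench benchmark script
curl -sL https://raw.githubusercontent.com/n-st/nench/master/nench.sh | bash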

Linode

Processor:    Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
CPU cores: 2
Frequency: 2499.984 MHz
RAM: 3.9G
Swap: 511M
Kernel: Linux 4.15.0-43-generic x86_64

Disks:
sda 24.5G HDD
sdb 512M HDD

CPU: SHA256-hashing 500 MB
3.339 seconds
CPU: bzip2-compressing 500 MB
5.649 seconds
CPU: AES-encrypting 500 MB
1.535 seconds

ioping: seek rate
min/avg/max/mdev = 60.1 us / 87.6 us / 8.98 ms / 104.1 us
ioping: sequential read speed
generated 25.2 k requests in 5.00 s, 6.15 GiB, 5.04 k iops, 1.23 GiB/s

dd: sequential write speed
1st run: 953.67 MiB/s
2nd run: 1239.78 MiB/s
3rd run: 1144.41 MiB/s
average: 1112.62 MiB/s

IPv4 speedtests
your IPv4: 139.162.5.xxxx

Cachefly CDN: 54.08 MiB/s
Leaseweb (NL): 6.64 MiB/s
Softlayer DAL (US): 3.18 MiB/s
Online.net (FR): 1.74 MiB/s
OVH BHS (CA): 0.29 MiB/s

IPv6 speedtests
your IPv6: 2400:8901::xxxx

Leaseweb (NL): 5.60 MiB/s
Softlayer DAL (US): 0.00 MiB/s
Online.net (FR): 10.43 MiB/s
OVH BHS (CA): 0.85 MiB/s
-------------------------------------------------

DigitalOcean

Processor:    Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
CPU cores: 2
Frequency: 2693.670 MHz
RAM: 3.9G
Swap: -
Kernel: Linux 4.15.0-45-generic x86_64

Disks:
vda 25G HDD

CPU: SHA256-hashing 500 MB
2.427 seconds
CPU: bzip2-compressing 500 MB
4.252 seconds
CPU: AES-encrypting 500 MB
0.964 seconds

ioping: seek rate
min/avg/max/mdev = 114.8 us / 185.5 us / 4.33 ms / 48.3 us
ioping: sequential read speed
generated 4.13 k requests in 5.00 s, 1.01 GiB, 826 iops, 206.6 MiB/s

dd: sequential write speed
1st run: 410.08 MiB/s
2nd run: 386.24 MiB/s
3rd run: 417.71 MiB/s
average: 404.68 MiB/s

IPv4 speedtests
your IPv4: 178.128.214.xxxx

Cachefly CDN: 13.14 MiB/s
Leaseweb (NL): 13.56 MiB/s
Softlayer DAL (US): 7.82 MiB/s
Online.net (FR): 9.86 MiB/s
OVH BHS (CA): 0.44 MiB/s

IPv6 speedtests
your IPv6: 2400:6180:0:xxxx

Leaseweb (NL): 9.57 MiB/s
Softlayer DAL (US): 0.00 MiB/s
Online.net (FR): 4.95 MiB/s
OVH BHS (CA): 0.30 MiB/s
-------------------------------------------------

Conclusion

DigitalOcean runs on newer hardware, so its CPU is faster than Linode’s, by roughly 43% on average (compare the CPU hashing, compression, and encryption results above). On the other hand, Linode has the better price, so the choice is yours! :)
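
For the curious, one way to arrive at roughly that figure is to average the speed-ups from the three CPU tests:

SHA256: 3.339 s / 2.427 s ≈ 1.38  (+38%)
bzip2:  5.649 s / 4.252 s ≈ 1.33  (+33%)
AES:    1.535 s / 0.964 s ≈ 1.59  (+59%)

(38% + 33% + 59%) / 3 ≈ 43%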

I hope Linode will upgrade all of its hardware soon. Update: some datacenters already have newer CPUs (Intel Xeon Gold 6148 and AMD EPYC 7501), so stay tuned!

#TIL: ab failed responses

When benchmarking an HTTP application server with the ab tool, you shouldn’t only look at requests per second; also check the percentage of successful responses.

Note that your responses must all have the same content length: ab treats any response whose content length differs from the Document Length shown in its report as a failed response.

Example

Webserver using Flask

from flask import Flask
from random import randint
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello" * randint(1, 3)

if __name__ == "__main__":
    app.run()

Benchmark using ab

$ ab -n 1000 -c 5 http://127.0.0.1:5000/

This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: Werkzeug/0.12.1
Server Hostname: 127.0.0.1
Server Port: 5000

Document Path: /
Document Length: 10 bytes

Concurrency Level: 5
Time taken for tests: 0.537 seconds
Complete requests: 1000
Failed requests: 683
(Connect: 0, Receive: 0, Length: 683, Exceptions: 0)
Total transferred: 164620 bytes
HTML transferred: 9965 bytes
Requests per second: 1862.55 [#/sec] (mean)
Time per request: 2.684 [ms] (mean)
Time per request: 0.537 [ms] (mean, across all concurrent requests)
Transfer rate: 299.43 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1    3   0.7      2      11
Waiting:        1    2   0.6      2      11
Total:          1    3   0.7      3      11
WARNING: The median and mean for the processing time are not within a normal deviation
These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
50% 3
66% 3
75% 3
80% 3
90% 3
95% 3
98% 5
99% 6
100% 11 (longest request)

In this example, the first response happened to have a content length of 10 bytes (“Hello” × 2), so every response whose body was 5 or 15 bytes long was counted as a failed (Length) response.
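
If the varying length is intentional (like the random body above) and you still want clean numbers, newer ab builds, the ones shipped with Apache httpd 2.4.7 and later, have a -l option that accepts variable document lengths instead of counting them as Length failures. A minimal sketch, assuming such a build is installed:

# -l: don't treat responses with a different length than the first one as failed
ab -l -n 1000 -c 5 http://127.0.0.1:5000/

Otherwise, the simplest fix is to make the endpoint return a fixed-length body while you benchmark it.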