Problem

In a previous post, Probability in migrating to new services to prevent a traffic flood, I applied probability to solve a migration problem. This post explains more about how I handle large and heavy traffic in a short period of time. By the way, I call it heavy because resizing an image on the cloud needs CPU and memory (download original image => load into memory => transform format => resize). So if I spin up 2, 4, or even 10 servers to handle this workload, these are the downsides:

- DevOps costs: you have to manage a large fleet of VMs and put them behind a load balancer
- Timing costs: manual scaling certainly takes time
- Resource costs: you have to pay for idle time
- Complexity costs: your system becomes a mess

Solution

All of these challenges exist mainly because you have to run many servers, so the solution is
Problem

I have an imgproxy server instance that resizes cloud images on demand using signed URLs, then saves the response cache to disk. The day came when I wanted to move to another domain pointing to another server instance. I remembered that the last time I switched to a new domain all at once, my websites lost many images, because the server had to resize all the images in a short period of time and crashed. So this time, I needed another strategy :)

Probability

First try: random (looks okay)

$is_new = rand(1, 4) == 1;
$domain = $is_new ? 'new_domain' : 'old_domain';

This strategy looks okay, but it has a loophole when you run high-traffic websites.
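One way to make this kind of rollout easier to control is to parameterize the fraction of traffic sent to the new domain, so it can be ramped up step by step (for example 10% -> 25% -> 100%) while watching the new server's load. The sketch below is my own illustration, not the post's actual fix; the function name `pick_domain` and the 10% starting value are assumptions:

```php
<?php
// Sketch: route a configurable percentage of requests to the new domain.
// rand(1, 100) is uniform, so the new domain is chosen roughly
// $percent_new percent of the time. Raising the percentage gradually
// lets the new server warm its resize cache without a traffic flood.
function pick_domain(int $percent_new, string $new = 'new_domain', string $old = 'old_domain'): string
{
    return rand(1, 100) <= $percent_new ? $new : $old;
}

// Example: start by sending roughly 10% of image requests to the new server
$domain = pick_domain(10);
```

The hard-coded `rand(1, 4) == 1` above is the special case of a fixed 25% split; extracting the percentage just makes the ramp-up adjustable without code changes to the routing logic.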
I learned on 2019-04-03 about apt, devops, packagemanager
I learned on 2018-02-23 about devops, docker
I learned on 2017-11-22 about ci, devops, docker
I learned on 2017-10-13 about ci, devops, netcat
I learned on 2017-08-04 about devops, free, heroku
I learned on 2017-05-19 about testing, bash, devops, automated
Kick start Docker environment on Ubuntu in production
Kick start a LEMP stack on Ubuntu 16.04 in production
It’s creepy when supervisor kills all processes of programs and starts them again.