1. Keep learning forward

    To be updated ...
  2. #TIL : Enable reverse proxy in CentOS

    CentOS ships with SELinux enabled by default, which blocks any HTTP proxy connection from httpd. So you have to enable this permission.

    Enable it temporarily

    $ /usr/sbin/setsebool httpd_can_network_connect 1

    Enable it permanently (-P persists across reboots)

    $ /usr/sbin/setsebool -P httpd_can_network_connect 1
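
    You can verify the boolean's current value with getsebool (part of the standard SELinux tools); after enabling, it should read on :

    $ getsebool httpd_can_network_connect
    httpd_can_network_connect --> on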
  3. #TIL : Create SSH tunnel manually

    An SSH tunnel is a quick way to move traffic securely across today's unsafe internet. It can be used for MySQL, FTP or HTTP connections, ...

    Syntax :

    $ ssh -L [local_port]:[remote_endpoint]:[remote_port] [ssh_user]@[ssh_ip]

    Example :

    Let's say you have an EC2 instance (123.45.67.89) and a remote DB instance (98.76.54.32) listening on port 3306

    $ ssh -L 3307:98.76.54.32:3306 root@123.45.67.89

    Testing ssh tunnel

    $ telnet 127.0.0.1 3307
    $ # or
    $ mysql -h 127.0.0.1 -P 3307 -u root -p
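
    If you don't need an interactive shell on the jump host, standard OpenSSH flags let you run the tunnel in the background : -N skips running a remote command and -f forks after authentication

    $ ssh -f -N -L 3307:98.76.54.32:3306 root@123.45.67.89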
  4. #TIL : Scope and Closure

    Run this code

    for (var i=1; i<=5; i++) {
    	setTimeout( function timer(){
    		console.log( i );
    	}, i*1000 );
    }

    What you expect

    1
    2
    3
    4
    5
    

    But the result is

    6
    6
    6
    6
    6
    

    All five timer callbacks share the same function-scoped var i, which is already 6 when the first timer fires. The solution is to capture each value in its own scope :

    for (var i = 1; i <= 5; i++) {
        setTimeout((function timer(j) {
            return function() {
                console.log(j);
            }
        })(i), i * 1000);
    }

    or

    for (var i=1; i<=5; i++) {
    	(function(j){
    		setTimeout( function timer(){
    			console.log( j );
    		}, j*1000 );
    	})(i);
    }

    (In modern JavaScript, declaring the loop variable with let instead of var creates a new binding per iteration, which fixes this without any IIFE.)
  5. #TIL : Eval function and with block

    JS code will run slower if the engine detects an eval() call or a with block, because the compiler stops optimizing that code (it can no longer resolve the scope statically)

  6. #TIL : Remap Capslock to Control key

    Edit the file /etc/default/keyboard and set

    XKBOPTIONS="ctrl:nocaps"
    

    Then log out and log in again for the change to take effect
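
    To apply it immediately without logging out (assuming an X11 session), you can also run setxkbmap :

    $ setxkbmap -option ctrl:nocaps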

  7. #TIL : Ping Google to crawl updated content

    When you post new content to your website, the fastest way to get it noticed is to ping search engines. After that, they will try to crawl and index your page.

    One way to ping search engines is using an XML-RPC ping.

    This is an example XML-RPC request (an HTTP POST request with an XML body)

    Request

    > POST /ping/RPC2 HTTP/1.1
    > Host: blogsearch.google.com
    > User-Agent: curl/7.47.0
    > Accept: */*
    > content-type: application/xml
    > Content-Length: 239
    > 
    <?xml version="1.0" encoding="UTF-8"?>
    <methodCall>
       <methodName>weblogUpdates.extendedPing</methodName>
       <params>
          <param>
             <value>Page Title</value>
          </param>
          <param>
             <value>http://example.com/helloworld.html</value>
          </param>
       </params>
    </methodCall>
    

    Response

    < HTTP/1.1 200 OK
    < Content-Type: text/xml; charset=ISO-8859-1
    < X-Content-Type-Options: nosniff
    < Date: Tue, 08 Aug 2017 05:04:01 GMT
    < Server: psfe
    < Cache-Control: private
    < X-XSS-Protection: 1; mode=block
    < X-Frame-Options: SAMEORIGIN
    < Accept-Ranges: none
    < Vary: Accept-Encoding
    < Transfer-Encoding: chunked
    < 
    <?xml version="1.0"?>
    <methodResponse><params>
      <param><value><struct>
        <member>
          <name>flerror</name><value><boolean>0</boolean></value>
        </member>
        <member>
          <name>message</name><value>Thanks for the ping.</value>
        </member>
      </struct></value></param>
    </params></methodResponse>
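
    You can send the same ping from the command line with curl, assuming the XML payload above is saved as ping.xml :

    $ curl -H 'Content-Type: application/xml' \
           --data-binary @ping.xml \
           http://blogsearch.google.com/ping/RPC2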

    Popular XML-RPC ping servers

    http://blogsearch.google.com/ping/RPC2
    http://api.moreover.com/ping
    http://bblog.com/ping.php
    http://bitacoras.net/ping
    http://blog.goo.ne.jp/XMLRPC
    http://blogmatcher.com/u.php
    http://coreblog.org/ping/
    http://mod-pubsub.org/kn_apps/blogchatt
    http://www.lasermemory.com/lsrpc/
    http://ping.amagle.com/
    http://ping.cocolog-nifty.com/xmlrpc
    http://ping.exblog.jp/xmlrpc
    http://ping.feedburner.com
    http://ping.myblog.jp
    http://ping.rootblog.com/rpc.php
    http://ping.syndic8.com/xmlrpc.php
    http://ping.weblogalot.com/rpc.php
    http://pingoat.com/goat/RPC2
    http://rcs.datashed.net/RPC2/
    http://rpc.blogrolling.com/pinger/
    http://rpc.pingomatic.com
    http://rpc.technorati.com/rpc/ping
    http://rpc.weblogs.com/RPC2
    http://www.blogpeople.net/servlet/weblogUpdates
    http://www.blogroots.com/tb_populi.blog?id=1
    http://www.blogshares.com/rpc.php
    http://www.blogsnow.com/ping
    http://www.blogstreet.com/xrbin/xmlrpc.cgi
    http://xping.pubsub.com/ping/
    
  8. #TIL : Running old Java applets in the browser

    Most modern browsers have stopped supporting Java plugins, so you can't run Java applets in the browser.

    Temporary workarounds :

    • run in IE or Safari
    • run in an old Firefox (version 23)

    And what if an old Java applet can't run on Java 8 because of a weak signature algorithm ? Try this

    • Open the java.security file :
      • On macOS, located in /Library/Java/JavaVirtualMachines/jdk[jdk-version].jdk/Contents/Home/jre/lib/security
      • On Windows, located in C:\Program Files (x86)\Java\jre\lib\security
    • Comment out this line : jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
    • Rerun the applet
  9. #TIL : realpath function

    If you pass a non-existent path to the realpath function, it returns FALSE (which becomes an empty string in string context). So please don't do something like this :

    function storage_path($folder) {
    	// returns '' when storage/$folder does not exist yet
    	return realpath(__DIR__.'/storage/'.$folder);
    }

    if you expect it to return the full path of a folder that doesn't exist yet !

  10. #TIL : Cleaning up old linux kernels

    The other day, I tried to reboot a production server whose /boot was out of space (I had upgraded many kernels without rebooting, so the system never cleaned up the old ones). And in the end, doomsday came ! Installing the new kernel failed, and the system booted into that broken kernel. My system crashed !

    So, I learned from it :

    • Never ever upgrade the kernel without cleaning up the old ones (just reboot first)
    • Never ever reboot a production server without a backup
    • MORE IMPORTANTLY, NEVER do the 2 things above at the same time on a weekend !!!

    Solution :

    • Check current kernel : uname -r

    • List all kernels : dpkg --list | grep linux-image

    • Remove a kernel : sudo apt-get purge linux-image-x.x.x-x-generic

    • Finally, update grub after removing all old kernels : sudo update-grub2

    • YOLO command for DEBIAN distros (removes all old kernels in 1 line), from AskUbuntu

    dpkg --list | grep linux-image | awk '{ print $2 }' | sort -V | sed -n '/'`uname -r`'/q;p' | xargs sudo apt-get -y purge

    THEN, sudo reboot
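
    On recent Debian/Ubuntu releases, apt can usually do this cleanup for you (it keeps the running kernel and the newest one) ; a simpler alternative to the one-liner above :

    $ sudo apt-get autoremove --purge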

  11. #TIL : HTTP/2 support for the python requests library

    The most popular HTTP client in Python is requests ; it has a simple API but powerful features. You can use it for crawling, sending requests to third-party APIs or writing tests.

    Btw, at this moment it doesn't support the HTTP/2 protocol (we often don't need its Server Push or multiplexed streams anyway). But sometimes an API endpoint only supports HTTP/2, like the Akamai Load Balancing service.

    The hero is a new library named hyper, which is being developed to cover the full HTTP/2 spec. If all we need is to send a single request to an HTTP/2 server, it works like a charm.

    Installation

    $ pip install requests
    $ pip install hyper
    

    Usage

    import requests
    from hyper.contrib import HTTP20Adapter
    s = requests.Session()
    s.mount('https://', HTTP20Adapter())
    r = s.get('https://cloudflare.com/')
    print(r.status_code)
    print(r.url)

    This means any URL with the https:// prefix will be handled by the HTTP20Adapter of the hyper library

    Notice

    If you run the above example, you will see this result

    200
    https://cloudflare.com/
    

    While you expected it to auto-follow the redirect to the page https://www.cloudflare.com/

    We can fix it by using a version newer than 0.7.0, which fixes the header key bytestring issue

    $ pip uninstall hyper
    $ pip install https://github.com/Lukasa/hyper/archive/development.zip
    

    Then try it out !!!
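
    As an aside, a quick way to check whether a server speaks HTTP/2 at all is curl, assuming your curl build has HTTP/2 support :

    $ curl -sI --http2 https://cloudflare.com/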

  12. #TIL : Free sandbox server for development

    We can use Heroku as a forever-free sandbox for testing or hosting a micro service. Adding a credit card gets you 1000 free computing hours per month.

    Heroku puts a service to sleep when no requests come in. We can use a cronjob-like service to check the service's health and keep it alive !!! ;)

    Health-check cronjob SaaS : pingdom, statuscake, port-monitor, uptimerobot
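
    Or run the health check yourself from any box with cron ; a minimal sketch (the app URL is hypothetical) :

    # ping the app every 25 minutes so it never sleeps (hypothetical URL)
    */25 * * * * curl -fsS https://your-app.herokuapp.com/ > /dev/null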

    Btw, I don't recommend keeping an unused service alive, it makes Heroku's infrastructure heavier and THAT'S NOT FAIR to them !

  13. #TIL : Gearman bash worker and client

    Gearman is an awesome job queue service that helps you scale your system. In a smaller context, it can help us run a background worker for minor tasks like backing up data or cleaning the system.

    Install :

    $ sudo apt install gearman-job-server gearman-tools

    Create a worker bash script

    worker.sh

    #!/bin/bash
    
    echo $1
    echo $2

    Run the worker : -w means run in worker mode, -f test means the function name will be test

    $ chmod +x worker.sh
    $ gearman -w -f test xargs ./worker.sh

    Sending job

    $ gearman -f test "hello" "hogehoge"

    Sending background job

    $ gearman -b -f test "hello" "hogehoge"
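
    If gearadmin is available (it ships with the gearman job server on most distros), you can check the registered functions and how many jobs are queued or running :

    $ gearadmin --status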
  14. #TIL : Resolving conflicts like a boss

    When merging a new branch into an old branch with git, sometimes you just want to take the ours or theirs version everywhere, but you're too lazy to update every conflicted file.

    grep -lr '<<<<<<<' . | xargs git checkout --ours

    Or

    grep -lr '<<<<<<<' . | xargs git checkout --theirs

    Explanation : these commands find every file containing the <<<<<<< string (a conflicted file) and run git checkout --[side] on it
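
    After picking a side, remember to stage the files so git marks the conflicts as resolved ; a minimal follow-up :

    $ git add -u
    $ git commit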

  15. #TIL : Reducing docker image size the right way

    When building an image, the Docker engine commits a filesystem layer for every command (RUN, ADD, COPY). So next time you install packages from a package manager like apt, yum, pacman, ... remember to clean their cache in the same line.

    BAD WAY

    RUN apt-get update
    RUN apt-get install -y git
    
    # Something here
    
    # End of file
    RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
    

    The cleanup in that last RUN doesn't shrink the image at all : the files deleted there still exist in the earlier layers.

    RIGHT WAY

    RUN apt-get update && apt-get install -y git zip unzip && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
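
    You can check how much each instruction contributes to the final size with docker history (the image tag below is a placeholder for whatever you built) :

    $ docker history your-image:latest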
    
  16. #TIL : Changing channel from alpha to stable will remove ALL DATA

    On macOS, changing the Docker channel will remove all data (including volumes, images, networks and ... everything).

    Because Docker for Mac uses a minimal Linux VM to host the Docker engine, changing the channel means discarding the old machine and all of its data. So BE CAREFUL !

  17. #TIL : zcat : decompressing pipe tool

    zcat is a tool that creates a pipe from a gz file. It makes commands cleaner and faster (maybe). You don't have to decompress the gz file before using the next tool.

    Examples :

    Finding a string in a gzipped text file

    $ zcat secret.gz | grep '42'

    Importing a SQL backup file

    $ mysqldump -u root -p db_name1 | gzip > db_name.sql.gz
    $ zcat db_name.sql.gz | mysql -u root -p db_name_2
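
    For the grep case there is also a shortcut, zgrep, which does the same thing in one command :

    $ zgrep '42' secret.gz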
  18. #TIL : Using the BSD find util to find and exec commands on files and folders

    Simple syntax of find

    $ find [find_path] -type [file_type] -exec [command] {} \;

    Add a filename matching pattern to filter the results

    $ find [find_path] -name "*.php" -type [file_type] -exec [command] {} \;

    Where file_type is :

    • b : block special
    • c : character special
    • d : directory
    • f : regular file
    • l : symbolic link
    • p : FIFO
    • s : socket

    Examples:

    Fix common file and directory permissions

    $ find . -type f -exec chmod 644 {} \;
    $ find . -type d -exec chmod 755 {} \;

    Check the syntax of all PHP files

    $ find . -type f -name "*.php" -exec php -l {} \; | grep -v 'No syntax errors detected'

    Remove all log files

    $ find . -type f -name "*.log" -exec rm -f {} \;
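
    Modern find implementations (both GNU and BSD) also have a built-in -delete action, which avoids spawning rm for every match :

    $ find . -type f -name "*.log" -delete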

    WANT MORE ???

    $ man find
  19. #TIL : wget Output flag

    -O means output file

    $ # without -O, the output file will be index.html or named after the URL
    $ wget www.abc.xyz
    $ # output file will be filename.html
    $ wget -O filename.html www.abc.xyz
    $ # output to stdout
    $ wget -O- www.abc.xyz
    $ wget -O- https://gist.githubusercontent.com/khanhicetea/4fa9f5103cd7fbc2d2270abce05c9c2b/raw/helloworld.sh | bash
  20. #TIL : Checking force-pushed conflicts in source code during automated testing

    Using an automated CI solution like Travis, Jenkins, DroneCI, ... is a good way to ensure software quality and avoid broken deployments.

    Sometimes developers force-push conflicted parts to the production branch. If the CI only tests the backend (python, ruby, php, go, ...) and forgets about the frontend code, your application will explode !

    So checking for conflict markers is a required step before testing and deployment.

    I used the grep tool to check for conflict markers in the current dir

    Create a file named conflict_detector.sh in the root dir of the source code

    #!/bin/bash

    # grep exits 0 when it FINDS conflict markers, so invert the status :
    # the script exits non-zero exactly when conflicted files are printed
    ! grep -rli --exclude=conflict_detector.sh --exclude-dir={.git,vendor,venv,node_modules} "<<<<<<< HEAD" .

    Then the mini tool prints the list of conflicted files and exits with a non-zero code, so the testing step fails !

  21. #TIL : Grant user to use sudo without password

    This is bad practice, but it's the kind of hacky thing you do when you YOLO

    # Create a user with home dir and bash shell (if you don't have one yet)
    $ sudo useradd -m YOURUSERNAME -s /bin/bash
    $ sudo vi /etc/sudoers

    Add this line below root ALL=(ALL:ALL) ALL (in the User privilege specification section)

    YOURUSERNAME     ALL=(ALL:ALL) NOPASSWD:ALL

    Then press :wq! to force-save the file
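
    A safer route is to edit with sudo visudo, which validates the syntax before saving ; you can also check an already-edited sudoers file :

    $ sudo visudo -c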

    Enjoy sudo !

  22. #TIL : Cloudflare Error 522 Connection Timed Out

    If you are using Cloudflare as a proxy in front of your web server, it provides many benefits (asset caching, DDoS protection and a cheap CDN). But sometimes you will face this error : "522 Connection Timed Out".

    The problem is caused by one of :

    • Networking (CF can't reach the origin server : firewall blocking, network layer #1, #2, #3 issues)
    • Timeout (the origin server takes longer than 90 seconds to respond)
    • Empty or invalid response from the origin server
    • Missing or too-big HTTP headers (> 8 KB)
    • Failed TCP handshake

  23. #TIL : MySQL dumping only table structure

    Add -d (--no-data) to dump only the table structure

    Example :

    $ mysqldump -h 127.0.0.1 -u root -p"something" -d database1 > db.sql
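
    The complementary flag also exists : -t (--no-create-info) dumps only the data, without the CREATE TABLE statements

    $ mysqldump -h 127.0.0.1 -u root -p"something" -t database1 > data.sql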
  24. #TIL : Compressing and Extracting files with rar in Linux

    zip and tar disadvantages

    All unicode filenames will be transformed into weird characters, which makes broken paths and broken links

    Notice

    rar and unrar in Linux aren't the same version, so don't use unrar to extract an archive created by rar (it can produce invalid full paths)

    Installation

    Ubuntu :

    $ sudo apt install rar

    Redhat (using RPMForge)

    $ sudo yum install rar

    Compressing files, folder

    Compressing files

    $ rar a result.rar file1 file2 file3 fileN

    Compressing a dir and its subdirs (remember the trailing slash at the end)

    $ rar a -r result.rar folder1/

    Locking the RAR file with a password (add -p"THE_PASSWORD_YOU_WANT")

    $ rar a -p"0cOP@55w0rD" result.rar file1 file2 file3 fileN
    $ rar a -p"0cOP@55w0rD" -r result.rar folder1/

    Extracting file

    Listing content of RAR file

    $ rar l result.rar

    Extracting RAR file to current dir

    $ rar e result.rar

    Extracting RAR file to current dir with fullpath

    $ rar x result.rar

    WANT MORE ?

    Ask it !

    $ rar -?

    BONUS

    WHAT IF I TOLD YOU THAT A RAR FILE CAN BE ALMOST 40 TIMES BIGGER THAN ITS ORIGINAL FILE ? (77 bytes vs 2 bytes below)

    $  echo 'a' > a.txt
    $  rar a a.rar a.txt
    
    RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
    Shareware version         Type RAR -? for help
    
    Evaluation copy. Please register.
    
    Creating archive a.rar
    
    Adding    a.txt                                                       OK 
    Done
    $  ls -al
    total 72
    -rw-r--r-- 1 root root    77 May 17 14:18 a.rar
    -rw-r--r-- 1 root root     2 May 17 14:17 a.txt

  25. Lightning thought #1 : MAGIC !

    Random quote

    “Insanity: doing the same thing over and over again and expecting different results.” - Albert Einstein

    It's true in LOGIC ! But sometimes, it goes wrong in computer science and ... life.

    What does computer program do ?

    We learnt this philosophy from Computer Science courses :

    PROGRAM takes INPUT and produces OUTPUT

    So, same PROGRAM + same INPUT = same OUTPUT

    And that's the basis of every testing technique. We expect a specified OUTPUT for a specified INPUT. If not, the test fails !

    What happens in reality ?

    IT's MAGIC !

    Sometimes it works, sometimes it doesn't ! This is a common situation in a developer's life, and in human life

    But have you ever thought about the root of it ? Why and how did it happen ?

    I'm drunk while writing this, but these are my random thoughts :

    • Time : of course, time affects everything it touches, but I separate it into 2 reasons
      • Randomization : any random thing depends on timing. At moment A it was X, but at moment B it will be Y. So a program or a life that depends on one random thing is unstable, unpredictable and magic !
      • Limitation : everything has its limits ; once you go over them, you will be blocked or have to wait.
    • Dependencies : everything has dependencies, even NOTHING depends on EVERYTHING.
      • Unavailable : dead, down-time, overloaded
      • Breaking changes : you need X but the dependency gives you Y

    How about human life ?

    If you keep doing the same thing but with a different attitude, magic can happen !

    That's why machines can't beat humans !

    Because humans are unpredictable !


  26. #TIL : Basics about sqlite command line tool

    We can use the sqlite3 command line tool to run SQL statements on a sqlite3 file.

    View all tables : .tables

    Show a table's schema : .schema [table_name] (there is no describe statement, as the transcript below shows)

    Truncate a table : delete from [table_name]; then run vacuum; to reclaim the space

    Close : press Ctrl-D to escape

    $ sqlite3 database.sqlite
    SQLite version 3.8.10.2 2015-05-20 18:17:19
    Enter ".help" for usage hints.
    sqlite> .tables
    auth_group                  backend_church
    auth_group_permissions      backend_masstime
    auth_permission             django_admin_log
    auth_user                   django_content_type
    auth_user_groups            django_migrations
    auth_user_user_permissions  django_session
    backend_area
    sqlite> select * from auth_user;
    1|pbkdf2_sha256$30000$QQSOJMiXmNly$mWUlYwZnaQGsv9UVZcdTb29P7IHrgnd7ja3T/uwFqvw=|2017-03-25 15:06:40.528549|1|||hi@khanhicetea.com|1|1|2017-03-25 15:06:23.822489|admin
    sqlite> describe auth_user;
    Error: near "describe": syntax error
    sqlite> select * from django_session;
    4nmyjqpw292bmdnb5oxasi74v9rdhzoc|MzcwZDMxMzk5MGZkZTg2MjY4YWYyNmZiMzRkNWQwOTVjYzczODk5OTp7Il9hdXRoX3VzZXJfaGFzaCI6IjhlZTZjM2NhOGJjNWU4ODU0ZGE3NTYzYmQ4M2FkYzA0MGI4NTI4NzgiLCJfYXV0aF91c2VyX2JhY2tlbmQiOiJkamFuZ28uY29udHJpYi5hdXRoLmJhY2tlbmRzLk1vZGVsQmFja2VuZCIsIl9hdXRoX3VzZXJfaWQiOiIxIn0=|2017-04-08 15:06:40.530786
    sqlite> delete from django_session;
    sqlite> vacuum;
    sqlite> ^D
  27. #TIL : Base64 encode and decode builtin tools

    Browsers have helper functions to encode and decode Base64 :

    • btoa : base64 encode
    • atob : base64 decode
    > btoa('Hello World !')
    "SGVsbG8gV29ybGQgIQ=="
    
    > atob('SW4gR29kIFdlIFRydXN0ICE=')
    "In God We Trust !"
    
  28. #TIL : ab failed responses

    When benchmarking an HTTP application server with the ab tool, you shouldn't only care about how many requests per second it handles, but also about the percentage of successful responses.

    Note that all responses must have the same content length, because ab assumes any response whose length differs from the Document Length (in the ab result) is a failed response.

    Example

    Webserver using Flask

    from flask import Flask
    from random import randint
    app = Flask(__name__)
    
    @app.route("/")
    def hello():
        return "Hello" * randint(1,3)
    
    if __name__ == "__main__":
        app.run()

    Benchmark using ab

    $ ab -n 1000 -c 5 http://127.0.0.1:5000/
    
    This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 127.0.0.1 (be patient)
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Completed 500 requests
    Completed 600 requests
    Completed 700 requests
    Completed 800 requests
    Completed 900 requests
    Completed 1000 requests
    Finished 1000 requests
    
    
    Server Software:        Werkzeug/0.12.1
    Server Hostname:        127.0.0.1
    Server Port:            5000
    
    Document Path:          /
    Document Length:        10 bytes
    
    Concurrency Level:      5
    Time taken for tests:   0.537 seconds
    Complete requests:      1000
    Failed requests:        683
       (Connect: 0, Receive: 0, Length: 683, Exceptions: 0)
    Total transferred:      164620 bytes
    HTML transferred:       9965 bytes
    Requests per second:    1862.55 [#/sec] (mean)
    Time per request:       2.684 [ms] (mean)
    Time per request:       0.537 [ms] (mean, across all concurrent requests)
    Transfer rate:          299.43 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       0
    Processing:     1    3   0.7      2      11
    Waiting:        1    2   0.6      2      11
    Total:          1    3   0.7      3      11
    WARNING: The median and mean for the processing time are not within a normal deviation
            These results are probably not that reliable.
    
    Percentage of the requests served within a certain time (ms)
      50%      3
      66%      3
      75%      3
      80%      3
      90%      3
      95%      3
      98%      5
      99%      6
     100%     11 (longest request)

    In this example, the first response's content length happened to be 10 bytes ("Hello" x 2), so every response with a content length of 5 or 15 was counted as a failed response.

  29. #TIL : Persistent connection to MySQL

    When a PHP process connects to a MySQL server, the connection can be persistent if your PHP config enables mysql.allow_persistent or mysqli.allow_persistent (PDO has the ATTR_PERSISTENT attribute).

    $dbh = new PDO('DSN', 'KhanhDepZai', 'QuenMatKhauCMNR', [PDO::ATTR_PERSISTENT => TRUE]);

    Object destruction

    PHP destructs an object automatically when the object loses all of its references.

    Example code:

    <?php
    
    $x = null;
    
    function klog($x) {
        echo $x . ' => ';
    }
    
    class A {
        private $k;
        function __construct($k) {
            $this->k = $k;
        }
    
        function b() {
            klog('[b]');
        }
    
        function __destruct() {
            klog("[{$this->k} has been killed]");
        }
    }
    
    function c($k) {
        return new A($k);
    }
    
    function d() {
        c('d')->b();
    }
    
    function e() {
        global $x;
        $x = c('e');
        $x->b();
        klog('[e]');
    }
    
    function f() {
        klog('[f]');
    }
    
    d();
    e();
    f();
    

    Result:

    [b] => [d has been killed] => [b] => [e] => [f] => [e has been killed] =>
    

    Reducing PDO persistent connections in a PHP long-running process (connecting to multiple databases)

    Instead of using a long-lived service object, we should use a factory pattern per job (per connection). PHP closes the MySQL connection when it destructs the PDO object, so we can reduce the number of simultaneous connections to MySQL.

    I learned this while implementing a web consumer (a long-running process) that runs database migrations for multiple databases.

    Before fixing this, our MySQL server had crashed because of a huge number of open connections.

    Now, everything works like a charm !

  30. #TIL : Using VarDumper in PHPUnit

    The trick is writing the output to the STDERR stream ; I wrote the helper function below

    function phpunit_dump() {
        $cloner = new \Symfony\Component\VarDumper\Cloner\VarCloner();
        $dumper = new \Symfony\Component\VarDumper\Dumper\CliDumper(STDERR);
        foreach (func_get_args() as $var) {
            $dumper->dump($cloner->cloneVar($var));
        }
    }

    How to use it ?

    // Something magic here :D
    
    phpunit_dump($magic_var1, $magic_var2, $magic_of_magic);
    
    // So much magic below, can't understand anymore

  31. #TIL : UNION vs UNION ALL

    The difference is that UNION sorts and removes duplicated rows (it RETURNS ONLY DISTINCT ROWS), while UNION ALL keeps every row

    Examples :

    mysql> select '1', '2' union select '2', '1' union select '3', '4' union select '1', '2';
    +---+---+
    | 1 | 2 |
    +---+---+
    | 1 | 2 |
    | 2 | 1 |
    | 3 | 4 |
    +---+---+
    3 rows in set (0.00 sec)
    
    mysql> select '1', '2' union select '2', '1' union select '3', '4' union select '1', '3';
    +---+---+
    | 1 | 2 |
    +---+---+
    | 1 | 2 |
    | 2 | 1 |
    | 3 | 4 |
    | 1 | 3 |
    +---+---+
    4 rows in set (0.00 sec)
    
    mysql> select '1', '2' union all select '2', '1' union all select '3', '4' union all select '1', '2';
    +---+---+
    | 1 | 2 |
    +---+---+
    | 1 | 2 |
    | 2 | 1 |
    | 3 | 4 |
    | 1 | 2 |
    +---+---+
    4 rows in set (0.00 sec)

    Tips

    If you know there will be no duplicates, using UNION ALL tells the server to skip that (useless, expensive) deduplication step.