{"id":1,"date":"2023-02-13T10:42:10","date_gmt":"2023-02-13T10:42:10","guid":{"rendered":"https:\/\/tivuli-it.com\/?p=1"},"modified":"2023-02-17T16:32:23","modified_gmt":"2023-02-17T16:32:23","slug":"infopage","status":"publish","type":"post","link":"https:\/\/tivuli-it.com\/index.php\/2023\/02\/13\/infopage\/","title":{"rendered":"PostgreSQL\/ TimescaleDB cluster using Patroni"},"content":{"rendered":"\n<p>To create this cluster we will need three virtual machines, although only two will run the database &#8211; the third node only hosts etcd. All machines run CentOS 7; very little changes if you use CentOS 8 instead:<\/p>\n\n\n\n<p>pg1.localdomain 192.168.122.21<\/p>\n\n\n\n<p>pg2.localdomain 192.168.122.22<\/p>\n\n\n\n<p>third node (etcd) 192.168.122.23<\/p>\n\n\n\n<p>VIP address will be 192.168.122.200<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"511\" src=\"https:\/\/tivuli-it.com\/wp-content\/uploads\/2023\/02\/db_anon-1024x511.png\" alt=\"\" class=\"wp-image-40\" srcset=\"https:\/\/tivuli-it.com\/wp-content\/uploads\/2023\/02\/db_anon-1024x511.png 1024w, https:\/\/tivuli-it.com\/wp-content\/uploads\/2023\/02\/db_anon-300x150.png 300w, https:\/\/tivuli-it.com\/wp-content\/uploads\/2023\/02\/db_anon-768x383.png 768w, https:\/\/tivuli-it.com\/wp-content\/uploads\/2023\/02\/db_anon.png 1361w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>To install the software, first add the repositories on both database nodes:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># yum install -y https:\/\/download.postgresql.org\/pub\/repos\/yum\/reporpms\/EL-7-x86_64\/pgdg-redhat-repo-latest.noarch.rpm\n\n# tee \/etc\/yum.repos.d\/timescale_timescaledb.repo &lt;&lt;EOL\n&#91;timescale_timescaledb]\nname=timescale_timescaledb\nbaseurl=https:\/\/packagecloud.io\/timescale\/timescaledb\/el\/$(rpm -E 
%{rhel})\/\\$basearch\nrepo_gpgcheck=1\ngpgcheck=0\nenabled=1\ngpgkey=https:\/\/packagecloud.io\/timescale\/timescaledb\/gpgkey\nsslverify=1\nsslcacert=\/etc\/pki\/tls\/certs\/ca-bundle.crt\nmetadata_expire=300\nEOL<\/code><\/pre>\n\n\n\n<p>Update packages and install the software on both nodes:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># yum update -y\n\n# yum install -y timescaledb-2-postgresql-13<\/code><\/pre>\n\n\n\n<p>We also need to add the Extras repository and install etcd on both database nodes plus one extra node &#8211; we used the node already running grafana (192.168.122.23):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># yum install -y etcd<\/code><\/pre>\n\n\n\n<p>Update the configuration in \/etc\/etcd\/etcd.conf on pg1 &#8211; we must use IP addresses, not hostnames:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: etcd1\ndata-dir: \/var\/lib\/etcd\nlisten-peer-urls: http:\/\/192.168.122.21:2380\nlisten-client-urls: http:\/\/localhost:2379,http:\/\/192.168.122.21:2379\ninitial-advertise-peer-urls: http:\/\/192.168.122.21:2380\nadvertise-client-urls: http:\/\/192.168.122.21:2379\ninitial-cluster: etcd1=http:\/\/192.168.122.21:2380,etcd2=http:\/\/192.168.122.22:2380,etcd3=http:\/\/192.168.122.23:2380\ninitial-cluster-token: pg-cluster1\ninitial-cluster-state: new\nenable-v2: true<\/code><\/pre>\n\n\n\n<p>Update the configuration on pg2:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: etcd2\ndata-dir: \/var\/lib\/etcd\nlisten-peer-urls: http:\/\/192.168.122.22:2380\nlisten-client-urls: http:\/\/localhost:2379,http:\/\/192.168.122.22:2379\ninitial-advertise-peer-urls: http:\/\/192.168.122.22:2380\nadvertise-client-urls: http:\/\/192.168.122.22:2379\ninitial-cluster: etcd1=http:\/\/192.168.122.21:2380,etcd2=http:\/\/192.168.122.22:2380,etcd3=http:\/\/192.168.122.23:2380\ninitial-cluster-token: pg-cluster1\ninitial-cluster-state: new\nenable-v2: true<\/code><\/pre>\n\n\n\n<p>Update the configuration on the third node too:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>name: etcd3\ndata-dir: 
\/var\/lib\/etcd\nlisten-peer-urls: http:\/\/192.168.122.23:2380\nlisten-client-urls: http:\/\/localhost:2379,http:\/\/192.168.122.23:2379\ninitial-advertise-peer-urls: http:\/\/192.168.122.23:2380\nadvertise-client-urls: http:\/\/192.168.122.23:2379\ninitial-cluster: etcd1=http:\/\/192.168.122.21:2380,etcd2=http:\/\/192.168.122.22:2380,etcd3=http:\/\/192.168.122.23:2380\ninitial-cluster-token: pg-cluster1\ninitial-cluster-state: new\nenable-v2: true<\/code><\/pre>\n\n\n\n<p>You can now start etcd on all nodes; once the cluster has formed, change initial-cluster-state from new to existing so that later restarts join the existing cluster instead of trying to bootstrap a new one.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>### if you don't use systemd:\n\/root\/etcd\/etcd --config-file \/etc\/etcd\/etcd.conf\n\n### or you can create your own \/etc\/systemd\/system\/etcd.service\n&#91;Unit]\nDescription=etcd key-value store\nDocumentation=https:\/\/github.com\/etcd-io\/etcd\nAfter=network-online.target local-fs.target remote-fs.target\nWants=network-online.target local-fs.target remote-fs.target\n\n&#91;Service]\nUser=etcd\nType=simple\nEnvironment=ETCD_DATA_DIR=\/var\/lib\/etcd\nExecStart=\/usr\/local\/bin\/etcd --config-file \/etc\/etcd\/etcd.conf\nStandardOutput=syslog\nStandardError=syslog\nSyslogIdentifier=etcd\nRestart=always\nRestartSec=10s\nLimitNOFILE=40000\n\n&#91;Install]\nWantedBy=multi-user.target<\/code><\/pre>\n\n\n\n<p>Next we have to install patroni on both database nodes:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>### First the prerequisites\n# yum install -y net-tools python3 gcc python3-devel psycopg2\n\n# pip3 install --upgrade setuptools\n# pip3 install python-etcd\n# pip3 install psycopg2-binary\n# pip3 install patroni<\/code><\/pre>\n\n\n\n<p>If you need to, you can create your own \/etc\/systemd\/system\/patroni.service:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;Unit]\nDescription=Runners to orchestrate a high-availability PostgreSQL\nAfter=syslog.target network.target 
etcd.service\n\n&#91;Service]\nType=simple\nUser=postgres\nGroup=postgres\nExecStart=\/bin\/patroni \/etc\/patroni.yml\nStandardOutput=syslog\nStandardError=syslog\nSyslogIdentifier=patroni\nKillMode=process\nTimeoutSec=30\nRestart=no\nLimitNOFILE=10000\n\n&#91;Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n\n\n\n<p>Create or update \/etc\/patroni.yml on both nodes (on pg2 adjust the IP addresses and set name: pg02):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>scope: postgres\nname: pg01\n\nlog:\n    level: INFO\n    dir: \/var\/log\/patroni\n\nrestapi:\n    listen: 192.168.122.21:8008\n    connect_address: 192.168.122.21:8008\n\netcd:\n    host: 192.168.122.21:2379\n\nbootstrap:\n    dcs:\n        ttl: 30\n        loop_wait: 10\n        retry_timeout: 10\n        maximum_lag_on_failover: 1048576\n        postgresql:\n            use_pg_rewind: true\n            use_slots: true\n            parameters:\n\n    initdb:\n    - encoding: UTF8\n    - data-checksums\n\n    pg_hba:\n    - host replication replicator 127.0.0.1\/32 md5\n    - host replication replicator 192.168.122.21\/32 md5\n    - host replication replicator 192.168.122.22\/32 md5\n    - host all all 0.0.0.0\/0 md5\n\n    users:\n        admin:\n            password: PASSW0RD\n            options:\n                - createrole\n                - createdb\n\npostgresql:\n    listen: 192.168.122.21:5432\n    bin_dir: \/usr\/pgsql-13\/bin\n    connect_address: 192.168.122.21:5432\n    data_dir: \/var\/lib\/pgsql\/13\/data\/\n    pgpass: \/app\/pgsql\/pgpass\n    authentication:\n        replication:\n            username: replicator\n            password: PASSW0RD\n        superuser:\n            username: postgres\n            password: PASSW0RD\n    parameters:\n        unix_socket_directories: '.'\n\ntags:\n    nofailover: false\n    noloadbalance: false\n    clonefrom: false\n    nosync: false<\/code><\/pre>\n\n\n\n<p>You should now be able to start patroni, which in turn will create and bootstrap the database instance. 
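<\/p>\n\n\n\n<p>A minimal start-and-verify sequence, assuming the systemd unit above is in place (the curl call queries Patroni&#8217;s REST API on port 8008 &#8211; the same endpoint the HAProxy health check uses later):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># systemctl daemon-reload\n# systemctl enable --now patroni\n\n### ask this node what it reports about itself (role, state, timeline)\n# curl -s http:\/\/192.168.122.21:8008\/patroni<\/code><\/pre>\n\n\n\n<p>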
First start it on pg1, wait for it to succeed, then start it on pg2:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>### command to check status:\n&#91;root@pg2 etc]# patronictl -c \/etc\/patroni.yml list postgres\n+ Cluster: postgres (6925420485239186222) ----+----+-----------+\n| Member | Host           | Role    | State   | TL | Lag in MB |\n+--------+----------------+---------+---------+----+-----------+\n| pg01   | 192.168.122.21 | Leader  | running |  1 |           |\n| pg02   | 192.168.122.22 | Replica | running |  1 |         0 |\n+--------+----------------+---------+---------+----+-----------+<\/code><\/pre>\n\n\n\n<p>You can now connect to pg1 and create a database (we will create a TimescaleDB-enabled database):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@pg1 data]# psql -U postgres -W -h pg1\nPassword: \npsql (12.5)\nType \"help\" for help.\n\npostgres=# create database timescale;\nCREATE DATABASE\npostgres=# \\\c timescale;\nPassword: \nYou are now connected to database \"timescale\" as user \"postgres\".\ntimescale=# create extension if not exists timescaledb;\nWARNING:  \nWELCOME TO\n _____ _                               _     ____________  \n|_   _(_)                             | |    |  _  \\ ___ \\ \n  | |  _ _ __ ___   ___  ___  ___ __ _| | ___| | | | |_\/ \/ \n  | | | |  _ ` _ \\ \/ _ \\\/ __|\/ __\/ _` | |\/ _ \\ | | | ___ \\ \n  | | | | | | | | |  __\/\\__ \\ (_| (_| | |  __\/ |\/ \/| |_\/ \/\n  |_| |_|_| |_| |_|\\___||___\/\\___\\__,_|_|\\___|___\/ \\____\/<\/code><\/pre>\n\n\n\n<p>We need to install HAProxy on both nodes so that clients can always connect to the master database:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># yum install -y haproxy\n\n### update \/etc\/haproxy\/haproxy.cfg\nglobal\n        maxconn 1000\n        log     
127.0.0.1:514 local2\n\ndefaults\n        log global\n        mode tcp\n        retries 2\n        timeout client 30m\n        timeout connect 4s\n        timeout server 30m\n        timeout check 5s\n\nlisten stats\n        mode http\n        bind *:7070\n        stats enable\n        stats uri \/\n\nlisten postgres\n        bind *:5000\n        option httpchk GET \/patroni HTTP\/1.0\n        http-check expect rstring \\\"role\\\":\\ \\\"master\\\"\n        default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions\n        server postgresql_pg01_5432 192.168.122.21:5432 maxconn 1000 check port 8008\n        server postgresql_pg02_5432 192.168.122.22:5432 maxconn 1000 check port 8008<\/code><\/pre>\n\n\n\n<p>If you didn&#8217;t disable SELinux you will have to add some extra rules:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>semanage port -a -t PORT_TYPE -p tcp 7070\nsetsebool -P nis_enabled 1\nsetsebool -P haproxy_connect_any 1<\/code><\/pre>\n\n\n\n<p>Start haproxy on both servers. From now on you can access the database on port 5000 and you will automatically be directed to the master database. We still need to install keepalived and configure it to raise a VIP address for client connections, one that remains available even if the pg1 machine is down.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># yum install -y keepalived\n\n### update configuration in \/etc\/keepalived\/keepalived.conf\n\nvrrp_script chk_haproxy {\n  script \"\/opt\/scripts\/check_haproxy.sh\"\n  interval 2 # every 2 seconds\n  weight 2 # add 2 points if OK\n}\n\nvrrp_instance VI_1 {\n  interface ens224 # interface to monitor\n  state MASTER # MASTER on pg1, BACKUP on pg2\n  virtual_router_id 51\n  priority 101 # 101 on pg1, 100 on pg2\n  virtual_ipaddress {\n    192.168.122.200 # virtual ip address\n  }\n  track_script {\n    chk_haproxy\n  }\n}<\/code><\/pre>\n\n\n\n<p>On pg2 change state to BACKUP and priority to 100.  
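<\/p>\n\n\n\n<p>As a sketch, the pg2 copy of \/etc\/keepalived\/keepalived.conf differs only in these two lines; everything else stays identical:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>vrrp_instance VI_1 {\n  state BACKUP # pg1 stays MASTER\n  priority 100 # lower than pg1's 101\n  ...\n}<\/code><\/pre>\n\n\n\n<p>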
Create the \/opt\/scripts\/check_haproxy.sh script and make it executable:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>#!\/bin\/sh\nset -e\n\n\/usr\/bin\/killall -0 haproxy\n<\/code><\/pre>\n\n\n\n<p>This script exits with an error if haproxy isn&#8217;t up and exits silently if the process is running. We can now connect to the database using the VIP and port 5000. To control the cluster, use the patronictl command as shown previously:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>### status\npatronictl -c \/etc\/patroni.yml list postgres\n\n### restart a node\npatronictl -c \/etc\/patroni.yml restart postgres pg01\n\n### manual failover\npatronictl -c \/etc\/patroni.yml failover postgres<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Create a PostgreSQL cluster using Patroni<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[9,8,7,4,5,10,6],"class_list":["post-1","post","type-post","status-publish","format-standard","hentry","category-article","tag-cluster","tag-haproxy","tag-keepalived","tag-patroni","tag-postgresql","tag-replication","tag-timescaledb"],"_links":{"self":[{"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/posts\/1","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/comments?post=1"}],"version-history":[{"count":8,"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/posts\/1\/revisions"}],"predecessor-version":[{"id":41,"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/posts\/1\/revisions\/41"}],"wp:attachment":[{"href":"https:\/\/tivuli-it.com\/index.php\/w
p-json\/wp\/v2\/media?parent=1"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/categories?post=1"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tivuli-it.com\/index.php\/wp-json\/wp\/v2\/tags?post=1"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}