MariaDB Galera Cluster setup problems

I am trying to get a MariaDB cluster up and running, but it is not working out for me. Right now I am using MariaDB Galera 5.5.36 on a 64-bit RHEL 6 machine. I installed MariaDB through this repo:
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5-galera/rhel6-amd64/
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
In server.cnf on server 1 I have the following:
[mariadb]
log_error=/var/log/mariadb.log
query_cache_size=0
query_cache_type=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.211.133
wsrep_cluster_name='cluster'
wsrep_node_address='192.168.211.132'
wsrep_node_name='cluster1'
wsrep_sst_method=rsync
and on server 2 I have
[mariadb]
log_error=/var/log/mariadb.log
query_cache_size=0
query_cache_type=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.211.132
wsrep_cluster_name='cluster'
wsrep_node_address='192.168.211.133'
wsrep_node_name='cluster2'
wsrep_sst_method=rsync
When I start server 1 with sudo service mysql start --wsrep-new-cluster it starts up just fine; if I open mysql and check the wsrep status, it says everything is up and running, which is good. But when I run sudo service mysql start on the second server, I get the following in the error logs:
140609 14:47:55 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140609 14:47:56 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.i5qfm2' --pid-file='/var/lib/mysql/localhost.localdomain-recover.pid'
140609 14:47:57 mysqld_safe WSREP: Recovered position 85448d73-ebe8-11e3-9c20-fbc1995fee11:0
140609 14:47:57 [Note] WSREP: wsrep_start_position var submitted: '85448d73-ebe8-11e3-9c20-fbc1995fee11:0'
140609 14:47:57 [Note] WSREP: Read nil XID from storage engines, skipping position init
140609 14:47:57 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
140609 14:47:57 [Note] WSREP: wsrep_load(): Galera 25.3.2(r170) by Codership Oy <info@codership.com> loaded successfully.
140609 14:47:57 [Note] WSREP: CRC-32C: using hardware acceleration.
140609 14:47:57 [Note] WSREP: Found saved state: 85448d73-ebe8-11e3-9c20-fbc1995fee11:-1
140609 14:47:57 [Note] WSREP: Passing config to GCS: base_host = 192.168.211.133; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.proto_max = 5
140609 14:47:57 [Note] WSREP: Assign initial position for certification: 0, protocol version: -1
140609 14:47:57 [Note] WSREP: wsrep_sst_grab()
140609 14:47:57 [Note] WSREP: Start replication
140609 14:47:57 [Note] WSREP: Setting initial position to 85448d73-ebe8-11e3-9c20-fbc1995fee11:0
140609 14:47:57 [Note] WSREP: protonet asio version 0
140609 14:47:57 [Note] WSREP: Using CRC-32C (optimized) for message checksums.
140609 14:47:57 [Note] WSREP: backend: asio
140609 14:47:57 [Note] WSREP: GMCast version 0
140609 14:47:57 [Note] WSREP: (0c085f34-efe5-11e3-9f6b-8bfd1706e2a4, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
140609 14:47:57 [Note] WSREP: (0c085f34-efe5-11e3-9f6b-8bfd1706e2a4, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
140609 14:47:57 [Note] WSREP: EVS version 0
140609 14:47:57 [Note] WSREP: PC version 0
140609 14:47:57 [Note] WSREP: gcomm: connecting to group 'cluster', peer '192.168.211.132:,192.168.211.134:'
140609 14:48:00 [Warning] WSREP: no nodes coming from prim view, prim not possible
140609 14:48:00 [Note] WSREP: view(view_id(NON_PRIM,0c085f34-efe5-11e3-9f6b-8bfd1706e2a4,1) memb {
0c085f34-efe5-11e3-9f6b-8bfd1706e2a4,0
} joined {
} left {
} partitioned {
})
140609 14:48:01 [Warning] WSREP: last inactive check more than PT1.5S ago (PT3.50775S), skipping check
140609 14:48:31 [Note] WSREP: view((empty))
140609 14:48:31 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():141
140609 14:48:31 [ERROR] WSREP: gcs/src/gcs_core.c:gcs_core_open():196: Failed to open backend connection: -110 (Connection timed out)
140609 14:48:31 [ERROR] WSREP: gcs/src/gcs.c:gcs_open():1291: Failed to open channel 'cluster' at 'gcomm://192.168.211.132,192.168.211.134': -110 (Connection timed out)
140609 14:48:31 [ERROR] WSREP: gcs connect failed: Connection timed out
140609 14:48:31 [ERROR] WSREP: wsrep::connect() failed: 7
140609 14:48:31 [ERROR] Aborting
140609 14:48:31 [Note] WSREP: Service disconnected.
140609 14:48:32 [Note] WSREP: Some threads may fail to exit.
140609 14:48:32 [Note] /usr/sbin/mysqld: Shutdown complete
140609 14:48:32 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdomain.pid ended
I am at a loss as to why the second server cannot detect that a cluster is up and running. These machines can communicate with each other just fine: I can SSH from one to the other and they can ping each other. I have tried deleting the galera cache, downgrading my version of MariaDB Galera, disabling SELinux, running the mysql service as a different user, verifying that the correct ports are open, running them on 2 VMs on separate computers with different IP addresses, etc. Does anyone have any idea what is going on here? I have been searching for 3 days trying to fix this, but no solution seems to work for me.

Here is how I fixed my similar issue, on CentOS 7 with MariaDB Galera 10.1.
On node2 I saw this:
2016-12-27 15:40:38 140703512762624 [Warning] WSREP: no nodes coming from prim view, prim not possible
After doing some reading, I tried running this on node1.
service mysql start --wsrep-new-cluster
But this failed, and in the logs, I found this...
2016-12-27 15:44:08 140438853814528 [ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .
So I edited the file /var/lib/mysql/grastate.dat, changing safe_to_bootstrap to 1.
I was then able to start the Primary node using:
service mysql start --wsrep-new-cluster
Then on the others, I just used
service mysql start
Note: This was in a demo pre-production environment. I promptly broke it after getting everything to work by rebooting all servers at the same time :P, but I knew there were no writes and that the DBs were in sync. If you are in production and this happens, you can use the following to figure out which node to run "new-cluster" on, which is akin to saying "make me primary":
mysqld_safe --wsrep-recover
If this is a production issue, I highly recommend reading this article and making a backup with CloneZilla before throwing commands at the broken clients!
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
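For reference, a minimal sketch of that recovery check (the log path is whatever log_error points to, e.g. /var/log/mariadb.log in the question above; the seqno shown is illustrative):
# On each node, with mysqld stopped, recover the last committed position:
mysqld_safe --wsrep-recover
grep 'Recovered position' /var/log/mariadb.log | tail -1
#   WSREP: Recovered position 85448d73-ebe8-11e3-9c20-fbc1995fee11:1352   (example output)
# Bootstrap from the node with the highest seqno (the number after the colon):
service mysql start --wsrep-new-cluster    # on that node only
service mysql start                        # on the others, one at a time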

The cluster must be started with this command on the primary node:
galera_new_cluster
After starting the first node, you can start the other nodes in the cluster successfully.
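For example, a minimal sketch of the startup order (assuming a systemd-based install, e.g. MariaDB 10.1+, where the galera_new_cluster wrapper exists):
# On the bootstrap node only:
galera_new_cluster
# On each remaining node, one at a time:
systemctl start mariadb
# Optionally confirm the node count grew:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"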

I believe you need to list all the IPs in the wsrep_cluster_address parameter.
wsrep_cluster_address=gcomm://192.168.211.132,192.168.211.133
This should be done on both hosts. BTW, you likely want three nodes rather than two, to avoid split-brain scenarios.
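After updating both files and restarting the second node, a quick way to confirm the two nodes formed one primary component (run from either node; wsrep_cluster_size and wsrep_cluster_status are standard Galera status variables):
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'; SHOW STATUS LIKE 'wsrep_cluster_status';"
# Expect wsrep_cluster_size = 2 and wsrep_cluster_status = Primary once both nodes have joined.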

Related

mariadb, add 4th galera node failed

I have a three-node setup that has been running perfectly for the past few months.
Recently I wanted to add another node in a different location, but I keep getting errors.
At first I just followed this tutorial (the one I used when I set things up a few months ago): https://www.howtoforge.com/tutorial/how-to-install-and-configure-galera-cluster-on-ubuntu-1604/ I did not start all the nodes again from the beginning; I just edited /etc/mysql/conf.d/galera.cnf on the other three nodes and added the new node's IP to each of them. For the fourth node I set up /etc/mysql/conf.d/galera.cnf like this:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://node1_ip,node2_ip,node3_ip,node4_ip"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="xx.xx.xxx.xxx"
wsrep_node_name="Node4"
Somehow I am getting this HUGE error:
Group state: e3ade7e7-e682-11e7-8d16-be7d28cda90e:36273
Local state: 00000000-0000-0000-0000-000000000000:-1
[Note] WSREP: New cluster view: global state: e3ade7e7-e682-11e7-8d16-be7d28cda90e:36273, view# 122: Primary, number of nodes: 4, my
[Warning] WSREP: Gap in state sequence. Need state transfer.
[Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address 'xxx.node.4.ip' --datadir '/var/lib/mysql/' --parent '22828' ''
rsyncd version 3.1.1 starting, listening on port 4444
[Note] WSREP: Prepared SST request: rsync|xxx.node.4.ip:4444/rsync_sst
[Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
[Note] WSREP: REPL Protocols: 7 (3, 2)
[Note] WSREP: Assign initial position for certification: 36273, protocol version: 3
[Note] WSREP: Service thread queue flushed.
[Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not
at galera/src/replicator_str.cpp:prepare_for_IST():482. IST will be unavailable.
[Note] WSREP: Member 0.0 (Node4) requested state transfer from '*any*'. Selected 1.0 (Node1)(SYNCED) as donor.
[Note] WSREP: Shifting PRIMARY -> JOINER (TO: 36273)
[Note] WSREP: Requesting state transfer: success, donor: 1
[Note] WSREP: GCache history reset: 00000000-0000-0000-0000-000000000000:0 -> e3ade7e7-e682-11e7-8d16-be7d28cda90e:36273
[Note] WSREP: (7642cf37, 'tcp://0.0.0.0:4567') connection to peer 7642cf37 with addr tcp://xxx.node.4.ip:4567 timed out, no messages
[Note] WSREP: (7642cf37, 'tcp://0.0.0.0:4567') turning message relay requesting off
mariadb.service: Start operation timed out. Terminating.
Terminated
WSREP_SST: [INFO] Joiner cleanup. rsync PID: 22875
sent 0 bytes received 0 bytes total size 0
WSREP_SST: [INFO] Joiner cleanup done.
[ERROR] WSREP: Process was aborted.
[ERROR] WSREP: Process completed with error: wsrep_sst_rsync --role 'joiner' --address 'xxx.node.4.ip' --datadir '/var/lib/mysql/'
[ERROR] WSREP: Failed to read uuid:seqno and wsrep_gtid_domain_id from joiner script.
[ERROR] WSREP: SST failed: 2 (No such file or directory)
[ERROR] Aborting
Error in my_thread_global_end(): 1 threads didn't exit
mariadb.service: Main process exited, code=exited, status=1/FAILURE
Failed to start MariaDB 10.1.33 database server.
P.S. For the older 3 nodes the MariaDB version is 10.1.29, and the new node is 10.1.33.
Thanks in advance for any suggestions.

Full SST on galera node doesn't start ("WSREP: Prepared SST request" missing)

I have a galera cluster (10.0.27) with 3 nodes, each on a dedicated server.
After the reboot of one of the servers, the node can no longer join the cluster, nor perform a full SST.
Actually, it's as if mysql 'forgets' to launch some commands.
I have a second 'development' cluster with the same configuration that works perfectly; I have no problem adding a node there. I noticed a difference between the working cluster and the non-working one when I add a node back for a full SST:
Node joining on the working cluster:
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Quorum results:
11:44:52 mysqld: #011version = 4,
11:44:52 mysqld: #011component = PRIMARY,
11:44:52 mysqld: #011conf_id = 8,
11:44:52 mysqld: #011members = 2/3 (joined/total),
11:44:52 mysqld: #011act_id = 906976,
11:44:52 mysqld: #011last_appl. = -1,
11:44:52 mysqld: #011protocols = 0/7/3 (gcs/repl/appl),
11:44:52 mysqld: #011group UUID = 27ba4c4f-9b78-11e6-824c-f3b1e60fa202
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Flow-control interval: [28, 28]
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 906976)
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: State transfer required:
11:44:52 mysqld: #011Group state: 27ba4c4f-9b78-11e6-824c-f3b1e60fa202:906976
11:44:52 mysqld: #011Local state: 00000000-0000-0000-0000-000000000000:-1
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: New cluster view: global state: 27ba4c4f-9b78-11e6-824c-f3b1e60fa202:906976, view# 9: Primary, number of nodes: 3, my index: 2, protocol version 3
11:44:52 mysqld: 170628 11:44:52 [Warning] WSREP: Gap in state sequence. Need state transfer.
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '192.***.***.**2' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '16472' --binlog '/var/log/mysql/mariadb-bin' '
11:44:52 rsyncd[16514]: rsyncd version 3.1.1 starting, listening on port 4444
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Prepared SST request: rsync|192.***.***.**2:4444/rsync_sst
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: REPL Protocols: 7 (3, 2)
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Assign initial position for certification: 906976, protocol version: 3
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Service thread queue flushed.
11:44:52 mysqld: 170628 11:44:52 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (27ba4c4f-9b78-11e6-824c-f3b1e60fa202): 1 (Operation not permitted)
11:44:52 mysqld: #011 at galera/src/replicator_str.cpp:prepare_for_IST():482. IST will be unavailable.
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Member 2.0 (server-3) requested state transfer from '*any*'. Selected 0.0 (server1)(SYNCED) as donor.
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 906977)
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: Requesting state transfer: success, donor: 0
11:44:52 mysqld: 170628 11:44:52 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(27ba4c4f-9b78-11e6-824c-f3b1e60fa202:906976)
11:44:52 rsyncd[16531]: name lookup failed for 192.***.***.**1: Name or service not known
11:44:52 rsyncd[16531]: connect from UNKNOWN (192.***.***.**1)
11:44:52 rsyncd[16531]: rsync to rsync_sst/ from UNKNOWN (192.***.***.**1)
11:44:52 rsyncd[16531]: receiving file list
11:44:54 rsyncd[16553]: name lookup failed for 192.***.***.**1: Name or service not known
11:44:54 rsyncd[16553]: connect from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16531]: sent 114 bytes received 146847600 bytes total size 146810880
11:44:54 rsyncd[16553]: rsync to rsync_sst-log_dir/ from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16553]: receiving file list
11:44:54 rsyncd[16553]: sent 63 bytes received 100688095 bytes total size 100663296
11:44:54 rsyncd[16559]: name lookup failed for 192.***.***.**1: Name or service not known
11:44:54 rsyncd[16559]: connect from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16560]: name lookup failed for 192.***.***.**1: Name or service not known
11:44:54 rsyncd[16560]: connect from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16561]: name lookup failed for 192.***.***.**1: Name or service not known
11:44:54 rsyncd[16561]: connect from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16562]: name lookup failed for 192.***.***.**1: Name or service not known
11:44:54 rsyncd[16562]: connect from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16559]: rsync to rsync_sst/./db_1 from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16562]: rsync to rsync_sst/./db_2 from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16560]: rsync to rsync_sst/./db_3 from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16561]: rsync to rsync_sst/./db_3 from UNKNOWN (192.***.***.**1)
11:44:54 rsyncd[16560]: receiving file list
...
Node joining on the non-working cluster:
13:36:28 mysqld: 170630 13:36:28 [Note] WSREP: Quorum results:
13:36:28 mysqld: #011version = 4,
13:36:28 mysqld: #011component = PRIMARY,
13:36:28 mysqld: #011conf_id = 514,
13:36:28 mysqld: #011members = 2/3 (joined/total),
13:36:28 mysqld: #011act_id = 242914778,
13:36:28 mysqld: #011last_appl. = -1,
13:36:28 mysqld: #011protocols = 0/7/3 (gcs/repl/appl),
13:36:28 mysqld: #011group UUID = 8119e584-9f83-11e6-b292-7a8102156c2d
13:36:28 mysqld: 170630 13:36:28 [Note] WSREP: Flow-control interval: [28, 28]
13:36:28 mysqld: 170630 13:36:28 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 242914778)
13:36:28 mysqld: 170630 13:36:28 [Note] WSREP: State transfer required:
13:36:28 mysqld: #011Group state: 8119e584-9f83-11e6-b292-7a8102156c2d:242914778
13:36:28 mysqld: #011Local state: 00000000-0000-0000-0000-000000000000:-1
13:36:28 mysqld: 170630 13:36:28 [Note] WSREP: New cluster view: global state: 8119e584-9f83-11e6-b292-7a8102156c2d:242914778, view# 515: Primary, number of nodes: 3, my index: 2, protocol version 3
13:36:28 mysqld: 170630 13:36:28 [Warning] WSREP: Gap in state sequence. Need state transfer.
13:36:28 mysqld: 170630 13:36:28 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '192.***.***.*11' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --defaults-group-suffix '' --parent '13253' --binlog '/var/log/mysql/mariadb-bin' '
13:36:28 rsyncd[13316]: rsyncd version 3.1.1 starting, listening on port 4444
13:36:32 mysqld: 170630 13:36:32 [Note] WSREP: (85c5aae8, 'tcp://0.0.0.0:4567') turning message relay requesting off
13:36:56 /etc/init.d/mysql[14935]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
The difference is just after this line:
rsyncd[13316]: rsyncd version 3.1.1 starting, listening on port 4444
On the working cluster, the following line is
WSREP: Prepared SST request: rsync|192.***.***.**2:4444/rsync_sst
On the non-working cluster this line doesn't appear; it's as if the SST request is never made.
I can provide more information about the configuration if you think it can help to find the issue.
Thanks for your help!
I had the same issue; here's what I found:
The wsrep_sst_rsync script gets stuck in an endless loop. In my case, that was because the output of lsof -i :$rsync_port was empty. For some (unknown) reason, lsof had the setgid bit set:
[dbserver1:~]# ls -l /usr/bin/lsof
-rwxr-sr-x 1 root root 163224 Oct 28 2015 /usr/bin/lsof
This caused an endless loop in wsrep_sst_rsync, as it checks whether rsync could be started. Removing the flag lets the script continue, which eventually starts an SST.
The flag can be removed using:
[dbserver1:~]# chmod g-s /usr/bin/lsof
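A minimal sketch of verifying the condition the script relies on (4444 is the default rsync SST port; adjust if yours differs):
# While a joiner is waiting for SST, this should list the rsyncd process;
# an empty result is what kept wsrep_sst_rsync looping:
lsof -i :4444
# Confirm the setgid bit is gone after the chmod:
ls -l /usr/bin/lsof    # expect -rwxr-xr-x, with no 's' in the group bits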

Percona mysql xtradb cluster doesn't start properly and node restarts don't work

tl;dr
When starting a fresh Percona cluster of 3 Kubernetes pods, the grastate.dat seqno is set to -1 and doesn't change. On deleting one pod and watching it restart, expecting it to rejoin the cluster, it sets its initial position to 00000000-0000-0000-0000-000000000000:-1 and tries to connect to itself (its former IP), maybe because it had been the first pod in the cluster? It then times out on its erroneous connection to itself:
2017-03-26T08:38:05.374058Z 0 [Note] WSREP: (b7571ff8, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.52.0.26:4567 timed out, no messages seen in PT3S
The cluster doesn't get started properly and I'm unable to successfully restart pods in the cluster.
Full
When I start the cluster from scratch, with blank data directories and a fresh etcd cluster, everything seems to come up. However, when I look at grastate.dat I find that the seqno for each pod is -1:
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-0/grastate.dat
# GALERA saved state
version: 2.1
uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac
seqno: -1
safe_to_bootstrap: 0
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-1/grastate.dat
# GALERA saved state
version: 2.1
uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac
seqno: -1
safe_to_bootstrap: 0
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-2/grastate.dat
# GALERA saved state
version: 2.1
uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac
seqno: -1
safe_to_bootstrap: 0
At this point I can run mysql -h percona -u wordpress -p and connect, and WordPress works too.
Scenario:
I have 3 percona pods
jonathan@ubuntu:~/Projects/k8wp$ kubectl get pods
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 1 12h
etcd-1 1/1 Running 0 12h
etcd-2 1/1 Running 3 12h
etcd-3 1/1 Running 1 12h
percona-0 1/1 Running 0 8m
percona-1 1/1 Running 0 57m
percona-2 1/1 Running 0 57m
When I try to restart percona-0, it gets kicked out of the cluster on restarting; percona-0's gvwstate.dat file shows:
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-0/gvwstate.dat
my_uuid: b7571ff8-11f8-11e7-bd2d-8b50487e1523
#vwbeg
view_id: 3 b7571ff8-11f8-11e7-bd2d-8b50487e1523 3
bootstrap: 0
member: b7571ff8-11f8-11e7-bd2d-8b50487e1523 0
member: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 0
member: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a 0
#vwend
The other 2 pods in the cluster show:
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-1/gvwstate.dat
my_uuid: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a
#vwbeg
view_id: 3 bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 4
bootstrap: 0
member: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 0
member: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a 0
#vwend
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-2/gvwstate.dat
my_uuid: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a
#vwbeg
view_id: 3 bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 4
bootstrap: 0
member: bd05a643-11f8-11e7-9dab-1b4fc20eaf6a 0
member: c33d6a73-11f8-11e7-9e86-fe1cf3d3367a 0
#vwend
Here are what I think are the relevant errors from percona-0's startup:
2017-03-26T08:37:58.370605Z 0 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2017-03-26T08:37:58.372537Z 0 [Note] WSREP: gcomm: connecting to group 'wordpress-001', peer '10.52.0.26:'
2017-03-26T08:38:01.373345Z 0 [Note] WSREP: (b7571ff8, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.52.0.26:4567 timed out, no messages seen in PT3S
2017-03-26T08:38:01.373682Z 0 [Warning] WSREP: no nodes coming from prim view, prim not possible
2017-03-26T08:38:01.373750Z 0 [Note] WSREP: view(view_id(NON_PRIM,b7571ff8,5) memb {
b7571ff8,0
} joined {
} left {
} partitioned {
})
2017-03-26T08:38:01.373838Z 0 [Note] WSREP: gcomm: connected
2017-03-26T08:38:01.373872Z 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2017-03-26T08:38:01.373987Z 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2017-03-26T08:38:01.374012Z 0 [Note] WSREP: Opened channel 'wordpress-001'
2017-03-26T08:38:01.374108Z 0 [Note] WSREP: Waiting for SST to complete.
2017-03-26T08:38:01.374417Z 0 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
2017-03-26T08:38:01.374469Z 0 [Note] WSREP: Flow-control interval: [16, 16]
2017-03-26T08:38:01.374491Z 0 [Note] WSREP: Received NON-PRIMARY.
2017-03-26T08:38:01.374560Z 1 [Note] WSREP: New cluster view: global state: :-1, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version -1
The IP it's trying to connect to (10.52.0.26, in 2017-03-26T08:37:58.372537Z 0 [Note] WSREP: gcomm: connecting to group 'wordpress-001', peer '10.52.0.26:') is actually that pod's previous IP. Here's the listing of keys in etcd I made before deleting percona-0:
/ # etcdctl ls --recursive
/pxc-cluster
/pxc-cluster/wordpress
/pxc-cluster/queue
/pxc-cluster/queue/wordpress
/pxc-cluster/queue/wordpress-001
/pxc-cluster/wordpress-001
/pxc-cluster/wordpress-001/10.52.1.46
/pxc-cluster/wordpress-001/10.52.1.46/ipaddr
/pxc-cluster/wordpress-001/10.52.1.46/hostname
/pxc-cluster/wordpress-001/10.52.2.33
/pxc-cluster/wordpress-001/10.52.2.33/ipaddr
/pxc-cluster/wordpress-001/10.52.2.33/hostname
/pxc-cluster/wordpress-001/10.52.0.26
/pxc-cluster/wordpress-001/10.52.0.26/hostname
/pxc-cluster/wordpress-001/10.52.0.26/ipaddr
After kubectl delete pods/percona-0:
/ # etcdctl ls --recursive
/pxc-cluster
/pxc-cluster/queue
/pxc-cluster/queue/wordpress
/pxc-cluster/queue/wordpress-001
/pxc-cluster/wordpress-001
/pxc-cluster/wordpress-001/10.52.1.46
/pxc-cluster/wordpress-001/10.52.1.46/ipaddr
/pxc-cluster/wordpress-001/10.52.1.46/hostname
/pxc-cluster/wordpress-001/10.52.2.33
/pxc-cluster/wordpress-001/10.52.2.33/ipaddr
/pxc-cluster/wordpress-001/10.52.2.33/hostname
/pxc-cluster/wordpress
Also, during the restart percona-0 tried to register with etcd using:
{"action":"create","node":{"key":"/pxc-cluster/queue/wordpress-001/00000000000000009886","value":"10.52.0.27","expiration":"2017-03-26T08:38:57.980325718Z","ttl":60,"modifiedIndex":9886,"createdIndex":9886}}
{"action":"set","node":{"key":"/pxc-cluster/wordpress-001/10.52.0.27/ipaddr","value":"10.52.0.27","expiration":"2017-03-26T08:38:28.01814818Z","ttl":30,"modifiedIndex":9887,"createdIndex":9887}}
{"action":"set","node":{"key":"/pxc-cluster/wordpress-001/10.52.0.27/hostname","value":"percona-0","expiration":"2017-03-26T08:38:28.037188157Z","ttl":30,"modifiedIndex":9888,"createdIndex":9888}}
{"action":"update","node":{"key":"/pxc-cluster/wordpress-001/10.52.0.27","dir":true,"expiration":"2017-03-26T08:38:28.054726795Z","ttl":30,"modifiedIndex":9889,"createdIndex":9887},"prevNode":{"key":"/pxc-cluster/wordpress-001/10.52.0.27","dir":true,"modifiedIndex":9887,"createdIndex":9887}}
which doesn't work.
From the second member of the cluster percona-1:
2017-03-26T08:37:44.069583Z 0 [Note] WSREP: (bd05a643, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.52.0.26:4567
2017-03-26T08:37:45.069756Z 0 [Note] WSREP: (bd05a643, 'tcp://0.0.0.0:4567') reconnecting to b7571ff8 (tcp://10.52.0.26:4567), attempt 0
2017-03-26T08:37:48.570332Z 0 [Note] WSREP: (bd05a643, 'tcp://0.0.0.0:4567') connection to peer 00000000 with addr tcp://10.52.0.26:4567 timed out, no messages seen in PT3S
2017-03-26T08:37:49.605089Z 0 [Note] WSREP: evs::proto(bd05a643, GATHER, view_id(REG,b7571ff8,3)) suspecting node: b7571ff8
2017-03-26T08:37:49.605276Z 0 [Note] WSREP: evs::proto(bd05a643, GATHER, view_id(REG,b7571ff8,3)) suspected node without join message, declaring inactive
2017-03-26T08:37:50.104676Z 0 [Note] WSREP: declaring c33d6a73 at tcp://10.52.2.33:4567 stable
New Info:
I restarted percona-0 again, and this time it somehow came up! After a few tries I realised the pod needs to be restarted twice to come up, i.e. after deleting it the first time it comes up with the above errors, and after deleting it a second time it comes up okay and syncs with the other members. Could this be because it was the first pod in the cluster?
I've tested deleting the other pods but they all come back up okay.
The issue only lies with percona-0.
Also: taking down all the pods at once (the situation if my node were to crash) means the pods don't come back up at all! I suspect it's because no state is saved to grastate.dat, i.e. seqno remains -1 even though the global ID may change. The pods exit with mysqld shutdown and the following errors:
jonathan@ubuntu:~/Projects/k8wp$ kubectl logs percona-2 | grep ERROR
2017-03-26T11:20:25.795085Z 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
2017-03-26T11:20:25.795276Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
2017-03-26T11:20:25.795544Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1437: Failed to open channel 'wordpress-001' at 'gcomm://10.52.2.36': -110 (Connection timed out)
2017-03-26T11:20:25.795618Z 0 [ERROR] WSREP: gcs connect failed: Connection timed out
2017-03-26T11:20:25.795645Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.52.2.36) failed: 7
2017-03-26T11:20:25.795693Z 0 [ERROR] Aborting
jonathan@ubuntu:~/Projects/k8wp$ kubectl logs percona-1 | grep ERROR
2017-03-26T11:20:27.093780Z 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
2017-03-26T11:20:27.093977Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
2017-03-26T11:20:27.094145Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1437: Failed to open channel 'wordpress-001' at 'gcomm://10.52.1.49': -110 (Connection timed out)
2017-03-26T11:20:27.094200Z 0 [ERROR] WSREP: gcs connect failed: Connection timed out
2017-03-26T11:20:27.094227Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.52.1.49) failed: 7
2017-03-26T11:20:27.094247Z 0 [ERROR] Aborting
jonathan@ubuntu:~/Projects/k8wp$ kubectl logs percona-0 | grep ERROR
2017-03-26T11:20:52.040214Z 0 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
2017-03-26T11:20:52.040279Z 0 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
2017-03-26T11:20:52.040385Z 0 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1437: Failed to open channel 'wordpress-001' at 'gcomm://10.52.2.36': -110 (Connection timed out)
2017-03-26T11:20:52.040437Z 0 [ERROR] WSREP: gcs connect failed: Connection timed out
2017-03-26T11:20:52.040471Z 0 [ERROR] WSREP: wsrep::connect(gcomm://10.52.2.36) failed: 7
2017-03-26T11:20:52.040508Z 0 [ERROR] Aborting
grastate.dat on deleting all pods:
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-0/grastate.dat
# GALERA saved state
version: 2.1
uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac
seqno: -1
safe_to_bootstrap: 0
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-1/grastate.dat
# GALERA saved state
version: 2.1
uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac
seqno: -1
safe_to_bootstrap: 0
root@gluster-3:/mnt/gfs/gluster_vol-1/mysql# cat percona-2/grastate.dat
# GALERA saved state
version: 2.1
uuid: a91f70f2-11f8-11e7-8f3d-86c2e58790ac
seqno: -1
safe_to_bootstrap: 0
There is no gvwstate.dat.
Fixed it by changing the entrypoint in the container to the following script:
#!/bin/bash
sed -i \"s|safe_to_bootstrap.*:.*|safe_to_bootstrap:1|1\" /var/lib/mysql/grastate.dat;
/entrypoint.sh --wsrep-new-cluster;
Thanks to https://www.claudiokuenzler.com/blog/494/galera-cluster-mysql-not-starting-failed-to-open-channel-reach-primary#.WNesDiF97Qo
The issue is that when the 3 pods restart after a crash, they all hit the following error:
[ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
What that means (summarizing from the link) is that since all the pods are down, the first pod (the pods are managed by a StatefulSet) comes up and tries to reconnect to the cluster, but doesn't find any other pods it can connect to, so it goes down; the next pod comes up, tries the same thing, hits the same error, and goes down too, and so on.
The solution is for the first pod to start a new cluster when it comes up; then all the subsequent pods will come up and find a node to connect to. It still comes up with all its data.
So with Percona XtraDB, the Docker container's entrypoint looks like:
exec mysqld --user=mysql --wsrep_cluster_name=$CLUSTER_NAME --wsrep_cluster_address="gcomm://$cluster_join" --wsrep_sst_method=xtrabackup-v2 --wsrep_sst_auth="xtrabackup:$XTRABACKUP_PASSWORD" --log-error=${DATADIR}error.log $CMDARG
So all I have to do to get the setup running is pass the --wsrep-new-cluster argument mentioned earlier to /entrypoint.sh, like so:
/entrypoint.sh --wsrep-new-cluster
P.S.
I tried the above on its own at first, but I ran into an error stating that to force a new cluster and bootstrap from this node, I had to set safe_to_bootstrap from 0 to 1 in /var/lib/mysql/grastate.dat.
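As a quick follow-up check once the pods are back (using the same service name and credentials as in the question), you can confirm they re-formed a single primary component:
mysql -h percona -u wordpress -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'; SHOW STATUS LIKE 'wsrep_cluster_status';"
# Expect wsrep_cluster_size = 3 and wsrep_cluster_status = Primary once all three pods have rejoined.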

MariaDB/MySQL resource limit was exceeded

I'm trying to connect to MariaDB/MySQL installed on my CentOS 7 machine and get the following error when connecting with mysql -u root -p:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111)
I have tried connecting to it by specifying the IP address instead of localhost, but I get the same error.
When I try to get the MariaDB status, I get the following message (/bin/systemctl status mariadb.service):
mariadb.service - MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
Active: failed (Result: resources)
And starting it yields the following error (/bin/systemctl start mariadb.service):
Job for mariadb.service failed because a configured resource limit was exceeded. See "systemctl status mariadb.service" and "journalctl -xe" for details.
I also looked into the log located at /var/log/mariadb/mariadb.log:
160408 12:21:00 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended
160408 16:11:01 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
160408 16:11:01 [Note] /usr/libexec/mysqld (mysqld 5.5.47-MariaDB) starting as process 3054 ...
160408 16:11:02 InnoDB: The InnoDB memory heap is disabled
160408 16:11:02 InnoDB: Mutexes and rw_locks use GCC atomic builtins
160408 16:11:02 InnoDB: Compressed tables use zlib 1.2.7
160408 16:11:02 InnoDB: Using Linux native AIO
160408 16:11:02 InnoDB: Initializing buffer pool, size = 128.0M
160408 16:11:02 InnoDB: Completed initialization of buffer pool
160408 16:11:02 InnoDB: highest supported file format is Barracuda.
160408 16:11:04 InnoDB: Waiting for the background threads to start
160408 16:11:05 Percona XtraDB (http://www.percona.com) 5.5.46-MariaDB-37.6 started; log sequence number 54018416776
160408 16:11:06 [Note] Plugin 'FEEDBACK' is disabled.
160408 16:11:07 [Note] Server socket created on IP: '0.0.0.0'.
160408 16:11:07 [Note] Event Scheduler: Loaded 0 events
160408 16:11:07 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.47-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
160409 6:26:06 InnoDB: Error: Write to file ./ibdata1 failed at offset 9 615514112.
InnoDB: 1048576 bytes should have been written, only 585728 were written.
InnoDB: Operating system error number 28.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 28 means 'No space left on device'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
160409 6:26:06 InnoDB: Assertion failure in thread 140463216400128 in file os0file.c line 4377
Does anyone have any ideas on how to fix this error?
Thank you :)
> perror 28
OS error code 28: No space left on device
Need I say more?
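A minimal sketch of the usual follow-up (the datadir path is the default shown in the log above):
# Confirm which filesystem is full:
df -h /var/lib/mysql
df -i /var/lib/mysql    # inode exhaustion reports the same errno 28
# After freeing space (or growing the volume), start the service again:
systemctl start mariadb.service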

Additional Nodes on Galera MySQL failing to add

OK, so I have a second node that I am trying to add to a working Galera MySQL server as another node. Configs here:
Node A (working)
[server]
[mysqld]
[embedded]
[mysqld-5.5]
[mariadb]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=172.16.1.20
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="controller_cluster"
wsrep_cluster_address="gcomm://"
wsrep_sst_receive_addres="172.16.1.20"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_sst_auth=wsrep_sst:password
[mariadb-5.5]
Node B (won't start)
[server]
[mysqld]
skip-name-resolve
log = /var/log/mysqld.log
log-error = /var/log/mysqld.error.log
[embedded]
[mysqld-5.5]
[mariadb]
log = /var/log/mysqld.log
log-error = /var/log/mysqld.error.log
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=172.16.1.21
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="controller_cluster"
wsrep_cluster_address="gcomm://172.16.1.20"
wsrep_sst_receive_addres="172.16.1.21"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_sst_auth=wsrep_sst:password
[mariadb-5.5]
Node A Permissions on /var/lib/mysql
-rw-rw----. 1 mysql mysql 16384 Mar 4 23:54 aria_log.00000001
-rw-rw----. 1 mysql mysql 52 Mar 4 23:54 aria_log_control
-rw-r-----. 1 mysql root 283162 Mar 5 17:49 db01.deg.pod1.err
-rw-rw----. 1 mysql mysql 5 Mar 4 23:54 db01.deg.pod1.pid
-rw-------. 1 mysql mysql 134219040 Mar 5 17:48 galera.cache
-rw-rw----. 1 mysql mysql 104 Mar 5 17:48 grastate.dat
-rw-rw----. 1 mysql mysql 12582912 Mar 4 23:54 ibdata1
-rw-rw----. 1 mysql mysql 5242880 Mar 4 23:54 ib_logfile0
-rw-rw----. 1 mysql mysql 5242880 Mar 4 22:30 ib_logfile1
drwx------. 2 mysql mysql 4096 Mar 4 22:59 mysql
srwxrwxrwx. 1 mysql mysql 0 Mar 4 23:54 mysql.sock
drwx------. 2 root root 4096 Mar 4 22:59 performance_schema
-rw-r--r--. 1 mysql mysql 124 Mar 4 22:11 RPM_UPGRADE_HISTORY
-rw-r--r--. 1 mysql mysql 124 Mar 4 22:11 RPM_UPGRADE_MARKER-LAST
drwxr-xr-x. 2 mysql mysql 4096 Mar 4 22:11 test
drwx------. 2 mysql mysql 4096 Mar 5 17:35 tt
Node B Permissions on /var/lib/mysql
-rw-rw----. 1 mysql mysql 16384 Mar 5 17:49 aria_log.00000001
-rw-rw----. 1 mysql mysql 52 Mar 5 17:49 aria_log_control
-rw-r-----. 1 mysql root 0 Mar 5 17:49 db02.deg.pod1.err
-rw-------. 1 mysql mysql 134219040 Mar 5 17:49 galera.cache
-rw-rw----. 1 mysql mysql 104 Mar 5 17:49 grastate.dat
-rw-rw----. 1 mysql mysql 12582912 Mar 5 17:49 ibdata1
-rw-rw----. 1 mysql mysql 5242880 Mar 5 17:49 ib_logfile0
-rw-rw----. 1 mysql mysql 5242880 Mar 5 17:49 ib_logfile1
drwx------. 2 mysql mysql 4096 Mar 4 23:10 mysql
srwxrwxrwx 1 mysql mysql 0 Mar 5 17:49 mysql.sock
-rw-------. 1 root root 107 Mar 4 23:10 nohup.out
-rw-r--r-- 1 root root 269455 Mar 5 03:42 out.log
drwx------ 2 root root 4096 Mar 5 03:20 performance_schema
-rw-r--r--. 1 mysql mysql 124 Mar 4 22:14 RPM_UPGRADE_HISTORY
-rw-r--r--. 1 mysql mysql 124 Mar 4 22:14 RPM_UPGRADE_MARKER-LAST
drwxr-xr-x. 2 mysql mysql 4096 Mar 4 22:14 test
drwx------ 2 mysql mysql 4096 Mar 5 17:36 tt
-rw------- 1 root root 0 Mar 5 03:52 wsrep_recovery.hh4i9
Password is also the same on both ends for mysql user and for root user.
Failure log on Node B:
140305 17:49:39 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140305 17:49:39 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.A8nOzH' --pid-file='/var/lib/mysql/db02.deg.pod1-recover.pid'
nohup: ignoring input
140305 17:49:39 [Warning] The syntax '--log' is deprecated and will be removed in a future release. Please use '--general-log'/'--general-log-file' instead.
140305 17:49:39 [Warning] The syntax '--log' is deprecated and will be removed in a future release. Please use '--general-log'/'--general-log-file' instead.
140305 17:49:41 mysqld_safe WSREP: Recovered position bce8f04b-a41a-11e3-b010-4ba4a408598c:0
140305 17:49:41 [Warning] The syntax '--log' is deprecated and will be removed in a future release. Please use '--general-log'/'--general-log-file' instead.
140305 17:49:41 [Warning] The syntax '--log' is deprecated and will be removed in a future release. Please use '--general-log'/'--general-log-file' instead.
140305 17:49:41 [Note] WSREP: wsrep_start_position var submitted: 'bce8f04b-a41a-11e3-b010-4ba4a408598c:0'
140305 17:49:41 [Note] WSREP: Read nil XID from storage engines, skipping position init
140305 17:49:41 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
140305 17:49:41 [Note] WSREP: wsrep_load(): Galera 25.3.2(r170) by Codership Oy <info@codership.com> loaded successfully.
140305 17:49:41 [Note] WSREP: CRC-32C: using hardware acceleration.
140305 17:49:41 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
140305 17:49:41 [Note] WSREP: Passing config to GCS: base_host = 172.16.1.21; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 2147483647; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; repl.causal_read_timeout = PT30S; repl.commit_order = 3; repl.key_format = FLAT8; repl.proto_max = 5
140305 17:49:41 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
140305 17:49:41 [Note] WSREP: wsrep_sst_grab()
140305 17:49:41 [Note] WSREP: Start replication
140305 17:49:41 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
140305 17:49:41 [Note] WSREP: protonet asio version 0
140305 17:49:41 [Note] WSREP: Using CRC-32C (optimized) for message checksums.
140305 17:49:41 [Note] WSREP: backend: asio
140305 17:49:41 [Note] WSREP: GMCast version 0
140305 17:49:41 [Note] WSREP: (7036f7c8-a4b8-11e3-97c3-866382997e69, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
140305 17:49:41 [Note] WSREP: (7036f7c8-a4b8-11e3-97c3-866382997e69, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
140305 17:49:41 [Note] WSREP: EVS version 0
140305 17:49:41 [Note] WSREP: PC version 0
140305 17:49:41 [Note] WSREP: gcomm: connecting to group 'controller_cluster', peer '172.16.1.20:'
140305 17:49:41 [Note] WSREP: declaring 3f183cba-a422-11e3-b1c7-52f230abd39f stable
140305 17:49:41 [Note] WSREP: Node 3f183cba-a422-11e3-b1c7-52f230abd39f state prim
140305 17:49:41 [Note] WSREP: view(view_id(PRIM,3f183cba-a422-11e3-b1c7-52f230abd39f,48) memb {
3f183cba-a422-11e3-b1c7-52f230abd39f,0
7036f7c8-a4b8-11e3-97c3-866382997e69,0
} joined {
} left {
} partitioned {
})
140305 17:49:42 [Note] WSREP: gcomm: connected
140305 17:49:42 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
140305 17:49:42 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
140305 17:49:42 [Note] WSREP: Opened channel 'controller_cluster'
140305 17:49:42 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
140305 17:49:42 [Note] WSREP: Waiting for SST to complete.
140305 17:49:42 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
140305 17:49:42 [Note] WSREP: STATE EXCHANGE: sent state msg: 52478b99-a4b8-11e3-9283-8207651d087e
140305 17:49:42 [Note] WSREP: STATE EXCHANGE: got state msg: 52478b99-a4b8-11e3-9283-8207651d087e from 0 (db01.deg.pod1)
140305 17:49:42 [Note] WSREP: STATE EXCHANGE: got state msg: 52478b99-a4b8-11e3-9283-8207651d087e from 1 (db02.deg.pod1)
140305 17:49:42 [Note] WSREP: Quorum results:
version = 3,
component = PRIMARY,
conf_id = 47,
members = 1/2 (joined/total),
act_id = 1,
last_appl. = -1,
protocols = 0/5/2 (gcs/repl/appl),
group UUID = bce8f04b-a41a-11e3-b010-4ba4a408598c
140305 17:49:42 [Note] WSREP: Flow-control interval: [23, 23]
140305 17:49:42 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 1)
140305 17:49:42 [Note] WSREP: State transfer required:
Group state: bce8f04b-a41a-11e3-b010-4ba4a408598c:1
Local state: 00000000-0000-0000-0000-000000000000:-1
140305 17:49:42 [Note] WSREP: New cluster view: global state: bce8f04b-a41a-11e3-b010-4ba4a408598c:1, view# 48: Primary, number of nodes: 2, my index: 1, protocol version 2
140305 17:49:42 [Warning] WSREP: Gap in state sequence. Need state transfer.
140305 17:49:44 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '172.16.1.21' --auth 'wsrep_sst:password' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '13639''
140305 17:49:44 [Note] WSREP: Prepared SST request: rsync|172.16.1.21:4444/rsync_sst
140305 17:49:44 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140305 17:49:44 [Note] WSREP: REPL Protocols: 5 (3, 1)
140305 17:49:44 [Note] WSREP: Assign initial position for certification: 1, protocol version: 3
140305 17:49:44 [Note] WSREP: Service thread queue flushed.
140305 17:49:44 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (bce8f04b-a41a-11e3-b010-4ba4a408598c): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():445. IST will be unavailable.
140305 17:49:44 [Note] WSREP: Node 1.0 (db02.deg.pod1) requested state transfer from '*any*'. Selected 0.0 (db01.deg.pod1)(SYNCED) as donor.
140305 17:49:44 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 1)
140305 17:49:44 [Note] WSREP: Requesting state transfer: success, donor: 0
140305 17:49:45 [Warning] WSREP: 0.0 (db01.deg.pod1): State transfer to 1.0 (db02.deg.pod1) failed: -1 (Operation not permitted)
140305 17:49:45 [ERROR] WSREP: gcs/src/gcs_group.c:gcs_group_handle_join_msg():723: Will never receive state. Need to abort.
140305 17:49:45 [Note] WSREP: gcomm: terminating thread
140305 17:49:45 [Note] WSREP: gcomm: joining thread
140305 17:49:45 [Note] WSREP: gcomm: closing backend
140305 17:49:46 [Note] WSREP: view(view_id(NON_PRIM,3f183cba-a422-11e3-b1c7-52f230abd39f,48) memb {
7036f7c8-a4b8-11e3-97c3-866382997e69,0
} joined {
} left {
} partitioned {
3f183cba-a422-11e3-b1c7-52f230abd39f,0
})
140305 17:49:46 [Note] WSREP: view((empty))
140305 17:49:46 [Note] WSREP: gcomm: closed
140305 17:49:46 [Note] WSREP: /usr/sbin/mysqld: Terminated.
140305 17:49:46 mysqld_safe mysqld from pid file /var/lib/mysql/db02.deg.pod1.pid ended
WSREP_SST: [ERROR] Parent mysqld process (PID:13639) terminated unexpectedly. (20140305 17:49:47.601)
WSREP_SST: [INFO] Joiner cleanup. (20140305 17:49:47.603)
WSREP_SST: [INFO] Joiner cleanup done. (20140305 17:49:48.110)
One possible reason for nodes failing to add is that the existing nodes don't expect them. In your configuration files, you may just need to point each node at all the others.
For example, try using wsrep_cluster_address="gcomm://172.16.1.20,172.16.1.21" in both galera.cnf files.
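A minimal sketch of applying that (addresses taken from the configs above; Node A can keep running and will pick up the new address list on its next restart):
# In the [mariadb] section on both nodes:
wsrep_cluster_address="gcomm://172.16.1.20,172.16.1.21"
# Then start Node B normally:
service mysql start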