MariaDB Galera Cluster is not syncing

I have deployed MariaDB Galera Cluster before, and this problem has only appeared recently (I didn't have it before and I don't know why).
I have servers 1, 2 and 3. I executed an INSERT command on server 3; however, the tables on servers 1 and 2 remain unchanged.
The three servers are in different parts of the world. After the INSERT command, the state UUID remains the same.
Here is the status of server 1:
MariaDB [mysql]> show status like 'wsrep_%';
+------------------------------+----------------------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid | c4f9e2e2-fee1-11e5-8648-a22b867b5a6e |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 205 |
| wsrep_replicated | 170 |
| wsrep_replicated_bytes | 160481 |
| wsrep_repl_keys | 664 |
| wsrep_repl_keys_bytes | 9222 |
| wsrep_repl_data_bytes | 140379 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 46 |
| wsrep_received_bytes | 26150 |
| wsrep_local_commits | 170 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 1 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 1 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 7.482927 |
| wsrep_apply_oooe | 0.009756 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.009756 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 28 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.009756 |
| wsrep_incoming_addresses | server1:3306,server2:3306,server3:3306 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0.200155/0.201113/0.201752/0.000614937/4 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | c4f91b4f-fee1-11e5-8c4f-6e451c332f79 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | c4f9e2e2-fee1-11e5-8648-a22b867b5a6e |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 6 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 25.3.14(r3560) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+------------------------------+----------------------------------------------------------+
Status of server 2:
MariaDB [(none)]> show status like 'wsrep_%';
+------------------------------+----------------------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid | c4f9e2e2-fee1-11e5-8648-a22b867b5a6e |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 225 |
| wsrep_replicated | 35 |
| wsrep_replicated_bytes | 25700 |
| wsrep_repl_keys | 119 |
| wsrep_repl_keys_bytes | 1757 |
| wsrep_repl_data_bytes | 21703 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 187 |
| wsrep_received_bytes | 177793 |
| wsrep_local_commits | 35 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 1 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 4 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.032086 |
| wsrep_local_cached_downto | 9 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 7.193548 |
| wsrep_apply_oooe | 0.004630 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.004630 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 28 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.009217 |
| wsrep_incoming_addresses | server1:3306,server2:3306,server3:3306 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0.200138/0.201917/0.203696/0.00177914/2 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | d562e272-fee1-11e5-b2a2-d3a6b5579aab |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | c4f9e2e2-fee1-11e5-8648-a22b867b5a6e |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 25.3.14(r3560) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+------------------------------+----------------------------------------------------------+
57 rows in set (0.01 sec)
Status of server 3 (as you can see, the latency shows all zeros, but I don't know why):
MariaDB [(none)]> show status like 'wsrep_%';
+------------------------------+----------------------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid | c4f9e2e2-fee1-11e5-8648-a22b867b5a6e |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 245 |
| wsrep_replicated | 5 |
| wsrep_replicated_bytes | 4350 |
| wsrep_repl_keys | 11 |
| wsrep_repl_keys_bytes | 203 |
| wsrep_repl_data_bytes | 3827 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 226 |
| wsrep_received_bytes | 208559 |
| wsrep_local_commits | 1 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_local_cached_downto | 19 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 7.022026 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 28 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.008811 |
| wsrep_incoming_addresses | server1:3306,server2:3306,server3:3306 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | fd022144-fee1-11e5-a7a3-f23274fef9c3 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | c4f9e2e2-fee1-11e5-8648-a22b867b5a6e |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 2 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 25.3.14(r3560) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+------------------------------+----------------------------------------------------------+
57 rows in set (0.00 sec)
iptables on all three servers is set to ACCEPT all input and output traffic.
The log shows that all servers have joined and synced with the cluster.
Does anyone know why? Thanks.

I finally found that it was the application's fault: it used MyISAM as the storage engine, which caused the error. There is no error after changing back to InnoDB.

I suggest you check the table engines, because Galera Cluster supports the InnoDB engine, not MyISAM.
Here is an easy way to migrate a MySQL database with MyISAM tables to Galera and InnoDB:
1. Make sure your DB schema doesn't contain FULLTEXT indexes or any other constructs that are not supported by the InnoDB engine.
2. Dump the schema of your database.
3. In the dump, replace the string "MYISAM" with "INNODB".
4. Dump the data.
5. Prepare the DB user in the Galera cluster (the mysql.user table is not replicated across the cluster, so you have to create the DB user on each of your MariaDB servers).
6. Import the schema (now with the InnoDB engine).
7. Import the data.
8. Clean up the dump files.
Thanks to https://support.qualityunit.com/718375-Migrate-MySQL-Database-with-Myisam-engine-to-MariaDB-Galera-Cluster
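Steps 2 and 3 above can be sketched in shell. Note this is only a sketch: the database name `mydb` and the file names are placeholders, there is no real server here, so a sample dump line stands in for the `mysqldump --no-data mydb > schema.sql` output:

```shell
# Stand-in for the real schema dump (normally: mysqldump --no-data mydb > schema.sql)
printf 'CREATE TABLE t (id INT) ENGINE=MyISAM DEFAULT CHARSET=utf8;\n' > schema.sql

# Step 3: swap the storage engine throughout the dump before importing it.
sed 's/ENGINE=MyISAM/ENGINE=InnoDB/g' schema.sql > schema_innodb.sql
cat schema_innodb.sql
```

The rewritten `schema_innodb.sql` is what you would then import on one of the Galera nodes.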

Related

Explain performance schema count_star

Can you please explain the COUNT_STAR column of the MySQL 8
performance_schema table events_statements_summary_by_digest?
+------------+----------------+
| COUNT_STAR | SUM_TIMER_WAIT |
+------------+----------------+
| 21562 | 1422617337000 |
| 5134 | 231538954000 |
| 5134 | 42625981791000 |
| 184 | 6224664000 |
| 65 | 67034144000 |
| 48 | 80661283000 |
| 39 | 25631638000 |
| 32 | 1131746000 |
| 32 | 1206939000 |
| 32 | 5997462000 |
| 20 | 2349761000 |
| 8 | 284036000 |
| 8 | 2976134000 |
| 7 | 1254130000 |
| 6 | 1792915000 |
| 6 | 784386000 |
| 6 | 821875000 |
| 6 | 1102248000 |
| 6 | 1079227000 |
| 5 | 11042289000 |
| 5 | 4207319000 |
| 5 | 4012671000 |
| 5 | 6844824000 |
+------------+----------------+
I found this in the documentation, https://dev.mysql.com/doc/mysql-perfschema-excerpt/8.0/en/wait-summary-tables.html, but it's hard for me to understand:
COUNT_STAR
The number of summarized events. This value includes all events, whether timed or nontimed.
Thanks!
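For context: COUNT_STAR is the number of times statements matching that digest were executed, and SUM_TIMER_WAIT is the total time spent in those executions, measured in picoseconds (the unit used by Performance Schema timers). So the two columns together give you an average time per execution. Using the first row of the output above:

```python
# First row of the output above: 21562 executions of one statement digest.
count_star = 21562
sum_timer_wait_ps = 1_422_617_337_000  # total time in picoseconds

avg_ps = sum_timer_wait_ps / count_star  # average time per execution
avg_ms = avg_ps / 1e9                    # picoseconds -> milliseconds
print(round(avg_ms, 3))                  # ~0.066 ms per execution
```

In other words, that digest ran 21,562 times at roughly 66 microseconds each.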

Transpose rows to columns in SQL

I would like to transpose rows to columns in SQL.
My table looks like this:
+-------+--------+--------------+---------+--------------+---------+----------------+---------+---------+---------+---------+---------+
| ID | Desk | Reason1 | Amount1 | Reason2 | Amount2 | Reason3 | Amount3 | Reason4 | Amount4 | Reason5 | Amount5 |
+-------+--------+--------------+---------+--------------+---------+----------------+---------+---------+---------+---------+---------+
| 34850 | Desk1 | nktp | 2 | sectors | 1 | auc | 1 | thr | -13 | other | -3 |
| 34851 | Desk2 | TOC Reb | 5 | SG & HK ETF | 5 | | 0 | | 0 | | 0 |
| 34853 | Desk3 | China | -5 | HK | 0 | CNH | 0 | HK2 | 35 | | 0 |
| 34854 | Desk4 | ETFs | 2 | KSTA Opening | 6 | KSTA Rebalance | 14 | | 0 | | 0 |
| 34855 | Desk5 | BTC | 5 | | 0 | | 0 | | 0 | | 0 |
| 34856 | Desk6 | Sales | 10 | Delta | 5 | | 0 | | 0 | | 0 |
| 34857 | Desk7 | ES | 1 | HSI | 0 | | 0 | | 0 | | 0 |
| 34858 | Desk8 | OTC | 10 | SPREADS | 10 | | 0 | | 0 | | 0 |
| 34859 | Desk9 | MES/ZTW | 10 | O/N Spreads | -20 | | 0 | | 0 | | 0 |
| 34860 | Desk10 | CBBC TENCENT | 4 | CBBC HSI | 1 | | 0 | | 0 | | 0 |
+-------+--------+--------------+---------+--------------+---------+----------------+---------+---------+---------+---------+---------+
How do I transpose the table in SQL so that the reasons are the rows and the desks are the columns?
Output wanted:
+----------------+---------+--------+-------------+--------+-------+--------+
| | Desk1 | Amount | Desk2 | Amount | Desk3 | Amount |
+----------------+---------+--------+-------------+--------+-------+--------+
| Reason1 | nktp | 2 | TOC Reb | 5 | China | -5 |
| Reason2 | sectors | 1 | SG & HK ETF | 5 | HK | 0 |
| Reason3 | auc | 1 | | | CNH | 0 |
| Reason4 | thr | -13 | | | HK2 | 35 |
| Reason5 | other | -3 | | | | |
| General_Remark | | | | | | |
+----------------+---------+--------+-------------+--------+-------+--------+
A normalized design might look something like this:
reasons
+-----------+---------+----------------+--------+
| reason_id | desk_id | reason | amount |
+-----------+---------+----------------+--------+
| 1 | 34850 | nktp | 2 |
| 2 | 34851 | TOC Reb | 5 |
| 3 | 34853 | China | -5 |
| 4 | 34854 | ETFs | 2 |
| 5 | 34855 | BTC | 5 |
| 6 | 34856 | Sales | 10 |
| 7 | 34857 | ES | 1 |
| 8 | 34858 | OTC | 10 |
| 9 | 34859 | MES/ZTW | 10 |
| 10 | 34860 | CBBC TENCENT | 4 |
| 11 | 34850 | sectors | 1 |
| 12 | 34851 | SG & HK ETF | 5 |
| 13 | 34853 | HK | 0 |
| 14 | 34854 | KSTA Opening | 6 |
| 15 | 34856 | Delta | 5 |
| 16 | 34857 | HSI | 0 |
| 17 | 34858 | SPREADS | 10 |
| 18 | 34859 | O/N Spreads | -20 |
| 19 | 34860 | CBBC HSI | 1 |
| 20 | 34850 | auc | 1 |
| 21 | 34853 | CNH | 0 |
| 22 | 34854 | KSTA Rebalance | 14 |
| 23 | 34850 | thr | -13 |
| 24 | 34853 | HK2 | 35 |
| 25 | 34850 | other | -3 |
+-----------+---------+----------------+--------+
desks
+---------+------------+
| desk_id | Desk_name |
+---------+------------+
| 34850 | Desk1 |
| 34851 | Desk2 |
| 34853 | Desk3 |
| 34854 | Desk4 |
| 34855 | Desk5 |
| 34856 | Desk6 |
| 34857 | Desk7 |
| 34858 | Desk8 |
| 34859 | Desk9 |
| 34860 | Desk10 |
+---------+------------+
If it were me, I'd start from here.
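Getting from the wide table to that normalized design is a one-time unpivot, which in MySQL can be done with UNION ALL. A sketch, where `desk_reasons_wide` is a placeholder for the name of the original wide table, and empty reason slots are skipped:

```sql
-- One-time unpivot from the wide layout into the normalized reasons table.
-- 'desk_reasons_wide' is a placeholder for the original table's name.
INSERT INTO reasons (desk_id, reason, amount)
SELECT ID, Reason1, Amount1 FROM desk_reasons_wide WHERE Reason1 <> ''
UNION ALL
SELECT ID, Reason2, Amount2 FROM desk_reasons_wide WHERE Reason2 <> ''
UNION ALL
SELECT ID, Reason3, Amount3 FROM desk_reasons_wide WHERE Reason3 <> ''
UNION ALL
SELECT ID, Reason4, Amount4 FROM desk_reasons_wide WHERE Reason4 <> ''
UNION ALL
SELECT ID, Reason5, Amount5 FROM desk_reasons_wide WHERE Reason5 <> '';
```

Once the data is in this shape, the desired desk-per-column report becomes an ordinary pivot over `reasons` joined to `desks`.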

MySQL can't load character into SET

I have this file:
1.nothing
2.o,s,f,d
3.f,d
4.o,s
5.s,f,d
6.s
7.nothing
8.s,f,d
9.o,d
10.s,f
And a table:
describe delete_me;
+------------+----------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------+----------------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| privileges | set('o','s','f','d') | YES | | NULL | |
+------------+----------------------+------+-----+---------+-------+
When I try to:
LOAD DATA INFILE 'privileges.txt' INTO TABLE delete_me FIELDS TERMINATED BY '.';
I get this:
+------+------------+
| id | privileges |
+------+------------+
| 0 | |
| 2 | f,s,o |
| 3 | f |
| 4 | o |
| 5 | f,s |
| 6 | |
| 7 | |
| 8 | f,s |
| 9 | o |
| 10 | s |
| 11 | o |
| 12 | |
| 13 | o |
| 14 | o |
| 15 | s |
| 16 | |
| 17 | o |
| 18 | o |
| 19 | s,o |
| 20 | f |
+------+------------+
The letter d just disappears. Why?
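A likely cause (an assumption, since the raw bytes of the file aren't shown): the file has Windows-style \r\n line endings. LOAD DATA defaults to LINES TERMINATED BY '\n', so the last field on each line keeps a trailing \r; 'd\r' is not a valid member of the SET, so MySQL discards it (SHOW WARNINGS after the load should confirm this). If that's the case, naming the line terminator explicitly should fix it:

```sql
-- If the file was created on Windows, tell LOAD DATA about the \r\n endings:
LOAD DATA INFILE 'privileges.txt'
INTO TABLE delete_me
FIELDS TERMINATED BY '.'
LINES TERMINATED BY '\r\n';
```

The blank privileges on the "nothing" rows are expected either way, since "nothing" matches no SET member.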

Why do MySQL Group Replication slave nodes have a high delay behind the write node?

MySQL 5.7.17 MGR is deployed in single-primary mode, with 3 nodes all on one machine and the same configuration.
We then ran an insert test on the primary node and observed that the slave nodes lag far behind the write node; even after the primary node finishes the insert test, the slave nodes' data keeps increasing!
Why do MySQL Group Replication slave nodes have such a high delay behind the write node?
Here is the my.cnf:
[mysqld]
datadir=/dba/mysql/data/s1
basedir=/dba/mysql/mysql-5.7/
port=24801
socket=/dba/mysql/data/s1/s1.sock
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "127.0.0.1:24901"
loose-group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
loose-group_replication_bootstrap_group= off
loose-group_replication_single_primary_mode=true
loose-group_replication_enforce_update_everywhere_checks=false
slave_parallel_type=LOGICAL_CLOCK
slave_preserve_commit_order=1
slave_parallel_workers=4
and the MGR config :
mysql> show variables like '%group_replication%';
+----------------------------------------------------+-------------------------------------------------+
| Variable_name | Value |
+----------------------------------------------------+-------------------------------------------------+
| group_replication_allow_local_disjoint_gtids_join | OFF |
| group_replication_allow_local_lower_version_join | OFF |
| group_replication_auto_increment_increment | 7 |
| group_replication_bootstrap_group | OFF |
| group_replication_components_stop_timeout | 31536000 |
| group_replication_compression_threshold | 1000000 |
| group_replication_enforce_update_everywhere_checks | OFF |
| group_replication_flow_control_applier_threshold | 25000 |
| group_replication_flow_control_certifier_threshold | 25000 |
| group_replication_flow_control_mode | QUOTA |
| group_replication_force_members | |
| group_replication_group_name | aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa |
| group_replication_group_seeds | 127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903 |
| group_replication_gtid_assignment_block_size | 1000000 |
| group_replication_ip_whitelist | AUTOMATIC |
| group_replication_local_address | 127.0.0.1:24901 |
| group_replication_poll_spin_loops | 0 |
| group_replication_recovery_complete_at | TRANSACTIONS_APPLIED |
| group_replication_recovery_reconnect_interval | 60 |
| group_replication_recovery_retry_count | 10 |
| group_replication_recovery_ssl_ca | |
| group_replication_recovery_ssl_capath | |
| group_replication_recovery_ssl_cert | |
| group_replication_recovery_ssl_cipher | |
| group_replication_recovery_ssl_crl | |
| group_replication_recovery_ssl_crlpath | |
| group_replication_recovery_ssl_key | |
| group_replication_recovery_ssl_verify_server_cert | OFF |
| group_replication_recovery_use_ssl | OFF |
| group_replication_single_primary_mode | ON |
| group_replication_ssl_mode | DISABLED |
| group_replication_start_on_boot | OFF |
+----------------------------------------------------+-------------------------------------------------+
32 rows in set (0.01 sec)
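One thing visible in the output above: group_replication_flow_control_mode is QUOTA with applier and certifier thresholds of 25000, meaning a secondary's applier queue can grow to 25000 transactions before the writer is throttled, which shows up as exactly this kind of lag after a write burst. A sketch of engaging flow control sooner (1000 is an illustrative value, not a recommendation; tune it against your own workload):

```sql
-- Throttle the writer sooner by engaging flow control at a smaller queue size.
-- Both variables are dynamic in MySQL 5.7; 1000 is only an illustrative value.
SET GLOBAL group_replication_flow_control_applier_threshold = 1000;
SET GLOBAL group_replication_flow_control_certifier_threshold = 1000;
```

Lower thresholds trade peak write throughput on the primary for a smaller, faster-draining queue on the secondaries.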

How to copy the current rows (i.e. the most recent rows) into a new database?

I want to insert the three most recent rows into a new database.
+------------+----------+---------+--------------+--------------+-----------+
| time | userid | groupid | jobs_running | jobs_pending | job_limit |
+------------+----------+---------+--------------+--------------+-----------+
| 1476274005 | achandra | | 4 | 0 | 0 |
| 1476274005 | akawle | | 52 | 48 | 0 |
| 1476274005 | apatil2 | | 20 | 6 | 0 |
| 1476274793 | snagnoor | | 17 | 67 | 0 |
| 1476274793 | snatara2 | | 0 | 54 | 0 |
| 1476274793 | sthykkoo | | 9 | 476 | 0 |
+------------+----------+---------+--------------+--------------+-----------+
Expected Output:
| 1476274793 | snagnoor | | 17 | 67 | 0 |
| 1476274793 | snatara2 | | 0 | 54 | 0 |
| 1476274793 | sthykkoo | | 9 | 476 | 0 |
I think this query will work.
Here db2 is the new database and db1 the old one from which you want to copy the table:
insert into db2.`new_table_name` select * from db1.old_table order by `time` desc limit 3
P.S. This code is untested.
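The INSERT ... SELECT ... ORDER BY ... LIMIT pattern itself is easy to sanity-check. A small sketch using Python's built-in SQLite (table and column names shortened; the MySQL statement above follows the same shape):

```python
import sqlite3

# In-memory database standing in for db1/db2; only three of the
# question's columns are kept to keep the sketch short.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_jobs (time INT, userid TEXT, jobs_running INT)")
con.executemany(
    "INSERT INTO old_jobs VALUES (?, ?, ?)",
    [(1476274005, "achandra", 4), (1476274005, "akawle", 52),
     (1476274005, "apatil2", 20), (1476274793, "snagnoor", 17),
     (1476274793, "snatara2", 0), (1476274793, "sthykkoo", 9)],
)
con.execute("CREATE TABLE new_jobs (time INT, userid TEXT, jobs_running INT)")

# Copy only the three most recent rows into the new table:
con.execute(
    "INSERT INTO new_jobs SELECT * FROM old_jobs ORDER BY time DESC LIMIT 3"
)
rows = con.execute("SELECT userid FROM new_jobs ORDER BY userid").fetchall()
print(rows)  # the three rows sharing the newest timestamp, 1476274793
```

One caveat for the real data: rows share timestamps, so ORDER BY `time` DESC LIMIT 3 only picks a well-defined set if exactly three rows carry the newest timestamp (as in the question's expected output); otherwise add a tie-breaker column to the ORDER BY.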