Etcd: how to check that nodes can see each other - Zabbix

I have 3 etcd nodes on VMs (not k8s).
We hit a problem where the nodes were alive but could not see each other: the health check failed with a "connection timeout" error. Yet every single node reported an "alive" status, and Zabbix with the "Etcd by HTTP" template did not generate any alerts.
Is there a way to check node-to-node visibility and to monitor it with Zabbix?

Depending on the version you run, here's an example of how to do this with 3.5.2.
Command
ETCDCTL_API=3 ./bin/etcdctl endpoint status --cluster -w table --endpoints="member1.etcd:2384,member2.etcd:2384,member3.etcd:2384"
Output:
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT                 | ID               | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| http://member1.etcd:2384 | 17ef476d9d7fec5f | 3.5.2   | 1.5 MB  | false     | false      | 7         | 20033      | 20033              |        |
| http://member2.etcd:2384 | 31e0ca30ec3c9d94 | 3.5.2   | 1.5 MB  | false     | false      | 7         | 20033      | 20033              |        |
| http://member3.etcd:2384 | 721948abbb0522bd | 3.5.2   | 1.5 MB  | false     | false      | 7         | 20033      | 20033              |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
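For node-to-node visibility specifically, endpoint health goes a step further than endpoint status, since it issues an actual request to every member. A sketch using the same hostnames and port as above, assuming a 3.5-era etcdctl where the endpoint subcommands share these flags:
Command
ETCDCTL_API=3 ./bin/etcdctl endpoint health --cluster -w table --endpoints="member1.etcd:2384,member2.etcd:2384,member3.etcd:2384"
For Zabbix, one option is to wrap that check in a UserParameter on each VM and trigger on a non-zero value; the item key and binary path below are assumptions, not part of the stock "Etcd by HTTP" template:
UserParameter=etcd.cluster.unhealthy,ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint health --cluster --endpoints="member1.etcd:2384" 2>&1 | grep -c unhealthy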

Related

How to make a Galera node run independently

I have a 2-node MySQL Galera cluster. I want each node to keep running independently when they are disconnected from each other. But a node ends up in the Initialized state even after I enabled the option below:
SET GLOBAL wsrep_provider_options='pc.ignore_sb=TRUE';
This is what the node shows when it is disconnected from the network:
| wsrep_local_state_comment | Initialized        |
| wsrep_cert_index_size     | 14                 |
| wsrep_causal_reads        | 0                  |
| wsrep_cert_interval       | 0.086022           |
| wsrep_open_transactions   | 0                  |
| wsrep_open_connections    | 0                  |
| wsrep_incoming_addresses  | 10.201.127.76:3306 |
| wsrep_cluster_weight      | 0                  |
| wsrep_desync_count        | 0                  |
| wsrep_evs_delayed         |                    |
| wsrep_evs_evict_list      |                    |
Any idea how to make it run independently even if the network is down?
Thanks!
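Not an authoritative answer, but two things worth checking. pc.ignore_sb only applies while a node still belongs to a Primary Component; once it has dropped to the Initialized/non-Primary state shown above, the usual way to make the surviving node serve traffic again is to bootstrap a new Primary Component on it, accepting the split-brain risk that implies:
SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';
The safer design for a 2-node setup is usually to add a Galera arbitrator (garbd) as a third quorum vote, so that neither node has to ignore split-brain at all.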

Connection lost while building primary key. Fix or punt?

This question is about possible future improvements to a task I'm almost done with.
I have loaded a MySQL database with a subset of the Universal Medical Language System's Metathesaurus. I used a Java application called MetaMorphoSys, which generated a Bash wrapper, one SQL script for defining tables and importing data from text files, and another for indexing.
Loading and indexing a small UMLS subset (3.3 M rows in table MRSAT) goes to completion without errors. Loading a larger subset (39.4 M rows in MRSAT) is also successful, but then the indexing fails at this step after 1500 to 1800 seconds:
ALTER TABLE MRSAT ADD CONSTRAINT X_MRSAT_PK PRIMARY KEY BTREE (ATUI)
Error Code: 2013. Lost connection to MySQL server during query
My only use for the MySQL database is converting the relational rows to RDF triples. This conversion is performed by a single Python script, which does seem to access the MRSAT table but doesn't appear to use the ATUI column. At this point, I have extracted almost all of the data I want.
How can I tell if the absence of the primary key is detrimental to the performance of the RDF-generation queries?
I have increased some timeouts but haven't made all of the changes suggested in other answers to that question.
The documentation from the provider suggests MySQL 5.5 over 5.6 due to disk space usage issues. I am using 5.6 anyway (as I have done in the past) on a generous AWS x1e.2xlarge instance running Ubuntu 18.
The documentation provides tuning suggestions for 5.5, but I don't see equivalent setting names in the 5.6 documentation. I have applied these:
bulk_insert_buffer_size = 100M
join_buffer_size = 100M
myisam_sort_buffer_size = 200M
query_cache_limit = 3M
query_cache_size = 100M
read_buffer_size = 200M
sort_buffer_size = 500M
For key_buffer = 600M I set key_buffer_size = 600M. I didn't do anything for table_cache = 300.
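For what it's worth, table_cache was renamed in MySQL 5.1.3, so the 5.6 equivalent of that last suggestion would be:
table_open_cache = 300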
The primary key is supposed to be set on the alphanumeric column ATUI:
mysql> select * from MRSAT limit 9;
+----------+----------+----------+-----------+-------+---------+-------------+-------+--------+-----+------------+----------+------+
| CUI      | LUI      | SUI      | METAUI    | STYPE | CODE    | ATUI        | SATUI | ATN    | SAB | ATV        | SUPPRESS | CVF  |
+----------+----------+----------+-----------+-------+---------+-------------+-------+--------+-----+------------+----------+------+
| C0000005 | L0000005 | S0007492 | A26634265 | AUI   | D012711 | AT212456753 | NULL  | TH     | MSH | UNK (19XX) | N        | NULL |
| C0000005 | L0000005 | S0007492 | A26634265 | AUI   | D012711 | AT212480766 | NULL  | TERMUI | MSH | T037573    | N        | NULL |
| C0000005 | L0000005 | S0007492 | A26634265 | SCUI  | D012711 | AT60774257  | NULL  | RN     | MSH | 0          | N        | NULL |
| C0000005 | L0270109 | S0007491 | A26634266 | AUI   | D012711 | AT212327137 | NULL  | TERMUI | MSH | T037574    | N        | NULL |
| C0000005 | L0270109 | S0007491 | A26634266 | AUI   | D012711 | AT212456754 | NULL  | TH     | MSH | UNK (19XX) | N        | NULL |
| C0000005 | NULL     | NULL     | NULL      | CUI   | NULL    | AT00368929  | NULL  | DA     | MTH | 19900930   | N        | NULL |
| C0000005 | NULL     | NULL     | NULL      | CUI   | NULL    | AT01344283  | NULL  | MR     | MTH | 20020910   | N        | NULL |
| C0000005 | NULL     | NULL     | NULL      | CUI   | NULL    | AT02319637  | NULL  | ST     | MTH | R          | N        | NULL |
| C0000039 | L0000035 | S0007560 | A26674543 | AUI   | D015060 | AT212481191 | NULL  | TH     | MSH | UNK (19XX) | N        | NULL |
+----------+----------+----------+-----------+-------+---------+-------------+-------+--------+-----+------------+----------+------+
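One empirical way to answer the performance question: check which indexes actually got built before the failure, and run EXPLAIN on the queries the Python script issues against MRSAT. The lookup below is a hypothetical stand-in; substitute the script's real queries:
SHOW INDEX FROM MRSAT;
EXPLAIN SELECT * FROM MRSAT WHERE ATUI = 'AT212456753';
If the script never filters or joins on ATUI, it would do a full scan with or without the primary key, so the missing key mostly costs integrity guarantees (duplicate ATUI values going unnoticed) rather than query speed.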

MySQL table does not accept any update request

I am using Server version: 5.6.27-log MySQL Community Server (GPL) and I have a problem with a table.
I tried to update some fields with a GUI tool, but when I came back to the command line, the rows I had tried to update were not updated.
I tried to see if the table was locked using SHOW OPEN TABLES as stated in various other questions. But my table does not appear to be locked:
+----------+-------------------+--------+-------------+
| Database | Table             | In_use | Name_locked |
+----------+-------------------+--------+-------------+
| arcdev   | SCHEDULED_COMMAND | 0      | 0           |
+----------+-------------------+--------+-------------+
And as soon as I try to make an update like:
UPDATE SCHEDULED_COMMAND SET field = 1;
The server just keeps loading and nothing happens. I tried the same on other tables and everything worked just fine.
I also tried some DELETE requests and even a DROP TABLE, and nothing works so far...
What am I missing?
Thank you for your precious help!
EDIT: Here is the result of the SHOW PROCESSLIST command while a request is hanging:
+--------+----------+-----------------+--------+---------+------+----------+-------------------------------+
| Id     | User     | Host            | db     | Command | Time | State    | Info                          |
+--------+----------+-----------------+--------+---------+------+----------+-------------------------------+
| 282588 | rdsadmin | localhost:17966 | mysql  | Sleep   | 2    |          | NULL                          |
| 534575 | arc      | XXXXXX:49376    | arcdev | Sleep   | 17   |          | NULL                          |
| 534579 | arc      | XXXXXX:49443    | arcdev | Query   | 0    | init     | SHOW PROCESSLIST              |
| 534659 | arc      | XXXXXX:49836    | arcdev | Query   | 14   | updating | DELETE FROM SCHEDULED_COMMAND |
+--------+----------+-----------------+--------+---------+------+----------+-------------------------------+
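A guess grounded in the symptoms: the "updating" state combined with an apparently unlocked table points to InnoDB row locks held by another session's open transaction (for example, one the GUI tool started and never committed); SHOW OPEN TABLES only reports table-level locks. On 5.6 the blocking session can be identified via INFORMATION_SCHEMA, assuming the table is InnoDB:
SELECT r.trx_mysql_thread_id AS waiting_thread,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_started
  FROM information_schema.INNODB_LOCK_WAITS w
  JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id
  JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id;
KILL 534575; -- example id only; use the blocking_thread value the query actually returns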

Nodejs module for mysql - Connections in pool not showing in status

I want to use MySQL connection pooling in Node.js with felixge's node-mysql module.
It appears to work fine, except that I am not sure how to tell whether it's really working as intended. The pool connection params passed to mysql.createPool() are:
dbConnectionParams = {
    connectionLimit: 20,
    host: 'localhost',
    user: 'jdoe',
    password: 'somepasswd',
    database: 'myDB'
};
All queries using connections from the pool work fine. However, when I look at the actual connections using "show processlist", I see about 4 to 8 connections at any time, never 20. Should these not be listed too? Is there any other MySQL statement to see them, or are they not opened until we actually need them? If so, is there any way to force them open so that later, when a connection is actually needed, no time is lost in opening it?
I have read the documentation, which states that "connections are lazily created by the pool." I thought the whole idea of pooling was to circumvent this, so that connections are not opened on an as-needed (lazy?) basis but are pre-opened.
UPDATE: Here is the output. I am trying to correlate it with the connectionLimit parameter as new requests come in.
+----+------+-----------------+------+---------+------+-------+------------------+
| Id | User | Host            | db   | Command | Time | State | Info             |
+----+------+-----------------+------+---------+------+-------+------------------+
| 38 | jdoe | localhost:50716 | myDB | Sleep   | 669  |       | NULL             |
| 39 | jdoe | localhost       | myDB | Query   | 0    | NULL  | show processlist |
| 41 | jdoe | localhost:50718 | myDB | Sleep   | 4    |       | NULL             |
| 44 | jdoe | localhost:50721 | myDB | Sleep   | 4    |       | NULL             |
| 45 | jdoe | localhost:50722 | myDB | Sleep   | 4    |       | NULL             |
| 46 | jdoe | localhost:50723 | myDB | Sleep   | 5    |       | NULL             |
| 47 | jdoe | localhost:50724 | myDB | Sleep   | 4    |       | NULL             |
| 48 | jdoe | localhost:50725 | myDB | Sleep   | 4    |       | NULL             |
+----+------+-----------------+------+---------+------+-------+------------------+
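Two observations, hedged as general MySQL behavior rather than anything specific to this driver. The 4 to 8 sleeping connections are consistent with the documented lazy creation: connectionLimit: 20 is a cap, not a pre-allocated count, so the pool only holds as many connections as concurrency has so far required. To watch it grow toward the cap from the server side without counting processlist rows by hand:
SHOW GLOBAL STATUS LIKE 'Threads_connected';    -- connections currently open (pool plus everything else)
SHOW GLOBAL STATUS LIKE 'Max_used_connections'; -- high-water mark since the server started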

MySQL connections causing "server went away", nothing in processlist

I see a large number of connections, but when I issue a SHOW FULL PROCESSLIST, the output shows nothing close to that count. Are these connections orphans of some sort? I tried the FLUSH HOSTS command and the connections persist, even after a reboot of the server and a restart of the MySQL server.
I believe these connections are causing issues with making new connections to the database: users are getting a "server went away" error. How do I clear these?
See commands below:
mysql> show status like '%onn%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| Aborted_connects         | 5     |
| Connections              | 11743 |
| Max_used_connections     | 24    |
| Ssl_client_connects      | 0     |
| Ssl_connect_renegotiates | 0     |
| Ssl_finished_connects    | 0     |
| Threads_connected        | 6     |
+--------------------------+-------+
7 rows in set (0.00 sec)
mysql> show full processlist;
+-------+---------+-----------------+--------------------+---------+-------+-------+-----------------------+
| Id    | User    | Host            | db                 | Command | Time  | State | Info                  |
+-------+---------+-----------------+--------------------+---------+-------+-------+-----------------------+
| 4494  | rode    | localhost:43411 | NULL               | Sleep   | 11159 |       | NULL                  |
| 4506  | rode    | localhost:43423 | information_schema | Sleep   | 11159 |       | NULL                  |
| 4554  | rode    | localhost:43511 | performance_schema | Sleep   | 11112 |       | NULL                  |
| 11500 | ass     | serv:1243       | Home-Tech          | Sleep   | 0     |       | NULL                  |
| 11743 | root    | localhost       | NULL               | Query   | 0     | NULL  | show full processlist |
| 11744 | ass     | out:6070        | Home-Tech          | Sleep   | 4     |       | NULL                  |
| 11745 | ass     | out:6074        | HTGlobal           | Sleep   | 8     |       | NULL                  |
+-------+---------+-----------------+--------------------+---------+-------+-------+-----------------------+
The "MySQL server has gone away" error (error 2006) has two main causes:
The server timed out and closed the connection. To fix this, check that the wait_timeout MySQL variable in your my.cnf configuration file is large enough.
The server dropped an incorrect or too-large packet. If mysqld gets a packet that is too large or incorrect, it assumes that something has gone wrong with the client and closes the connection. To fix this, you can increase the maximum packet size limit max_allowed_packet in the my.cnf file, e.g. set max_allowed_packet = 128M, then sudo /etc/init.d/mysql restart.
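As for the counters above: Connections in SHOW STATUS is the cumulative number of connection attempts since the server started, not the number of currently open connections; Threads_connected (6 here) is the live count, so 11743 is not by itself a sign of orphaned connections. To check the two variables before changing them, a minimal sketch from the mysql client (128M expressed in bytes for the SET):
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 134217728; -- 128M; applies to connections opened after this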
Those are the two main ways to fix this. If the above changes don't help, there may be an issue with your Linux or Windows MySQL database server; you either need to increase RAM on the server or watch its processes.
Is this on a Windows or Linux box?