If all nodes in a 3-node Percona Cluster have shut down (whether gracefully or by crash), this blog says that once the nodes can reach each other again, the cluster can recover automatically. However, starting the nodes in such a situation seems to be a difficult task.
So is there a reliable and operable method for cluster recovery in this situation?
Examine the grastate.dat file on all 3 nodes. Which node has the highest sequence number? You should bootstrap that node. Wait for it to come online. Then start node2. It should IST from the bootstrap node. Then start node3.
Golden rule: You must always bootstrap the very first node of any cluster. Bootstrapping does not erase data; it only starts a new cluster.
Depending on the version, you may need to set safe_to_bootstrap in the grastate file to 1 manually.
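As a rough sketch of those steps on a Percona XtraDB Cluster node managed by systemd (the datadir path and the bootstrap service name are assumptions and may differ on your setup):

# Inspect the Galera state on each node and compare the seqno values
cat /var/lib/mysql/grastate.dat

# On the node with the highest seqno, mark it safe to bootstrap if required
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat

# Bootstrap that node first, then start the remaining nodes normally
systemctl start mysql@bootstrap.service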
Another thing you can try, to check which node is the most advanced:
Run the command below on every node and check which node reports the largest committed transaction value.
mysqld_safe --wsrep-recover
Start the node with the highest committed value first, then the second and third.
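As a sketch (the error log path is an assumption and varies by distribution), the recovered position is written to the error log and can be compared across nodes:

# Run recovery, then look for the reported position in the error log
mysqld_safe --wsrep-recover
grep "Recovered position" /var/log/mysqld.log
# Example output (illustrative values): WSREP: Recovered position: <cluster-uuid>:1523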
What is the difference between an unresponsive node and node failover in Couchbase? While monitoring my cluster I can see one node in the unresponsive state. What is the exact difference between the two?
An "unresponsive" node is one that is not responding to requests. A node could be unresponsive because of a network or hardware problem, or an internal server error.
Failover is what you do to an unresponsive node to forcibly remove it from the cluster.
Further reading:
Remove a node and rebalance (a graceful way to remove a node that IS responsive)
Fail a Node Over and Rebalance (a less graceful way that works with unresponsive nodes and can potentially result in loss of data that hasn't yet been replicated from the failed node)
Recovery (after you fail over a node, if it starts behaving again you can add it back into the cluster)
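As a hedged illustration (cluster host, node host, and credentials are placeholders), a hard failover of an unresponsive node followed by a rebalance looks roughly like this with couchbase-cli:

# Forcibly fail over the unresponsive node, then rebalance the cluster
couchbase-cli failover -c cluster-host:8091 -u Administrator -p password --server-failover bad-node:8091 --hard
couchbase-cli rebalance -c cluster-host:8091 -u Administrator -p password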
So I have a mysql Docker up and running with 3 log files (general, error, slow-query log) enabled, that are written to /var/log/mysql/ (path inside the mysql container), which actually is a directory on the docker host (named 'log') and mounted into the container as a volume specified in the docker-compose.yml.
We chose this approach because we didn't want the general and slow-query logs combined on stdout, and we prefer a daily rotation of the 3 separate log files, since it seems more convenient to us to find a certain query that was issued, let's say, 4 days ago.
Since the mysql Docker image (afaik) doesn't come with logrotate and/or cron, we decided to have another service in the docker-compose.yml named logrotator, which starts cron in its entrypoint, which in turn regularly runs logrotate with a given logrotate.conf. The 'log' directory is also mounted into the logrotator container, so it can do its rotation job on the mysql log files.
Now it seems like mysql needs a "mysqladmin flush-logs" after each rotation to start writing to a new file descriptor, but the logrotator container cannot issue this command inside the mysql container.
To make it short(er): I'm sure there are better ways to accomplish separate log files with log rotation. Just how? Any ideas are much appreciated. Thanks.
Update:
Since we're using mysql 5.7 for now, and hence probably cannot use the solution proposed by #buaacss (which might absolutely work otherwise), we decided to stay with a "cron" container. Additionally we installed docker.io inside the cron container and mounted the docker host's /var/run/docker.sock into the cron container. This allows us to use "docker exec" to issue commands (in this case 'mysqladmin flush-logs') from the cron container to be executed in the mysql container. Problem solved.
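As a rough sketch of that setup (the container name "mysql", the log path, and the password variable are assumptions), the logrotate.conf used by the cron container can trigger the flush through the mounted Docker socket in its postrotate step:

/var/log/mysql/*.log {
    daily
    rotate 7
    missingok
    compress
    delaycompress
    postrotate
        docker exec mysql mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" flush-logs
    endscript
}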
You can indeed use SIGHUP instead of the flush-logs statement, based on the doc:
https://dev.mysql.com/doc/refman/5.6/en/log-file-maintenance.html
But it may have some undesired effects, e.g. writing a large status report to the error log.
So, as I mentioned in the comments, a light version of SIGHUP was developed, SIGUSR1, to accomplish the functions below:
FR1: When SIGUSR1 is sent to the server, it must flush the error log.
FR2: When SIGUSR1 is sent to the server, it must flush the general log.
FR3: When SIGUSR1 is sent to the server, it must flush the slow query log.
FR4: SIGUSR1 must not send MySQL status report.
Currently when SIGHUP is sent to the server a large report of information is printed to stdout, the status report.
FR5: The server must not fail when SIGUSR1 is sent, even though slow log is not enabled.
FR6: The server must not fail when SIGUSR1 is sent, even though slow log output is set to a table (log_output).
FR7: The server must not fail when SIGUSR1 is sent, even though general log is set to OFF.
NFR1: SIGALRM must be indistinguishable from how SIGUSR1 behaved before.
Unfortunately, this signal is only available in MySQL 8.0 or above.
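As a sketch (the pid file path is an assumption and depends on your configuration), flushing the logs by signal looks like this:

# MySQL 8.0+: flush error/general/slow logs without the status report
kill -USR1 "$(cat /var/run/mysqld/mysqld.pid)"
# MySQL 5.7 and earlier: SIGHUP or mysqladmin flush-logs instead
kill -HUP "$(cat /var/run/mysqld/mysqld.pid)"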
I have been experiencing this NDB Cluster error for a while now.
It started after the cluster was shut down for 2 days due to a production shutdown.
Any insights will help. TIA
Error 899 means that the rowid is already allocated. It is a problem due to the distributed nature of NDB Cluster. Normally it is a temporary problem that goes away after a few microseconds.
If it stays, then probably some bug has caused the primary replica and the backup replica to become inconsistent. If that is the scenario, the best method to get back to normal operation is to do the following:
1) Take a backup.
2) Perform an initial node restart of one of the data nodes (presuming that you have 2 data nodes).
The problem should hopefully go away after this. The backup is simply to ensure that you have the latest possible backup if something more happens.
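A minimal sketch of those two steps with the ndb_mgm management client (the node id 2 is an assumption; use the id reported by SHOW on your cluster):

ndb_mgm -e "START BACKUP"     # 1) take a backup first
ndb_mgm -e "2 RESTART -i"     # 2) initial restart of one data node
ndb_mgm -e "SHOW"             # wait until the node reports started again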
I followed this kubernetes example to create a wordpress and mysql setup with persistent data.
I followed everything from the tutorial, from creation of the disks to deployment, and, on the first try, the deletion as well.
1st try
https://s3-ap-southeast-2.amazonaws.com/dorward/2017/04/git-cmd_2017-04-03_08-25-33.png
Problem: the persistent volumes do not bind to the persistent volume claims. Both the pod and the volume claim remain in Pending status, and the volume status remains Released as well.
I had to delete everything as described in the example and try again. This time I mounted the created volumes on an instance in the cluster, formatted the disks using an ext4 filesystem, then unmounted the disks.
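Roughly, the manual formatting for the second try looked like this (instance name, disk name, zone, and device path are placeholders):

gcloud compute instances attach-disk my-instance --disk wordpress-disk --zone us-central1-a
sudo mkfs.ext4 -F /dev/sdb     # device name depends on how the disk attaches
gcloud compute instances detach-disk my-instance --disk wordpress-disk --zone us-central1-a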
2nd try
https://s3-ap-southeast-2.amazonaws.com/dorward/2017/04/git-cmd_2017-04-03_08-26-21.png
Problem: after formatting the volumes, they are now bound to the claims, yay! Unfortunately the mysql pod doesn't run, with status CrashLoopBackOff. Eventually the wordpress pod crashed as well.
https://s3-ap-southeast-2.amazonaws.com/dorward/2017/04/git-cmd_2017-04-03_08-27-22.png
Did anyone else experience this? I'm wondering if I did something wrong, or if something has changed between the write-up of the example and now that made the example break. How do I go about fixing it?
Any help is appreciated.
Get logs for pods:
kubectl logs pod-name
If the logs indicate the pods are not even starting (CrashLoopBackOff), investigate the events in k8s:
kubectl get events
The event log indicates the node is running out of memory (OOM):
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
1m 7d 1555 gke-hostgeniuscom-au-default-pool-xxxh Node Warning SystemOOM {kubelet gke-hostgeniuscom-au-default-pool-xxxxxf-qmjh} System OOM encountered
Trying a larger instance size should solve the issue.
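As a hedged follow-up check (the second command requires metrics-server or heapster to be available in the cluster), you can compare what the nodes have against what the pods are using:

kubectl describe nodes          # look at the "Allocated resources" section per node
kubectl top nodes               # current memory/CPU usage per node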
I have a cluster with 3 ndbd nodes.
How many mysqld nodes do I need? What is a good rule?
One mysqld for 3 ndbd?
It depends on too many factors to give a short answer.
But I would put a mysqld node on each application server, so the application always accesses mysqld through localhost. That way it will be as easy to scale as your application is.
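As a hedged sketch of that layout (host names are placeholders; each SQL node also needs a matching [mysqld] or [api] slot in the cluster's config.ini), the my.cnf on each application server would point its local mysqld at the management node:

[mysqld]
ndbcluster
ndb-connectstring=mgmt-host

[mysql_cluster]
ndb-connectstring=mgmt-host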