To recover from a power outage I need to start the Galera cluster when the system boots, and the only way I can do that is with the following:
service mysql start --wsrep-new-cluster
"service mysql start" will get launched on boot but will fail because it is the only one in the cluster. How do I get the cluster to start from boot and not fail if it is the only one there?
EDIT
It looks like I have to leave gcomm:// blank for it to start, but that is not the best solution, because if another server came online first then this would fail.
#galera settings
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://"
wsrep_sst_method=rsync
wsrep_provider_options="pc.bootstrap=true"
My solution is to edit the init script. This is the solution for Debian; my init script is located at /etc/init.d/mysql.
There I found this line:
/usr/bin/mysqld_safe "${@:2}" > /dev/null 2>&1 &
and added the --wsrep-new-cluster parameter:
/usr/bin/mysqld_safe --wsrep-new-cluster "${@:2}" > /dev/null 2>&1 &
and it works after boot.
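Note: depending on how recent your Galera packages are, they may ship a dedicated bootstrap wrapper (for example MariaDB 10.1+ provides galera_new_cluster), which bootstraps the first node without patching the init script; check whether your distribution includes it before relying on it:
galera_new_cluster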
I've been through this before. The following is the procedure I documented for my co-workers:
First we will determine the node with the latest change
On each node go to /var/lib/mysql and examine the grastate.dat file
We are looking for the node with the highest seqno and a uuid that is not all zeros
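For reference, grastate.dat is a small plain-text file that looks roughly like this (the uuid and seqno values below are only placeholders, and the exact fields vary slightly between Galera versions):
# GALERA saved state
version: 2.1
uuid:    47e02d4e-3fa8-11e4-a30e-3b7d32a0b3f2
seqno:   152
cert_index: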
On the node that captured the latest change, start up the cluster in bootstrap mode
service mysql bootstrap
Start up the other nodes via the usual startup command
service mysql start
Check that each node has the same DB list
mysql -u root -p
show databases;
On any node, run SHOW STATUS LIKE 'wsrep_%'; from the mysql client to check the status of the cluster, and ensure you see something like the following
wsrep_local_state_comment | Synced <-- cluster is synced
wsrep_incoming_addresses | 10.0.0.9:3306,10.0.0.11:3306,10.0.0.13:3306 <-- all nodes are providers
wsrep_cluster_size | 3 <-- cluster consists of 3 nodes
wsrep_ready | ON <-- good :)
Related
i have "CentOS 6" VPS and i wanted to start mysql service automatically at Startup of Server when it is restarted. so i used this command in putty
chkconfig --level 345 mysqld on
This command works and MySQL now starts automatically on every boot.
But how can I now undo this? If I want to start MySQL manually after each boot instead, what command should I use?
Also, where can I see the list of programs that run automatically on every startup?
Thanks
You can turn off auto-start with this command:
chkconfig --level 345 mysqld off
To see what is configured for auto-start, you can run:
chkconfig --list
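The output lists every service with its on/off state at each runlevel; the MySQL line will look something like this (illustrative only, your service name and states may differ):
mysqld          0:off   1:off   2:on    3:on    4:on    5:on    6:off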
See more info on chkconfig here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-services-chkconfig.html
snmptrapd doesn't log to MySQL
ISSUE: net-snmp does not log traps into the MySQL database. Installed on Ubuntu.
Net-snmp was configured with the following as per the tutorial - http://www.net-snmp.org/wiki/index.php/Net-Snmp_on_Ubuntu
I configured snmptrapd as mentioned on the following page:
http://www.net-snmp.org/wiki/index.php/Snmptrapd
My MySQL installation was running with no issues; however, it did not contain the mysql_config file, so I installed the package that provides it:
sudo apt-get install libmysqlclient-dev
MySQL continues to run with no issues.
The net-snmp configure script was run successfully with the following command:
./configure --with-defaults --with-mysql
The configure output showed that MySQL logging was enabled.
cat snmptrapd.conf ---------------
authCommunity log public
# maximum number of traps to queue before forced flush
# set to 1 to immediately write to the database
sqlMaxQueue 1
# seconds between periodic queue flushes
sqlSaveInterval 1
cat snmpd.conf - contains the following as its first lines -------------------
rwcommunity public localhost
linux#lin-850:~$ cat my.cnf
[snmptrapd]
user=root
password=qbcdfee
host=localhost
The following command runs well with appropriate output
snmpwalk -v 1 -c public localhost
db schema was made as per - /net-snmp-5.7.3/dist/schema-snmptrapd.sql
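To confirm whether any traps are actually reaching the database, a quick check like this can be run (assuming the schema from schema-snmptrapd.sql created the usual net_snmp database with a notifications table; adjust the names if your schema differs):
mysql -u root -p -e "SELECT COUNT(*) FROM net_snmp.notifications;"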
Where did I go wrong? Please help. Thanks in advance.
Regards,
George
I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos).
I installed Kubernetes according to the guide I found here and created the JSON for the pod using my images.
When I execute sudo ./kubecfg list /pods I get the following error:
F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused
Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods
EDIT: Update
Instead of running the commands myself, I integrated them into the Vagrantfile (as such).
This makes Kubernetes work fine. However, after some time my vagrant ssh connection gets closed. When I reconnect, any Kubernetes command I run results in the same error as above.
EDIT 2: Update
I managed to get it running again, though I am unsure whether it will keep running smoothly.
I had to re-execute the following commands:
sudo systemctl start etcd
sudo systemctl start download-kubernetes
sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy
I believe it is in fact the apiserver that needs restarting.
What is the source of this "timeout"? (Where are the logs I can look at for this?)
Kubernetes development is moving insanely fast right now, so this could be out of date by tomorrow. With that in mind, the Kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides, but there are a few tips I have learned doing this myself.
The first thing to note is that kubecfg is being deprecated in favor of kubectl. So for future reference, if you want to get info about a pod you would run something like:
./kubectl get pods
With kubectl you will also need to set an env variable so kubectl knows how to talk to the apiserver:
KUBERNETES_MASTER=http://IPADDRESS:8080
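For example (the address and port here are placeholders; point it at wherever your apiserver actually listens):
export KUBERNETES_MASTER=http://127.0.0.1:8080
./kubectl get pods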
The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit, you can look at what's going on by running:
journalctl -f -u kube-apiserver
from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with:
systemctl start kube-apiserver
On CoreOS you should look at the logs using journalctl.
For example, if you wish to see the etcd logs, which Kubernetes relies on for storing the state of its minions, run journalctl _COMM=etcd; similarly, journalctl _COMM=apiserver will show you the logs from the apiserver, one of the key components in Kubernetes.
You also get the last few log entries if you run systemctl status apiserver.
Building on errordeveloper's advice: my recent installation ran into a similar problem.
Using systemctl status apiserver and sudo systemctl start apiserver I managed to get the environment up and running again.
Does "service mysql start" start the mysql server or client?
I have done as much searching on this topic as I can, and the answers seem to be all over the place. Some sites state that "service mysql start" starts the server, while others state that one must use "service mysqld start", e.g.: http://theos.in/desktop-linux/tip-that-matters/how-do-i-restart-mysql-server/
To elaborate some more - my understanding is that "mysql" is the process that represents the client interface that connects to a mysql server (either remote or local) and "mysqld" is the process for the server. I would assume that "service mysql start" would only start the mysql client (not the server) and I can use this client to connect to any mysql server. And if I haven't used "service mysqld start", no server would have been started on the local host and therefore I can't use the mysql client to connect to any local mysql server. Is my understanding correct?
Also, I am using a Red Hat server.
Any clarifications and explanations most appreciated - Thanks!
The mysql client is never (AFAIK) run as a service, so
service mysql start
will start the mysql server. To be precise, this will start the service that is described by the /etc/init.d/mysql script.
Some distributions name their init script differently, for example mysqld, so you should just check your /etc/init.d/ directory.
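If you are not sure what the script is called on your system, a quick listing (plain shell, nothing distribution-specific) will show it:
ls /etc/init.d/ | grep -i mysql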
You can check what exactly that script is doing, even if you don't know bash.
The first few lines should contain a short description; in the case of my Ubuntu installation it is:
# cat /etc/init.d/mysql
#!/bin/bash
#
### BEGIN INIT INFO
# Provides: mysql
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $network $time
# Should-Stop: $network $time
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start and stop the mysql database server daemon
# Description: Controls the main MySQL database server daemon "mysqld"
# and its wrapper script "mysqld_safe".
### END INIT INFO
So as you can see, the mysql service script takes care of the mysqld daemon (process). This is in contrast to the mysql binary, found for example in /usr/bin, which is the client program and doesn't require any service to be running on your OS.
To sum everything up:
# service mysql start
will start the server (daemon/service); then you can connect to it with
$ mysql -u root -p
If you are using the command "service mysqld start", this will start the MySQL server on RHEL.
Actually the name "mysqld" depends on the init.d script's name. If it is named "mysql" in init.d, then "service mysql start" will work. This varies from distro to distro.
In my case:
systemctl restart rh-mysql57-mysqld.service
I am having a bit of an issue that popped up over the past weekend.
One of my servers was rebooted, and when it came back up it started the default instance of MySQL that is configured at installation. It uses port 3306 by default and blocks one of my instances from coming up.
How can I remove this default instance from booting and instead boot my instances in /etc/my.cnf ?
I think what is happening is that it goes to /var/lib/mysql and starts an instance based on some default configuration, as there is no my.cnf file located there; but I do find this code in the init.d script:
# Set some defaults
mysqld_pid_file_path=
if test -z "$basedir"
then
  basedir=/usr
  bindir=/usr/bin
  if test -z "$datadir"
  then
    datadir=/var/lib/mysql
  fi
But I don't see any my.cnf file at that location that it could be pulling configuration options from.
My data directories change per instance and they are all specified in /etc/my.cnf
I appreciate any effort spent helping with this issue.
Try this:
$ my_print_defaults --defaults-file=/etc/my.cnf mysqld
This will show you what it thinks datadir is set to, according to your config file.
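The output is one --option=value line per setting read from the [mysqld] group, so datadir is easy to spot. For example (the values here are only illustrative):
--datadir=/var/lib/mysql
--socket=/var/lib/mysql/mysql.sock
--port=3306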
I've seen config files get confused as people edit them; automated tools may also edit the config file and append new entries. Keep in mind that if the config file has more than one line defining datadir, the last such line in the file takes precedence.
If you have an instance of mysqld starting up automatically at boot time, I'd use chkconfig to find out when that's happening. For example, here's a command run on my VM:
$ chkconfig
...
mysql 0:off 1:off 2:on 3:on 4:on 5:on 6:off
...
The numbers 0 through 6 are runlevels, and "on" means that when the given runlevel starts, the /etc/init.d/mysql service script is run by init.
You can also use chkconfig to modify which runlevels a given service starts under, and even to disable the service at all runlevels, so that it won't start automatically ever.
$ chkconfig --level 2345 mysql off
Refer to man chkconfig for more uses.