MySQL server on Kubernetes won't restart - mysql

I'm trying to restart my MySQL server. The server runs in a container on Kubernetes (Fedora-based image). I have tried # /etc/init.d/mysqld restart and # systemctl restart mysqld. The problem is that there are no files in /etc/init.d.
When running # /etc/init.d/mysqld restart, bash says "No such file", which makes sense since there is no such file. When running # systemctl restart mysqld, it responds "bash: systemctl: Command not found".
The MySQL server is running fine and I can log into it; I just can't restart it. Please help.

To restart a server on Kubernetes you simply delete the pod with kubectl delete pod <id>. If you created the pod via a deployment rather than manually, it will be restarted and come back online automatically.
Deleting a pod is the correct way of shutting down servers. Kubernetes first sends MySQL a TERM signal, politely asking it to shut down; then after some (configurable) time it shoots it with KILL if it doesn't comply.

The MySQL server is running fine and I can log into it; I just can't restart it.
You have two options, and both have their 'dangers' to address:
More likely: the container is started with either a CMD given in the Docker image or a command directive in the Kubernetes manifest that actually starts your MySQL. In that case, however you manage to terminate (during restart) the running MySQL instance in that container, you will also terminate the whole container. Kubernetes will then restart that container, or the whole pod.
Less likely, but possible: the container was started with some other 'main' command, and MySQL is started as part of a separate script. In that case, inspecting the Dockerfile or the Kubernetes manifest will give you details about the start/stop procedure, and only in that case can you restart MySQL without actually killing the container it is running in.
Dangers: data persistence. If you don't have proper data persistence in place, killing the running container (whether through restart or refresh) will also destroy any ephemeral data with it.
Additional note: have you tried service mysql status and service mysql restart?

Related

Is MariaDB data lost after Docker setting change?

I've set up a basic MariaDB instance running in Docker - basically by starting the container using the Kitematic UI, changing the settings, and letting it run.
Today, I wanted to make a backup, so I used Kitematic to change the port so I could access it from a machine to make automated backups. After changing the port in Kitematic, it seems to have started a fresh MariaDB container (i.e. all my data seems to be removed).
Is that the expected behavior? And, more importantly, is there any way to recover the seemingly missing data, or has it been completely removed?
Also, if the data is actually removed, what is the preferred way to change settings—such as the exposed ports—without losing all changes? docker commit?
Notes:
running docker 1.12.0 beta for OS X
docker ps -a shows the database status as "Up for X minutes", whereas the original had been up for several days
Thanks in advance!
UPDATE:
It looks like the recommended procedure to retain data (without creating a volume or similar) is to:
commit changes (e.g. docker commit <containerid> <name/tag>)
take the container offline
update settings such as exposed port or whatever else
run the image with committed changes
...taken from this answer.
Yes, this is expected behavior. If you want your data to be persistent, you should mount a volume from the host (via the --volume option for docker run) or from another container, and store your database files on that volume.
docker run --volume /path/on/your/host/machine:/var/lib/mysql mariadb
Losing changes is actually a core feature of containers, so it cannot be avoided. This way you can be sure that every docker run gives you a fresh environment without any leftover changes. If you want your changes to be permanent, you should make them in your image's Dockerfile, not in the container itself.
For more information please visit official documentation: https://docs.docker.com/engine/tutorials/dockervolumes/.
It looks like you didn't mount the container's data path onto a host volume. You can read about volumes and storing data in containers here
You need to run the container with the volume option:
$ docker run --name some-mariadb -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:tag
where /my/own/datadir is a directory on the host machine
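With the data on a host volume, you can also take backups without touching the container's lifecycle at all; a sketch, assuming the container above and that MYSQL_ROOT_PASSWORD is set in its environment:

```shell
# Dump all databases from inside the running container to a file on the host
docker exec some-mariadb sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql
```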

Where is the mysqld event log?

When I run mysqld directly, it prints a whole lot of information about what it's doing.
As I understand it, this is not the correct way to run a MySQL server; you should use service mysql start instead (on older servers at least).
Any search for the mysqld log turns up query logs; I want to know what the program is doing as it starts. (I'm trying to set up MariaDB 10.1.14 with Galera replication.)
I want to be able to run service mysql start and then watch what's happening in the background.

Docker containers slow after restart in Azure VM

I'm experiencing some weirdness with Docker.
I have an Ubuntu server VM running in Windows Azure.
If I start a new Docker container, e.g. for WordPress, like so:
sudo docker run --name some-wordpress --link some-mysql:mysql -p 80:80 -d wordpress
everything works nicely; I get a reasonably snappy site considering the low-end VM settings.
However, if I reboot the VM, and start the containers:
sudo docker start some-mysql
sudo docker start some-wordpress
The whole thing runs very slowly, the response time for a single page gets up to some 2-4 seconds.
Removing the containers and starting new ones makes everything run normally again.
What can cause this?
I suspect it has to do with disk usage. Does the MySQL container use local disk for storage? When you restart an existing Docker container, you reuse the existing volume, normally stored in a subfolder of /var/lib/docker, whereas a new container creates a new volume.
I found a few search results saying that Linux on Azure doesn't handle "soft" reboots well and that things don't get reconnected as they should. A "hard" reboot supposedly fixes that.
Not sure if it helps, my Docker experience is all from AWS.
Your containers are running on a disk stored in blob storage with a maximum of 500 IOPS per disk. You can avoid hitting the disk (not very realistic with MySQL), add more disks and stripe across them (RAID0), or use SSDs (D-series in Azure). Depending on your use case, you might also rebase Docker completely onto ephemeral storage (/dev/sdb) - here's how for CoreOS. BTW, there are some (non-Docker) MySQL performance suggestions on azure.com.

Uwsgi, MySQL restart on reboot in a wrong order

I am trying to set up a Django website on EC2; basically I want MySQL and uWSGI to start after a reboot.
In order to make MySQL start on reboot, I did:
sudo cp /opt/mysql/server-5.6/support-files/mysql.server /etc/init.d/
sudo update-rc.d mysql.server defaults
In order to make Uwsgi start on reboot, I created a file /etc/init/uwsgi.conf:
description "ubuntu uwsgi instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --ini /home/ubuntu/uwsgi.ini
However, the problem is that MySQL needs to start first. Right now it looks like uWSGI starts first and tries to connect to MySQL, which fails, and MySQL never gets started.
Could anyone help me solve this issue?
Thanks in advance
When your computer starts up, it doesn't run the init.d scripts directly. Instead, depending on what's called the "runlevel", it runs the scripts in /etc/rcN.d (where N is the runlevel). You can determine the current runlevel with the runlevel command; mine returns 2 in normal operation. That means that when the computer started up, it ran the scripts in /etc/rc2.d. The contents of rc2.d are just symlinks to scripts in /etc/init.d, named according to whether they should be started or stopped, and the order they should be run.
Use the runlevel command to find out what runlevel your computer is at (probably 2), then look in /etc/rc2.d for a link named something like uwsgi, which will be a symlink to /etc/init.d/uwsgi, and rename it to zzz999 - or whatever it takes to get it to sort after the other entries - so that it runs last.
There's more information about init.d scripts and runlevels at https://www.linux.com/news/enterprise/systems-management/8116-an-introduction-to-services-runlevels-and-rcd-scripts
Even if you start MySQL before uWSGI, you're not assured it will be available by the time uWSGI is handling requests.
At startup MySQL runs checks on the database, loads InnoDB indexes, recovers from the transaction log, or may even hang.
You shouldn't rely on start order alone.
Instead, add application logic that correctly handles unavailability of the database, i.e. retrying or showing an error page asking the user to retry.
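One way to combine both ideas is a small wrapper that waits for MySQL before launching uWSGI; a sketch with placeholder host, timeout, and paths:

```shell
#!/bin/sh
# Poll MySQL until it accepts connections; give up after ~60 seconds.
wait_for_mysql() {
    tries=0
    until mysqladmin ping -h 127.0.0.1 --silent; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && { echo "MySQL did not come up" >&2; return 1; }
        sleep 2
    done
    echo "MySQL is up"
}

# Usage from your uwsgi start script (path is a placeholder):
#   wait_for_mysql && exec uwsgi --ini /home/ubuntu/uwsgi.ini
```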

Switch EBS volume between instances

Here is the situation:
On instance A, I have an EBS volume where my MySQL data is located, created based on this http://qugstart.com/blog/amazon-web-services/how-to-set-up-db-server-on-amazon-ec2-with-data-stored-on-ebs-drive-formatted-with-xfs/
I want to move the DB to a separate instance B, so I have created the instance and installed MySQL already.
Both instances and the volume are in the same region.
My question is: if I detach the EBS volume from instance A and attach it to instance B, will it work automatically, or do I have to take any precautions?
If you are following the instructions from the linked blog, you don't have to shut down the instance to detach the EBS volume. You only need to shut down your EC2 instance if the EBS volume is the root volume, i.e. /dev/sda1, /dev/sda, or /dev/xvda.
Having said that, you do need to stop the MySQL service on instance A before detaching the volume:
service mysqld stop
Then you can bring up instance B, attach the EBS volume where your data is, and mount it (assuming you are attaching it as /dev/sdh or /dev/xvdh):
echo "/dev/sdh /vol xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir -m 000 /vol
sudo mount /vol
You can move EBS volumes, but before you detach it from the original server, you should stop the server.
When you attach the volume to the new server, look in the EC2 console to see where it is attached (i.e. /dev/xvdb). Then all you need to do is mount it somewhere. Your MySQL server's data directory should point to that mount location:
http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_datadir
I have been able to easily detach EBS volumes from one instance and reattach them to another running instance with no problems at all.
I would certainly make sure you first terminate any programs that have open files on, or are otherwise using, that volume before detaching.
Not very familiar with MySQL, but I assume that when you attach the new volume you will need to let MySQL know about the database and where it is. In SQL Server you would do this by 'attaching' it to a running SQL Server instance - MySQL probably has a similar process.