Docker containers slow after restart in Azure VM - mysql

I'm experiencing some weirdness with docker.
I have an Ubuntu server VM running in Windows Azure.
If I start a new docker container for e.g. Wordpress like so:
sudo docker run --name some-wordpress --link some-mysql:mysql -p 80:80 -d wordpress
everything works nicely, and I get a reasonably snappy site considering the low-end VM settings.
However, if I reboot the VM, and start the containers:
sudo docker start some-mysql
sudo docker start some-wordpress
The whole thing runs very slowly, the response time for a single page gets up to some 2-4 seconds.
Removing the containers and starting new ones makes everything run normally again.
What can cause this?

I suspect it has to do with disk usage. Does the MySQL container use local disk for storage? When you restart an existing Docker container, you reuse its existing volume, normally stored in a subfolder of /var/lib/docker, whereas a new container creates a new volume.
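To confirm where the container actually keeps its data, you can inspect its mounts and the size of Docker's storage directory (this assumes the container is named some-mysql, as in the question):

```shell
# Show which host paths/volumes the MySQL container mounts
docker inspect --format '{{ json .Mounts }}' some-mysql

# Rough disk usage of Docker's storage directory
sudo du -sh /var/lib/docker
```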
I find a few search results saying that Linux on Azure doesn't handle "soft" reboots well and that stuff doesn't get reconnected as it should. A "hard" reboot supposedly fixes that.
Not sure if it helps, my Docker experience is all from AWS.

Your containers are running on a disk stored in blob storage, with a maximum of 500 IOPS per disk. You can avoid hitting the disk (not very realistic with MySQL), add more disks and stripe them (RAID 0), or use SSD (the D series in Azure). Depending on your use case, you might also rebase Docker completely to use ephemeral storage (/dev/sdb) - here's how for CoreOS. BTW, there are some (non-Docker) MySQL performance suggestions on azure.com.
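One quick way to test whether blob-disk IOPS is the bottleneck is to put the MySQL data directory on the VM's ephemeral resource disk, which Azure Linux VMs typically mount at /mnt/resource. This is a benchmarking sketch only - data on the resource disk does not survive a deallocation, and the container name and password here are placeholders:

```shell
# WARNING: /mnt/resource is ephemeral on Azure; data is lost on deallocation.
sudo mkdir -p /mnt/resource/mysql-data

# Run MySQL with its datadir bind-mounted onto the ephemeral disk
sudo docker run --name fast-mysql \
  -v /mnt/resource/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret -d mysql
```

If the site stays snappy after a reboot with this setup, the persistent-disk IOPS limit is the likely culprit.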

Related

Can't SSH to Google Cloud VM After Installing MySQL

I'm trying to set up a small blog server on Google Cloud Platform using the free-tier f1-micro instance. I'm using Ubuntu 20.04 LTS as the base image (Ubuntu is the only Linux distro I'm at all familiar with), though I also tried 20.10. Everything works normally until I install MySQL. This is the guide that I'm following. After each failure, I deleted the VM and started with a fresh one.
These are the VM settings:
In addition to the steps listed in the guide, I also tried adding ssh to ufw, just in case.
sudo ufw allow ssh
sudo ufw enable
I also tried running this prior to installing MySQL, based on this article after failing the first couple of times.
sudo apt-get purge 'mysql*'
sudo apt-get autoremove
sudo apt-get autoclean
sudo apt-get dist-upgrade
Once I try installing mysql-server the ssh prompt hangs here:
I've tried reconnecting immediately and I've tried waiting overnight, but I always get stuck here when I try to connect again (it stays like this for a very long time before failing):
I experienced a similar issue with a MySQL instance in GCP. The first problem was related to the machine type of the VM instance: I had an f1-micro machine type on this VM instance and suddenly I wasn't able to access it over SSH. As this type of VM instance has only 0.6 GB of memory, it soon ran out of memory; I changed it to e2-medium (the default value) and that resolved my problem.
As the instance was out of memory, the services on it started to fail, which is why I couldn't access my instance.
On another occasion I ran into similar issues, but this time the problem was the disk: I only had 10 GB and a process was filling my disk. When a partition ran out of space, the instance started to fail again.
I simply resized my disk; now my instance disk is 20 GB and it is working fine.
Having said that, I suggest increasing your resources as needed to improve performance, because the problems you describe are a good indicator that your existing machine type is not a good fit for the workloads you run on that instance.
So I suggest changing the machine type to adjust your memory; you can follow the next steps for these tasks, and visit the following link to get further information about it.
Changing a machine type
1.- Go to the VM Instances page.
2.- In the Name column, click your instance.
From the instance details page, complete the following steps:
a) Click the Stop button to stop the instance, if you have not stopped it yet.
b) After the instance stops, click the Edit button at the top of the page.
c) Under the Machine configuration section, select the machine type you want to use, or create a custom machine type to increase only the Memory.
d) Save your changes and restart your VM instance.
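The same machine-type change can be scripted with gcloud; the instance must be stopped first (INSTANCE_NAME and ZONE are placeholders to fill in):

```shell
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud compute instances set-machine-type INSTANCE_NAME \
  --machine-type=e2-medium --zone=ZONE
gcloud compute instances start INSTANCE_NAME --zone=ZONE
```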
You can resize your disk following this guide or with the following command:
gcloud compute disks resize DISK_NAME --size DISK_SIZE
Or with the Console:
Go to the Disks page to see a list of zonal persistent disks in your project.
Click the name of the disk that you want to resize.
On the disk details page, click Edit.
In the Size field, enter the new size for your disk.
Click Save to apply your changes to the disk.
After you resize the disk, you must resize the file system so that the operating system can access the additional space.
Note: Do not resize boot disks beyond 2 TB because this is the limit.
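On a typical Ubuntu image with an ext4 root filesystem, growing the partition and filesystem after the disk resize looks roughly like this (device names are assumptions - confirm them with lsblk first; growpart comes from the cloud-guest-utils package):

```shell
lsblk                       # confirm the disk and partition names
sudo growpart /dev/sda 1    # grow partition 1 to fill the disk
sudo resize2fs /dev/sda1    # grow the ext4 filesystem to match
df -h /                     # verify the new size
```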
As per the installation guide you need a server with at least 1 GB of memory, and your selected VM instance has 614 MB. If I understand correctly, when the MySQL service is installed it consumes all available memory, which is likely why you got stuck at that point and could no longer SSH into the instance.
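A common workaround for installing MySQL on such a small instance is to add a swap file before running the install, so the package scripts do not exhaust the limited RAM. A minimal sketch (the 1 GB size is just an example):

```shell
# Create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h   # verify swap is active
```

To keep the swap file across reboots, it also needs an entry in /etc/fstab.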

Sysbench test Mysql but no disk read

When I use sysbench to test MySQL, I monitor IO with iotop and I find there is only DISK WRITE speed; the DISK READ speed is always 0. Then I run free -h and see that buffer/cache increases. Does that mean sysbench's test data is not written to disk but held in the buffer, with no automatic flush to disk?
Thank you so much!
Where is the MySQL running? I don't know iotop or what it measures, but even tiny sysbench runs generate enormous IO. It could be a user issue: perhaps MySQL is generating IO under a different user and not getting picked up.
# you can isolate the db into a container and run sysbench against this to see
# if/when/how much IO there is.
docker run --rm -it --name mdb105 --network host -e MYSQL_ALLOW_EMPTY_PASSWORD=on mariadb:10.5 --port=10306
# in another terminal run
docker stats
# now run sysbench, and you will see enormous IO
# you can get another shell in container named mdb105 by:
docker exec -it --user root mdb105 bash
# mariadb:10.5 is based on ubuntu:20.04
# you could maybe run iotop inside the container
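If iotop is only sampling instantaneous rates, bursty writes are easy to miss. Running it in accumulated mode across all processes may catch IO done under a different user (flags per the iotop man page: -a accumulate totals, -o show only processes doing IO, -P aggregate per process):

```shell
sudo iotop -aoP
```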
Update: I was able to replicate something like your zero-IO situation using a Raspberry Pi. Indeed, docker stats shows no IO while data is clearly being saved to disk. My initial reaction was that maybe some kernels/distros are missing certain facilities, but it doesn't look like a kernel/distro issue, because I did see IO when playing around with a different disk/filesystem, i.e. an external USB disk. I think it has more to do with the micro SD card and its controller/drivers, which don't support this kind of stats. And since your tps is very low, I suspect you are on something similar to a micro SD as well.
This likely won't happen on an EC2 instance.

Mysql server on kubernetes won't restart

I'm trying to restart my MySQL server. The server is in a Kubernetes/Fedora container. I have tried to use # /etc/init.d/mysqld restart and # systemctl restart mysqld. The problem is that there are no files in init.d.
When running # /etc/init.d/mysqld restart, bash says "No such file" - obviously, as there is no such file. When running # systemctl restart mysqld, it responds "bash: systemctl: command not found".
The MySQL server is running fine and I can log into it; however, I can't restart it. Please help.
To restart a server on Kubernetes you simply need to delete the pod with kubectl delete pod <id>. If you didn't create pod manually, but rather with a deployment, it will restart and come back online automatically.
Deleting a pod is a correct way of shutting down servers. First Kubernetes will send mysql a TERM signal that will politely ask it to shutdown. Then after some time (configurable) it will shoot it with KILL if it doesn't comply.
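A minimal sketch of both approaches (the pod and deployment names here are assumptions):

```shell
# Restart by deleting the pod; a Deployment will recreate it automatically
kubectl delete pod mysql-0

# With kubectl 1.15+, a Deployment's pods can be restarted directly
kubectl rollout restart deployment/mysql
```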
The MySQL server is running fine and I can log into it; however, I can't restart it.
You have two options, and both have their 'dangers' to address:
More likely, the container is started with either a CMD given in the Docker image or a command directive in the Kubernetes manifest that actually starts your mysql. In that case, regardless of how you terminate the running mysql instance in that container (during restart), you will also terminate the whole container. Kubernetes will then restart that container, or the whole pod.
Less likely, but possible: the container was started with some other 'main' command, and mysql is started by a separate script. In that case, inspecting the Dockerfile or Kubernetes manifest will give you the details of the start/stop procedure, and only in that case can you restart mysql without killing the container it runs in.
Dangers: data persistence. If you don't have proper data persistence in place, killing the running container (through either restart or refresh) will also destroy any ephemeral data with it.
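To find out which of the two cases applies, inspect what the container's main process actually is (the container and pod names below are placeholders):

```shell
# What command does the container start with?
docker inspect --format '{{ .Config.Cmd }}' <container-id>

# On Kubernetes, check the pod spec for a command/args override
kubectl get pod mysql-pod -o jsonpath='{.spec.containers[0].command}'

# Or simply look at what PID 1 inside the container is
kubectl exec mysql-pod -- ps -p 1 -o comm=
```

If PID 1 is mysqld itself, restarting mysql means restarting the container.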
Additional note: have you tried service mysql status and service mysql restart?

mysql in docker container hangs

Two MySQL (5.6.20) instances run in two Docker containers (Docker 1.8.32);
the master and slave are set up with semi-synchronous replication between them,
and users constantly run DML or DDL operations on the master.
After ten days or more, all the clients connected to the slave hang:
gdb -p/strace slave mysqld process hangs
pstack/perf top -p slave mysqld process show nothing
kill -9 will not kill the mysqld process
docker stop will not stop the docker container
what tools or methods can help locating the problem?
I had the same occur today. In my case, I was using Docker Compose to bring up MySQL and a range of consumers, using the current "latest" MySQL image from Docker Hub (5.7.16-1debian8).
I've launched a number of these, and within a week I've seen a couple of instances where MySQL has well over 100 threads, all the memory on the host is consumed, and the containers are hung. I can't stop anything; I can't even reboot. Only a power cycle of the VM recovers it.
I'll try to monitor. I suspect it depends highly on infrastructure load (a slow VM host results in slow queries backing up). The solution is more likely to be MySQL tuning combined with a Docker bug fix.
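One observation on the original symptoms: a process that ignores kill -9 is almost always stuck in uninterruptible sleep (D state), typically waiting on IO inside the kernel, which would also explain why docker stop hangs. A quick way to check from the host:

```shell
# List processes in uninterruptible sleep (state D) and what they wait on
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

# Kernel messages about tasks blocked for too long
sudo dmesg | grep -i "blocked for more than"
```

If mysqld shows up in D state, the problem is below MySQL - storage driver, filesystem, or the underlying disk - rather than in the server itself.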

Is MariaDB data lost after Docker setting change?

I've set up a basic MariaDB instance running in Docker - basically by starting the container using the Kitematic UI, changing the settings, and letting it run.
Today, I wanted to make a backup, so I used Kitematic to change the port so I could access it from a machine to make automated backups. After changing the port in Kitematic, it seems to have started a fresh MariaDB container (i.e. all my data seems to be removed).
Is that the expected behavior? And, more importantly, is there any way to recover the seemingly missing data, or has it been completely removed?
Also, if the data is actually removed, what is the preferred way to change settings—such as the exposed ports—without losing all changes? docker commit?
Notes:
running docker 1.12.0 beta for OS X
docker ps -a shows the database status as "Up for X minutes" when the original had been up for several days
Thanks in advance!
UPDATE:
It looks like the recommended procedure to retain data (without creating a volume or similar) is to:
commit changes (e.g. docker commit <containerid> <name/tag>)
take the container offline
update settings such as exposed port or whatever else
run the image with committed changes
...taken from this answer.
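On the recovery question: if the original container still exists (even stopped), the data may still be sitting in the anonymous volume it used. A sketch of finding and reusing it (the container names old-mariadb and mariadb-recovered, and VOLUME_NAME, are placeholders to fill in from your own output):

```shell
# Find the old container, including stopped ones
docker ps -a

# Locate the anonymous volume the old MariaDB container used
docker inspect --format '{{ json .Mounts }}' old-mariadb

# Start a new container reusing that volume by name
docker run --name mariadb-recovered \
  -v VOLUME_NAME:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb
```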
Yes, this is expected behavior. If you want your data to be persistent, you should mount a volume from the host (via the --volume option of docker run) or from another container, and store your database files on this volume.
docker run --volume /path/on/your/host/machine:/var/lib/mysql mariadb
Losing changes is actually a core feature of containers, so it cannot be avoided. This way you can be sure that every docker run gives you a fresh environment without any changes. If you want your changes to be permanent, you should make them in your image's Dockerfile, not in the container itself.
For more information please visit official documentation: https://docs.docker.com/engine/tutorials/dockervolumes/.
It looks like you didn't mount the container volume to a certain path. You can read about volumes and storing data in containers here.
You need to run the container with the volume option:
$ docker run --name some-mariadb -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:tag
where /my/own/datadir is a directory on the host machine