Sysbench test of MySQL shows no disk reads - mysql

When I use sysbench to test MySQL, I monitor I/O with iotop, and I see only a DISK WRITE speed; the DISK READ speed is always 0. Then I run free -h and see that buffer/cache increases. Does this mean that sysbench's test data is not written to disk but only to the buffer, and is never flushed to disk?
Thank you so much!

Where is the MySQL server running? I don't know exactly what iotop is measuring, but even tiny sysbench runs generate enormous I/O. It could be a user issue: perhaps mysqld is generating I/O under a different user and isn't getting picked up.
# you can isolate the db into a container and run sysbench against this to see
# if/when/how much IO there is.
docker run --rm -it --name mdb105 --network host -e MYSQL_ALLOW_EMPTY_PASSWORD=on mariadb:10.5 --port=10306
# in another terminal run
docker stats
# now run sysbench, and you will see enormous IO
# you can get another shell in container named mdb105 by:
docker exec -it --user root mdb105 bash
# mariadb:10.5 is based on ubuntu:20.04
# you could maybe run iotop inside the container
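To try iotop inside the container, a minimal sketch (assuming the Ubuntu-based mariadb:10.5 image above; iotop needs root and may require the container to be started with --privileged to read per-process I/O statistics):
# install and run iotop in the root shell opened with docker exec above
apt-get update && apt-get install -y iotop
# -o: only show processes currently doing I/O, -P: show processes, not threads
iotop -o -P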
Update: I was able to replicate something like your zero-I/O situation using a Raspberry Pi. Indeed, docker stats shows no I/O while data is clearly being saved to disk. My initial reaction was that maybe some kernels/distros are missing certain facilities, but it doesn't look like a kernel/distro issue, because I did see I/O when playing around with the disk/filesystem, i.e. an external USB disk. I think it has more to do with the micro SD card and its controller/drivers, which don't support this kind of statistics. And since your TPS is very low, I suspect you are on something similar to a micro SD card as well.
This likely won't happen on an EC2 instance.
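As for the buffer/cache part of the question: a quick check, assuming a Linux host, is to look at how much dirty page-cache data is waiting to be written back and then force a flush:
# data sitting in the page cache that the kernel has not yet written to disk
grep -E '^(Dirty|Writeback):' /proc/meminfo
# force all outstanding writes to disk
sync
# Dirty should now drop towards zero
grep -E '^(Dirty|Writeback):' /proc/meminfo
If Dirty shrinks again on its own between sysbench runs, the kernel is flushing normally and the data does reach disk.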

Related

How to open and work with a very large .SQL file that was generated in a dump?

I have a very large .SQL file, of 90 GB
It was generated with a dump on a server:
mysqldump -u root -p siafi > /home/user_1/siafi.sql
I downloaded this .SQL file to a computer with Ubuntu 16.04 and MySQL Community Server (8.0.16). It has 8 GB of RAM.
So I did these steps in Terminal:
# Access
/usr/bin/mysql -u root -p
# I create a database with the same name to receive the .SQL information
CREATE DATABASE siafi;
# I establish the privileges. User reinaldo
GRANT ALL PRIVILEGES ON siafi.* TO 'reinaldo'@'localhost';
# Enable the changes
FLUSH PRIVILEGES;
# Then I open another terminal and type command for the created database to receive the data from the .SQL file
mysql --user=reinaldo --password="type_here" --database=siafi < /home/reinaldo/Documentos/Code/test/siafi.sql
I ran these same commands with other .SQL files, only smaller ones, with a maximum of 2 GB, and it worked normally.
But this 90 GB file has been processing for over twelve hours without stopping. I do not know if it's working.
Please, is there any more efficient way to do this? Maybe splitting the .SQL file?
Break the file up into smaller chunks and process them separately.
You're probably hitting the logging high-water mark and mysql is trying to roll everything back, and that is a slow process.
Split the file into approximately 1 GB chunks, breaking on whole lines. Perhaps using:
split -l 1000000 bigfile.sql part.
Then run them in order using your current command (a loop like the one sketched below works).
You'll have to experiment with split to get the size right, and you haven't said what your OS is, so split implementations and options vary. split --number=100 may work for you.
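A minimal sketch of running the chunks in order, reusing the part. prefix from the split command and the credentials from the question:
# load each chunk in lexical order; stop at the first failure
for f in part.*; do
    echo "loading $f"
    mysql --user=reinaldo --password="type_here" --database=siafi < "$f" || break
done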
Two things that might be helpful:
Use pv to see how much of the .sql file has already been read (sketched after this list). This can give you a progress bar which at least tells you it's not stuck.
Log into MySQL and use SHOW PROCESSLIST to see what MySQL is currently executing. If it's still running, just let it run to completion.
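For example, assuming pv is installed and reusing the path and credentials from the question:
# pv shows progress and throughput while piping the dump into mysql
pv /home/reinaldo/Documentos/Code/test/siafi.sql | mysql --user=reinaldo --password="type_here" --database=siafi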
If it is turned on, it might really help to turn off the binlog for the duration of the restore. Another thing that may or may not be helpful: if you have the choice, try to use the fastest disks available. You may have this kind of option if you're running on hosters like Amazon. You're going to really feel the pain if you're (for example) doing this on a standard EC2 host.
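One way to skip binary logging for just the restore session, without touching the server configuration, is the sql_log_bin session variable (a sketch; it needs SUPER privileges, so it is shown with the root account from the question):
# --init-command runs before the dump is replayed, so the import itself
# is not also written to the binary log
mysql --user=root --password --database=siafi \
      --init-command="SET SESSION sql_log_bin=0" < /home/reinaldo/Documentos/Code/test/siafi.sql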
You can use third-party tools like
https://philiplb.de/sqldumpsplitter3/
Very easy to use; you can define the size, location, etc.
Or use this one as well:
It is the same idea, but the interface is a bit more colorful and easier to use.
https://sqldumpsplitter.net/

MySQL in Docker container hangs

Two MySQL (5.6.20) instances run in two Docker containers (1.8.32);
the master and slave are set up with semi-synchronous replication between them,
and users constantly run DML or DDL statements on the master.
After ten days or more, all clients that connect to the slave hang:
gdb -p / strace on the slave mysqld process hangs
pstack / perf top -p on the slave mysqld process show nothing
kill -9 will not kill the mysqld process
docker stop will not stop the Docker container
What tools or methods can help locate the problem?
I had the same thing occur today. In my case I was using docker-compose to bring up MySQL and a range of consumers, using the current "latest" MySQL image from Docker Hub (5.7.16-1debian8).
I've launched a number of these, and within a week I've seen a couple of instances where MySQL has well over 100 threads, all the memory on the host is consumed, and the containers are hung. I can't stop anything; I can't even reboot. Only a power cycle of the VM recovers it.
I'll try to monitor. I suspect it depends highly on infrastructure load (a slow VM host results in slow queries backing up). The fix is more likely to be MySQL tuning than a Docker bug.

Docker containers, memory consumption and logs

I've been trying Docker for a few days. I'm using a Drupal image (docker4drupal) which basically contains MySQL (MariaDB), PHP (php-fpm) and NGINX.
Almost every time I do a database import into the database container on a VPS with 512 MB RAM, the container with MariaDB dies and messages like "MySQL server has gone away" appear. This does not happen when my VPS has 1 GB or 2 GB of RAM.
So this seems to be a memory problem, but I need the evidence! I don't know where the log is that tells me my container died because there wasn't enough memory.
I checked the MariaDB logs but I can't find anything; its log only says something like "the database was not normally shutdown", then "it's starting", and then "waiting for connections".
So, independently of my MariaDB config (which is not appropriate for a 512 MB VPS): where can I explicitly find the reason why the container with the database server died?
Any help is welcome.
Thanks a lot.
PS: I run the mysql CLI from the PHP container; that's why, even though the database container dies, I can still see output indicating that something went wrong.
It could be the kernel terminating the most memory-consuming process on an out-of-memory event. There may be entries about it in the host system log. The lack of such entries doesn't guarantee it wasn't the kernel that killed your DB, though.
The exact filename depends on the host system configuration (meaning the VPS, in your case). It could be /var/log/{system.log, error.log, ...}.
Since a Docker container is not an isolated VM but a wrapper over kernel-driven cgroups, kernel events are handled by the host system's logging daemon.
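A quick way to check for the OOM killer, assuming shell access to the host; the container name mariadb is illustrative:
# look for OOM-killer messages in the kernel log on the host
dmesg | grep -iE 'out of memory|killed process'
# ask Docker directly whether the container was OOM-killed
docker inspect --format '{{.State.OOMKilled}}' mariadb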
Hi Beto, we can see the logs in Docker; check out the command below.
The docker logs --follow command will continue streaming the new output from the container’s STDOUT and STDERR.
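For example (the container name mariadb is illustrative; the official MariaDB images send their error log to STDERR, so it shows up here):
# stream everything the container writes to STDOUT and STDERR
docker logs --follow mariadb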
That is probably too much to cram into a minuscule 512 MB. Do one of the following:
Increase RAM available. ("And this does not happen when my VPS has 1GB")
Split applications across multiple tiny Dockers.
Tune each app to use less RAM; see the sketch after this list. (Didn't I answer your question recently?)
How many tables do you have? Hopefully not a lot, as in https://dba.stackexchange.com/questions/60888/mysql-runs-out-of-memory-when-importing-innodb-database
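A sketch of what tuning the database side down could look like for the MariaDB container; the container name, password and values are illustrative starting points for a 512 MB host, not recommendations:
# pass low-memory settings straight to mysqld when starting the container
docker run -d --name drupal-mariadb \
  -e MYSQL_ROOT_PASSWORD=secret \
  mariadb \
  --innodb-buffer-pool-size=64M \
  --key-buffer-size=8M \
  --max-connections=30 \
  --skip-performance-schema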

Is MariaDB data lost after Docker setting change?

I've set up a basic MariaDB instance running in Docker - basically by starting the container using the Kitematic UI, changing the settings, and letting it run.
Today, I wanted to make a backup, so I used Kitematic to change the port so I could access it from a machine to make automated backups. After changing the port in Kitematic, it seems to have started a fresh MariaDB container (i.e. all my data seems to be removed).
Is that the expected behavior? And, more importantly, is there any way to recover the seemingly missing data, or has it been completely removed?
Also, if the data is actually removed, what is the preferred way to change settings—such as the exposed ports—without losing all changes? docker commit?
Notes:
running docker 1.12.0 beta for OS X
docker ps -a shows the database status as "Up for X minutes" when the original had been up for several days
Thanks in advance!
UPDATE:
It looks like the recommended procedure to retain data (without creating a volume or similar) is the following, sketched in commands after the list:
commit changes (e.g. docker commit <containerid> <name/tag>)
take the container offline
update settings such as exposed port or whatever else
run the image with committed changes
...taken from this answer.
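A sketch of those steps as commands; the container name, image tag and port mapping are illustrative:
# 1. commit the running container's filesystem to a new image
docker commit my-mariadb my-mariadb:snapshot
# 2. take the old container offline
docker stop my-mariadb
# 3./4. run a new container from the committed image with the changed settings
docker run -d --name my-mariadb-new -p 3307:3306 my-mariadb:snapshot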
Yes, this is expected behavior. If you want your data to be persistent you should mount a volume from the host (via the --volume option of docker run) or from another container and store your database files on that volume.
docker run --volume /path/on/your/host/machine:/var/lib/mysql mariadb
Losing changes is actually a core feature of containers, so it cannot be avoided. This way you can be sure that between every docker run you get a fresh environment without any changes. If you want your changes to be permanent, you should make them in your image's Dockerfile, not in the container itself.
For more information please visit official documentation: https://docs.docker.com/engine/tutorials/dockervolumes/.
It looks like you didn't mount the container's volume to a path on the host. You can read about volumes and storing data in containers here.
You need to run the container with the volume option:
$ docker run --name some-mariadb -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:tag
where /my/own/datadir is a directory on the host machine

Docker containers slow after restart in Azure VM

I'm experiencing some weirdness with docker.
I have an Ubuntu server VM running in Windows Azure.
If I start a new Docker container for e.g. WordPress like so:
sudo docker run --name some-wordpress --link some-mysql:mysql -p 80:80 -d wordpress
everything works nicely; I get a reasonably snappy site considering the low-end VM settings.
However, if I reboot the VM, and start the containers:
sudo docker start some-mysql
sudo docker start some-wordpress
The whole thing runs very slowly; the response time for a single page gets up to some 2-4 seconds.
Removing the containers and starting new ones makes everything run normally again.
What can cause this?
I suspect it has to do with disk usage: does the MySQL container use local disk for storage? When you restart an existing Docker container, you reuse the existing volume, normally stored in a subfolder of /var/lib/docker, whereas a new container creates a new volume.
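To check where the container's MySQL data actually lives, something like this should work on reasonably recent Docker versions (the container name some-mysql is taken from the question):
# show the volumes/bind mounts attached to the container and their host paths
docker inspect --format '{{json .Mounts}}' some-mysql
# compare the size of the volumes under Docker's storage root
sudo du -sh /var/lib/docker/volumes/*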
I found a few search results saying that Linux on Azure doesn't handle "soft" reboots well and that things don't get reconnected as they should. A "hard" reboot supposedly fixes that.
Not sure if it helps, my Docker experience is all from AWS.
Your containers are running on a disk which is stored in blob storage with a maximum of 500 IOPS per disk. You can avoid hitting the disk (not very realistic with MySQL), add more disks to use with striping (RAID 0), or use SSDs (D series in Azure). Depending on your use case, you might also rebase Docker completely to use ephemeral storage (/dev/sdb) - here's how for CoreOS. BTW, there are some MySQL performance (non-Docker) suggestions on azure.com.