I am running mariadb and wordpress containers, but this error keeps happening. How can I ensure that this crash does not happen anymore? Am I being attacked, or is it a problem other people run into as well? And how can I attach to the mariadb container, get a shell, and find out what is going on inside it?
See below the messages logged after every crash. There also seems to be a high number of page hits: page visits go up to 20,000 to 60,000 hits. These appear to be the work of crawlers and bots; I am not sure whether they are malicious attacks.
Any help on how to go about dealing with this problem?
I have mariadb, wordpress and phpmyadmin running in three docker containers under Ubuntu 14 on DigitalOcean.
Here are the crash messages:
[1668002.926214] Out of memory: Kill process 16765 (mysqld) score 176 or sacrifice child
[1668002.935614] Killed process 16765 (mysqld) total-vm:1012836kB, anon-rss:178840kB, file-rss:0kB
[1668040.992415] Killed process 22570 (php5-fpm) total-vm:418044kB, anon-rss:145392kB, file-rss:20624kB
New York server:
[1225007.977126] Out of memory: Kill process 3161 (mysqld) score 245 or sacrifice child
[1225007.985657] Killed process 3161 (mysqld) total-vm:977148kB, anon-rss:122488kB, file-rss:0kB
Frankfurt server:
[1632264.057873] Out of memory: Kill process 22421 (mysqld) score 246 or sacrifice child
[1632264.067530] Killed process 22421 (mysqld) total-vm:1005228kB, anon-rss:249328kB, file-rss:0kB
The official MySQL images on Docker Hub use a configuration that's recommended by MySQL. Basically, the default configuration is tuned for performance, and is intended for running MySQL on a dedicated server, with lots of memory (multiple gigabytes).
Tune MySQL settings based on your requirements and available resources
When running MySQL in a container on a small DigitalOcean droplet (512MB or 1GB), you will have to modify the default settings to fit your situation: for example, limit the maximum number of simultaneous connections, reduce the query cache, and so on.
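For example, a minimal low-memory override, assuming the official mysql image which reads extra config files from /etc/mysql/conf.d (the values below are only illustrative starting points, not recommendations):

    # low-memory.cnf -- placed in the directory you mount at /etc/mysql/conf.d
    [mysqld]
    max_connections         = 30    # default is 151
    innodb_buffer_pool_size = 64M   # default is 128M
    key_buffer_size         = 8M
    query_cache_size        = 0     # disable the query cache
    tmp_table_size          = 8M
    max_heap_table_size     = 8M
    performance_schema      = off   # saves a significant amount of memory on 5.6/5.7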
Also note that, by default, DigitalOcean droplets don't have swap configured, which means that if they run out of memory, they cannot use the SSD to swap. It's important to configure swap on those droplets so that MySQL doesn't crash when it temporarily needs more memory (e.g. when re-indexing the database).
This article describes how to configure swap on Ubuntu 14.04 on DigitalOcean: How To Add Swap on Ubuntu 14.04
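In short, the approach from that article boils down to creating a swap file, roughly like this (the 1G size is only an example; run as root):

    fallocate -l 1G /swapfile                         # reserve 1GB for swap
    chmod 600 /swapfile                               # restrict access to root
    mkswap /swapfile                                  # format it as swap space
    swapon /swapfile                                  # enable it immediately
    echo '/swapfile none swap sw 0 0' >> /etc/fstab   # keep it across reboots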
The following issues on the official MySQL Docker repository contain some hints for tuning MySQL settings for "performance" or "memory efficiency":
mysql immediately stops after running
Container with MySQL crash almost every day
The MySQL readme on Docker Hub describes how to use a custom configuration file: "Using a custom MySQL configuration file"
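With the official image this typically comes down to mounting a host directory containing your .cnf files into the container, along these lines (container name, host path and password are placeholders):

    # extra *.cnf files in /my/custom on the host are read from /etc/mysql/conf.d at startup
    docker run --name some-mysql \
      -v /my/custom:/etc/mysql/conf.d \
      -e MYSQL_ROOT_PASSWORD=my-secret-pw \
      -d mysql:5.7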
Related
Two mysql (5.6.20) instances run in two docker containers (1.8.32),
set up as master and slave with semi-synchronous replication between them,
and users continuously run DML and DDL statements on the master.
After ten days or more, all the clients that connect to the slave hang:
gdb -p / strace on the slave mysqld process hangs
pstack / perf top -p on the slave mysqld process show nothing
kill -9 will not kill the mysqld process
docker stop will not stop the docker container
What tools or methods can help locate the problem?
I had the same thing occur today. In my case, I am using docker compose to bring up mysql and a range of consumers, with the current "latest" mysql image from docker hub (5.7.16-1debian8).
I've launched a number of these, and within a week I've seen a couple of instances where mysql has well over 100 threads, all the memory on the host is consumed, and the containers are hung. I can't stop anything, I can't even reboot. Only a power cycle of the VM recovers it.
I'll try to monitor it. I suspect it depends highly on infrastructure load (a slow VM host results in slow queries backing up). The fix is more likely to be mysql tuning than a docker bug.
I've been trying Docker for a few days. I'm using a Drupal image (docker4drupal) which basically contains MySQL (MariaDB), PHP (php-fpm) and NGINX.
Almost every time I do a database import into the database container on a VPS with 512MB RAM, the container with MariaDB dies and messages like "MySQL server has gone away" appear. This does not happen when my VPS has 1GB or 2GB RAM.
So this seems to be a memory problem, but I need the evidence! I don't know where to find the log that tells me that my container died because there wasn't enough memory.
I checked the MariaDB logs but I can't find anything; they only say something like "the database was not normally shutdown", then "it's starting", and then "waiting for connections".
So, independently of my MariaDB config (which is not appropriate for a 512MB VPS): where can I explicitly find the reason why the container with the database server died?
Any help is welcome.
Thanks a lot.
PS: I run the mysql CLI from the PHP container; that's why, even though the database container dies, I can still see the output telling me that something went wrong.
It could be the kernel terminating the most memory-consuming process on an out-of-memory event. There may be entries about it in the host system log. The lack of such entries doesn't guarantee it wasn't the kernel that killed your DB, though.
The exact filename depends on the host system configuration (meaning the VPS, in your case). It could be /var/log/{system.log, error.log, ...}; on Ubuntu, kernel messages typically end up in /var/log/syslog or /var/log/kern.log.
Since a docker container is not an isolated VM but a wrapper around kernel-driven cgroups, kernel events are handled by the host system's logging daemon.
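A quick way to check for OOM kills on the host (not inside the container) is something along these lines; the log file name assumes an Ubuntu host:

    dmesg | grep -i 'killed process'            # kernel ring buffer
    grep -i 'out of memory' /var/log/syslog     # persisted kernel messages on Ubuntu
    journalctl -k | grep -i 'killed process'    # on systemd-based hosts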
Hi Beto, we can see the logs in docker; check out the command below:
The docker logs --follow command will continue streaming the new output from the container’s STDOUT and STDERR.
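For example, assuming your MariaDB container is named mariadb (substitute your own container name):

    docker logs --follow mariadb      # stream the container's stdout/stderr
    docker logs --tail 100 mariadb    # show only the last 100 lines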
That is probably too much to cram into a minuscule 512MB. Do one of the following:
Increase RAM available. ("And this does not happen when my VPS has 1GB")
Split applications across multiple tiny Dockers.
Tune each app to use less RAM. (Didn't I answer your question recently?)
How many tables do you have? Hopefully not a lot, as in https://dba.stackexchange.com/questions/60888/mysql-runs-out-of-memory-when-importing-innodb-database
We are running into a problem with the number of apache processes drastically increasing at a specific time. On further investigation, we found that "mysqldump" was running on the MySQL server during that time.
We noticed that while mysqldump was running, the count of apache instances (processes) shot up to the maximum. Since we limited MaxClients to 150, there was no further increase in the process count.
My question is: is it possible that mysqldump would increase the number of processes in apache?
MySQLdump itself does not cause Apache to do anything. MySQLdump and Apache are only related in that, since they run on the same machine, they share the resources of that machine.
I suspect that when you run MySQLdump your server becomes resource-constrained (it could be CPU, network or disk). When Apache is resource-constrained, an individual Apache process is not able to complete as quickly and move on to the next request, so new processes are spawned instead.
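One way to confirm this, sketched below assuming an Ubuntu host running apache2 and an InnoDB database called mydb (both placeholders), is to watch the worker count while the dump runs, and to run the dump at lower priority so it contends less for CPU and disk:

    # count apache workers every 5 seconds while the dump is running
    watch -n 5 "ps -C apache2 --no-headers | wc -l"

    # run the dump at low CPU and I/O priority
    nice -n 19 ionice -c3 mysqldump --single-transaction --quick mydb > mydb.sql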
When I check my processes using htop on a relatively new install of Ubuntu on Linode, I see about 17 MySQL processes. None of them eat CPU, and each of them eats about 2.1% of RAM.
My website isn't even running yet; I haven't moved the domain over, I've just been setting up the server. It's essentially idle, yet it has all those processes and eats up all that RAM. Is that normal?
You can specify max_connections within your mysql configuration. If your applications efficiently use/reuse connections, you probably need far fewer than 17.
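A minimal sketch, assuming a Debian/Ubuntu-style MySQL install: first check how many connections have actually been used, then lower the limit in the config accordingly.

    mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"   # peak simultaneous connections so far

and then in /etc/mysql/my.cnf (or a file under conf.d):

    [mysqld]
    max_connections = 20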
When the system runs out of memory, ubuntu 12.04 kills the mysql process:
Out of memory: Kill process 17074 (mysqld) score 146 or sacrifice child
So the process ends up being killed.
This happens at peaks of server load, mainly because apache runs wild and eats the remaining available memory. Possible approaches could be:
Somehow change the OOM priority of mysql so it isn't the process that gets killed (probably a bad fix, as something else will be killed instead).
Monitor the status of mysql and restart it automatically whenever it's killed (the one I'm leaning towards, but I don't know how to do it).
How do you see it?
Abrupt termination of a database server is a very serious crash. You need to avoid this in a production system, because it may not restart cleanly.
The database server is a shared resource, and should almost never terminate in an unplanned fashion in production. The only thing that should cause unplanned termination is a catastrophic hardware or power failure. Most properly configured production database servers have an unplanned termination once every ten years or less frequently. Seriously.
What to do?
Fix your apache configuration. Limit the number of worker threads and processes it can use, so it can't run wild. Learn how to do this; it's vital. See the sketch after this list, and the documentation here: http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
Fix the defects in your web app that are causing your apache to run wild.
If you can, move your mysqld server to a different server machine from apache, so the two don't contend for the same hardware resources.
Configure your mysqld to limit the number of connections it will accept from apache worker threads or other clients. Your web app probably handles the situation where a worker thread needs to wait for a connection. See here: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_connections
Are you on an EC2 micro instance? You need to do some serious tuning. See here: http://ubuntuforums.org/showthread.php?t=1979049
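As a rough illustration of the apache and max_connections points above (assuming Apache 2.4 with the prefork MPM on Debian/Ubuntu; on Apache 2.2 the directive is MaxClients, and all the numbers are placeholders you must size for your own RAM):

    # /etc/apache2/mods-available/mpm_prefork.conf -- cap the worker pool
    <IfModule mpm_prefork_module>
        StartServers              4
        MinSpareServers           4
        MaxSpareServers          10
        MaxRequestWorkers        40
        MaxConnectionsPerChild 1000
    </IfModule>

    # /etc/mysql/my.cnf -- allow a few more connections than apache can open
    [mysqld]
    max_connections = 50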
You can check the mysql status every minute (with cron) and restart it if it has crashed:
* * * * * /usr/sbin/service mysql status | grep -q running || /usr/sbin/service mysql restart