Does mysqldump increase Apache process threads? - mysql

We are running into a problem with the number of Apache processes drastically increasing at a specific time. On further investigation, we found that "mysqldump" was running on the MySQL server during that time.
We noticed that while mysqldump was running, the count of Apache instances (processes) shot up to the maximum. Since we limited MaxClients to 150, there was no increase beyond that.
My question is: is it possible that mysqldump would increase the number of Apache processes?

mysqldump itself does not cause Apache to do anything. mysqldump and Apache are only related in that, since they are running on the same machine, they share that machine's resources.
I suspect that when you run mysqldump your server becomes resource-constrained (could be CPU, network, or disk). When Apache is resource-constrained, an individual Apache process cannot complete as quickly and move on to the next request, so new processes are spawned instead.
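If that is the cause, a common mitigation is to run the dump at the lowest CPU and I/O priority (and off-peak). A minimal sketch; the user, database name, and output path are placeholders:

    nice -n 19 ionice -c2 -n7 mysqldump -u backup_user -p mydb | gzip > /backup/mydb.sql.gz

This only reduces contention on the shared host; it does not change Apache's behavior directly.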

Related

Docker containers, memory consumption and logs

I've been trying Docker for a few days. I'm using a Drupal image (docker4drupal) which basically contains MySQL (MariaDB), PHP (php-fpm) and NGINX.
Almost every time I do a database import into the database container on a VPS with 512MB RAM, the container with MariaDB dies... and messages like "MySQL server has gone away" appear... And this does not happen when my VPS has 1GB or 2GB RAM.
So this seems to be a memory problem, but I need the evidence! I don't know where the log is that tells me my container died because there wasn't enough memory.
I checked the MariaDB logs but I can't find anything... its log only says something like "the database was not normally shut down", then "it's starting", and then "waiting for connections"...
So, independently of my MariaDB config (which is not proper for a 512MB VPS)... where can I explicitly find the reason why the container with the database server died?
Any help is welcome.
Thanks a lot.
P.S.: I execute the mysql CLI from the PHP container; that's why, even though the database container dies, I can still see the output saying that something went wrong.
It could be the kernel terminating the most memory-consuming process on an out-of-memory event. Some entries may be in the host system log. The lack of such entries doesn't guarantee it wasn't the kernel that killed your DB, though.
The exact filename depends on the host system configuration (meaning the VPS, in your case). It could be /var/log/{system.log,error.log, ...}.
Since a Docker container is not an isolated VM but a wrapper over kernel-driven cgroups, kernel events are handled by the host system's logging daemon.
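To check this on the host (not inside the container), you can grep the kernel messages for OOM-killer entries; a sketch, since log locations vary by distribution:

    dmesg | grep -i -E 'out of memory|killed process'
    # on systemd-based hosts:
    journalctl -k | grep -i 'out of memory'
    # or the classic syslog files:
    grep -i 'killed process' /var/log/syslog /var/log/messages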
Hi Beto, we can see the logs in Docker; check out the command below:
The docker logs --follow command will continue streaming the new output from the container’s STDOUT and STDERR.
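For example, assuming the container is named mariadb (adjust to your container's name):

    docker logs --follow mariadb

Note, however, that this only shows the container's own output; an OOM kill itself is logged by the host kernel, not by the container.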
That is probably too much to cram into a minuscule 512MB. Do one of the following:
Increase RAM available. ("And this does not happen when my VPS has 1GB")
Split applications across multiple tiny Dockers.
Tune each app to use less RAM. (Didn't I answer your question recently?)
How many tables do you have? Hopefully not a lot, as in https://dba.stackexchange.com/questions/60888/mysql-runs-out-of-memory-when-importing-innodb-database
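To gather the evidence the question asks for, you can also watch the container's memory usage live while the import runs; a sketch, with a placeholder container name:

    docker stats mariadb
    # or a one-shot snapshot instead of a live stream:
    docker stats --no-stream mariadb

If the memory column climbs to the host's limit right before the container dies, that points at the OOM killer.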

mysql docker container crashes often

I am using mariadb and wordpress containers. But this crash keeps happening. How can I ensure that this crash does not happen anymore? Am I being attacked? Or is it a problem that occurs for other people? How can I attach to the mariadb container and get access to a shell to try to find out what goes on inside it?
See below the messages logged after every crash... There seems to be a high number of page hits as well: page visits go up to 20,000 to 60,000 hits. These seem to be the work of crawlers and bots. I'm not sure if these are malicious attacks.
Any help on how to go about dealing with this problem?
I have mariadb, wordpress and phpmyadmin working in three Docker containers under Ubuntu 14 on DigitalOcean.
Here are the crash messages:
[1668002.926214] Out of memory: Kill process 16765 (mysqld) score 176 or sacrifice child
[1668002.935614] Killed process 16765 (mysqld) total-vm:1012836kB, anon-rss:178840kB, file-rss:0kB
[1668040.992415] Killed process 22570 (php5-fpm) total-vm:418044kB, anon-rss:145392kB, file-rss:20624kB
New York server:
[1225007.977126] Out of memory: Kill process 3161 (mysqld) score 245 or sacrifice child
[1225007.985657] Killed process 3161 (mysqld) total-vm:977148kB, anon-rss:122488kB, file-rss:0kB
Frankfurt server:
[1632264.057873] Out of memory: Kill process 22421 (mysqld) score 246 or sacrifice child
[1632264.067530] Killed process 22421 (mysqld) total-vm:1005228kB, anon-rss:249328kB, file-rss:0kB
The official MySQL images on Docker Hub use a configuration that's recommended by MySQL. Basically, the default configuration is tuned for performance, and is intended for running MySQL on a dedicated server, with lots of memory (multiple gigabytes).
Tune MySQL settings based on your requirements and available resources
When running MySQL in a container on a small DigitalOcean droplet (512MB, 1GB), you will have to modify the default settings to fit your situation; for example, limit the maximum number of simultaneous connections, reduce the query cache size, etc.
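As a sketch of what such a configuration might look like, here is an illustrative (not official) set of values for a ~512MB host; tune them to your workload:

    # low-memory.cnf - illustrative values, not recommendations
    [mysqld]
    innodb_buffer_pool_size = 64M
    key_buffer_size         = 8M
    query_cache_size        = 0
    tmp_table_size          = 8M
    max_heap_table_size     = 8M
    performance_schema      = OFF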
Also note that, by default, DigitalOcean droplets don't have swap configured, which means that if they run out of memory, they cannot use the SSD to swap. It's important to configure swap on those droplets so that MySQL doesn't crash if it temporarily needs more memory (e.g. when re-indexing the database).
This article describes how to configure swap on Ubuntu 14.04 on DigitalOcean: How To Add Swap on Ubuntu 14.04
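The short version, along the lines of that article (a 1GB swap file; size it for your droplet):

    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # make it persistent across reboots:
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab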
The following issues on the official MySQL Docker repository contain some hints for tuning MySQL settings for "performance" or "memory efficiency":
mysql immediately stops after running
Container with MySQL crash almost every day
The MySQL readme on Docker Hub describes how to use a custom configuration file; "Using a custom MySQL configuration file"
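In short, you mount a directory containing your .cnf file over the image's configuration directory; a sketch along the lines of that readme (the names, password, and image tag are placeholders):

    docker run --name some-mysql \
      -v /my/custom:/etc/mysql/conf.d \
      -e MYSQL_ROOT_PASSWORD=my-secret-pw \
      -d mysql:5.7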

MySQL hanging in Writing to Net

I have a problem where a MySQL thread sometimes gets stuck in the "Writing to net" status.
I have 4 Apache servers (2.4) (requests are load-balanced across them) and 1 MySQL server (MariaDB 10). Apache is executing PHP 5.6. All Apache servers have the same configuration. All servers run on CentOS 7. SELinux is disabled on the Apache servers for debugging reasons. There are no problems in the audit logs on the DB server. All servers are virtual and located on the same cluster (VMware).
The problem appears only on specific pages and specific queries to the DB.
Usually there are around 100-200 separate queries per page, and most of them take 0.0001-0.0010 s. But then I have one query that takes around 1-2 seconds, even though the query itself takes much less time (around 0.0045 s).
The problematic query returns around 8984 rows, and when executed from the CLI via a debug script, it executes as fast as expected.
What's strange is that at times some Apache servers execute that page quickly and some slowly, and it changes during the day. I also tried removing one Apache server from the cluster and then sending the same request; if the server is not under any load, it usually responds fast.
All servers have enough resources (CPU and RAM), so it is definitely not a load issue. They usually have around 4-10 active Apache workers (prefork) and have capacity for 100 active workers.
I tried debugging with tcpdump, and when requesting the page I can see the packet flow for the fast queries; then it stops for a while and resumes. I'm not sure if the problem is on the MySQL server or on the Apache server.
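For reference, a capture along these lines (the interface, DB hostname, and output filename are placeholders) lets you see where the stall sits:

    tcpdump -i eth0 -s 0 -w mysql-stall.pcap host db.example.com and port 3306

If a server-side capture shows MySQL sending packets that the client side receives late, the delay is in the path between them rather than in either daemon.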
My guess is that I am hitting some kind of limit, but I have no idea which one.
The solution is quite odd.
First, a few more details:
All Apache servers have the same application data (PHP files, images, etc.), mounted from NFS. The NFS share was working fine (low latency, no data corruption).
Solution:
When I was desperate, I went through every possible log. Then I noticed that iptables was dropping some packets from the NFS server. I said to myself that I should probably fix that, even though it seemed unrelated.
But after I allowed all traffic from the NFS server to my Apache servers, the MySQL "Writing to net" status disappeared and all websites started to respond quickly.
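For illustration, the fix amounts to a rule of this shape, inserted ahead of the drop rules (the address is a placeholder for the NFS server):

    iptables -I INPUT -s 192.0.2.10 -j ACCEPT

A likely explanation is that PHP was blocking on NFS reads mid-request, so MySQL sat in "Writing to net" waiting for a client that wasn't reading its result set.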

How to configure Amazon EC2 t1.micro instance for Buddypress

We have installed WordPress on an EC2 t1.micro instance and installed BuddyPress on top of that. Everything works fine for a single user, but when two users access the site at the same time, it goes down because of a RAM issue: httpd (Apache) takes the maximum memory. How do we overcome this? Is there any configuration we need to change in httpd.conf, or any network/traffic blocking tool we need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try setting a value in the Apache httpd.conf.
Set MaxRequestWorkers to a low number. You might try 4. When this number is low, you also won't have many simultaneous clients connecting from your Apache/php to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
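Concretely, for the prefork MPM this is a sketch of the sort of stanza you'd put in httpd.conf (the values are illustrative, per the suggestion above):

    <IfModule mpm_prefork_module>
        StartServers           2
        MinSpareServers        2
        MaxSpareServers        4
        MaxRequestWorkers      4
        MaxConnectionsPerChild 1000
    </IfModule>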
Then you probably should look at the my.cnf file for MySQL and see what you can play around with.
Edit: MySQL, Apache, and php are all drawing on the same pool of RAM -- 512MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and php, which is probably running in the Apache server's address space). Do that.
Then, go find memory_limit in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M. That will make each php instance use less RAM. But if your WordPress installation is complex (lots of plugins, a fancy theme), it may make some pages fail to load; WordPress will announce the problem as memory running out. http://php.net/memory-limit
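That is a one-line change in php.ini (pick whatever value still lets your pages load):

    ; php.ini
    memory_limit = 64M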
Then, jump into MySQL's my.ini. The standard MySQL install comes with a file called my-small.ini, which contains the configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousand rows, not hundreds of thousands. Save your old my.ini and then copy the contents of my-small.ini into my.ini. Restart your MySQL server after doing that.
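On a typical Linux install the same dance looks something like this (the paths and service name vary by platform and package, so treat these as placeholders):

    cp /etc/my.cnf /etc/my.cnf.bak
    cp /usr/share/mysql/my-small.cnf /etc/my.cnf
    service mysqld restart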
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.

Restart MySQL automatically when Ubuntu on an EC2 micro instance kills it when running out of memory

When the system runs out of memory, Ubuntu 12.04 kills the mysql process:
Out of memory: Kill process 17074 (mysqld) score 146 or sacrifice child
So the process ends up killed.
This happens at peaks of server load, mainly because of Apache running wild and eating the remaining available memory. Possible approaches could be:
Change somewhere, somehow, the OOM priority of mysql so it's not killed (probably a bad fix, as something else will be killed instead; see the sketch after this list).
Monitor the status of mysql and restart it automatically whenever it's killed (the one I'm thinking about, but I don't know how to do it).
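For the first approach, the kernel interface is /proc/<pid>/oom_score_adj; a sketch, run as root (-1000 would make the process unkillable, which can starve everything else, so a milder negative value is shown):

    echo -500 > /proc/$(pidof mysqld)/oom_score_adj

Note this does not survive a mysqld restart, so it would have to be reapplied from an init script.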
How do you see it?
Abrupt termination of a database server is a very serious crash. You need to avoid this in a production system, because it may not restart cleanly.
The database server is a shared resource, and should almost never terminate in an unplanned fashion in production. The only thing that should cause unplanned termination is a catastrophic hardware or power failure. Most properly configured production data base servers have an unplanned termination once every ten years or less frequently. Seriously.
What to do?
Fix your apache configuration. Limit the number of worker threads and processes it can use, so it can't run wild. Learn how to do this. It's vital. See here: http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
Fix the defects in your web app that are causing your apache to run wild.
If you can, move your mysqld server to a different server machine from apache, so the two don't contend for the same hardware resources.
Configure your mysqld to limit the number of connections it will accept from Apache worker threads or other clients. Your web app probably handles the situation where a worker thread needs to wait for a connection. See here: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_connections
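That is a single setting in the MySQL configuration file; the value below is illustrative and should roughly match your Apache worker limit:

    [mysqld]
    max_connections = 30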
Are you on an EC2 micro instance? You need to do some serious tuning. See here: http://ubuntuforums.org/showthread.php?t=1979049
You can check the mysql status every minute (with cron) and restart it if it has crashed:
* * * * * service mysql status | grep running || service mysql restart
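A slightly more defensive variant of the same idea, as a sketch (the log path is a placeholder, and this still assumes a sysvinit-style service command as on Ubuntu 12.04):

    * * * * * service mysql status | grep -q running || { service mysql restart; echo "$(date): mysql restarted" >> /var/log/mysql-restart.log; }

Keep in mind the caution above: auto-restarting papers over the crash; fixing the memory pressure is the real solution.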