Docker uses all memory and crashes the system - mysql

I have an AWS t2.micro EC2 instance with Docker on it, and I bring up the following containers:
jwilder/nginx-proxy
mysql
wordpress
This results in docker stats output something like this:
CONTAINER MEM USAGE/LIMIT MEM %
wordpress 331.9 MB/1.045 GB 31.77%
nginx 18.32 MB/1.045 GB 1.75%
mysql 172.1 MB/1.045 GB 16.48%
Then I run siege with its default 15 concurrent connections against it. This spawns multiple Apache processes, which hit the memory limit of the EC2 instance and crash Docker and bash because there is no memory left, requiring my intervention to get it all running again.
I have a couple of questions regarding this.
Am I expecting too much? Should this setup be able to handle 15 concurrent connections? If so, what changes* need to be made?
How can I automate recovery from this? Is there a way to detect that memory is reaching capacity and do something (like rejecting requests) until memory usage decreases? Is there a way to keep the system stable during the high request volume, so that once it's over it doesn't require my intervention to bring everything back up?
* I've already done this to drop mysql memory from 22% to 15%.

Given a t2.micro only has 1GB total, and each of those containers has a 1GB limit on its own, have you tried limiting the max memory usage on each container (as per http://docs.docker.com/engine/reference/run/#user-memory-constraints) such that the total memory limit doesn't exceed 1GB?
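For example, something along these lines (the limits are illustrative, and other flags such as ports, volumes and environment variables are omitted for brevity):
docker run -d -m 128m --name nginx-proxy jwilder/nginx-proxy
docker run -d -m 256m --name mysql mysql
docker run -d -m 384m --name wordpress wordpress
That keeps the containers' combined ceiling around 768MB, leaving some headroom for the host itself.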

The biggest impact, which stopped the EC2 instance from falling over, was limiting the memory a Docker container can use with the -m option, per @palfrey's answer.
Some additional tweaks were required to reduce the memory footprint and have the service respond to 15 concurrent users, albeit somewhat slowly. This included:
MySQL
Disabling performance_schema
Using a minimal config (a sketch follows at the end of this answer)
WordPress
Disabling KeepAlive
Limiting servers:
<IfModule mpm_prefork_module>
StartServers 1
MinSpareServers 1
MaxSpareServers 3
MaxRequestWorkers 10
MaxConnectionsPerChild 3000
</IfModule>
Docker
I created some Docker images that extend the default images to include these optimisations:
mysql-minimal
wordpress-minimal
Further details in my blog post.
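For illustration, a low-memory my.cnf along the lines of the MySQL tweaks above might look like this (the values are examples for a roughly 1GB host, not necessarily what ships in mysql-minimal):
[mysqld]
performance_schema = off
key_buffer_size = 32K
innodb_buffer_pool_size = 64M
innodb_log_buffer_size = 4M
tmp_table_size = 8M
max_heap_table_size = 8M
thread_cache_size = 4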

Probably; a micro only has 1GB of RAM. You can run this configuration without Docker just fine, but you do have to adjust for memory limitations. Docker probably adds some overhead. Is there a reason for running both nginx and Apache?
Generally you test and limit your threads to what the system can handle, and there are probably things you can do with caching to improve performance. Apache, nginx, and php-fpm all have settings that control the number of threads or worker processes that are allowed to be created.
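As an illustration of the kind of limits meant here: if you were using php-fpm instead of the Apache mod_php that the stock wordpress image uses, the pool settings that cap worker processes would look like this (values are examples only):
pm = dynamic
pm.max_children = 4
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
pm.max_requests = 500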

Related

Docker containers, memory consumption and logs

I've been trying Docker for a few days. I'm using a Drupal image (docker4drupal) which basically contains MySQL (MariaDB), PHP (php-fpm) and NGINX.
Almost every time I do a database import into the database container on a VPS with 512MB RAM, the container with MariaDB dies and messages like "MySQL server has gone away" appear. This does not happen when my VPS has 1GB or 2GB RAM.
So this seems to be a memory problem, but I need the evidence! I don't know where to find the log that tells me my container died because there wasn't enough memory.
I checked the MariaDB logs but I can't find anything; its log only says something like "the database was not normally shutdown", then "it's starting", and then "waiting for connections".
So, independently of my MariaDB config (which is not appropriate for a 512MB VPS): where can I explicitly find the reason why the container with the database server died?
Any help is welcome.
Thanks a lot.
PS: I execute the mysql CLI from the PHP container; that's why, even though the database container dies, I can still see output indicating that something went wrong.
It could be the kernel terminating the most memory-consuming process on an out-of-memory event. Some entries may be in the host system log. The lack of such entries doesn't guarantee it wasn't the kernel that killed your DB, though.
The exact filename depends on the host system configuration (meaning the VPS, in your case). It could be /var/log/{system.log, error.log, ...}.
Since a Docker container is not an isolated VM but a wrapper over kernel-driven cgroups, kernel events are handled by the host system's logging daemon.
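For example, on a typical Linux host (log file names vary by distribution):
dmesg | grep -i 'out of memory'
grep -i 'killed process' /var/log/syslog /var/log/messages 2>/dev/null
An "Out of memory: Kill process" line naming mysqld would confirm it was the OOM killer.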
Hi Beto, you can see the logs in Docker; check out the command below:
The docker logs --follow command will continue streaming the new output from the container’s STDOUT and STDERR.
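For example, assuming the MariaDB container is named mariadb (use docker ps to find the actual name):
docker logs mariadb
docker logs --follow mariadb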
That is probably too much to cram into a minuscule 512MB. Do one of the following:
Increase RAM available. ("And this does not happen when my VPS has 1GB")
Split applications across multiple tiny Dockers.
Tune each app to use less RAM. (Didn't I answer your question recently?)
How many tables do you have? Hopefully not a lot, as in https://dba.stackexchange.com/questions/60888/mysql-runs-out-of-memory-when-importing-innodb-database

Slow https requests after sudden spike in mysql queries

I have a WordPress website running on a VPS server, handling about 150 MySQL queries/second.
Occasionally, when we notice a spike in traffic to about 200 MySQL queries a second, HTTPS requests to the site are extremely slow.
The site loads decently over HTTP, but with HTTPS it takes 20+ seconds.
Gradually, over a period of about an hour after the spike, the load times get better and then return to normal.
The server load and memory look fine. There is only a spike in MySQL queries, firewall traffic, and eth0 requests. There are no MySQL slow queries.
Any help would be appreciated.
Thank You
I think your answer is in "disk latency" and "disk utilization" charts.
MySQL works well under small loads, when it can cache all the data it needs. But when your results or queries get too big, or you request too many of them, it starts doing many disk I/O operations. It can handle huge loads and very big data, but the moment you exceed the memory allocated to MySQL, it needs to read and write everything to disk.
This would not be so bad if you were running on a local SSD drive. But from the device name I can see that you are running on an EBS volume, which is not a real hard drive; it is a networked drive, so all that traffic overloads your network connection.
You have several options:
1.) Install mysqltuner, let the server operate for some time, then run it and see what it suggests (see the example below). My guess is that it will suggest increasing your MySQL memory pool, decreasing the number of parallel connections, or restructuring your queries.
2.) Use an EC2 instance type with actual local storage (e.g. m3 or r3) and write to local SSD. You can do RAID across several SSD drives to make it even faster.
3.) Use an EBS-optimized instance (dedicated EBS network bandwidth) and the correct volume type (some EBS volume types have I/O credits, similar to CPU credits for t-type instances, and when you run out of those, your operations slow to a crawl).
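For option 1, on a Debian/Ubuntu-based VPS this can be as simple as (package name assumed from the Ubuntu archive):
sudo apt-get install mysqltuner
mysqltuner
Let it run against a server that has been up for a while so the statistics it reads are meaningful.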

Mysql processes on Ubuntu server

When I check my processes using htop on a relatively new install of Ubuntu on Linode, I see about 17 MySQL processes. None of them eat CPU, and each of them eats about 2.1% of RAM.
My website isn't even running yet; I haven't relocated the domain, I've just been setting up the server. It's basically idle, yet it has all those processes and eats up all that RAM. Is that normal?
You can specify max_connections within your mysql configuration. If your applications efficiently use/reuse connections, you probably need far fewer than 17.
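For example, in the MySQL config (the value is illustrative; size it to what your application actually uses):
[mysqld]
max_connections = 20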

Restart Mysql automatically when ubuntu on EC2 micro instance kills it when running out of memory

When the system runs out of memory, Ubuntu 12.04 kills the MySQL process:
Out of memory: Kill process 17074 (mysqld) score 146 or sacrifice child
So the process ends up killed.
This happens at peaks of server load, mainly because Apache runs wild and eats the remaining available memory. Possible approaches could be:
Somehow change the priority of MySQL so it's not killed (probably a bad fix, as something else will be killed instead).
Monitor the status of MySQL and restart it automatically whenever it's killed (the one I'm leaning towards, but I don't know how to do it).
How do you see it?
Abrupt termination of a database server is a very serious crash. You need to avoid this in a production system, because it may not restart cleanly.
The database server is a shared resource, and should almost never terminate in an unplanned fashion in production. The only thing that should cause unplanned termination is a catastrophic hardware or power failure. Most properly configured production data base servers have an unplanned termination once every ten years or less frequently. Seriously.
What to do?
Fix your Apache configuration. Limit the number of worker threads and processes it can use, so it can't run wild. Learn how to do this; it's vital. See here: http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
Fix the defects in your web app that are causing Apache to run wild.
If you can, move your mysqld server to a different machine from Apache, so the two don't contend for the same hardware resources.
Configure your mysqld to limit the number of connections it will accept from Apache worker threads or other clients. Your web app probably handles the situation where a worker thread needs to wait for a connection. See here: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_connections
Are you on an EC2 micro instance? You need to do some serious tuning. See here: http://ubuntuforums.org/showthread.php?t=1979049
You can check the MySQL status every minute (with cron) and restart it if it has crashed:
* * * * * service mysql status | grep running || service mysql restart

Tomcat and MySql on VPS

I am doing some experimentation with a VPS before moving my application from private Tomcat hosting to the cloud. It is a read-intensive app built on Struts 2 + Spring + Hibernate + MySQL. It's a moderately popular app in India with 1,500 visitors and 10,000 pageviews per day. I have some basic questions about choosing a server configuration.
1) Would 256MB of RAM be enough for running both Tomcat and MySQL? I won't be running anything else other than SSH; no Apache, FTP, etc. My current heap size is 190MB. Can I still set the heap size to 190MB with 256MB RAM? What are the pros and cons?
2) Is it better to have two 256MB servers, one with Tomcat and one with MySQL, or one 512MB server running both MySQL and Tomcat?
I am open to any suggestions. Thanks!
1)
I think it could be done; I've seen a similarly sized app running on a 256MB Linux VPS.
However, you're leaving very little memory for MySQL, which will force it to go to disk often. It could be quite slow.
2)
One server is better than two. You have less to configure, and you don't pay for the OS + virtual machine container overhead twice. Also, your app server and your database may not use equal amounts of memory, so being on separate machines could be an inefficient use of memory.
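If you do try to fit both into 256MB, shrink the Tomcat heap rather than keeping it at 190MB, for example via bin/setenv.sh (the numbers are illustrative, not a recommendation):
# picked up by catalina.sh at startup
export CATALINA_OPTS="-Xms96m -Xmx160m"
Even then, MySQL and the OS are left with well under 100MB once the heap fills, which is why it could be quite slow.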
I don't believe a 256MB VPS can effectively run a Java web app. You need at least a 512MB VPS. I wouldn't consider splitting until you go past a 1GB container. When you split, you have the overhead of the OS running twice. One advantage of splitting, though, is that you may have more burstable capacity. If the two VPSes are on separate hardware and you get a rush of traffic, you may be able to use more CPU cycles than a single container on a single host. This depends on the load of the other containers, of course, and on your VPS provider's policy on bursting.