When I check my processes using htop on a relatively new install of Ubuntu on Linode, I see about 17 MySQL processes. None of them use any CPU, and each of them uses about 2.1% of RAM.
My website isn't even running yet; I haven't relocated the domain, and I've just been setting up the server. It's basically idle, yet it has all those processes and uses all that RAM. Is that normal?
You can set max_connections in your MySQL configuration. If your applications use and reuse connections efficiently, you probably need far fewer than 17. Note also that htop shows individual threads by default, so those 17 entries are most likely threads of a single mysqld process sharing one pool of memory, not 17 separate processes each holding 2.1% of RAM (press H in htop to toggle the thread display).
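For example, a quick check (run in the mysql client as an admin user):
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
If Max_used_connections stays low, you can lower the limit in the [mysqld] section of your config file (commonly /etc/mysql/my.cnf on Ubuntu, though the path varies):
max_connections = 20
and restart MySQL.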
I'm in the process of both virtualizing and updating an old Linux server running a reporting system developed in-house (Apache, MySQL, PHP).
The old physical server is running 64-bit Ubuntu 10.04.3 LTS, MySQL 5.1.41 and PHP 5.3. The server has an Intel Xeon X3460 @ 2.80GHz (4 cores) and 4GB RAM.
We have ESXi 5.5 running on an HP DL380 G6 with 2 x Intel Xeon X5650 6-core 2.66GHz and 32GB RAM.
I created a new VM with 4 cores and 4GB RAM, did a clean install of 64-bit Ubuntu 16.04.4 LTS, MySQL 5.7.21 and PHP 7.0, and migrated our app, and everything is running much slower. I believe it's MySQL: running the same direct query on the old physical server vs. the new virtualized one, a query can take 8 seconds (VM) instead of 1 (physical). The tables all have appropriate indexes, and running EXPLAIN on each server gives the same results, yet one is substantially slower. When a page runs numerous complex queries, it can take over a minute to load instead of a few seconds.
Any idea why this might be? Same dataset, same query, same engine (MyISAM). The VM has much more recent versions of everything, the same number of cores, and the same RAM. I even tried doubling the VM's resources to 2 sockets x 4 cores and 8GB RAM, and it doesn't seem to have a substantial impact.
I've compared the MySQL configurations and nothing jumps out at me as being very different.
What might I be missing here? Is it the virtual host hardware?
Did you upgrade the VMware Tools? If not, do so.
Then: do you have other VMs on this ESXi host? If yes, I would advise you to create a resource pool and configure resource limits for it.
The resources you assign to the pool are effectively reserved for your VM, so it won't have to share them with other VMs.
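For example, on Ubuntu 16.04 the guest tools usually come from the open-vm-tools package rather than the VMware-bundled installer; a sketch, assuming a standard apt setup:
sudo apt-get update
sudo apt-get install open-vm-tools
systemctl status open-vm-tools    # verify the tools daemon is running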
We have installed WordPress on an EC2 t1.micro instance with BuddyPress on top of it. Everything works fine for a single user, but when two users access the site at the same time, it goes down because of a RAM issue: httpd (Apache) takes the maximum memory. How can we overcome this? Is there some configuration needed in httpd.conf, or some network/traffic-blocking tool we need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try the Apache httpd.conf.
Set MaxRequestWorkers to a low number; you might try 4. When this number is low, you also won't have many simultaneous connections from Apache/PHP to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
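For example, with the prefork MPM (the usual default with mod_php), a starting point might look like this - a sketch to tune against your actual memory use, not a recommendation:
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         1
    MaxSpareServers         2
    MaxRequestWorkers       4
    MaxConnectionsPerChild  1000
</IfModule>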
Then you should probably look at the my.cnf file for MySQL and see what you can tune there.
Edit: MySQL, Apache, and PHP are all drawing on the same pool of RAM -- 512MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and PHP, which is probably running in the Apache server's address space). Do that.
Then, go find memory_limit in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M; that will make each PHP instance use less RAM. But if your WordPress installation is complex (lots of plugins, fancy theme), some pages may fail to load, and WordPress will report that memory ran out. http://php.net/memory-limit
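For example, in php.ini:
; lower the per-request memory ceiling; raise it again if WordPress pages start failing
memory_limit = 64M
Restart Apache after changing it.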
Then, jump into MySQL's my.cnf. The standard MySQL install ships a sample file called my-small.cnf, which contains configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousand rows, not hundreds of thousands. Save your old my.cnf, copy the contents of my-small.cnf over it, and restart your MySQL server.
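A sketch of that swap, assuming a typical Linux layout (the config may live at /etc/my.cnf or /etc/mysql/my.cnf, and the samples under /usr/share/mysql - adjust the paths and service name to your install):
cp /etc/my.cnf /etc/my.cnf.bak
cp /usr/share/mysql/my-small.cnf /etc/my.cnf
service mysqld restart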
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.
Every couple of days we get a small number of MySQL timeout errors that correspond with a large spike in CPU and DB connections on our MySQL RDS instance. These are queries that are typically very fast (<5ms) but suddenly time out.
At this point, database operations are very slow for a minute or so (likely because new connections are being allocated). The number of new connections often doubles and seem to correspond to the entire Connection Pool being recycled.
The timeouts do not seem to correspond with heavy database load; the CPU is often under 7% when this happens, spiking to around 12%.
Once these connections are created, the old connections seem to stay around for several hours.
We have some theories:
An occasional network hiccup between EC2 and RDS
A connection pool recycle (is there such a thing?)
Resource contention on the server that backs up all queries (no deadlocks present)
Any help on debugging this would be very much appreciated.
System Details:
Windows 2012 EC2 instances
.NET 4.5
MySql Connector 6.8.3
Entity Framework 6.0.2
MySql.Data.Entities 6.8.3
MySql 5.6.12 (Hosted in Amazon's RDS)
I wanted to put this as a comment, not an answer, but "...must have 50 reputation to comment..."
Are you maxing out on connections? Run SHOW VARIABLES LIKE 'max_connections'; and SHOW PROCESSLIST; (as the root user).
How's your disk I/O? Run iostat -x 5 from the command line and pay special attention to queue sizes and service/wait times. If it's an issue, you can purchase AWS Provisioned IOPS for better reliability and performance.
You can also profile it - I like Jet Profiler: simple and low-load.
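If the pool-recycle theory holds up, it may also help to pin down the pool behavior explicitly in your connection string. A sketch using Connector/NET's pooling keywords (the host, credentials, and numbers here are placeholders to tune, not recommendations):
Server=myrds.example.com;Database=mydb;Uid=app;Pwd=secret;Pooling=true;Min Pool Size=5;Max Pool Size=50;Connection Lifetime=300;
A bounded Connection Lifetime retires connections gradually as they are returned to the pool, instead of the whole pool turning over at once.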
When the system runs out of memory, Ubuntu 12.04 kills the mysql process:
Out of memory: Kill process 17074 (mysqld) score 146 or sacrifice child
So the process ends up killed. This happens at peaks of server load, mainly because Apache runs wild and eats the remaining available memory. Possible approaches could be:
Somehow change the priority of mysql so it isn't killed (probably a bad fix, as something else will be killed instead).
Monitor the status of mysql and restart it automatically whenever it's killed (the one I'm leaning toward, but I don't know how to do it).
How would you approach it?
Abrupt termination of a database server is a very serious crash. You need to avoid this in a production system, because it may not restart cleanly.
The database server is a shared resource and should almost never terminate in an unplanned fashion in production. The only thing that should cause unplanned termination is a catastrophic hardware or power failure. Most properly configured production database servers have an unplanned termination once every ten years or less frequently. Seriously.
What to do?
Fix your Apache configuration. Limit the number of worker threads and processes it can use so it can't run wild; a rough sizing sketch follows this list. Learn how to do this - it's vital. See here: http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
Fix the defects in your web app that are causing Apache to run wild.
If you can, move your mysqld server to a different machine from Apache, so the two don't contend for the same hardware resources.
Configure your mysqld to limit the number of connections it will accept from Apache worker threads and other clients. Your web app probably already handles the situation where a worker thread needs to wait for a connection. See here: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_connections
Are you on an EC2 micro instance? You need to do some serious tuning. See here: http://ubuntuforums.org/showthread.php?t=1979049
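A rough sizing sketch for point 1, assuming each Apache+PHP worker settles around 50MB resident (measure yours with top or ps): on a 1GB machine, reserving say 300MB for mysqld and 100MB for the OS leaves about 600MB for Apache, so a MaxRequestWorkers of roughly 600/50 = 12 is a sane ceiling. The shipped defaults assume far more RAM, so do this division with your real numbers rather than trusting them.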
You can check the mysql status every minute (with cron) and restart it if it has crashed:
* * * * * service mysql status | grep running || service mysql restart
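If you want something a bit more robust than the one-liner, a small script called from the same cron schedule works too. A sketch - the path /usr/local/bin/mysql-watchdog.sh and the logger tag are my own invention:
#!/bin/sh
# restart mysql if it is not running, and leave a trace in syslog
if ! service mysql status | grep -q running; then
    logger -t mysql-watchdog "mysql down, restarting"
    service mysql restart
fi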
I am doing some experimentation with a VPS before moving my application from private Tomcat hosting to the cloud. It is a read-intensive app built on Struts 2 + Spring + Hibernate + MySQL. It's a moderately popular app in India, with 1,500 visitors and 10,000 pageviews per day. I have some basic questions about choosing a server configuration.
1) Would 256MB of RAM be enough to run both Tomcat and MySQL? I won't be running anything else other than SSH - no Apache, FTP, etc. My current heap size is 190MB. Can I still set the heap size to 190MB with 256MB of RAM? What are the pros and cons?
2) Is it better to have two 256MB servers, one with Tomcat and one with MySQL, or one 512MB server running both?
I am open to any suggestions. Thanks!
1)
I think it could be done. I've seen a similarly sized app running on a 256MB Linux VPS.
However, you're leaving very little memory for MySQL, which will force it to go to disk often. It could be quite slow.
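If you do try to squeeze into 256MB, the heap has to come down well below 190MB to leave room for the JVM's own overhead plus MySQL. A sketch for Tomcat's bin/setenv.sh - the numbers are guesses to tune against your app's real footprint:
export CATALINA_OPTS="-Xms96m -Xmx128m"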
2)
One server is better than two. You have less to configure, and you don't pay the OS + virtual machine container overhead twice. Also, your app server and your database may not use equal memory, so putting them on separate machines could be an inefficient use of memory.
I don't believe a 256MB VPS can effectively run a Java web app. You need at least a 512MB VPS. I wouldn't consider splitting until you go past a 1GB container; when you split, you have the overhead of the OS running twice. One advantage of splitting, though, is that you may have more burstable capacity: if the two VPS are on separate hardware and you get a rush of traffic, you may be able to use more CPU cycles than a single container on a single host could. This depends on the load of the other containers, of course, and on your VPS provider's bursting policy.