I have a problem where MySQL threads sometimes get stuck in the status "Writing to net".
I have 4 Apache servers (2.4, requests are load-balanced across them) and 1 MySQL server (MariaDB 10). Apache runs PHP 5.6. All Apache servers have the same configuration, and all servers run CentOS 7. SELinux is disabled on the Apache servers for debugging purposes, and there are no problems in the audit logs on the DB server. All servers are virtual and located on the same VMware cluster.
The problem appears only on specific pages and specific queries to the DB.
Usually there are around 100-200 separate queries per page, and most of them take 0.0001-0.0010 s. But then I have one query that takes around 1-2 seconds on the page, even though the query itself takes far less time (around 0.0045 s).
The problematic query returns around 8984 rows, and when executed from the CLI via a debug script it runs as fast as expected.
The strange part is that at any given time some Apache servers execute that page quickly and others slowly, and this changes during the day. I also tried removing one Apache server from the cluster and sending the same request directly to it; when the server is not under any load, it usually responds fast.
All servers have enough resources (CPU and RAM), so it is definitely not a load issue. They usually have around 4-10 active Apache workers (prefork) and capacity for 100 active workers.
I tried debugging with tcpdump: when requesting the page, I can see the packet flow for the fast queries, then it stops for a while and resumes. I am not sure whether the problem is on the MySQL server or on the Apache servers.
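The capture was roughly along these lines (the interface name and DB address are placeholders for my setup):

    # capture MySQL traffic between this Apache server and the DB host
    tcpdump -i eth0 -w /tmp/mysql.pcap host 192.0.2.10 and tcp port 3306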
My guess is that I am hitting some kind of limit, but I have no idea which one.
The solution is quite odd.
First, a few more details:
All Apache servers have the same application data (PHP files, images, etc.) mounted from NFS. The NFS share was working fine (low latency, no data corruption).
Solution:
When I was desperate, I went through every possible log. Then I noticed that iptables was dropping some packets from the NFS server. I told myself I should probably fix that, even though it seemed unrelated.
But after I allowed all traffic from the NFS server to my Apache servers, the MySQL status "Writing to net" disappeared and all websites started to respond quickly.
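For illustration, the permissive rule amounted to something like this (the NFS server address is a placeholder):

    # accept all traffic from the NFS server ahead of the existing drop rules
    iptables -I INPUT -s 192.0.2.20 -j ACCEPT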
Related
I have been working on the server migration of a legacy ecommerce application using PHP 5.6.
The switch involved two Dedicated 32 servers from Linode.
One server is for NginX + PHP and the other is for MySQL only.
The legacy application leverages memcached.
After the switch, I can see heavy internal traffic from private inbound and outbound connections.
So far this hasn't caused any performance problems.
However, I was under the impression that the queries would be cached on the local machine, not on the remote one.
Because if the query is cached on the remote host, it still has to transmit the result set over the private network, instead of retrieving it from local RAM or the local SSD.
Am I assuming this wrong?
It may be that I am missing why this private network traffic can still be better for overall performance than a local cache.
MySQL has a feature called the Query Cache, but this caches query result sets in the mysqld server process, not on the client. If you run the exact same query again after the result has been cached in the Query Cache, it will copy the result from the Query Cache and avoid the cost of running the query again. But this will not avoid the time to transfer the result across the network from mysqld to your PHP application.
Also keep in mind that the MySQL Query Cache is deprecated (as of MySQL 5.7.20) and has been removed in MySQL 8.0.
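If you want to check whether the Query Cache is even enabled on your server, and whether it is actually serving hits, something like this will show it (these are standard server variables and status counters):

    -- is the query cache enabled, and how large is it?
    SHOW VARIABLES LIKE 'query_cache%';

    -- how often is it actually serving cached results?
    SHOW GLOBAL STATUS LIKE 'Qcache%';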
Alternatively, your application may store data from query results in memcached, but typically this would be done by the application code (I know there are UDFs to read and write memcached from MySQL triggers, but this is a bad idea).
If your memcached service is not on the same host as your PHP code, it results in network transfers several times: once when querying the data from MySQL the first time, again when transferring the data into memcached, and then again every time you fetch the cached data out of memcached.
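As a rough sketch of what that application-side caching usually looks like (the hosts, credentials, and query below are placeholders, using the standard PHP Memcached and PDO classes):

    <?php
    // placeholder hosts/credentials/query -- caches a result set in memcached
    $cache = new Memcached();
    $cache->addServer('memcached.internal', 11211);

    $key  = 'report:top_products';
    $rows = $cache->get($key);

    if ($rows === false) {
        // cache miss: fetch from MySQL over the network, then store in memcached
        $pdo  = new PDO('mysql:host=db.internal;dbname=shop', 'user', 'secret');
        $rows = $pdo->query('SELECT id, name FROM products LIMIT 100')->fetchAll();
        $cache->set($key, $rows, 300); // keep for 5 minutes
    }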
PHP also has some features to do in-memory caching, such as APCu. I don't have any experience with this, and it's not clear from a brief scan of the documentation where it stores cached data.
PHP is designed to be a "shared nothing" language. Every PHP request has its own data, and data doesn't normally last until the next request. This is why a cache is typically not kept in PHP memory. Applications rely on either memcached or the database itself, because those will hold data longer than a single PHP request.
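For completeness, a minimal APCu sketch looks like this (the key, TTL, and loader function are made up); as I understand it, whatever it stores lives only on the local web host, so each server warms its own copy:

    <?php
    // placeholder key and hypothetical loader -- APCu keeps the value on this host only
    $settings = apcu_fetch('settings:site', $hit);
    if (!$hit) {
        $settings = load_settings_from_database(); // hypothetical helper
        apcu_store('settings:site', $settings, 600); // keep for 10 minutes
    }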
If you have a fast enough network, it shouldn't be a high cost to fetch items out of a cache over a network. The performance architects at a past job of mine developed this wisdom:
"Remote memory is faster than local storage."
They meant that if the data is in RAM on a server, then reading it from RAM even with the additional overhead of transferring it across a network is usually better than reading the data from persistent (disk) storage on the local host.
My server got stuck last night because of a database connection error.
I found that it was caused by too many database connections. After researching on Google and Stack Overflow, I didn't get any useful information. While I am investigating all plugins one by one to see whether any of them has a bug that could cause this, I would like to ask for your help.
First of all, when I logged in to MySQL I could see a lot of sleeping connections with NULL info. I tried to kill all the sleeping queries from the command line, but new connections kept filling up all the available slots right away.
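Something along these lines, using the standard information_schema.processlist view (the generated KILL statements still have to be run separately):

    -- list the sleeping connections and build KILL statements for them
    SELECT CONCAT('KILL ', id, ';') AS kill_stmt
    FROM information_schema.processlist
    WHERE command = 'Sleep';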
The weird thing is that the Apache server is not actually getting a high volume of requests. I am using AWS RDS as my database server, so Apache and MySQL are not on the same server. The RDS instance doesn't have public access, so I am sure all requests come only from my Apache server. The CPU usage on the Apache server is not high. Also, I searched Apache's access_log and there were not many requests at that time, and I cannot find anything wrong with them. In particular, no request looks like an injection attack.
I think it is possible that something in the code triggered this, so I searched for 'SLEEP' in all my code, but could only find it in the W3 Total Cache plugin, and those code paths are not easily reached. I turned off XML-RPC at the Apache level, so it shouldn't be an XML-RPC attack.
I know there are a lot of possibilities since I am using about twenty plugins on my site, but it is really weird that I cannot find any requests at the Apache level that could have caused this. Is it possible for requests to hit the server without being recorded in the access_log?
I am pretty new to configuring Apache and MySQL on my own and still learning these features. Thanks in advance for helping me!
We have a 2GB DigitalOcean server dedicated to running MySQL for two other PHP servers. We are using Percona Server for MySQL 5.6 on it. We have configured MySQL replication and this configuration is working fine.
Our issue is that sometimes our site monitoring tools report that some of the URLs hosted on these servers are down (maybe once every week or two). When I check, I can see that the load on the MySQL master server is very high (maybe 35-40), so the MySQL server was not responding. After that I usually restart the MySQL service; the restart brings the server load back to normal and the sites start working again.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal, and some custom applications).
Here are my questions:
Why does the server load go back down on its own after a spike happens?
Is there any way to tell whether the database is causing the issue, so that I can also identify the application responsible?
How can I identify the root cause of this issue?
Depending upon your working dataset, a 2GB server providing access for 20-25 PHP applications (WordPress, Drupal, and some custom applications) could itself be the issue.
For example, if you have a 1.4GB buffer pool (assuming all tables are InnoDB) and 10GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
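A quick way to sanity-check that ratio is to compare the on-disk data and index size against the configured buffer pool (standard information_schema columns; sizes are rounded to GB):

    -- approximate data + index size per schema
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.tables
    GROUP BY table_schema;

    -- configured InnoDB buffer pool size (bytes)
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';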
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something that you should concern yourself with when viewed in isolation.
The use of the uninterruptible state has since grown in the Linux kernel, and nowadays includes uninterruptible lock primitives. If the load average is a measure of demand in terms of running and waiting threads (and not strictly threads wanting hardware resources), then they are still working the way we want them to.
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
If the issue is happening once per week then it is starting to sound like a batch process, or cache expiration issue - too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM (Percona Monitoring and Management) should give you the insight needed to find the cause; it also works with Oracle MySQL, MariaDB, Aurora, etc. You can try the demo to see the insights you can gain:
https://pmmdemo.percona.com. The software is open source and free to use.
You can look in QAN (Query Analytics) to find the most expensive queries, whilst the Prometheus data gives insight into the host itself. There are some recommendations for getting the most from PMM, depending upon your flavour of MySQL.
We have installed WordPress on an EC2 t1.micro instance and installed BuddyPress on top of that. Everything works fine for a single user, but when two users access the site at the same time, it goes down because of a RAM issue: httpd (Apache) consumes the maximum memory. How do I overcome this? Is there any configuration I need to change in httpd.conf, or any network/traffic blocking tool I need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try setting a value in the Apache httpd.conf.
Set MaxRequestWorkers to a low number; you might try 4. When this number is low, you also won't have many simultaneous clients connecting from your Apache/PHP to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
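As an illustration only, a trimmed-down prefork section for a RAM-starved instance might look like this (the values are starting points to tune from, not recommendations):

    <IfModule mpm_prefork_module>
        StartServers            2
        MinSpareServers         1
        MaxSpareServers         2
        MaxRequestWorkers       4
        MaxConnectionsPerChild  500
    </IfModule>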
Then you probably should look at the my.cnf file for MySQL, and see what you can play around with.
Edit: MySQL, Apache, and PHP are all drawing on the same pool of RAM -- 512MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and PHP, which is probably running in the Apache server's address space). Do that.
Then, go find the memory_limit setting in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M. That will make each PHP instance use less RAM. But if your WordPress installation is complex (lots of plugins, fancy theme), it may make some pages fail to load; WordPress will report the problem as running out of memory. http://php.net/memory-limit
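In php.ini that change is a single line (64M being the first value to try):

    ; lower the per-request memory ceiling; raise it again if pages start failing
    memory_limit = 64M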
Then, jump into MySQL's my.cnf. The standard MySQL install comes with a file called my-small.cnf, which contains the configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousand rows, not hundreds of thousands. Save your old my.cnf and then copy the contents of my-small.cnf into my.cnf. Restart your MySQL server after doing that.
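The swap itself is just a backup, a copy, and a restart; the paths below are examples only and vary by distribution and MySQL version:

    # example paths -- adjust to where your MySQL config actually lives
    cp /etc/my.cnf /etc/my.cnf.backup
    cp /usr/share/mysql/my-small.cnf /etc/my.cnf
    service mysqld restart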
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.
When the system runs out of memory, Ubuntu 12.04 kills the MySQL process:
Out of memory: Kill process 17074 (mysqld) score 146 or sacrifice child
So the process ends up being killed.
This happens at peaks of server load, mainly because Apache runs wild and eats up the remaining available memory. Possible approaches could be:
Somehow change the priority of MySQL so that it is not the process killed (probably a bad fix, as something else will be killed instead).
Monitor the status of MySQL and restart it automatically whenever it is killed (the one I'm leaning towards, but I don't know how to do it).
How do you see it?
Abrupt termination of a database server is a very serious crash. You need to avoid this in a production system, because it may not restart cleanly.
The database server is a shared resource, and should almost never terminate in an unplanned fashion in production. The only thing that should cause unplanned termination is a catastrophic hardware or power failure. Most properly configured production database servers have an unplanned termination once every ten years or less frequently. Seriously.
What to do?
Fix your Apache configuration. Limit the number of worker threads and processes it can use, so it can't run wild. Learn how to do this; it's vital. See here: http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
Fix the defects in your web app that are causing Apache to run wild.
If you can, move your mysqld server to a different machine from Apache, so the two don't contend for the same hardware resources.
Configure your mysqld to limit the number of connections it will accept from Apache worker threads and other clients (a minimal snippet follows after this list). Your web app probably already handles the situation where a worker thread needs to wait for a connection. See here: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_connections
Are you on an EC2 micro instance? You need to do some serious tuning. See here: http://ubuntuforums.org/showthread.php?t=1979049
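For the connection cap mentioned in the list above, the my.cnf setting involved is just this (30 is only an example; it should comfortably exceed the number of Apache workers you allow):

    # in the [mysqld] section -- cap concurrent client connections (example value)
    max_connections = 30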
You can check the MySQL status every minute (with cron) and restart it if it has crashed:
* * * * * service mysql status | grep running || service mysql restart