Drupal site: MySQL queries not closing and entry processes limit reached

I have a Drupal site (castlehillbasin.co.nz) with a small number of users. Over the last few days it has suddenly started hitting the "entry processes limit" continually.
My host provider has shown me that there are many open queries sitting in the sleeping state, so they are not getting closed correctly. They have advised me "to contact a web-developer and check the website codes to see why the databases queries are not properly closing. You will need to optimize the database and codes to resolve the issue" (their words).
I have not made any changes or updates prior to the problem starting. I also have a duplicate of the site on my home server that does not have this issue. The host uses cPanel, and I cannot see these 'sleeping' processes through MySQL queries.
Searching has not turned up many good solutions, apart from raising the entry processes limit (which is 20), and the host will not do that.
So I am a little stumped as to how to resolve the issue. Any advice?

I think I have answered it myself. I got temporary SSH access and inspected the live queries.
It was the Flickr module: its getimagesize() call was timing out (each timeout taking two minutes). It turns out the module only makes this call for non-square image requests, so I have just displayed square images for now.
In progress issue here: https://www.drupal.org/node/2547171
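For anyone hitting the same wall: if you can get even temporary database access, the sleeping connections are visible from a plain MySQL client. A generic sketch, not specific to this site:

    -- Every current connection and what it is doing; connections stuck
    -- in the "Sleep" state with a large Time value are the ones being
    -- held open.
    SHOW FULL PROCESSLIST;

    -- The same data in filterable form (information_schema, MySQL 5.1+):
    SELECT id, user, db, command, time, state
    FROM information_schema.processlist
    WHERE command = 'Sleep'
    ORDER BY time DESC;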

Related

WordPress: too many queries to the database

I have just moved a WP installation from one hosting provider to another. Everything went fine except for one problem with the new installation. Note that I have moved from a regular VPS to a fairly powerful and fast dedicated machine.
The thing is that the website is now slower than it was on the previous server. It takes 6-7 seconds to load a page, and according to Chrome's Dev Tools network panel, there is a 3-4 second wait to get the first response byte (TTFB), which is insane.
I have tried the following with no success:
Review database for anomalies
Disable all plugins (and delete them)
Disable the template (and delete it)
With these last two actions, I lowered the loading time to 5-6 seconds, which is still a lot for a small site (a few hundred posts and 50-60 pages) with no comments enabled. I still have the 3-4 second TTFB.
After that, I installed the Query Monitor plugin and found out that, on every page load, WP performs hundreds of database queries (ranging from 400 to 800, and in some cases even 1500). OMG!
Honestly, I am quite lost here. On the one hand, I have this strange database behavior I cannot really understand; on the other, I cannot help wondering how the site was faster on the previous, slower server.
By the way, I have moved from MySQL to MariaDB, which should be even faster. Indexes were kept when dumping and importing the file. I am lost. :(
Any help is greatly appreciated. Apologies for my English (not my first language), and please let me know if there is some important information missing. I will be glad to provide anything necessary to help troubleshoot this.
Thanks in advance!
I think you should optimize your MySQL configuration (my.cnf on Linux or my.ini on Windows). To find problems in MySQL you can try running the MySQLTuner script: https://github.com/major/MySQLTuner-perl.
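If you cannot run the script on the host, the counters MySQLTuner reads are also available from any MySQL client. A hedged sketch of the kind of checks involved:

    -- Settings commonly worth reviewing (names vary by MySQL/MariaDB version):
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
    SHOW VARIABLES LIKE 'table_open_cache';

    -- Runtime counters that point at pressure points:
    SHOW GLOBAL STATUS LIKE 'Slow_queries';
    SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';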

Application Slow after a couple of uses

I created an application that works perfectly on my computer, but when I uploaded it to start server tests it became very slow, especially after a couple of uses (the first minutes work fine). It even becomes unresponsive: as I move through a tree table, a form should be updated from the database, but it stops working after a while.
I'm using an Amazon EC2 Linux server and a MySQL database. I checked whether the database connections were the problem, but I'm using no more than 7 out of the 150 max connections.
Is this a common problem?
Any ideas on how to solve this?
Thanks!!!
Note: this is a copy of an internal Vaadin forum thread: https://vaadin.com/forum#!/thread/4816326 . I hope it is not against the forum rules to do this.
It sounds like you may have a memory leak somewhere in your application that your computer is able to sustain but your server is not. I would suggest trying some load testing on another machine to see which actions are causing it to spin out.
You can have a look at this SO answer to see how to do that:
https://stackoverflow.com/a/46227692/460802
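Before load testing, it may also be worth confirming the connection picture from the MySQL side rather than from the application. A minimal sketch:

    -- Connections open right now vs. the configured ceiling:
    SHOW STATUS LIKE 'Threads_connected';
    SHOW VARIABLES LIKE 'max_connections';

    -- High-water mark since the server last started:
    SHOW STATUS LIKE 'Max_used_connections';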

How to find out what is causing a slowdown of the application?

This is not the typical question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.
Situation
We have this web application that uses Zend Framework, so runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching.
The application has an unusual usage and load pattern. It is a mobile web application. Every full hour, a cronjob looks through the database for users that have information waiting or an action to take, and sends this information to an (external) notification server, which pushes the notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.
Problem
In the last few weeks, usage of the application has really started to grow. In the last few days we have encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests; it just gets slower and slower, and often takes 20 minutes to recover, until the same thing starts again at the full hour.
We have extensive monitoring in place (New Relic, collectd), but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in:
Can you help me figure out what's wrong and maybe how to fix it?
Additional information
The server is a 16-core Intel Xeon (8 cores with hyperthreading, I think) with 12 GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11.
If any configuration information would help analyze the problem, just comment and I will add it.
Graphs
(The original post embedded graph images here: a phpinfo() dump plus APC and memcache status pages; collectd graphs for processes, CPU, Apache, load, MySQL, virtual memory, and disk; and New Relic graphs for application performance, server overview, processes, network, and disks.)
(Sorry the graphs are GIFs and not from the same time period, but I think the most important info is in there.)
The problem is almost certainly MySQL-based. If you look at the final graph, mysql/mysql_threads, you can see the number of threads hit 200 (which I assume is your setting for max_connections) at 20:00. Once max_connections has been hit, things do tend to take a while to recover.
Using mtop to monitor MySQL just before the hour will really help you figure out what is going on, but if you cannot install it you can just use SHOW PROCESSLIST;. You will need to establish your connection to MySQL before the problem hits. You will probably see lots of processes queued up with only one process currently executing; that will be the most likely culprit.
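A sketch of what that pre-hour monitoring session might look like from a plain mysql client (mtop shows the same data, refreshed automatically):

    -- Run repeatedly just before the top of the hour. A healthy server
    -- shows mostly "Sleep" threads; a blocked one shows many threads
    -- piled up in lock-wait states behind one long-running statement.
    SHOW FULL PROCESSLIST;

    -- As a last resort, a stuck thread can be killed by its Id:
    -- KILL <thread_id>;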
Having identified the query causing the problems, you can attack your code. Without understanding how your application actually works, my best guess is that using an explicit transaction around the problem query (or queries) will probably solve the problem.
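To illustrate the transaction idea: the point is to make the cronjob's burst of statements commit as one unit instead of paying per-statement overhead and lock churn. The table and column names below are purely hypothetical:

    START TRANSACTION;
    -- hypothetical: mark the batch of users just notified
    UPDATE users SET notified_at = NOW() WHERE pending_notification = 1;
    UPDATE notifications SET sent = 1 WHERE sent = 0;
    COMMIT;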
Good luck!

Solutions to overcome "site goes offline due to MySQL 'max_user_connections' error"

I have been working on an eCommerce site (using Drupal). Until a few days ago the site was working fine with no issues, but now it regularly goes offline with the error message ('max_user_connections').
I was using some custom code containing mysql_connect and mysql_query; I have now changed everything into a module and no custom queries are left as such. On some pages, data is populated from two different databases, and to handle two databases on the same page I am using Drupal's db_set_active() function.
I have also discussed this with the hosting provider; they increased the 'connection_limit', but the error still occurs. What are the possible reasons for this kind of issue, and what are the ways to handle it?
In this case the DBMS is not able to serve all incoming connection requests to the database.
You can check the current count of connections with SHOW FULL PROCESSLIST (seeing other users' connections requires the PROCESS privilege).
You now have two choices: alter your application logic so that overall connections are decreased, or try to raise the max_connections system variable to allow your DBMS to serve more connections (which requires the SUPER privilege).
But since your provider has already increased the 'connection_limit', you should go for the first approach (alter your application logic).
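For reference, this is what the second option looks like at the SQL level, with the caveat that on shared hosting you will usually not have the privilege to do it yourself. A sketch only:

    -- Check the current ceilings:
    SHOW VARIABLES LIKE 'max_connections';
    SHOW VARIABLES LIKE 'max_user_connections';

    -- Raise the global limit at runtime (requires SUPER; the change is
    -- lost on restart unless also set in my.cnf):
    SET GLOBAL max_connections = 300;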

How to benchmark and optimize a really database-intensive Rails action?

There is an action in the admin section of a client's site, say Admin::Analytics (which I did not build but have to maintain), that compiles site usage analytics by performing a couple dozen rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is compiled. But lately the bottleneck has become so bad that, when the action is accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350% CPU on the quad-core production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, using the production data, it loads in about 10-12 seconds (and utilizes ~150% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report the time these requests took, which averages out to a few minutes apiece (although in one instance they seemed to have stalled mysqld for much longer, prompting me to ask our host to reboot mysqld just to get our site back up).
I suppose I could raise the log level in production to capture info on the database queries being performed by Admin::Analytics#index, but I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller, and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
In particular, query_reviewer and newrelic have been life-savers for me.
I appreciate all the help with this, but what turned out to be the fix was implementing a couple of indexes on the analytics table to cater to the queries in this action. A simple Rails migration added the indexes, and the action now loads in less than a second, both on my dev box and in prod!
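For anyone else in the same spot, the SQL equivalent of such a migration is just CREATE INDEX on the columns the report's queries filter and group by. The table and column names below are hypothetical, not from the actual app:

    -- Hypothetical: index the column the report filters on...
    CREATE INDEX index_analytics_on_created_at ON analytics (created_at);
    -- ...and a composite index matching a WHERE + GROUP BY pattern:
    CREATE INDEX index_analytics_on_user_id_and_event
        ON analytics (user_id, event);

    -- EXPLAIN shows whether the new index is actually used:
    EXPLAIN SELECT event, COUNT(*) FROM analytics
    WHERE created_at > '2011-01-01' GROUP BY event;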