Log DB connections opening and closing with Laravel - mysql

I have a Laravel app where I've encountered a [1040] Too many connections error a few times over the past few weeks. I'm very surprised, because my max_connections is at the default value (151), and Google Analytics shows I haven't had more than 30 concurrent users on the website.
I'm struggling to debug where the issue might come from. I'm thinking a starting point would be to monitor when Laravel opens and closes connections to the database, to see if I can identify connections that remain open longer than they should.
Is there a way to detect when Laravel actually opens/closes a connection?
(Any other idea on how to find where the issue comes from is also welcome!)
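One approach, sketched below, is to log each time a connection is actually established. This assumes a Laravel version recent enough (roughly 8.x and later) to fire the `Illuminate\Database\Events\ConnectionEstablished` event; on older versions the same idea has to be approximated with `DB::listen()` query logging.

```php
<?php

namespace App\Providers;

use Illuminate\Database\Events\ConnectionEstablished;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Fired each time Laravel actually opens a new PDO connection
        // (connections are lazy: nothing is logged for requests that
        // never touch the database).
        Event::listen(ConnectionEstablished::class, function (ConnectionEstablished $event) {
            Log::debug('DB connection opened', [
                'connection' => $event->connectionName,
            ]);
        });
    }
}
```

Note there is no matching "connection closed" event: Laravel tears the connection down when the underlying PDO object is destroyed at the end of the request, so you would still pair this log with MySQL's `SHOW PROCESSLIST` to spot connections that outlive their request.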

Related

Codeigniter 3 Sessions and RDS - Random Logouts

Can someone explain this, or tell me if some kind of 'setting' in CodeIgniter will solve it?
System: Amazon AWS EC2 (AWS Linux, CentOS 6.9)
Database: Amazon AWS RDS (MySQL-compatible 5.6.34)
Framework: CodeIgniter 3.0.6
PHP: 5.6
Issue: When running my website, I get random logouts. After about five minutes of use, the session is cleared and I find myself logged out (the exact interval varies), and this continues for the rest of the day.
This does NOT happen when CodeIgniter sessions are set to files.
Setting them to database causes it (logouts happen within 24 hours).
My pages use active AJAX, which hits the CodeIgniter back end all the time. I had read that an AJAX race condition can cause this, but after reading the CI code I noticed it's related to this call:
```php
if ($this->_db->query("SELECT GET_LOCK('".$arg."', 300) AS ci_session_lock")->row()->ci_session_lock)
```
It appears that this 'locks' during an AJAX race condition, and that causes CI to drop my session cookie info (and of course logs me out via a `$this->fail()` call). We are using RDS, so I suspect that GET_LOCK on RDS behaves slightly differently than GET_LOCK on a stock MySQL system.
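For what it's worth, that contention is easy to reproduce outside CodeIgniter. A minimal sketch with plain PDO (host, credentials, and the lock name are placeholders): `GET_LOCK()` returns 1 when the lock is acquired and 0 when the wait times out, and that 0 is exactly what the session driver turns into a `fail()`.

```php
<?php

// Two separate PDO connections competing for the same named lock.
$dsn = 'mysql:host=127.0.0.1;dbname=test';
$a = new PDO($dsn, 'user', 'pass');
$b = new PDO($dsn, 'user', 'pass');

// Connection A acquires the lock: GET_LOCK() returns 1.
var_dump($a->query("SELECT GET_LOCK('ci_session:abc123', 1)")->fetchColumn()); // "1"

// Connection B waits out the 1-second timeout, then gets 0 (not acquired).
// That 0 is what CodeIgniter's session driver treats as failure.
var_dump($b->query("SELECT GET_LOCK('ci_session:abc123', 1)")->fetchColumn()); // "0"

// A releases; B would now succeed.
$a->query("SELECT RELEASE_LOCK('ci_session:abc123')");
```

Named locks are released when the holding connection terminates, so if RDS is slower to notice dead connections than a local MySQL, it is plausible the second request times out more often there.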
Anyone have thoughts / ideas? And yes, I've tried a ton of combinations of sess_expiration and sess_time_to_update; the only way to fix it is to go back to files.
As I expect my system to run on multiple servers in the future, file-based sessions might not be desirable (if you know CodeIgniter you know why; it's too complicated to explain here).
Can anyone give some suggestions / answers on why RDS has an issue with GET_LOCK?

Drupal site: mysql queries not closing and entry resource limit reached

I have a Drupal site (castlehillbasin.co.nz) with a small number of users. Over the last few days it has suddenly started hitting the "entry processes limit" continually.
My host provider has shown me that there are many open queries that are sleeping, so are not getting closed correctly. They have advised "to contact a web-developer and check the website codes to see why the databases queries are not properly closing. You will need to optimize the database and codes to resolve the issue". (their words)
I have not made any changes or updates prior to the problem starting, and a duplicate of the site on my home server does not have this issue. The host uses cPanel, and I cannot see these 'sleeping' processes through MySQL queries.
Searching is not turning up many good solutions, except raising the entry process limit (which is 20), and the host will not do that.
So I am a little stumped as to how to resolve the issue. Any advice?
I think I have answered it myself. I got temporary SSH access and inspected the live queries.
It was the Flickr module: a getimagesize() call was timing out (taking two minutes to fail). It turns out the module only uses this call for non-square image requests, so I have just displayed square images for now.
In progress issue here: https://www.drupal.org/node/2547171
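For anyone hitting the same thing: the sleeping MySQL connections are a side effect of the PHP worker blocking inside the remote getimagesize() call while still holding its database connection. A hedged sketch of one way to bound that call (the URL is a placeholder; remote fetches through the fopen wrappers respect default_socket_timeout):

```php
<?php

// Bound the remote fetch so a hung image host fails fast instead of
// blocking the PHP worker (and the "Sleep"ing MySQL connection it holds).
$previous = ini_set('default_socket_timeout', 5); // seconds

$size = @getimagesize('https://farm1.staticflickr.com/example/photo.jpg');
if ($size === false) {
    // Remote host slow or unreachable: fall back rather than hang.
    $size = null;
}

ini_set('default_socket_timeout', $previous);
```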

MySQL Connections staying open (I think...)

Alright, so this morning I got a flood of automated mails from my vBulletin website with MySQL errors stating:
```
Can't connect to MySQL server on '127.0.0.1'
Too many connections
User username already has more than 'max_user_connections' active connections
```
I've never had this before on my host, and I don't get that many visitors on my two sites. One site, running vBulletin, gets between 300 and 700 daily visits; the second is one I put together myself, so that's probably the source of the connections staying open. I started advertising it yesterday, but it doesn't get many visitors either, so I don't think it's just too many users connecting; I think it's connections staying open, or something similar.
Is there some way to figure out the source of this, or where connections stay open too long? Any information would be helpful, actually.
Thanks
In a MySQL shell you could run `SHOW PROCESSLIST;`, which will show you the currently running processes: which user is logged in, which database they have selected, and which host they're coming from. This might give you some clues to the origin of your excess connections. Maybe you can spot queries that run for a very long time (combine that with an impatient user repeatedly hitting refresh).
Keep in mind that if any of your code runs with persistent connections there will be a bunch of idle processes in that list, which is perfectly normal in that case.
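If you'd rather watch for this from code than from a shell, something along these lines can flag long-idle connections (credentials are placeholders, and the account needs the PROCESS privilege to see other users' threads):

```php
<?php

// Flag connections that have sat in "Sleep" for over a minute.
$pdo = new PDO('mysql:host=127.0.0.1', 'user', 'pass');

foreach ($pdo->query('SHOW FULL PROCESSLIST', PDO::FETCH_ASSOC) as $p) {
    if ($p['Command'] === 'Sleep' && (int) $p['Time'] > 60) {
        printf(
            "id=%s user=%s host=%s db=%s idle %ss\n",
            $p['Id'], $p['User'], $p['Host'], $p['db'] ?? '-', $p['Time']
        );
    }
}
```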

Solutions to overcome "site goes offline due to MySQL 'max_user_connections' error"

I have been working on an eCommerce site (using Drupal). Until a few days ago the site was working fine and there were no issues, but nowadays it frequently goes offline with the error message ('max_user_connections').
I was using some custom code containing mysql_connect and mysql_query; I have now moved everything into a module, so no custom queries are left as such. The error is still there. On some pages, data is populated from two different databases, and to handle two databases on the same page I am using Drupal's db_set_active() function.
I have also discussed this with the hosting provider; they increased the 'connection_limit', but the error still occurs. What are the possible reasons for this kind of issue, and what are the ways to handle it?
In this case the DBMS is not able to serve all incoming connection requests to the database.
You can check the current connection count with "SHOW FULL PROCESSLIST" (seeing other users' threads requires the PROCESS privilege).
You now have two choices: alter your application logic so that overall connections are decreased, or try to raise the max_connections system variable to allow your DBMS to serve more connections (changing it at runtime requires the SUPER privilege).
But if your provider has already increased the 'connection_limit', you should go for the first approach (altering your application logic).
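One application-side detail worth checking here, since the question mentions db_set_active(): Drupal (the 7-era API is assumed in this sketch) opens one connection per database key and keeps it open for the rest of the request, so every page that touches both databases holds two connections at once, halving the headroom under max_user_connections. The switching pattern itself, with 'secondary' standing in for a key defined in settings.php:

```php
<?php

// Switch to the secondary database, query, then switch back.
// Drupal keeps the connection for each key open for the rest of the
// request; db_set_active() only changes which one queries run against.
db_set_active('secondary');
$titles = db_query('SELECT nid, title FROM {node} WHERE status = 1')->fetchAllKeyed();
db_set_active(); // always switch back to the default connection
```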

If a secondary database crashes, will it crash the primary ColdFusion server?

Right now we are dealing with a bit of a conundrum in my corporate environment: we are being blamed for a server crash, but I'm not 100% sure we are the culprit. Here's the server environment: we have a primary ColdFusion server and its MSSQL database. We also have a secondary database (MySQL), hosted on a different cloud, which is used for miscellaneous tasks. The main reason the system is structured this way is that the primary server is operated by our Content Management System, so we are not allowed to modify it, add tables, or perform any operations like that; we use the alternate database for those. By design there are no mission-critical items on it, and pages are built so that if the alternate DB returns no rows, they continue to render properly.
Basically, I am being told that when the alternate MySQL server goes down, or stops accepting connections, it takes the entire primary cloud with it, including five other sites hosted on it. I do not have access to the primary ColdFusion or database logs, because the CMS provider will not give them to me, so I can only judge the validity of the explanation they are giving me.
The explanation for this behavior coming from our CMS provider is that when ColdFusion queries a database it creates a thread, and that if the DB doesn't respond the threads continue to stack. Eventually the processor is capped and the server goes down. Is that an accurate explanation of how ColdFusion operates? If so, is there any way to prevent it, possibly with shorter DB timeouts and the like? Or is the entire explanation posed by our CMS provider a red herring, with something else really causing the crashes?
Any guidance would be greatly appreciated.
Question answered - Documents found
http://kb2.adobe.com/cps/180/tn_18061.html
http://www.adobe.com/devnet/server_archive/articles/cf_timeouts_and_unresponsive_requests.html
Setting the request timeout globally does not time out internal processes waiting on external resources (cfquery, cfhttp, etc.). The only way to time those out is by manually setting the timeout attribute. Not setting it could result in thread overload and a crashed server, as was occurring with us.
http://kb2.adobe.com/cps/180/tn_18061.html
From reading bullet point 3 and depending on your traffic, your CMS guy might be right.
Also from the link above:
If the database is down and unresponsive, how many times will ColdFusion Server try to reconnect to the database? Will it eventually restart the ColdFusion Server?
If the database is down or the network link to the database goes down when a query request occurs, the connection will time out (you can customize the timeout period with the timeout attribute in the cfquery tag) and return an error to the user. Please note that the ability to set the timeout for the connection depends on which driver you are using. You can trap this error and handle it programmatically with the cftry/cfcatch tags.
The catch here is that the timeout attribute on the cfquery tag is not compatible with the MySQL ODBC driver. I could not find what the default timeout is; let's say 5 minutes. If you get more than one request in those 5 minutes, it does appear that the connections will start to 'pile up'.
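Translating the same advice out of ColdFusion terms, the principle is to make a dead database fail fast instead of letting requests stack up behind it. A hedged PHP/PDO sketch of that principle (host and credentials are placeholders; for PDO_MYSQL the ATTR_TIMEOUT attribute governs the connection timeout):

```php
<?php

// Fail fast on a dead or unreachable database rather than letting
// each incoming request block and pile up behind it.
try {
    $pdo = new PDO('mysql:host=db.example.com;dbname=app', 'user', 'pass', [
        PDO::ATTR_TIMEOUT => 3, // seconds to wait for the connection
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);
} catch (PDOException $e) {
    // Degrade gracefully: render without the secondary data, mirroring
    // how the pages above were designed to render when the alternate
    // DB returns no rows.
    $pdo = null;
}
```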