MySQL Connections staying open (I think...) - mysql

Alright, so this morning I got a giant spam of automated emails from my vBulletin website with MySQL errors stating:
`Can't connect to MySQL server on '127.0.0.1'`
`Too many connections`
`User username already has more than 'max_user_connections' active connections`
I've never had this before on my host, and I don't get that many visitors on my two sites. The site running vBulletin gets between 300 and 700 daily visits. My second site is one I put together myself, so that's probably where the connections are staying open. I started advertising it yesterday, but it doesn't get many visitors either, so I don't think it's simply too many users connecting; I think it's connections staying open or something...
Is there some way to figure out the source of this, or to locate where connections are staying open too long? Any information would be helpful, actually.
Thanks

In a MySQL shell you could run `show processlist;`, which will show you the currently running processes, which user is logged in, what database they have selected, and what host they're coming from. This might give you some clues about the origin of your excess connections. Maybe you can see queries that run for a very long time (combine that with an impatient user repeatedly hitting refresh).
Keep in mind that if any of your code uses persistent connections there will be a bunch of idle processes in that list, which is perfectly normal in that case.
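For example, the sort of check this suggests could look like the following (just a sketch; the 30-second threshold is an arbitrary example, not a recommended value):

```sql
-- Show every current connection, including the full query text
SHOW FULL PROCESSLIST;

-- Or filter via information_schema for anything that has been running
-- for a while (30 seconds here is only an illustrative cutoff)
SELECT id, user, host, db, command, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 30
ORDER BY time DESC;
```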

Related

Log DB connections opening and closing with Laravel

I have a Laravel app where I've encountered a [1040] Too many connections error a few times over the past few weeks. I'm very surprised, because my max_connections is at the default value (151) and Google Analytics shows I haven't had more than 30 users on the website at the same time.
I'm struggling to debug where the issue might come from. I'm thinking a starting point would be to monitor when Laravel opens and closes connections to the database, to see if I can identify connections that remain open longer than they should.
Is there a way to detect when Laravel actually opens/closes a connection?
(Any other idea on how to find where the issue comes from is also welcome!)
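In the meantime, the database side can at least confirm whether connections really are piling up; here is a sketch using standard MySQL commands (nothing Laravel-specific is assumed):

```sql
-- Current and peak connection counts, plus failed connection attempts
SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Threads_connected', 'Max_used_connections', 'Aborted_connects');

-- The ceiling the [1040] error is running into
SHOW VARIABLES LIKE 'max_connections';

-- Which users are holding the open connections right now
SELECT user, COUNT(*) AS open_connections
FROM information_schema.processlist
GROUP BY user
ORDER BY open_connections DESC;
```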

How to force close Access application when user lost connection to back-end

The Question:
Is there some way to force close Access so it doesn't need access to the back-end server in order to exit?
The Situation:
I have an Access 2016 DB. The back-end is on a networked share drive which is only accessible when connected to the LAN or on VPN. On load there is a ping test to the server: if the server is found, it copies the tables to local tables; if not, it just tells the user it can't connect and continues on using the old data. The users travel a lot and can't always be on the VPN, so the idea is that the data they have isn't more than a few days old. BTW, did I mention the users are only consumers of information and not contributors, so I don't care that they can't write to the back-end. The tables contain a few hundred thousand records; the application just puts them into nice, easy-to-search and cross-reference reports.
The Problem:
While this loads and runs really nicely regardless of whether they are connected to the LAN or not, it will NOT close if they don't have a connection to the server. It doesn't produce an error, which I could easily handle; it just hangs. Task Manager won't even close it.
Attempted Solutions:
I tried to unlink the tables and just use a temporary connection to the back-end to load the tables when I need them at the beginning. However, this meant the user was prompted by the Microsoft Trust Center about 8 times every single time they loaded the application, unless I have each of them actually open the back-end DB themselves and give them the password to do that, and none of that is practical.
Access doesn't play well with a remote BE. If you want to go remote with Access, you have 2 options:
Connect via RDS: the user connects to the server via Remote Desktop, so everything is "local" and there are no issues with lost connections. As long as the RDP connection holds, everything is smooth, and more importantly you don't get BE disconnects that cause corruption or data loss. (Hint: using the RemoteApp technology, it will seem to the end user like he/she is working locally. I am using it and it's great.)
Switch the BE: as I said, it is not wise to use an Access BE over a remote connection. Switching to MSSQL/MySQL/PostgreSQL etc. will give you true remote connection capability.
After playing with all the settings for a few days, I finally figured out what my problem was.
In an effort to test different settings to see if I could reduce the file size, at one point I had turned on "clear cache on exit" in the Current Database settings. Turning this off fixed the problem. I had forgotten that was on, so it turned out not to be a programming issue after all.

MariaDB: see which users were deleted

So this morning I was looking at our company's database users and it was a great mess, with some big potential security holes. Since most colleagues were around, I decided to gather them together and decide which users to delete.
Now, I forgot that one colleague uses Power BI (with a lot of different dashboards) and he wasn't around, and it will take some time for him to replace all the data sources with a new user. So I was wondering if there is any way I can find the users I deleted, so I can see which one he was using, since you can't see which credentials he was using in Power BI.
If the general query log was enabled on your server, then you have a log of each query and the user that executed it.
Alternatively, enable the error log and set `log_warnings` to 1 or higher, then ask the user to try to refresh the dashboard. The access-denied event will be logged in the error log.
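For reference, a sketch of what enabling those two things could look like at runtime (`log_output = 'TABLE'` is just one convenient choice; `log_warnings` applies to MariaDB and older MySQL versions):

```sql
-- Turn on the general query log and keep it queryable as a table
SET GLOBAL general_log = 'ON';
SET GLOBAL log_output  = 'TABLE';   -- entries land in mysql.general_log

-- Raise warning verbosity so access-denied events reach the error log
SET GLOBAL log_warnings = 2;

-- After the dashboard refresh, check which user/host it connected as
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 50;
```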

Drupal site: mysql queries not closing and entry resource limit reached

I have a drupal site (castlehillbasin.co.nz) that has a small number of users. Over the last few days it has suddenly hit the "entry processes limit" continually.
My host provider has shown me that there are many open queries that are sleeping, so are not getting closed correctly. They have advised "to contact a web-developer and check the website codes to see why the databases queries are not properly closing. You will need to optimize the database and codes to resolve the issue". (their words)
I have not made any changes or updates prior to the problem starting. I also have a duplicate on my home server that does not have this issue. The host uses cPanel, and I cannot see these 'sleeping' processes through MySQL queries.
Searching is not turning up many good solutions, except raising the entry processes limit (which is 20), and the host will not do that.
So I am a little stumped as to how to resolve the issue, any advice?
I think I have answered it myself. I got temporary SSH access and inspected the live queries.
It was the Flickr module and the getimagesize() call timing out (which takes 2 minutes). It turns out it only uses this call for non-square image requests, so I have just displayed square images for now.
In progress issue here: https://www.drupal.org/node/2547171
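For anyone hitting the same thing, the kind of inspection that surfaces those sleeping connections could look like this (a sketch; the two-minute threshold is arbitrary):

```sql
-- Connections that have been idle ("Sleep") for more than two minutes
SELECT id, user, host, db, time AS idle_seconds
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 120
ORDER BY time DESC;
```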

Connections Option in RDS MySQL and best way to handle many connections

In the image below, it shows the current activity as 99 Connections.
How exactly is that counted?
RDS is accessed through Node.js web services and a PHP website. Every time I do some operations I close the connection, but after closing, the count doesn't decrease; it just keeps increasing. Later I got the too many connections error message once the connections reached 608. I restarted and then it worked. I have never seen the count decrease.
So what is the best way I can handle it?
Below is the image showing what I get when I run SHOW FULL PROCESSLIST;
PHP-based web pages that use a MySQL connection generally exit as soon as they're done rendering page content, so the connection gets closed whether you explicitly call a mysqli or PDO close method or not.
The same is not true of Node services, which run for a long time and can therefore easily leak resources. It's probable that you're opening connections, but not closing them, in your Node service, which would produce the sort of behavior you're seeing here. (This is an easy mistake to make, especially for those of us whose background is largely in more ephemeral PHP scripts.)
One good way to identify the problem is to connect to the MySQL instance via Workbench or the console monitor and issue SHOW FULL PROCESSLIST; to get a list of currently active connections, their originating hosts, and the queries (if any) they are executing. This may help you narrow down the source of the leaking connections, so that you can identify the code at fault and repair it.
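To make the leak stand out in that list, grouping the processlist by user and client host can help; something along these lines (just a sketch against information_schema.processlist):

```sql
-- Break the open connections down by user and client host; a leaking
-- Node service tends to show up as one host holding many Sleep threads
SELECT user,
       SUBSTRING_INDEX(host, ':', 1) AS client_host,
       COUNT(*)                      AS open_connections,
       SUM(command = 'Sleep')        AS sleeping,
       MAX(time)                     AS max_seconds_in_state
FROM information_schema.processlist
GROUP BY user, client_host
ORDER BY open_connections DESC;
```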