How to fix: mysql_connect(): Too many connections - mysql

I am getting the following error:
mysql_connect(): Too many connections
It has completely shut down my site, which has been running seamlessly for several years.
Note: I have shared hosting with GoDaddy.
How do I fix this?
ALSO: is there a way to close all connections and restart when on a shared hosting plan?

This is a Technical Response
You will get this "too many connections" error when connecting to MySQL after the server has reached its configurable limit on the number of concurrent client connections.
So, the proper way to fix this is:
Connect directly to the MySQL server and run the query SET GLOBAL max_connections = 1024; to change the connection limit at runtime (no downtime).
To make the change permanent, edit /etc/my.cnf (or similar) and add the line max_connections = 1024 within the [mysqld] section; then restart MySQL if you couldn't apply the change live.
The limit of 1024 chosen here is purely arbitrary; use whatever limit you want. You can inspect your current limit with the query SHOW VARIABLES LIKE 'max_connections';. Keep in mind that these limits exist for a good reason: they prevent unnecessary overload of your backend database. So always choose a sensible limit.
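As a minimal sketch, assuming you have an administrative account, the same change can be made from a PHP script (host and credentials below are placeholders):

<?php
// Sketch: inspect and raise max_connections at runtime.
// Requires the SUPER privilege; host and credentials are placeholders.
$link = mysqli_connect('localhost', 'admin_user', 'admin_password');
if (!$link) {
    die('Connect failed: ' . mysqli_connect_error());
}

// Inspect the current limit.
$result = mysqli_query($link, "SHOW VARIABLES LIKE 'max_connections'");
$row = mysqli_fetch_assoc($result);
echo "Current limit: {$row['Value']}\n";

// Raise the limit live; no restart needed.
mysqli_query($link, 'SET GLOBAL max_connections = 1024');
mysqli_close($link);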
However, for those steps you are required to have direct access to your database MySQL server.
Since you said you are using GoDaddy (I do not know them that well), you are left with the option of contacting your service provider (i.e. GoDaddy). They will likely see this in their logs anyway.
Possible Root Causes
This of course means that more clients are attempting to connect to the MySQL server at the same time than the configured software limit allows.

Most probably, you have been the subject of a DDoS attack.
People on this forum complain about exactly the same thing with exactly the same provider.
The answer is this:
VB told me it was a DOS attack - here is their message:
This is not an 'exploit'. This is a DoS attack (Denial of Service). Unfortunately there is nothing we can do about this. DoS attacks can only be fought at the server or router level, and this is the responsibility of your host. Instead of doing this they have decided to take the easy way out and suspend your account.
If you cannot get them to take this seriously, then you should look for another host. Sorry for the bad news.
A possible workaround is this: if your connection fails with mysql_connect(): Too many connections, don't quit; instead, sleep for half a second and try to connect again, exiting only after 10 attempts have failed.
It's not a solution, it's a workaround.
This of course will delay your page loading, but it's better than an ugly too many connections message.
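A rough sketch of that retry loop (credentials are placeholders; note that PHP's sleep() only takes whole seconds, so usleep() is used for the half-second pause):

<?php
// Sketch: retry the connection up to 10 times, pausing half a second
// between attempts, and give up only after all attempts fail.
$link = false;
for ($attempt = 0; $attempt < 10; $attempt++) {
    $link = @mysql_connect('localhost', 'db_user', 'db_password');
    if ($link) {
        break;
    }
    usleep(500000); // half a second
}
if (!$link) {
    die('The database is too busy right now, please try again later.');
}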
You could also come up with some kind of method that tells bots and browsers apart.
For example, set a salted SHA1 cookie, redirect to the same page, then check that cookie and connect to MySQL only if the client has passed the test.
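A possible sketch of such a test (the cookie name and salt are made-up placeholders, and a determined bot can of course defeat this):

<?php
// Sketch: only clients that store and return a salted cookie get a
// database connection. Cookie name and salt are placeholders.
$salt = 'some-secret-salt';
$expected = sha1($salt . $_SERVER['REMOTE_ADDR']);

if (!isset($_COOKIE['probe']) || $_COOKIE['probe'] !== $expected) {
    // First visit (or a bot ignoring cookies): set the cookie and
    // redirect back to the same page without touching MySQL.
    setcookie('probe', $expected);
    header('Location: ' . $_SERVER['REQUEST_URI']);
    exit;
}

// The cookie came back correctly: proceed with the MySQL connection.
$link = mysql_connect('localhost', 'db_user', 'db_password');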

Another thing that can cause this error is the database running out of space. I recently had this occur, and the issue wasn't connections; it was disk space. Hope this helps someone else!

Do you close your connections when you're done with them? Are you using some type of connection pooling? It sounds like you're opening connections and not closing them.
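For example, with the legacy mysql_* API used in the question, the link can be released explicitly as soon as the work is done (credentials are placeholders):

<?php
// Sketch: open, query, and explicitly close as soon as possible,
// instead of holding the connection for the rest of the script.
$link = mysql_connect('localhost', 'db_user', 'db_password');
mysql_select_db('my_database', $link);
$result = mysql_query('SELECT 1', $link);
// ... use $result ...
mysql_close($link);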
EDIT: Already answered by Quassnoi. In the case it is a DDoS, and you're using shared hosting, you may be left with just contacting your host and working it out with them. Unfortunately this is a risk when you don't have control of your whole system.

Consider using mysql_pconnect(). Your host may have added some sort of throttling for connections, like a maximum of 100 per 20 minutes or something similarly odd.
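Switching is a drop-in change (placeholder credentials); PHP will reuse an already-open persistent link instead of opening a new connection on every request:

<?php
// Sketch: mysql_pconnect() reuses an existing persistent link when one
// is available, rather than opening a fresh connection per request.
$link = mysql_pconnect('localhost', 'db_user', 'db_password');
if (!$link) {
    die('Could not connect: ' . mysql_error());
}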

First, check your current connection limit:
show variables like 'max_connections';
To list all variables:
show variables;
Then connect to your MySQL server and raise the connection limit:
SET GLOBAL max_connections = 1001;

Related

Connections Option in RDS Mysql and best way to handle many connections

The image below shows the current activity as 99 connections. How exactly is this counted?
RDS is accessed through Node.js web services and a PHP website. Every time I perform some operations, I close the connection. But after closing, the count doesn't decrease; rather, it keeps increasing. Later I got the 'too many connections' error message once the count reached 608. After I restarted, it worked, but I have never seen the count decrease.
So what is the best way to handle this?
Below is the image showing what appears when I run SHOW FULL PROCESSLIST;
PHP-based web pages that use a MySQL connection generally exit as soon as they're done rendering page content, so the connection gets closed whether you explicitly call a mysqli or PDO close method or not.
The same is not true of Node services, which run for a long time and can therefore easily leak resources. It's probable that you're opening connections, but not closing them, in your Node service, which would produce the sort of behavior you're seeing here. (This is an easy mistake to make, especially for those of us whose background is largely in more ephemeral PHP scripts.)
One good way to identify the problem is to connect to the MySQL instance via Workbench or the console monitor and issue SHOW FULL PROCESSLIST; to get a list of currently active connections, their originating hosts, and the queries (if any) they are executing. This may help you narrow down the source of the leaking connections, so that you can identify the code at fault and repair it.
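If you'd rather script the check than use Workbench, a small sketch like this (placeholder host and credentials) tallies the open connections per originating host, which makes a leaking client stand out:

<?php
// Sketch: count current MySQL connections per originating host to spot
// a client that is leaking connections. Credentials are placeholders.
$db = new mysqli('my-rds-host', 'admin_user', 'admin_password');
$result = $db->query('SHOW FULL PROCESSLIST');
$byHost = array();
while ($row = $result->fetch_assoc()) {
    $host = preg_replace('/:\d+$/', '', $row['Host']); // strip the port
    $byHost[$host] = isset($byHost[$host]) ? $byHost[$host] + 1 : 1;
}
arsort($byHost);
foreach ($byHost as $host => $count) {
    echo "$host: $count connections\n";
}
$db->close();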

Magento and database max_user_connections error

Is there a way of monitoring in Magento which modules make connections to the database? Recently I encountered the following error in my website's reports:
SQLSTATE[42000] [1203] User magento_db_user already has more than 'max_user_connections' active connections
My hosting allows having 10 active connections at once, so the hosting shouldn't be the problem here, right? The number of users that visit my website at once is also not that high.
I would like to find a way of monitoring/logging which modules try to connect to the database, so I can react, perhaps by improving or disabling some of them. Is there a way to do this in Magento? The only monitoring methods I was able to find on the Internet are for the databases themselves, but my hosting doesn't allow tinkering with the DB.
Thanks in advance for any ideas on how to deal with this error.
#boruch - enabling persistent connections, huh??
#Bartosz Górski - if you don't have access to the my.cnf file and your hosting provider is limiting your database operations, you had better find another one. For God's sake, this is your shop, your business. Today you can get any hosting you like, unlimited.
Try enabling persistent connections in your server (if you can).
Also, you can use an event observer (such as model_load_before) to capture all connections, though such a module could get a bit complex; see the sketch below.
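A minimal sketch of such an observer (the module and class names here are made up, and the method must also be registered for the model_load_before event in your module's config.xml):

<?php
// Sketch: log every model load so you can see which modules are hitting
// the database. Class name is hypothetical; wire it to the
// model_load_before event in config.xml.
class My_DbMonitor_Model_Observer
{
    public function onModelLoadBefore(Varien_Event_Observer $observer)
    {
        $model = $observer->getEvent()->getObject();
        Mage::log(get_class($model), null, 'db_activity.log'); // writes to var/log/db_activity.log
    }
}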
Maybe try disabling modules one at a time and see if this returns? :)

solutions to overcome "site goes offline due to mysql 'max_user_connections' error."

I have been working on an eCommerce site (using Drupal). Until a few days ago, when I started getting this error, my site was working fine with no issues. But now my site repeatedly goes offline with the error message ('max_user_connections').
I was using some custom code containing mysql_connect and mysql_query; I have now turned everything into a module, and no custom queries are left as such. The error is still there. On some pages, data is populated from two different databases, and to handle two databases on the same page I use the Drupal function db_set_active().
I have also discussed this with my hosting provider; they increased the 'connection_limit', but the error keeps coming. What are the possible reasons for this kind of issue, and what are the ways to handle it?
In this case the DBMS is not able to serve all incoming connection requests to the database.
You can check the current count of connections with SHOW FULL PROCESSLIST (seeing other users' sessions requires the PROCESS privilege).
You now have two choices: alter your application logic so that the overall number of connections is decreased, or try to raise the max_connections system variable to allow your DBMS to serve more connections (this requires the SUPER privilege).
But since your provider already told you that they increased the 'connection_limit', you should go with the first approach (alter your application logic).

mysql connections. Should I keep it alive or start a new connection before each transaction?

I'm making my first foray into MySQL, and I have a question about how to handle the connection(s) my application has.
What I am doing now is opening a connection and keeping it alive until I terminate my program. I do a mysql_ping() every now and then and the connection is started with MYSQL_OPT_RECONNECT.
The other option I can think of would be to open a new connection before doing anything that requires database access, and close it after I'm done.
What are the pros and cons of these two approaches?
what are the "side effects" of a long connection?
What is the most used method of handling this?
Cheers ;)
Some extra details
At this point I am keeping the connection alive, and I ping it every now and again to check its status and reconnect if needed.
In spite of this, when there is consistent concurrency, with queries happening in quick succession, I get a "MySQL server has gone away" message, and after a while the connection is re-established.
I'm left wondering if this is a side effect of a prolonged connection or if this is just a case of bad mysql server configuration.
Any ideas?
In general there is quite some amount of overhead incurred when opening a connection. Depending on how often you expect this to happen it might be ok, but if you are writing any kind of application that executes more than just a very few commands per program run, I would recommend a connection pool (for server type apps) or at least a single or very few connections from your standalone app to be kept open for some time and reused for multiple transactions.
That way you have better control over how many connections get opened at the application level, even before the database server gets involved. This is a service an application server offers you, but it can also be hand-rolled rather easily if you want to keep things smaller.
Apart from performance reasons a pool is also a good idea to be prepared for peaks in demand. When a lot of requests come in and each of them tries to open a separate connection to the database - or as you suggested even more (per transaction) - you are quickly going to run out of resources. Keep in mind that every connection consumes memory inside MySQL!
Also you want to make sure to use a non-root user to connect, because if you don't (I think it is tied to the MySQL SUPER privilege), you might find yourself locked out. MySQL reserves at least one connection for an administrator for problem fixing, but if your app connects with that privilege, all connections would already be used up when you try to put out the fire manually.
Unless you are worried about having too many connections open (i.e. over 1,000), you should leave the connection open. There is overhead in connecting/reconnecting that will only slow things down. If you know you are going to need the connection to stay open for a while, run this query instead of pinging periodically:
SET SESSION wait_timeout=#
Where # is the number of seconds to leave an idle connection open.
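In PHP, for example (the 28800 below is just an illustrative value of eight hours; credentials are placeholders):

<?php
// Sketch: lengthen this session's idle timeout once, instead of
// pinging periodically. 28800 seconds (8 hours) is illustrative.
$link = mysql_connect('localhost', 'db_user', 'db_password');
mysql_query('SET SESSION wait_timeout = 28800', $link);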
What kind of application are you writing? If it's a web script, keep it open. If it's an executable, pool your connections (if necessary; most of the time a singleton will do).
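A bare-bones sketch of that singleton approach (class name and credentials are placeholders):

<?php
// Sketch: a minimal connection singleton so the whole program shares a
// single link instead of reconnecting for every transaction.
class Db
{
    private static $link = null;

    public static function get()
    {
        if (self::$link === null) {
            self::$link = new mysqli('localhost', 'db_user', 'db_password', 'my_database');
        }
        return self::$link;
    }
}

// Usage: every caller shares the same connection.
$result = Db::get()->query('SELECT 1');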

If a secondary database crashes, will it crash the primary coldfusion server?

Right now we are dealing with a bit of a conundrum in my corporate environment where we are being blamed for a server crash, but I'm not 100% sure we are the culprit. Here's the server environment: We have a primary Coldfusion and its MSSQL database. We then also have a secondary database (MySQL) hosted on a different cloud which is used for miscellaneous tasks. The main reason the system is structured this way is because the primary server is operated by our Content Management System thus we are not allowed to modify it, add tables, or any operations like that, so we use the alternate database for that. By design there are no mission critical items on it, and pages are built in a way such that if the alternate DB returns no rows, the pages will continue to render properly.
Basically, I am being told that when the alternate MySQL server goes down, or stops accepting connections, that it is taking the entire primary cloud with it, including 5 other sites hosted on it. I do not have access to the primary Coldfusion or database logs, because the CMS provider will not give them to me. Thus I can only judge based on the validity of the explanation they are giving me.
The explanation for this behavior coming from our CMS provider is that when ColdFusion queries a database it creates a thread, and that if the DB doesn't respond the threads continue to stack up. Eventually the processor is capped, and the server goes down. Is that an accurate explanation of how ColdFusion operates? If so, is there any way to prevent it, possibly with shorter DB timeouts and the like? Or is the entire explanation posed by our CMS provider a red herring, with something else really causing the crashes?
Any guidance would be greatly appreciated.
Question answered - Documents found
http://kb2.adobe.com/cps/180/tn_18061.html
http://www.adobe.com/devnet/server_archive/articles/cf_timeouts_and_unresponsive_requests.html
Setting request timeouts globally does not time out internal processes waiting on external resources (cfquery/cfhttp etc.). The only way to time those out is by manually setting the timeout attribute. Not setting it could result in thread overload and a crashed server, as was occurring with us.
http://kb2.adobe.com/cps/180/tn_18061.html
From reading bullet point 3 and depending on your traffic, your CMS guy might be right.
Also from the link above:
If the database is down and unresponsive, how many times will ColdFusion Server try to reconnect to the database? Will it eventually restart the ColdFusion Server?
If the database is down or the network link to the database goes down when a query request occurs, the connection will time out (you can customize the timeout period with the timeout attribute in the cfquery tag) and return an error to the user. Please note that the ability to set the timeout for the connection depends on which driver you are using. You can trap this error and handle it programmatically with the cftry/cfcatch tags.
The catch here is that the timeout attribute on cfquery tags is not compatible with the MySQL ODBC driver. I could not find what the default timeout is; let's say 5 minutes. If you get more than one request in those 5 minutes, it does appear that the connections will start to 'pile up'.