MySQL Error: Too Many Connections

Since yesterday I have been getting the error below. How many connections are allowed?
Warning: mysql_connect() [function.mysql-connect]: Too many connections in /home/----/public_html/----/class_db.php on line 85

Look at your /etc/my.cnf or wherever your MySQL configuration file is located. Increase max_connections. The default number depends on your version of MySQL, but it is probably set to 100.
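To see how the configured ceiling compares with actual usage, and to raise it at runtime, you can run queries like these (a quick diagnostic sketch; 500 is only an illustrative value, size it for your workload and persist it in my.cnf):
-- The configured limit and the high-water mark of simultaneous connections
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
-- Raise the limit without a restart (the setting is lost on server restart)
SET GLOBAL max_connections = 500;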

You should monitor your connections and close them once they are no longer needed. Alternatively, you can decrease the value of the wait_timeout server variable so that idle connections are closed automatically.
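For example (the 60-second timeout is only an illustration; pick a value appropriate for your application):
-- List current connections and how long each has been idle
SHOW FULL PROCESSLIST;
-- Close idle sessions after 60 seconds (applies to new connections; persist in my.cnf)
SET GLOBAL wait_timeout = 60;
SET GLOBAL interactive_timeout = 60;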

It is also worth knowing that running out of usable disk space on your server partition or drive will also cause MySQL to return this error.
If you're sure it's not the actual number of users connected, then the next step is to check that you have free and usable space on your MySQL server's drive/partition.
I found this out this morning, when a backup job ran wild and crammed my drive full of backups, exhausting the disk space and causing this error :-/ .

Related

RHEL mysql cannot connect through the socket, permissions and locations are set properly, curious about the reasons

I had a problem over the weekend: MySQL started giving the error "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'".
As suggested, I checked the location of the socket file, and it was in the right spot.
Then I checked the permission of the folders, and they were set to 0755.
Then I checked the disk space and there was plenty of it.
Then I performed a graceful server restart. That was what solved the problem.
My question: given all this, what can be the cause of this issue? This is our production server, and I need to do due diligence and get to the root of this to prevent it from happening again.
EDIT:
I am not attempting to resolve the issue; I am attempting to investigate the cause.
my.cnf has this set:
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
Thank you.
The last time I ran into this issue, I was able to get around it by setting the host parameter (-h127.0.0.1).
I'm not sure why that worked, though, or rather, which specific configuration issue led to it. Sometimes there are multiple mysql.sock files at the same time, or in unexpected places (for example, on my last work machine I accidentally had three instances in different directories).
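One way to confirm which socket the running server actually uses is to connect over TCP (which is exactly what -h127.0.0.1 forces, bypassing the socket) and ask the server itself:
-- Run after connecting with: mysql -h127.0.0.1 -u <user> -p
SHOW VARIABLES LIKE 'socket';
If the path reported here differs from the socket path in your client's my.cnf, the client and server are simply disagreeing about where the socket lives.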

1030 Got error 28 from storage engine

I am working on a project where I need to create a database with 300 tables for each user who wants to see the demo application. It was working fine, but today, when I was testing with a new user to see a demo, it showed me this error message:
1030 Got error 28 from storage engine
After spending some time googling, I found that this error is related to the space available to the database or to temporary files. I tried to fix it but failed; now I am not even able to start MySQL. How can I fix this, and how can I increase the available space so that I won't face the same issue again and again?
Mysql error "28 from storage engine" - means "not enough disk space".
To show disc space use command below.
myServer# df -h
Results must be like this.
Filesystem Size Used Avail Capacity Mounted on
/dev/vdisk 13G 13G 46M 100% /
devfs 1.0k 1.0k 0B 100% /dev
To expand on this (even though it is an older question): it is probably not about the MySQL data itself, but about disk space in general, for example for temporary files.
In my case, the MySQL data dir was not full; the / (root) partition was.
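Since error 28 can come from any partition MySQL writes to, it is worth asking the server exactly which paths those are and then checking each one with df -h:
-- The data directory and the directory used for on-disk temporary tables
SHOW VARIABLES WHERE Variable_name IN ('datadir', 'tmpdir');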
I had the same issue in AWS RDS. It was because the Freeable Space (hard drive storage space) was exhausted. You need to increase your storage or remove some data.
My /tmp was 100% full. After removing all files and restarting MySQL, everything worked fine.
My /var/log/apache2 folder was 35 GB, and some logs in /var/log totaled the other 5 GB of my 40 GB hard drive. I cleared out all the *.gz logs, and after making sure the other logs weren't going to do bad things if I messed with them, I just cleared them too:
echo "clear" > access.log
etc.
Check your /backup directory to see if you can delete an old backup that is no longer needed.
I had a similar issue, because of my replication binary logs.
If this is the case, just create a cronjob to run this query every day:
PURGE BINARY LOGS BEFORE DATE_SUB( NOW(), INTERVAL 2 DAY );
This will remove all binary logs older than 2 days.
I found this solution here.
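As an alternative to a cron job, MySQL 5.x can expire the logs by itself; a small sketch (persist the setting in my.cnf so it survives restarts):
-- Automatically remove binary logs older than 2 days
SET GLOBAL expire_logs_days = 2;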
A simple:
$sth->finish();
would probably save you from worrying about this. MySQL uses the system's tmp space instead of its own space.
sudo su
cd /var/log/mysql
and lastly, empty the slow log by typing: > mysql-slow.log
This worked for me
Drop the problem database, then reboot mysql service (sudo service mysql restart, for example).
If you are using the TokuDB storage engine: this error can happen when you have less than 5% (by default) of free space.
See the tokudb_fs_reserve_percent option.
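Assuming your build exposes the TokuDB variables, you can check the current threshold with a plain variable lookup:
-- Percentage of filesystem space TokuDB keeps in reserve (5 by default)
SHOW VARIABLES LIKE 'tokudb_fs_reserve_percent';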

MySQL 5.5 - Issues importing a large sql file via command line

I have been trying for quite a while to import a large (4GB) sql dump file with the MySQL command line. I always get a "MySQL server has gone away" error at a particular line, or if I split the file into smaller chunks (about 512 MB), I get an error about the maximum allowed packet size exceeded, again around the same line. From what I have read, the solution is to change the max_allowed_packet size in my.ini to a large number and restart, but I have done so with no luck. I have also tried the import using the command line parameter, --max_allowed_packet=2147483648, no luck there either. Is there something else that I am missing here? I've exhausted all of the other forum suggestions, maybe someone has another idea I can try. Thanks,
JW
Also, you have to change it for both the client and the daemon mysqld server. Change the my.cnf or my.ini file under the [mysqld] section and set max_allowed_packet=1000M or you could run these commands in a MySQL console connected to that same server:
set global net_buffer_length=1000000000;
set global max_allowed_packet=1000000000;
Use a very large value for the packet size (the values are in bytes), and then restart your MySQL server.
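Note that max_allowed_packet is capped at 1GB, so the 2147483648 bytes tried in the question would be silently clamped. After changing it, it is worth verifying what the server actually accepted:
-- Confirm the value now in effect
SHOW VARIABLES LIKE 'max_allowed_packet';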

MySQL resource temporarily unavailable

I'm seeing a few of these errors during high load times:
mysql_connect() [function.mysql-connect]: [2002] Resource temporarily unavailable (trying to connect via unix:///var/lib/mysql/mysql.sock)
From what I can tell the mysql server isn't hitting its max connections limit, but there's something else stopping it from serving the query. What other limits would MySQL be hitting?
I'm running RHEL 6.2 64bit with MySQL 5.5.21
Let's assume your system is currently Unix-based (as given in your problem statement). If this is correct, here's the set of issues you may be running into:
You've run out of memory available to MySQL.
This is the most likely problem you're facing. Each connection in MySQL's connection pool requires memory to function, and if this resource is exhausted, no further connections can be made. Of course, the memory footprints and maximum packet sizes of various operations can be tuned in your equivalent to my.cnf if you discover this to be an issue.
Here's an additional thread that can help there, but you may also consider using simpler profiling tools like top to get a good ballpark estimate of what's going on.
You've run out of file descriptors available to your MySQL user account.
Another common issue: if you're trying to service requests that require file IO above the 1,024 boundary (by default), you will run into cases where the operation simply fails. This is because most systems specify a soft and hard limit on the number of open file descriptors each user can have available at one time, and walking over this threshold can cause problems.
This will usually have a series of glaringly obvious signs expressed in your log files. Check /var/log/messages and your comparable directories (for example, /var/log/mysql) to see if you can find anything interesting.
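You can also compare MySQL's own view of its file-descriptor budget against its current usage; both are standard server variables/counters:
-- The limit mysqld believes it has, and how many files it holds open right now
SHOW VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';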
You've run into a livelock or deadlock scenario where your thread is unsatisfiable.
Corollary to memory and file descriptor exhaustion, threads can time out if you've overstepped the computational load your system is capable of handling. It won't throw this error message, but this is something to watch out for in the future.
Your system is running out of PIDs available to fork.
Another common scenario: fork only has so many PIDs available for its use at any given time. If your system is simply overforked, it will cease to be able to service requests.
The easiest check for this is to see if any other services can connect through to the machine. For example, trying to SSH into the box and discovering that you cannot is a big clue.
An upstream proxy or connection manager has run out of resources and ceased servicing requests.
If you have any service layer between your client and MySQL, it bears inspecting to see if it has crashed, hung, or otherwise become unstable. The advice above applies.
Your port mapper has exhausted itself after 65,536 connections.
Unlikely, but again, a possible exhaustion case. Checking the trivial service connection as above is, ehm, also the best port of call here.
In short: this is a resource exhaustion scenario, inclusive of the server simply being "down". You're going to have to profile your system further to see what you're blocking on. All the error message gives us in this case is the fact the resource is unavailable to the client -- we'd need to see more information about the server to determine a more adequate remedy.
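A few standard status counters give a quick first read on where the pressure is coming from:
-- Connections currently open, and attempts that died partway through the handshake
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
-- Temporary tables spilling to disk also burn file descriptors and tmp space
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';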
I still haven't found which limits it was hitting, but I did manage to work around the problem. There was a problem with our session table (in vbulletin) which uses the MEMORY engine. The indexes for this table were HASH and thus when vbulletin purged this table once an hour it would lock the table just long enough to hold up other queries and push mysql to the limit of its resources.
By changing the indexes to BTREE, MySQL could delete the rows from the session table much more quickly and avoid whatever limits were being reached previously. The errors only started when we upgraded our master DB server to MySQL 5.5, so I'm guessing MEMORY tables are handled differently in the latest release.
See http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/ for information on the speed increase from using BTREE indexes over HASH for MEMORY tables.
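For reference, the index type is chosen per index at table-definition time; here is a minimal sketch with hypothetical column names (MEMORY tables default to HASH, so BTREE has to be requested explicitly):
-- Hypothetical session table; BTREE keeps the hourly range-delete of old rows cheap
CREATE TABLE session (
    sessionhash CHAR(32) NOT NULL,
    lastactivity INT UNSIGNED NOT NULL,
    PRIMARY KEY (sessionhash),
    KEY idx_lastactivity (lastactivity) USING BTREE
) ENGINE=MEMORY;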
Geez, this could be so many things. It could be that the socket buffer space is exhausted. It could be that mysql is not accepting connections as fast as they are coming in and the backlog limit is reached (though I'd expect that to give you a "Connection Refused" error; I don't know for sure that's what you'll get for a Unix domain socket). It could be any of the things @MrGomez pointed out.
Since you are running Apache and MySQL on the same server and this is a problem under high load, it could well be that Apache is starving the system of some resource and you're just not seeing (noticing?) the dropped/failed incoming connections/requests in your logs.
Are you using connection pooling? If not, I'd start there.
I'd also look for errors in the Apache logs and syslog around the same time as the mysql_connect error and see what else turns up. I'd especially recommend getting MySQL moved over to its own separate dedicated server.
In my case, I was working with JSON data types with PDO (PHP Driver).
I was using fetch to retrieve one item but forgot to add LIMIT 1 to the query. Adding it solved the problem.
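In other words, when the code only ever reads one row, let the statement say so; a tiny sketch with hypothetical table and column names:
-- Without LIMIT 1 the server may keep the rest of the result set pending
SELECT payload FROM documents WHERE id = 42 LIMIT 1;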

Lost connection to MySQL server during query? [duplicate]

Possible Duplicate:
Lost connection to MySQL server during query
I am importing some data from a large CSV into a MySQL table. I am losing the connection to the server while importing the file into the table.
What is going wrong?
The error code is 2013: Lost connection to the MySQL server during the query.
I am running these queries from an Ubuntu machine against a remote Windows server.
Try the following 2 things...
1) Add this to your my.cnf / my.ini in the [mysqld] section
max_allowed_packet=32M
(you might have to set this value higher based on your existing database).
2) If the import still does not work, try it like this as well...
mysql -u <user> --password=<password> <database_name> < file_to_import
Usually that happens when you exhaust one resource for the db session, such as memory, and mysql closes the connection.
Can you break the CSV file into smaller ones and process them, or commit every 100 rows? The idea is that the transaction you're running shouldn't try to insert a large amount of data at once.
I forgot to add, this error is related to the configuration property max_allowed_packet, but I can't remember the details of what to change.
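If splitting the file by hand is painful, another option for plain CSV data is LOAD DATA, which streams the file row by row instead of building one giant INSERT packet. A sketch; the path, table, and column layout are assumptions:
-- Stream the CSV from the client machine in one pass
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;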
The easiest solution I found to this problem was to downgrade from MySQL Workbench to MySQL version 1.2.17. I had browsed some MySQL forums, where it was said that the timeout in MySQL Workbench is hard-coded to 600, and the methods suggested for changing it didn't work for me. If someone is facing the same problem with Workbench, you could try downgrading too.
1) You may have to increase the timeout on your connection (see the example after this list).
2) You can get more information about the lost connections by starting mysqld with the --log-warnings=2 option.
This logs some of the disconnect errors in the hostname.err file.
You can use that for further investigation.
3) If you are trying to send data to BLOB columns, check the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in the following link, “Packet too large”.
4) You can check the following URL link.
5) You should check that your available disk space is bigger than the table you're trying to update (link).
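For point 1, the server-side knobs are net_read_timeout and net_write_timeout; the values below are only illustrative:
-- Seconds the server waits on a read from / write to the client before giving up
SET GLOBAL net_read_timeout = 600;
SET GLOBAL net_write_timeout = 600;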
You might like to read this - http://dev.mysql.com/doc/refman/5.0/en/gone-away.html - that very well explains the reasons and fixes for "lost connection during query" scenarios.
In your case, it might be because of the max allowed packet size, as pointed out by Augusto. Or, if you've verified that isn't the case, it might be the connection wait timeout setting, due to which the client is losing the connection. However, I do not think the latter is true here, because it's a CSV file and not a file containing queries.
I think you can use the mysql_ping() function.
This function checks whether the connection to the server is still alive. If it is not, you can reconnect and proceed with your query.