How to solve MySQL max_user_connections error - mysql

I'm getting the following error when I try to log onto phpMyAdmin.
User ** already has more than 'max_user_connections' active connections
Could anyone let me know how to close these DB connections from the MySQL server end?
Thank you for your time!

Read the max_connections documentation to solve your problem:
If clients encounter Too many connections errors when attempting to
connect to the mysqld server, all available connections are in use by
other clients.
The permitted number of connections is controlled by the
max_connections system variable. The default value is 151 to improve
performance when MySQL is used with the Apache Web server. To support
more connections, set max_connections to a larger value.
First: check your current max_connections variable:
SHOW VARIABLES LIKE 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 151   |
+-----------------+-------+
Then try to increase the max_connections parameter, either by running a command like:
SET GLOBAL max_connections = 300;
Or set this parameter in my.cnf, which is usually located at /etc/my.cnf:
vi /etc/my.cnf
max_connections = 300
Finally: restart the MySQL service.
FYI
You can also check max_user_connections. However, they are related like this:
max_connections sets the total connection limit
max_user_connections sets the limit per user
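For example (the 250 below is just an illustration; 0 means "no per-user limit"):
SHOW VARIABLES LIKE 'max_user_connections';
SET GLOBAL max_user_connections = 250;   -- cap each account at 250 simultaneous connections
Note that a non-zero MAX_USER_CONNECTIONS set on an individual account in mysql.user takes precedence over this global value.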
====
As Sushilzzz asked: can this be caused by low RAM?
Short answer: no.
Long answer: yes. If RAM is low and MySQL can't respond as fast as needed, many connections stay open and you can easily hit the max connection limit.
The estimated number of max connections per 1GB of RAM is 100 (if you don't have any other process using RAM at the same time). I usually use ~75 max_connections per 1GB of RAM:
RAM   max_connections
1GB   70
2GB   150
4GB   300
8GB   500

This happens due to the limit specified in the MySQL configuration, namely the system variable max_user_connections.
Solutions
Killing the queries that are stuck at the backend is a solution I would suggest only if they are SELECT queries (a sketch of this follows below). Queries that change data, like UPDATE/DELETE/INSERT, should not be killed.
Secondly, you can use the command mysqladmin processlist to check what is going on inside MySQL.
If locking is causing your problem, you can check which engine you are using and change it to another. IBM's SolidDB documentation on table locks might help you. There may also be another reason for this (for example, your queries are taking too long because of an unoptimized query, the table size is too big, or your database has been spammed).
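If you do kill a stuck SELECT, a minimal sketch looks like this (the Id 12345 is a placeholder; take the real one from the Id column of the processlist, and only kill threads whose Info column shows a SELECT):
SHOW FULL PROCESSLIST;
KILL 12345;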

Your best bet is to increase max_connections. For a MySQL instance serving multiple different web apps (raw php, WordPress, phpBB), you probably want a value of at least 60 for this.
Issue this command and you'll find out how many global connections you have available:
show global variables like '%connections%'
You can find out how many connections are in use at any given moment like this:
show status like '%connected%'
You can find out what each connection is doing like this:
show full processlist
I would try for a global value of at least 100 connections if I were you. Your service provider ought to be able to help you if you don't have access to do this. It needs to be done in the my.cnf configuration file for MySQL. Don't set it too high or you run the risk of your MySQL server process gobbling up all your RAM.
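If you happen to be on MySQL 8.0 or newer, you can also make the change from SQL and have it survive restarts, instead of editing my.cnf by hand (the 100 is just the example value from above):
SET PERSIST max_connections = 100;   -- written to mysqld-auto.cnf in the data directory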
A second approach allows you to allocate those overall connections to your different MySQL users. If you have different MySQL usernames for each of your web apps, this approach will work for you. This approach is written up here. https://www.percona.com/blog/2014/07/29/prevent-mysql-downtime-set-max_user_connections/
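As a rough sketch of that per-user approach (the account names are made up, and this is the MySQL 5.7+/8.0 ALTER USER form; on older versions the same limit is set with GRANT ... WITH MAX_USER_CONNECTIONS):
ALTER USER 'wordpress'@'localhost' WITH MAX_USER_CONNECTIONS 20;
ALTER USER 'phpbb'@'localhost' WITH MAX_USER_CONNECTIONS 10;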
The final approach to controlling this problem is more subtle. You're probably using the Apache web server as underlying tech. You can reduce the number of Apache tasks running at the same time to, paradoxically, increase throughput. That's because Apache queues up requests. If it has a few tasks efficiently banging through the queue, that is often faster than lots of tasks because there's less contention. It also requires fewer MySQL connections, which will solve your immediate problem. That's explained here: Restart Mysql automatically when ubuntu on EC2 micro instance kills it when running out of memory
By the way, web apps like WordPress use a persistent connection pool. That is, they establish connections to the MySQL database, hold them open, and reuse them. If your apps are busy, each connection's lifetime ought to be several minutes.

First, this is a hack, but it works, especially on a shared host.
We all have bad "neighbors" sometimes, right?
If you have access to /etc/, increase the limit from 30 to 50 in your my.cnf, or through the information schema.
To suppress the error message the visitor might see, use @mysql_connect().
If there are more than 30 MUCs (max_user_connections), use the "or die()" statement to stop the query.
Replace the "or die" message with die(header("Location: THIS PAGE")) and be sure to call mysql_close();
Yes, it will cause a delay in page loading. But better to load than a white screen of death, or worse, error messages that visitors have no understanding of.

It looks like queries are stuck on the server. Restart MySQL and everything will be OK.
If you are on shared hosting, just contact your hosting provider and they will fix it.
I'm a Namecheap user and they solved it for me.

In my case I have a limit of 10 user connections, and I do not have the right to change the max_user_connections variable. You can check the user connection limit like so:
show variables like "max_user_connections";
You can set the maximum number of user connections like so, if you have permission:
SET GLOBAL max_user_connections = 300;
Otherwise, you can view the processes in use like so:
SHOW FULL PROCESSLIST;
And kill some of the processes by Id like so (in your case, replace the number with an Id from the SHOW FULL PROCESSLIST output):
kill 10254745;

Related

Best way to optimize database performance in mysql (mariadb) with buffer size and persistent connection configurations

I have:
a heavily loaded CRUD application in PHP 7.3 which uses the CodeIgniter framework.
only 2 users access the application.
the DB is MariaDB 10.2 and has 10 tables. Generally the columns store INTs and the default engine is InnoDB, but one table also has a "mediumtext" column.
the application is managed by cron jobs (10 different jobs every minute).
a job performs on average 100-200 CRUD operations on the DB (in total ~1k-2k CRUD operations per minute across 10 tables).
Tested:
Persistent Connections in MySQL
I faced a maximum-connections-exceeded issue and noticed that CodeIgniter does not close connections when pconnect is set to true in database.php; simply put, it uses persistent connections if you set that option to true. So, to fix the issue, the solution I found was to set it to false so that all connections are closed automatically.
I changed my configuration to disallow persistent connections.
After I disabled persistent connections, my app started to run properly, but 1 hour later it crashed again because of a couple of the errors shown below, and I fixed those errors by setting max_allow_package to its maximum value in my.cnf for MariaDB.
Warning --> Error while sending QUERY packet. PID=2434
Query error: MySQL server has gone away
I noticed the DB needs tuning. The database size is 1GB+ and I have a lot of CRUD jobs scheduled every minute. So I changed the key buffer size to 1GB and the InnoDB buffer pool size to 25% of it. I used MySQL Tuner to figure out those variables.
Finally, I am still getting query packet errors:
Packets out of order. Expected 0 received 1. Packet size=23
My server has 8GB RAM (25% used) and 4 cores x 2GHz (10% used).
I can't decide which configuration is the best option for now. I can't increase RAM; also, only 25% of RAM is used, because the key buffer size is 1GB and the scheduled jobs could use the RAM fully at any instant.
Can I:
fix the DB errors,
increase the average number of completed CRUD operations?
8GB ram --> innodb_buffer_pool_size = 5G.
200 qpm --> no problem. (200qps might be a challenge).
10 tables; 2 users --> not an issue.
persistent connections --> frill; not required.
key_buffer_size = 1G? --> Why? You should not be using MyISAM. Change to 30M.
max_allow_package --> What's that? Perhaps a typo for max_allowed_packet? Don't set that to more than 1% of RAM.
Packets out of order --> sounds like a network glitch, not a database error.
MEDIUMINT --> one byte smaller than INT, so it is a small benefit when applicable.
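Putting those suggestions together, a minimal sketch (values taken from the points above; innodb_buffer_pool_size is only resizable at runtime on MariaDB 10.2.2+ / MySQL 5.7.5+, otherwise put the same settings in my.cnf and restart):
SET GLOBAL innodb_buffer_pool_size = 5 * 1024 * 1024 * 1024;   -- 5G on an 8GB server
SET GLOBAL key_buffer_size = 30 * 1024 * 1024;                 -- 30M, since MyISAM is barely used
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;              -- well under 1% of RAM
Mirror the same values in my.cnf so they survive a restart.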

Laravel MySql Connection problem too many connections

Too many connections problem in a Laravel 5.8 application.
You can see there are 54k+ connections in MySQL and only 32 are in use.
How do I remove unused connections so my application works fast?
Neither 54K connections since startup, nor a max of 32 connections simultaneously doing something, is "too many".
What is the real problem? Sluggishness? Find the slowest queries and let's work on speeding them up. Run SHOW FULL PROCESSLIST to see if any queries have been running for more than a few seconds; they are a prime candidate for optimizing. Or use the slowlog.
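If you go the slowlog route, a minimal way to switch it on at runtime (the 2-second threshold is just an example):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;                 -- log anything slower than 2 seconds
SHOW VARIABLES LIKE 'slow_query_log_file';      -- where the entries end up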
Connections is just a "count" of attempted connections. It does not relate to active connections or max_used_connections.
Run the following commands:
SHOW VARIABLES LIKE 'max_connections'
SET GLOBAL max_connections = 1000000;
See MySQL show status - active or total connections?
If you really are having many current open connections, you should look into what these connections are. You might have a sub-optimal query in your code or a bot is spamming an open endpoint.
You can see the process list by running the query:
show processlist;
You can then kill connections as a short-term solution (see the sketch below), or take care of whatever problem was causing the connections in the first place.
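As a sketch of that short-term cleanup, you can generate the KILL statements from the processlist; this assumes information_schema.PROCESSLIST (MySQL 5.1+/MariaDB), and the 300-second idle threshold is arbitrary:
SELECT CONCAT('KILL ', id, ';') AS kill_stmt
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep' AND time > 300;
Review the generated statements, then run the ones you are sure about.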
If you really do need that many connections (doubt it), you should look into scaling your database instance, e.g. by adding read replicas.

MYSQL, limit for max_user_connections

I am having trouble with my database. I set max_user_connections to 800 and the issue persists: "User root already has more than 'max_user_connections' active connections". I would like to know if there is a limit for max_user_connections. Can I set it to 10000 without breaking my database?
How can I know the limit?
If I run SHOW PROCESSLIST and get this, is everything all right?
http://prntscr.com/cc8bbv
To see the max connections for a user, use the variable in information_schema for your MySQL database to see the global configuration:
SHOW VARIABLES LIKE "max_user_connections";
If it is zero, then there is no limit; otherwise the limit is the value returned.
If you want to check for a particular user, use this:
SELECT max_user_connections FROM mysql.user WHERE user='db_user' AND host='localhost';
Now, for your question about the effect of increasing this: as per
http://www.electrictoolbox.com/update-max-connections-mysql/
Note that increasing the number of connections that can be made will
increase the potential amount of RAM required for MySQL to run.
Increase the max_connections setting with caution!
So here max_connections is the total number of connections allowed from all users.
I would also suggest using a connection pool so that the pool size is fixed, connections come only from the pool, and the number does not keep growing. Also make sure each connection is returned to the pool once the work is done.
Too many active connections
The problem you are having is because you are not closing previous connections you have used to access the database. (It's also likely that you are running many separate queries that could all be compressed into a single query.)
From looking at your error message, "User root" has more than the max available connections. There aren't 800 different people accessing the database; you are accessing the database 800 different times and not cleaning up afterwards.

Mysql thread connected is 1-3 but experienced too many connections

Yesterday I could see in the log that, within a 1-minute timespan, there were a lot of MySQL errors with too many connections failures. I am running the default setting of 151 for max_connections and have not experienced this before.
When checking the current state, my threads connected is only 1 and up to 3 at most.
Should I increase my max_connections or did I just temporarily suffer from a DDoS?
Note: it solved itself and worked immediately after that minute.
Update 2: It just happened again. I can see multiple errors within a few seconds' timeframe in the error log. Why does this happen?
Decrease max_connect_errors to, say, 100. This is a small protection against hackers.
Threads_running is always at least 1 because it includes your SHOW.
With max_connections = 151, Max_used_connections = 152 means that 151 user connections came in, plus one "extra" connection was allowed and used. For this reason, do not run your application as root (or any other SUPER user).
GRANT ... ON your_database.* ..., not ... ON *.* ..., for any application logins (see the sketch below). This seriously slows down hackers from getting at the mysql database.
root should be allowed only from localhost. This is another safety measure.
If your web server has an "access" log, look at it. You may see hundreds of similar-looking hack attacks in a row. And you may see a similar set of them often.
Be sure to escape any data coming from html forms. Do this as you build the INSERTs. Without this, you are wide open for attack.
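A sketch of that GRANT advice (the database name and the 'app' account are placeholders, and the account is assumed to already exist):
GRANT SELECT, INSERT, UPDATE, DELETE ON your_database.* TO 'app'@'localhost';   -- schema-scoped, not ON *.*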
Addenda
SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';
will show you who has SUPER. As a first cut, it should include only host values of localhost, 127.0.0.1, or ::1, all of which are effectively localhost. This does not prevent hackers from first hacking the client, which may also be on localhost.
This may list those with another way in:
SELECT user, host FROM mysql.user WHERE Grant_priv = 'Y';
Back to the question
The Web server's (nginx's?) limit on the number of connections should be less than MySQL's max_connections. This will stop hackers (and users) at the door, rather than letting them clog up both the web server and MySQL.
Even on a well running system, 20 is a reasonable max for how many should be allowed into the web server simultaneously. If there are more, then they are probably stumbling over each other; it would be better to let a few get all the resources they need rather than take longer because of battling over resources.

Mysql decrease opened files

I was wondering if there's a way to decrease the number of opened files in MySQL.
Details :
mysql 5.0.92
engine used : MyISAM
SHOW GLOBAL STATUS LIKE 'Opened_tables' : 150K
SHOW VARIABLES LIKE '%open%' :
open_files_limit 200000
table_open_cache 40000
Solutions tried:
restart server: it works, the opened tables counter goes back to 0, but this isn't a good solution from my point of view, since you would need a restart every week because the counter increases fast
FLUSH TABLES: as the MySQL documentation says, it should force all tables in use to close, but this doesn't happen
So any thoughts on this matter?
Generally, many open tables are nothing to worry about. If you come close to the OS limits, you can increase those limits in the kernel settings:
How do I change the number of open files limit in Linux?
MySQL opens tables for each session independently to have better concurrency.
The table_open_cache and max_connections system variables affect the maximum number of files the server keeps open. If you increase one or both of these values, you may run up against a limit imposed by your operating system on the per-process number of open file descriptors. Many operating systems permit you to increase the open-files limit, although the method varies widely from system to system.
In detail, this is explained here
http://dev.mysql.com/doc/refman/5.5/en/table-cache.html
EDIT
To verify your assumption you could temporarily decrease max_connections and table_open_cache with SET GLOBAL table_open_cache := newValue.
The value can be adjusted dynamically without a server restart.
Prior to MySQL 5.1, this variable was called table_cache.
What I was trying to say is that decreasing this value will probably even have a negative impact on performance in terms of fewer possible concurrent reads (the queue gets longer); instead, you should try to increase the OS limit and increase open_files_limit, but maybe I just don't see the point here.
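To illustrate that check, a minimal sketch (the 4000 is only an example value; on pre-5.1 servers the variable is table_cache instead of table_open_cache):
SHOW GLOBAL STATUS LIKE 'Open_tables';     -- table descriptors open right now
SHOW GLOBAL STATUS LIKE 'Opened_tables';   -- cumulative count of table opens since startup
SET GLOBAL table_open_cache = 4000;        -- takes effect without a restart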