Host 'Hostname' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

What a strange problem!
I have the same applications running against two MySQL instances (a dev and a test environment);
both MySQL instances are set up identically.
But today, in the dev environment, I ran into an error that had never happened before:
Host 'Hostname' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
I also searched for solutions to this problem, like the one below:
here is the same question with proposed solutions, but none of them solved it
There are 24 connections in my applications' connection pools; none of them fail, and there are no connection failures at all.
Using SHOW VARIABLES LIKE '%max_connect%' I can see that max_connect_errors is 100.
My applications run on the same IP as my Navicat client.
With SHOW PROCESSLIST I can see that there are no connections other than the Navicat connection before the applications start. Then, when I try to start my applications, the problem occurs and I cannot even start them up!
After FLUSH HOSTS, all the applications run normally without any problem, but when I try to restart them, the problem shows up again.
I don't know what's going on!
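For reference, the flush I run each time, plus a way to raise the threshold (root is just an example account; raising max_connect_errors is only a mitigation, not a fix):
# unblock the blocked host
mysqladmin -u root -p flush-hosts
# or, equivalently, from a SQL session:
mysql -u root -p -e "FLUSH HOSTS;"
# raise max_connect_errors so the block triggers less easily (it is currently 100)
mysql -u root -p -e "SET GLOBAL max_connect_errors = 10000;"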
I have 4 applications using the database, and the MySQL datasource configuration is shown below:
datasource:
  driver-class-name: com.mysql.cj.jdbc.Driver
  url: jdbc:mysql://${my-config.mysql.host}:${my-config.mysql.port}/${my-config.mysql.db-name}?characterEncoding=utf-8&serverTimezone=Asia/Shanghai&rewriteBatchedStatements=true&allowMultiQueries=true
  username: ${my-config.mysql.username}
  password: ${my-config.mysql.password}
  validation-query: SELECT 1
  test-on-borrow: true
  test-while-idle: true
  time-between-eviction-runs-millis: 10000
  hikari:
    minimum-idle: 1
    maximum-pool-size: 10
You can see that even if all the applications used every connection in the maximum pool size, there would only be 40 connections, and even if all of them failed, that would only be 40 failures.
So far I have noticed, using SHOW PROCESSLIST after FLUSH HOSTS, that there are always 1 or 2 unauthenticated connections like this:
Maybe this is why my applications always go down after restarting.
But I notice that the unauthenticated connection comes from my own computer, and I am not running any other applications...
Another problem is that, when the problem occurs, the error shown is:
Host '192.168.18.19' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
But none of the other existing connections from the same IP receive this error; they work fine and can reach MySQL!
You can see that my gateway application was knocked out by this problem,
but my message center application, using the same IP, can still get data from MySQL (it only prints messages like 配置已更新 ("configuration updated") when it actually fetches rows from MySQL).
If MySQL blocked my IP, why are the other applications using the same IP not blocked as well?
Have I been hacked?
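A way to see which errors MySQL has actually counted against the IP, assuming performance_schema is enabled (the column names below are from the standard host_cache table):
mysql -u root -p -e "SELECT IP, SUM_CONNECT_ERRORS, COUNT_AUTHENTICATION_ERRORS, COUNT_HANDSHAKE_ERRORS FROM performance_schema.host_cache;"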

MariaDB Refusing LOCALHOST connections

I'm wondering how I can troubleshoot what's happening; there aren't enough details to reproduce the problem and find a fix. Here is what I have found:
1) The script runs many queries against the localhost MariaDB server every couple of minutes.
2) The queries are async, so multiple queries start failing at some point. Once these queries fail, I'm unable to access the MariaDB command line, with the error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
In some rare cases I am able to access it (the command-line client, mariadb), but any query then fails with the same error.
In the MySQL log file I can see the error:
[Warning] Aborted connection x to db: 'Dbname' user: 'useraccessingdb' host: 'localhost' (Got an error reading communication packets)
Following that line there are many more at the same time, usually with connection numbers from 9 to 19 (aborted connection x = 9 to 19).
How can I debug the issue?
What could be the issue?
Thanks for your time.
The OS is Ubuntu 19
MariaDB is version 10.3.22
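Some basic checks that might narrow it down (the service name and log paths below are the Ubuntu defaults and may differ on your install):
# did the server die or restart when the socket disappeared?
sudo journalctl -u mariadb --since "1 hour ago"
sudo tail -n 100 /var/log/mysql/error.log
# while the server is reachable: how close is it to max_connections?
mysql -e "SHOW STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"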
It's likely a firewall error.
Try launching gufw and see what rules you have enabled.
Possibly just the defaults: allow outgoing, deny incoming.
Even though localhost is the same machine, the firewall sees any traffic trying to access a port as incoming, so you need to specifically allow it for your database port.
You'll have to check the documentation to find out what port that is. I've played around with those settings before, but I don't remember what the default is; it may also be specific to your setup.
https://www.linuxquestions.org/questions/ubuntu-63/what-ufw-rule-will-allow-port-80-to-localhost-but-only-from-localhost-4175595450
Alternatively, you can just use a blanket allow all local traffic which should be fine for what you're doing.
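For example, a sketch assuming ufw and the default MariaDB port 3306:
# allow local connections to the database port only
sudo ufw allow from 127.0.0.1 to any port 3306 proto tcp
# or simply allow everything on the loopback interface
sudo ufw allow in on lo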
Also, double-check that your hosts file has localhost defined.
https://en.wikipedia.org/wiki/Hosts_%28file%29
I've run into a case where it wasn't.
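The relevant line in /etc/hosts should look something like:
127.0.0.1   localhost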

Remote DB Error: connect ECONNREFUSED (node-mysql/UBuntu/VirtualBox)

I have two separate Ubuntu VMs running on VirtualBox. I am getting the error "Remote DB Error: connect ECONNREFUSED". Here is some background information:
When co-located on the same VM, NodeJS and MySQL work fine together. The problem only started after moving MySQL to its own VM.
The VMs are set up in VirtualBox as an Internal Network. They have their own static IPs, and the two VMs can ping each other's IP addresses fine.
When I first got the error, the indication was that NodeJS was trying to connect on port 3306 ("Error: connect ECONNREFUSED 192.168.1.69:3306"). Then, I added the port option when creating the connection object (port: '3306'), but this did not fix the problem.
Next, I saw a thread that suggested checking what port MySQL is listening on by running (netstat -ln | grep mysql), and the result I got back was "unix 2 [ ACC ] STREAM LISTENING 1831 /var/run/mysqld/mysqld.sock". So, since it said it was listening on 1831, I switched the port in my connection creation code to the below:
var mysql = require('mysql');  // node-mysql client

var connection = mysql.createConnection({
  host : '192.168.1.69',
  port : '1831',
  user : 'root',
  password : 'vinson',
  database : 'pilot',
  stringifyObjects: 'true'
});
However, I was still getting the same error.
UPDATE TO MY POST:
Since first posting this, I have learned some things and, in the process, made some incremental progress:
By default, MySQL only listens to localhost traffic. In order to have it listen to external traffic you have to disable the listen/bind address in its my.cnf file. So, I did that, and then restarted MySQL.
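Roughly this change, in my.cnf (the exact config file path differs between installs):
[mysqld]
# bind-address = 127.0.0.1   <-- commented out (or set to 0.0.0.0) so MySQL listens on all interfaces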
Once I did that, I ran "netstat -tlnp"; a new line was displayed indicating something (definitely MySQL) listening on 0.0.0.0:3306, and this line was not there before I made the config change and restarted MySQL.
Then, I executed a query again from the NodeJS VM, and I got a different error (hey, I'll take this as a sign of incremental progress):
" Error: Cannot enqueue Query after fatal error."
So that is where I am now. As before, I would be grateful for any ideas as to what I might try next. Thanks for any help!
OK, I figured out the rest of my issues. The error above (Error: Cannot enqueue Query after fatal error.) was due to the fact that I had not restarted my NodeJS server, so it still had the old connection object. Once I restarted the NodeJS server, I then got a new error:
ER_HOST_NOT_PRIVILEGED
The reason I was getting this error is simply that, for a given user, MySQL must know the host from which that user is connecting (a valid connection credential for MySQL is the combination of user, password, and host). Once I updated my user account to allow the appropriate remote hosts, everything worked fine!
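For example, something along these lines (the user name, password, and host range are placeholders; 'pilot' is the database from the question):
CREATE USER 'appuser'@'192.168.1.%' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON pilot.* TO 'appuser'@'192.168.1.%';
FLUSH PRIVILEGES;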

MariaDB client connection aborted after 60 sec. Seems to relate to SSL. Would like to find out why

I'm running MariaDB v10.0.14 on a Windows 2012 R2 server, and I work locally from a Win7 machine. I'm limiting this problem to using the command-line client tool. I am encrypting the connection to the server DB with SSL. I can connect and issue commands; however, after being idle for 60s I get:
ERROR 2013 (HY000): Lost connection to MySQL server during query
When I re-issue the command I get:
ERROR 2006 (HY000): MySQL server has gone away.
No Connection. Trying to reconnect...
Then the client reconnects and runs the command. I don't know why this is occurring and am worried it may affect DB users' connections. Some troubleshooting:
When I connect without SSL this does not occur
I have been ignoring this issue for a while so can not say what change may have led to this. I certainly remember connecting with SSL in the past and not having these timeouts.
I can RDP to the server, connect to DB with command line tool and SHOW FULL PROCESSLIST. I can see the localhost connection plus the remote client connection. When the client has just been started I see Command as 'Sleep' and State as 'cleaning up'. I can issue commands from the client. When Time > 60 State changes to Null and the client shows the above symptoms.
I've read through this, tried all standard suggestions but can't even seem to find any mention of this behaviour. Is it normal?
Wait_timeout and interactive_timeout are set at 28800 so I don't think this is the problem.
net_read_timeout=30 and net_write_timeout=60 but these are tiny commands
connect_timeout=10 but connection is not the issue.
Credentials and permissions are fine as I can connect originally.
Error log has entries corresponding to this event as:
Aborted connection xxx to db: <dbname> user: '<user>' host: '<host>' (Unknown Error)
Firewall logs show that traffic seems to be flowing just fine.
I took a capture of network traffic on the server and saw the below. The blue is the original connection. In orange you can see that at 73s I issue a new command which is met in red with [FIN, ACK] then [RST, ACK] from the server. The green afterward is when the command is reissued and the reconnection occurs. Note the change in client port. Handshakes seem to be fine. Beyond that I'm lost. I'm a data guy, not a network guy.
Anyone have any insights or ideas? Thanks.
Output of:
show variables like '%timeout%';
(can't post more than 2 links. I should answer some questions)
connect_timeout=10
deadlock_timeout_long=500000000
deadlock_timeout_short=10000
delayed_insert_timeout=300
innodb_flush_log_at_timeout=1
innodb_rollback_on_timeout=OFF
interactive_timeout=28800
lock_wait_timeout=31536000
net_read_timeout=30
net_write_timeout=60
slave_net_timeout=3600
wait_timeout=28800
Sorry if this comes a bit late, but the reason is this bug: https://jira.mariadb.org/browse/MDEV-9836 . SSL connections were aborted after net_read_timeout (which is quite short), not after wait_timeout.
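A possible stopgap until you can upgrade to a version with the fix is to raise net_read_timeout to match wait_timeout (untested here, but it follows from the bug description):
SET GLOBAL net_read_timeout = 28800;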

Tomcat7 MySQL Connection Error

I'm facing a perplexing problem. I've completed a JSF web app that utilizes Hibernate and Infinispan, running on Tomcat 7 with tomcat-jdbc-pool as the connection pool provider.
It is being deployed to a Linode cluster w/ 2 nodes -- one database server and one production server.
I can run the app on my local environment using the exact same copy of Tomcat7 (I literally tarred the tomcat directory and promoted it to the server to debug this error) -- even when connected to the live database instance. Everything runs fine.
When I attempt to run the application from the production server I get a MySQLIO error:
Caused by: java.net.ConnectException: Connection refused
Looking further up in the logs I see:
The last packet sent successfully to the server was 1 milliseconds ago. The driver has not received any packets from the server.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
I can connect from the command line on the prodapp server just fine -- even using tcp:
mysql -h db01 -u user -p --protocol=tcp
But it just won't work running inside the Tomcat container. I've tried all kinds of things, but I'm really stumped. It just seems strange that I can connect to the database server using the same copy of Tomcat 7 locally, yet when deployed to production that same copy of Tomcat 7 can't connect -- even though I can connect from the command line on that production server.
Any help is greatly appreciated.
EDIT: Solved my problem after wasting too much good life on it. The answer was that I'm stupid. Thanks to everyone who tried to help. The app has a dev mode and a live mode, and the connection pool was reading the dev-mode settings this whole time. What really made it confusing is that the SessionFactory was set to live mode, so it would actually reach the live database and initialize a connection when it started up; I could see it connecting (and running metadata queries in the MySQL log), but when it actually went to grab a connection from Infinispan it blew up. Oh well -- at least it's working now. Thanks again.
If you're connecting to your MySQL database server from a different box, then you need to explicitly grant permission for that user account to connect from that IP.
You can do this from the command line:
GRANT ALL PRIVILEGES ON *.* TO 'username'@'ip_address';
EDIT: Granting all permissions on all tables generally isn't required; be specific about which permissions you want to grant (http://dev.mysql.com/doc/refman/5.1/en/grant.html).
To see what permissions you currently have:
USE mysql;
SELECT * FROM user;
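Or, to check a specific account directly:
SHOW GRANTS FOR 'username'@'ip_address';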
My apologies if I'm patronising you, just that this is the most common problem that I come across.

Users can't connect remotely to MySQL

Problem
Users from other IPs on the (Windows XP) LAN suddenly cannot connect to my local MySQL server.
Background
I've set up MySQL on my local Windows computer so that other computers on the network have access to the root account. I've added each IP as a host for root. Up until some weeks ago, things worked flawlessly and I could connect to the server programmatically and using various MySQL admin tools. Now, however, the MySQL server simply refuses connections from those IPs and I can't figure out why.
The network changes I have made are: changing the network card in two (of three) computers and fiddling around with MySQL settings, none of which should have caused this problem. I've tried adding a new user with all the relevant hosts, but I get the same type of error:
MySQL Error number 1045: Access denied for user 'root'@'shop' (using password: YES)
The odd part is that the computer name, 'shop', is used instead of the IP. I don't know why.
Somehow, IPs seem to be resolved now and hostnames are used. Did you grant access to 'root'@'shop'? Did you flush privileges?
First thing that pops into mind is Windows Firewall, which could have got re-enabled if you swapped NICs on the host computer.
My next suggestion would be to use a sniffer like Wireshark on the host computer and see exactly what happens packet-wise. You can use filters to reduce the output - they're very simple and easy to use. This tool has saved me countless hours of debugging.
-EDIT-
Another possible cause might be that your server somehow decided to resolve IPs to hostnames, in which case IP addresses may no longer work - one would need to add hostnames to the allowed list. Not sure if it works this way for MySQL, though.
Could you have turned off TCP connections in MySQL?
Also, is the MySQL port open in your firewall?
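A quick way to check both from the server machine (Windows in this case; the port value assumes the default 3306):
mysql -u root -p -e "SHOW VARIABLES LIKE 'skip_networking'; SHOW VARIABLES LIKE 'port';"
netstat -an | findstr 3306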
If you changed your IP (DHCP?), make sure to correct it in my.cnf if you bound mysqld to your LAN IP:
[mysqld]
...
bind-address=192.168.x.y