MySQL - 0 [ERROR] Error in accept: Bad file descriptor - mysql

I recently upgraded to MySQL 5.7.12 on Debian (Debian 3.2.78-1 x86_64 GNU/Linux) and the server has been hanging every few hours. The syslog and mysql.log are flooded with this:
2016-06-13T18:05:20.261209Z 0 [ERROR] Error in accept: Bad file descriptor
MySQL info:
mysql Ver 14.14 Distrib 5.7.12-5, for debian-linux-gnu (x86_64) using 6.2
Here are the relevant pieces of the [mysqld] section of my.cnf, in case some values need tweaking:
[mysqld]
max_allowed_packet = 64M
thread_stack = 256K
thread_cache_size = 8
max_connections = 150
max_connect_errors = 10000
connect_timeout = 30
wait_timeout = 86400
table_open_cache = 2048
open_files_limit = 65535
query_cache_limit = 4M
query_cache_size = 128M
query_cache_type = 1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
# * InnoDB
innodb_file_per_table
innodb_buffer_pool_instances=2
innodb_buffer_pool_size=2G
thread_pool_size = 24

We had the same issue on an Ubuntu 16.04 system with MySQL 5.7.13. We increased the max open files limit for the service with a systemd drop-in like this:
/etc/systemd/system/mysql.service.d/10-ulimit.conf
[Service]
LimitNOFILE=1000000
So far the issue has not recurred. Perhaps MySQL somehow needs more file descriptors now.
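To make the drop-in take effect, reload systemd and restart the service, then confirm the limit actually reached the running process; a quick check (assuming the unit is called mysql, as above):
sudo systemctl daemon-reload
sudo systemctl restart mysql
# confirm the new limit is applied to the running service
systemctl show mysql | grep LimitNOFILE
grep 'open files' /proc/$(pidof mysqld)/limits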

I found the problem (or possibly one of the problems). Here is an extract from strace on mysqld:
...
socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 20
write(2, "2017-01-29T22:22:45.433033Z 0 [N"..., 72) = 72
setsockopt(20, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(20, SOL_IPV6, IPV6_V6ONLY, [0], 4) = 0
bind(20, {sa_family=AF_INET6, sin6_port=htons(3306), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
listen(20, 70) = 0
fcntl(20, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(20, F_SETFL, O_RDWR|O_NONBLOCK) = 0
...
accept(20, {sa_family=AF_INET6, sin6_port=htons(58332), inet_pton(AF_INET6, "::ffff:127.0.0.1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 37
rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f3ddeac84b0}, {SIG_DFL, [], 0}, 8) = 0
getpeername(37, {sa_family=AF_INET6, sin6_port=htons(58332), inet_pton(AF_INET6, "::ffff:127.0.0.1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
getsockname(37, {sa_family=AF_INET6, sin6_port=htons(3306), inet_pton(AF_INET6, "::ffff:127.0.0.1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
open("/etc/hosts.allow", O_RDONLY) = 38
fstat(38, {st_mode=S_IFREG|0644, st_size=589, ...}) = 0
read(38, "# /etc/hosts.allow: list of host"..., 4096) = 589
read(38, "", 4096) = 0
close(38) = 0
open("/etc/hosts.deny", O_RDONLY) = 38
fstat(38, {st_mode=S_IFREG|0644, st_size=704, ...}) = 0
read(38, "# /etc/hosts.deny: list of hosts"..., 4096) = 704
close(38) = 0
socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 38
connect(38, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 110) = 0
sendto(38, "<36>Jan 29 14:23:08 mysqld[13052"..., 72, MSG_NOSIGNAL, NULL, 0) = 72
shutdown(20, SHUT_RDWR) = 0
close(20) = 0
poll([{fd=20, events=POLLIN}, {fd=22, events=POLLIN}], 2, -1) = 1 ([{fd=20, revents=POLLNVAL}])
accept(-1, 0x7ffe6ebd7160, 0x7ffe6ebd70fc) = -1 EBADF (Bad file descriptor)
write(2, "2017-01-29T22:23:08.109451Z 0 [E"..., 75) = 75
... rinse and repeat *REALLY* fast!
In locking down my system with tcp_wrappers I had inadvertently taken mysqld out of both hosts.allow and hosts.deny. It seems that after checking both hosts.allow and hosts.deny, mysqld shuts down and closes the listening socket, as you might expect. However, it then immediately starts to poll the (now non-existent) socket for activity.
I just did another test where my tcp_wrappers was correctly configured. When I connect from an authorized host all is fine; however, when I connect from a blocked address the same issue occurs. Based on this I recommend using other tools to secure mysqld and making your tcp_wrappers config more open than your firewall. That being said, the bug should still be fixed!
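For reference, a minimal tcp_wrappers setup that keeps mysqld covered by both files might look like the following; the daemon name and address list are illustrative, adjust them to your network:
# /etc/hosts.allow -- allow mysqld from localhost and the local subnet (example addresses)
mysqld : 127.0.0.1 192.168.1.0/255.255.255.0
# /etc/hosts.deny -- refuse anything not matched above
mysqld : ALL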
This fix has yet to stand the test of time so, as usual, YMMV. Hope it helps anyway
Nick

I researched this a bit and found the following:
Present in MariaDB also
https://lists.launchpad.net/maria-discuss/msg03060.html
https://mariadb.atlassian.net/browse/MDEV-8995
Percona Server/Percona XtraDB Cluster
https://groups.google.com/forum/#!topic/percona-discussion/Tu0S2OvYqKA
Old bug from 2010/2012
https://bugs.mysql.com/bug.php?id=48929
http://lists.mysql.com/commits/96472
Some interesting information (should never happen)
https://lists.mysql.com/mysql/97275
[I work for Percona]

I have the same issue after upgrading to Percona Cluster 5.7.14-26.17-1.trusty.
The ulimit.conf suggestion doesn't help, and I've made sure that there are sufficient file handles, so far as I can tell, by editing /etc/security/limits.conf and /etc/sysctl.conf.
I can reproduce this easily by telnetting to port 3306 and then disconnecting; the server then goes into a spin logging this error.
A horrible workaround for this, which looks promising in my environment, is to avoid using TCP connections on port 3306, and use unix sockets instead.
You can proxy from port 3306 to the socket by changing the port number in /etc/mysql/my.cnf and then using socat
nohup socat TCP4-LISTEN:3306,fork UNIX-CONNECT:/var/run/mysqld/mysqld.sock&
If I then telnet in on port 3306 and disconnect, I can't provoke the problem. I intend to report back on how well this stands up over time.
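For clarity, the my.cnf change mentioned above just moves mysqld's own TCP listener off 3306 so socat can take over that port; a sketch (the alternative port number is an arbitrary choice):
[mysqld]
# socat listens on 3306 and forwards to the unix socket; mysqld itself listens elsewhere
port = 3307
bind-address = 127.0.0.1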
FWIW, the code looks as though it expects this to happen sometimes:
for (uint retry= 0; retry < MAX_ACCEPT_RETRY; retry++)
{
  socket_len_t length= sizeof(struct sockaddr_storage);
  connect_sock= mysql_socket_accept(key_socket_client_connection, listen_sock,
                                    (struct sockaddr *)(&cAddr), &length);
  if (mysql_socket_getfd(connect_sock) != INVALID_SOCKET ||
      (socket_errno != SOCKET_EINTR && socket_errno != SOCKET_EAGAIN))
    break;
}
if (mysql_socket_getfd(connect_sock) == INVALID_SOCKET)
{
  /*
    accept(2) failed on the listening port, after many retries.
    There is not much details to report about the client,
    increment the server global status variable.
  */
  connection_errors_accept++;
  if ((m_error_count++ & 255) == 0) // This can happen often
    sql_print_error("Error in accept: %s", strerror(errno));
  if (socket_errno == SOCKET_ENFILE || socket_errno == SOCKET_EMFILE)
    sleep(1); // Give other threads some time
  return NULL;
}

I came here with the same error and none of the solutions worked, but after some research on our end we found that it was AppArmor denying access to our logging directory that caused the bad file descriptor error message.
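If you suspect the same cause, AppArmor denials show up in the kernel log, and the fix is to grant the mysqld profile access to your log directory; a rough sketch (the directory is an example, use whatever your log settings point at):
# look for denials attributed to mysqld
dmesg | grep -i 'apparmor.*denied.*mysqld'
# add the missing paths to /etc/apparmor.d/usr.sbin.mysqld, e.g.:
#   /var/log/mysql/ r,
#   /var/log/mysql/** rw,
# then reload the profile and restart the server
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
sudo systemctl restart mysql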

Related

Simulate "No operations allowed after connection closed"

I'm occasionally getting the error "No operations allowed after connection closed" from Grails 2.x/MySQL/DBCP and I couldn't find a solution.
Because I get the error hours later, e.g. the next day after a restart, it's difficult to fix.
I feel I need to replicate it in a predictable manner so that I can find a definitive fix.
What parameters can I set on the MySQL and Grails side so that the error occurs immediately after a run?
I ended up setting two timeouts manually, making a connection afterwards, and seeing the following in the MySQL logs, which confirmed the connection was aborted after 10 seconds.
2022-02-12T22:03:53.690960Z 493 [Note] Aborted connection 493 to db: 'quantanywhere_2' user: 'root' host: '172.17.0.1' (Got timeout reading communication packets)
SET @@GLOBAL.wait_timeout=10;
SET @@SESSION.wait_timeout=10;
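To confirm the global value that freshly opened connections will inherit before making the test connection, a quick check from the shell (the client invocation is just an example):
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'wait_timeout'"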
And the following pool settings should be OK with the default wait_timeout of 28800 seconds (8 hours):
validationInterval = 28000
testWhileIdle=true
maxActive = 50
maxIdle = 25
maxWait = 10000
maxAge: 600000
minIdle = 5
validationQuery="select 1"
validationQueryTimeout=3
initialSize = 10
minEvictableIdleTimeMillis = 60000
timeBetweenEvictionRunsMillis = 5000
numTestsPerEvictionRun = 3
testOnBorrow = true
testWhileIdle = true
testOnReturn = false
removeAbandoned = true
removeAbandonedTimeout = 120

iSQL - Segmentation fault for mysql server but works fine with SQL servers

I can connect to SQL Server instances using isql, but when I test against a MySQL server it throws a segmentation fault. Same issue with tsql. The MySQL server version is 5.7; I tested against two different MySQL servers (both 5.7).
root@client001:~# isql -v DB01
Segmentation fault (core dumped)
cat /etc/odbc.ini
[DB01]
Driver = FreeTDS
Server = 10.10.10.10
Port = 3306
TDS Version = 7.2
cat /etc/odbcinst.ini
[FreeTDS]
Description=v0.63 with protocol v8.0
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
strace isql -v DB01 username password
(lines above this are just reading odbc.ini)
brk(0xa68000) = 0xa68000
read(4, "", 4096) = 0
close(4) = 0
brk(0xa65000) = 0xa65000
brk(0xa48000) = 0xa48000
brk(0xa47000) = 0xa47000
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4
setsockopt(4, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
ioctl(4, FIONBIO, [1]) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress)
poll([{fd=4, events=POLLOUT}], 1, 90000000) = 1 ([{fd=4, revents=POLLOUT}])
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
poll([{fd=4, events=POLLOUT}], 1, -1) = 1 ([{fd=4, revents=POLLOUT}])
sendto(4, "\2\0\2\0\0\0\0\0CLIENT_SERVER001\0\0\0\0\0\0\0\0\0\0"..., 512, MSG_NOSIGNAL|MSG_MORE, NULL, 0) = 512
poll([{fd=4, events=POLLOUT}], 1, -1) = 1 ([{fd=4, revents=POLLOUT}])
sendto(4, "\2\1\0L\0\0\0\0\0\0\0\0\0\0\n\0\0\0\0\0\0\0\0\0\0\0\0\0\0utf"..., 76, MSG_NOSIGNAL, NULL, 0) = 76
poll([{fd=4, events=POLLIN}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "[\0\0\0\n5.7", 8, MSG_NOSIGNAL, NULL, NULL) = 8
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0} ---
+++ killed by SIGSEGV (core dumped) +++
Segmentation fault (core dumped)
Figured it out.
I had to download the MySQL Connector/ODBC 5.2.7 library, copy it to the ODBC drivers directory, and create a new entry in odbcinst.ini and odbc.ini (or change the Driver line in odbc.ini).
wget https://cdn.mysql.com/archives/mysql-connector-odbc-5.2/mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit.tar.gz
Extract and copy mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit/lib/libmyodbc5*.so to /usr/lib/x86_64-linux-gnu/odbc/
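Roughly, the extract-and-copy step above translates to something like this (paths match the tarball shown; run as root or with sudo):
tar xzf mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit.tar.gz
cp mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit/lib/libmyodbc5*.so /usr/lib/x86_64-linux-gnu/odbc/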
cat /etc/odbcinst.ini
[FreeTDS]
Description=v0.63 with protocol v8.0
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
[MySQL_ODBC]
Driver=/usr/lib/x86_64-linux-gnu/odbc/libmyodbc5w.so
cat /etc/odbc.ini
[DB01]
Driver = MySQL_ODBC
Server = 10.10.10.10
Port = 3306
TDS Version = 7.2

24 cores and MYSQL is using 1 on INSERT

Hi, my server has 24 cores and 32 GB of memory.
I am doing multiple "INSERT INTO ... SELECT" statements of 50 million rows at a time.
Each query takes about 15 hours, but it ticks along at 100% of only one CPU; I'm trying to get MySQL 5.5 (InnoDB) to use more of the resources.
I have read multiple threads about it, but I can't get it to work.
Most of the advice is about adding innodb_thread_concurrency = 0, but I still get no results.
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /media/ssd/db
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
innodb_buffer_pool_size=26G
innodb_thread_concurrency = 0
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer = 1000M
max_allowed_packet = 160M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
table_cache = 800
query_cache_limit = 5000M
query_cache_size = 1600M
join_buffer_size = 1000M
log_error = /var/log/mysql/error.log
# Here you can see queries with especially long duration
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
Try these parameters:
innodb_io_capacity=5000 (or even 20000 depending on your IO subsystem)
innodb_buffer_pool_size=4G (for example)
innodb_log_file_size=1G
innodb_write_io_threads = 64
innodb_read_io_threads = 64
innodb_thread_concurrency = 0
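Put together as a [mysqld] block, the suggestions look like the sketch below; treat the numbers as starting points to tune against your hardware, and note that on MySQL 5.5 changing innodb_log_file_size requires a clean shutdown and moving the old ib_logfile* files aside before restarting.
[mysqld]
# starting points only; tune to your I/O subsystem and available RAM
innodb_io_capacity = 5000
innodb_buffer_pool_size = 4G
innodb_log_file_size = 1G
innodb_write_io_threads = 64
innodb_read_io_threads = 64
innodb_thread_concurrency = 0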

Django, nginx and uWSGI caching results until uWSGI/MySQL restart

I've written a server app in Django that serves an API to a mobile app with Tastypie, with a local MySQL server as the database.
It seems like queries are cached until the process is killed or ends. If I create a new user in the backend, it only appears in the list after I restart uWSGI or MySQL, or if I log into the backend from a different browser.
MySQL process list:
41 example localhost:58747 example 13 Sleep
42 example localhost:58748 example 16 Sleep
Killing the processes that are in the Sleep state also triggers a refresh of the data.
uWSGI config
[uwsgi]
vhost = true
plugins = python
socket = /tmp/example.com.sock
master = true
enable-threads = true
processes = 2
wsgi-file = /var/sites/example-server/example/example/wsgi.py
virtualenv = /var/sites/example-server/PYTHON_ENV
chdir = /var/sites/example-server/example
touch-reload = /var/sites/example-server/example/reload
nginx config
server {
client_max_body_size 20M;
listen 80;
server_name example.com;
access_log /var/log/nginx/example.com_access.log;
error_log /var/log/nginx/example.com_error.log;
location / {
uwsgi_pass unix:///tmp/example.com.sock;
include uwsgi_params;
}
location /media/ {
alias /var/sites/example-server/example/example/media/;
}
location /static/ {
alias /var/sites/example-server/example/example/static/;
}
}
my.cnf
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
transaction-isolation = READ-COMMITTED
!includedir /etc/mysql/conf.d/
What can I do to make this problem go away?
Cheers
Morten
I had the same behavior and found this post https://plus.google.com/u/0/101898908470597791359/posts/AuMJdgEo93k
Adding this to settings.py (only the OPTIONS key) in Django:
DATABASES = {
    'default': {
        'OPTIONS': {
            "init_command": "SET storage_engine=INNODB, SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED",
        }
    }
}
seems to have resolved the problem.

admin-username error of proxy

I tried to install mysql-proxy on a development machine and I got the following error.
/etc/init.d/mysql-proxyd start
Starting mysql-proxy: 2011-02-26 15:51:45: (critical) admin-plugin.c:569: --admin-username needs to be set
2011-02-26 15:51:45: (critical) mainloop.c:267: applying config of plugin admin failed
2011-02-26 15:51:45: (critical) mysql-proxy-cli.c:596: Failure from chassis_mainloop. Shutting down.
[ OK ]
Since this is only a test machine, I do not want the security feature of proxy. How do I avoid the above error?
Either upgrade your version of mysql-proxy to 0.8.2 or greater, or explicitly specify that you don't need the admin plugin by running mysql-proxy --plugins=proxy. Alternatively, keep the admin plugin and give it the settings it complains about; a config file that does so looks like this:
[mysql-proxy]
daemon = true
user = mysql
proxy-skip-profiling = true
keepalive = true
max-open-files = 2048
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
admin-address=:4401
admin-username=1
admin-password=1
admin-lua-script=/usr/local/lib/mysql-proxy/lua/admin.lua
proxy-address = 0.0.0.0:3307
proxy-backend-addresses = 192.168.2.1:3306
proxy-read-only-backend-addresses=192.168.6.2:3306, 192.168.6.1:3306
proxy-lua-script=/usr/lib/mysql-proxy/lua/proxy/balance.lua
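And if you drop the admin plugin instead, the invocation is just the proxy plugin plus your backends; the command-line flags mirror the config keys above (addresses taken from that config):
mysql-proxy --plugins=proxy --proxy-address=0.0.0.0:3307 --proxy-backend-addresses=192.168.2.1:3306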