Running MIPS binaries with Qemu - mips

I'm currently attempting to run a MIPS binary using qemu. My host system is Arch Linux. As of right now, I've cd'd into the root directory of the firmware where the binary I'm trying to run exists. I've also copied the qemu-mips binary from my host system into the firmware's root directory.
While in the firmware's root directory I was running this command:
sudo chroot . ./qemu-mips bin/busybox
Yet I'm receiving this error:
chroot: failed to run command ‘./qemu-mips’: No such file or directory
This is strange, considering I just copied the qemu-mips binary to the firmware's root directory where I'm currently sitting. Most of the guides I read describing how to do this say to use qemu-mips-static; however, even after installing all the available qemu tools, that binary does not exist on my system. Is there something glaring that I'm missing? Thank you.
execve("/usr/sbin/chroot", ["chroot", ".", "./qemu-mips", "bin/busybox"], 0x7fffc0804f98 /* 16 vars */) = 0
brk(NULL) = 0x5653dd4e7000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=124268, ...}) = 0
mmap(NULL, 124268, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f209819a000
close(3) = 0
openat(AT_FDCWD, "/usr/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\20\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2069912, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2098198000
mmap(NULL, 3897584, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f2097bdd000
mprotect(0x7f2097d8b000, 2097152, PROT_NONE) = 0
mmap(0x7f2097f8b000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1ae000) = 0x7f2097f8b000
mmap(0x7f2097f91000, 14576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f2097f91000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7f2098199500) = 0
mprotect(0x7f2097f8b000, 16384, PROT_READ) = 0
mprotect(0x5653dcd5b000, 4096, PROT_READ) = 0
mprotect(0x7f20981b9000, 4096, PROT_READ) = 0
munmap(0x7f209819a000, 124268) = 0
brk(NULL) = 0x5653dd4e7000
brk(0x5653dd508000) = 0x5653dd508000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1682192, ...}) = 0
mmap(NULL, 1682192, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2097ffd000
close(3) = 0
getcwd("/home/user/firmware/kkeps-root", 4096) = 46
chroot(".") = 0
chdir("/") = 0
execve("./qemu-mips", ["./qemu-mips", "bin/busybox"], 0x7ffec746eba0 /* 16 vars */) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/charset.alias", O_RDONLY|O_NOFOLLOW) = -1 ENOENT (No such file or directory)
write(2, "chroot: ", 8chroot: ) = 8
write(2, "failed to run command \342\200\230./qemu-"..., 39failed to run command ‘./qemu-mips’) = 39
open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
write(2, ": No such file or directory", 27: No such file or directory) = 27
write(2, "\n", 1
) = 1
close(1) = 0
close(2) = 0
exit_group(127) = ?
+++ exited with 127 +++
So it appears there are a few things that aren't being found, one of them being /etc/ld.so.preload. I'm not 100% sure what to do about this situation. I'm guessing this has to do with the fact that I am not using the static binary.
EDIT: Fixed by installing qemu-user-static from the AUR.

As you've figured out, this is because you weren't using the statically linked binary. It's not actually strictly necessary to use the static QEMU binary inside a chroot, it's just that for the static binary you need only copy that one file into the chroot, whereas if you use the dynamically linked QEMU you also need to copy in the host dynamic linker and all the host libraries that QEMU links against -- and that can run into problems if the host and the guest want to use the same pathname for dynamic libraries.
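If it helps to see the two options side by side, here is a rough sketch; the package name and binary paths are assumptions for an Arch host, so adjust them to whatever your system actually provides:
# Option 1: static QEMU -- only this one file needs to go into the chroot
cp /usr/bin/qemu-mips-static .                # run from the firmware root
sudo chroot . ./qemu-mips-static bin/busybox
# Option 2: dynamic QEMU -- the host loader and libraries must come along too
ldd /usr/bin/qemu-mips                        # lists every host library it needs
# each listed library (plus the ld-linux loader) would have to be replicated at
# the same path inside the chroot, where it can collide with the guest's own libraries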

Related

iSQL - Segmentation fault for MySQL server but works fine with SQL Server

I can connect to SQL Server instances using isql, but when I tested a MySQL server it throws a segmentation fault. Same issue with tsql. The MySQL server version is 5.7. Tested with 2 different MySQL servers (both 5.7).
root#client001~: isql -v DB01
Segmentation fault (core dumped)
cat /etc/odbc.ini
[DB01]
Driver = FreeTDS
Server = 10.10.10.10
Port = 3306
TDS Version = 7.2
cat /etc/odbcinst.ini
[FreeTDS]
Description=v0.63 with protocol v8.0
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
strace isql -v DB01 username password
(the lines above this are just reading odbc.ini)
brk(0xa68000) = 0xa68000
read(4, "", 4096) = 0
close(4) = 0
brk(0xa65000) = 0xa65000
brk(0xa48000) = 0xa48000
brk(0xa47000) = 0xa47000
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 4
setsockopt(4, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
ioctl(4, FIONBIO, [1]) = 0
connect(4, {sa_family=AF_INET, sin_port=htons(3306), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress)
poll([{fd=4, events=POLLOUT}], 1, 90000000) = 1 ([{fd=4, revents=POLLOUT}])
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
poll([{fd=4, events=POLLOUT}], 1, -1) = 1 ([{fd=4, revents=POLLOUT}])
sendto(4, "\2\0\2\0\0\0\0\0CLIENT_SERVER001\0\0\0\0\0\0\0\0\0\0"..., 512, MSG_NOSIGNAL|MSG_MORE, NULL, 0) = 512
poll([{fd=4, events=POLLOUT}], 1, -1) = 1 ([{fd=4, revents=POLLOUT}])
sendto(4, "\2\1\0L\0\0\0\0\0\0\0\0\0\0\n\0\0\0\0\0\0\0\0\0\0\0\0\0\0utf"..., 76, MSG_NOSIGNAL, NULL, 0) = 76
poll([{fd=4, events=POLLIN}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "[\0\0\0\n5.7", 8, MSG_NOSIGNAL, NULL, NULL) = 8
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0} ---
+++ killed by SIGSEGV (core dumped) +++
Segmentation fault (core dumped)
Figured it out.
I had to download the MySQL Connector/ODBC 5.2.7 library, copy it to the ODBC drivers directory, and create a new entry in odbcinst.ini and odbc.ini (or add a Driver line to odbc.ini).
wget https://cdn.mysql.com/archives/mysql-connector-odbc-5.2/mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit.tar.gz
Extract and copy mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit/lib/libmyodbc5*.so to /usr/lib/x86_64-linux-gnu/odbc/
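For reference, the unpack-and-copy step could be done roughly like this (same tarball and destination paths as above):
tar xzf mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit.tar.gz
sudo cp mysql-connector-odbc-5.2.7-linux-debian6.0-x86-64bit/lib/libmyodbc5*.so /usr/lib/x86_64-linux-gnu/odbc/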
cat /etc/odbcinst.ini
[FreeTDS]
Description=v0.63 with protocol v8.0
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
[MySQL_ODBC]
Driver=/usr/lib/x86_64-linux-gnu/odbc/libmyodbc5w.so
cat /etc/odbc.ini
[DB01]
Driver = MySQL_ODBC
Server = 10.10.10.10
Port = 3306
TDS Version = 7.2

Mercurial seems ignoring its config .hgrc, why?

I preconfigured a login/password in my .hgrc, but despite that Mercurial ignores it:
[auth]
becpg.prefix = https://becpg.fr/hg/becpg-community
becpg.username = read-only
becpg.password = read-only
Trying to clone it, I get this:
$ hg clone https://becpg.fr/hg/becpg-community
http authorization required for https://www.becpg.fr/hg/becpg-community
realm: beCPG repositories
user:
When I give it the login and the password manually, they are accepted without any problem.
Stracing the Mercurial process, I can see that it does find my .hgrc in my home directory and reads it in:
15751 open("/home/myusername/.hgrc", O_RDONLY) = 3
15751 fstat(3, {st_mode=S_IFREG|0600, st_size=112, ...}) = 0
15751 fstat(3, {st_mode=S_IFREG|0600, st_size=112, ...}) = 0
15751 lseek(3, 0, SEEK_CUR) = 0
15751 lseek(3, 0, SEEK_CUR) = 0
15751 fstat(3, {st_mode=S_IFREG|0600, st_size=112, ...}) = 0
15751 read(3, "[auth]\nbecpg.prefix = https://becpg.fr/hg/becpg-community\nbecpg.username = read-only\nbecpg.password = read-only\n", 4096) = 112
15751 read(3, "", 4096) = 0
15751 close(3) = 0
What could be the reason?
$ hg clone https://becpg.fr/hg/becpg-community
http authorization required for https://www.becpg.fr/hg/becpg-community
You try to clone from becpg.fr, but the authorization prompt asks for www.becpg.fr.
I believe you are being redirected.
Try this:
[auth]
becpg.prefix = www.becpg.fr/hg/becpg-community
becpg.username = read-only
becpg.password = read-only
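Before retrying the clone, you can check what Mercurial actually loaded from the file; hg config with a section name prints just that section (on older versions the command is hg showconfig):
hg config auth                                   # should show becpg.prefix, becpg.username, becpg.password
hg clone https://becpg.fr/hg/becpg-community     # with a matching prefix there should be no prompt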

Django MySQL server has gone away

I have a Django site that was using SQLite for the backend. We recently upgraded to MySQL and are now receiving intermittent errors:
'MySQL server has gone away'
After starting the site it loads fine, but clicking around to change pages results in the error, usually after viewing 1 to 6 pages. The page it occurs on appears to be irrelevant; the same page may load fine the first time but throw the error the second time.
Here's my environment:
Nginx running on host machine as reverse proxy
Docker container running Nginx inside, Django 1.8, uwsgi, and Python 3.4.
Django is using the mysqlclient db driver.
Google Cloud MySQL, 2nd generation
I've tried using a local MySQL server rather than the Google Cloud MySQL; it didn't make a difference. I also tried using the MySQL Connector/Python DB driver instead of mysqlclient. It produced a different (but similar) error message and traceback.
If I go back to SQLite, it works fine. Running under the django development server rather than nginx also works fine.
I've seen posts for this error stating that the django CONN_MAX_AGE should be less than the MySQL wait_timeout, but I'm using the default CONN_MAX_AGE setting of 0. According to the django docs, setting this to 0 (default) causes django to create a new database connection for each request. A value greater than 0 will define how long the connection remains open so it can be reused by another request. If I set it to a value greater than 0 the error goes away, but I'm concerned that I'm just deferring the error until later after the persistent connection expires. Also the django development web server creates a new connection for each request (similar to CONN_MAX_AGE=0), but I don't get the error using the dev server. Django versions prior to 1.6 did not support persistent connections, so it seems like I should be able to keep the default CONN_MAX_AGE=0 setting.
I had to enable logging for uwsgi; its error log shows the following:
*** Starting uWSGI 2.0.13.1 (64bit) on [Fri Nov 18 22:15:33 2016] ***
compiled with version: 4.8.4 on 18 May 2016 17:48:05
os: Linux-4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016
nodename: 083c33814e43
machine: x86_64
clock source: unix
detected number of CPU cores: 8
current working directory: /
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
chdir() to /srv/ftc
your memory page size is 4096 bytes
detected max file descriptor number: 65536
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /srv/ftc/ftc.sock fd 3
Python version: 3.4.3 (default, Oct 14 2015, 20:31:36) [GCC 4.8.4]
Set PythonHome to /srv/env/ftc
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x22a73d0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1476189 bytes (1441 KB) for 10 cores
*** Operational MODE: preforking ***
Loading configuration from /srv/ftc/data/settings_local.py
WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x22a73d0 pid: 9 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 9)
spawned uWSGI worker 1 (pid: 17, cores: 1)
spawned uWSGI worker 2 (pid: 18, cores: 1)
spawned uWSGI worker 3 (pid: 19, cores: 1)
spawned uWSGI worker 4 (pid: 20, cores: 1)
spawned uWSGI worker 5 (pid: 21, cores: 1)
spawned uWSGI worker 6 (pid: 22, cores: 1)
spawned uWSGI worker 7 (pid: 23, cores: 1)
spawned uWSGI worker 8 (pid: 24, cores: 1)
spawned uWSGI worker 9 (pid: 25, cores: 1)
spawned uWSGI worker 10 (pid: 26, cores: 1)
[pid: 21|app: 0|req: 1/1] 192.168.1.15 () {38 vars in 637 bytes} [Fri Nov 18 17:15:39 2016] GET /faq/ => generated 13018 bytes in 3021 msecs (HTTP/1.1 200) 8 headers in 412 bytes (1 switches on core 0)
[pid: 21|app: 0|req: 2/2] 192.168.1.15 () {40 vars in 707 bytes} [Fri Nov 18 17:16:20 2016] GET /find-local-food/ => generated 20480 bytes in 1510 msecs (HTTP/1.1 200) 7 headers in 290 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 3/3] 192.168.1.15 () {40 vars in 711 bytes} [Fri Nov 18 17:16:23 2016] GET /my-programs/ => generated 9297 bytes in 1454 msecs (HTTP/1.1 200) 7 headers in 290 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 4/4] 192.168.1.15 () {40 vars in 683 bytes} [Fri Nov 18 17:16:26 2016] GET / => generated 13562 bytes in 3037 msecs (HTTP/1.1 200) 8 headers in 412 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 5/5] 192.168.1.15 () {40 vars in 683 bytes} [Fri Nov 18 17:16:30 2016] GET /about/ => generated 10064 bytes in 1586 msecs (HTTP/1.1 200) 7 headers in 290 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 6/6] 192.168.1.15 () {40 vars in 685 bytes} [Fri Nov 18 17:16:33 2016] GET /faq/ => generated 13018 bytes in 1145 msecs (HTTP/1.1 200) 8 headers in 412 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 7/7] 192.168.1.15 () {40 vars in 691 bytes} [Fri Nov 18 17:16:35 2016] GET /contact/ => generated 8734 bytes in 1352 msecs (HTTP/1.1 200) 7 headers in 290 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 8/8] 192.168.1.15 () {40 vars in 711 bytes} [Fri Nov 18 17:16:38 2016] GET /find-local-food/ => generated 20480 bytes in 1484 msecs (HTTP/1.1 200) 7 headers in 290 bytes (2 switches on core 0)
[pid: 21|app: 0|req: 9/9] 192.168.1.15 () {40 vars in 699 bytes} [Fri Nov 18 17:16:41 2016] GET /about/ => generated 10064 bytes in 1723 msecs (HTTP/1.1 200) 7 headers in 290 bytes (2 switches on core 0)
Traceback (most recent call last):
File "/srv/env/ftc/lib/python3.4/site-packages/django/core/urlresolvers.py", line 393, in urlconf_module
return self._urlconf_module
AttributeError: 'RegexURLResolver' object has no attribute '_urlconf_module'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/backends/mysql/base.py", line 124, in execute
return self.cursor.execute(query, args)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 250, in execute
self.errorhandler(self, exc, value)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/connections.py", line 42, in defaulterrorhandler
raise errorvalue
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 247, in execute
res = self._query(query)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 411, in _query
rowcount = self._do_query(q)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 374, in _do_query
db.query(q)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/connections.py", line 270, in query
_mysql.connection.query(self, query)
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/srv/env/ftc/lib/python3.4/site-packages/django/core/handlers/wsgi.py", line 170, in __call__
self.load_middleware()
File "/srv/env/ftc/lib/python3.4/site-packages/django/core/handlers/base.py", line 52, in load_middleware
mw_instance = mw_class()
File "/srv/env/ftc/lib/python3.4/site-packages/django/middleware/locale.py", line 24, in __init__
for url_pattern in get_resolver(None).url_patterns:
File "/srv/env/ftc/lib/python3.4/site-packages/django/core/urlresolvers.py", line 401, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/srv/env/ftc/lib/python3.4/site-packages/django/core/urlresolvers.py", line 395, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/srv/env/ftc/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "./ftc/urls.py", line 23, in <module>
url(r'^', include('areas.public.urls')),
File "/srv/env/ftc/lib/python3.4/site-packages/django/conf/urls/__init__.py", line 33, in include
urlconf_module = import_module(urlconf_module)
File "/srv/env/ftc/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "./areas/public/urls.py", line 36, in <module>
url(r'^paypal/', include('areas.public.paypal.urls')),
File "/srv/env/ftc/lib/python3.4/site-packages/django/conf/urls/__init__.py", line 33, in include
urlconf_module = import_module(urlconf_module)
File "/srv/env/ftc/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "./areas/public/paypal/urls.py", line 5, in <module>
from . import payment_complete
File "./areas/public/paypal/payment_complete.py", line 9, in <module>
PublicCommon = PublicCommon()
File "./areas/public/common/__init__.py", line 34, in __init__
self.login_page = self.get_cms_page('pu_login')
File "./areas/public/common/__init__.py", line 27, in get_cms_page
return get_object_or_404(Page, reverse_id=reverse_id, publisher_is_draft=False)
File "/srv/env/ftc/lib/python3.4/site-packages/django/shortcuts.py", line 155, in get_object_or_404
return queryset.get(*args, **kwargs)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/models/query.py", line 328, in get
num = len(clone)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/models/query.py", line 144, in __len__
self._fetch_all()
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/models/query.py", line 965, in _fetch_all
self._result_cache = list(self.iterator())
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/models/query.py", line 238, in iterator
results = compiler.execute_sql()
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 840, in execute_sql
cursor.execute(sql, params)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/utils.py", line 98, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/srv/env/ftc/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/srv/env/ftc/lib/python3.4/site-packages/django/db/backends/mysql/base.py", line 124, in execute
return self.cursor.execute(query, args)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 250, in execute
self.errorhandler(self, exc, value)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/connections.py", line 42, in defaulterrorhandler
raise errorvalue
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 247, in execute
res = self._query(query)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 411, in _query
rowcount = self._do_query(q)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/cursors.py", line 374, in _do_query
db.query(q)
File "/srv/env/ftc/lib/python3.4/site-packages/MySQLdb/connections.py", line 270, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (2006, 'MySQL server has gone away')
[pid: 20|app: 0|req: 1/10] 192.168.1.15 () {40 vars in 701 bytes} [Fri Nov 18 17:16:43 2016] GET /my-programs/ => generated 0 bytes in 258 msecs (HTTP/1.1 500) 0 headers in 0 bytes (1 switches on core 0)
...brutally killing workers...
worker 1 buried after 1 seconds
worker 2 buried after 1 seconds
worker 3 buried after 1 seconds
worker 4 buried after 1 seconds
worker 5 buried after 1 seconds
worker 6 buried after 1 seconds
worker 7 buried after 1 seconds
worker 8 buried after 1 seconds
worker 9 buried after 1 seconds
worker 10 buried after 1 seconds
binary reloading uWSGI...
chdir() to /
closing all non-uwsgi socket fds > 2 (max_fd = 65536)...
found fd 3 mapped to socket 0 (/srv/ftc/ftc.sock)
running /usr/local/bin/uwsgi
[uWSGI] getting INI configuration from /srv/ftc/conf/uwsgi.ini
Thanks in advance for any help!
This is usually a MySQL configuration problem. The settings in particular that you want to adjust for your website are the following (a quick check of their current values is sketched after these links):
max_allowed_packet: http://dev.mysql.com/doc/refman/5.7/en/packet-too-large.html
wait_timeout: http://dev.mysql.com/doc/refman/5.7/en/gone-away.html
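For what it's worth, checking the current values and raising them would look roughly like this; the numbers are placeholders, not recommendations:
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('max_allowed_packet','wait_timeout')"
# then raise them in the [mysqld] section of my.cnf and restart mysqld, e.g.:
# max_allowed_packet = 64M
# wait_timeout       = 28800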
But if you look at your log, the MySQL 'gone away' error only occurs during the handling of an exception that is being raised somewhere else. I would try to solve that exception first before tackling the MySQL issue:
Traceback (most recent call last):
File "/srv/env/ftc/lib/python3.4/site-packages/django/core/urlresolvers.py", line 393, in urlconf_module
return self._urlconf_module
AttributeError: 'RegexURLResolver' object has no attribute '_urlconf_module'
I found the solution. I added lazy-apps to uwsgi.ini. See nginx, uwsgi, python permanent mysql error after some time from starting application.
I'm not using SQLAlchemy, but this did fix the 'MySQL server has gone away' error.
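For anyone else landing here, the change amounts to one line in the uwsgi ini file (the path /srv/ftc/conf/uwsgi.ini comes from the log above; the rest of the file stays as it was):
[uwsgi]
; ... existing options unchanged ...
lazy-apps = true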

MySQL - 0 [ERROR] Error in accept: Bad file descriptor

Recently upgraded to MySQL 5.7.12 on a Debian (Debian 3.2.78-1 x86_64 GNU/Linux) and have been running into the server hanging after every few hours. This is getting flooded in the syslog and mysql.log:
2016-06-13T18:05:20.261209Z 0 [ERROR] Error in accept: Bad file descriptor
MySQL info:
mysql Ver 14.14 Distrib 5.7.12-5, for debian-linux-gnu (x86_64) using 6.2
Here are the relevant pieces of the [mysqld] section of my.cnf, in case tweaking some values would help:
[mysqld]
max_allowed_packet = 64M
thread_stack = 256K
thread_cache_size = 8
max_connections = 150
max_connect_errors = 10000
connect_timeout = 30
wait_timeout = 86400
table_open_cache = 2048
open_files_limit = 65535
query_cache_limit = 4M
query_cache_size = 128M
query_cache_type = 1
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
# * InnoDB
innodb_file_per_table
innodb_buffer_pool_instances=2
innodb_buffer_pool_size=2G
thread_pool_size = 24
We had the same issue on an Ubuntu 16.04 system with MySQL 5.7.13. We increased our max open files parameter in systemd like this:
/etc/systemd/system/mysql.service.d/10-ulimit.conf
[Service]
LimitNOFILE=1000000
So far the issue has not happened again. Maybe MySQL somehow needs more file descriptors now.
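In case it saves someone a step, creating and activating the drop-in can be done like this (same limit value as quoted above):
sudo mkdir -p /etc/systemd/system/mysql.service.d
sudo tee /etc/systemd/system/mysql.service.d/10-ulimit.conf > /dev/null <<'EOF'
[Service]
LimitNOFILE=1000000
EOF
sudo systemctl daemon-reload      # pick up the new drop-in
sudo systemctl restart mysql      # restart mysqld with the higher limit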
I found the problem (or possibly one of the problems). Here is an extract from strace on mysqld:
...
socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 20
write(2, "2017-01-29T22:22:45.433033Z 0 [N"..., 72) = 72
setsockopt(20, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
setsockopt(20, SOL_IPV6, IPV6_V6ONLY, [0], 4) = 0
bind(20, {sa_family=AF_INET6, sin6_port=htons(3306), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
listen(20, 70) = 0
fcntl(20, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(20, F_SETFL, O_RDWR|O_NONBLOCK) = 0
...
accept(20, {sa_family=AF_INET6, sin6_port=htons(58332), inet_pton(AF_INET6, "::ffff:127.0.0.1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 37
rt_sigaction(SIGCHLD, {SIG_DFL, [CHLD], SA_RESTORER|SA_RESTART, 0x7f3ddeac84b0}, {SIG_DFL, [], 0}, 8) = 0
getpeername(37, {sa_family=AF_INET6, sin6_port=htons(58332), inet_pton(AF_INET6, "::ffff:127.0.0.1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
getsockname(37, {sa_family=AF_INET6, sin6_port=htons(3306), inet_pton(AF_INET6, "::ffff:127.0.0.1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
open("/etc/hosts.allow", O_RDONLY) = 38
fstat(38, {st_mode=S_IFREG|0644, st_size=589, ...}) = 0
read(38, "# /etc/hosts.allow: list of host"..., 4096) = 589
read(38, "", 4096) = 0
close(38) = 0
open("/etc/hosts.deny", O_RDONLY) = 38
fstat(38, {st_mode=S_IFREG|0644, st_size=704, ...}) = 0
read(38, "# /etc/hosts.deny: list of hosts"..., 4096) = 704
close(38) = 0
socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 38
connect(38, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 110) = 0
sendto(38, "<36>Jan 29 14:23:08 mysqld[13052"..., 72, MSG_NOSIGNAL, NULL, 0) = 72
shutdown(20, SHUT_RDWR) = 0
close(20) = 0
poll([{fd=20, events=POLLIN}, {fd=22, events=POLLIN}], 2, -1) = 1 ([{fd=20, revents=POLLNVAL}])
accept(-1, 0x7ffe6ebd7160, 0x7ffe6ebd70fc) = -1 EBADF (Bad file descriptor)
write(2, "2017-01-29T22:23:08.109451Z 0 [E"..., 75) = 75
... rinse and repeat *REALLY* fast!
In locking down my system with tcp_wrappers I had inadvertently taken mysqld out of both hosts.allow and hosts.deny. It seems that after checking both hosts.allow and hosts.deny, mysqld shuts down and closes the socket, as you might expect. However, it then immediately starts to poll the (now non-existent) socket for activity.
I just did another test where my tcp_wrappers was correctly configured. When I connect from an authorized host all is fine; however, when I connected from a blocked address the same issue occurred. Based on this, I recommend using other tools to secure mysqld and keeping your tcp_wrappers config more open than your firewall. That being said, the bug should still be fixed!
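If you do keep mysqld under tcp_wrappers, a deliberately permissive entry along these lines (the addresses are placeholders) avoids the left-out-of-both-files situation and matches the 'more open than your firewall' advice:
cat /etc/hosts.allow
# allow mysqld from localhost and the local network; tighten access at the firewall instead
mysqld: 127.0.0.1 192.168.1.0/255.255.255.0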
This fix has yet to stand the test of time so, as usual, YMMV. Hope it helps anyway
Nick
I researched a bit and found the following:
Present in MariaDB also
https://lists.launchpad.net/maria-discuss/msg03060.html
https://mariadb.atlassian.net/browse/MDEV-8995
Percona Server/Percona XtraDB Cluster
https://groups.google.com/forum/#!topic/percona-discussion/Tu0S2OvYqKA
Old bug from 2010/2012
https://bugs.mysql.com/bug.php?id=48929
http://lists.mysql.com/commits/96472
Some interesting information (should never happen)
https://lists.mysql.com/mysql/97275
[I work for Percona]
I have the same issue after upgrading to Percona Cluster 5.7.14-26.17-1.trusty.
The ulimit.conf suggestion doesn't help, and I've made sure that there are sufficient file handles, so far as I can tell, by editing /etc/security/limits.conf and /etc/sysctl.conf.
I can reproduce this easily by telnetting to port 3306 and then disconnecting; the server then goes into a spin logging this error.
A horrible workaround for this, which looks promising in my environment, is to avoid using TCP connections on port 3306, and use unix sockets instead.
You can proxy from port 3306 to the socket by changing the port number in /etc/mysql/my.cnf and then using socat
nohup socat TCP4-LISTEN:3306,fork UNIX-CONNECT:/var/run/mysqld/mysqld.sock&
If I then telnet in on port 3306 and disconnect, I can't provoke the problem. I intend to report back on how well this stands up over time.
FWIW, the code looks as though it expects this to happen sometimes:
for (uint retry= 0; retry < MAX_ACCEPT_RETRY; retry++)
{
  socket_len_t length= sizeof(struct sockaddr_storage);
  connect_sock= mysql_socket_accept(key_socket_client_connection, listen_sock,
                                    (struct sockaddr *)(&cAddr), &length);
  if (mysql_socket_getfd(connect_sock) != INVALID_SOCKET ||
      (socket_errno != SOCKET_EINTR && socket_errno != SOCKET_EAGAIN))
    break;
}
if (mysql_socket_getfd(connect_sock) == INVALID_SOCKET)
{
  /*
    accept(2) failed on the listening port, after many retries.
    There is not much details to report about the client,
    increment the server global status variable.
  */
  connection_errors_accept++;
  if ((m_error_count++ & 255) == 0) // This can happen often
    sql_print_error("Error in accept: %s", strerror(errno));
  if (socket_errno == SOCKET_ENFILE || socket_errno == SOCKET_EMFILE)
    sleep(1); // Give other threads some time
  return NULL;
}
I came here with the same error and none of the solutions worked, but after some research on our end we found that it was AppArmor denying access to our logging directory that was causing the bad file descriptor error message.
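If you want to check for the same thing on your system, the denials show up in the kernel log and the profile can be relaxed or extended; the log directory below is a placeholder for wherever your MySQL logs actually live:
sudo dmesg | grep -i 'apparmor.*denied.*mysqld'    # look for DENIED entries
sudo aa-complain /usr/sbin/mysqld                  # or put the profile in complain mode while testing
# alternatively, add the directory to /etc/apparmor.d/usr.sbin.mysqld, e.g.
#   /var/log/mysql-custom/ r,
#   /var/log/mysql-custom/** rw,
# and reload the profile:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld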

Failure to connect to SQL Server from Linux

I am trying to connect to SQL Server 2008 from CentOS 5.8. I am using unixODBC 2.3.0 and the SQL Server ODBC Driver (www.microsoft.com/en-us/download/details.aspx?id=28160).
When I try to test the connection by running:
isql -v mydsn username password
it gives me:
[S1T00][unixODBC][Microsoft][SQL Server Native Client 11.0]Login timeout expired
[08001][unixODBC][Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.
[08001][unixODBC][Microsoft][SQL Server Native Client 11.0]TCP Provider: Error code 0x2726
[ISQL]ERROR: Could not SQLConnect
The port is open, the server is accessible.
I was trying to diagnose the problem further, but got stuck here:
strace -e trace=network isql -v mydsn username password
socket(PF_FILE, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory)
socket(PF_FILE, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory)
socket(PF_FILE, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/setrans/.setrans-unix"...}, 110) = 0
sendmsg(3, {msg_name(0)=NULL, msg_iov(5)=[{"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\1\0\0\0", 4}, {"\0", 1}, {"\0", 1}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 14
socket(PF_FILE, SOCK_STREAM, 0) = 4
connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory)
socket(PF_FILE, SOCK_STREAM, 0) = 4
connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
socket(PF_INET, 0x80001 /* SOCK_??? */, IPPROTO_TCP) = -1 EINVAL (Invalid argument)
Apparently, something is going wrong while establishing the connection.
Can anyone help me? Please let me know if you need any other info.
Thanks
One 'gotcha' when connecting to Microsoft SQL Server from Linux over ODBC with Microsoft's Linux driver is that the Server string in odbc.ini must contain the port as well:
Server = [protocol:]server[,port]
as per http://msdn.microsoft.com/en-us/library/hh568455.aspx
This is a different convention from most other setups, which use a separate port = <portnumber> line. If it is not configured, you will see a 'Could not SQLConnect' error.
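Concretely, a DSN using the port-in-Server-string convention could look something like this; the DSN name, server and database are placeholders, and the Driver value must match whatever is registered in your odbcinst.ini:
cat /etc/odbc.ini
[mydsn]
Driver   = SQL Server Native Client 11.0
Server   = tcp:sqlhost.example.com,1433
Database = mydb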
Also ensure that the correct odbc files are being used.
odbcinst -j
will show configured sources and their locations.
Another gotcha you might encounter later is that the SQL Server driver ignores user and password information in odbc.ini if it is in plain text, so make sure your application handles that.
To debug the issue, try the following steps (a basic connectivity check for the first step is sketched after them):
telnet <1433>
Go to SQL server network configuration
TCP/IP settings
right click and open Properties
switch to the Network Properties tab
under IPAll properties set the dynamic ports to blank and port to 1433
Restart the SQL service
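For the first step, a plain TCP check from the Linux machine is enough to confirm the port is reachable; the hostname is a placeholder:
telnet sqlhost.example.com 1433
nc -vz sqlhost.example.com 1433    # alternative if telnet is not installed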
You can enable the ODBC trace and use tcpdump to capture the network packets. I think the ODBC trace will help.
The problem was that I was using the EL6 driver on EL5. After installing the correct version, everything worked. Thanks anyway for everyone's responses.