Connection forcibly closed when copying from MySQL to MongoDB - mysql

I've written a script that transforms my data from MySQL to MongoDB. While handling a table with 4,000,000 rows, the script failed when it was almost done with:
Traceback (most recent call last):
File "C:\Python32\lib\site-packages\pymongo\connection.py", line 822, in _send_message
sock_info.sock.sendall(data)
socket.error: [Errno 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "kolibri_to_mongo.py", line 94, in <module>
coll.update(..., upsert=True)
File "C:\Python32\lib\site-packages\pymongo\collection.py", line 411, in update
_check_keys, self.__uuid_subtype), safe)
File "C:\Python32\lib\site-packages\pymongo\connection.py", line 837, in _send_message
raise AutoReconnect(str(e))
pymongo.errors.AutoReconnect: [Errno 10054] An existing connection was forcibly closed by the remote host
Exception mysql.connector.errors.InternalError: InternalError() in <bound method SqlConn.__del__ of SQLConn(?)> ignored
Is that a PyMongo Error or an SQL Error? Can I check for any limits (size or timeout) on the MySQL or MongoDB side? Or did just someone kill my query?
EDIT: I've noticed that now I cannot connect to MongoDB anymore; I get a timeout error :( Are there any limits in MongoDB that need to be changed, or is it more likely to be another IT/hardware problem?

That error is surfacing through PyMongo, but as to why is a bit of an unknown. It appears to indicate that the remote end, in this instance the MongoDB server, forcibly closed the connection.
I would recommend looking in the MongoDB server logs to see if there's any further information there.
I would also run tcpdump on the MySQL side to see whether the MongoDB server is rejecting the connection attempts with a RST, or whether the SYNs are leaving the MySQL server and simply being ignored (for the latter, you should see the SYNs spaced out according to the retransmission timer, e.g. attempt 1 at 0s, 2 at +3s, 3 at +9s, 4 at +21s).
On the MongoDB server, does netstat -an | grep LIST or sudo lsof -c mongod show that MongoDB is still listening on port 27017 (assuming you haven't changed the default)?
With regard to MongoDB connections errors, the classic case is where ulimit settings are too low and the server runs out of file descriptors. Here are two good links for you to read:
Production Notes
Ulimit

This is a disconnect on the MySQL server - I wonder if you are hitting a query timeout or a deadlock.
Do you have any slow queries or long queries or are you getting any errors in your MySQL logs?
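While investigating the server side, a client-side mitigation is to retry the upsert when PyMongo raises AutoReconnect, since the driver reconnects lazily on the next operation. A minimal, generic sketch (the helper name and parameters are mine, not from the original script):

```python
import time

def retry_on_disconnect(op, transient_errors, retries=5, base_delay=0.5):
    """Run op(); retry with exponential backoff on transient
    connection errors such as pymongo.errors.AutoReconnect."""
    for attempt in range(retries):
        try:
            return op()
        except transient_errors:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)

# With PyMongo it would be used like:
#   from pymongo.errors import AutoReconnect
#   retry_on_disconnect(lambda: coll.update(spec, doc, upsert=True),
#                       AutoReconnect)
```

This won't fix whatever is killing the connection, but it lets a multi-hour migration survive a transient drop instead of dying at row 3,999,000.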

Related

Mysql python - randomly (every other attempt) gets access denied

I can't figure this out and I am not sure how to code around it.
def get_connection():
    cnx = MySQLdb.connect(**DB_CONFIG)
    print("Connected")
    cnx.close()
    print("Closed")
12:08 $ python test_mysql.py && python test_mysql.py
Connected
Closed
Traceback (most recent call last):
File "test_mysql.py", line 4, in <module>
get_connection()
File "XX/mysql/tbred_mysql.py", line 7, in get_connection
cnx = MySQLdb.connect(**DB_CONFIG)
File "XX/lib/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
return Connection(*args, **kwargs)
File "XX/lib/python2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'XXXX'@'10.0.8.5' (using password: YES)")
I ran them right after each other because it was easier to demonstrate, but you can wait 10 or 15 seconds and it will still happen. It works and then it doesn't.
When it fails like this, nothing is written to the MySQL error log. If I change the user to something that doesn't exist to force this error, a record is written to the MySQL error log.
EDIT:
I can reproduce the problem without Python. If I try to connect remotely via the mysql command-line client in Linux I get the same results. Also, I have discovered it's not random: it's every other connection, regardless of the time between attempts. The 1st works, the 2nd errors, the 3rd works, the 4th errors again; the time between them doesn't matter. Also, these failures are not recorded to the MySQL error log the way a normal access-denied message is.
Losing my mind!
Repeating this from my comment above.
Setting option skip_name_resolve fixes your issue.
By default, MySQL Server does a reverse DNS lookup of the client IP address to make sure your user is logging in from an authorized hostname. This is a problem if your local DNS server is slow or intermittently flaky: it can slow down MySQL connections, or even make them fail, for no apparent reason.
Using skip_name_resolve tells the server to skip this validation. This should eliminate errors and slow performance due to DNS.
One implication of this is that you cannot use hostnames in GRANT statements. You must identify users' authorized client hosts by IP addresses or wildcards like %.
https://dev.mysql.com/doc/refman/5.7/en/host-cache.html says:
To disable DNS host name lookups, start the server with the --skip-name-resolve option. In this case, the server uses only IP addresses and not host names to match connecting hosts to rows in the MySQL grant tables. Only accounts specified in those tables using IP addresses can be used. (Be sure that an account exists that specifies an IP address or you may not be able to connect.)
Wildcards work too. You can GRANT ALL PRIVILEGES ON *.* TO 'username'@'192.168.%' for example.
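For reference, the option goes in the server section of my.cnf (or my.ini on Windows); a minimal sketch:

```ini
[mysqld]
# Skip reverse-DNS lookups of client addresses. Grants must then
# match by IP address or wildcard, not by hostname.
skip_name_resolve
```

Restart the server after adding it, and verify first that every account you need has an IP-based (or wildcard) host entry, or you will lock yourself out.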
I had to use the IP of the server instead of the hostname. The problem was caused by DNS resolution issues on the server. The way that it failed still boggles my mind.

MySQL is not starting in XAMPP

I'm working on some projects, so I'm filling my database using XAMPP (sorry for my English). I get this error:
Warning: Unknown(): write failed: No space left on device (28) in Unknown on line 0
Warning: Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/tmp) in Unknown on line 0
I changed the session storage path and got the next level of headache:
#2002 - No such file or directory — Server isn't responding (or local socket of Mysql incorrectly configured ).
mysqli_real_connect(): (HY000/2002): No such file or directory
Connection for controluser as defined in your configuration failed.
mysqli_real_connect(): (HY000/2002): No such file or directory.
Previously I had installed the conkeror browser and tried to set it as the default browser. After some time (about 4 hrs) MySQL crashed. I had used the commands:
sudo update-alternatives --config x-www-browser
sudo update-desktop-database
I really need to save my database because it contains so much data...
Please help :(
About my computer: Debian 8 GNOME + Awesome WM. Lenovo b590.
So I found the solution. I checked my logs: MySQL couldn't init tc.log (/opt/lampp/var/mysql/tc.log).
I just removed that file and started the server. It started working!
But I still need to move my logs to another partition, because they eat so much disk space...
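For anyone hitting the same thing, the recovery step above can be sketched as a small shell helper (the function name is mine; the paths shown are the XAMPP defaults):

```shell
# recover_mysql DATADIR: move a corrupt/stale tc.log aside so mysqld
# can initialize a fresh one on the next start.
recover_mysql() {
    datadir=$1
    if [ -f "$datadir/tc.log" ]; then
        # keep a backup instead of deleting outright
        mv "$datadir/tc.log" "$datadir/tc.log.bak"
    fi
}

# Typical XAMPP usage:
#   recover_mysql /opt/lampp/var/mysql
#   /opt/lampp/lampp startmysql
```

Before restarting, also check `df -h` on the partition holding /tmp and the datadir: the original "No space left on device (28)" warning means the disk filled up, and the crash is likely to recur until space is freed.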

MySQL 5.5 : "Got an error reading communication packets"

I just upgraded MySQL from 5.1 to 5.5.
I fixed a few issues by running mysql_upgrade and changing some deprecated configuration options...
I also updated PHP, from 5.3.3-7 to 5.3.29-1.
But since then, I'm having a recurrent problem (always thrown in this order):
1. Client* - PHP Warning
Warning: Packets out of order. Expected 1 received 0. Packet size=1 in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
2. Client* - PHP Warning
Warning: PDOStatement::execute() [pdostatement.execute]: Error reading
result set's header in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
3. Server* - MySQL Warning :
150127 17:25:15 [Warning] Aborted connection 309 to db:
'my_database' user: 'root' host: '127.0.0.1' (Got an error
reading communication packets)
4. Client* - PHP Error
PDOStatement::execute() [pdostatement.execute]: MySQL server
has gone away in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
*NB: what I call "Client" is the PHP application, and "Server" is the MySQL server, even though they're both on the same localhost machine.
So, apparently, the origin of all these problems is the first one: "Packets out of order".
But when I search for this error I can't find many answers, and they are mostly unrelated to my problem: I use Doctrine as an abstraction layer, so I don't write any queries or fetch any results myself. Plus, the reported values are almost never the same as mine, whereas I always get the same ones ("Expected 1 received 0. Packet size=1").
The closest result would be this MySQL bug report, but "No feedback was provided for this bug for over a month, so it is being suspended automatically"...
Plus, some of the "2." errors aren't thrown by my PHP Doctrine code (they're not executed from my localhost, but from another known external service, probably using some old PHP Propel code).
So that might mean there is a problem with my MySQL configuration itself, but I tried changing some parameters without obtaining any obvious effect (sometimes it takes more time after restarting MySQL to get the first errors for example).
Any help would be very much appreciated !
And here is my current configuration (I've got 2 MySQL instances, the second one using replication is mostly for read only).
I also checked most of the system resources with Munin and didn't see anything abnormal (RAM usage, for example, is pretty high, but as there is 50 GB on the server it's not full at all).
UPDATE
I isolated an SQL query that was repeatedly failing from my PHP client. When I executed it from my local machine with MySQL Workbench, it did exactly the same (closed the connection with a MySQL server has gone away message). When I ran it from the mysql command line it also did the same. Then I executed it from the mysql command line on the server host, and it succeeded. And some time later, when I tried again from Workbench/whatever, it worked... So it looks like those "corrupted packets" are cached and disappear after some time.
Thanks, I fixed this issue by running:
RESET QUERY CACHE;
FLUSH QUERY CACHE;
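If the corruption comes back, a common follow-up is to disable the query cache entirely (it is off by default since MySQL 5.6, deprecated in 5.7, and removed in 8.0); a my.cnf sketch:

```ini
[mysqld]
# Disable the query cache; RESET/FLUSH only clear it until
# the next corrupted entry is stored.
query_cache_type = 0
query_cache_size = 0
```

On busy InnoDB workloads this usually costs little, since the cache is invalidated on every write to a touched table anyway.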

Database SSH connection cannot be established from Workbench

I can't establish a database connection from the MySQL Workbench client through SSH. If I click Test Connection I get the error ERROR local variable 'chan' referenced before assignment at the first step.
However, I was able to connect to the MySQL server through the command line via SSH. I was also able to connect to my local database with Workbench. I am using Ubuntu with KDE 14.10, and the problem started with the update, so I guess it has to do with that, but I don't know how. Please let me know if you'd like further information.
Thank you in advance,
PS I saw a similar problem without a solution here.
Here is a solution to fix this issue under Debian/Ubuntu:
1 First, close MySQL Workbench!
2 Apply the patch (note that sudo cd does nothing; only the commands that write to the directory need root):
cd /usr/lib/mysql-workbench/
sudo wget https://launchpadlibrarian.net/189450207/paramiko.patch
sudo patch -p1 < paramiko.patch
3 Start MySQL Workbench, it's now working!
If you use python 2.x, try using python3?
This bug is probably related: http://bugs.mysql.com/bug.php?id=74960
Edit: confirmed, I have tried with Python 2.x and got this error in mysql/workbench/log/wb.log:
15:35:38 [INF][wb_admin_control.py:query_server_installation_info:767]: Currently connected to MySQL server version 'unknown', conn status = None, active plugins = []
15:35:38 [ERR][sshtunnel.py:notify_exception_error:233]: Traceback (most recent call last):
File "/usr/share/mysql-workbench/sshtunnel.py", line 315, in accept_client
sshchan = transport.open_channel('direct-tcpip', self._target, local_sock.getpeername())
File "/usr/lib/mysql-workbench/modules/wb_admin_ssh.py", line 116, in wba_open_channel
raise e
EOFError
15:35:38 [ERR][wb_admin_control.py:server_polling_thread:492]: Error creating SQL connection for monitoring: MySQLError("Lost connection to MySQL server at 'reading initial communication packet', system error: 0 (code 2013)",)
15:35:56 [INF][ base library]: Notification GNFocusChanged is not registered

Django - OperationalError: (2006, 'MySQL server has gone away')

Bottom line first: How do you refresh the MySQL connection in django?
Following a MySQL server has gone away error I found that MySQL documentation and other sources (here) suggest increasing the wait_timeout MySQL parameter. To me this seems like a workaround rather than a solution. I'd rather keep a reasonable wait_timeout and refresh the connection in the code.
The error:
File "C:\my_proj\db_conduit.py", line 147, in load_some_model
SomeModel.objects.update()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\manager.py", line 177, in update
return self.get_query_set().update(*args, **kwargs)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\query.py", line 469, in update
transaction.commit(using=self.db)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\transaction.py", line 142, in commit
connection.commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 201, in commit
self._commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 46, in _commit
return self.connection.commit()
OperationalError: (2006, 'MySQL server has gone away')
Setup: Django 1.3.0 , MySQL 5.5.14 , innodb 1.1.8 , Python 2.6.6, Win7 64bit
The idea of the solution is clear: reconnect to mysql if the current connection is broken.
Please check this out:
def make_sure_mysql_usable():
    from django.db import connection, connections
    # mysql is lazily connected to in django.
    # connection.connection is None means
    # you have not connected to mysql before
    if connection.connection and not connection.is_usable():
        # destroy the default mysql connection
        # after this line, when you use ORM methods
        # django will reconnect to the default mysql
        del connections._connections.default
I'm having the same issue.
I need an idea of how to check the connection state for a MySQLdb connection in Django.
I guess it can be achieved by:
try:
    cursor.execute(sql)
except OperationalError:
    # reconnect and retry
Does anybody have a better idea?
UPDATE
my decision:
self.connection.stat()
if self.connection.errno() != 0:
    # the connection is in an error state; recreate it
UPDATE AGAIN
you also need to handle the case where the connection is closed:
if self.connection.open:
    self.connection.stat()
Refreshing the connection is just recreating it:
db_settings = settings.DATABASES['mysql_db']
try:
    self.connection = MySQLdb.connect(host=db_settings['HOST'],
                                      port=int(db_settings['PORT']),
                                      db=db_settings['NAME'],
                                      user=db_settings['USER'],
                                      passwd=db_settings['PASSWORD'])
except MySQLdb.OperationalError, e:
    self.connection = None
Since Django 1.6, you can use
import django.db
django.db.close_old_connections()
This does basically the same thing as adamsmith's answer except that it handles multiple databases and also honors the CONN_MAX_AGE setting. Django calls close_old_connections() automatically before and after each request, so you normally don't have to worry about it unless you have some long-running code outside of the normal request/response cycle.
The main reason that leads to this exception is usually the client staying idle longer than wait_timeout on the MySQL server.
To prevent that kind of error, Django supports an option named CONN_MAX_AGE, which lets Django recreate a connection if the old one has been idle too long.
So you should make sure that the CONN_MAX_AGE value is smaller than the wait_timeout value.
One important thing: Django under WSGI checks CONN_MAX_AGE on every request by calling close_old_connections, so you normally don't need to care about it. However, if you are using Django in a standalone application, there is no trigger to run that function, so you have to call it manually. So call close_old_connections in your code base.
Note: close_old_connections keeps old connections if they haven't expired yet, so your connections are still reused under frequent queries.
This way you can also close idle connections and keep things healthy.
So if you need to make a query after a long idle period, run the lines below first:
from django.db import close_old_connections
# To prevent the error if possible.
close_old_connections()
# Then the following statement should always be OK.
YourModel.objects.all()