I'm migrating a database from MySQL to Postgres. This question has been very helpful. When running python manage.py dumpdata --exclude contenttypes --indent=4 --natural-foreign > everything_else.json to create a JSON fixture, the connection is aborted by the database server. All other steps have been successful.
The MySQL database is hosted on RDS, and the connection is aborted at the same point in the file each time I run this (the file size is always 12288 bytes). Logs from the RDS instance state the problem as follows (db, user and host changed to dummy values):
[Note] Aborted connection 600000 to db: 'mydb' user: 'myUsername' host: '127.0.0.1' (Got an error writing communication packets)
In the terminal the only message is simply Killed.
Why is this error happening, and how can I create this JSON fixture?
Update
To test for timeout issues I've followed the advice in this post to change default timeout values. This has no effect on the problem.
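For reference, the check I ran against the RDS endpoint looked roughly like this (the hostname is a dummy value; on RDS the variables themselves have to be changed through the DB parameter group, since SET GLOBAL requires privileges the master user doesn't have):

```shell
# Inspect the timeout/packet settings that typically matter for long dumps
# (hostname is a placeholder):
mysql -h mydb.example.rds.amazonaws.com -u myUsername -p -e "
  SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('net_write_timeout', 'net_read_timeout', 'wait_timeout', 'max_allowed_packet');"
```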
I've also tried modifying the DB instance to one with more memory etc. This had no effect.
Further update
I didn't get to the bottom of this, but instead took a different route and used AWS Database Migration Service (DMS). There's a good walkthrough for this here. For my small (~5 GB) database, the migration process took 5 minutes with negligible cost on the smallest DMS instance.
Related
I am having an issue adding an instance to my ReplicaSet with MySQL 8.0.28 and MySQL Shell
rs.addInstance('a.b.c.d:3306')
the response I get is
Adding instance to the replicaset...
* Performing validation checks
This instance reports its own address as a.b.c.d:3306
a.b.c.d:3306: Instance configuration is suitable.
* Checking async replication topology...
* Checking transaction state of the instance...
WARNING: A GTID set check of the MySQL instance at 'a.b.c.d:3306' determined that it contains transactions that do not originate from the replicaset, which must be discarded before it can join the replicaset.
a.b.c.d:3306 has the following errant GTIDs that do not exist in the replicaset:
2b575744-e07d-11ec-ada9-00ff6b3adad4:1-67
WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of a.b.c.d:3306 with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.
Please select a recovery method [C]lone/[A]bort (default Abort): C
* Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: a.b.c.d:3306 is being cloned from x.y.z.x:3306
ERROR: The clone process has failed: Clone Donor Error: 1184 : Aborted connection 554 to db: 'unconnected' user: 'mysql_innodb_rs_10' host: 'xxx' (init_connect command failed). (3862)
ERROR: Error adding instance to replicaset: Clone Donor Error: 1184 : Aborted connection 554 to db: 'unconnected' user: 'mysql_innodb_rs_10' host: 'xxx' (init_connect command failed).
Reverting topology changes...
Changes successfully reverted.
ERROR: a.b.c.d:3306 could not be added to the replicaset
ReplicaSet.addInstance: Clone Donor Error: 1184 : Aborted connection 554 to db: 'unconnected' user: 'mysql_innodb_rs_10' host: 'xxx' (init_connect command failed). (RuntimeError)
I can't find any information on how to proceed with this; any help would be appreciated.
Assuming that your user has all of the required permissions, the first thing to check, based on the (init_connect command failed) part of your output, would be the init_connect variable on both master and slave:
SHOW GLOBAL VARIABLES LIKE 'init_connect';
It should be the same on both servers.
(The 'unconnected' in your error message simply refers to the db, which is not an issue.)
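For example, assuming a user with sufficient privileges on each server (the hostnames are placeholders):

```shell
# Compare init_connect on the clone donor and the joining instance; the
# values should match:
mysql -h donor-host -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'init_connect';"
mysql -h joining-host -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'init_connect';"

# If the donor's init_connect statement fails for the internal clone user,
# clearing it for the duration of the clone is one way to rule it out:
mysql -h donor-host -u root -p -e "SET GLOBAL init_connect = '';"
```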
I am trying to install Knowage on my CentOS VM from the command line and get the following error:
WARNING: please provide a database user that can create schemas. The following schemas will be created (or overwritten):
knowage_demo
foodmart_demo
JDBC connection failed
Database Management System Configuration
Use an already installed DBMS [1, Enter]
Select DBMS for metadata:
MariaDB [1, Enter]
MySQL [2]
2
[jdbc:mysql://localhost:3306]
Username: [root]
Password:
WARNING: please provide a database user that can create schemas.
Any ideas on how to make it work? It keeps going back to this step.
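One thing worth checking: the installer needs a MySQL user that is actually allowed to create (and overwrite) the two demo schemas. A minimal sketch, assuming MySQL on localhost; the user name and password are placeholders:

```shell
mysql -u root -p <<'SQL'
CREATE USER 'knowage'@'localhost' IDENTIFIED BY 'change-me';
-- The installer drops and recreates these schemas, so grant full rights on both:
GRANT ALL PRIVILEGES ON knowage_demo.* TO 'knowage'@'localhost';
GRANT ALL PRIVILEGES ON foodmart_demo.* TO 'knowage'@'localhost';
FLUSH PRIVILEGES;
SQL
```

Then give the installer that user instead of root when it prompts for credentials.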
I have a problem with my SQL database.
My RAID failed, but I recovered the data from the drive, and now I have my old database back; however, it is full of errors. I want to export it to an SQL file and import it onto the new RAID and drives.
But when I tried to dump it with this command:
root#LFCZ:/home# mysqldump -u root -password mc | gzip -9 > mc.sql.gz
It gives me this Error:
mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
Can you help with that? The only thing I need is to get the .sql file. It is a very big database (approx. 13 GB), but it is running on an OVH dedicated server, so it is powerful enough.
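If the tables are InnoDB, one common way around the LOCK TABLES failure is to dump with --single-transaction, which takes a consistent snapshot instead of locking tables, together with --quick so mysqldump streams rows instead of buffering whole tables. A sketch (note also that -password in the command above is actually parsed as -p with the password "assword"; use -p alone or --password=...):

```shell
# --single-transaction: consistent InnoDB snapshot, no LOCK TABLES.
# --quick: fetch rows one at a time instead of buffering each table in memory.
# --max-allowed-packet: example value, in case of very large rows.
mysqldump -u root -p --single-transaction --quick \
  --max-allowed-packet=512M \
  mc | gzip -9 > mc.sql.gz
```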
On HDP 2.3.2 with Sqoop 1.4.6, I'm trying to import tables from SQL Server 2008.
I'm able to successfully connect to the SQL Server because I can list databases and tables etc.
However, every single time during imports I run into the following error:
Error: java.lang.RuntimeException: java.lang.RuntimeException:
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection
to the host x.x.x.x, port 1433 has failed. Error: "connect timed
out. Verify the connection properties. Make sure that an instance of
SQL Server is running on the host and accepting TCP/IP connections at
the port. Make sure that TCP connections to the port are not blocked
by a firewall.".
Again, I am actually able to import from SQL Server successfully, but only after a couple of retries. However, regardless of whether the import succeeds or fails, I always get the error mentioned above, and I am wondering what could be causing the problem. It's rather cumbersome to have to keep repeating the imports whenever they fail.
I've already turned off the connection time-out on the SQL Server, and though the connection between the Hadoop cluster and the SQL Server passes through our corporate firewall, our admins tell me that the timeout on the firewall is 3600 seconds. The imports fail long before getting anywhere near that mark.
Just an example of one of the sqoop commands I use:
sqoop import \
--connect "jdbc:sqlserver://x.x.x.:1433;database=CEMHistorical" \
--table StreamSummary --username hadoop \
--password-file hdfs:///user/sqoop/.adg.password --hive-import \
--hive-overwrite --create-hive-table --split-by OfferedTime \
--hive-table develop.streamsummary --map-column-hive Call_ID=STRING,Stream_ID=STRING,OriginalCall_ID=STRING,TransactionID=TIMESTAMP
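Since the failures are intermittent, a small retry wrapper saves re-running the imports by hand. The retry helper below is my own sketch, not part of Sqoop:

```shell
# retry N cmd [args...] : run cmd up to N times, pausing briefly between attempts.
retry() {
  max=$1; shift
  n=1
  while ! "$@"; do
    [ "$n" -ge "$max" ] && return 1
    echo "attempt $n failed; retrying..." >&2
    n=$((n + 1))
    sleep 1
  done
  return 0
}

# Usage: wrap the full sqoop command shown above, e.g.
# retry 3 sqoop import --connect "jdbc:sqlserver://x.x.x.:1433;database=CEMHistorical" ...
```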
Update:
After getting in touch with our network team, it seems this is almost certainly a network issue. For context, the Hadoop cluster is on a different VLAN from the SQL Server, and the traffic goes through a number of firewalls. To test, I tried importing from a different SQL Server within the same VLAN as the Hadoop cluster, and I didn't encounter this exception at all.
Posting this here as a reference:
I never heard back from our network team regarding the firewall logs, but our NameNode's OS got corrupted and had to be reformatted and HDP reinstalled. For some reason we're no longer encountering this error.
One difference between the original cluster and the new installation is that we previously had 4 nodes (1 name node and 3 data nodes) virtualized on a single server, whereas now we're running a single-node cluster (HDP 2.3.4) with no virtualization on the server.
I done messed something up.
I am working on a Node / Express app that uses the Sequelize ORM to write to a local MySQL DB for development. I loaded it up this morning and it was fine. At some point while working today, I attempted to reset my MySQL root password (for unrelated reasons). I did so, then attempted to restart the node server for my app, and it now fails to load. When running
node app.js
I get
TypeError: Uncaught, unspecified "error" event.
at TypeError (<anonymous>)
at EventEmitter.emit (events.js:74:15)
at module.exports.finish (/Users/DrHall/Desktop/gitRepos/CRPinit/node_modules/sequelize/lib/query-chainer.js:142:30)
at exec (/Users/DrHall/Desktop/gitRepos/CRPinit/node_modules/sequelize/lib/query-chainer.js:96:16)
at onError (/Users/DrHall/Desktop/gitRepos/CRPinit/node_modules/sequelize/lib/query-chainer.js:72:11)
at EventEmitter.emit (events.js:95:17)
at null.<anonymous> (/Users/DrHall/Desktop/gitRepos/CRPinit/node_modules/sequelize/lib/dao-factory.js:299:42)
at EventEmitter.emit (events.js:95:17)
at null.<anonymous> (/Users/DrHall/Desktop/gitRepos/CRPinit/node_modules/sequelize/lib/query-interface.js:224:17)
at EventEmitter.emit (events.js:98:17)
I have found other people reporting this error message, but it's vague enough that it has been reported for reasons different from the specific failures I am getting. I am too new at this whole process to know what I did while resetting my root password that is making Sequelize fail to load.
MySQL root logs in fine with the new password. The app does not use the root user; it uses a separate user, who can also log in fine.
Any ideas what I did wrong?
Extra info to address questions asked:
To reset the password I did use --skip-grant-tables. Running grep I saw that mysql was still running with that option, so I killed it and started it up again. Running ps -Af | grep mysql I get
501 7563 1 0 4:31PM ttys000 0:00.02 /bin/sh /usr/local/Cellar/mysql/5.6.14/bin/mysqld_safe --datadir=/usr/local/var/mysql --pid-file=/usr/local/var/mysql/Lil-Coder.pid
501 7662 7563 0 4:31PM ttys000 0:00.44 /usr/local/Cellar/mysql/5.6.14/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.14 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.14/lib/plugin --log-error=/usr/local/var/mysql/Lil-Coder.err --pid-file=/usr/local/var/mysql/Lil-Coder.pid
501 7677 7460 0 4:32PM ttys000 0:00.00 grep mysql
That seems right, but I still get the same error when trying to boot node.
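For completeness, the restart sequence I used (PIDs taken from the ps output above; mysql.server is the wrapper script that ships with the Homebrew MySQL install):

```shell
# Kill the mysqld_safe / mysqld pair that was started with --skip-grant-tables:
kill 7563 7662

# Start MySQL normally and confirm the grant tables are enforced again:
mysql.server start
mysql -u root -p -e "SELECT CURRENT_USER();"
```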
I'm just going to file this one under 'W' for WTF. In the end, what fixed it was basically nothing. I was connecting to the MySQL DB via Sequelize like this:
sequelize = new Sequelize(config.database, config.username, config.password, {
dialect: 'mysql',
port: 3306
}),
with config being an external file I was requiring. I replaced all of the config. variables with the actual strings from the config file (copy/paste) and it worked. Then I copied them back, and it still worked. It's all voodoo to me.