Losing connection to Cloud SQL while performing a mysqldump - mysql

Recently, I've started seeing issues performing a mysqldump on a Cloud SQL instance. It runs for about 20 minutes, then afterwards it fails with the following error:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table "table3" at row: 3518748
Then, if I look in the MySQL error log on the CloudSQL console, I see the following:
"2018-06-29T14:46:11.143774Z 4320729 [Note] Aborted connection 4320729 to db: 'bobs_db' user: 'bob' host: '1.2.3.4' (Got timeout writing communication packets)"
Here is the command I run:
mysqldump -h 1.1.1.1 --port=3306 -u bob -pbobs_pwd --net_buffer_length=16m --compatible=ansi --skip-extended-insert --compact --single-transaction --skip-triggers --where="created_at < '2018-06-29 00:00:00' OR updated_at < '2018-06-29 00:00:00'" bobs_db "table1 table2 table3 table4"
Everything I've read about this issue points to increasing net_read_timeout and net_write_timeout, but Cloud SQL doesn't expose those variables as far as I can tell.
I'm running with the default database flags except for the following one:
max_allowed_packet=1073741824
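One workaround, since the timeout variables can't be raised, is to shorten each connection instead. This is only a sketch reusing the host, user, tables, and WHERE filter from the command above: dumping each table in its own mysqldump run means no single connection stays busy long enough to hit the server-side write timeout.

```shell
# Sketch: one mysqldump invocation per table, so each connection is
# short-lived.  All names come from the question's original command;
# the password is read from the environment rather than the command line.
TABLES="table1 table2 table3 table4"
WHERE="created_at < '2018-06-29 00:00:00' OR updated_at < '2018-06-29 00:00:00'"
for t in $TABLES; do
  cmd="mysqldump -h 1.1.1.1 --port=3306 -u bob --net_buffer_length=16m \
--compatible=ansi --skip-extended-insert --compact --single-transaction \
--skip-triggers --where=\"$WHERE\" bobs_db $t"
  echo "$cmd > dump_${t}.sql"   # preview only; run the command to execute
done
```

Each per-table run takes a fraction of the original 20 minutes, which keeps it under the timeout that was aborting the combined dump.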

Related

MYSQLDUMP - disconnecting from localhost

I have to export an entire table from a database. It's a bit heavy (more than 10Gb).
Here's the command I use:
mysqldump -u root --password=PASSWORD DATABASE TABLENAME | gzip > FILENAME.sql.gz
The MySQL version is 5.1.53.
The export stops at around 1.6Gb, with the message:
"Disconnecting from localhost".
Is there a max_size parameter, or a time_out ?
Thanks for your replies
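One common mitigation for a dump that dies partway through a big table is to export it in chunks, so each mysqldump connection stays short. The sketch below assumes an integer primary-key column named `id` and a known maximum id; neither is shown in the question, so both are hypothetical.

```shell
# Sketch: export the table in primary-key ranges.  STEP and MAX_ID are
# illustrative; the 'id' column is an assumption about the schema.
STEP=1000000
MAX_ID=10000000
start=0
while [ "$start" -lt "$MAX_ID" ]; do
  end=$((start + STEP))
  echo "mysqldump -u root --password=PASSWORD DATABASE TABLENAME \
--where='id >= $start AND id < $end' | gzip >> FILENAME.sql.gz"
  start=$end
done
```

Appending with `>>` is safe here because concatenated gzip streams decompress as one file. If any single chunk fails, only that range needs to be re-run.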

Migration large MySQL 5.7 on AWS to Aurora 5.6

We have a fairly large (about 1 TB) MySQL 5.7 DB hosted on RDS. We want to migrate it to Aurora 5.6, because parallel queries are available only in 5.6.
A snapshot won't work because the versions differ, so we need to mysqldump and then restore.
I tried several options, but most of them failed because of the size of the DB.
For example, a straight import:
nohup mysqldump -h fmysql_5_7host.amazonaws.com -u user -pPass db_name | mysql -u user2 -pPAss2 -h aurora_5_6.amazonaws.com db_name
error in nohup.out :
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table
Also dump to s3 file failed
nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPAss db_name | aws s3 cp - s3://bucket/db-dump.sql
error:
An error occurred (InvalidArgument) when calling the UploadPart operation: Part number must be an integer between 1 and 10000, inclusive
Both of the previous methods worked for me on a smaller DB (about 10 GB), but not on 1 TB.
Is there any other way how to migrate such database?
Many thanks.
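The S3 failure, at least, has a specific cause: when the AWS CLI uploads from a pipe it cannot see the total size, defaults to small multipart chunks, and S3 caps an upload at 10,000 parts, so a ~1 TB stream overflows the part counter. Passing `--expected-size` (in bytes) lets the CLI pick a large enough part size up front. A sketch, reusing the hostnames and bucket from the question:

```shell
# Sketch: tell the CLI roughly how big the stream will be so it sizes
# multipart chunks to stay under S3's 10,000-part limit.  The 1.1 TB
# figure is an illustrative upper bound, not a measured value.
EXPECTED_BYTES=$((1100 * 1024 * 1024 * 1024))
echo "mysqldump -h mysql_5_7host.amazonaws.com -u user -pPAss db_name |" \
     "gzip | aws s3 cp - s3://bucket/db-dump.sql.gz --expected-size $EXPECTED_BYTES"
```

Compressing with gzip before the upload also shrinks the stream considerably, which gives the part-size calculation more headroom.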

How to export a large MySQL table

I currently using mysqldump command as follows
mysqldump -u username -p -h hostName database tableName > dump.sql
and it fails with emitting the following error
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `table_name` at row: 1652788
Is there any other way (perhaps a parameter to mysqldump or etc) for exporting large MySQL tables?
You can add the --single-transaction parameter to the mysqldump command if you are using the InnoDB engine. This eliminates locks on the table and possible connection timeouts.
Also, make sure you have set sufficiently large values for max_allowed_packet and innodb_lock_wait_timeout.
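Putting that advice together gives a command like the following. This is a sketch built on the question's original invocation; the packet size is illustrative, and `--quick` (on by default in modern mysqldump) is spelled out to make the row-streaming behavior explicit.

```shell
# Sketch: --single-transaction avoids table locks on InnoDB, --quick
# streams rows instead of buffering whole tables in memory, and the
# client-side packet limit is raised.  512M is an illustrative value.
cmd="mysqldump -u username -p -h hostName \
--single-transaction --quick --max-allowed-packet=512M \
database tableName"
echo "$cmd > dump.sql"
```

Note that innodb_lock_wait_timeout is a server variable, not a mysqldump flag, so it has to be raised on the server side.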

How to set time_zone for Source DB in AWS DMS?

I am trying to migrate a MySQL database to Aurora, but can't get the timezone set.
According to the documentation: "Valid values are the standard time zone abbreviations for the operating system hosting the source MySQL database."
Executing date on my Linux (Ubuntu) machine shows: Thu Dec 7 10:27:48 AEDT 2017.
I have configured the Source Endpoint to use:
Extra connection attributes: initstmt=SET time_zone=AEDT
Which results in my connection test to fail with:
Error Details: [errType=ERROR_RESPONSE, status=1022502, errMessage=Cannot connect to ODBC provider ODBC general error., errDetails= RetCode: SQL_ERROR SqlState: HY000 NativeError: 1298 Message: [unixODBC][MySQL][ODBC 5.3(w) Driver]Unknown or incorrect time zone: 'AEDT' ODBC general error.]
I've tried "Australia/Sydney" as well (same value as in RDS Parameter Groups) but getting the same error.
Any ideas?
I am totally aware of that this should be UTC. Not my choice - legacy.
Update: It seems initstmt=SET time_zone="+11:00" works, but it leads to this issue.
If you want to migrate MySQL from a Linux server to Aurora, use replication instead of DMS. DMS does not support many DDL statements during the data load, so set up replication from MySQL to Aurora.
Master - your current MySQL.
Slave - Aurora.
Prepare:
Both instances are able to communicate with each other.
The master must be version 5.5 or later.
Set binlog_format to ROW on the master MySQL.
Create user for replication:
CREATE USER 'rep_user'@'%' IDENTIFIED BY 'rep_user';
GRANT REPLICATION SLAVE ON *.* TO 'rep_user'@'%' IDENTIFIED BY 'rep_user';
FLUSH PRIVILEGES;
Take dump with Binlog details:
mysqldump -u user -p dbname --master-data=2 > backup.sql
less backup.sql
This will give you the binlog file name and its position.
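With --master-data=2 the coordinates sit in a commented-out CHANGE MASTER line near the top of the dump, so you don't have to page through the file. A sketch (the dump header below is fabricated just to demonstrate the grep):

```shell
# Sketch: pull the binlog coordinates out of the dump header.
# The backup.sql contents here are a fake two-line header for demo
# purposes; point grep at your real dump instead.
cat > backup.sql <<'EOF'
-- MySQL dump 10.13
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=154;
EOF
coords=$(grep -m1 'CHANGE MASTER' backup.sql)
echo "$coords"
```

The file name and position from that line are what you pass to mysql.rds_set_external_master in the next step.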
Restore the Backup:
mysql -h auroraendpoint -u user -p dbname < backup.sql
Enable replication:
CALL mysql.rds_set_external_master ('Master_RDS_EndPoint', 3306, 'Replication_user_name', 'Replication_User_password', 'BinLog_FileName', BinLog_Position, 0);
Start replication:
call mysql.rds_start_replication;
Cutover:
During the maintenance window, do a cutover and make Aurora the master.
call mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;

Running mysqldump throws error 2013

I'm using MySQL Workbench 6.3.6 build 511 CE (64 bits) with MySQL 5.6.25 on Windows 10 that was bundled with XAMPP 5.6.11.
It used to work fine for almost a month in this configuration. I don't recall changing any settings, but suddenly now when I want to export my db it throws this error
mysqldump: Got error: 2013: Lost connection to MySQL server at
'reading authorization packet', system error: 2 when trying to connect
Operation failed with exitcode 2
The error appears even when I try calling mysqldump myself from cmd.
The command that workbench used was this
14:23:26 Dumping invento (all tables)
Running: mysqldump.exe --defaults-file="c:\users\rog\appdata\local\temp\tmp0apjw4.cnf" --host=127.0.0.1 --insert-ignore=TRUE --protocol=tcp --user=root --force=TRUE --port=3306 --default-character-set=utf8 --routines --events "invento"
I should add that the error doesn't always appear
Exit code 2 usually indicates a permissions problem. The most usual suspect is a missing LOCK TABLES privilege on the database or table(s) you are trying to dump.
Make sure the user you create the backup with has this privilege on the given database and table(s). Alternatively, you can use the --skip-lock-tables mysqldump option (also see the documentation), so you'll get something like:
mysqldump.exe --defaults-file="c:\users\rog\appdata\local\temp\tmp0apjw4.cnf" ^
  --host=127.0.0.1 --insert-ignore=TRUE --protocol=tcp ^
  --user=root --force=TRUE --port=3306 --default-character-set=utf8 ^
  --routines --events --skip-lock-tables "invento"