MyDumper / MyLoader unable to import large tables into Azure MySQL database

I'm in the process of migrating a large MySQL database to an "Azure Database for MySQL Flexible Server".
The database has a few tables that are larger than 1GB, the largest one being 200GB. All tables are InnoDB tables.
Because of the size of the tables, a normal mysqldump didn't work, so, as suggested here, I resorted to MyDumper/MyLoader: https://learn.microsoft.com/en-us/azure/mysql/concepts-migrate-mydumper-myloader
I dumped one of the large tables (a 31GB table) with the following command:
mydumper --database mySchema \
--tables-list my_large_table \
--host database \
--user root \
--ask-password \
--compress-protocol \
--chunk-filesize 500 \
--verbose 3 \
--compress \
--statement-size 104857600
I then copied the files over to a VM in the same region/zone as the Azure database and started the import with the following command:
myloader --directory mydumpdir \
--host dbname.mysql.database.azure.com \
--user my_admin \
--queries-per-transaction 100 \
--ask-password \
--verbose 3 \
--enable-binlog \
--threads 4 \
--overwrite-tables \
--compress-protocol
MyLoader seems to start loading and produces the following output:
** Message: 08:37:56.624: Server version reported as: 5.7.32-log
** Message: 08:37:56.674: Thread 1 restoring create database on `mySchema` from mySchema-schema-create.sql.gz
** Message: 08:37:56.711: Thread 2 restoring table `mySchema`.`my_large_table` from export-20220217-073020/mySchema.my_large_table-schema.sql.gz
** Message: 08:37:56.711: Dropping table or view (if exists) `mySchema`.`my_large_table`
** Message: 08:37:56.979: Creating table `mySchema`.`my_large_table` from export-20220217-073020/mySchema.my_large_table-schema.sql.gz
** Message: 08:37:57.348: Thread 2 restoring `mySchema`.`my_large_table` part 3 of 0 from mySchema.my_large_table.00003.sql.gz. Progress 1 of 26 .
** Message: 08:37:57.349: Thread 1 restoring `mySchema`.`my_large_table` part 0 of 0 from mySchema.my_large_table.00000.sql.gz. Progress 2 of 26 .
** Message: 08:37:57.349: Thread 4 restoring `mySchema`.`my_large_table` part 1 of 0 from mySchema.my_large_table.00001.sql.gz. Progress 3 of 26 .
** Message: 08:37:57.349: Thread 3 restoring `mySchema`.`my_large_table` part 2 of 0 from mySchema.my_large_table.00002.sql.gz. Progress 4 of 26 .
When I execute a "show full processlist" command on the Azure database, I see the 4 connected threads, but they are all sleeping; it seems like nothing is happening.
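The exact statement, plus a filtered variant via INFORMATION_SCHEMA (standard MySQL 5.7 columns; my_admin is the user from the myloader command above), would be:
SHOW FULL PROCESSLIST;
-- or only the loader's connections:
SELECT ID, USER, COMMAND, TIME, STATE, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE USER = 'my_admin';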
If I don't kill the command, it errors out after a long time:
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888321 on file mySchema.my_large_table.00002.sql.gz: Lost connection to MySQL server during query
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888161 on file mySchema.my_large_table.00001.sql.gz: MySQL server has gone away
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888353 on file mySchema.my_large_table.00003.sql.gz: Lost connection to MySQL server during query
** (myloader:31323): CRITICAL **: 17:07:27.642: Error occours between lines: 6 and 1888284 on file mySchema.my_large_table.00000.sql.gz: MySQL server has gone away
After these errors, the table is still empty.
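Given the --statement-size of 104857600 bytes (100 MB) used for the dump, one server-side check worth running on the target (a hedged suggestion, not a confirmed cause; both errors can also come from timeouts or the server dying) is whether the packet and timeout limits accommodate statements of that size:
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('max_allowed_packet', 'wait_timeout',
                        'net_read_timeout', 'net_write_timeout');
-- max_allowed_packet must be at least as large as the largest single INSERT.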
I tried a few different settings when dumping/loading, but to no avail:
starting only 1 thread
making smaller chunks (100 MB)
removing --compress-protocol
I also tried importing a smaller table (400 MB in chunks of 100 MB) with exactly the same settings, and that did actually work.
I tried to import the tables into a MySQL database on my local machine, and there I experienced exactly the same problem: the large table (31 GB) import created 4 sleeping threads and didn't do anything, while the smaller table import (400 MB in chunks of 100 MB) did work.
So the problem doesn't seem to be related to the Azure database.
I now have no clue what the problem is. Any ideas?

I had a similar problem; for me, it ended up being that the instance I was restoring into was too small and the server kept running out of memory. Try temporarily increasing the instance size to a much larger one, and once the data is imported, shrink the instance back down.
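For Flexible Server the resize can be scripted from the Azure CLI; a minimal sketch, where the resource group, server name, and target SKU are placeholders to adapt:
az mysql flexible-server update \
  --resource-group my-resource-group \
  --name dbname \
  --sku-name Standard_D8ds_v4
Running the same command afterwards with the original SKU scales the instance back down.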

Related

Wordpress mysql service error ERROR: Failed to start "mysql": cannot start service: Process exited with status 3

I'm hoping someone can help me. I've been working on a website using WordPress for the past 3 months on my Mac; when I tried to access it this past weekend, the MySQL service wouldn't run and I kept getting the error
ERROR: Failed to start "mysql": cannot start service: Process exited with status 3
Can anyone help me get my site up and running again?
This is my first time using WordPress, so I'm not really sure what I'm doing.
Any help would be greatly appreciated.
I've managed to run MySQL in safe mode and this is the output it gives:
Last login: Mon Jul 13 22:23:09 on ttys001
Lukes-iMac:~ lukejackson$/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe ; exit;
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: cannot execute binary file
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: Undefined error: 0
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: cannot execute binary file
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults: line 12: /Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/my_print_defaults.bin: Undefined error: 0
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 674: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
Logging to '/opt/bitnami/mysql/data/Lukes-iMac.err'.
2020-07-13T21:43:10.6NZ mysqld_safe Starting mysqld daemon with databases from /opt/bitnami/mysql/data
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 144: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 199: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 937: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
2020-07-13T21:43:10.6NZ mysqld_safe mysqld from pid file /opt/bitnami/mysql/data/Lukes-iMac.pid ended
/Users/lukejackson/.bitnami/stackman/machines/wordpress/volumes/root/mysql/bin/mysqld_safe: line 144: /opt/bitnami/mysql/data/Lukes-iMac.err: No such file or directory
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
Your local database server is failing to start up, and it's being started with enough layers between you and it that any reporting it's doing on why is being hidden. To attempt to start it in a way that lets you see what it's complaining about, try the methodology here: start MySQL server from command line on Mac OS Lion
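A minimal sketch of that approach, assuming the /opt/bitnami layout the logs reference (the config path and user are guesses that may need adjusting):
# run the server in the foreground so startup errors print straight to the terminal
sudo /opt/bitnami/mysql/bin/mysqld --defaults-file=/opt/bitnami/mysql/my.cnf --user=mysql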
Bitnami Engineer here.
Thank you for using our solution. Please note that the OS X VM solution is a VM we build and configure with the application. You need to open the Console of the VM when running any command; can you try opening the console and running the start command?
sudo /opt/bitnami/ctlscript.sh start
You can use the ctlscript.sh file to stop the services and get their status as well. If the database can't be started, you can take a look at the database's log file (/opt/bitnami/mysql/data/mysqld.log) to get more information:
sudo tail -n 30 /opt/bitnami/mysql/data/mysqld.log

How to pull 300-400k rows stored in MySQL to MongoDB

I was developing a Spring Boot API to pull data from a remote MySQL database table. This table receives 300k-400k rows daily. We now need to migrate this data to MongoDB. I tried the GridFS technique to store the collected JSON file in MongoDB. I was able to do this on my local machine, but when I tried this scenario with the live server, the JVM threw an error:
2018-12-18 17:59:26.206 ERROR 4780 --- [r.BlockPoller-1] o.a.tomcat.util.net.NioBlockingSelector :
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.ArrayList.iterator(ArrayList.java:840) ~[na:1.8.0_181]
at sun.nio.ch.WindowsSelectorImpl.updateSelectedKeys(WindowsSelectorImpl.java:496) ~[na:1.8.0_181]
at sun.nio.ch.WindowsSelectorImpl.doSelect(WindowsSelectorImpl.java:172) ~[na:1.8.0_181]
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) ~[na:1.8.0_181]
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) ~[na:1.8.0_181]
at org.apache.tomcat.util.net.NioBlockingSelector$BlockPoller.run(NioBlockingSelector.java:339) ~[tomcat-embed-core-8.5.14.jar:8.5.14]
2018-12-18 17:59:27.865 ERROR 4780 --- [nio-8083-exec-1] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [/datapuller/v1] threw exception [Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: GC overhead limit exceeded] with root cause
java.lang.OutOfMemoryError: GC overhead limit exceeded
I tried to increase the heap size with -Xmx3048m by opening the Java utility from the Control Panel, but got the same result. What should I do next to resolve this issue? I have not posted code here because I believe it is fine, as it ran OK on my local machine with 60k to 70k records.
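Note that heap settings made in the Java Control Panel generally apply to applets and Java Web Start, not to a standalone Spring Boot jar; the flag has to go on the launch command itself. A minimal sketch, with a placeholder jar name:
java -Xmx3g -jar datapuller.jar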
The most performant way is always to bypass all those abstractions.
Since you are not locked into Spring Boot, I suggest you dump the data from MySQL as a delimited text file, either via mysqldump or:
echo 'SELECT * FROM table' | mysql -h your_host -u user -p -B <db_schema>
Note that mysql -B (batch mode) writes tab-separated values with a header row, so import the file into MongoDB as TSV:
mongoimport --host=127.0.0.1 -d database_name -c collection_name --type tsv --file tsv_location --headerline
https://docs.mongodb.com/manual/reference/program/mongoimport/
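The two steps can also be combined into a single pipe, since mongoimport reads from stdin when --file is omitted; a sketch using the same placeholder names as above:
mysql -h your_host -u user -p -B -e 'SELECT * FROM table' db_schema \
  | mongoimport --host=127.0.0.1 -d database_name -c collection_name --type tsv --headerline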

SLES crash dump

I would like to test whether my server creates a crash dump upon an OS crash. I can see that the /etc/sysconfig/kdump config file is configured.
So I issued the command to trigger a kernel panic, echo c > /proc/sysrq-trigger, which crashed the server, but it never created a dump file for some reason. This is an HP BL460 G7 blade with ASR disabled.
When I trigger the kernel panic, the server crashes and then sits there for about 10 minutes (it looks like it's trying to save a crash dump), but it never does. I checked the message logs but cannot see a reason why it's not dumping. The main problem is finding out why it's not dumping a crash file; are there any logs I can check to see what has really gone wrong?
I'm using SUSE Linux Enterprise Server 11 (x86_64) SP 1.
Did you follow the steps explained here?
SUSE Support - Configure kernel core dump capture
The most important tasks should be (see the sketch after this list):
install kdump, kexec-tools and makedumpfile
add crashkernel=... to the kernel command line (Grub)
chkconfig boot.kdump on
ensure that you have enough free space in /var/crash (default dir)
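A sketch of those steps on SLES 11, with an example crashkernel value (size it per the SUSE document above):
zypper install kdump kexec-tools makedumpfile
# append to the kernel line in /boot/grub/menu.lst (GRUB legacy on SLES 11), e.g.:
#   crashkernel=256M
chkconfig boot.kdump on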
Then please reboot your system and run:
sync; echo c >/proc/sysrq-trigger
After another boot please check for new files in /var/crash. If this doesn't work for you, please show us the content of /etc/sysconfig/kdump and at least the output of
cat /proc/cmdline
chkconfig boot.kdump
Do you have a display connected to the machine?

Error (Code 1034): Load data error

My current version of MySQL is 5.0.77. I created a database and am trying to load my data into MyISAM tables with the "load data infile" command. The data is ~3.5 GB in size.
I encounter this error while loading:
Error (Code 3): Error writing file '/tmp/STH06V6g' (Errcode: 28)
Error (Code 1034): 28 when fixing table
Error (Code 1034): Number of rows changed from 106558636 to 237525263
When I check /var/logs/mysqld.log, it displays this warning:
120420 9:33:10 [Warning] Warning: Enabling keys got errno 28 on sample.X004,retrying
I did a df -h to check my file usage:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/KusuVolGroup00-VAR
2.0G 1.6G 342M 82% /var
I did not enable/disable any keys prior to loading. How do I go about fixing this error?
Thank you so much in advance!
Joanne
Error code 28 means "no space left on device".
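You can confirm what an OS errno means with MySQL's perror utility, and it's worth checking which filesystem tmpdir points at, since the failing write was to /tmp:
perror 28
# prints: OS error code  28:  No space left on device
mysql -e "SHOW VARIABLES LIKE 'tmpdir';"
The usual fixes are freeing space on that filesystem or pointing tmpdir at a larger one.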

Can't create tables in MySQL workbench

I've just installed MySQL Workbench on my computer and have imported an old database into the system, which seems to be working fine. All the data and tables are there; I can do selects, inserts, updates, etc.
However, if I expand a database, I see tables, views, and routines. If I then right-click on Tables, nothing happens when I click Create Table... However, if I manually type the CREATE TABLE SQL command, it creates a table just fine.
The old laptop has:
OS: Ubuntu 10.04.3
MySQL: 5.1.41
MySQL Workbench: 5.2.33
The new laptop has:
OS: Ubuntu 10.04.3
MySQL: 5.1.41
MySQL Workbench: 5.2.37
I have also tried starting MySQL Workbench using sudo mysql-workbench, and I get the same problem.
However, it does give the following output when I start it from the command line on the new laptop:
oshirowanen@laptop:~$ mysql-workbench
Ready.
** Message: query.save_edits built-in command is being overwritten
** Message: query.discard_edits built-in command is being overwritten
** (mysql-workbench-bin:2737): CRITICAL **: murrine_style_draw_box: assertion `height >= -1' failed
(mysql-workbench-bin:2737): glibmm-CRITICAL **:
unhandled exception (type Glib::Error) in signal handler:
domain: gtk-builder-error-quark
code : 6
what : Unknown internal child: selection
(mysql-workbench-bin:2737): glibmm-CRITICAL **:
unhandled exception (type Glib::Error) in signal handler:
domain: gtk-builder-error-quark
code : 6
what : Unknown internal child: selection
oshirowanen@laptop:~$
On the old laptop I get:
oshirowanen@laptop:~$ mysql-workbench
Log levels '0111000'
disabling log level 0
enabling log level 1
enabling log level 2
enabling log level 3
disabling log level 4
disabling log level 5
disabling log level 6
Ready.
Any idea why I can't create tables using the mouse?
This is a known issue with Ubuntu 10.04. Go to:
/usr/share/mysql-workbench/modules/data/editor_mysql_table_live.glade
and delete all the nodes that look like this:
<child internal-child="selection">
<object class="GtkTreeSelection" id="treeview-selection5"/>
</child>
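If there are many such nodes, a sketch for removing them in bulk (back the file up first; the sed range assumes each node closes at the next </child> line, as in the snippet above):
cd /usr/share/mysql-workbench/modules/data
sudo cp editor_mysql_table_live.glade editor_mysql_table_live.glade.bak
sudo sed -i '/<child internal-child="selection">/,/<\/child>/d' editor_mysql_table_live.glade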