Trying to import an 11GB+ SQL file into MySQL

For an ongoing project I need to set up a local server using XAMPP. A colleague exported the MySQL DB from his nginx environment (I suppose it uses the same MySQL as my Apache), but the size is an insane 11GB+. To prepare, I made the necessary php.ini modifications listed here, added myisam_sort_buffer_size=16384M to my.ini, and also followed a step-by-step tutorial from here. I have 8GB of DDR4 RAM and an 8th-generation i3, so this should not be a problem. The SQL import was running from 0:30 to 14:30, when I noticed that it had simply stopped.
Unfortunately, the shell import command seems to stop at line 13149 of 18061. I see no error messages, and I do not see the imported database in phpMyAdmin. I see the flashing underscore, but no more SQL commands are executed.
I am wondering if there is a solution to this. I want to ensure that the roughly 14 hours of processing do not go to waste, so my question is:
If I terminate the ongoing but seemingly frozen CMD, can I continue importing the remaining 4912 lines from a separate SQL file?

A 16GB sort buffer size is absurdly high. I suspect you mean max_allowed_packet, and even for that you probably mean 16M rather than 16G.
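For scale, a sketch of what saner my.ini values might look like (the figures are illustrative assumptions, not tuned recommendations):

[mysqld]
max_allowed_packet=16M        # raise this only if a single statement in the dump is larger
myisam_sort_buffer_size=256M  # used for REPAIR/index builds; 16384M (16GB) exceeds your 8GB of RAM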
If you are certain which line is stuck, then yes, you can resume from there, including the line that failed.
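As a sketch of the resume step on Windows, assuming the dump has one complete statement per line and it stalled at line 13149 (file, user and database names are placeholders):

:: PowerShell: skip the first 13148 lines, keep the rest
powershell -Command "Get-Content dump.sql | Select-Object -Skip 13148 | Set-Content remainder.sql"
:: then import the remainder with the same client as before
mysql -u root -p your_database < remainder.sql

Before re-running, it's worth checking whether the table that line 13149 was loading contains partial data.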

Related

my.cnf >> fully changing it, with real production data in our DB

Good morning.
I have a MariaDB environment with a very basic my.cnf, the one that comes by default with the product installation:
We have been working with this configuration from the beginning, so we have had real production data in our DB for a year now.
If I replace this my.cnf with this one:
https://gist.github.com/fevangelou/fb72f36bbe333e059b66
which is close to our instance specs (VM, quad-core 2.20GHz, 8GB RAM), adapting it accordingly, do I risk corrupting, breaking, losing, or dirtying the existing data in the DB in any sense? I mean block size changes, buffer changes, etc. Is it dangerous?
In my case, we run this MariaDB on a Windows Server 2016 instance, so I would have to adapt this optimized config file. So I should also ask: do you have a basic/recommended optimized my.cnf file for Windows, like the one above?
Thanks and regards.

MySQL 5.7 default install running much slower than 5.6

I have a Java / Spring / Hibernate app running with connections to a MySQL DB.
We recently upgraded from 5.6 to 5.7 (on a Windows server) and the app has gone from taking 3 hours to taking 3 days to complete. It essentially uses Hibernate connections to retrieve read-only data from the DB before processing it and dumping the result elsewhere.
However, as a first step, partly to check that it was the upgraded version causing the problem, I installed 5.7.21 on my dev machine. I then noticed that even a DB restore took several hours rather than the roughly 10 minutes it used to take on 5.6. This has led me to believe it may be more of a config issue than 'drivers' being out of date (I did think my first step was going to be upgrading app dependencies). I didn't install the server, but I set up my dev machine with a default 'developer' install. Both the server and the dev machine are 64-bit Windows.
I've had a scoot around for obvious gotchas and not found anything yet. I just wondered if anyone could point me in the right direction before I start seriously thrashing about? I have a good basic understanding of out-of-the-box MySQL, but I haven't done much config work, so even pointers to likely suspects in my.ini and the best ways to investigate would be helpful.
When upgrading, pay attention to the value of the innodb_buffer_pool_size variable.
It controls how much memory MySQL uses to make I/O operations faster. Usually, this is the variable that makes it fly or makes it crawl like a snail. There's a lot to be written about this particular variable, and there's a plethora of excellent blog posts about it, so I'll avoid explaining it in detail.
To see the current value, type this in the MySQL terminal:
SHOW VARIABLES LIKE '%innodb_buffer_pool%';
Change the value in the config file and restart MySQL.
For the value, don't go overboard: don't allocate your entire RAM to it. But you do want it as high as possible, especially for servers with a lot of data.
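For instance, a common rule of thumb for a dedicated database server is to give InnoDB roughly 70-80% of RAM. A sketch for my.ini (the 12G figure assumes a dedicated 16GB box):

[mysqld]
innodb_buffer_pool_size=12G

On 5.7.5 and later you can also resize it at runtime without a restart:

SET GLOBAL innodb_buffer_pool_size = 12884901888;  -- value in bytes (12G)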

MySQLDump on Windows performance degrading over time

I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows - I inherited this platform, I didn't design it and changing to MSSQL on Windows or migrating the MySQL instance to a *Nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially they had a single disk for the OS, MySQL and the MySQL backup location, and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed the above error was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The mysqldump process is taking longer and longer (over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup).
The database itself is a bit large at 58 GB; however, the delta in size is only about 100 MB per day (and unless I'm missing something, it shouldn't take an extra 30 minutes to dump 100 MB of data).
Initially we thought this was underlying storage network I/O. However, as part of the backup script, once the .SQL file is created it gets zipped up (to about 8.5 GB), and that step is very consistent in the time it takes, which leads me not to suspect the disk I/O (my presumption being that the zip time would also increase if that were the case).
The script that I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The mysqldump in use is C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe.
To compound matters, I'm primarily a Windows guy (so I have limited MySQL experience) and all the Linux guys at the office won't touch it because it's on Windows.
I suspect the cause of the increased time is something funky happening with row locks (possibly due to the application that uses the MySQL instance), but I'm not sure.
Now for my questions: Has anyone experienced anything similar, with a MySQL dump gradually taking longer over time?
Is there a way on Windows to natively monitor the performance of MySQLdump to find where the bottlenecks/locks are?
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported Platform?
Should I simply utter a 4 letter word and go to the Pub and drown my sorrows?
Eventually I found that the culprit was the underlying storage network.
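On the monitoring question: one simple approach (not Windows-specific, just the standard server commands) is to watch the dump from a second client session while it runs, e.g.:

SHOW FULL PROCESSLIST;       -- shows what the mysqldump connection is currently doing or waiting on
SHOW ENGINE INNODB STATUS\G  -- lock waits, pending I/O, buffer pool activity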

SQL databases get corrupted after every backup

I have a problem with my Unix server. This started a week ago. One day, after a backup (I used to keep 3 backup files), I visited a website on the server but it wouldn't work. I restarted the server and everything seemed to be working fine except the MySQL service. My attempts to restart it failed. Then I figured that was because the server was full, so I deleted one of the backups, cleaned up some space, and the MySQL service restarted successfully.
Then I figured out that tables in one of the databases (MyISAM tables) were corrupt. So I repaired them with the myisamchk command via SSH and all worked fine. However, the very next day they were corrupt again (even though MySQL was working fine), and this time there was no disk space problem on the server. I repaired them again. The next day the same thing happened, and this time InnoDB tables that were part of another database were corrupt as well. I've fixed them too, so now everything is working well, but I guess the same thing will happen after tonight's backup.
I can't identify the problem and I don't know which logs to look at to understand it. Can anyone please help me out? Thanks very much in advance.
No easy answer here. My immediate thought is that the database is still busy when the backups commence, possibly corrupting indexes, interfering with caches, etc. Turn on full logging and check for problems when the backup starts. Maybe you will find something.
Look for the my.cnf file. On my CentOS it is located at /etc/my.cnf. It will have a config setting for the location of the error log.
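You can also ask the server directly where it is logging; these work on both MySQL and MariaDB:

SHOW VARIABLES LIKE 'log_error';  -- path of the error log, if set
SHOW VARIABLES LIKE 'datadir';    -- if log_error is empty, look for hostname.err in the data directory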
My strongest suspicion is an OOM kill by the kernel, or some other issue that results from running the system out of memory. Try this:
Start top on the server and press M to sort by memory so the biggest memory user is at the top.
Note the PID of mysqld.
Manually perform the backup while observing the value of the RES column (resident memory size) in the top output.
Once the backup is over, see whether the PID of mysqld has changed.
If the PID has changed (meaning a restart took place), and you saw the memory footprint of mysqld grow to something comparable to the total amount of system memory, then my suspicion is correct, and we need to lower some settings in my.cnf to make it use less memory, e.g. key_buffer_size and innodb_buffer_pool_size.
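Purely as an illustration (the right numbers depend on what else runs on the box), lowering those in my.cnf might look like:

[mysqld]
key_buffer_size=64M           # MyISAM index cache; shrink if it was oversized
innodb_buffer_pool_size=512M  # leave enough headroom that the OOM killer never picks mysqld

Restart MySQL after editing for the new values to take effect.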
EDIT - From the log you posted, there are additional issues, although it is not clear how they could be contributing to the table corruption. Your server appears to be running with --skip-innodb, and your backup script is not able to deal with the absence of the InnoDB storage engine: it prints exception error messages but nevertheless continues. It is also attempting to do a repair, which is failing due to a lack of system privileges (error 1 is 'Operation not permitted'). It is possible that encountering those errors triggers some faulty logic in your backup script that leaves the tables corrupted.
At this point I would recommend disabling the MySQL backup in the cPanel tool, and instead running mysqldump or some other solution (e.g. Percona XtraBackup, https://www.percona.com/doc/percona-xtrabackup/2.3/index.html) from a cron job.
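A minimal cron entry for that could look like the sketch below; paths, schedule and credentials (read from root's ~/.my.cnf here) are placeholder assumptions. Note that --single-transaction gives a consistent snapshot only for InnoDB tables; for MyISAM you would want --lock-all-tables instead.

# /etc/cron.d/mysql-backup: nightly logical backup at 02:30 (% must be escaped in cron)
30 2 * * * root mysqldump --single-transaction --all-databases | gzip > /backup/all-$(date +\%F).sql.gz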
EDIT 2 - From the test results: the manual backup does not run the system out of memory and does not crash the server. The jury is still out on the automatic one.
Don't kill mysqld; shut it down gracefully.
Switch from MyISAM to InnoDB; the latter does not suffer from that 'error'.
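The conversion is one statement per table; the table name here is a placeholder:

ALTER TABLE your_table ENGINE=InnoDB;  -- rewrites the whole table, so allow time and free disk space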

PHP script timeout problem

Here's a short back story for context:
- Had a site hosted on a Solaris machine, with a script that generated a report (it pulled data from MySQL and generally took ~60 seconds). Everything worked fine.
- Migrated that site and DB to Ubuntu machines. Now the script times out after 30 seconds.
Steps I've already taken:
Increased max_execution_time to 3600 (I know it's high as hell, but that's what it was set to on the old Solaris box)
Set max_input_time = -1
Set memory_limit = 64MB
I've checked through phpinfo() that these changes are being accepted and show as the current configuration.
As hard as I pray, as crossed as I can get my fingers, and as many times as I run the script over and over again... I still get the same result: after 30 seconds it seemingly times out and tosses a 500 error back at me.
Thoughts? Thanks!
Double-check your configuration. If this is being run as a CLI script, note that some distros have taken to separating config files into different directories so that CLI and web/mod_php get different configuration values. You can run php -i from the command line to see how the CLI environment is configured.
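For example (the grep pattern is just an illustration):

php --ini                                    # lists every ini file the CLI loads
php -i | grep -E 'max_execution_time|memory_limit'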
First you need to be sure whether MySQL or PHP is restricting the time. If it's PHP, set_time_limit(0) does the job.
An aborted MySQL query would probably lead to an unexpected result rather than a clean timeout; PHP is most likely responsible for your 500 error.
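A minimal sketch of ruling PHP out, assuming you can edit the top of the report script:

<?php
// Disable PHP's own execution time limit for this request only
set_time_limit(0);
// ...generate the report here...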