Import huge mysql dump with single insert - mysql

I need to import a huge MySQL dump (~500 MB) of a single table, but the problem is that the dump file contains a single INSERT statement with multiple rows. When I try to import it into the database it takes so long that I'm not sure whether it will ever finish or MySQL has just gone away. Thanks in advance.

Several things you can do:
Get a binary dump from the original and import that.
Get the original db data files and copy them across.
Edit the 500 MB file into several files: change the single sql insert statement into many sql insert statements. All you need to do is put some closing brackets after, say, 100 rows of values, then put the 'insert into x' statement on the next line. Repeat. (A rough sed sketch of this idea is shown below.)
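One possible way to do that edit from the shell, assuming GNU sed and that the character sequence ),( never appears inside your string data (file names are placeholders, and the table name should match your dump); this produces one INSERT per row rather than one per 100 rows, but the principle is the same:
# turn the single multi-row INSERT into one INSERT per row, then import
sed 's/),(/);\nINSERT INTO x VALUES (/g' dump.sql > dump_split.sql
mysql -u username -p databaseName < dump_split.sql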

Import sql file as replace with phpMyAdmin

How can I import a SQL file (yes, SQL, not CSV) with phpMyAdmin so that it replaces or updates the data while importing?
I did not find an option for that. I also created another temporary database where I imported the sql file in question (having only INSERT lines, only data, no structure), and then went to export it, looking for a suitable option like INSERT ... SELECT ... ON DUPLICATE KEY UPDATE, but did not find one or anything else that would help in this situation.
So how can I achieve that? If not with phpMyAdmin, is there a program that transforms an 'insert' sql file into 'update on duplicate', or even from 'insert' into 'delete', after which I could then re-import with the original file?
How I came to this, if it helps with the above or if someone has better solutions for the earlier steps:
I have a semi-large (1 GB) DB file to import, which I divided into multiple smaller files to get it imported, one of them being the structure sql dump and the rest the data. When still trying to get the large file through, adjusting timeout settings through htaccess or the phpMyAdmin import options did not help - I always got the timeout anyway. Since those did not work, I found a program by Janos Rusiczki (https://rusiczki.net/2007/01/24/sql-dump-file-splitter/) to split the sql file into smaller ones (good program, thanks Janos!). It also separated the structure from the data.
However, after 8 successful imports I got a timeout again, after phpMyAdmin had already imported part of the file. Thus I ended up in the current situation. I know I can always delete everything and start over with even smaller partial files, but... I am sure there is a better way to do this. There has to be a way to replace the data on import, or some other way as described above.
Thanks for any help! :)
Cribbing from INSERT ... ON DUPLICATE KEY (do nothing), you can use a regular expression to turn every INSERT in your sql file into an INSERT IGNORE, and it will pass over all the entries that have already been imported.
Note that this will also ignore other errors, but apart from the timeout, other errors don't seem likely in this context.
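A possible regular-expression pass from the shell, assuming every INSERT statement in the dump starts at the beginning of a line (file names are placeholders):
# rewrite INSERT INTO ... as INSERT IGNORE INTO ... so already-imported rows are skipped
sed 's/^INSERT INTO/INSERT IGNORE INTO/' dump.sql > dump_ignore.sql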

how to FAST import a giant sql script for mysql?

Currently I have a situation in which I need to import a giant sql script into mysql. The sql script content is mainly INSERT operations, but there are a lot of records in there and the file size is around 80GB.
The machine has 8 CPUs and 20GB of memory. I have done something like:
mysql -h host_ip_address -u username -pxxxxxxx -D databaseName < giant.sql
But the whole process takes several days, which is quite long. Are there any other options for importing the sql file into the database?
Thanks so much.
I suggest you try LOAD DATA INFILE. It is extremely fast. I've not used it for loading to a remote server, but there is the mysqlimport utility. See a comparison of different approaches: https://dev.mysql.com/doc/refman/5.5/en/insert-speed.html.
You will also need to convert your sql script into a format suitable for the LOAD DATA INFILE clause.
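For instance, once the data sits in a delimited text file, the statement looks roughly like the sketch below (the table name, file path and delimiters are assumptions, not taken from the question):
-- load a comma-separated file straight into the table, bypassing per-row INSERT parsing
LOAD DATA INFILE '/tmp/giant.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';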
You can break the sql file into several files (on the basis of tables) using a shell script, and then prepare a shell script that imports the files one by one, as sketched below. This inserts faster than one single run.
The reason is that the inserted records occupy memory for a single process and are not released. You can see that when you are importing a script, after 5 hours the query execution speed becomes slow.
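A rough sketch of that shell script, assuming GNU csplit and a mysqldump-style file in which every table section begins with a "-- Table structure for table" comment (user, database and file names are placeholders):
# split the dump at each table header into files xx00, xx01, ... and import them one by one
csplit -z giant.sql '/-- Table structure for table/' '{*}'
for f in xx*; do
    mysql -u username -p databaseName < "$f"
done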
Thanks for all your help, guys.
I have taken some of your advice and done some comparisons; now it is time to post the results. The target is a single 15GB sql script.
Overall, I tried the following (a sketch of the splitting itself is shown after this list):
importing the data as a single sql script with indexes (takes days; I finally killed it. DO NOT TRY THIS YOURSELF, you will be pissed off);
importing the data as a single sql script without indexes (same as above);
importing the data as split sql scripts with indexes (taking the single sql file as an example, I split the big file into small chunks of around 41MB each; each chunk takes around 2m19.586s, total around ...);
importing the data as split sql scripts without indexes (each chunk takes 2m9.326s).
(Unfortunately I did not try the Load Data method for this dataset.)
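For reference, the splitting itself can be done with the standard split tool, assuming each INSERT statement in the dump sits on its own line (the chunk size, user, database and file names are assumptions):
# cut the script into 50000-line chunks and import them one after another
split -l 50000 giant.sql chunk_
for f in chunk_*; do
    mysql -u username -p databaseName < "$f"
done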
Conclusion:
If you do not want to use the Load Data method when you have to import a giant sql file into mysql, it is better to:
divide it into small scripts;
remove the indexes.
You can add the indexes back after importing (a sketch is shown below). Cheers.
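A minimal sketch of dropping an index before the import and recreating it afterwards (table, index and column names are placeholders, not from the question):
ALTER TABLE my_table DROP INDEX idx_my_column;
-- ... run the split import scripts ...
ALTER TABLE my_table ADD INDEX idx_my_column (my_column);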
Thanks #btilly #Hitesh Mundra
Put the following commands at the head of the giant.sql file:
SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;
and the following at the end:
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;
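If opening an 80GB file in an editor is not practical, one possible way to wrap those statements around it from the shell (user, database and file names are assumptions):
{ echo "SET AUTOCOMMIT = 0; SET FOREIGN_KEY_CHECKS = 0;"
  cat giant.sql
  echo "SET FOREIGN_KEY_CHECKS = 1; COMMIT; SET AUTOCOMMIT = 1;"
} | mysql -u username -p databaseName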

Import only one column from a SQL dump file

Is there a way to import only one column of a single table from a SQL dump file for MySQL?
Also, is there a way to extract one table using a Unix command, as the file I have is 18GB?
What is the most efficient way to import only one column?
MyDumpSplitter is a tool that uses Linux commands like sed to extract one table from a larger SQL dump file.
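The same idea as a hedged one-liner, assuming a mysqldump-generated file in which each table section starts with a "-- Table structure for table" comment (table and file names are placeholders):
# print everything from my_table's header up to the next table's header
sed -n '/^-- Table structure for table `my_table`/,/^-- Table structure for table `/p' dump.sql > my_table.sql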
However, extracting one column from that table is harder, since the INSERT statements contain full rows.
Probably the easiest solution is to restore the one table (for instance to the test database), and then SELECT the one column you want out of it.
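A minimal sketch of that route, assuming the table has already been extracted to my_table.sql and the wanted column is my_column (all names here are assumptions):
# restore the single table into the test database, then pull out just one column
mysql -u username -p test < my_table.sql
mysql -u username -p -N -e 'SELECT my_column FROM test.my_table' > my_column.txt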

SSIS BCP & Flat File - Is there a limit to how many records can be in the file?

The file I am working with has about 207 million rows. In SSIS it keeps failing with:
The column is too long in the data file for row 1, column 2. Verify that the field terminator and row terminator are specified correctly.
Now when I copy a chunk of rows into another txt file and import that, I don't get the error.
I can get the rows into SQL Server if I don't use Bulk Insert and use a regular data flow task instead.
There are two things you should check:
The column length definition for column 2. It's probably set to something like 100 and you are trying to import a row with a column longer than that.
Check whether the column delimiter may also occur inside the data. Imagine you have a file with ; as the delimiter: the flat file source will run into problems when a value contains a semicolon.
The file is pretty long, but I don't think that has anything to do with it, because then the error would be something else.
One other thing you can do is make sure bulk insert is turned off on the OLE DB destination. On rare occasions I'll get records that don't insert with it turned on.
In fact, if someone knows why that is, I'd love to know.

populating mysql database

I have a file with over a million lines of data, where each line is a record.
I can go through the file, read each line and do an insert, but this can take up to 2 hours. Is there a faster way, like uploading a sql file?
Use LOAD DATA INFILE
You can use find and replace to build an insert statement around each line, as sketched below.
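A rough sketch of that find-and-replace with sed, assuming one record per line whose values are already valid, comma-separated SQL literals, and a table named my_table (all of these are assumptions); a multi-row INSERT or LOAD DATA INFILE will still be faster:
# wrap every line in an INSERT statement, then load the result
sed 's/^/INSERT INTO my_table VALUES (/; s/$/);/' records.txt > records.sql
mysql -u username -p databaseName < records.sql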