How to import over 10 million rows - mysql

I want to import 10 million insert statements.
The queries are in one big SQL file (861 MB).
Currently, I am importing it with the MySQL command-line client on Linux,
but it is taking far too long.
I need a faster method to import my big .sql file.
I am using this command right now:
mysql -u root -p -h localhost database < file.sql
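One commonly suggested speed-up (a sketch, not from the original question; it assumes InnoDB tables and that file.sql only writes to this one database) is to turn off per-row checks for the session and commit once at the end:
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;";
  cat file.sql;
  echo "COMMIT;" ) | mysql -u root -p -h localhost database
The three SET statements are session-scoped, so nothing needs to be switched back on afterwards; the settings disappear when the connection closes.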

Related

Import multiple databases from single .sql file windows

I am a MySQL 5.6 user, trying to import three databases that exist in a single dump file. I have tried mysqldump -u root -p --all-databases > C:/Users/shiro/Downloads/SCH/SCH-26092022_1050.sql; in Windows PowerShell, which did not import anything; mysql> source C:/Users/shiro/Downloads/SCH/SCH-26092022_1050.sql; at the mysql prompt, which required me to specify a database; and the import-data option in Workbench, which only handles data for one db. Any advice or recommendations on what I can do will be appreciated. Thanks
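A hedged suggestion, not part of the original post: a dump created with --all-databases already contains the CREATE DATABASE and USE statements for each schema, so it can usually be replayed without naming a target database. Since PowerShell does not support the < redirection operator, one workaround is to let the mysql client read the file itself:
mysql -u root -p -e "source C:/Users/shiro/Downloads/SCH/SCH-26092022_1050.sql"
The -e form avoids shell redirection entirely, which is why it is often suggested for PowerShell.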

Which one is faster to import 50GB data into MySQL? In-database source or shell command read file?

I used this command mysqldump -u root -p etl_db > ~/backup.sql to get the backup data.
Now I want to import into a new remote MySQL database.
I saw there are 2 ways to do it.
https://dev.mysql.com/doc/refman/8.0/en/mysql-batch-commands.html
I wonder which one would be faster?
shell> mysql < dump.sql
or
mysql> source dump.sql?
I saw some people say that source is for small data, but others say that it's good for large data. I couldn't find much documentation.
Thanks!
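For what it's worth, a hedged example of the shell form pointed at the new remote server (the hostname is a placeholder, and it assumes the etl_db database already exists there):
mysql -u root -p -h remote-host.example.com etl_db < ~/backup.sql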

Importing a MySQL Database on Localhost

So I wanted to format my system, and I had a lot of work on my localhost that involves databases. I followed the normal way of backing up a database by exporting it into an SQL file, but I think I made a mess by making the mistake of backing up everything into one SQL file (I mean the whole localhost was exported to just one SQL file).
The problem now is: when I try to import the backed-up file (localhost.sql), I get errors saying the tables already exist for
information_schema
performance_schema
and every other table that comes with XAMPP, which has been preventing me from importing the database.
These tables are the phpMyAdmin tables that came with XAMPP. I have been trying to get past this for days.
My question now is: can I extract the different databases from this single combined SQL file?
To import a database you can do the following:
mysql -u username -p database_name < /path/to/database.sql
From within mysql:
mysql> use database_name;
mysql> source database.sql;
The error is quite self-explanatory. The information_schema and performance_schema schemas are already present in the MySQL server instance that you are trying to import into.
Both of these are default databases in MySQL, so it is strange that you would be trying to import them into another MySQL installation. The basic syntax to create a .sql file to import from the command line is:
$ mysqldump -u [username] -p [database name] > sqlfile.sql
Or for multiple databases:
$ mysqldump --databases db1 db2 db3 > sqlfile.sql
Then to import them into another MySQL installation:
$ mysql -u [username] -p [database name] < sqlfile.sql
There is also mysqlimport:
$ mysqlimport -u [username] -p [database name] textfile.txt
Note, however, that mysqlimport is a command-line wrapper around LOAD DATA INFILE and expects tab-delimited or CSV data files rather than .sql dumps, so for a dump file the mysql < sqlfile.sql form above is the one to use. I have never replaced the information_schema or performance_schema databases, so I'm unsure whether doing so would cripple your MySQL installation or not.
So an example would be:
$ mysqldump -uDonglecow -p myDatabase > myDatabase.sql
$ mysql -uDonglecow -p myDatabase < myDatabase.sql
Remember not to provide a password on the command line, as this will be visible in plain text in the command history.
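One hedged way to do that while still scripting the import is mysql_config_editor, which stores the credentials in an encrypted login-path file (the login-path name local below is an assumption):
mysql_config_editor set --login-path=local --host=localhost --user=Donglecow --password
mysql --login-path=local myDatabase < myDatabase.sql
The first command prompts for the password once and saves it; after that, the client can be invoked without typing the password or exposing it in the shell history.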
The point the previous responders seem to be missing is that the dump file localhost.sql, when fed into mysql using
% mysql -u [username] -p [databasename] < localhost.sql
generates multiple databases, so specifying a single databasename on the command line is illogical.
I had this problem and my solution was to not specify [databasename] on the command line and instead run:
% mysql -u [username] -p < localhost.sql
which works.
Actually, it doesn't work right away, because my previous attempts had already created some structure inside MySQL, and those bits in localhost.sql make mysql complain: they already exist from the first time around, so they can't be created the second time around.
The solution to THAT is to manually edit localhost.sql with modifications like
INSERT IGNORE for INSERT (so it doesn't re-insert the same rows, nor complain),
CREATE DATABASE IF NOT EXISTS for CREATE DATABASE,
CREATE TABLE IF NOT EXISTS for CREATE TABLE,
and deleting ALTER TABLE statements entirely if they generate errors, because by then they have already been executed (and perhaps some INSERTs and CREATEs too, for the same reason). You can check the tables with DESCRIBE and SELECT statements to make sure the alterations have taken hold, for confidence.
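For reference, a hedged sketch of the same edits done with sed instead of an editor (it assumes each statement starts at the beginning of a line, as mysqldump normally writes them, and it skips lines that already contain IF NOT EXISTS; the .bak suffix keeps a backup copy):
sed -i.bak \
  -e 's/^INSERT INTO/INSERT IGNORE INTO/' \
  -e '/IF NOT EXISTS/!s/^CREATE DATABASE /CREATE DATABASE IF NOT EXISTS /' \
  -e '/IF NOT EXISTS/!s/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' \
  localhost.sql
Problematic ALTER TABLE statements would still have to be removed by hand, as described above.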
My own localhost.sql file was 300 MB, which my favorite editor, emacs, complained about, so I had to pull out pieces using
% head -n 20000 localhost.sql | tail -n 10000 > 2nd_10k_lines.sql
and go through it 10k lines at a time. It wasn't too hard, because Drupal was responsible for the vast majority of the junk in there, and I didn't want to keep any of that, so I could carve away enormous chunks easily.
The best way to import a database on localhost has five simple steps:
1. Zip the SQL file first to compress the database size.
2. Go to the terminal.
3. Create an empty database.
4. Unzip and import the database in one step: unzip -p /pathoffile/database_file.zip | mysql -u username -p databasename
5. Enter the password.

How to speed up import of .sql dump file in to AWS RDS instance?

I'm trying to import a 55 MB .sql file into an AWS RDS instance. The dump file was generated with this command:
mysqldump -u root -ppassword dbname > dbname.sql
I'm running the import with this command:
mysql -u root -ppassword --host=x.rds.amazonaws.com dbname < dbname.sql
The import had been running for about 20 minutes, so I decided to abort, thinking something was hanging. It aborted on a line about 20% of the way through the file, so it seems the import was working but was on track to take about an hour and 40 minutes to import 55 MB of SQL.
Is this normal? If not, how can I do it right?
It turns out that doing the dump according to this guide made the import take about 2 minutes: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.SmallExisting.html
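For comparison, a hedged example of the general style of dump command such guides recommend (the exact flags below are assumptions, not copied from the linked page): dumping each table in primary-key order and within a single transaction tends to make the later import cheaper:
mysqldump -u root -p --single-transaction --order-by-primary dbname > dbname.sql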

shell command to import a portion of mysql dump of a table (from a specific row)

Specifically:
is there a reverse command for this:
mysqldump --user=... --password=... --host=... DB_NAME --where=<YOUR CLAUSE> > /path/to/output/file.sql
Can I import part of a table dump (from a specific row onwards) through the shell?
Situation:
I have a decent-sized (2 GB) db table to import from a PC into MySQL on Linux. I succeeded in importing approximately 30% of that table using shell commands, then my connection to the ISP went dead.
Problem:
I wish to continue uploading the dump from the row where it stopped. I specifically do not want to create a new dump starting from the row at which the import stopped.
Pipe the output of
sed -n $ROW,\$p file.sql
to your import command.
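Put together, a hedged example of the full pipeline (the starting line number and connection details are placeholders):
sed -n '2500000,$p' file.sql | mysql -u root -p database
One caveat: mysqldump usually packs many rows into each extended INSERT statement, so the chosen line number should fall on a statement boundary rather than in the middle of an INSERT.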