Restoring SQL from multiple SQL files - mysql

I have a database backup consisting of 400+ sql files, one sql file per table. Is it possible to import all of these files into a database at once? If so, could you tell me how to do this?
Also, the backup is a gzipped tar file. Is there a way to restore from a compressed file?

If you are using Linux, concatenate all the sql files with
cat *.sql > fullBackup.sql
then restore the database from this single backup file.
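A minimal end-to-end sketch, assuming the archive is named backup.tar.gz and the target database already exists (all names here are placeholders):
tar xzf backup.tar.gz                                  # unpack the gzipped tar into the current directory
cat *.sql > fullBackup.sql                             # merge the per-table dumps
mysql -u username -p database_name < fullBackup.sql    # restore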

I have found the answer to my question here: Import Multiple .sql dump files into mysql database from shell
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
works perfectly. Thanks to @Haim for pointing out the correct post.

Nowadays processors have many cores. To use all of them:
for s in *.sql.gz ; do gunzip -c $s | mysql -u sql_user -p'password' database_name & done
This command starts a background process for each sql dump file.
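If you also want the shell to block until every background import has finished, a small variation (same placeholder credentials as above) is:
for s in *.sql.gz ; do
  gunzip -c "$s" | mysql -u sql_user -p'password' database_name &
done
wait   # returns only after all background imports have completed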

Or, with pv installed, you can also see the progress by using:
pv -p *.sql | mysql database

Related

How to download, unzip and import to mysql dump all via piping?

Is it possible to unzip a gzipped file while it's downloading and feed it to mysql all in one go without having to create a physical file?
So far I've been able to unzip and feed it to mysql using the following:
gunzip < somefile.sql.gz | pv | mysql -u myself -p somedb # pv for viewing progress
The above expects the gzipped file to have been downloaded already before unzipping and feeding it to mysql.
But I haven't been able to feed mysql while the file is downloading and unzipping simultaneously.
And if this is not possible, I'd like to know why, just for peace of mind.
wget -qO- URL | gunzip | pv | mysql -u myself -p somedb
Okay, I've figured it out. Thanks to @TGray for pointing me in the right direction.
The key was named pipes. Basically, you create a named pipe, start the download into that named pipe, and feed the named pipe to gunzip as shown above in my post.
mkfifo mypipe
ssh username@server.com cat /source/file.gz > mypipe &
gunzip < mypipe | pv | mysql -uusername -ppassword dbname

Export a large MySQL table as multiple smaller files

I have a very large MySQL table on my local dev server: over 8 million rows of data. I loaded the table successfully using LOAD DATA INFILE.
I now wish to export this data and import it onto a remote host.
I tried LOAD DATA LOCAL INFILE to the remote host. However, after around 15 minutes the connection to the remote host fails. I think that the only solution is for me to export the data into a number of smaller files.
The tools at my disposal are PhpMyAdmin, HeidiSQL and MySQL Workbench.
I know how to export as a single file, but not multiple files. How can I do this?
I just did an export/import of a (partitioned) table with 50 million records; it took just 2 minutes to export it from a reasonably fast machine and 15 minutes to import it on my slower desktop. There was no need to split the file.
mysqldump is your friend, and since you have a lot of data it's better to compress it:
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip > output.sql.gz
#host1:~ $ scp output.sql.gz host2:~/
#host1:~ $ rm output.sql.gz
#host1:~ $ ssh host2
#host2:~ $ gunzip < output.sql.gz | mysql -u <username> -p <database>
#host2:~ $ rm output.sql.gz
Take a look at mysqldump.
Your commands should be (from a terminal):
Export db_name from your MySQL server to backupfile.sql:
mysqldump -u user -p db_name > backupfile.sql
Import from backupfile.sql into db_name:
mysql -u user -p db_name < backupfile.sql
You have two options in order to split the information:
Split the output text file into smaller files (as many as you need; there are many tools for this, e.g. split).
Export one table at a time by adding a table name after the db_name, like so:
mysqldump -u user -p db_name table_name > backupfile_table_name.sql
Compressing the file(s) (they are text files) is very efficient and can shrink them to about 20%-30% of their original size.
Copying the files to remote servers should be done with scp (secure copy), and interaction should take place with ssh (usually).
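As a rough sketch of the split-and-copy route (file names, chunk size and host are hypothetical; check that no chunk boundary falls inside a multi-line statement such as CREATE TABLE before importing):
mysqldump -u user -p db_name > backupfile.sql
split -l 1000000 backupfile.sql backup_part_      # roughly 1 million lines per chunk
gzip backup_part_*
scp backup_part_*.gz user@remote_host:~/
# on the remote host, import the chunks in order (prompts for the password per file):
for f in backup_part_*.gz ; do gunzip -c "$f" | mysql -u user -p db_name ; done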
Good luck.
I found that the advanced options in phpMyAdmin allow me to select how many rows to export, plus the start point. This allows me to create as many dump files as required to get the table onto the remote host.
I had to adjust my php.ini settings, plus the phpMyAdmin 'ExecTimeLimit' config setting, as generating the dump files takes some time (500,000 rows in each).
I use HeidiSQL to do the imports.
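For reference, these are the kinds of settings involved; the exact values below are only illustrative, not recommendations:
php.ini:
max_execution_time = 600
memory_limit = 512M
phpMyAdmin config.inc.php (0 = no script time limit):
$cfg['ExecTimeLimit'] = 0;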
As an example of the mysqldump approach for a single table:
mysqldump -u root -ppassword yourdb yourtable > table_name.sql
Importing is then as simple as
mysql -u username -ppassword yourotherdb < table_name.sql
Use mysqldump to dump the table into a file.
Then use tar with the -z option to compress the file.
Transfer it to your remote server (with ftp, sftp or another file transfer utility).
Then untar the file on the remote server.
Use mysql to import the file.
There is no reason to split the original file or to export into multiple files.
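Put together, the steps look roughly like this (user names, hosts and file names are placeholders):
mysqldump -u user -p db_name table_name > table_name.sql
tar -czf table_name.tar.gz table_name.sql      # compress with tar -z
scp table_name.tar.gz user@remote_host:~/      # or sftp / another transfer utility
ssh user@remote_host
tar -xzf table_name.tar.gz                     # untar on the remote server
mysql -u user -p db_name < table_name.sql      # import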
If you are not comfortable with using the mysqldump command line tool, here are two GUI tools that can help you with that problem, although you have to be able to upload them to the server via FTP!
Adminer is a slim and very efficient DB management tool that is at least as powerful as phpMyAdmin and consists of ONE SINGLE FILE that has to be uploaded to the server, which makes it extremely easy to install. It works way better with large tables / DBs than PMA does.
MySQLDumper is a tool developed especially to export / import large tables / DBs, so it will have no problem with the situation you describe. The only downside is that it is a bit more tedious to install, as there are more files and folders (~350 files in ~1.5MB), but it shouldn't be a problem to upload it via FTP either, and it will definitely get the job done :)
So my advice would be to first try Adminer and if that one also fails go the MySQLDumper route.
How do I split a large MySQL backup file into multiple files?
You can use mysql_export_explode
https://github.com/barinascode/mysql-export-explode
<?php
#Including the class
include 'mysql_export_explode.php';
$export = new mysql_export_explode;
$export->db = 'dataBaseName'; # -- Set your database name
$export->connect('host','user','password'); # -- Connecting to database
$export->rows = array('Id','firstName','Telephone','Address'); # -- Set which fields you want to export
$export->exportTable('myTableName',15); # -- Table name and how many parts to split it into
?>
At the end, the SQL files are created in the directory where the script is executed, in the following format:
---------------------------------------
myTableName_0.sql
myTableName_1.sql
myTableName_2.sql
...

mysqldump compression

I am trying to understand how mysqldump works:
if I execute mysqldump on my PC and connect to a remote server:
mysqldump -u mark -h 34.32.23.23 -pxxx --quick | gzip > dump.sql.gz
will the server compress it and send it over to me as gzip or will my computer receive all the data first and then compress it?
Because I have a very large remote db to export, and I would like to know the fastest way to do it over a network!
You should make use of ssh + scp,
because the dump on localhost is faster,
and you only need to scp over the gzip (less network overhead).
Likely you can do this:
ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"
scp $username@34.32.23.23:/tmp/dump.sql.gz .
(The /tmp directory is optional; change it to whatever directory you are comfortable with.)
Have you tried the --compress parameter?
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compress
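--compress only compresses the client/server protocol traffic while the rows travel over the network to your machine; the dump still lands on your side, so usage would look like this (credentials from the question, database name is a placeholder):
mysqldump --compress -u mark -h 34.32.23.23 -pxxx --quick database_name | gzip > dump.sql.gz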
This is how I do it:
Do a partial export using SELECT INTO OUTFILE and create the files on the same server.
If your table contains 10 million rows, do a partial export of 1 million rows at a time, each in a separate file.
Once the 1st file is ready you can compress and transfer it. In the meantime MySQL can continue exporting data to the next file.
On the other server you can start loading the file into the new database.
BTW, a lot of this can be scripted.
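A sketch of such a script; everything here (table, id column, chunk size, output paths) is hypothetical, INTO OUTFILE writes on the machine running mysqld, and secure_file_priv may restrict where it can write:
#!/bin/bash
# export 10 million rows in ten chunks of 1 million rows each
for i in 0 1 2 3 4 5 6 7 8 9 ; do
  offset=$((i * 1000000))
  mysql -u user -p'password' -e "SELECT * FROM my_schema.my_table ORDER BY id
    LIMIT 1000000 OFFSET $offset
    INTO OUTFILE '/tmp/my_table_$i.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n'"
  # a finished chunk can already be compressed (and copied) while the next one exports:
  gzip -c "/tmp/my_table_$i.csv" > "/tmp/my_table_$i.csv.gz" &
done
wait
# on the target server, each chunk can then be loaded back with LOAD DATA INFILE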

How to easily import multiple sql files into a MySQL database?

I have several sql files and I want to import all of them at once into a MySQL database.
I go to PHPMyAdmin, access the database, click import, select a file and import it. When I have more than a couple of files it takes a long time.
I would like to know if there is a better way to import multiple files, something like one file which will import the other files or similar.
I'm using WAMP and I would like a solution that does not require installing additional programs on my computer.
In Windows, open a terminal, go to the folder containing the files and run:
copy /b *.sql all_files.sql
This concatenates all the files into a single one, making it really quick to import with phpMyAdmin.
In Linux and macOS, as @BlackCharly pointed out, this will do the trick:
cat *.sql > .all_files.sql
Important note: doing it directly should go well, but you could end up stuck in a loop with a massive output file getting bigger and bigger because the system keeps appending the file to itself. To avoid this, there are two possible solutions.
A) Put the result in a separate directory to be safe (thanks @mosh):
mkdir concatSql
cat *.sql > ./concatSql/all_files.sql
B) Concatenate them into a file with a different extension and then rename it (thanks @William Turrell):
cat *.sql > all_files.sql1
mv all_files.sql1 all_files.sql
This is the easiest way that I have found.
In Windows (PowerShell):
cat *.sql | C:\wamp64\bin\mysql\mysql5.7.21\bin\mysql.exe -u user -p database
You will need to insert the path to your WAMP MySQL binary above; I have used my system's path.
In Linux (Bash):
cat *.sql | mysql -u user -p database
Go to cmd.
Change to the folder that contains the .sql table files:
C:\users\Username>cd [path to the .sql folder]
Ex: C:\users\Username>cd E:\project\database
Switch to that folder's drive:
C:\users\Username>[drive letter of the .sql folder]
Ex: C:\users\Username>E:
Merge all the .sql files (tables) into a single file:
copy /b *.sql newdatabase.sql
Ex: E:\project\database>copy /b *.sql newdatabase.sql
You will now find the merged .sql file in that folder,
Ex: E:\project\database
I know it's been a little over two years... but I was looking for a way to do this, and wasn't overly happy with the solution posted (it works fine, but I wanted a little more information as the import happens). When combining all the SQL files into one, you don't get any sort of progress updates.
So I kept digging for an answer and thought this might be a good place to post what I found for future people looking for the same answer. Here's a command line in Windows that will import multiple SQL files from a folder. You run this from the command line while in the directory where mysql.exe is located.
for /f %f in ('dir /b <dir>\<mask>') do mysql --user=<user> --password=<password> <dbname> < <dir>\%f
With some assumed values (as an example):
for /f %f in ('dir /b c:\sqlbackup\*.sql') do mysql --user=mylogin --password=mypass mydb < c:\sqlbackup\%f
If you had two sets of SQL backups in the folder, you could change the *.sql to something more specific (like mydb_*.sql).
Just type:
cat *.sql | mysql -uroot -p
and mysql will import all the sql files in sequence.
Enter the mysql shell like this:
mysql --host=localhost --user=username --password --database=db
Then use the source command and a semicolon to separate the commands:
source file1.sql; source file2.sql; source file3.sql;
You could also use a for loop to do so:
#!/bin/bash
for i in *.sql
do
echo "Importing: $i"
mysql your_db_name < "$i"
wait
done
Source
Save this file as a .bat file and run it; change the variables inside the parentheses.
#echo off
title Mysql Import Script
cd (Folder Name)
for %%a in (*) do (
echo Importing File : %%a
mysql -u(username) -p(password) %%~na < %%a
)
pause
If everything goes into a single database, replace %%~na with the database name.
The easiest solution is to copy/paste every sql file into one.
You can't add SQL markup for file importation (the imported files would be on your computer, not on the server, and I don't think MySQL handles any import markup for external sql files).
In Windows, open Windows PowerShell, go to the folder where the sql files are, then run this command:
cat *.sql | C:\xampp\mysql\bin\mysql.exe -u username -p databasename
Just type the command below at your command prompt and it will combine all the sql files into a single sql file. Run it from the folder containing the dumps (e.g. c:/xampp/mysql/bin/sql/):
type *.sql > OneFile.sql
Import from multiple SQL files into one database.
Step 1: Go to the folder and create a file 'import-script.sh' with execute permission
(grant it with chmod u+x import-script.sh):
#!/bin/bash
for i in *.sql
do
echo "Importing: $i"
mysql -u USERNAME -pPASSWORD DBNAME < "$i"
wait
done
The main thing is that there must be no space between -p and PASSWORD.
Step 2: Then run the script from your terminal: ./import-script.sh
For Windows users:
You can select the database in the phpMyAdmin interface on the left, then drag and drop all your files from your Windows folder onto the phpMyAdmin web UI.

How do I get a tab delimited MySQL dump from a remote host ?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
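Note that --batch output is already tab-delimited, which is what the question asks for. If you need commas instead, a crude conversion (only safe when the data itself contains no tabs or commas; the username option is a placeholder added here) is:
mysql -h remote_host -u username -p -e "SELECT * FROM my_schema.my_table" --batch --silent | tr '\t' ',' > my_file.csv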
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My webserver uses CentOS 6/cPanel, and the flags and sequence which codeman used above did not work for me; I had to rearrange and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs, because I had too many issues with SELinux and mysql user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my Centos+Cpanel Server
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add them to the file after processing.
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried with
-D databasename
on the command line, but I kept getting permission errors, so I ended up using the following at the top of my query.sql:
USE database_name;
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.
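A quick sketch of that workaround (the directory is hypothetical; if the server has secure_file_priv set, the target directory must also satisfy that setting, and you may want to tighten the permissions again afterwards):
mkdir /tmp/dump_dir
chmod 777 /tmp/dump_dir      # let the mysqld user write here
mysqldump -u username -p -T/tmp/dump_dir db_name --fields-terminated-by=,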