How to import a mysqldump file locally - mysql

I want to import an SQL dump file into the table "data" in my database "dbname" without using the network interface.
When I import the file via
mysql dbname -u databaseuser -pdatabasepass < data.sql
it is really slow on my Ubuntu 12.04,
but when I use this instead:
mysqlimport -u databaseuser -pdatabasepass --local data.sql
it is as fast as normal.
I think that is because, when using "mysql" and "<", my machine parses each line separately.
Is there a way to use the mysqlimport syntax to import an SQL file that contains CREATE and DROP TABLE statements?
Maybe just split off the upper part of the SQL dump (with the DROP TABLE and CREATE TABLE statements) automatically and then send only the rest of the file, with the INSERT statements, to mysqlimport; that should work.

There is no way to do that directly. What you need to do is run mysqldump --tab instead of the normal mysqldump; that will create both .sql files containing the table definitions and data files (tab-separated text by default) that can be imported with mysqlimport.
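For example (a sketch only; the dump directory is made up and must be writable by the mysqld process, since the server writes the data files, and the names follow the question above):
mysqldump -u databaseuser -pdatabasepass --tab=/tmp/dump dbname
# produces /tmp/dump/data.sql (table definition) and /tmp/dump/data.txt (tab-separated rows)
mysql dbname -u databaseuser -pdatabasepass < /tmp/dump/data.sql
mysqlimport -u databaseuser -pdatabasepass --local dbname /tmp/dump/data.txt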

You can specify the path to a UNIX socket file with --socket=/path/to/... (or -S), which may be faster. You can find the socket path in the server's my.cnf. So, for example (and I'm just making up the path here):
mysql dbname -u databaseuser -pdatabasepass -S /var/run/mysqld/mysqld.sock < data.sql
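If you are not sure where the socket lives, you can also ask the server directly (a quick check, not part of the original answer):
mysql -u databaseuser -pdatabasepass -e "SHOW VARIABLES LIKE 'socket';"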

You can use this script: How do I split the output from mysqldump into smaller files?
and modify it so that you have only the CREATE statements in one file and the data statements in another.
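A rough way to do that split (a sketch; it assumes each INSERT statement in the dump starts at the beginning of a line, which is the case for standard mysqldump output):
grep -v '^INSERT INTO' data.sql > schema.sql
grep '^INSERT INTO' data.sql > inserts.sql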

Related

Can I import a .sql file from WAMP to XAMPP?

I have a .sql file that was exported from WAMP; my friend only uses XAMPP. Is it possible to import my .sql file into XAMPP?
Yes, because the .sql file is most likely independent of the server stack.
Of course it is possible: when WAMP gives you a .sql export, that file can be used with any standard web server stack, including XAMPP.
I think this may help:
Depending on the tool and parameters used to create the .sql dump file of multiple databases, the file will normally have
CREATE DATABASE DBn...;
and
USE DBn;
statements that will allow your import to proceed without hiccups. For example, both the mysqldump command and phpMyAdmin's Export function for multiple databases insert them.
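For instance, a dump of multiple databases created with mysqldump's --databases option will contain those CREATE DATABASE and USE statements (the database names below are placeholders):
mysqldump -u username -p --databases db1 db2 > dumpfile.sql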
Assuming you have exported with a sensible tool like those above, then you can import the database with a command line like this:
mysql -u username -p < dumpfile.sql
Your mysql username account needs to have appropriate privileges to, for example, create databases.
You can even run this command in a more familiar way by naming the startingDB database to use before running the commands in dumpfile.sql:
mysql -u username -p startingDB < dumpfile.sql

Importing 40GB+ .sql file

What is a way to import a 40GB+ .sql file into a MySQL database that works?
I tried BigDump. I tried loading the .sql into Workbench and executing it.
None of the methods I tried so far have worked.
I was wondering what is the best way, or even just a way that works, to import such a large database .sql into MySQL? Note that splitting the SQL file into chunks may not work, as some tables are larger than 5GB.
I appreciate any suggestions.
I imported a big dump file like this: mysql -u username -p database_name < your_dump.sql
Step 1: Make changes to the configuration file my.cnf (located in the C:\ProgramData directory on Windows if running MySQL 5.6; note that this directory is hidden by default, so you must enable visibility of hidden files in Folder Options).
max_allowed_packet=2000M
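For clarity, a minimal sketch of where that setting sits in the configuration file (it belongs under the server's [mysqld] group):
[mysqld]
max_allowed_packet=2000M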
Step 2: Restart MySQL service
Step 3: Use the following command to load the dump:
mysql -hhostname -uusername -ppassword db_name < C:\sql_input_path\db.sql > C:\error_log_output_path\error.txt

Export a large MySQL table as multiple smaller files

I have a very large MySQL table on my local dev server: over 8 million rows of data. I loaded the table successfully using LOAD DATA INFILE.
I now wish to export this data and import it onto a remote host.
I tried LOAD DATA LOCAL INFILE to the remote host. However, after around 15 minutes the connection to the remote host fails. I think that the only solution is for me to export the data into a number of smaller files.
The tools at my disposal are PhpMyAdmin, HeidiSQL and MySQL Workbench.
I know how to export as a single file, but not multiple files. How can I do this?
I just did an import/export of a (partitioned) table with 50 million records; it took just 2 minutes to export it from a reasonably fast machine and 15 minutes to import it on my slower desktop. There was no need to split the file.
mysqldump is your friend, and since you have a lot of data it's better to compress it:
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip > output.sql.gz
#host1:~ $ scp output.sql.gz host2:~/
#host1:~ $ rm output.sql.gz
#host1:~ $ ssh host2
#host2:~ $ gunzip < output.sql.gz | mysql -u <username> -p <database>
#host2:~ $ rm output.sql.gz
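If you would rather not write an intermediate file at all, the dump can also be streamed straight into the remote server (a sketch; it assumes ssh access to host2 and that the MySQL credentials on host2 are stored in ~/.my.cnf, so no interactive password prompt is needed on the remote side):
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip | ssh host2 "gunzip | mysql <database>"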
Take a look at mysqldump
Your lines should be (from terminal):
export to backupfile.sql from db_name in your mysql:
mysqldump -u user -p db_name > backupfile.sql
import from backupfile to db_name in your mysql:
mysql -u user -p db_name < backupfile.sql
You have two options in order to split the information:
Split the output text file into smaller files (as many as you need; there are many tools to do this, e.g. split - see the sketch at the end of this answer).
Export one table each time using the option to add a table name after the db_name, like so:
mysqldump -u user -p db_name table_name > backupfile_table_name.sql
Compressing the file(s) (a text file) is very efficient and can shrink it to about 20%-30% of its original size.
Copying the files to remote servers should be done with scp (secure copy) and interaction should take place with ssh (usually).
Good luck.
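As mentioned in option 1, split can do the cutting (a sketch with a made-up chunk size; the pieces must be concatenated back in order before importing, since split cuts mid-statement, so they are mainly useful for transferring the dump):
split -b 500M backupfile.sql backupfile_part_
cat backupfile_part_* | mysql -u user -p db_name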
I found that the advanced options in phpMyAdmin allow me to select how many rows to export, plus the start point. This allows me to create as many dump files as required to get the table onto the remote host.
I had to adjust my php.ini settings, plus the phpMyAdmin 'ExecTimeLimit' config setting, as generating the dump files takes some time (500,000 rows in each).
I use HeidiSQL to do the imports.
As an example of the mysqldump approach for a single table:
mysqldump -u root -ppassword yourdb yourtable > table_name.sql
Importing is then as simple as
mysql -u username -ppassword yourotherdb < table_name.sql
Use mysqldump to dump the table into a file.
Then use tar with the -z option to compress the file.
Transfer it to your remote server (with ftp, sftp or another file transfer utility).
Then untar the file on the remote server.
Use mysql to import the file.
There is no reason to split the original file or to export into multiple files.
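Spelled out, that sequence might look like this (a sketch with made-up file, table and host names):
# on the source machine
mysqldump -u user -p db_name big_table > big_table.sql
tar -czf big_table.tar.gz big_table.sql
scp big_table.tar.gz user@remotehost:/tmp/
# then on the remote server
tar -xzf /tmp/big_table.tar.gz -C /tmp/
mysql -u user -p db_name < /tmp/big_table.sql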
If you are not comfortable with using the mysqldump command line tool, here are two GUI tools that can help you with that problem, although you have to be able to upload them to the server via FTP!
Adminer is a slim and very efficient DB manager tool that is at least as powerful as phpMyAdmin and consists of ONE SINGLE FILE that has to be uploaded to the server, which makes it extremely easy to install. It works much better with large tables / DBs than PMA does.
MySQLDumper is a tool developed especially to export / import large tables / DBs, so it will have no problem with the situation you describe. The only downside is that it is a bit more tedious to install, as there are more files and folders (~350 files in ~1.5MB), but it shouldn't be a problem to upload it via FTP either, and it will definitely get the job done :)
So my advice would be to first try Adminer, and if that one also fails, go the MySQLDumper route.
How do I split a large MySQL backup file into multiple files?
You can use mysql_export_explode
https://github.com/barinascode/mysql-export-explode
<?php
#Including the class
include 'mysql_export_explode.php';
$export = new mysql_export_explode;
$export->db = 'dataBaseName'; # -- Set your database name
$export->connect('host','user','password'); # -- Connecting to database
$export->rows = array('Id','firstName','Telephone','Address'); # -- Set which fields you want to export
$export->exportTable('myTableName',15); # -- Table name and the number of parts to split the table into
?>
At the end, SQL files are created in the directory where the script is executed, in the following format:
---------------------------------------
myTableName_0.sql
myTableName_1.sql
myTableName_2.sql
...

How do I get a tab delimited MySQL dump from a remote host ?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
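The --batch output above is tab-separated despite the .csv filename; if you really need commas, a naive conversion is to pipe it through tr (this will misbehave if any column value itself contains a tab or comma):
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent | tr '\t' ',' > my_file.csv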
I want to add to codeman's answer. It worked, but needed about 30 minutes of tweaking for my needs.
My web server uses CentOS 6/cPanel, and the flags and sequence which codeman used above did not work for me; I had to rearrange things and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs. I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my Centos+Cpanel Server
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag
--column-names
but it made no difference. I am still stuck on this one; I currently add the column names to the file after processing.
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried with
-D databasename
on the command line but kept getting permission errors, so I ended up putting the following at the top of my query.sql:
USE database_name;
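So query.sql ends up looking something like this (the table name is just a placeholder):
USE database_name;
SELECT * FROM some_table;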
On many systems, MySQL runs as a distinct user (such as the "mysql" user), and your mysqldump will fail if that MySQL user does not have write permissions in the dump directory; it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.

using mysqldump to restore database to mysql located on another computer

Any idea how to do this restore?
I looked into the help of mysqldump but couldn't see it there.
If so, can you give me an example?
With mysqldump you generate a script you can use to restore the database on a different computer, like this:
$ mysql -u user_name < your_backup.sql
Run on your favorite shell (windows command prompt, bash, csh...).
I think you can use CMD to navigate to the mysqldump location and then type this command:
mysqldump database_name -u username > location\to\save\dump.sql
Change database_name to the database you want to back up, username to the username associated with the database, and location\to\save\dump.sql to the location where you want to save the output SQL file; for me it was D:\dump.sql.
Then on the other machine you can import the SQL file using phpMyAdmin.
You can just execute the SQL using the mysql command-line tool. There is a switch to specify which file to import, I think it is -I, but I'm not sure.
It's just plain SQL. Pass the file to mysql (the mysql command line tool) and it will execute it:
mysql < backup.sql
From the shell prompt, using parameters from the mysqldump doc, mysqldump the database using a > redirect to a human-readable .sql file. E.g.
$ mysqldump --databases src_db > src_db.sql
Transfer the human-readable file to another machine.
After making sure the destination database has been created, redirect < the .sql file into the destination database.
$ mysql dest_db < src_db.sql
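If the destination database does not exist yet, one quick way to create it first (a sketch, reusing the names from this answer; add whatever user/password flags you normally use) is:
$ mysql -e "CREATE DATABASE IF NOT EXISTS dest_db"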