Show a warning during import when a MySQL dump contains `use` - mysql

Let's suppose I have a large MySQL dump which I want to import into a specific database.
I could use
mysql -D bar --one-database < foo.mysql
foo.mysql contains a use foo; statement somewhere.
This command already does most of what I want: it ignores data that belongs to a database other than bar.
I could run grep -e "^use " foo.mysql to check whether the dump contains a use statement.
But can I also do this during the import, so that I do not have to read the dump twice?

Reading while importing example:
< dump.sql tee >(sed -n '/^USE `[^`]*`;$/ p' 1>&2) | mysql ...
The example imports the file dump.sql into mysql while printing the USE statements as they go by:
...
USE `blue-racoon`;
USE `funny-basil`;
USE `purple-fish`;
...
Explanation: If you have a full dump of all databases on the MySQL server (the --all-databases option) and you would like to review all SQL USE statements while the file is piped into mysql, you can use tee to duplicate the content on the fly and sed to print only those duplicated lines that are USE statements.
The filtered output is redirected to STDERR for review, while the unfiltered output is imported by mysql as normal.
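A minimal sketch putting the two pieces together (the database name bar and the file foo.mysql are just the names from the question; adjust the mysql options to your setup):
< foo.mysql tee >(sed -n '/^USE `[^`]*`;$/ p' 1>&2) | mysql -D bar --one-database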
I hope this helps.

Related

export to disk csv using pipe as delimiter in mysql

I need a CSV from a remote mysql server (so "TO OUTFILE" is not possible) for further processing.
The issue is that the fields are text that usually contains tabs (\t), so it is not possible to use sed or awk to post-process and separate the fields; we would end up with more columns than expected for some rows.
My first approach after reading the MySQL manual is:
content=""$(mysql -u my_user -pthepass -h remote.sql.com -D my_db -s -e "${query}" --delimiter="|")
but to no avail; I get tab-separated output.
Is there any way to avoid making a replacement after the data is loaded?

Import single table from .sql file into database? [duplicate]

I have a mysqldump backup of my MySQL database consisting of all of our tables, which is about 440 MB. I want to restore the contents of just one of the tables from the mysqldump. Is this possible? Theoretically, I could just cut out the section that rebuilds the table I want, but I don't even know how to effectively edit a text document of that size.
You can try to use sed in order to extract only the table you want.
Let's say the name of your table is mytable and mysql.dump is the file containing your huge dump:
$ sed -n -e '/CREATE TABLE.*`mytable`/,/Table structure for table/p' mysql.dump > mytable.dump
This copies into the file mytable.dump everything located between the CREATE TABLE statement for mytable and the header of the next table.
You can then adjust the file mytable.dump, which contains the structure of the table mytable and the data (a list of INSERT statements).
I used a modified version of uloBasEI's sed command. It includes the preceding DROP command and reads until mysql is done dumping data to your table (UNLOCK). Worked for me (re)importing wp_users to a bunch of WordPress sites.
sed -n -e '/DROP TABLE.*`mytable`/,/UNLOCK TABLES/p' mydump.sql > tabledump.sql
Can this be done more easily? This is how I did it:
Create a temporary database (e.g. restore):
mysqladmin -u root -p create restore
Restore the full dump in the temp database:
mysql -u root -p --one-database restore < fulldump.sql
Dump the table you want to recover:
mysqldump restore mytable > mytable.sql
Import the table in another database:
mysql -u root -p database < mytable.sql
A simple solution is to create a separate dump of just the table you wish to restore. You can use the mysqldump command to do so with the following syntax:
mysqldump -u [user] -p[password] [database] [table] > [output_file_name].sql
Then import it as normal, and it will only import the dumped table.
One way or another, any process doing that will have to go through the entire text of the dump and parse it in some way. I'd just grep for
INSERT INTO `the_table_i_want`
and pipe the output into mysql. Take a look at the first table in the dump beforehand, to make sure you're getting the INSERTs the right way.
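A minimal sketch of that approach (the dump file, table, and database names here are hypothetical):
# assumes the table already exists in target_database, since only INSERT lines are piped
grep "INSERT INTO \`the_table_i_want\`" full_dump.sql | mysql -u root -p target_database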
Edit: OK, got the formatting right this time.
Backup
$ mysqldump -A | gzip > mysqldump-A.gz
Restore single table
$ mysql -e "truncate TABLE_NAME" DB_NAME
$ zgrep ^"INSERT INTO \`TABLE_NAME" mysqldump-A.gz | mysql DB_NAME
You should try @bryn's command, but with the ` delimiter; otherwise you will also extract tables having a prefix or a suffix. This is what I usually do:
sed -n -e '/DROP TABLE.*`mytable`/,/UNLOCK TABLES/p' dump.sql > mytable.sql
Also for testing purpose, you may want to change the table name before importing:
sed -e 's/`mytable`/`mytable_restored`/g' mytable.sql > mytable_restored.sql
To import you can then use the mysql command:
mysql -u root -p'password' mydatabase < mytable_restored.sql
One possible way to deal with this is to restore to a temporary database, and dump just that table from the temporary database. Then use the new script.
sed -n -e '/-- Table structure for table `my_table_name`/,/UNLOCK TABLES/p' database_file.sql > table_file.sql
This is a better solution than some of the others above because not all SQL dumps contain a DROP TABLE statement. This one will work with all kinds of dumps.
This tool may be what you want: tbdba-restore-mysqldump.pl
https://github.com/orczhou/dba-tool/blob/master/tbdba-restore-mysqldump.pl
e.g. Restore a table from database dump file:
tbdba-restore-mysqldump.pl -t yourtable -s yourdb -f backup.sql
The table should be present with the same structure in both the dump and the database.
zgrep -a ^"INSERT INTO \`table_name" DbDump-backup.sql.tar.gz | mysql -u<user> -p<password> database_name
or
zgrep -a ^"INSERT INTO \`table_name" DbDump-backup.sql | mysql -u<user> -p<password> database_name
This may help too.
# mysqldump -u root -p database0 > /tmp/database0.sql
# mysql -u root -p -e 'create database database0_bkp'
# mysql -u root -p database0_bkp < /tmp/database0.sql
# mysql -u root -p database0 -e 'insert into database0.table_you_want select * from database0_bkp.table_you_want'
Most modern text editors should be able to handle a text file that size, if your system is up to it.
Anyway, I had to do that once very quickly and I didn't have time to find any tools. I set up a new MySQL instance, imported the whole backup and then dumped out just the table I wanted.
Then I imported that table into the main database.
It was tedious but rather easy. Good luck.
You can use vi editor. Type:
vi -o mysql.dump mytable.dump
to open both whole dump mysql.dump and a new file mytable.dump.
Find the appropriate INSERT INTO line by pressing / and then typing a search phrase, for example "insert into `mytable`", then copy that line using yy. Switch to the next file with Ctrl+W followed by the down arrow key, and paste the copied line with p. Finally, save the new file by typing :wq and quit the vi editor with :q.
Note that if you have dumped the data using multiple INSERTs, you can copy (yank) all of them at once using Nyy, where N is the number of lines to be copied.
I have done this with a file 920 MB in size.
I tried a few options, which were incredibly slow. This split a 360GB dump into its tables in a few minutes:
How do I split the output from mysqldump into smaller files?
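A rough sketch of that splitting idea (not the linked answer itself; it assumes the dump uses mysqldump's standard "-- Table structure for table `name`" section headers and an awk, such as gawk, that can keep many output files open):
awk '/-- Table structure for table `/ {
    match($0, /`[^`]+`/)
    out = substr($0, RSTART + 1, RLENGTH - 2) ".sql"
}
out { print > out }' mysql.dump
This writes each table's structure and data sections into its own file, e.g. mytable.sql.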
The sed solutions mentioned earlier are nice, but as mentioned, not 100% safe.
You may have INSERT commands with data containing:
... CREATE TABLE...(whatever)...mytable...
or even the exact string "CREATE TABLE `mytable`;"
if you are storing SQL commands as data, for instance!
(And if the table is huge, you don't want to check that manually.)
I would verify the exact syntax of the dump version used, and use a more restrictive pattern search:
Avoid ".*" and use "^" to ensure we start at the beginning of the line.
And I'd prefer to grab the initial DROP as well.
All in all, this works better for me:
sed -n -e '/^DROP TABLE IF EXISTS \`mytable\`;/,/^UNLOCK TABLES;/p' mysql.dump > mytable.dump
Get a decent text editor like Notepad++ or Vim (if you're already proficient with it). Search for the table name and you should be able to highlight just the CREATE, ALTER, and INSERT commands for that table. It may be easier to navigate with your keyboard rather than a mouse. And I would make sure you're on a machine with plenty of RAM so that it will not have a problem loading the entire file at once. Once you've highlighted and copied the rows you need, it would be a good idea to back up just the copied part into its own backup file and then import it into MySQL.
The chunks of SQL are blocked off with "Table structure for table my_table" and "Dumping data for table my_table."
You can use a Windows command line as follows to get the line numbers for the various sections. Adjust the searched string as needed.
find /n "for table `" sql.txt
The following will be returned:
---------- SQL.TXT
[4384]-- Table structure for table my_table
[4500]-- Dumping data for table my_table
[4514]-- Table structure for table some_other_table
... etc.
That gets you the line numbers you need... now, if I only knew how to use them... investigating.
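One way to use them (a hedged sketch, assuming a Unix-style sed is available, e.g. via Git Bash or WSL; 4384 and 4513 are just the example line numbers above, with 4514 being the start of the next table's section):
sed -n '4384,4513p' sql.txt > my_table.sql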
You can import a single table using the terminal, as shown below.
Here a single user table is imported into a specific database.
mysql -u root -p -D my_database_name < var/www/html/myproject/tbl_user.sql
I admire some of the ingenuity here, but there is literally no reason to use sed at all to address the OP's question.
The comment "use --one-database" is the correct answer, built into MySQL/MariaDB. No need for third-party hacks.
mysql -u root -p databasename --one-database < localhost.sql will just import the desired database.
I also found in some cases, when using this to import a series of databases, it would create the next database in the list for me (but not put anything in it). Not sure why it did that, but it made the restore easier.
With this command, enter the password interactively and it will import the requested database.

How to ignore certain MySQL tables when importing a database?

I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport to import that database, however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables, but ignore some of them? Thank you.
The accepted answer by RandomSeed could take a long time! Importing the table (just to drop it later) could be very wasteful depending on size.
For a file created using
mysqldump -u user -ppasswd --opt --routines DBname > DBdump.sql
I currently get a file about 7GB, 6GB of which is data for a log table that I don't 'need' to be there; reloading this file takes a couple of hours. If I need to reload (for development purposes, or if ever required for a live recovery) I skim the file thus:
sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql
And reload with:
mysql -u user -ppasswd DBname < reduced.sql
This gives me a complete database, with the "unwanted" table created but empty. If you really don't want the tables at all, simply drop the empty tables after the load finishes.
For multiple tables you could do something like this:
sed '/INSERT INTO `TABLE1_TO_SKIP`/d' DBdump.sql | \
sed '/INSERT INTO `TABLE2_TO_SKIP`/d' | \
sed '/INSERT INTO `TABLE3_TO_SKIP`/d' > reduced.sql
There IS a 'gotcha' - watch out for procedures in your dump that might contain "INSERT INTO TABLE_TO_SKIP".
mysqlimport is not the right tool for importing SQL statements. This tool is meant to import formatted text files such as CSV. What you want to do is feed your sql dump directly to the mysql client with a command like this one:
bash > mysql -D your_database < your_sql_dump.sql
Neither mysql nor mysqlimport provides the feature you need. Your best chance is to import the whole dump, then drop the tables you do not want.
If you have access to the server where the dump comes from, then you could create a new dump with mysqldump --ignore-table=database.table_you_dont_want1 --ignore-table=database.table_you_dont_want2 ....
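A minimal sketch of both options (the database, table, and file names here are hypothetical):
# option 1: import everything, then drop the unwanted tables
mysql -u root -p your_database < your_sql_dump.sql
mysql -u root -p -e "DROP TABLE your_database.table_you_dont_want1, your_database.table_you_dont_want2"
# option 2: if you can re-dump at the source, exclude the tables up front
mysqldump -u root -p --ignore-table=your_database.table_you_dont_want1 --ignore-table=your_database.table_you_dont_want2 your_database > new_dump.sql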
Check out this answer for a workaround to skip importing some tables.
For anyone working with .sql.gz files: I found the following solution to be very useful. Our database was 25GB+, and I had to remove the log tables.
gzip -cd "./mydb.sql.gz" | sed -r '/INSERT INTO `(log_table_1|log_table_2|log_table_3|log_table_4)`/d' | gzip > "./mydb2.sql.gz"
Thanks to Don's answer, Xosofox's comment, and this related post:
Use zcat and sed or awk to edit compressed .gz text file
A little old, but I figure it might still come in handy...
I liked @Don's answer (https://stackoverflow.com/a/26379517/1446005) but found it very annoying that you'd have to write to another file first...
In my particular case this would take too much time and disk space.
So I wrote a little bash script:
#!/bin/bash
tables=(table1_to_skip table2_to_skip ... tableN_to_skip)
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}
grepExp="INSERT INTO \`($trimmed)\`"
zcat $1 | grep -vE "$grepExp" | mysql -uroot -p
This does not generate a new SQL script but pipes it directly into the database.
Also, it does create the tables; it just doesn't import the data (which was the problem I had with huge log tables).
Unless you have ignored the tables during the dump with mysqldump --ignore-table=database.unwanted_table, you have to use some script or tool to filter out the data you don't want to import from the dump file before passing it to the mysql client.
Here is a bash/sh function that would exclude the unwanted tables from a SQL dump on the fly (through pipe):
# Accepts one argument, the list of tables to exclude (case-insensitive).
# Eg. filt_exclude '%session% action_log %_cache'
filt_exclude() {
local excl_tns;
if [ -n "$1" ]; then
# trim & replace /[,;\s]+/ with '|' & replace '%' with '[^`]*'
excl_tns=$(echo "$1" | sed -r 's/^[[:space:]]*//g; s/[[:space:]]*$//g; s/[[:space:]]+/|/g; s/[,;]+/|/g; s/%/[^\`]\*/g');
grep -viE "(^INSERT INTO \`($excl_tns)\`)|(^DROP TABLE (IF EXISTS )?\`($excl_tns)\`)|^LOCK TABLES \`($excl_tns)\` WRITE" | \
sed 's/^CREATE TABLE `/CREATE TABLE IF NOT EXISTS `/g'
else
cat
fi
}
Suppose you have a dump created like so:
MYSQL_PWD="my-pass" mysqldump -u user --hex-blob db_name | \
pigz -9 > dump.sql.gz
And want to exclude some unwanted tables before importing:
pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
MYSQL_PWD="my-pass" mysql -u user db_name
Or you could pipe into a file or any other tool before importing to DB.
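For example, a hedged sketch of writing the filtered dump back to a compressed file instead of importing it directly (the file names are hypothetical):
pigz -dckq dump.sql.gz | filt_exclude '%session% action_log %_cache' | pigz -9 > dump.filtered.sql.gz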
If desired, you can do this one table at a time:
mysqldump -p sourceDatabase tableName > tableName.sql
mysql -p -D targetDatabase < tableName.sql
Here is my script to exclude some tables from a MySQL dump.
I use it to restore a DB when I need to keep the orders and payments data.
exclude_tables_from_dump.sh
#!/bin/bash
if [ ! -f "$1" ];
then
echo "Usage: $0 mysql_dump.sql"
exit
fi
declare -a TABLES=(
user
order
order_product
order_status
payments
)
CMD="cat $1"
for TBL in "${TABLES[@]}"; do
CMD+="|sed 's/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXIST \`${TBL}\`/g'"
CMD+="|sed 's/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g'"
CMD+="|sed -r '/INSERT INTO \`${TBL}\`/d'"
CMD+="|sed '/DELIMITER\ \;\;/,/DELIMITER\ \;/d'"
done
eval $CMD
It avoids the DROP and re-CREATE of those tables and avoids inserting data into them.
It also strips all FUNCTIONS and PROCEDURES stored between DELIMITER ;; and DELIMITER ;.
I would not use this in production, but if I had to quickly import a backup that contains many smaller tables and one big monster table that might take hours to import, I would most probably run
grep -v unwanted_table_name original.sql > reduced.sql
and then
mysql -f < reduced.sql

How do I get a tab delimited MySQL dump from a remote host ?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server uses CentOS 6/cPanel, and the flags and sequence that codeman used above did not work for me; I had to rearrange and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs, because I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my Centos+Cpanel Server
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add it to the file after processing.
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried with
-D databasename
on the command line, but I kept getting permission errors, so I ended up using the following at the top of my query.sql:
USE database_name;
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.
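A minimal sketch of that fix (the directory and database names are hypothetical; the directory must be on the filesystem of the server running mysqld):
mkdir -p /tmp/dump_dir
chmod 777 /tmp/dump_dir
mysqldump -u user -p -T /tmp/dump_dir my_db --fields-terminated-by=,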

MySQL - SELECT * INTO OUTFILE LOCAL ?

MySQL is awesome! I am currently involved in a major server migration and previously, our small database used to be hosted on the same server as the client. So we used to do this : SELECT * INTO OUTFILE .... LOAD DATA INFILE ....
Now, we moved the database to a different server and SELECT * INTO OUTFILE .... no longer works, which is understandable - security reasons, I believe.
But, interestingly LOAD DATA INFILE .... can be changed to LOAD DATA LOCAL INFILE .... and bam, it works.
I am not complaining nor am I expressing disgust towards MySQL. The alternative to that added 2 lines of extra code and a system call from a .sql script. All I wanted to know is why LOAD DATA LOCAL INFILE works and why there is no such thing as SELECT INTO OUTFILE LOCAL.
I did my homework, but couldn't find a direct answer to my questions above. I couldn't find a feature request at MySQL either. If someone can clear that up, that would be awesome!
Is MariaDB capable of handling this problem?
From the manual: "The SELECT ... INTO OUTFILE statement is intended primarily to let you very quickly dump a table to a text file on the server machine. If you want to create the resulting file on some client host other than the server host, you cannot use SELECT ... INTO OUTFILE. In that case, you should instead use a command such as mysql -e "SELECT ..." > file_name to generate the file on the client host."
http://dev.mysql.com/doc/refman/5.0/en/select.html
An example:
mysql -h my.db.com -u usrname --password=pass db_name -e 'SELECT foo FROM bar' > /tmp/myfile.txt
You can achieve what you want with the mysql console with the -s (--silent) option passed in.
It's probably a good idea to also pass in the -r (--raw) option so that special characters don't get escaped. You can use this to pipe queries like you're wanting.
mysql -u username -h hostname -p -s -r -e "select concat('this',' ','works')"
EDIT: Also, if you want to remove the column name from your output, just add another -s (mysql -ss -r etc.)
The path you give to LOAD DATA INFILE is for the filesystem on the machine where the server is running, not the machine you connect from. LOAD DATA LOCAL INFILE is for the client's machine, but it requires that the server was started with the right settings, otherwise it's not allowed. You can read all about it here: http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
As for SELECT INTO OUTFILE I'm not sure why there is not a local version, besides it probably being tricky to do over the connection. You can get the same functionality through the mysqldump tool, but not through sending SQL to the server.
Since I find myself rather regularly looking for this exact problem (in the hope that I missed something before...), I finally decided to take the time and write up a small gist to export MySQL queries as CSV files, kinda like https://stackoverflow.com/a/28168869 but based on PHP and with a couple more options. This was important for my use case, because I need to be able to fine-tune the CSV parameters (delimiter, NULL value handling), AND the files need to be actually valid CSV: a simple CONCAT is not sufficient, since it doesn't generate valid CSV files if the values contain line breaks or the CSV delimiter.
Caution: Requires PHP to be installed on the server!
(Can be checked via php -v)
"Install" mysql2csv via
wget https://gist.githubusercontent.com/paslandau/37bf787eab1b84fc7ae679d1823cf401/raw/29a48bb0a43f6750858e1ddec054d3552f3cbc45/mysql2csv -O mysql2csv -q && (sha256sum mysql2csv | cmp <(echo "b109535b29733bd596ecc8608e008732e617e97906f119c66dd7cf6ab2865a65 mysql2csv") || (echo "ERROR comparing hash, Found:" ;sha256sum mysql2csv) ) && chmod +x mysql2csv
(download content of the gist, check checksum and make it executable)
Usage example
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
generates file /tmp/result.csv with content
foo,bar
1,2
help for reference
./mysql2csv --help
Helper command to export data for an arbitrary mysql query into a CSV file.
Especially helpful if the use of "SELECT ... INTO OUTFILE" is not an option, e.g.
because the mysql server is running on a remote host.
Usage example:
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
cat /tmp/result.csv
Options:
-q,--query=name [required]
The query string to extract data from mysql.
-h,--host=name
(Default: 127.0.0.1) The hostname of the mysql server.
-D,--database=name
The default database.
-P,--port=name
(Default: 3306) The port of the mysql server.
-u,--user=name
The username to connect to the mysql server.
-p,--password=name
The password to connect to the mysql server.
-F,--file=name
(Default: php://stdout) The filename to export the query result to ('php://stdout' prints to console).
-L,--delimiter=name
(Default: ,) The CSV delimiter.
-C,--enclosure=name
(Default: ") The CSV enclosure (that is used to enclose values that contain special characters).
-E,--escape=name
(Default: \) The CSV escape character.
-N,--null=name
(Default: \N) The value that is used to replace NULL values in the CSV file.
-H,--header=name
(Default: 1) If '0', the resulting CSV file does not contain headers.
--help
Prints the help for this command.
Using the mysql CLI with the -e option, as Waverly360 suggests, is a good one, but it might run out of memory and get killed on large results (I haven't found the reason behind it).
If that is the case, and you need all records, my solution is mysqldump + mysqldump-to-csv:
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=hostname database table | python mysqldump_to_csv.py > table.csv
Re: SELECT * INTO OUTFILE
Check if MySQL has permissions to write a file to the OUTFILE directory on the server.
Try setting the path to /var/lib/mysql-files/filename.csv (MySQL 8). Determine which files directory is yours by typing SHOW VARIABLES LIKE "secure_file_priv"; at the mysql client command line.
See the answer here: (...) --secure-file-priv in MySQL, answered in 2015 by user vhu.
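A minimal sketch of that check and a matching export (the database, table, and file names are hypothetical; the output directory must be whatever secure_file_priv reports on your server):
mysql -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv'"
mysql -u root -p my_db -e "SELECT * FROM my_table INTO OUTFILE '/var/lib/mysql-files/my_table.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '\"'"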