Dumping data from views in MySQL

I have a view and want to extract its data into a file that contains the CREATE TABLE statement as well as the data.
I know that mysqldump doesn't work on views.

Obviously, there isn't an automated way to generate the CREATE TABLE statement of a table that does not exist. So you basically have two options:
Create an actual table, dump it and remove it afterwards.
Write a lot of code to analyse the view and underlying tables and generate the appropriate SQL.
The first option is not optimal at all, but it's easy to implement:
CREATE TABLE my_table AS
SELECT *
FROM my_view
You can now dump the table with mysqldump. When you're done:
DROP TABLE my_table
The second option can be as optimal as you need, but it can get pretty complicated, and it depends a lot on your actual needs and tool availability. However, if performance is an issue, you can combine both approaches in a quick and dirty trick:
CREATE TABLE my_table AS
SELECT *
FROM my_view
LIMIT 1;
SHOW CREATE TABLE my_table;
Now, you use your favourite language to read values from my_view and build the appropriate INSERT INTO code. Finally:
DROP TABLE my_table;
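For the INSERT-building step, even a shell one-liner can work in simple cases. A minimal sketch, assuming my_view has two columns named col1 and col2 (MySQL's QUOTE() function handles escaping and NULLs):
mysql -BN my_schema -e "SELECT CONCAT('INSERT INTO my_table VALUES (', QUOTE(col1), ',', QUOTE(col2), ');') FROM my_view"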
In any case, feel free to explain why you need to obtain SQL code from views and we may be able to find better solutions.

Use SELECT ... INTO OUTFILE to create a dump of the data.
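For example (the output file is written by the MySQL server process, must be writable by it, and must not already exist):
SELECT * FROM my_view
INTO OUTFILE '/tmp/my_view.txt'
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';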

I have written a bash function to export the "structure" and data of a VIEW without creating a full copy of the data. I tested it with MySQL 5.6 on a CentOS 7 server. It properly takes into account columns with JSON values and strings like "O'Mally", though you may need to tweak it further for other special cases.
For the sake of brevity, I did not make it robust in terms of error checks or anything else.
function export_data_from_view
{
    local DB_HOST=$1
    local SCHEMA=$2
    local VIEW=$3
    local TMP_TABLE_NAME="view_as_table_$RANDOM"
    local SQL1="
        create table $TMP_TABLE_NAME as
        (select * from $VIEW where 1=0);
        show create table $TMP_TABLE_NAME \G
    "
    # Create an empty table with the structure of all columns in the VIEW.
    # Display the structure. Delete lines not needed.
    local STRUCT=$(
        mysql -h $DB_HOST -BANnq -e "$SQL1" $SCHEMA |
            egrep -v "\*\*\*.* row \*\*\*|^${TMP_TABLE_NAME}$" |
            sed "s/$TMP_TABLE_NAME/$VIEW/"
    )
    echo
    echo "$STRUCT;"
    echo

    # Build a comma-separated list of quote(column_name) expressions.
    local SQL2="
        select concat( 'quote( ', column_name, ' ),' )
        from information_schema.columns
        where table_schema = '$SCHEMA'
        and table_name = '$VIEW'
        order by ORDINAL_POSITION
    "
    local COL_LIST=$(mysql -h $DB_HOST -BANnq -e "$SQL2")
    # Remove the last comma from COL_LIST.
    COL_LIST=${COL_LIST%,}

    local SQL3="select $COL_LIST from $VIEW"
    local INSERT_STR="insert into $VIEW values "
    # Fix quoting issues to produce executable INSERT statements.
    # \x27 is the single quote.
    # \x5C is the backslash.
    mysql -h $DB_HOST -BANnq -e "$SQL3" $SCHEMA |
        sed '
            s/\t/,/g;                  # Change each TAB to a comma.
            s/\x5C\x5C\x27/\x5C\x27/g; # Change each backslash-backslash-quote to a backslash-quote.
            s/\x27NULL\x27/NULL/g;     # Remove quotes from around real NULL values.
            s/\x27\x27{/\x27{/g;       # Remove extra quotes from the beginning of a JSON value.
            s/}\x27\x27/}\x27/g;       # Remove extra quotes from the end of a JSON value.
        ' |
        awk -v insert="$INSERT_STR" '{print insert "( " $0 " );"}'

    local SQL4="drop table if exists $TMP_TABLE_NAME"
    mysql -h $DB_HOST -BANnq -e "$SQL4" $SCHEMA
    echo
}
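A hypothetical invocation, redirecting the output to a file:
export_data_from_view db1.example.com my_schema my_view > my_view_dump.sql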

Related

Batch for mysql dump - exclude some table based on SELECT

We have a hundred tables in our database; the data of twenty of these tables is generated by a LOAD DATA INFILE
(there is therefore no point in saving them with mysqldump, knowing that they are the heaviest - around 80% of the size of the database).
These LOAD DATA INFILE operations are managed using a PHP form, so the names of the tables are saved in the database like this:
Name of the table: import_table
Columns of the table: table_name, table_column, date_creation, etc.
So when I make this query:
SELECT table_name FROM import_table
I have this result:
list_of_customer
all_order
all_invoice
...
I would therefore like to use this table, which contains all the tables to ignore (and which can change at any time), to create my batch that performs the mysqldump.
So I did this:
@ECHO OFF
"C:\wamp\bin\mysql\mysql8.0.18\bin\mysqldump.exe" mydatabase --result-file="C:\test1\test2\databases.sql" --user=**** --password=****
How can I integrate my SELECT table_name FROM import_table in order to use the --ignore-table option?
Write the table names to a file
Read that file into a variable
Use that variable with --ignore-table
@ECHO OFF
"C:\wamp\bin\mysql\mysql8.0.18\bin\mysql.exe" mydatabase -e "SELECT group_concat(table_name) FROM import_table" --user=**** --password=**** > queryresult.txt
set /p ExcludedTables=<queryresult.txt
"C:\wamp\bin\mysql\mysql8.0.18\bin\mysqldump.exe" mydatabase --result-file="C:\test1\test2\databases.sql" --user=**** --password=**** --ignore-table=%ExcludedTables%
I can't test this as I am on a Mac, but I hope you get the idea of how you can do it.
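Note that --ignore-table must be given once per table, in db_name.table_name form, so a single comma-separated list will not work as-is. On a Unix-like shell, an untested sketch of the same idea (credentials are placeholders as above) would be:
IGNORE_OPTS=""
for t in $(mysql mydatabase -N -B -e "SELECT table_name FROM import_table" --user=**** --password=****); do
    IGNORE_OPTS="$IGNORE_OPTS --ignore-table=mydatabase.$t"
done
mysqldump mydatabase --result-file=databases.sql --user=**** --password=**** $IGNORE_OPTS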

Show query result ONLY in MySQL

How do I execute a MySQL query and avoid having the query or an alias in the output? I tried "" (an empty string) as an alias, but I didn't get the results I expected, as I ended up with a blank line.
edit: added some code
SELECT
CONCAT("{\"counters\":{",
-- Total memory used calculation
"\"mysql.total_memory\":",
((@@read_buffer_size + @@sort_buffer_size) * @@max_connections + @@key_buffer_size),",",
-- other monitored server status variables
GROUP_CONCAT(
CONCAT("\"mysql.",LCASE(VARIABLE_NAME),"\":",VARIABLE_VALUE)
)
,"}}")
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME = "SLOW_QUERIES"
OR VARIABLE_NAME="Qcache_lowmem_prunes"
OR VARIABLE_NAME="SELECT_FULL_JOIN"
OR VARIABLE_NAME="SELECT_RANGE_CHECK"
OR VARIABLE_NAME="SELECT_SCAN"
OR VARIABLE_NAME="SELECT_RANGE";
I want to have a json format as output. I need this as an input for another software. This software doesn't accept a blank line among other things (compressed json format).
edit2: added output
CONCAT("{\"counters\":{",
"\"mysql.total_memory\":",
((@@read_buffer_size + @@sort_buffer_size) * @@max_connections + @@key_buffer_size),",",
GROUP_CONCAT(
CONCAT("\"mysql.",LCASE(VARIABLE_NAME),"\":",VARIABLE_VALUE)
)
,"}}")
{"counters":{"mysql.total_memory":39108608,"mysql.qcache_lowmem_prunes":0,"mysql.select_full_join":0,"mysql.select_range":0,"mysql.select_range_check":0,"mysql.select_scan":84,"mysql.slow_queries":0}}
I want to remove the "CONCAT(...)" part and only have the result as output.
I think I know what you mean: you only want the outcome of the query printed, without the boxes and headers. To do that, start your mysql client the following way: mysql -uroot -p -s -r -N.
This will suppress the output of the boxes around the queries and also the column names. You can also use the -e parameter to execute your query and then exit the mysql client after printing the results to stdout, which is useful in scripts. Please see below a (simplified) example:
[root@db1 ~]# mysql -uroot -p******* -s -r -N -e "select 1+1"
2
[root@db1 ~]#
Every column in the result set must be named, but you can use whatever explicit (non-empty) alias you wish. As documented under SELECT Syntax:
A select_expr can be given an alias using AS alias_name.
Therefore, in your case:
SELECT CONCAT(
'{"counters":{',
-- Total memory used calculation
'"mysql.total_memory":', (
(@@read_buffer_size + @@sort_buffer_size) * @@max_connections
+ @@key_buffer_size
),',',
-- other monitored server status variables
GROUP_CONCAT('"mysql.',LCASE(VARIABLE_NAME),'":',VARIABLE_VALUE),
'}}'
) AS my_column -- assign your chosen alias here <===================
FROM INFORMATION_SCHEMA.GLOBAL_STATUS
WHERE VARIABLE_NAME IN (
'SLOW_QUERIES',
'Qcache_lowmem_prunes',
'SELECT_FULL_JOIN',
'SELECT_RANGE_CHECK',
'SELECT_SCAN',
'SELECT_RANGE'
);
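Combining the alias with the -s -r -N flags from the previous answer then prints only the result row. A simplified, self-contained sketch:
mysql -uroot -p -s -r -N -e "SELECT CONCAT('{\"counters\":{\"demo\":', 1, '}}') AS my_column"
{"counters":{"demo":1}}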

Using mysqldump to format one insert per line?

This has been asked a few times, but I cannot find a resolution to my problem. Basically, when I dump a database with mysqldump (the backup tool built into the MySQL Workbench administration tool) using extended inserts, I get massive long lines of data. I understand why it does this, as it speeds up inserts by inserting the data as one command (especially on InnoDB), but the formatting makes it REALLY difficult to actually look at the data in a dump file, or to compare two files with a diff tool if you are storing them in version control, etc. In my case I am storing them in version control, as we use the dump files to keep track of our integration test database.
Now I know I can turn off extended inserts, so I will get one insert per line, which works, but any time you do a restore with the dump file it will be slower.
My core problem is that in the OLD tool we used to use (MySQL Administrator) when I dump a file, it does basically the same thing but it FORMATS that INSERT statement to put one insert per line, while still doing bulk inserts. So instead of this:
INSERT INTO `coupon_gv_customer` (`customer_id`,`amount`) VALUES (887,'0.0000'),(191607,'1.0300');
you get this:
INSERT INTO `coupon_gv_customer` (`customer_id`,`amount`) VALUES
(887,'0.0000'),
(191607,'1.0300');
No matter what options I try, there does not seem to be any way of getting a dump like this, which is really the best of both worlds. Yes, it takes a little more space, but in situations where you need a human to read the files, it makes them MUCH more useful.
Am I missing something and there is a way to do this with MySQLDump, or have we all gone backwards and this feature in the old (now deprecated) MySQL Administrator tool is no longer available?
Try use the following option:
--skip-extended-insert
It worked for me.
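For example, a sketch with placeholder credentials and database name:
mysqldump --skip-extended-insert -u user -p mydatabase > mydatabase.sql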
With one INSERT per record (mysqldump's --skip-extended-insert format), each record dumped generates an individual INSERT command in the dump file (i.e., the sql file), each on its own line. This is perfect for source control (e.g., svn, git, etc.) as it makes the diff and delta resolution much finer, and ultimately results in a more efficient source control process. However, for significantly sized tables, executing all those INSERT queries can potentially make restoration from the sql file prohibitively slow.
Using the --extended-insert option fixes the multiple INSERT problem by wrapping all the records into a single INSERT command on a single line in the dumped sql file. However, the source control process becomes very inefficient. The entire table contents is represented on a single line in the sql file, and if a single character changes anywhere in that table, source control will flag the entire line (i.e., the entire table) as the delta between versions. And, for large tables, this negates many of the benefits of using a formal source control system.
So ideally, for efficient database restoration, in the sql file, we want each table to be represented by a single INSERT. For an efficient source control process, in the sql file, we want each record in that INSERT command to reside on its own line.
My solution to this is the following back-up script:
#!/bin/bash
cd my_git_directory/
ARGS="--host=myhostname --user=myusername --password=mypassword --opt --skip-dump-date"
/usr/bin/mysqldump $ARGS --database mydatabase | sed 's$VALUES ($VALUES\n($g' | sed 's$),($),\n($g' > mydatabase.sql
git fetch origin master
git merge origin/master
git add mydatabase.sql
git commit -m "Daily backup."
git push origin master
The result is a sql file INSERT command format that looks like:
INSERT INTO `mytable` VALUES
(r1c1value, r1c2value, r1c3value),
(r2c1value, r2c2value, r2c3value),
(r3c1value, r3c2value, r3c3value);
Some notes:
password on the command line ... I know, not secure, different discussion.
--opt: Among other things, turns on the --extended-insert option (i.e., one INSERT per table).
--skip-dump-date: mysqldump normally puts a date/time stamp in the sql file when created. This can become annoying in source control when the only delta between versions is that date/time stamp. The OS and source control system will date/time stamp the file and version; it's not really needed in the sql file.
The git commands are not central to the fundamental question (formatting the sql file), but they show how I get my sql file back into source control; something similar can be done with svn. When combining this sql file format with your source control of choice, you will find that when your users update their working copies, they only need to move the deltas (i.e., changed records) across the internet, and they can take advantage of diff utilities to easily see what records in the database have changed.
If you're dumping a database that resides on a remote server, if possible, run this script on that server to avoid pushing the entire contents of the database across the network with each dump.
If possible, establish a working source control repository for your sql files on the same server you are running this script from; check them into the repository from there. This will also help prevent having to push the entire database across the network with every dump.
As others have said, using sed to replace "),(" is not safe, as this sequence can appear as content in the database.
There is a way to do this however:
if your database name is my_database then run the following:
$ mysqldump -u my_db_user -p -h 127.0.0.1 --skip-extended-insert my_database > my_database.sql
$ sed ':a;N;$!ba;s/)\;\nINSERT INTO `[A-Za-z0-9$_]*` VALUES /),\n/g' my_database.sql > my_database2.sql
you can also use "sed -i" to replace in-line.
Here is what this code is doing:
--skip-extended-insert will create one INSERT INTO for every row you have.
Now we use sed to clean up the data. Note that a regular search/replace with sed applies to a single line, so we cannot match the "\n" character directly, as sed works one line at a time. That is why we put ":a;N;$!ba;", which tells sed to keep appending the next line to its buffer so that the search can span multiple lines.
Hope this helps
What about storing the dump into a CSV file with mysqldump, using the --tab option like this?
mysqldump --tab=/path/to/serverlocaldir --single-transaction <database> table_a
This produces two files:
table_a.sql that contains only the table create statement; and
table_a.txt that contains tab-separated data.
RESTORING
You can restore your table via LOAD DATA:
LOAD DATA INFILE '/path/to/serverlocaldir/table_a.txt'
INTO TABLE table_a FIELDS TERMINATED BY '\t' ...
LOAD DATA is usually 20 times faster than using INSERT statements.
If you have to restore your data into another table (e.g. for review or testing purposes) you can create a "mirror" table:
CREATE TABLE table_for_test LIKE table_a;
Then load the CSV into the new table:
LOAD DATA INFILE '/path/to/serverlocaldir/table_a.txt'
INTO TABLE table_for_test FIELDS TERMINATED BY '\t' ...
COMPARE
A CSV file is simplest for diffs or for looking inside, or for non-SQL technical users who can use common tools like Excel, Access or command line (diff, comm, etc...)
I'm afraid this won't be possible. In the old MySQL Administrator I wrote the code for dumping db objects, which was completely independent of the mysqldump tool and hence offered a number of additional options (like this formatting, or progress feedback). In MySQL Workbench it was decided to use the mysqldump tool instead, which, besides being a step backwards in some regards and producing version problems, has the advantage of always staying up to date with the server.
So the short answer is: formatting is currently not possible with mysqldump.
Try this:
mysqldump -c -t --add-drop-table=FALSE --skip-extended-insert -uroot -p<Password> databaseName tableName >c:\path\nameDumpFile.sql
I found this tool very helpful for dealing with extended inserts: http://blog.lavoie.sl/2014/06/split-mysqldump-extended-inserts.html
It parses the mysqldump output and inserts linebreaks after each record, but still using the faster extended inserts. Unlike a sed script, there shouldn't be any risk of breaking lines in the wrong place if the regex happens to match inside a string.
I liked Ace.Di's solution with sed, until I got this error:
sed: Couldn't re-allocate memory
Thus I had to write a small PHP script:
mysqldump -u my_db_user -p -h 127.0.0.1 --skip-extended-insert my_database | php mysqlconcatinserts.php > db.sql
The PHP script also starts a new INSERT every 10,000 rows, again to avoid memory problems.
mysqlconcatinserts.php:
#!/usr/bin/php
<?php
/* Assuming a mysqldump made with --skip-extended-insert: merge runs of
   single-row INSERTs for the same table into one extended INSERT,
   starting a fresh statement every $maxinserts rows. */
$last = '';
$count = 0;
$maxinserts = 10000;
while ($l = fgets(STDIN)) {
    if (preg_match('/^(INSERT INTO .* VALUES) (.*);/', $l, $s)) {
        if ($last != $s[1] || $count > $maxinserts) {
            if ($last != '') {
                echo ";\n"; // Close the previous extended INSERT.
            }
            echo "$s[1] "; // Start a new extended INSERT for this table.
            $comma = '';
            $last = $s[1];
            $count = 0;
        }
        echo "$comma$s[2]";
        $comma = ",\n";
    } else {
        if ($last != '') {
            $last = '';
            echo ";\n"; // Close an open extended INSERT...
        }
        echo $l; // ...and pass non-INSERT lines (structure, comments) through.
    }
    $count++;
}
Add
set autocommit=0;
to the first line of your SQL script file, then import with:
mysql -u<user> -p<password> --default-character-set=utf8 db_name < <path>\xxx.sql
It will be about 10x faster. (Remember to put a commit; at the end of the file, or the changes will be rolled back when the session closes.)
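Equivalently, without editing the dump file, you could wrap it on the fly; a sketch with placeholder file and database names:
( echo "SET autocommit=0;"; cat xxx.sql; echo "COMMIT;" ) | mysql -u user -p db_name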

Find and Replace text in the entire table using a MySQL query

Usually I manually find and replace text in a MySQL database using phpMyAdmin. I'm tired of that now; how can I run a query to find and replace text with new text in the entire table in phpMyAdmin?
Example: find keyword domain.example, replace with www.domain.example.
For a single table update
UPDATE `table_name`
SET `field_name` = replace(same_field_name, 'unwanted_text', 'wanted_text')
For multiple tables:
If you want to edit all tables, the best way is to take a dump, do the find/replace on it, and upload it back.
The easiest way I have found is to dump the database to a text file, run a sed command to do the replace, and reload the database back into MySQL.
All commands below are bash on Linux.
Dump database to text file
mysqldump -u user -p databasename > ./db.sql
Run sed command to find/replace target string
sed -i 's/oldString/newString/g' ./db.sql
Reload the database into MySQL
mysql -u user -p databasename < ./db.sql
Easy peasy.
Running an SQL query in phpMyAdmin to find and replace text in all WordPress blog posts, such as finding mysite.example/wordpress and replacing that with mysite.example/news
Table in this example is tj_posts
UPDATE `tj_posts`
SET `post_content` = replace(post_content, 'mysite.example/wordpress', 'mysite.example/news')
Put this in a PHP file and run it, and it should do what you want. (Note that the mysql_* functions used here are deprecated and were removed in PHP 7.)
// Connect to your MySQL database.
$hostname = "localhost";
$username = "db_username";
$password = "db_password";
$database = "db_name";
mysql_connect($hostname, $username, $password);

// The find and replace strings.
$find = "find_this_text";
$replace = "replace_with_this_text";

// Generate one UPDATE ... REPLACE statement per column in the schema.
$loop = mysql_query("
    SELECT
        concat('UPDATE ',table_schema,'.',table_name, ' SET ',column_name, '=replace(',column_name,', ''{$find}'', ''{$replace}'');') AS s
    FROM
        information_schema.columns
    WHERE
        table_schema = '{$database}'")
    or die ('Can\'t loop through dbfields: ' . mysql_error());

while ($query = mysql_fetch_assoc($loop)) {
    mysql_query($query['s']);
}
phpMyAdmin includes a neat find-and-replace tool.
Select the table, then hit Search > Find and replace
This query took about a minute and successfully replaced several thousand instances of oldurl.ext with newurl.ext within the column post_content.
Best thing about this method: you get to check every match before committing.
N.B. I am using phpMyAdmin 4.9.0.1
Another option is to generate the statements for each column in the database:
SELECT CONCAT(
'update ', table_name ,
' set ', column_name, ' = replace(', column_name,', ''www.oldDomain.example'', ''www.newDomain.example'');'
) AS statement
FROM information_schema.columns
WHERE table_schema = 'mySchema' AND table_name LIKE 'yourPrefix_%';
This should generate a list of update statements that you can then execute.
UPDATE table SET field = replace(field, text_needs_to_be_replaced, text_required);
For example, if I want to replace all occurrences of John with Mark, I would use the following:
UPDATE student SET student_name = replace(student_name, 'John', 'Mark');
If you are positive that none of the fields to be updated are serialized, the solutions above will work well.
However, if any of the fields that need updating contain serialized data, an SQL Query or a simple search/replace on a dump file, will break serialization (unless the replaced string has exactly the same number of characters as the searched string).
To be sure, a "serialized" field looks like this:
a:1:{s:13:"administrator";b:1;}
The number of characters in the relevant data is encoded as part of the data.
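For instance, here is a quick demonstration with the php CLI (the array is just a hypothetical example):
$ php -r 'echo serialize(["url" => "http://old.tld"]);'
a:1:{s:3:"url";s:14:"http://old.tld";}
Replacing http://old.tld with a longer URL without also updating the s:14 length marker will make unserialize() fail.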
Serialization is a way to convert "objects" into a format easily stored in a database, or to easily transport object data between different languages.
Here is an explanation of different methods used to serialize object data, and why you might want to do so, and here is a WordPress-centric post: Serialized Data, What Does That Mean And Why is it so Important? in plain language.
It would be amazing if MySQL had some built in tool to handle serialized data automatically, but it does not, and since there are different serialization formats, it would not even make sense for it to do so.
wp-cli
Some of the answers above seemed specific to WordPress databases, which serializes much of its data. WordPress offers a command line tool, wp search-replace, that does handle serialization.
A basic command would be:
wp search-replace 'an-old-string' 'a-new-string' --dry-run
However, WordPress emphasizes that the guid should never be changed, so it recommends skipping that column.
It also suggests that often times you'll want to skip the wp_users table.
Here's what that would look like:
wp search-replace 'https://old-domain.example' 'https://shiney-new-domain.com' --skip-columns=guid --skip-tables=wp_users --dry-run
Note: I added the --dry-run flag so a copy-paste won't automatically ruin anyone's database. After you're sure the script does what you want, run it again without that flag.
Plugins
If you are using WordPress, there are also many free and commercial plugins available that offer a GUI to do the same, packaged with many additional features.
Interconnect/it PHP script
Interconnect/it offers a PHP script to handle serialized data: Safe Search and Replace tool. It was created for use on WordPress sites, but it looks like it can be used on any database serialized by PHP.
Many companies, including WordPress itself, recommend this tool. Instructions are here, about 3/4 of the way down the page.
UPDATE `MySQL_Table`
SET `MySQL_Table_Column` = REPLACE(`MySQL_Table_Column`, 'oldString', 'newString')
WHERE `MySQL_Table_Column` LIKE 'oldString%';
I believe "swapnesh" answer to be the best ! Unfortunately I couldn't execute it in phpMyAdmin (4.5.0.2) who although illogical (and tried several things) it kept saying that a new statement was found and that no delimiter was found…
Thus I came with the following solution that might be usefull if you exeprience the same issue and have no other access to the database than PMA…
UPDATE `wp_posts` AS `toUpdate`,
(SELECT `ID`,REPLACE(`guid`,'http://old.tld','http://new.tld') AS `guid`
FROM `wp_posts` WHERE `guid` LIKE 'http://old.tld%') AS `updated`
SET `toUpdate`.`guid`=`updated`.`guid`
WHERE `toUpdate`.`ID`=`updated`.`ID`;
To test the expected result you may want to use:
SELECT `toUpdate`.`guid` AS `old guid`,`updated`.`guid` AS `new guid`
FROM `wp_posts` AS `toUpdate`,
(SELECT `ID`,REPLACE(`guid`,'http://old.tld','http://new.tld') AS `guid`
FROM `wp_posts` WHERE `guid` LIKE 'http://old.tld%') AS `updated`
WHERE `toUpdate`.`ID`=`updated`.`ID`;
The best way is to export the database as an SQL file, open it in an editor such as Visual Studio Code, and find and replace your words there.
I replaced 16 different words (14,600 occurrences in total) in a 1 GB SQL file in about a minute.
After replacing, save the file and import it again.
Do not forget to compress it with zip for the import.
For text with mixed uppercase and lowercase letters, we can use BINARY REPLACE:
UPDATE `table_1` SET `field_1` = BINARY REPLACE(`field_1`, 'find_string', 'replace_string')
Here's an example of how to find and replace in a database:
UPDATE TABLE_NAME
SET COLUMN = replace(COLUMN,'domain.example', 'www.domain.example')
TABLE_NAME => change it to your table name
COLUMN => change it to your column; make sure it exists
I have good luck with this query when doing a search and replace in phpMyAdmin:
UPDATE tableName SET fieldName1 = 'foo' WHERE fieldName1 = 'bar';
Of course this only applies to one table at a time.
Generate change SQL queries (FAST)
mysql -e "SELECT CONCAT( 'update ', table_name , ' set ', column_name, ' = replace(', column_name,', ''www.oldsite.example'', ''www.newsite.example'');' ) AS statement FROM information_schema.columns WHERE table_name LIKE 'wp_%'" -u root -p your_db_name_here > upgrade_script.sql
Remove any garbage at the start of the file. I had some.
nano upgrade_script.sql
Run the generated script with the --force option to skip errors. (SLOW - grab a coffee if it's a big DB.)
mysql -u root -p your_db_name_here --force < upgrade_script.sql

Create table if not exists from mysqldump

I'm wondering if there is any way to get mysqldump to add the appropriate CREATE TABLE option [IF NOT EXISTS]. Any ideas?
Try using this on your SQL file:
sed 's/CREATE TABLE/CREATE TABLE IF NOT EXISTS/g' <file-path>
or, to save the changes in place:
sed -i 's/CREATE TABLE/CREATE TABLE IF NOT EXISTS/g' <file-path>
it's not ideal but it works :P
According to one source, mysqldump does not feature this option.
You could use the --force option when importing the dump file back, where MySQL will ignore the errors generated from attempts to create duplicate tables. However note that with this method, other errors would be ignored as well.
Otherwise, you can run your dump file through a script that would replace all occurrences of CREATE TABLE with CREATE TABLE IF NOT EXISTS.
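For the --force route, the import would look something like this (credentials and database name are placeholders):
mysql --force -u user -p mydatabase < dump.sql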
Using sed as described by @Pawel works well. Nevertheless, you might not like the idea of piping your data through more potential error sources than absolutely necessary. In this case, one may use two separate dumps (see the sketch after this list):
first dump containing table definitions (--no-data --skip-add-drop-table)
second dump with only data (--no-create-info --skip-add-drop-table)
There are some other things to take care of though (e.g. triggers). Check the manual for details.
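A minimal sketch of the two dumps, assuming a database named mydb (the sed then only touches the small schema dump):
mysqldump --no-data --skip-add-drop-table mydb | sed 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' > mydb-schema.sql
mysqldump --no-create-info --skip-add-drop-table mydb > mydb-data.sql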
Not what you might want, but with --add-drop-table every CREATE is prefixed with the according DROP TABLE statement.
Otherwise, I'd go for a simple search/replace (e.g., with sed).
The dump output is a combination of DROP and CREATE statements, so you must comment out the DROP statements and change the CREATE statements to form valid (logical) output:
mysqldump --no-data -u root <schema> | sed 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /'| sed 's/^DROP TABLE IF EXISTS /-- DROP TABLE IF EXISTS /' > <schema>.sql
Create a bash script with this...
Make sure you make it executable (chmod 0777 dump.sh).
dump.sh
#!/bin/bash
name=$HOSTNAME
name+="-"
name+=$(date +"%Y-%m-%d.%H:%M")
name+=".sql"
echo $name;
mysqldump --replace --skip-add-drop-table --skip-comments izon -p > "$name"
sed -i 's/CREATE TABLE/CREATE TABLE IF NOT EXISTS/g' "$name"
The sed will be much faster without the 'g' (global) flag at its end, e.g.:
mysqldump -e <database> | sed 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' > <database>.sql
To find and replace text on Windows 7 using PowerShell:
Open a command prompt and use the command below.
powershell -Command "(gc E:\map\map_2017.sql) -replace 'CREATE TABLE', 'CREATE TABLE IF NOT EXISTS' | Out-File E:\map\map_replaced.sql"
The first parameter is the file path.
The second parameter is the 'find' string.
The third parameter is the 'replace' string.
This command will create a new file with the replaced text.
To overwrite the original file instead, point Out-File at the same path; the parentheses around gc make PowerShell read the whole file before writing it back.