Append .sql file to existing database - mysql

I need to export rows from one database (db1) and append them to another database (db2). Essentially it is a monthly archiving job: data from the operational database db1 is appended to the archive database db2.
I'm trying to use mysqldump to create a partial dump file from db1 and then import it, appending the data to db2, but I'm running into problems. I added --skip-add-drop-table to mysqldump so the import doesn't drop the target tables, then imported with
mysql -u user -p db2 < db1_partial_dump.sql
Also important: new tables appear in db1 from time to time. The import needs to append rows to existing tables and create any table that does not yet exist. To achieve that I replaced CREATE TABLE in the dump with
sed -i 's/CREATE TABLE/CREATE TABLE IF NOT EXISTS/g' db1_partial_dump.sql
However, I now get the error "ERROR 1062 (23000) at line 136: Duplicate entry '809' for key 'PRIMARY'".
I don't know how to proceed from here, or whether this kind of appending is possible at all. Please advise.
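One way past the duplicate-key error, sketched here under the question's names (db1, db2, user): mysqldump's --insert-ignore option writes INSERT IGNORE statements, so rows whose primary key already exists in db2 are skipped instead of aborting with ERROR 1062. If re-dumping is not possible, an existing dump file can be patched with sed to the same effect (the sample lines below stand in for the real dump):

```shell
# Preferred: have mysqldump emit appendable SQL directly:
#   mysqldump --skip-add-drop-table --insert-ignore -u user -p db1 > db1_partial_dump.sql
#   mysql -u user -p db2 < db1_partial_dump.sql
#
# Fallback: patch an existing dump. Tiny sample standing in for the real file:
printf '%s\n' \
  'CREATE TABLE `t1` (id INT PRIMARY KEY);' \
  'INSERT INTO `t1` VALUES (809);' > db1_partial_dump.sql

# CREATE TABLE -> CREATE TABLE IF NOT EXISTS (new tables get created, existing
# ones are left alone); INSERT INTO -> INSERT IGNORE INTO (duplicate keys are
# skipped instead of raising ERROR 1062):
sed -e 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' \
    -e 's/^INSERT INTO /INSERT IGNORE INTO /' \
    db1_partial_dump.sql > db1_append.sql
cat db1_append.sql
```

Note that INSERT IGNORE silently keeps the archive's existing row when keys collide; if the archive should instead take db1's newer version of a row, mysqldump's --replace option (REPLACE INTO) is the alternative.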

Related

MySQL command line: does the import command override the current database?

If I use command
mysql -u root -pdb_pass testdb < database.sql
I was wondering: if the database already exists and has data, does importing override it? Or must I DROP DATABASE (and recreate it) before importing the SQL? For example: I have database_one (the origin) and database_two (a copy of database_one). I want to update database_two, but on database_one I have created and edited some tables, indexes/foreign keys, and columns. If a table already exists on database_two, will the import create the new column (or FK, ...) that I added on database_one?
That depends on the SQL inside database.sql.
If it contains DROP DATABASE statements, the import will drop the database.
I assume database.sql was created by mysqldump; by default a mysqldump file does not include DROP DATABASE.
But just in case, you can open the file in an editor and check.
Tables follow the same logic as the database, except that mysqldump does include DROP TABLE statements by default, so existing tables will typically be dropped and recreated.
A database.sql produced by mysqldump is just plain SQL; there is no mystery in it.
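A quick non-interactive way to do that check, assuming the dump is plain SQL text (the printf lines below stand in for a real database.sql):

```shell
# Sample lines standing in for a real mysqldump output file:
printf '%s\n' \
  'DROP TABLE IF EXISTS `users`;' \
  'CREATE TABLE `users` (id INT);' > database.sql

# List every statement that would drop something, with its line number:
grep -n -E '^DROP (DATABASE|TABLE)' database.sql
```

If the command prints nothing, the import will only execute whatever CREATE/INSERT statements the file contains against the target database.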

ignore create table from sql dump

I have many SQL dump files for the same table, each with different data. I saved them, and the source database was dropped long ago. I have to restore from these dumps, but each dump also contains a CREATE TABLE command, which fails once the table has been created. Is there a way to ignore the CREATE TABLE command when restoring the dumps?
Apparently I can do something like following:
cat /tmp/sample_entity1_490_387 | sed "s/DROP/-- DROP/" | sed "s/CREATE TABLE /CREATE TABLE IF NOT EXISTS /" | mysql -uroot -p entity_restored

How to skip already created tables while importing mysql dump by command line

Is there any way to skip already created tables while importing? I am trying to import a 2 GB database from the command prompt, but the operation was aborted by mistake. If I run the import again, it will drop and re-create every table, which will take a very long time.
Can I skip the tables that were already created, or resume from where the import was aborted? I am using this command
mysql -u root -p my_database_name < db_dump.sql
Run your dump through a filter which replaces each 'CREATE TABLE' with 'CREATE TABLE IF NOT EXISTS', like in
cat db_dump.sql | sed "s/CREATE TABLE /CREATE TABLE IF NOT EXISTS /g" | mysql -uroot -p my_database_name
Or edit db_dump.sql and search/replace interactively.
Rename the table in the dump to a new table name.
Or add the IGNORE keyword to the inserts, for example INSERT IGNORE INTO table ...
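Both of those suggestions can be applied to the dump file with sed. A minimal sketch, where db_dump.sql and the table name `orders` are placeholders (the printf lines fake the dump):

```shell
# Placeholder dump file for illustration:
printf '%s\n' \
  'CREATE TABLE `orders` (id INT PRIMARY KEY);' \
  'INSERT INTO `orders` VALUES (1);' > db_dump.sql

# Option 1: rename the table everywhere, so the import cannot collide
# with the existing `orders` table:
sed 's/`orders`/`orders_restored`/g' db_dump.sql > renamed.sql

# Option 2: keep the name but skip rows whose key already exists:
sed 's/^INSERT INTO /INSERT IGNORE INTO /' db_dump.sql > ignored.sql
```

Option 1 restores into a fresh table you can then compare or merge by hand; option 2 merges in place but silently discards duplicate-key rows.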

MySQL: Create consistent dump of audit tables

I have set up a system whose purpose it is to generate incremental dumps of our production data to our data warehouse. "Incremental" in this sense means that we can "synchronize" the production database with our data warehouse every minute or so without having to generate a full dump. Instead, we are just dumping and inserting the new/changed data.
On our replication slave, I have set up a system where every relevant table of our production database has one INSERT trigger and one UPDATE trigger. These copy every row that is inserted or updated into an "audit table" in a different schema. The audit schema contains tables with the same structure as the production tables, but no indexes; thanks to the triggers, the audit tables only contain the rows that are new or updated since the last export.
At the moment I'm using the mysql command line client to do the following for each of these audit tables:
LOCK TABLES table WRITE;
SELECT * FROM table;
DELETE FROM table;
UNLOCK TABLES;
I then pipe the output to some other scripts for further processing.
Now this works perfectly fine; however, while the state of each individual table is consistent, the set of tables as a whole is not. For example, if I have a clicks table and an impressions table and there is a one-minute delay between dumping the former and the latter, the dump as a whole is obviously in an inconsistent state.
Now my question is: How do I do the following:
Lock ALL tables
Generate dumps for ALL tables
Delete all data from ALL tables
Unlock tables
I cannot use the mysql command line client because I cannot keep the lock across different sessions, and each table requires a new command. Also, I checked mysqldump which allows dumping multiple tables at a time, but I didn't find a way to delete all data from the tables before releasing the locks.
Any ideas?
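One workaround for the "cannot keep the lock across sessions" constraint: generate all four steps as a single SQL script and feed it through one mysql invocation, which is one session, so the WRITE locks are held across the whole SELECT/DELETE sequence. A sketch, where the table names (clicks, impressions) come from the question and the schema name and credentials are placeholders:

```shell
# Emit LOCK / SELECT / DELETE / UNLOCK for any list of tables.
# Note: a single LOCK TABLES statement must name every table at once,
# because issuing a second LOCK TABLES implicitly releases the first locks.
gen_flush_sql() {
  sep=' '
  printf 'LOCK TABLES'
  for t in "$@"; do printf '%s`%s` WRITE' "$sep" "$t"; sep=', '; done
  printf ';\n'
  for t in "$@"; do
    printf 'SELECT * FROM `%s`;\n' "$t"
    printf 'DELETE FROM `%s`;\n' "$t"
  done
  printf 'UNLOCK TABLES;\n'
}

# usage (sketch; schema/user names are placeholders):
#   gen_flush_sql clicks impressions | mysql --batch -u user -p audit_schema
gen_flush_sql clicks impressions
```

The --batch output of the SELECTs can then be piped into the existing processing scripts, and because the DELETEs run under the same locks, no row inserted during the export can be lost.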
To perform the first two steps, the command could be this one:
mysqldump --lock-all-tables -u root -p DATABASENAME > nameofdumpfile.sql
Since it is not possible to perform steps 3 and 4 without releasing the lock, at least with the mysqldump utility, why not copy all the tables into another database (a backup db) and then export the dump file from that?
CREATE DATABASE backupdb;
USE originaldb;
FLUSH TABLES WITH READ LOCK;
Keep this prompt (Prompt 1) open, then clone the database from a second command prompt (Prompt 2):
mysqldump --lock-all-tables -u root -p originaldb | mysql -u backup -ppassword backupdb
Drop the original database from Prompt 1:
USE backupdb;
DROP DATABASE originaldb;
Then restore the empty database back under its original name (note the -d flag, which dumps the table structure without any data):
mysqldump --lock-all-tables -d -u root -p backupdb | mysql -u backup -ppassword originaldb
This could be an example of a workaround that you can apply to achieve what you need.

Restoring selective tables from an entire database dump?

I have a MySQL dump created with mysqldump that holds all the tables in my database and all their data. However, I only want to restore two tables (let's call them kittens and kittens_votes).
How would I restore those two tables without restoring the entire database?
Well, you have three main options.
You can manually find the SQL statements in the file relating to the backed up tables and copy them manually. This has the advantage of being simple, but for large backups it's impractical.
Restore the database to a temporary database. Basically, create a new db, restore it to that db, and then copy the data from there to the old one. This will work well only if you're doing single database backups (If there's no CREATE DATABASE command(s) in the backup file).
Restore the database to a new database server, and copy from there. This works well if you take full server backups as opposed to single database backups.
Which one you choose will depend upon the exact situation (including how much data you have)...
You can parse out the CREATE TABLE kittens/kittens_votes and INSERT INTO ... statements using a regexp, for example, and execute only those. As far as I know, there is no other way to "partially restore" from a dump.
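One way to sketch that regexp approach is to key off the section comments mysqldump writes before each table ("-- Table structure for table \`name\`"). The heredoc below stands in for the full dump file; note the dump's leading SET statements would need to be carried over separately:

```shell
# Fake dump standing in for the real full_dump.sql:
cat > full_dump.sql <<'EOF'
-- Table structure for table `dogs`
CREATE TABLE `dogs` (id INT);
-- Table structure for table `kittens`
CREATE TABLE `kittens` (id INT);
-- Table structure for table `kittens_votes`
INSERT INTO `kittens_votes` VALUES (1);
EOF

# At every table header, turn printing on only if it names a wanted table;
# then print lines while the flag is set:
awk '/^-- Table structure for table `/ { p = ($0 ~ /`(kittens|kittens_votes)`/) } p' \
    full_dump.sql > two_tables.sql
```

two_tables.sql then contains only the kittens and kittens_votes sections and can be fed to mysql as usual.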
Open the .sql file and copy the insert statements for the tables you want.
Create a new user with access to only those two tables, then restore the DB with the -f (force) option, which skips the failed statements and executes only those the user has permission for.
What you want is a "Single Table Restore"
http://hashmysql.org/wiki/Single_table_restore
A few options are outlined above ... However the one which worked for me was:
Create a new DB
$ mysql -u root -p -e "CREATE DATABASE temp_db"
Import the .sql file (the one with the desired table) into the new DB
$ mysql -u root -p temp_db < ~/full/path/to/your_database_file.sql
dump the desired table
$ mysqldump -u root -p temp_db awesome_single_table > ~/awesome_single_table.sql
import desired table
$ mysql -u root -p original_database < ~/awesome_single_table.sql
Then delete the temp_db and you're all golden!