Migration of MySQL database without losing records - mysql

I'm migrating a MySQL DB from one host to another, so I run the following command to back up the DB on the old hosting:
mysqldump -u **** -p **** | gzip > /home/***/***.sql.gz
And then use the following command to import the DB to the new host:
zcat /home/***/***.sql.gz | mysql -u *** -p ***
After successfully importing the DB, I point the domain to the new DNS.
The problem is that the website is live, so new records are very likely to be inserted after the last backup. That means I may need to run the commands once again after full DNS propagation.
So, my question: does the mysql command insert the new rows and update the existing ones, or does it drop the tables entirely and start over from the backup? If it's the latter, records inserted after DNS propagation might get lost!
Thanks

If you look at the output of mysqldump (before you gzip it) you will see that it contains a sequence of
DROP TABLE x;
CREATE TABLE x (...);
INSERT INTO x (...) VALUES (...);
So, no, it does not insert or replace rows; it drops and recreates the tables.
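You can verify this on your own dump without uncompressing it to disk (the path below is the placeholder from the question):
zcat /home/***/***.sql.gz | grep -m 5 'DROP TABLE'
If DROP TABLE lines show up, a second import will discard any rows inserted on the new host after the first import.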

Related

MySQL/Amazon RDS error on import

I'm attempting to dump all the databases from a 500 GB RDS instance into a smaller instance (100 GB). I have a lot of user permissions saved, so I need to dump the mysql database as well.
mysqldump -h hostname -u username -ppassword --all-databases > dump.sql
Now when I try to upload the data to my new instance I get the following error:
mysql -h hostname -u username -ppassword < dump.sql
ERROR 1044 (42000) at line 2245: Access denied for user 'staging'@'%' to database 'mysql'
I would just use a database snapshot to accomplish this, but my instance is smaller in size.
As a sanity check, I tried dumping the data into the original instance but got the same error. Can someone please advise on what I should do here? Thanks!
You may need to do the databases individually, or at least remove the mysql schema from the existing file (perhaps using grep to find the line numbers of the USE `database`; statements and then sed to trim out the troublesome section, or see below), and then generate a dump file that doesn't monkey with the table structures or the proprietary RDS triggers in the mysql schema.
I have not tried to restore the full mysql schema onto an RDS instance, but I can certainly see where it would go awry with the customizations in RDS and the lack of the SUPER privilege... but it seems like these options on mysqldump should get you close, at least:
mysqldump --no-create-info # don't try to drop and recreate the mysql schema tables
--skip-triggers # RDS has proprietary triggers in the mysql schema
--insert-ignore # write INSERT IGNORE statements to ignore duplicates
--databases mysql # only one database, "mysql"
--skip-lock-tables # don't generate statements to LOCK TABLES/UNLOCK TABLES during restore
--single-transaction # to avoid locking up the source instance during the dump
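Assembled into a single command (host name, user, and output file are placeholders):
mysqldump -h hostname -u username -p --no-create-info --skip-triggers --insert-ignore --databases mysql --skip-lock-tables --single-transaction > mysql_grants.sql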
If this is still too aggressive, then you will need to resort to dumping only the rows from the specific tables whose content you need to preserve ("user" and the other grant tables).
THERE IS NO WARRANTY on the following, but it's one from my collection. It's a one-liner that reads old_dumpfile.sql and writes new_dumpfile.sql, switching the output off when it sees a USE or CREATE DATABASE statement with `mysql` on the same line, and switching it back on the next time such a statement occurs without `mysql` in it. It will need to be modified if your dump file also has DROP DATABASE statements in it, or you could generate a new dump file with --skip-add-drop-database.
Running your existing dump file through this should essentially remove only the mysql schema from that file, allowing you to easily restore it manually, first, and then let the rest of the database data flow in more smoothly.
perl -pe 'if (/(^USE\s|^CREATE\sDATABASE.*\s)`mysql`/) { $x = 1; } elsif (/^USE\s`/ || /^CREATE\sDATABASE/) { $x = 0; }; $_ = "" if $x;' old_dumpfile.sql > new_dumpfile.sql
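Once new_dumpfile.sql exists, loading it is an ordinary restore (host and user are placeholders, as above):
mysql -h hostname -u username -p < new_dumpfile.sql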
I guess you can try MySQL Workbench. There is a migration function there: create the smaller instance (100 GB) first, then use the migration feature to migrate from the 500 GB instance to the 100 GB one and see if it works.
I have had too many access-denied issues with RDS MySQL, so running the command below on RDS is my way out:
GRANT ALL ON `%`.* TO '<type_the_username_here>'@'%';
I am not sure whether this will be helpful in your case, but it has always been a life saver for me.

MySQL: Create consistent dump of audit tables

I have set up a system whose purpose is to generate incremental dumps of our production data to our data warehouse. "Incremental" in this sense means that we can "synchronize" the production database with our data warehouse every minute or so without having to generate a full dump. Instead, we dump and insert only the new/changed data.
On our replication slave, I have set up a system where every relevant table of our production database has one INSERT trigger and one UPDATE trigger. These copy every row that is inserted or updated into an "audit table" in a different schema. The audit schema contains tables with the same structure as the production tables, but no indexes; by using those triggers, the audit tables will only contain the new or updated rows since the last export.
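(For illustration, one such trigger pair might look like the sketch below; the clicks table and its columns are invented names, not part of the setup described above.)
CREATE TRIGGER clicks_audit_ins AFTER INSERT ON clicks
FOR EACH ROW
    INSERT INTO audit.clicks (id, user_id, clicked_at)
    VALUES (NEW.id, NEW.user_id, NEW.clicked_at);

CREATE TRIGGER clicks_audit_upd AFTER UPDATE ON clicks
FOR EACH ROW
    INSERT INTO audit.clicks (id, user_id, clicked_at)
    VALUES (NEW.id, NEW.user_id, NEW.clicked_at);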
At the moment I'm using the mysql command line client to do the following for each of these audit tables:
LOCK TABLES `table` WRITE;
SELECT * FROM `table`;
DELETE FROM `table`;
UNLOCK TABLES;
I then pipe the output to some other scripts for further processing.
Now this works perfectly fine; however, it creates the problem that while the state of every individual table is consistent, the set of tables as a whole is not. For example, if I have a clicks table and an impressions table and there is a one-minute delay between dumping the former and the latter, the dump as a whole will obviously be in an inconsistent state.
Now my question is: How do I do the following:
Lock ALL tables
Generate dumps for ALL tables
Delete all data from ALL tables
Unlock tables
I cannot use the mysql command line client because I cannot keep the lock across different sessions, and each table requires a new command. Also, I checked mysqldump, which allows dumping multiple tables at a time, but I didn't find a way to delete all data from the tables before the locks are released.
Any ideas?
To perform the first two steps, the command could be this one:
mysqldump --lock-all-tables -u root -p DATABASENAME > nameofdumpfile.sql
Since it is not possible to perform steps 3 and 4 without releasing the lock, at least with the mysqldump utility, why not copy all the tables into another database (a backup db) and then export the dump file from it?
CREATE DATABASE backupdb;
USE originaldb;
FLUSH TABLES WITH READ LOCK;
Keep this prompt (Prompt 1) open, then clone the database from another command prompt (Prompt 2):
mysqldump --lock-all-tables -u root -p originaldb | mysql -u backup -ppassword backupdb
Drop the original database from Prompt 1, then recreate it empty (the restore needs an existing database to load into):
USE backupdb;
DROP DATABASE originaldb;
CREATE DATABASE originaldb;
Then restore the empty structure back under its original name (note the -d flag, which dumps structure only):
mysqldump --lock-all-tables -d -u root -p backupdb | mysql -u backup -ppassword originaldb
This is one example of a workaround you can apply to achieve what you need.

Rails - Archive tables into another database

We have some tables that hold a huge number of records and are not used often (e.g. user_activities), and we want the ability to archive (that is, move) records from a target table into an archive table in a separate database.
My question is: are there known solutions for that?
Additional explanation:
I'd like to have some kind of rake task that would trigger the archiving process. The process would go through tables marked as 'archived' (or whatever) and move outdated records to an archive table in a separate database.
Example: user_activities has 30,000 records. I mark the table as archived and set a cutoff by id: the last 2,000 records. I expect the following results:
user_activities contains only the latest 2,000 records
28,000 outdated records have been moved to the archived_user_activities table in my_super_cool_named_database
PS we use mysql2 adapter (if it helps)
Thank you!
There are dump and restore commands, shown below, that work with the entire database.
Dump the database:
mysqldump -u [uname] -p[pass] [dbname] > [backupfile.sql]
Use this method to rebuild a database from scratch:
$ mysql -u [username] -p[password] [database_to_restore] < [backupfile]
(Note there is no space after -p; alternatively, omit the password and let mysql prompt for it.)
Use mysqlimport to load tab- or comma-delimited text files into an existing database; it is a command-line wrapper around LOAD DATA INFILE, and the target table is taken from each text file's name:
$ mysqlimport [options] database textfile1
Note that mysqlimport does not read .sql dumps. To restore a previously created custback.sql dump into your existing 'Customers' MySQL database, use the mysql client again:
$ mysql -u sadmin -p Customers < custback.sql
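If you do have delimited text files to load, a hedged mysqlimport example (the path is illustrative; the target table name is derived from the file name, here orders):
$ mysqlimport -u sadmin -p --local Customers /tmp/orders.txt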
Although if you only want a specific part of the db, you can do something like this...
CREATE TABLE db2.table LIKE db1.table;
INSERT INTO db2.table SELECT * FROM db1.table;
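If the goal is the archiving flow from the question (keep the newest 2,000 rows of user_activities and move the rest), a sketch along the same lines, assuming both databases live on the same MySQL server and the archive table was first created with a CREATE TABLE ... LIKE statement as above:
SET @cutoff_id = (
    SELECT MIN(id) FROM (
        SELECT id FROM user_activities ORDER BY id DESC LIMIT 2000
    ) AS newest
);
-- move everything older than the cutoff, then remove it from the source
INSERT INTO my_super_cool_named_database.archived_user_activities
    SELECT * FROM user_activities WHERE id < @cutoff_id;
DELETE FROM user_activities WHERE id < @cutoff_id;
Wrapping the INSERT and DELETE in a transaction (InnoDB) avoids losing rows if the DELETE fails, and a rake task can simply execute this SQL for each archived table.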

Restoring selective tables from an entire database dump?

I have a MySQL dump created with mysqldump that holds all the tables in my database and all their data. However, I only want to restore two tables (let's call them kittens and kittens_votes).
How would I restore those two tables without restoring the entire database?
Well, you have three main options.
You can manually find the SQL statements in the file relating to the backed up tables and copy them manually. This has the advantage of being simple, but for large backups it's impractical.
Restore the backup to a temporary database. Basically, create a new db, restore the backup into it, and then copy the data from there to the old one. This will work well only if you're doing single-database backups (i.e. there is no CREATE DATABASE command in the backup file).
Restore the database to a new database server, and copy from there. This works well if you take full server backups as opposed to single database backups.
Which one you choose will depend upon the exact situation (including how much data you have)...
You can parse out the CREATE TABLE and INSERT INTO statements for kittens and kittens_votes, using a regexp for example, and only execute those statements. As far as I know, there's no other way to "partially restore" from a dump.
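For example, assuming the dump was produced by mysqldump with its default section comments, something like this extracts each table's statements into its own file (the range ends at the next table's header line, which is only a comment; your_database is a placeholder):
sed -n '/^-- Table structure for table `kittens`/,/^-- Table structure for table/p' dump.sql > kittens.sql
sed -n '/^-- Table structure for table `kittens_votes`/,/^-- Table structure for table/p' dump.sql > kittens_votes.sql
mysql -u username -p your_database < kittens.sql
mysql -u username -p your_database < kittens_votes.sql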
Open the .sql file and copy the insert statements for the tables you want.
Create a new user with access to only those 2 tables. Now restore the DB with the -f (force) option, which will skip the failed statements and execute only those it has permission to run.
What you want is a "Single Table Restore"
http://hashmysql.org/wiki/Single_table_restore
A few options are outlined above... However, the one which worked for me was:
Create a new DB
$ mysql -u root -p -e "CREATE DATABASE temp_db"
Insert the .sql file (the one with the desired table) into the new DB
$ mysql -u root -p temp_db < ~/full/path/to/your_database_file.sql
Dump the desired table
$ mysqldump -u root -p temp_db awesome_single_table > ~/awesome_single_table.sql
Import the desired table
$ mysql -u root -p original_database < ~/awesome_single_table.sql
Then delete the temp_db and you're all golden!

MySQL restore to restore structure and no data from a given backup (schema.sql)

Hi, I use MySQL Administrator and have restored backup files (backup.sql). I would like to restore the structure without the data, and it is not giving me an option to do so. I understand phpMyAdmin provides this, but I cannot use it. Can anyone tell me an easy way?
Dump database structure only:
cat backup.sql | grep -v ^INSERT | mysql -u $USER -p
This will execute everything in the backup.sql file except the INSERT lines that would have populated the tables. After running this you should have your full table structure along with any stored procedures / views / etc. that were in the original database, but your tables will all be empty.
You can change the ENGINE to BLACKHOLE in the dump using sed
cat backup.sql | sed -E 's/ENGINE=(MyISAM|InnoDB)/ENGINE=BLACKHOLE/g' > backup2.sql
This engine will just "swallow" the INSERT statements and the tables will remain empty. Of course, you must change the ENGINE back afterwards, for each table, using:
ALTER TABLE `mytable` ENGINE=MyISAM;
IIRC the backup.sql files (if created by mysqldump) are just SQL commands in a text file. Just copy and paste all the "CREATE ..." statements from the beginning of the file, but not the "INSERT" statements, into another file, then run mysql < newfile and you should have the empty database without any data in it.
There is no way to tell the mysql client to skip the INSERT commands. The least-hassle way to do this is to run the script as-is and let it load the data, then just TRUNCATE all of the tables.
You can write a script to do the following:
1. Import the dump into a new database.
2. Truncate all the tables with a loop.
3. Export the db again.
4. Now you just have the structure.
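For step 2, the TRUNCATE loop does not have to be written by hand. A sketch, assuming the dump was imported into a database named temp_db:
mysql -u root -p -N -e "SELECT CONCAT('TRUNCATE TABLE \`', table_name, '\`;') FROM information_schema.tables WHERE table_schema = 'temp_db'" > truncate_all.sql
mysql -u root -p temp_db < truncate_all.sql
The first command generates one TRUNCATE statement per table; the second executes them.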
You can back up your MySQL database structure with:
mysqldump -u username -p -d database_name > backup.sql
(You should not supply the password on the command line, as it is a security risk. MySQL will prompt for the password by default.)