Transfer MySQL from development to production - mysql

I need to sync the development MySQL db with the production one.
Production db gets updated by user clicks and other data generated via web.
Development db gets updated with processing data.
What's the best practice to accomplish this?
I found some diff tools (e.g. MySQL diff), but they don't handle updated records.
I also found an application-level solution: http://www.isocra.com/2004/10/dumptosql/
but I'm not sure it's good practice, as in that case I would need to retest my code each time I add new InnoDB-related tables.
Any ideas?

Take a look at mysqldump. It may serve you well enough for this.
Assuming your tables are all indexed with some sort of unique key, you could do a dump and have it leave out the 'drop/create table' bits. Have it run as 'insert ignore' and you'll get the new data without affecting the existing data.
Another option would be to use mysqldump's --where option to dump only the new records from the production side. Again, have mysqldump leave off the 'drop/create' bits.
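For illustration, here is a minimal mysqldump sketch of both suggestions; the database, table, and host names and the id cutoff are placeholders, not anything from the question:
# dump with no CREATE/DROP statements and with INSERT IGNORE, so rows whose
# unique key already exists in the target are skipped on import
mysqldump --no-create-info --insert-ignore source_db | mysql target_db
# or dump only the new records from the production side, selected with --where
mysqldump --no-create-info --insert-ignore --where="id > 100000" production_db clicks | mysql -h devhost development_db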

Related

Duplicate a whole database on the same server?

We are running a service where we have to set up a new database for each new site. The databases are exactly the same, so we can simply dump from a backup file or clone a sample database (created only for cloning purposes; no transactions are run there, so there's no worry about corrupting data) on the same server. The database itself contains around 100 tables with some data, and takes around 1-2 minutes to import, which is too slow.
I'm trying to find a way to do it as fast as possible. The first thought that came to mind was to copy the files within the sample database's data_dir, but it seems I also need to somehow edit the table lists, or MySQL won't be able to read my new database's tables even though it still shows them there.
You're duplicating the database the wrong way; it will be much faster if you do it properly.
Here is how you duplicate a database:
create database new_database;
create table new_database.table_one select * from source_database.table_one;
create table new_database.table_two select * from source_database.table_two;
create table new_database.table_three select * from source_database.table_three;
...
I just did a performance test: this takes 81 seconds to duplicate 750MB of data across 7 million rows. Presumably your database is smaller than that?
I don't think you are going to find anything faster. One thing you could do is keep a queue of duplicate databases on standby, ready to be picked up and used at any time. Then you don't need to create a new database at all; you just rename an existing database from the queue of available ones, and have a cron job running to make sure the queue never runs empty.
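For what it's worth, a small shell sketch of the same per-table copy, looped over all tables (it assumes the mysql client can connect without prompting for credentials):
mysql -e "CREATE DATABASE new_database;"
for t in $(mysql -N -e "SHOW TABLES IN source_database"); do
  mysql -e "CREATE TABLE new_database.\`$t\` SELECT * FROM source_database.\`$t\`;"
done
# note: CREATE TABLE ... SELECT copies the data and column definitions,
# but not indexes or foreign keys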
Why is MySQL not able to read them, and what did you change in the table lists? I think there may be a permissions problem with MySQL reading the files; otherwise it should be fine.
Thanks

Automatic MySQL backup on shared host

As I am not a coder I should probably stop here, but I am interested in how far I could go building an automatic SQL script.
Case:
A website is hosted on a shared host server which uses cPanel. The website uses only one DB, and one of the tables is the log. Now the log table has reached 300k rows... (I might be doing something wrong here... or is it just a popular website? :))
So I need to reduce the log table, but I would rather do a backup first. So here is my idea:
Set up a backup DB and copy the old entries there; meanwhile, use tables covering only quarter years, so the log from Jan-April would be stored in table_2012-q1, etc.
Method:
I would like to use cron and email alert.
Questions:
Is there any better or easier solution for doing the backup with this number of rows?
If I "move rows" by INSERT/DELETE, how can I check which rows have already been copied before I delete them?
Do I need to worry about the performance of this process, since it should work in the background? In other words, should it be a SELECT or a dump?
Sorry if this is too basic, but I would like to learn! I also don't want to use too much processor time for this.
Thanks, Andras
Since you are using shared hosting, I'm pretty sure you will not be able to access cron, so here is an alternative:
Since the database is filled with log data:
1. Create a new table, regardless of the name or time period
2. Move the rows (from a certain id) from one table to the other (see the sketch below)
This link will explain it better: mysqldump partial database
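A rough sketch of steps 1-2, assuming the log table is called log, the archive table is log_2012_q1, and 500000 is the id cutoff (all of these are placeholders):
mysql mydb -e "CREATE TABLE log_2012_q1 LIKE log;"
mysql mydb -e "INSERT INTO log_2012_q1 SELECT * FROM log WHERE id <= 500000;"
mysql mydb -e "DELETE FROM log WHERE id <= 500000;"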
If this is an active DB, I would clone it and then play around with the ways the data can be moved, since you do not consider yourself a coder.
Hope this helps.

Reducing priority of MySQL commands/jobs (add an index/other commands)?

We have a moderately large production MySQL database. Periodically, we will run commands, usually via a rails migration, that while running, bog down the database. As a specific example, we might add an index to a large table.
Is there any method that can reduce the priority MySQL gives to a specific task? A sort of "nice" within MySQL itself? I found this, which is what inspired the question:
PostgreSQL tips and tricks
Since adding an index causes the work to be done within the DB and MySQL process, lowering the priority of the Rails migration process seems like it won't help. Are there other ways we can lower the priority?
We use multiple, replicated database servers to make changes like this.
In our case, db1 is the master, replicated to db2. (db1->db2).
Start by making the change to db2. If things lock, replication will stall, but that's OK.
Move your traffic to db2. Any remnant traffic going to db1 will replicate over, so you won't lose anything.
Once there's no traffic on db1, rebuild it as a slave of db2 (db2->db1).
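A rough sketch of those steps from the command line; the db1/db2 host names are from above, but the database, table, column, replication user, and binlog coordinates are all placeholders:
# 1. make the change on db2 (the replica) first; replication may stall, that's OK
mysql -h db2 mydb -e "ALTER TABLE big_table ADD INDEX idx_user_id (user_id);"
# 2. after traffic has moved to db2 and db1 has drained, rebuild db1 from db2;
#    --master-data=2 records db2's binlog coordinates as a comment in the dump
mysqldump -h db2 --single-transaction --master-data=2 --databases mydb > db2.sql
mysql -h db1 < db2.sql
# 3. point db1 at db2, using the coordinates from the CHANGE MASTER comment in db2.sql
mysql -h db1 -e "CHANGE MASTER TO MASTER_HOST='db2', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4; START SLAVE;"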
That's the general idea and you get very little downtime and you don't have to pull an all-nighter! We actually have three servers, so it's a little more complicated, but not much.
Good luck.
Unfortunately, there is no simple way to do this: commands that alter the database structure don't have a priority option.
If your tables are MyISAM, you can try this:
mysqlhotcopy to make a backup of the table
import that backup into a different database server (one that's not under load)
make the changes there
make a mysqlhotcopy backup of the altered table
import it into the live server
Note that this may or may not be faster than adding the index on the live server, depending on the time it takes you to transfer the table back and forth.
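Roughly, those steps as shell commands; the database, table, paths, and the second host are assumptions, and the exact mysqlhotcopy invocation should be checked against its manual page:
# copy the MyISAM table files out of the live server
mysqlhotcopy mydb./big_table/ /backups/
# ship them to an idle server, let it pick them up, and add the index there
scp /backups/mydb/big_table.* otherhost:/var/lib/mysql/mydb/
ssh otherhost 'mysql -e "FLUSH TABLES;"'
ssh otherhost 'mysql mydb -e "ALTER TABLE big_table ADD INDEX idx_user_id (user_id);"'
# then mysqlhotcopy the altered table on otherhost, copy the files back,
# and FLUSH TABLES on the live server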

How to update DB structure when updating production system without doing a teardown / rebuild

If I'm working on a development server and have updates to the database structure for some of our releases, what is the best way to update the structure on the production server?
Currently we create a new production database containing the structure only, do a SQL dump of the data on the 'old' production database, then run a SQL query to insert the data into the new database.
I know there is an easier way to do these updates, right?
Thanks in advance.
We don't run anything on prod without a script, and that script must be in source control. Additionally, we have to write a rollback script in case the initial script goes bad and we have to back it out. When we move to prod, configuration management does a differential compare between prod and dev to see if we have missed anything in the production script (any differences have to be traceable to development work that is not yet ready to move to prod, and they have to be documented). A product like Red Gate's SQL Compare can do this. Our process is very formalized so that we can maintain a certification required by our larger clients.
If you have large tables, even ALTER TABLE can be slow, but it's still generally more efficient in total time than making a copy of the table with a new name and structure, copying the data to that table, renaming the old table, giving the new table the name of the original table, and then deleting the old table.
However, there are times when that is the preferable process, as the total downtime apparent to the user is only the time it takes to rename two tables. This is good for tables that are filled only from the backend, not from the application (if the application can update the tables, this is a dangerous practice, as you may lose changes made while the tables were in transition).
Which process to use depends largely on the nature of the change you are making. Some changes should be done in a maintenance window where users are not allowed to access the database. For instance, if you are adding a new field with a default value to a table with 100,000,000 records, you are liable to lock users out of the table while the update happens. It is better to do this in single-user mode during off hours (and when users are told in advance that the database will not be available). Other changes take only milliseconds and can happen easily while users are logged in.
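As a sketch of that copy-and-rename approach (the database, table, and new column here are invented for illustration):
mysql mydb <<'SQL'
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN status TINYINT NOT NULL DEFAULT 0;
INSERT INTO orders_new SELECT *, 0 FROM orders;
-- the swap is a single atomic statement, so this rename is the only downtime users see
RENAME TABLE orders TO orders_old, orders_new TO orders;
DROP TABLE orders_old;
SQL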
Look at ALTER TABLE to change the schema.
It might not be easier than your method, but it means less copying of the database.
This is actually quite a deep question. If the only changes you've made are to add some columns then ALTER TABLE is probably sufficient. But if you're renaming or deleting columns then ALTER statements may break various foreign key constraints. In addition, sometimes you need to make changes both to the database and the data, which is pretty much unscriptable.
Most likely the best way to automate this would be to write a simple script for each deployment (along with a script to roll back!) which is basically what some systems like Rails will do for you I believe. Some scripts might be simply ALTER statements, some might temporarily disable foreign-key checking and triggers etc, some might run some update statements as well. And some might be dumping the db and rebuilding it. I don't think there's a one-size-fits-all solution here, sorry :)
Use the ALTER TABLE command: http://dev.mysql.com/doc/refman/5.0/en/alter-table.html
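For the simple cases this is a one-liner (the database, table, and column names here are just placeholders):
mysql mydb -e "ALTER TABLE orders ADD COLUMN status TINYINT NOT NULL DEFAULT 0;"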

Best way to archive live MySQL database

We have a live MySQL database that is 99% INSERTs, around 100 per second. We want to archive the data each day so that we can run queries on it without affecting the main, live database. In addition, once the archive is completed, we want to clear the live database.
What is the best way to do this without (if possible) locking INSERTs? We use INSERT DELAYED for the queries.
http://www.maatkit.org/ has mk-archiver
archives or purges rows from a table to another table and/or a file. It is designed to efficiently “nibble” data in very small chunks without interfering with critical online transaction processing (OLTP) queries. It accomplishes this with a non-backtracking query plan that keeps its place in the table from query to query, so each subsequent query does very little work to find more archivable rows.
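An invocation would look roughly like the following; the exact option names should be checked against the mk-archiver documentation linked below, and the hosts, databases, tables, and WHERE clause are placeholders:
mk-archiver --source h=localhost,D=livedb,t=events \
            --dest h=archivehost,D=archivedb,t=events \
            --where "created_at < CURDATE()" --limit 1000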
Another alternative is to simply create a new database table each day. MyISAM does have some advantages for this, since INSERTs to the end of the table don't generally block anyway, and there is a MERGE table type to bring them all back together. A number of websites log their httpd traffic to tables like that.
With MySQL 5.1, there are also partitioned tables that can do much the same.
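A sketch of such a range-partitioned log table under MySQL 5.1 (the database, table, and column names are assumptions); clearing a day from the live table then becomes a cheap DROP PARTITION instead of a large DELETE:
mysql livedb <<'SQL'
CREATE TABLE events (
  id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  created DATE NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, created)
)
PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p20120101 VALUES LESS THAN (TO_DAYS('2012-01-02')),
  PARTITION p20120102 VALUES LESS THAN (TO_DAYS('2012-01-03')),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);
SQL
mysql livedb -e "ALTER TABLE events DROP PARTITION p20120101;"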
I use MySQL partitioned tables and I've achieved wonderful results in all aspects.
Sounds like replication is the best solution for this. After the initial sync the slave gets updates via the Binary Log, thus not affecting the master DB at all.
More on replication.
mk-archiver is an elegant tool to archive MySQL data.
http://www.maatkit.org/doc/mk-archiver.html
MySQL replication would work perfectly for this.
Master -> the live server.
Slave -> a different server on the same network.
Could you keep two mirrored databases around? Write to one and keep the second as an archive. Switch every, say, 24 hours (or however long you deem appropriate). Into the database that was the archive, insert all of today's activity. Then the two databases should match. Use this as the new live db. Take the archived database and do whatever you want with it. You can back up/extract/read all you want now that it's not being actively written to.
It's kind of like having mirrored RAID where you can take one drive offline for backup, resync it, then take the other drive out for backup.