Updating the database with the data added to a backup after a crash - mysql

Our MySQL database application crashed. We had a backup and restored it about a week after the system crashed. Meanwhile we used a backup database application. How can I add the data from this week-long gap back into the database?
What would be the best way to do this?
EDIT.
The table structure is the same. There are a number of tables with foreign keys.
Essentially my question boils down to this:
Primary keys on the two servers look like this:
serv1: 123456---
serv2: 123---456
All these are foreign keys in the secondary table
I would like to merge the two, but whenever a primary key changes during the merge, the foreign keys that reference it in the secondary table must be updated to match when I move the corresponding data from the other tables.

If you now have a few versions of the database, which I guess you do, the best way is to synchronize the data between the online database and the database holding the missing data.
You can try with: http://www.red-gate.com/products/mysql/mysql-data-compare/
Or: http://www.devart.com/dbforge/sql/datacompare/
With MySQL Workbench you can compare the schemas:
http://dev.mysql.com/doc/workbench/en/wb-database-diff-report.html
But first of all, make a backup and try the comparisons on a test environment; it could be that the same ID exists in both locations, and you would need to decide how to resolve that for your database.
So put both databases (the live one and the latest one you have) on two test environments.
Synchronize them and check the differences.
Run the fix on test; if everything goes fine, then do the same on production.
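One common way to handle colliding IDs during the merge is to re-insert the backup rows with a fixed offset added to their primary keys, and add the same offset to the foreign keys that reference them. A minimal sketch, assuming both copies sit on one test server as live_db and backup_db, with hypothetical tables orders (id is the primary key) and order_items (order_id is the foreign key), and an offset safely above the highest id already on the live side:

-- Pick an offset larger than any id on the live server (1000000 is an assumption).
SET @offset := 1000000;

-- Copy the parent rows from the backup copy, shifting their primary keys.
INSERT INTO live_db.orders (id, customer_id, created_at)
SELECT id + @offset, customer_id, created_at
FROM backup_db.orders;

-- Copy the child rows, shifting the foreign key by the same offset
-- so the relationship is preserved.
INSERT INTO live_db.order_items (id, order_id, product_id, qty)
SELECT id + @offset, order_id + @offset, product_id, qty
FROM backup_db.order_items;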

Related

In MySQL, can I ask it to completely clone a database to a new database in the same instance? [duplicate]

We are wondering if we can run some command in such a way that our prod_addressdb can be cloned along with constraints, table schemas, and then all the data as well.
This would avoid any real data transfer and help us immensely in the short term. Is this even possible without doing a mysqldump that transfers everything to my slow machine and then an import, which is very slow from my machine?
And then later, a point-to-point copy like mysqlinstance.prod_addressdb -> mysqlstaginginstance.staging_addressdb would be super nice as well.
Is there a way to do any of this?
We are in Google Cloud and the export dumps a file with "create database prod_addressdb", so when we try to import, it fails to go into the staging_addressdb location :(. The exported file is in Cloud Storage and I don't know of a way to automatically find and replace all of the prod_addressdb references with staging_addressdb :(. I'm looking at this problem from many different angles, trying to create a pre-production testing setup: deploy prod to staging, upgrade, and test.
thanks,
Dean
There is no single SQL statement to clone multiple tables.
You can use CREATE TABLE <newschema>.<newtable> LIKE <oldschema>.<oldtable> but this doesn't include foreign keys.
You can use SHOW CREATE TABLE <oldtable> and then execute it in the new schema. That includes foreign keys.
You will find it's easier to load data into the new tables before you create foreign keys. Then run ALTER TABLE to add the foreign keys after you're done copying data.
Then you can copy data from an old table to a new table, one table at a time:
INSERT INTO <newschema>.<newtable> SELECT * FROM <oldschema>.<oldtable>;
Note this locks the data you're reading from the old table while it runs the copy. I'm not sure of the size of your data or how long this would take on your cloud instance. But it does achieve the goal of avoiding data transfer to the client and back to the server.
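Put together, the per-table flow could look like this sketch. The staging_addressdb and prod_addressdb names come from your question, but the customers table, its columns, and the foreign key are made-up placeholders, so substitute the real definitions from SHOW CREATE TABLE:

CREATE DATABASE IF NOT EXISTS staging_addressdb;

-- 1. Recreate the table structure (LIKE copies indexes but not foreign keys).
CREATE TABLE staging_addressdb.customers LIKE prod_addressdb.customers;

-- 2. Copy the data server-side, with no round trip through the client.
INSERT INTO staging_addressdb.customers
SELECT * FROM prod_addressdb.customers;

-- 3. Re-add the foreign keys once the data is loaded, using the definition
--    shown by SHOW CREATE TABLE prod_addressdb.customers.
ALTER TABLE staging_addressdb.customers
  ADD CONSTRAINT fk_customers_address
  FOREIGN KEY (address_id) REFERENCES staging_addressdb.addresses (id);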

mysqldump: how to fetch dependent rows

I'd like a snapshot of a live MySQL DB to work with on my development machine. The problem is that the DB is too large, so my thought was to execute:
mysqldump [connection-info-here] --no-autocommit --where="1 limit 1000" mydb > /dump.sql
I think this will give me the first thousand rows of every table in database mydb. I anticipate that the resulting dataset will break a lot of foreign key constraints since some records will be missing. As a result the application I mean to run on the dev machine will fail.
Is there a way to mysqldump a sample of the database while ensuring that all records dumped abide by key constraints? (for instance if a foreign key is dumped, the matching record in the foreign table will also be dumped).
If that isn't possible, how do you guys deal with this problem?
No, there's no option for mysqldump to dump only rows that match in foreign key relationships. You already know about the --where option, and that won't do it.
I've had the same task as you, to dump a subset of data but only data that is related. For example, for creating a test instance.
I've been using MySQL for many years, I've worked as a MySQL consultant and trainer, and I try to keep up with current tools. I have never heard of any MySQL tool that does this operation.
The only solution I can suggest is to write your own script to dump table by table using SELECT...INTO OUTFILE.
It's sometimes easier to write a custom script just for your specific schema, than for someone to write a general-purpose tool that works for everyone's schema.
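As a rough illustration of such a script (the table names, columns, and output directory are all assumptions, and the server needs permission to write there, see secure_file_priv), you could dump a sample of the parent table first and then only the child rows that reference it:

-- Take a sample of the parent table.
CREATE TEMPORARY TABLE sample_customers AS
SELECT * FROM customers LIMIT 1000;

SELECT * FROM sample_customers
INTO OUTFILE '/var/lib/mysql-files/customers.csv';

-- Dump only the child rows whose foreign key points at a sampled parent,
-- so the foreign key constraints still hold after reloading.
SELECT o.*
FROM orders o
JOIN sample_customers c ON o.customer_id = c.id
INTO OUTFILE '/var/lib/mysql-files/orders.csv';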
How I have dealt with this problem in the past is I don't copy data from the live database. I find some other way to create a subset of fake data for testing. It's probably better to create synthetic data anyway, because then you don't risk accidentally using live data in your dev/test environment, in case some of it is private data.

How to create alter table statement for a table?

I have two databases, one for production and another for staging.
Now I have made many changes in the staging database, and I want to give production the same structure as staging without dropping any table or losing any data.
I want to find a way to generate ALTER TABLE statements from the staging database and apply them to the production database. Note that the tables have many columns, so I don't want to do that manually.
You can use the SQLyog trial or SQLyog Community edition; with this tool you will be able to apply the changes to the production database using its Schema and Data Synchronization feature.
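If you prefer to see the differences yourself before generating the ALTER TABLE statements, a query against information_schema along these lines lists the columns that exist in staging but not yet in production (the schema names staging_db and production_db are placeholders, adjust them to yours):

SELECT s.TABLE_NAME, s.COLUMN_NAME, s.COLUMN_TYPE
FROM information_schema.COLUMNS s
LEFT JOIN information_schema.COLUMNS p
       ON p.TABLE_SCHEMA = 'production_db'
      AND p.TABLE_NAME   = s.TABLE_NAME
      AND p.COLUMN_NAME  = s.COLUMN_NAME
WHERE s.TABLE_SCHEMA = 'staging_db'
  AND p.COLUMN_NAME IS NULL;

Each row in the result is a candidate for an ALTER TABLE ... ADD COLUMN on production.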

mySQL combining data from 2 databases

The Problem.
I have a website (PHP/MySQL) which was recently switched over to a new server.
It's an e-commerce site. When it was switched over, the person who did the switch did not switch over the database for all the pages on the site, so I have some data that exists in both MySQL databases (the new and the old server), some data that exists on the old server but not on the new server, and vice versa.
I need to merge the data from the 2 databases into one database with all the data.
My solution:
I am thinking the best way to go about this is to write a PHP script that gets the data from the old server, checks whether the fields (other than the primary id) exist on the new server, and, if the record does not exist, inserts it into the table on the new server.
The structure is not so complex, but the orders table has a look-up field to the order details table (using the primary key of the orders table as the foreign key).
Any ideas on an easier quicker way to do this, is there something in phpmyadmin that can merge two databases?
Any suggestions much appreciated.
You could create another table using the federated storage engine on your new server.
http://dev.mysql.com/doc/refman/5.5/en/federated-usagenotes.html
Then you can access both databases within single SQL queries, assuming you have privileges that allow other hosts to connect to your old server.
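A rough sketch of that approach, with made-up table, column, and connection details (note that the FEDERATED engine is disabled by default, so it may need to be enabled in the server options): define a FEDERATED table on the new server that points at the old server's orders table, then copy across only the rows that are not already present, matching on fields other than the primary id as you describe.

-- On the new server: a local window onto the old server's table.
-- The column list must match the remote table's definition.
CREATE TABLE old_orders (
    id INT NOT NULL,
    customer_email VARCHAR(255),
    total DECIMAL(10,2),
    PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://remote_user:remote_pass@old-server-host:3306/shopdb/orders';

-- Insert rows from the old server that have no match on the new server.
INSERT INTO orders (customer_email, total)
SELECT o.customer_email, o.total
FROM old_orders o
WHERE NOT EXISTS (
    SELECT 1 FROM orders n
    WHERE n.customer_email = o.customer_email
      AND n.total = o.total
);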

Setting up a master database to control the structure of other databases

I got a case where I have several databases running on the same server. There's one database for each client (company1, company2 etc). The structure of each of these databases should be identical with the same tables etc, but the data contained in each db will be different.
What I want to do is keep a master db that will contain no data, but manage the structure of all the other databases, meaning if I add, remove or alter any tables in the master db the changes will also be mirrored out to the other databases.
Example: if a table named Table1 is created in the master DB, the other databases (company1, company2, etc.) will also get a Table1.
Currently it is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand this would also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I wish (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master schema name>';
Use this list in a script or stored procedure along these lines:
create database if not exists <company_db>;
use <company_db>;
-- for each table_name in the list:
create table if not exists <company_db>.<table_name> like <master_schema>.<table_name>;
Now that I'm thinking about it, you might want something that reacts automatically when tables are added to the master schema, but MySQL does not allow triggers on information_schema tables, so you would still need to run the create/maintain script on a schedule or after each change to the master schema.
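Here is a hedged sketch of the stored-procedure route: loop over the master schema's tables with a cursor and issue CREATE TABLE ... LIKE ... through a prepared statement. The names master_db and company1 are placeholders, and this only creates missing tables; ALTER and DROP changes would still need a separate comparison step.

DELIMITER //
CREATE PROCEDURE mirror_schema(IN master_schema VARCHAR(64), IN target_schema VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE tbl VARCHAR(64);
    DECLARE cur CURSOR FOR
        SELECT TABLE_NAME FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = master_schema AND TABLE_TYPE = 'BASE TABLE';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    table_loop: LOOP
        FETCH cur INTO tbl;
        IF done THEN
            LEAVE table_loop;
        END IF;
        -- Create the table in the client schema if it is missing; existing tables are left alone.
        SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', target_schema, '`.`', tbl,
                          '` LIKE `', master_schema, '`.`', tbl, '`');
        PREPARE stmt FROM @ddl;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END//
DELIMITER ;

-- One call per client database:
CALL mirror_schema('master_db', 'company1');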