The Problem.
I have a website that was recently moved to a new PHP/MySQL server.
It's an e-commerce site. Whoever did the move didn't switch over the database for all the pages on the site, so some data now exists in both MySQL databases (the new and the old server), some data exists on the old server but not the new one, and vice versa.
I need to merge the data from the 2 databases into one database with all the data.
My solution:
I am thinking the best way to go about this is to write a PHP script that gets the data from the old server, checks whether the record (matching on fields other than the primary id) already exists on the new server, and inserts it into the new table on the new server if it does not.
The structure is not so complex, but the orders table is linked to the order details table (the primary key of the orders table is the foreign key there).
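In SQL terms, the check-and-insert I have in mind would be something like this sketch (table and column names are placeholders for my real schema, and it assumes both databases are reachable from one connection; otherwise the PHP script would do the lookup and the insert as separate queries):

    INSERT INTO new_db.products (name, sku, price)
    SELECT o.name, o.sku, o.price
    FROM old_db.products AS o
    WHERE NOT EXISTS (
        SELECT 1
        FROM new_db.products AS n
        WHERE n.sku = o.sku  -- compare on fields other than the primary id
    );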
Any ideas on an easier, quicker way to do this? Is there something in phpMyAdmin that can merge two databases?
Any suggestions much appreciated.
You could create another table on your new server using the FEDERATED storage engine.
http://dev.mysql.com/doc/refman/5.5/en/federated-usagenotes.html
Then you can access both databases within a single SQL query.
This assumes you have privileges that allow other hosts to connect to your old server.
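For example, a minimal sketch (the column list and connection string are placeholders, and the columns must match the remote table exactly):

    CREATE TABLE old_orders (
        id INT NOT NULL AUTO_INCREMENT,
        customer_id INT NOT NULL,
        total DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (id)
    )
    ENGINE=FEDERATED
    CONNECTION='mysql://remote_user:remote_pass@old-server-host:3306/old_db/orders';

With a federated table in place, a deduplicating INSERT ... SELECT ... WHERE NOT EXISTS like the one sketched in the question runs entirely on the new server.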
We are wondering if we can run some command so that our prod_addressdb can be cloned along with its constraints and table schemas, and then all the data as well.
This would avoid any real data transfer and help us immensely in the short term. Is this even possible without doing a mysqldump that transfers everything to my slow machine and then an import, which is very slow from my machine?
Later, a point-to-point copy like mysqlinstance.prod_addressdb -> mysqlstaginginstance.staging_addressdb would be super nice as well.
Is there a way to do any of this?
We are in Google Cloud, and the export dumps a file containing "create database prod_addressdb", so when we try to import it, the data fails to go to the staging_addressdb location :(. The exported file is in Cloud Storage, and I don't know of a way to automatically find and replace every prod_addressdb with staging_addressdb :(. We are looking at this problem from many different angles, trying to create a pre-production testing setup: deploy prod to staging, upgrade, and test.
thanks,
Dean
There is no single SQL statement to clone multiple tables.
You can use CREATE TABLE <newschema>.<newtable> LIKE <oldschema>.<oldtable> but this doesn't include foreign keys.
You can use SHOW CREATE TABLE <oldtable> and then execute it in the new schema. That includes foreign keys.
You will find it's easier to load data into the new tables before you create foreign keys. Then run ALTER TABLE to add the foreign keys after you're done copying data.
Then you can copy data from an old table to a new table, one table at a time:
INSERT INTO <newschema>.<newtable> SELECT * FROM <oldschema>.<oldtable>;
Note this locks the data you're reading from the old table while it runs the copy. I'm not sure of the size of your data or how long this would take on your cloud instance. But it does achieve the goal of avoiding data transfer to the client and back to the server.
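Putting those steps together, a worked sketch (customer and address are hypothetical table names standing in for your real schema):

    -- clone each table's structure; LIKE copies indexes but not foreign keys
    CREATE TABLE staging_addressdb.customer LIKE prod_addressdb.customer;
    CREATE TABLE staging_addressdb.address LIKE prod_addressdb.address;

    -- copy the data before adding foreign keys
    INSERT INTO staging_addressdb.customer SELECT * FROM prod_addressdb.customer;
    INSERT INTO staging_addressdb.address SELECT * FROM prod_addressdb.address;

    -- add the foreign keys last
    ALTER TABLE staging_addressdb.address
        ADD CONSTRAINT fk_address_customer
        FOREIGN KEY (customer_id) REFERENCES staging_addressdb.customer (id);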
I want to query data from two different MySQL databases into a new MySQL database.
I have two databases with a lot of irrelevant data, and I want to create what amounts to a data warehouse where only the relevant data from the two databases is present.
As of now all data gets sent to the two old databases, but I would like scheduled updates so the new database stays up to date. There is a key linking the two databases, so ideally all data would end up in one table, though this is not crucial.
I have done similar work with Logstash and Elasticsearch, but I do not know how to do it with MySQL.
The best way to do that is to create an ETL process with Pentaho Data Integration or any other ETL tool. Your source will be the two different databases; in the transformation step you can apply any business logic, removing or adding data, and then load the result into the new database.
If you build this ETL you can schedule it once a day so that your database stays up to date.
If you want to do this without an ETL tool, the databases must be on the same host. Then you can just add the database name before the table name in the query, like SELECT * FROM database.table_name.
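For example, a minimal sketch (warehouse, db1, db2, and all table and column names are hypothetical), using MySQL's event scheduler for the daily refresh; the scheduler must be enabled, and dedup/refresh logic is omitted for brevity:

    CREATE EVENT warehouse.refresh_combined_orders
    ON SCHEDULE EVERY 1 DAY
    DO
        -- pull only the relevant columns from the two source databases
        INSERT INTO warehouse.combined_orders (order_id, customer_name, total)
        SELECT o.id, c.name, o.total
        FROM db1.orders AS o
        JOIN db2.customers AS c ON c.id = o.customer_id;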
Our MySQL database application crashed. We had a backup and restored it about a week after the crash; in the meantime we used a backup database application. How can I add the data from this week-long gap to the database?
What would be the best way to do this?
EDIT.
The table structure is the same. There are a number of tables with foreign keys.
Essentially my question boils down to this:
Primary keys on the two servers look like this:
serv1: 123456---
serv2: 123---456
All these are foreign keys in the secondary table
I would like to merge the two, but have all the primary keys in the second table reflected in the foreign key relationships when I move the corresponding data from the other tables.
If you have a few versions of the database, which I guess you do now, the best way is to synchronize the online database with the missing data.
You can try with: http://www.red-gate.com/products/mysql/mysql-data-compare/
Or: http://www.devart.com/dbforge/sql/datacompare/
With workbench you can compare the schemas:
http://dev.mysql.com/doc/workbench/en/wb-database-diff-report.html
But first make a backup and try the compares on a test environment; it could be that the same ID exists in both locations, and you would need to find the best solution for your database (one common fix is sketched below).
So put both databases (the live one and the latest one you have) on two test environments,
synchronize them and check the differences,
and run the fix on test; if everything goes fine, then do the same on production.
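If the same ID does turn out to exist in both databases, one common fix is to shift the second database's keys above the first one's range before merging, so the foreign key relationships stay intact. A sketch, assuming both databases have been restored onto one test server, with tables orders (primary key id) and order_details (foreign key order_id); all names here are assumptions about your structure:

    -- find the highest key already in use in the first database
    SET @offset = (SELECT MAX(id) FROM serv1_db.orders);

    -- shift the second database's primary keys and matching foreign keys
    INSERT INTO serv1_db.orders (id, customer_name, created_at)
    SELECT id + @offset, customer_name, created_at
    FROM serv2_db.orders;

    INSERT INTO serv1_db.order_details (order_id, product, quantity)
    SELECT order_id + @offset, product, quantity
    FROM serv2_db.order_details;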
We are handling a data aggregation project that combines several Microsoft SQL Server databases into one MySQL database. All the MSSQL databases have the same schema.
The requirements are:
each MSSQL database can be imported to MySQL independently
before importing each record to MySQL, we need to validate it against specific criteria via PHP
each imported MSSQL database can be rolled back; that is, even after it has been imported, all of that database's data can be removed from MySQL again
we would still like to know which MSSQL database each record imported into MySQL came from
the whole import process will be done with PHP
We have difficulty in many aspects and don't know the best approach to solve our problem.
Your help will be highly appreciated.
PS: each MSSQL database has around 60 tables, and each table can have a few hundred thousand records.
Don't use PHP as a database administration utility. Any time you build a quick PHP script to transfer records directly from one database to another, you're going to cause yourself a world of hurt when that script becomes required for production operation.
You have a number of problems that you need solved:
You have multiple MSSQL databases with similar if not identical tables.
You have a single MySQL database that you want to merge the data into.
The imported data must be altered in a specific way before being merged.
You want to prevent all duplicate records in your import.
You want to know what database each record originally came from.
The solution?
Analyze the source MSSQL databases and create a merge strategy for them.
Create a database structure on the MySQL database that fits the merge strategy in #1, including all the new key constraints (like unique and foreign keys) required for the consolidation (a sketch follows the options below).
At this point you have two options left:
Dump the data from each of the source databases to raw data files using your RDBMS administration utility of choice. Alter that data to fit your merge strategy and constraints, document the process, and then merge all of the data into your new database structure.
Use a tool like opendbcopy to map columns from one database to another and run a mass import.
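For the provenance and rollback requirements in particular, the consolidated structure could tag every row with an import batch. A minimal sketch (all names are hypothetical):

    -- one row per imported MSSQL database
    CREATE TABLE import_batch (
        id INT AUTO_INCREMENT PRIMARY KEY,
        source_db VARCHAR(64) NOT NULL,   -- which MSSQL database this came from
        imported_at DATETIME NOT NULL
    );

    -- every merged table carries the batch it arrived in
    CREATE TABLE customer (
        id INT AUTO_INCREMENT PRIMARY KEY,
        batch_id INT NOT NULL,
        source_pk INT NOT NULL,           -- the record's key in the source database
        name VARCHAR(255) NOT NULL,
        UNIQUE KEY uq_batch_source (batch_id, source_pk),
        FOREIGN KEY (batch_id) REFERENCES import_batch (id)
    );

    -- rolling back one imported database is then a targeted delete
    DELETE FROM customer WHERE batch_id = 42;
    DELETE FROM import_batch WHERE id = 42;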
Hope this helps.
I got a case where I have several databases running on the same server. There's one database for each client (company1, company2 etc). The structure of each of these databases should be identical with the same tables etc, but the data contained in each db will be different.
What I want to do is keep a master db that contains no data but manages the structure of all the other databases, meaning that if I add, remove, or alter any tables in the master db, the changes will also be mirrored out to the other databases.
Example: if a table named Table1 is created in the master db, the other databases (company1, company2, etc.) will also get a Table1.
Currently it is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I have looked into database replication, but from what I understand it would also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I wish (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master schema name>';
Use this list in a script or stored procedure along these lines:
    create database if not exists <name>;
    for each ( table_name in list )
        create table if not exists <name>.table_name like <master_schema>.table_name;
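As a runnable version of that loop, here is a stored procedure sketch; it assumes the master schema is literally named master_db, and since CREATE TABLE does not accept variables, the statement is built dynamically with PREPARE:

    DELIMITER //
    CREATE PROCEDURE clone_master_schema(IN target_schema VARCHAR(64))
    BEGIN
        DECLARE done INT DEFAULT 0;
        DECLARE tbl VARCHAR(64);
        DECLARE cur CURSOR FOR
            SELECT TABLE_NAME FROM information_schema.tables
            WHERE TABLE_SCHEMA = 'master_db';
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

        OPEN cur;
        clone_loop: LOOP
            FETCH cur INTO tbl;
            IF done THEN LEAVE clone_loop; END IF;
            -- build and run the DDL for each table in the master schema
            SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', target_schema,
                              '`.`', tbl, '` LIKE `master_db`.`', tbl, '`');
            PREPARE stmt FROM @ddl;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;
        END LOOP;
        CLOSE cur;
    END //
    DELIMITER ;

The target database still has to exist first (CREATE DATABASE IF NOT EXISTS company1;), and then CALL clone_master_schema('company1'); creates any tables it is missing.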
Now that I'm thinking about it, you might hope to put a trigger on information_schema.tables that would call the create/maintain script on inserts, but MySQL doesn't allow triggers on system tables, so polling the master schema for changes (or hooking the script into your deployment process) is the practical option.