How to copy data from one table to an identical table in another database? - mysql

I have a master database and several child databases on the same server, and all the databases have identical tables. I have to copy data from the master to the child databases, but each child database gets a different subset of the data.
Right now I'm selecting, comparing, and inserting/deleting the data with PHP. That worked fine with only 2-3 child databases, but as the number of child databases grows, the copying is getting slower.
I also tried replicating the tables with the following queries. It worked, but I later realized that the child DBs don't need all of the master data, only specific rows.
TRUNCATE child_1.papers;
INSERT INTO child_1.papers SELECT * FROM master_db.papers;
The above copies everything (optionally filtered with a WHERE condition), but after going through the requirements I realized I have to do the following:
I also tried replacing INSERT with UPDATE, but that causes a MySQL error.
Copy anything that may have been updated in the master to the child (UPDATE only).
Copy any new data that needs to go into the child.
How can I achieve that?
Thanks in advance.

I worked it out using the following:
In case you want to copy all matching data from the source table into the target table:
INSERT INTO target_db.target_table SELECT * FROM source_db.source_table WHERE some_condition = '0';
And if you want to update the target table from the source table:
UPDATE target_db.target_table
INNER JOIN source_db.source_table USING (some_field) SET
target_db.target_table.id = source_db.source_table.id,
target_db.target_table.name = source_db.source_table.name,
target_db.target_table.phone = source_db.source_table.phone;
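Both requirements (inserting new rows and updating changed ones) can often be combined in one statement with INSERT ... ON DUPLICATE KEY UPDATE, provided the table has a primary or unique key. A minimal sketch, assuming papers has an id primary key and that name, phone, and some_condition are placeholder columns:
INSERT INTO child_1.papers (id, name, phone)
SELECT id, name, phone
FROM master_db.papers
WHERE some_condition = '0'
ON DUPLICATE KEY UPDATE
    name  = VALUES(name),   -- existing rows get the master's current values
    phone = VALUES(phone);  -- rows that don't exist yet are inserted as usual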
Hope this helps anyone looking to do a similar task.

Related

How to properly wipe a database, and re-import?

I am unsure about the best way to do this. As I'm getting ready to put a new database into production, I need to import the data that has accumulated in the old database while I was working on the new one. The new database also contains a lot of fake data that was used for testing, which I have to get rid of, so a fresh, complete re-import seems reasonable.
Truncating all the tables in the new database won't go through, because the foreign keys prevent it. Simply deleting the data instead would solve that problem, but it leaves the AUTO_INCREMENT counters at their old values, so it's not a "proper" wipe. There could be more leftover properties like that one, but it's the only one I'm aware of.
So my question is: how much of a problem could these leftovers pose to performance if I went with the simple DELETE solution?
And is there a more thorough way to clean everything out while still keeping the defined constraints?
First I would use a GUI tool to create a dump of the old DB (MySQL Workbench, or whatever you prefer). Choose "Export to self-contained file", and check "Dump stored procedures and functions", "Dump events" and "Dump triggers".
Then get the CREATE scripts for all tables that are not in the old DB.
You can do this via the "reverse engineer" option.
If you have trouble with this part, this post will help.
How to get a table creation script in MySQL Workbench?
Once you have the old DB dump and the CREATE scripts for the new tables, combine them into a single SQL file.
On the first row add:
SET FOREIGN_KEY_CHECKS = 0;
On the last row add:
SET FOREIGN_KEY_CHECKS = 1;
Run the script. As a result you should have all the tables (new ones without data, old ones with data), with all relations set properly. Hope it works for you.
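For illustration, the combined file would end up looking roughly like this (the invoices table is a hypothetical placeholder, not from the question):
SET FOREIGN_KEY_CHECKS = 0;

-- CREATE scripts for tables that exist only in the new DB
CREATE TABLE IF NOT EXISTS invoices (
  id INT AUTO_INCREMENT PRIMARY KEY,
  customer_id INT NOT NULL,
  total DECIMAL(10,2) NOT NULL
);

-- ...the full old-DB dump pasted here (schema, data, routines, triggers, events)...

SET FOREIGN_KEY_CHECKS = 1;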

Query 2 databases on 2 different SQL Servers

In reviewing many of the answers, I don't see a solution to something I feel should be simple.
I'm attempting to update a couple of fields in my Production database on one server from a restored database on another server, because of data lost during our ERP vendor's update.
Anyway, I have both servers connected in SSMS and just want to run the query below:
USE coll18_production;
GO
USE coll18_test2;
GO
UPDATE coll18_Production.dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM coll18_test2.dbo.PERSON as T2
WHERE coll18_Production.dbo.PERSON.ID = T2.ID;
I would think this would be a simple update, but I can't make a single query reference databases on two different servers.
Thanks if anyone can make this simple,
Donald
Okay, thanks for the input. In the interest of time I'm going to do something similar to what cpaccho recommended: create a temp table in my Production database containing the 2 fields that I want to update from. Then I'll connect to my Test2 database that I restored from backup, export those two fields plus the primary key as a CSV file, and load that data into the temp table in my production database. Then I'll simply run my update from this temp table into the 2 fields in my production PERSON table where the IDs match.
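A rough sketch of that plan on the production side (the staging table's column types and the CSV path are assumptions):
CREATE TABLE dbo.PERSON_RESTORE (
    ID INT PRIMARY KEY,
    PERSON_USER1 VARCHAR(50),
    PERSON_USER9 VARCHAR(50)
);

-- load the CSV exported from the restored coll18_test2 database (skip the header row)
BULK INSERT dbo.PERSON_RESTORE
FROM 'C:\restore\person_fields.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

UPDATE p
SET W71_ID_CRD_NO   = r.PERSON_USER1,
    W71_ID_CRD_DATE = r.PERSON_USER9
FROM coll18_production.dbo.PERSON AS p
INNER JOIN dbo.PERSON_RESTORE AS r ON p.ID = r.ID;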
Have a great weekend,
Donald
The problem is that, since the databases are on 2 different servers, in order to join between them you need a way for the servers to talk to each other.
The way to do that is through linked servers. Then you can set up your query to join the 2 tables together using four-part naming (Server.DB.Schema.Table) and accomplish your goal. The query will look something like this:
UPDATE a
SET a.column = b.column
FROM Server1.DB.Schema.Table1 a
INNER JOIN Server2.DB.Schema.Table2 b
    ON a.column = b.column
WHERE a.column = something
You only need to set up the linked server on one side, and the server name in the query will be the name you give the linked server. The only caveat is that this can be slow, because in order to join the tables SQL Server may have to copy the entire table from one server to the other. I would also set up the linked server on the server you are updating (so that you run the update on the same server as the DB you are updating).
How to set up Linked Server Microsoft KB
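For reference, registering the other server as a linked server might look something like this (the server name, provider and data source are placeholders; see the KB article above for details):
EXEC sp_addlinkedserver
    @server     = N'TEST2SERVER',           -- the name used in four-part queries
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'test2-host\SQLEXPRESS';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'TEST2SERVER',
    @useself    = N'TRUE';                  -- map the current login to itself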
A simple, rather hacky way would be to hard copy the table from database to database...
First create a table that contains the changes you want:
USE coll18_test2;
GO
SELECT PERSON_USER1, PERSON_USER9, ID
INTO dbo.MyMigrationOrWhateverNameYouLike
FROM coll18_test2.dbo.PERSON
Then go to SSMS, right-click the coll18_test2 database --> Tasks --> Generate Scripts and follow the wizard to generate a script for the newly created table. Don't forget to set, in the advanced options, "Types of data to script" to "Schema and data".
Now that you have your script, just run it in your production database, and make your query based on that table.
UPDATE dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM dbo.MyMigrationOrWhateverNameYouLike AS T2
WHERE dbo.PERSON.ID = T2.ID;
Finally, drop the MyMigrationOrWhateverNameYouLike table and you're done...

Merging Two databases with the same exact structure [MYSQL]

I just had my site hacked and they were able to do some damage to my current database.
Anyway, I have a backup from a few days ago. The problem is that the current database has a few thousand more posts, threads, and users.
I am simply wondering how I could possibly go about merging the two databases?
The two databases have the exact same structure. I want the backup to overwrite any matching record in the current database, but I also want to keep the new records (posts, threads, users and such) that exist only in the current database.
I know you can merge two tables, if they have the exact structure, but what about two databases that have the same structure?
I assume you have a schema s1 and a schema s2.
To insert all rows of a table in s1 into a table in s2, overwriting existing rows, you can use:
REPLACE INTO s2.table_name
SELECT * FROM s1.table_name;
If you do not want to touch existing rows:
INSERT INTO s2.table_name
SELECT * FROM s1.table_name
ON DUPLICATE KEY IGNORE;
There are a few ways to do it:
1.) use command-line tools like Schema Sync and mysqldiff
2.) or use SQLyog
Find out more here:
http://blog.webyog.com/2012/10/16/so-how-do-you-sync-your-database-schema/
In my experience ON DUPLICATE KEY IGNORE did not work. Instead I found
INSERT IGNORE ta2_table
SELECT * FROM ta1_table;
worked like a charm
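For completeness, MySQL's documented form of "insert or update" is INSERT ... ON DUPLICATE KEY UPDATE rather than ON DUPLICATE KEY IGNORE. A sketch, with the posts table and its columns as placeholders:
INSERT INTO s2.posts (id, title, body)
SELECT id, title, body FROM s1.posts
ON DUPLICATE KEY UPDATE
    title = VALUES(title),
    body  = VALUES(body);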

Setting up a master database to control the structure of other databases

I have a case where several databases run on the same server, one database per client (company1, company2, etc.). The structure of each of these databases should be identical, with the same tables etc., but the data contained in each DB will be different.
What I want is a master DB that contains no data but governs the structure of all the other databases, meaning that if I add, remove or alter any tables in the master DB, the changes are mirrored out to the other databases.
Example: if a table named table1 is created in the master DB, the other databases (company1, company2, etc.) will also get a table1.
Currently this is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand it would also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I want (an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know. :)
You can get the list of tables for a given schema using:
SELECT TABLE_NAME FROM information_schema.tables WHERE TABLE_SCHEMA = '<master schema name>';
Use this list in a script or stored procedure, à la:
CREATE DATABASE IF NOT EXISTS <child_schema>;
USE <child_schema>;
-- for each table_name in the list:
CREATE TABLE IF NOT EXISTS <child_schema>.table_name LIKE <master_schema>.table_name;
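A runnable sketch of that loop as a MySQL stored procedure (the procedure name and the schema names in the CALL are placeholders):
DELIMITER //
CREATE PROCEDURE sync_structure(IN master_schema VARCHAR(64), IN child_schema VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE tbl VARCHAR(64);
    DECLARE cur CURSOR FOR
        SELECT TABLE_NAME FROM information_schema.tables
        WHERE TABLE_SCHEMA = master_schema AND TABLE_TYPE = 'BASE TABLE';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    table_loop: LOOP
        FETCH cur INTO tbl;
        IF done THEN LEAVE table_loop; END IF;
        -- create the table in the child schema if it is missing
        SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', child_schema, '`.`', tbl,
                          '` LIKE `', master_schema, '`.`', tbl, '`');
        PREPARE stmt FROM @ddl;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;

CALL sync_structure('master_db', 'company1');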
Now that I'm thinking about it, you might be able to put a trigger on the information_schema.tables view that would call the 'create/maintain' script: look for inserts and react accordingly.

How do I rescue a small portion of data from a SQL Server database backup?

I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore.
The data I need is small - just a dozen rows - but those dozen rows each have a couple of rows in other tables with foreign keys to them, and those rows have God knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand.
Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else.
What's the best approach to take here? Thanks.
Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?
I've run into similar situations before, but found that doing it by hand worked the best for me.
I restored the backup to a second server and ran my query to get the information that I needed. I then built a script to insert the data using sp_generate_inserts, and repeated that for each of my tables that had related rows.
In total I only had about 10 master records with relational data in 2 other tables. It only took me about an hour to get everything back the way it was.
UPDATE: To answer your question about sp_generate_inserts, as long as you specify @owner='dbo', it will set IDENTITY_INSERT to ON and then set it back to OFF at the end of the script for you.
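If you end up writing inserts by hand instead, the manual equivalent is to wrap them yourself (the table and column names below are placeholders):
SET IDENTITY_INSERT dbo.MasterTable ON;

INSERT INTO dbo.MasterTable (ID, Name)  -- an explicit column list is required
VALUES (42, 'restored row');

SET IDENTITY_INSERT dbo.MasterTable OFF;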
You'll have to restore by hand. sp_generate_inserts is good for new data, but to update data I do it this way:
SELECT 'Update YourTable '
    + 'SET Column1=' + COALESCE('''' + CONVERT(varchar, Column1Name) + '''', 'NULL')
    + ', Column2='   + COALESCE('''' + CONVERT(varchar, Column2Name) + '''', 'NULL')
    + ' WHERE Key='  + COALESCE('''' + CONVERT(varchar, KeyColumn)   + '''', 'NULL')
FROM backupserver.databasename.owner.YourTable
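Each row of the result set is a ready-to-run statement, for example (the values are hypothetical):
Update YourTable SET Column1='42', Column2='restored' WHERE Key='7'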
You could create inserts this way too, but sp_generate_inserts is better. Watch out for those identity values, and good luck (I've had this problem before and know where you're at right now).
Useful queries:
-- find out if there are missing rows, and which ones
SELECT b.key, c.key
FROM backupserver.databasename.owner.YourTable b
LEFT OUTER JOIN YourTable c ON b.key = c.key
WHERE c.key IS NULL
-- find differences
SELECT b.key, c.key
FROM YourTable c
LEFT OUTER JOIN backupserver.databasename.owner.YourTable b ON c.key = b.key
WHERE b.key IS NOT NULL
AND (   ISNULL(c.column1, -9999)     != ISNULL(b.column1, -9999)
     OR ISNULL(c.column2, '~')       != ISNULL(b.column2, '~')
     OR ISNULL(c.column3, GETDATE()) != ISNULL(b.column3, GETDATE())
    )
SQL Server Management Studio for SQL Server 2008 allows you to export table data as insert statements. See http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx. This approach lacks some of the flexibility of sp_generate_inserts (you cannot specify a WHERE clause to filter the rows in your table, for example) but may be more reliable since it is part of the product.