How do I rescue a small portion of data from a SQL Server database backup? - sql-server-2008

I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore.
The data I need is small - just a dozen rows - but each of those dozen rows has a couple of rows in other tables with foreign keys to it, and those rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand.
Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else.
What's the best approach to take here? Thanks.
Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?

I've run into similar situations before, but found that doing it by hand worked the best for me.
I restored the backup to a second server and ran my query to get the information that I needed. I then built a script to insert the data using sp_generate_inserts, and repeated this for each of my tables that had related rows.
In total I only had about 10 master records with relational data in 2 other tables. It only took me about an hour to get everything back the way it was.
UPDATE: To answer your question about sp_generate_inserts: as long as you specify @owner='dbo', it will set IDENTITY_INSERT to ON and then set it back to OFF at the end of the script for you.
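For example, a minimal call might look like this (sp_generate_inserts is a community stored procedure, so parameter names can vary by version; the table name and @from filter below are hypothetical):
EXEC sp_generate_inserts 'YourTable',
     @owner = 'dbo',
     @from  = 'FROM dbo.YourTable WHERE KeyColumn IN (101, 102, 103)'
The generated script then wraps the INSERTs in SET IDENTITY_INSERT ... ON/OFF so the original key values survive the trip.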

You'll have to restore by hand. sp_generate_inserts is good for new data, but to update existing data I do it this way:
SELECT 'UPDATE YourTable'
    + ' SET Column1 = ' + COALESCE('''' + CONVERT(varchar(50), Column1) + '''', 'NULL')
    + ', Column2 = ' + COALESCE('''' + CONVERT(varchar(50), Column2) + '''', 'NULL')
    + ' WHERE KeyColumn = ' + COALESCE('''' + CONVERT(varchar(50), KeyColumn) + '''', 'NULL')
FROM backupserver.databasename.owner.YourTable
You could create INSERTs this way too, but sp_generate_inserts is better. Watch those identity values, and good luck (I've had this problem before and know where you're at right now).
Useful queries:
--find out if there are missing rows, and which ones
SELECT b.key, c.key
FROM backupserver.databasename.owner.YourTable b
LEFT OUTER JOIN YourTable c ON b.key = c.key
WHERE c.key IS NULL
--find differences
SELECT b.key, c.key
FROM YourTable c
LEFT OUTER JOIN backupserver.databasename.owner.YourTable b ON c.key = b.key
WHERE b.key IS NOT NULL
AND (   ISNULL(c.column1, -9999) != ISNULL(b.column1, -9999)
     OR ISNULL(c.column2, '~') != ISNULL(b.column2, '~')
     OR ISNULL(c.column3, GETDATE()) != ISNULL(b.column3, GETDATE())
    )

SQL Server Management Studio for SQL Server 2008 allows you to export table data as insert statements. See http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx. This approach lacks some of the flexibility of sp_generate_inserts (you cannot specify a WHERE clause to filter the rows in your table, for example) but may be more reliable since it is part of the product.

Related

How to copy data from one table to an identical table in another database?

I have a master database and several child databases on the same server, and all the databases have identical tables. I have to copy data from the master to the child databases, but each child database is going to hold different data from those tables.
Right now I'm selecting data, comparing it, and inserting/deleting it using PHP, which worked fine when there were only 2-3 child databases, but now that the number of child databases is growing the copying is getting slower.
I even tried to replicate the database tables using the following queries. It worked, but later I realized that the child DBs don't need all the master data, only some specific data.
TRUNCATE TABLE child_1.papers;
INSERT INTO child_1.papers SELECT * FROM master_db.papers;
The above copies the whole table (a WHERE condition can narrow it down), but after understanding all the requirements, I have to do the following:
Copy anything that may have been updated in the master to the child (UPDATE only).
Copy any new data that needs to go into the child.
I also tried replacing INSERT with UPDATE, but that causes a MySQL error.
How can I achieve that?
Thanks in advance.
I worked it out using the following:
In case you want to copy all data from source to destination:
INSERT INTO target_db.target_table SELECT * FROM source_db.source_table WHERE some_condition = '0';
And if you want to update the target table from the source:
UPDATE target_db.target_table
INNER JOIN source_db.source_table USING (some_field)
SET target_db.target_table.id = source_db.source_table.id,
    target_db.target_table.name = source_db.source_table.name,
    target_db.target_table.phone = source_db.source_table.phone;
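Both requirements can also be covered in one statement with INSERT ... ON DUPLICATE KEY UPDATE. A minimal sketch, assuming papers has a primary key id and hypothetical columns title and author:
INSERT INTO child_1.papers (id, title, author)
SELECT id, title, author
FROM master_db.papers
WHERE some_condition = '0'  -- only the rows this child needs
ON DUPLICATE KEY UPDATE
    title = VALUES(title),
    author = VALUES(author);
New master rows are inserted and rows the child already has are updated in place, so there is no need to compare them in PHP first.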
Hope this helps anyone looking to do a similar task.

How to manage schema changes across many identical-schema databases with MySQL?

I'm developing a web platform to manage student registrations in the schools of my region. For that I have 17 databases running on MySQL (5.7.19), of which one is the main database and the other 16 represent schools. The school databases must all have exactly the same schema, each containing the data for its own school. I separated them this way to avoid latency, as each school can register many applications (16k on average), so the requests could get heavier over time.
Now I have a serious problem: when I change the schema of one school's database, I have to do it manually for the other schools' databases to keep the schemas consistent, because my SQL requests are made independently of the school. For example, if I add a new field to table_b of database_school5, I have to manually do the same on table_b of all the remaining databases.
What can I do to manage these changes efficiently? Is there an automatic solution? Is there a DBMS suited to this problem?
Somebody told me that PostgreSQL can achieve this easily with INHERITANCE, but that only concerns tables, unless my research has been poor.
I want every change I make to a database schema, whether it is adding a table, adding a field, removing a field, adding a constraint, etc., to be automatically propagated to the other databases.
Thanks in advance.
SELECT ... FROM information_schema.tables
WHERE table_schema LIKE 'database_school%'
AND table_schema != 'the main (17th) database'
AND table_schema != 'database_school5' -- since it has already been done
That will find the 16 names. What you put into ... is a CONCAT(...) that constructs the ALTER TABLE ... statements.
Then you do one of these:
Plan A: Manually copy/paste those ALTERs into the mysql command-line tool to perform them.
Plan B: Wrap all of it in a stored procedure that loops through the results of the SELECT and PREPAREs + EXECUTEs each one (a sketch follows).
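A minimal sketch of Plan B, assuming the school schemas share the database_school prefix (the procedure name and the example ALTER body are hypothetical):
DELIMITER //
CREATE PROCEDURE run_alter_everywhere(IN alter_body VARCHAR(1000))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE db VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT schema_name FROM information_schema.schemata
    WHERE schema_name LIKE 'database_school%';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  school_loop: LOOP
    FETCH cur INTO db;
    IF done THEN LEAVE school_loop; END IF;
    -- e.g. alter_body = 'table_b ADD COLUMN new_field INT'
    SET @sql = CONCAT('ALTER TABLE `', db, '`.', alter_body);
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;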

MySQL - Auto/Scheduled SQL Updating of Tables

I am not a database engineer, but I have a question about what is possible with a MySQL database.
Is it possible to write SQL to get data from several tables and then use that data to update another table?
Also, this work should be scheduled daily.
The reason why I ask this question is because I am in this situation:
Our IT department maintains a big database, but the database/tables do not meet our department's business needs (we only have read permission). Our department has a small database (where we have full permissions), in which we can use custom SQL to create some special tables and update them daily.
So back to the question: is it possible to set up the SQL and schedule it so that these queries keep updating our tables?
Thank you so much!!!
Is it possible to write SQL to get data from several tables and
then use that data to update another table?
Yes, it is possible. You can use an UPDATE ... JOIN construct: get the data from the several tables with a SELECT statement, then JOIN with that inline view and perform the update operation on your other table.
Example:
UPDATE Your_Table a
JOIN (
    -- SELECT query to get data from multiple other tables
) xxx ON a.some_column = xxx.some_matching_column
SET a.column_c = xxx.column_c;
Also, this work should be scheduled daily
Sure, use the MySQL Event Scheduler.
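A minimal sketch of a daily event wrapping such an update (all table and column names below are hypothetical; the scheduler must be enabled with SET GLOBAL event_scheduler = ON):
CREATE EVENT refresh_department_tables
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_TIMESTAMP
DO
  UPDATE our_db.summary a
  JOIN (
    SELECT t1.id, SUM(t2.amount) AS total
    FROM it_db.orders t1
    JOIN it_db.order_lines t2 ON t2.order_id = t1.id
    GROUP BY t1.id
  ) xxx ON a.id = xxx.id
  SET a.total = xxx.total;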

Query 2 databases on 2 different SQL Servers

In reviewing many of the answers, I don't see a solution to something I feel should be simple.
I'm attempting to update a couple of fields in my production database on one server from a restored database on another server, because of a loss of data caused by our ERP vendor's update.
Anyway, I have both servers connected in SSMS and just want to run the query below:
USE coll18_production;
GO
USE coll18_test2;
GO
UPDATE coll18_Production.dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM coll18_test2.dbo.PERSON as T2
WHERE coll18_Production.dbo.PERSON.ID = T2.ID;
I would think this would be a simple update, but I can't write one query against databases on two different servers.
Thanks if anyone can make this simple,
Donald
Okay, thanks for the input. In the essence of time, I'm going to do something similar to what cpaccho recommended: create a temp table in my production database containing the two fields that I want to update from. Then I'll connect to my test2 database that I restored from backup, export those two fields along with the primary key as a CSV file, and load that data into the temp table in my production database. Then I'll simply run my update from this temp table into the two fields in my production PERSON table where the IDs are equal.
Have a great weekend,
Donald
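For reference, a minimal sketch of that CSV round-trip (the staging table name, column types, and file path are assumptions):
CREATE TABLE dbo.PersonPatch (
    ID INT PRIMARY KEY,
    PERSON_USER1 VARCHAR(50),   -- hypothetical type
    PERSON_USER9 DATETIME       -- hypothetical type
);
BULK INSERT dbo.PersonPatch
FROM 'C:\temp\person_patch.csv'  -- hypothetical path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
UPDATE p
SET W71_ID_CRD_NO = t.PERSON_USER1,
    W71_ID_CRD_DATE = t.PERSON_USER9
FROM dbo.PERSON p
JOIN dbo.PersonPatch t ON t.ID = p.ID;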
The problem is that, since the databases are on 2 different servers, in order to join between them you will need a way for the servers to talk to each other.
The way to do that is through linked servers. Then you can set up your query to join the 2 tables together using four-part naming (Server.DB.Schema.Table) and accomplish your goal. The query will look sort of like this:
UPDATE a
SET a.column = b.column
FROM Server1.DB.Schema.Table1 a
INNER JOIN Server2.DB.Schema.Table2 b
    ON a.column = b.column
WHERE a.column = something
You will only need to set up the linked server on one side, and the server name in the query will be the name you give the linked server. The only caveat is that this can be slow, because in order to join the tables SQL Server may have to copy the entire table from one server to the other. I would also set up the linked server on the server you are updating (so that you run the update on the same server as the DB you are updating).
How to set up a Linked Server: Microsoft KB
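A minimal sketch of that setup, run on the production server (assuming the backup server's network name is TEST2SRV; with @srvproduct = N'SQL Server' the linked-server name must be the remote instance's own name):
EXEC sp_addlinkedserver @server = N'TEST2SRV', @srvproduct = N'SQL Server';
UPDATE p
SET W71_ID_CRD_NO = T2.PERSON_USER1,
    W71_ID_CRD_DATE = T2.PERSON_USER9
FROM coll18_production.dbo.PERSON p
JOIN TEST2SRV.coll18_test2.dbo.PERSON T2 ON T2.ID = p.ID;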
A simple, rather hacky way would be to hard-copy the table from database to database...
First create a table that contains the changes you want:
USE coll18_test2;
GO
SELECT PERSON_USER1, PERSON_USER9, ID
INTO dbo.MyMigrationOrWhateverNameYouLike
FROM coll18_test2.dbo.PERSON
Then go to SSMS, right click on the coll18_test2 database --> Tasks --> Generate Scripts and follow the wizard to generate a script for the newly created table. Don't forget to set, in the advanced options, "Types of data to script" to "Schema and data".
Now that you have your script, just run it in your production database and base your query on that table:
UPDATE dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM dbo.MyMigrationOrWhateverNameYouLike as T2
WHERE dbo.PERSON.ID = T2.ID;
Finally, drop the MyMigrationOrWhateverNameYouLike table and you're done...

SQL best way to update remote server?

This is more of an advice question.
--SQL SERVER 2008/SQL SERVER 2005/HAMACHI/ DELPHI 2010--
Im developing a POS system for few restaurants that we own (4), each of the locations have their own SQL Server database, just 2 days ago i could create a conection using HAMACHI for a VPN and created liked servers (Father Google helped me out with all of this), i can now acces all of the data in the remote locations. I also have all of the databases in this computer (I will build a real server computer). I created a database in the "server" for each of the locations so it would be easier to create reports and all.
I didnt create a client-server model and went for a thick one because internet is very unstable and i dont really need to update at real time.
I want to create an update into the server every 30min or every hour, im still wonrking on it.
I have few questions.
(if you know it) Is hamachi a reliable VPN, does it has its problems (wich ones), or do you recomend another way and wich one?
When doing the update (by update i mean an insert of the new records into the server), should i execute the update from the client or from the server?
I am using MERGE to update when matched and insert when not matched, but i dont know if it is the best way to do it as it scans all the records and a table with only 243,272 records takes like 12mins to complete, or if i should select the recods where the PK is higher than the last PK in the server and do a merge. Based on your
experience wich way would be the best (even without using merge)...
This is the MERGE code I'm using:
SET IDENTITY_INSERT pedidos ON;

MERGE INTO pedidos C
USING (
    SELECT id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado
    FROM [SENDERO].[PVBC].[DBO].[pedidos]
) TC
ON (C.id = TC.id)
WHEN MATCHED THEN
    UPDATE SET
        C.id_pedido = TC.id_pedido,
        C.id_articulo = TC.id_articulo,
        C.cant = TC.cant,
        C.fecha = TC.fecha,
        C.id_usuario = TC.id_usuario,
        C.[local] = TC.[local],
        C.estado = TC.estado
WHEN NOT MATCHED THEN
    INSERT (id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado)
    VALUES (TC.id, TC.id_pedido, TC.id_articulo, TC.cant, TC.fecha, TC.id_usuario, TC.[local], TC.estado);

SET IDENTITY_INSERT pedidos OFF;
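For reference, a minimal sketch of the PK-watermark variant mentioned above (assuming id is an ever-increasing identity; note that this only picks up new rows, not later edits to existing ones):
DECLARE @last_id INT = (SELECT ISNULL(MAX(id), 0) FROM pedidos);

SET IDENTITY_INSERT pedidos ON;

MERGE INTO pedidos C
USING (
    SELECT id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado
    FROM [SENDERO].[PVBC].[DBO].[pedidos]
    WHERE id > @last_id  -- only rows the server has not seen yet
) TC
ON (C.id = TC.id)
WHEN NOT MATCHED THEN
    INSERT (id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado)
    VALUES (TC.id, TC.id_pedido, TC.id_articulo, TC.cant, TC.fecha, TC.id_usuario, TC.[local], TC.estado);

SET IDENTITY_INSERT pedidos OFF;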
Any recommendations are welcome; remember that I'm new to this whole remote-connections thing, but I'm willing to keep learning. Thank you!!
There are many ways to do what you want. I suggest you do some research on SQL Server replication. This is a 'built-in' way of making databases copy (publish) themselves to a central area (subscriber). It is a little complicated, but it does not require custom code and it should make adding more databases easier. There are many ways to implement it; you just have to keep your requirements in mind (30-minute latency over a VPN) when selecting a method. For example, you do not need to use mirroring, as you don't need your data to be that up to date.