SQL best way to update remote server? - sql-server-2008

This is more of an advice question.
--SQL SERVER 2008 / SQL SERVER 2005 / HAMACHI / DELPHI 2010--
I'm developing a POS system for a few restaurants that we own (4). Each location has its own SQL Server database. Just two days ago I managed to set up a connection using Hamachi as a VPN and created linked servers (Father Google helped me out with all of this), so I can now access the data in the remote locations. I also have all of the databases on this computer (I will build a real server machine). I created a database on the "server" for each of the locations so it would be easier to create reports and so on.
I didn't go with a client-server model and chose a thick client instead, because the internet connection is very unstable and I don't really need real-time updates.
I want to push an update to the server every 30 minutes or every hour; I'm still working on it.
I have a few questions.
(If you know it) Is Hamachi a reliable VPN? Does it have problems (which ones), or do you recommend another approach, and which one?
When doing the update (by update I mean an insert of the new records into the server), should I execute it from the client or from the server?
I am using MERGE to update when matched and insert when not matched, but I don't know if it is the best way to do it, since it scans all the records, and a table with only 243,272 records takes about 12 minutes to complete. Should I instead select only the records whose PK is higher than the last PK on the server and merge those (a sketch of that variant follows the code below)? Based on your experience, which way would be best (even without using MERGE)?
This is the MERGE code I'm using:
SET IDENTITY_INSERT pedidos ON;

MERGE INTO pedidos AS C
USING (
    SELECT id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado
    FROM [SENDERO].[PVBC].[DBO].[pedidos]
) AS TC
ON (C.id = TC.id)
WHEN MATCHED THEN
    UPDATE SET
        C.id_pedido   = TC.id_pedido,
        C.id_articulo = TC.id_articulo,
        C.cant        = TC.cant,
        C.fecha       = TC.fecha,
        C.id_usuario  = TC.id_usuario,
        C.[local]     = TC.[local],
        C.estado      = TC.estado
WHEN NOT MATCHED THEN
    INSERT (id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado)
    VALUES (TC.id, TC.id_pedido, TC.id_articulo, TC.cant, TC.fecha, TC.id_usuario, TC.[local], TC.estado);

SET IDENTITY_INSERT pedidos OFF;
Any recommendations are welcome. Remember that I'm new to all of this remote-connection stuff, but I'm willing to keep learning. Thank you!!
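For reference, a minimal sketch of the incremental variant mentioned above (only pulling rows whose PK is higher than the last one already on the central server), assuming id is an ever-growing IDENTITY that is never changed after insert:

-- Hypothetical incremental pull: skip rows the central copy already has
DECLARE @last_id INT;
SELECT @last_id = ISNULL(MAX(id), 0) FROM pedidos;

SET IDENTITY_INSERT pedidos ON;

MERGE INTO pedidos AS C
USING (
    SELECT id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado
    FROM [SENDERO].[PVBC].[DBO].[pedidos]
    WHERE id > @last_id   -- the filter may be evaluated remotely, depending on the provider
) AS TC
ON (C.id = TC.id)
WHEN NOT MATCHED THEN
    INSERT (id, id_pedido, id_articulo, cant, fecha, id_usuario, [local], estado)
    VALUES (TC.id, TC.id_pedido, TC.id_articulo, TC.cant, TC.fecha, TC.id_usuario, TC.[local], TC.estado);

SET IDENTITY_INSERT pedidos OFF;

Note that with this filter, rows already copied are never re-checked, so later edits to old rows at the restaurants would not propagate; that is the trade-off against the full-table MERGE.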

There are many ways to do what you want. I suggest you do some research on SQL Server replication. This is a 'built-in' way of making databases copy (publish) themselves to a central area (the subscriber). It is a little complicated, but it does not require custom code, and it should make adding more databases easier. There are many ways to implement it; you just have to keep your requirements in mind - 30-minute latency over a VPN - when selecting a method. For example, you do not need mirroring, since your data does not have to be that up to date.

Related

How do I update a MySQL database with the data from another database?

Disclaimer: this is a bit of a "best practices" question, but I'm not sure how else to phrase it to be more concrete or evoke a more objective answer.
I have two databases for a small Rails project: a dev database and a prod database that live on the same server.
What happens is that every couple weeks, I make some data changes (mostly inserts) via the ActiveAdmin gem to the dev database via a dev Rails environment. I poke around the Rails app making sure the new data looks good. When I'm ready to deploy, I run a script that:
Dumps the dev db
Drops and recreates the prod db to delete all data
Imports the dev db dump into the prod db
My intuition tells me that this is not a great way of going about this, and it's also a bit slow, but I can't seem to find the standard way of doing data deployments from one database to another. What is the standard way, if there is one?
Things I've considered:
Setting up an additional database that is the replica of the dev database; in deployment, somehow switch the Rails app over to use the replica as "prod", update the old "prod" db to be the replica, etc. I can barely even keep this idea in my head, and it seems like such a mess.
Doing data changes directly on prod, which completely invalidates the need for a dev database (feels very gross)
Doing data changes in dev via SQL script with transactions, and applying them to prod when deploying (it'd be really, really annoying to write these scripts by hand)
Some additional notes:
The only data changes that are ever made are the ones I make
Schema changes are done via Rails migrations
The databases are relatively small (the biggest table is ~1000 rows)
If you have two databases on the same server, you can compare the tables and insert, delete, and update between them directly. First, handle the rows that were deleted from dev:
START TRANSACTION;
-- Remove rows that exist in prod but were deleted from dev
DELETE p
FROM prod.tbl1 AS p
LEFT JOIN dev.tbl1 AS d ON d.id = p.id
WHERE d.id IS NULL;
COMMIT;
Second, for new rows:
START TRANSACTION;
-- Copy rows that exist in dev but not yet in prod
INSERT INTO prod.tbl1
SELECT d.*
FROM dev.tbl1 AS d
LEFT JOIN prod.tbl1 AS p ON d.id = p.id
WHERE p.id IS NULL;
COMMIT;
Now, a trigger on your dev database to manage updates:
DELIMITER $$
CREATE DEFINER=`root`@`localhost` TRIGGER `dev`.`tbl1_update`
BEFORE UPDATE ON `dev`.`tbl1`
FOR EACH ROW
BEGIN
  -- Flag real data changes; leave the flag alone when the sync job resets it to '0'
  IF NEW.`update` = OLD.`update` THEN
    SET NEW.`update` = '1';
  END IF;
END$$
DELIMITER ;
You need a "update" field on the dev table. When a update query run on the table, the field "update" changes to 1 automatiaclly. Then, use this query:
START TRANSACTION;
-- Push changed rows (flagged by the trigger) from dev to prod, then clear the flag
UPDATE prod.tbl1 AS p
JOIN dev.tbl1 AS d ON p.id = d.id
SET p.fld1 = d.fld1,
    p.fld2 = d.fld2
WHERE d.`update` = '1';
UPDATE dev.tbl1 SET `update` = '0' WHERE `update` = '1';
COMMIT;
You can run queries like these for all tables. You can put them in a .sql file and run it with a cron job (mysql -h -u -D < myscript.sql).
The insert query compares the tables and finds the IDs that are on dev but not on production, then selects those complete rows and inserts them into prod.
(Replace the id field with the unique identifier of each table.)
This seems like a pretty strange approach. Usually the data in development is regarded as disposable. You want just enough data so that you can do styling and troubleshooting - usually with pseudorandom data. Building the "finished" app data in development seems error-prone, and you would need to sync work if there is more than one developer.
Plus if the data set is significantly large Rails will be very slow in development due to the lack of caching.
What you want is a staging environment which runs with the same settings as the intended production environment. The key here is that it should be as close to production as possible. It can run on a remote server or a server on an intranet.
You can also use the staging environment to display new features or progress to clients/stakeholders and let them preview new features or be looped in on the progress in development.
You can create it by copying config/environments/production.rb -> staging.rb and by setting the RAILS_ENV env var to staging on the intended staging server.
You should also create an additional section in config/database.yml or use ENV['DATABASE_URL'].
Depending on the project staging can be flushed daily with mirrored data from production or be fully synced.

Is it possible "Database Synchronization"

I have a problem and I'm not sure if this is possible. My web application has a database; I'm using MySQL Workbench and a WAMP server.
My web app has a database named healthcare. If I import another database with the same tables but additional data, I want the first database to be updated with the new values only, not replaced.
Is it possible?
Edit: I searched the net and other related sources, and I managed to set phpMyAdmin to "Ignore multiple statement errors". When I import the second database (a .sql dump with the same tables but new data), it does not update the first database, yet the message says the import was successful. Please help, I'll appreciate any help...
In the past I've searched for tools to do similar database sync tasks - in my experience none of them are both free and reliable.
Have you tried writing some queries to do this manually?
The first thing that comes to mind would be figuring out a key you can use to evaluate each row and determine whether you should copy that record from database A to database B.
Afterwards you could simply do an INSERT ... SELECT:
INSERT INTO healthcare_DESTINATION.table SELECT * FROM healthcare_SOURCE.table WHERE some_condition = 1;
Obviously this is the simplified version, but I've done something very similar using timestamps (e.g. only copying rows newer than the newest row in the destination table).
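A rough sketch of that timestamp variant, assuming each table has an updated_at column (the table and column names below are placeholders):

-- Remember the newest row already in the destination...
SET @last := (SELECT COALESCE(MAX(updated_at), '1970-01-01 00:00:00')
              FROM healthcare_DESTINATION.some_table);

-- ...and copy only source rows that are newer than that
INSERT INTO healthcare_DESTINATION.some_table
SELECT *
FROM healthcare_SOURCE.some_table
WHERE updated_at > @last;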
Hope this helps.

SQL 2008 - Alternative to trigger

I am looking for a solution to the following:
Database: A
Table: InvoiceLines
Database: B
Table: MyLog
Every time lines are added to InvoiceLines in database A, I want to run a query that updates the table MyLog in database B. And I want it instantly.
Normally I would create a trigger in database A on INSERT into InvoiceLines. The problem is that database A belongs to an ERP program where I don't want to make any changes at all (updates, unknown functionality in a 3-layer program, etc.).
Any hints to help me in the right direction...?
You can use transactional replication to send changes from your table in database A to a copy in DB B, then create your triggers on the copy. It's not "instant," but it's usually considered "near real time."
You might be able to use DB mirroring to do this somehow, but you'd have to do some testing to see if you could get it to work right (maybe set up triggers in the mirror that don't exist in the original?)
One possible solution for replicating the trigger's functionality without touching database A is to poll the table from an external application (e.g. Java), which, on finding new inserts, would fire the required query.
In SQL Server 2008, something similar can be done via a C# assembly, but again this needs to be installed, which requires a database update.
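If polling is acceptable, a minimal T-SQL sketch of that idea could be scheduled with SQL Server Agent; every object and column name here is an assumption, since the real schema isn't shown:

-- Assumes InvoiceLines has an ever-increasing key InvoiceLineID and that
-- B.dbo.SyncState holds the highest id already copied.
DECLARE @last BIGINT;
SELECT @last = LastInvoiceLineID FROM B.dbo.SyncState;

INSERT INTO B.dbo.MyLog (InvoiceLineID, LoggedAt)
SELECT il.InvoiceLineID, GETDATE()
FROM A.dbo.InvoiceLines AS il
WHERE il.InvoiceLineID > @last;

UPDATE B.dbo.SyncState
SET LastInvoiceLineID = (SELECT ISNULL(MAX(InvoiceLineID), @last) FROM B.dbo.MyLog);

The polling interval sets the latency, so it is not truly instant, but it avoids any change to database A.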

Perl: How to copy/mirror remote MYSQL table(s) to another database? Possibly different structure too?

I am very new to this and a good friend is in a bind. I am at my wit's end. I have used GUIs like Navicat and SQLyog to do this, but only manually.
His band info data (schedules and whatnot) is in a MYSQL database on a server (admin server).
I am putting together a basic site for him written in Perl that grabs data from a database that resides on my server (public server) and displays schedule info, previous gig newsletters and some fan interaction.
He uses an administrative interface, which he likes and desires to keep, to manage the data on the admin server.
The admin server db has a bunch of tables and even table data the public db does not need.
So, I created tables on the public side that only contain relevant data.
I basically used a GUI to export the data and then inserted it on the public side whenever he made updates to the admin db (copy and paste).
(FYI, I am using the DBI module to access the data in/via my public db Perl script.)
I could access the admin server directly to grab only the data I need, but the whole purpose of this is to "mirror" the data, not to hit the admin server on every query. Also, some tables are THOUSANDS of rows, and parsing every row in a loop seemed too "bulky" to me. There is, however, a "time" column which could be used for comparison.
I cannot "sync" due to the fact that the structures are different, I only need the relevant table data from only three tables.
SO...... I desire to automate!
I read "copy" was a fast way but, my findings in how to implement were too advanced for my level.
I do not have the luxury of placing a script on the admin server to notify when there was an update.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin server's db.
I would then want to update or insert the new or changed data into the public server's db.
This "check" could be set up in a cron job I guess or triggered when a specific page loads on the public side. (the same sub routine called by the cron I would assume).
This data does not need to be "real time" but, if he updates something it would be nice to have it appear as quickly as possible.
I have done much reading, module research and experimenting but, here I am again at stackoverflow where I always get great advice and examples.
Much of the terminology is still quite over my head so verbose examples with explanations really help me learn quicker.
Thanks in advance.
The two terms you are looking for are either "replication" or "ETL".
First, replication approach.
Let's assume your admin server has tables T1, T2, T3 and your public server has tables TP1, TP2.
So, what you want to do (since you have different table structures, as you said) is:
Take the tables from public server, and create exact copies of those tables on the admin server (TP1 and TP2).
Create triggers on the admin server's original tables to populate the data from T1/T2/T3 into the admin server's copies of TP1/TP2 (see the sketch after these steps).
You will also need to do initial data population from T1/T2/T3 into admin server's copy of TP1/TP2. Duh.
Set up the "replication" from admin server's TP1/TP2 to public server's TP1/TP2
A different approach is to write a program (such programs are called ETL - Extract-Transform-Load) which will extract the data from T1/T2/T3 on the admin server (the "E" part of "ETL"), massage the data into a format suitable for loading into the TP1/TP2 tables (the "T" part of "ETL"), and transfer (via ftp/scp/whatnot) those files to the public server, where the second half of the program (the "L" part) will load the files into the tables TP1/TP2. Both halves of the program would be launched by cron or your scheduler of choice.
There's an article with a very good example of how to start building Perl/MySQL ETL: http://oreilly.com/pub/a/databases/2007/04/12/building-a-data-warehouse-with-mysql-and-perl.html?page=2
If you prefer not to build your own, here's a list of open source ETL systems, never used any of them so no opinions on their usability/quality: http://www.manageability.org/blog/stuff/open-source-etl
I think you've misunderstood ETL as a problem domain, which is complicated, versus ETL as a one-off solution, which is often not much harder than writing a report. Unless I've totally misunderstood your problem, you don't need a general ETL solution, you need a one-off solution that works on a handful of tables and a few thousand rows. ETL and Schema mapping sound scarier than they are for a single job. (The generalization, scaling, change-management, and OLTP-to-OLAP support of ETL are where it gets especially difficult.) If you can use Perl to write a report out of a SQL database, you probably know enough to handle the ETL involved here.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin server's db. I would then want to update or insert the new or changed data into the public server's db.
If every table you need to pull from has an update timestamp column, then your cron job includes some SELECT statements with WHERE clauses based on the last time the cron job ran to get only the updates. Tables without an update timestamp will probably need a full dump.
I'd use a one-to-one table mapping unless normalization was required... just simpler, in my opinion. Why complicate it with "big" schema changes if you don't have to?
some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me.
Limit your queries to only the columns you need; if there are no BLOBs or exceptionally big columns among them, a few thousand rows should not be a problem via DBI with a fetchall method. Loop all you want locally, just make as few trips to the remote database as possible.
If a row has a newer date, update it. I will also have to check for new rows to insert.
Each table needs one SELECT ... WHERE updated_timestamp_columnname > last_cron_run_timestamp. That result set will contain all rows with newer timestamps, which contains newly inserted rows (if the timestamp column behaves like I'd expect). For updating your local database, check out MySQL's ON DUPLICATE KEY UPDATE syntax... this will let you do it in one step.
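A sketch of that one-step upsert on the public side, assuming a hypothetical shows table with primary key id (the ? placeholders are DBI bind values):

-- Upsert one row pulled from the admin server since the last cron run
INSERT INTO shows (id, venue, show_date, updated_at)
VALUES (?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
    venue      = VALUES(venue),
    show_date  = VALUES(show_date),
    updated_at = VALUES(updated_at);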
... how to implement were too advanced for my level ...
Yes, I have actually done this already, but I have to update manually...
Some questions to help us understand your level... Are you hitting the database from the mysql client command-line or from a GUI? Have you gotten to the point where you've wrapped your SQL queries in Perl and DBI, yet?
If the two databases have different schemas, you'll need an ETL solution to map from one schema to the other.
If the schemas are the same, all you have to do is replicate the data from one to the other.
Why not just create a structure on the 'slave' server identical to the master server's? Then create a small table that keeps track of the last timestamp or id for the updated tables.
Then select from the master all rows changed since the last timestamp or greater than the id. Insert them into the matching table on the slave server.
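A rough shape for that small tracking table on the slave, with all names hypothetical:

-- One row per mirrored table, remembering how far the copy has progressed
CREATE TABLE sync_state (
    table_name  VARCHAR(64) PRIMARY KEY,
    last_id     BIGINT   NOT NULL DEFAULT 0,
    last_synced DATETIME NULL
);

-- After each successful pull, advance the mark to the newest id/timestamp actually copied
UPDATE sync_state
SET last_id = ?, last_synced = ?
WHERE table_name = 'gigs';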
You will need to be careful with updated rows. If a row on the master is updated but the timestamp doesn't change, how will you tell which rows to fetch? If that's not an issue, the process is quite simple.
If it is an issue, then you need to be more sophisticated, but without knowing the data structure and update mechanism it's a wild goose chase to give pointers on it.
The script could be called by cron every so often to update the changes.
If the database structures must be different on the two servers, then a simple translation step may need to be added, but most of the time that can be done within the SQL SELECT statement, maybe with a join or two.
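For example, a hedged illustration of doing that translation inside the SELECT run against the master (table and column names are invented):

-- Map the admin schema onto the public schema while extracting
SELECT g.id,
       v.name       AS venue,       -- flatten a lookup table with a join
       g.start_time AS show_date
FROM gigs   AS g
JOIN venues AS v ON v.id = g.venue_id
WHERE g.start_time > ?;             -- incremental, driven by the tracking table above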

How do I rescue a small portion of data from a SQL Server database backup?

I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore.
The data I need is small - just a dozen rows - but those dozen rows each have a couple rows from other tables with foreign keys to it, and those couple rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand.
Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else.
What's the best approach to take here? Thanks.
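One way to map out that dependency chain before scripting anything is a small helper query against the system catalog on the restored copy; the table name is a placeholder:

-- List the foreign keys that point at the table whose rows were deleted
SELECT fk.name                              AS fk_name,
       OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.YourTable');

Running it again for each referencing table walks the chain outward, which tells you which tables the hand-built INSERT scripts discussed below need to cover.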
Everyone has mentioned sp_generate_inserts. When using it, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?
I've run into similar situations before, but found that doing it by hand worked the best for me.
I restored the backup to a second server and ran my query to get the information that I needed. I then built a script with sp_generate_inserts to insert the data, and repeated this for each of my tables that had related rows.
In total I only had about 10 master records with relational data in 2 other tables. It only took me about an hour to get everything back the way it was.
UPDATE: To answer your question about sp_generate_inserts, as long as you specify @owner='dbo', it will set IDENTITY_INSERT to ON and then set it to OFF at the end of the script for you.
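If you ever need to handle it by hand instead (for example with INSERT statements generated some other way), the minimal pattern looks like this, with YourTable standing in for the real table:

SET IDENTITY_INSERT YourTable ON;
-- Each generated INSERT must list its columns explicitly for this to work
INSERT INTO YourTable (id, col1) VALUES (42, 'restored value');
SET IDENTITY_INSERT YourTable OFF;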
You'll have to restore by hand. sp_generate_inserts is good for new data, but to update existing data I do it this way:
-- Generates one UPDATE statement per row in the backup copy
SELECT 'UPDATE YourTable '
     + 'SET Column1 = ' + COALESCE('''' + CONVERT(varchar, Column1Name) + '''', 'NULL')
     + ', Column2 = '   + COALESCE('''' + CONVERT(varchar, Column2Name) + '''', 'NULL')
     + ' WHERE Key = '  + COALESCE('''' + CONVERT(varchar, KeyColumn)   + '''', 'NULL')
FROM backupserver.databasename.owner.YourTable
You could create inserts this way too, but sp_generate_inserts is better. Watch those identity values, and good luck (I've had this problem before and know where you're at right now).
Useful queries:
-- Find out if there are missing rows, and which ones
SELECT b.[key], c.[key]
FROM backupserver.databasename.owner.YourTable b
LEFT OUTER JOIN YourTable c ON b.[key] = c.[key]
WHERE c.[key] IS NULL

-- Find differences
SELECT b.[key], c.[key]
FROM YourTable c
LEFT OUTER JOIN backupserver.databasename.owner.YourTable b ON c.[key] = b.[key]
WHERE b.[key] IS NOT NULL
  AND (   ISNULL(c.column1, -9999)     != ISNULL(b.column1, -9999)
       OR ISNULL(c.column2, '~')       != ISNULL(b.column2, '~')
       OR ISNULL(c.column3, GETDATE()) != ISNULL(b.column3, GETDATE())
      )
SQL Server Management Studio for SQL Server 2008 allows you to export table data as insert statements. See http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx. This approach lacks some of the flexibility of sp_generate_inserts (you cannot specify a WHERE clause to filter the rows in your table, for example) but may be more reliable since it is part of the product.