Will changing a VARCHAR(MAX) column to a VARCHAR(500) invalidate a merge replication snapshot in SQL Server 2014?

We have a number of columns in a table that were created as VARCHAR(MAX), and this seems to be causing performance issues with our merge replication.
We don't need these to be VARCHAR(MAX) and would like to change them to VARCHAR(500).
Will this invalidate the current snapshot and require a new snapshot to be created? This is a live environment, and with the servers being on different continents the snapshot could take a while to transfer, which we need to be prepared for.

For anybody who has the same doubt: I can confirm that such a change will not invalidate the snapshot, and so will not trigger a new one. We ran the changes and the DDL scripts were simply passed to the subscriber and executed.
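As a rough illustration (the table and column names below are placeholders, not the actual ones), the change is an ordinary ALTER TABLE run on the publisher, which merge replication forwards to the subscriber as a DDL change:

    -- Sketch only: placeholder table/column names.
    -- Check that existing values fit before shrinking the column.
    SELECT COUNT(*) AS TooLongValues
    FROM   dbo.MyReplicatedTable
    WHERE  LEN(Notes) > 500;

    -- Run on the publisher; the DDL is propagated to the subscriber
    -- without invalidating the merge snapshot.
    ALTER TABLE dbo.MyReplicatedTable
        ALTER COLUMN Notes VARCHAR(500) NULL;   -- was VARCHAR(MAX); keep the column's existing nullability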

Related

General Log Move Another Table

Using MySQL, I want to copy data from the general_log table on server A into a table on server B in real time, for every entry, and delete the data from server A at the end of the day. I tried to use a trigger for this, but MySQL does not allow triggers on general_log because it is a system table. Alternatively, when I use a FEDERATED table, deleting the data on server A also deletes it on server B. Thanks in advance for your help.
I would recommend the following strategy:
First, partition the data in general_log by date. You can learn about table partitioning in the documentation.
Second, set up replication so that server B is identical to server A in real time. Once again, you may need to refer to the documentation.
Third, set up a job to remove the previous partition from A shortly after midnight (a rough sketch of steps one and three follows below).
To be honest, if you don't understand table partitioning and replication, you should get a DBA involved. In fact, if you are trying to coordinate multiple database servers, you should have a DBA involved, who would understand these concepts and how best to implement them in your environment.
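Here is that sketch, assuming the log rows are staged in a regular, partitionable table; the table and partition names are placeholders (the built-in mysql.general_log system table itself may not accept partitioning):

    -- Sketch only: placeholder table and partition names.
    CREATE TABLE log_copy (
        event_time DATETIME NOT NULL,
        user_host  VARCHAR(255),
        argument   TEXT
    )
    PARTITION BY RANGE (TO_DAYS(event_time)) (
        PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
        PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
        PARTITION pmax      VALUES LESS THAN MAXVALUE
    );

    -- Step three: a scheduled job runs something like this shortly after
    -- midnight; dropping a partition discards a whole day's rows in one
    -- cheap operation instead of deleting them row by row.
    ALTER TABLE log_copy DROP PARTITION p20240101;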
I recommend developing an ETL job to move the data every day and delete it from the old server.

Can I safely change data in a replicated SQL Server table on the target machine?

I replicate a table from one SQL Server 2008 instance to another, which already works fine. I'd also like to set a field in the replicated table on the target instance, but I'm not quite sure whether this is allowed (even though it seems to work).
The reason behind this: replication runs from server A to B, rows are processed on server B, and then a flag (e.g. "processed") is set once a row has been processed. This information is not available on server A and can only be set on server B.
A more cumbersome way would involve a separate table on server B which would have to keep entries of IDs in the replicated table that have already been processed, but maybe this is not necessary?
With transactional replication, by default the Subscribers should be treated as read-only. This is because if data is changed at a Subscriber - whether by inserts, updates, or deletes - it can not only cause data consistency errors, but a reinitialization could also wipe out the Subscriber data, since the article property @pre_creation_cmd is set to drop by default.
If you're going to be updating data at the Subscribers, then I would suggest using Updatable Subscriptions for Transactional Replication, Peer-to-Peer Replication, or Merge Replication.
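For reference on the reinitialization risk mentioned above, the pre_creation_cmd setting can be inspected per article and, if needed, set to something other than the default drop when the article is added. The publication, article, and table names below are placeholders:

    -- Sketch only: placeholder publication/article names.
    -- Inspect the current article settings, including pre_creation_cmd.
    EXEC sp_helparticle
        @publication = N'MyPublication',
        @article     = N'MyTable';

    -- When (re)adding the article, pre_creation_cmd can be 'none', 'delete',
    -- or 'truncate' instead of the default 'drop'.
    EXEC sp_addarticle
        @publication      = N'MyPublication',
        @article          = N'MyTable',
        @source_object    = N'MyTable',
        @pre_creation_cmd = N'none';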

SQL JOB to Update all tables

I am using Microsoft SQL Server 2008 R2. I have copied database A (my production database) to database B (my reporting database) with an SSIS package. Both databases are on the same server. I want to run a job so that any change in database A (data modifications such as inserting new rows or updating values in any row of any table) also takes place in database B, with the SQL job applying the changes automatically. I don't want the tables in database B to be dropped and recreated (that's against our business rules); only the changes should be applied.
Can anyone help me, please? Thanks in advance.
I would suggest that you investigate using replication. Specifically, transactional replication if you need constant updates. Here's a bit from MSDN:
Transactional replication typically starts with a snapshot of the publication database objects and data. As soon as the initial snapshot is taken, subsequent data changes and schema modifications made at the Publisher are usually delivered to the Subscriber as they occur (in near real time). The data changes are applied to the Subscriber in the same order and within the same transaction boundaries as they occurred at the Publisher; therefore, within a publication, transactional consistency is guaranteed.
If you don't need constant updating (that comes at a price in performance, of course), you can consider the alternatives of merge replication or snapshot replication. Here's a page to start examining those alternatives.
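If you do go with transactional replication, the core setup (normally done through the New Publication wizard rather than by hand) boils down to something like the sketch below. The database, publication, table, and server names are placeholders, and a Distributor is assumed to be configured already:

    -- Sketch only: placeholder names; assumes distribution is already set up.
    EXEC sp_replicationdboption
        @dbname  = N'A',
        @optname = N'publish',
        @value   = N'true';

    EXEC sp_addpublication
        @publication = N'A_to_B',
        @repl_freq   = N'continuous',   -- transactional, near real time
        @status      = N'active';

    EXEC sp_addarticle
        @publication   = N'A_to_B',
        @article       = N'SomeTable',
        @source_object = N'SomeTable';

    EXEC sp_addsubscription
        @publication    = N'A_to_B',
        @subscriber     = N'MYSERVER',
        @destination_db = N'B';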

Throttling SQL Server Replication?

We have a performance issue with our current transactional replication setup on SQL Server 2008.
When a new snapshot is created and applied to the subscriber, network utilization on the publisher and the distributor jumps to 99%, and disk queue lengths climb to 30.
This is causing application timeouts.
Is there any way we can throttle the replicated data that is being sent over?
Can we restrict the number of rows being replicated?
Are there any switches which can be set on/off to accomplish this?
Thanks!
You have an alternative way to deal with this situation.
When setting up transactional replication on a table that has millions of records, the initial snapshot can take a long time to deliver those records to the subscriber.
Since SQL Server 2005 there has been an option to create and populate the tables on both the publisher and the subscriber yourself, and then set up replication on top of them.
When you add the subscription with EXEC sp_addsubscription, set @sync_type = 'replication support only', as sketched below.
Reference article: http://www.mssqltips.com/tip.asp?tip=1117
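A minimal sketch of that call, with placeholder publication, subscriber, and database names (it assumes the schema and data already exist and match on both sides):

    -- Sketch only: placeholder names.
    -- 'replication support only' tells SQL Server not to generate and deliver
    -- an initial snapshot; the tables and data must already exist at the subscriber.
    EXEC sp_addsubscription
        @publication    = N'MyPublication',
        @subscriber     = N'SUBSCRIBER_SERVER',
        @destination_db = N'MyDatabase',
        @sync_type      = N'replication support only';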
Our DBA has forced us to break down DML code to run in batches of 50,000 rows at a time with a couple of minutes in between. He tweaks that batch size from time to time, but this way our replicated databases stay healthy.
For batching, everything goes into a temp table with an extra column (call it Ordinal) populated by ROW_NUMBER(), and a BatchID computed as roughly Ordinal / 50000. A loop then walks through the BatchID values and updates the target table batch by batch. It's harder on the developers and easier on the DBAs, and there's no need to pay for more infrastructure.
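A rough sketch of that pattern (table and column names are placeholders; here the Ordinal is folded straight into the BatchID calculation):

    -- Sketch only: placeholder table/column names.
    -- Stage the work with a batch number derived from ROW_NUMBER().
    SELECT  src.Id,
            src.NewValue,
            (ROW_NUMBER() OVER (ORDER BY src.Id) - 1) / 50000 AS BatchID
    INTO    #Work
    FROM    dbo.SourceData AS src;

    DECLARE @batch    BIGINT = 0,
            @maxBatch BIGINT;
    SELECT  @maxBatch = MAX(BatchID) FROM #Work;

    WHILE @batch <= @maxBatch
    BEGIN
        UPDATE  t
        SET     t.SomeColumn = w.NewValue
        FROM    dbo.TargetTable AS t
        JOIN    #Work          AS w ON w.Id = t.Id
        WHERE   w.BatchID = @batch;

        SET @batch += 1;
        WAITFOR DELAY '00:02:00';   -- pause between batches so replication can catch up
    END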

MySQL triggers + replication with multiple databases

I am running a couple of databases on MySQL 5.0.45 and am trying to get my legacy database to sync with a revised schema, so I can run both side by side. I am doing this by adding triggers to the new database but I am running into problems with replication. My set up is as follows.
Server "master"
Database "legacydb", replicates to server "slave".
Database "newdb", has triggers which update "legacydb" and no replication.
Server "slave"
Database "legacydb"
My updates to "newdb" run fine, and set off my triggers. They update "legacydb" on "master" server. However, the changes are not replicated down to the slaves. The MySQL docs say that for simplicity replication looks at the current database context (e.g. "SELECT DATABASE();" ) when deciding which queries to replicate rather than looking at the product of the query. My trigger is run from the context of database "newdb", so replication ignores the updates.
I have tried moving the update statement to a stored procedure in "legacydb". This works fine (i.e. data replicates to slave) when I connect to "master" and manually run "USE newdb; CALL legacydb.do_update('Foobar', 1, 2, 3, 4);". However, when this procedure is called from a trigger it does not replicate.
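For concreteness, the arrangement described above looks roughly like this; the table, column, trigger, and procedure names are placeholders based on the question:

    -- Sketch only: placeholder names. The trigger lives in newdb but writes
    -- to legacydb through the stored procedure, which is why replication,
    -- filtering on the current database (newdb), ignores the legacydb changes.
    DELIMITER //
    CREATE TRIGGER newdb.items_after_insert
    AFTER INSERT ON newdb.items
    FOR EACH ROW
    BEGIN
        CALL legacydb.do_update(NEW.name, NEW.a, NEW.b, NEW.c, NEW.d);
    END//
    DELIMITER ;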
So far my thinking on how to fix this has been one of the following.
Force the trigger to set a new current database. This would be easiest, but I don't think this is possible. This is what I hoped to achieve with the stored procedure.
Replicate both databases, and have triggers in both master and slave. This would be possible, but a pain to set up.
Force the replication to pick up all changes to "legacydb", regardless of the current database context.
If replication runs at too high a level, it will never even see any updates run by my trigger, in which case no amount of hacking is going to achieve what I want.
Any help on how to achieve this would be greatly appreciated.
This may have something to do with it:
A stored function acquires table locks before executing, to avoid inconsistency in the binary log due to mismatch of the order in which statements execute and when they appear in the log. Statements that invoke a function are recorded rather than the statements executed within the function. Consequently, stored functions that update the same underlying tables do not execute in parallel.
In contrast, stored procedures do not acquire table-level locks. All statements executed within stored procedures are written to the binary log.
Additionally, there is a whole list of issues with triggers:
http://dev.mysql.com/doc/refman/5.0/en/routine-restrictions.html