Will existing subscriptions still be able to sync to a publisher that has had an article added to it (Merge Rep) - sql-server-2014

I have to add a new table to the publisher database which will then be added to the subscribers.
I have done some digging, and to allow the table to be replicated I need to add it as an article for that publication. When I add the article through the Articles page of the publication properties, I receive the following message:
After adding a new merge article, you must generate a new snapshot
before changes from any subscription can be merged.
Although a snapshot of all articles must be generated, only the
snapshot of the new article will be used to synchronize existing
subscriptions.
Are you sure you want to add a new merge article?
I am planning on rolling out the changes tomorrow by generating a snapshot using the "Generate the selected snapshot now" option from the Data Partitions page of the publication properties. This is then copied to the subscriber and the application we use will handle the rest of updating the subscriber database.
We are using SQL Server 2014 with merge replication.
Will the other subscribers still be able to sync their subscriptions before they have the new snapshot, or will they have to wait until they have been updated?

The short answer is no.
If you are adding a new article to an existing publication, you will need to roll out the new snapshot before the subscribers are allowed to sync.
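For illustration, the snapshot can also be kicked off with T-SQL instead of the "Generate the selected snapshot now" option; this is only a sketch, and the publication name is a placeholder:

-- Run on the publisher, in the published database ('MyMergePub' is a placeholder name).
-- Starts the Snapshot Agent so the snapshot that includes the new article gets built;
-- until it has completed and been delivered, existing subscriptions cannot merge.
EXEC sp_startpublication_snapshot @publication = N'MyMergePub';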

Related

Trigger a build in TeamCity on all branches, except for master for a specific user

So we have this TeamCity setup that automatically triggers a build every single time something is committed in GitHub. The problem is that I need to build on all branches for all users, plus on master for ONE SPECIFIC user only.
I took a look at TeamCity's Triggers, which have a Trigger Rule option.
Unfortunately, it doesn't have a "branch" filter, so I can't restrict it to accepting builds on master only from my particular user.
So how do I configure this? Thanks.

MSAccess: ReadOnly Link to an External Data Source (Salesforce Account Table)?

I am able to link to our Salesforce Accounts table from MSAccess (via my Admin User login). This provides me the welcome benefit of not needing to manually "Export" using SFDC Data Loader functionality to perform data maintenance and synchronization tasks with 3rd-party data.
This approach gives me strong reservations, though, as the link is Read/Write to live data. I am currently the only user who sees this AccessDB/table and performs these maintenance tasks, but that still does not preclude me from inadvertently doing something really STUPID.
Here's my current thought (short of a formal ReadOnly method):
Retain the LINKED SFDC table in my local AccessDB, but HIDE in MSAccess' Navigator
OnOpen (or some other event) - COPY the full contents of the linked table into a LOCAL Table
Perform necessary queries & maintenance tasks referencing the LOCAL table only
Is there a better way to accomplish this task? Maybe Link>Copy>Unlink>Maintain afresh for each new session?
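If it helps, the copy step (the OnOpen idea above) can be a simple make-table query run at the start of each session; this is only a sketch, and the table names are hypothetical:

-- Access SQL sketch (hypothetical names: Account_SFDC is the linked table, Accounts_Local the local copy).
-- Drop the previous session's local snapshot (this errors if it doesn't exist yet), then rebuild it from the link.
DROP TABLE Accounts_Local;
SELECT * INTO Accounts_Local FROM Account_SFDC;

All subsequent queries and maintenance work would then reference Accounts_Local only.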

Introducing Liquibase database migration to existing product

I am adding Liquibase database migration to our current product deployment, which uses RPM, and am looking for some advice/tips on how to achieve my desired goal.
Preferably the RPM could be installed both in brand new and shiny developer environments and in existing integration/production systems.
I've used generateChangeLog to create an XML changelog of the current (pre-Liquibase) db schema, and I've got our master changelog created and ready to go forward with new changesets as needed.
However, I am trying to determine the best way to have this generated initial schema executed conditionally, one time, when necessary (i.e. on a fresh new db). Contexts don't seem ideal, because I'd need some external way to tell the RPM which contexts it should run with, and that seems error-prone.
I'd also like the legacy generated changelog to appear as having been run in the DATABASECHANGELOG table, so the project looks as if it has always been Liquibase-managed.
Appreciate any help or guidance,
Thanks in advance
You can put all of your initial Liquibase changes into one single changeset and add a precondition to that changeset that checks for any table from your model. If that table doesn't exist, you have an empty database, so your big changeset will create all the objects. If the table exists, you are running on an existing database, so the changeset should be skipped (use onFail="MARK_RAN" on the precondition).
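As a sketch of what that can look like in the XML changelog (the table name is just a placeholder for any table that exists in every pre-Liquibase database):

<changeSet id="legacy-baseline" author="generated">
    <preConditions onFail="MARK_RAN">
        <not>
            <tableExists tableName="customer"/>
        </not>
    </preConditions>
    <!-- the createTable/createIndex/addForeignKeyConstraint changes produced by generateChangeLog go here -->
</changeSet>

On an existing database the precondition fails and the changeset is recorded in DATABASECHANGELOG as MARK_RAN, which also makes the project look as if it had been Liquibase-managed all along; on an empty database it runs normally.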

Best git mysql versioning system?

I've started using git with a small dev team of people who come and go on different projects; it was working well enough until we started working with WordPress. Because WordPress stores a lot of configuration in MySQL, we decided we needed to include that in our commits.
This worked well enough (using mysqldump on pre-commit, and loading the dumped file back into MySQL on post-checkout) until two people made modifications to plugins and committed; then everything broke again.
I've looked at every solution I could find, and thought Liquibase was the closest option, but wouldn't work for us. It requires you to specify schema in XML, which isn't really possible because we are using plugins which insert data/tables/modifications automatically into the DB.
I plan on putting a bounty on it in a few days to see if anyone has the "goldilocks solution".
The question:
Is there a way to version control a MySQL database semantically, not using diffs (EDIT: meaning that it doesn't just take the two versions and diff them, but instead records the actual queries run in sequence to get from the old version to the current one), without requiring a developer-written schema file, and in a way that can be merged using git?
I know I can't be the only one with such a problem, but hopefully there is somebody with a solution?
The proper way to handle db versioning is through a version script which is additive-only. Due to this nature, it will conflict all the time as each branch will be appending to the same file. You want that. It makes the developers realize how each other's changes affect the persistence of their data. Rerere will ensure you only resolve a conflict once, though (see my blog post that touches on rerere sharing: http://dymitruk.com/blog/2012/02/05/branch-per-feature/).
Keep wrapping each change in an if-then clause that checks the version number, changes the schema or modifies lookup data (or whatever else), then increments the version number. You just keep doing this for each change.
In pseudocode, here is an example:
if version table doesn't exist
create version table with 1 column called "version"
insert a row with the value 0 for version
end if
-- now someone adds a feature that adds a members table
if version in version table is 0
create table members with columns id, userid, passwordhash, salt
with non-clustered index on the userid and pk on id
update version to 1
end if
-- now someone adds a customers table
if version in version table is 1
create table customers with columns id, fullname, address, phone
with non-clustered index on fullname and phone and pk on id
update version to 2
end if
-- and so on
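As a concrete sketch of the same approach in MySQL (names are only illustrative, and since MySQL only allows IF inside stored programs the guarded blocks live in a throwaway procedure):

-- Create the version table and seed it with version 0 on first run.
CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);
INSERT INTO schema_version (version)
  SELECT 0 FROM DUAL WHERE NOT EXISTS (SELECT 1 FROM schema_version);

DELIMITER //
CREATE PROCEDURE apply_migrations()
BEGIN
  -- version 0 -> 1: someone adds a feature that needs a members table
  IF (SELECT version FROM schema_version) = 0 THEN
    CREATE TABLE members (
      id INT AUTO_INCREMENT PRIMARY KEY,
      userid VARCHAR(100) NOT NULL,
      passwordhash VARBINARY(64) NOT NULL,
      salt VARBINARY(32) NOT NULL,
      INDEX ix_members_userid (userid)
    );
    UPDATE schema_version SET version = 1;
  END IF;

  -- version 1 -> 2: someone adds a customers table
  IF (SELECT version FROM schema_version) = 1 THEN
    CREATE TABLE customers (
      id INT AUTO_INCREMENT PRIMARY KEY,
      fullname VARCHAR(200) NOT NULL,
      address VARCHAR(400) NULL,
      phone VARCHAR(50) NULL,
      INDEX ix_customers_fullname_phone (fullname, phone)
    );
    UPDATE schema_version SET version = 2;
  END IF;
  -- and so on, one guarded block per change
END //
DELIMITER ;

CALL apply_migrations();
DROP PROCEDURE apply_migrations;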
The benefit of this is that you can automatically run this script after a successful build of your test project if you're using a static language - it will always roll you up to the latest. All acceptance tests should pass if you just updated to the latest version.
The question is, how do you work on 2 different branches at the same time? What I have done in the past is just spin up a new instance whose db name includes the branch name. Your config file is cleaned (see git smudge/clean) to set the connection string to point to the new or existing instance for that branch.
If you're using an ORM, you can automate this script generation since, for example, NHibernate will allow you to export the graph changes that are not reflected in the db schema yet as a SQL script. So if you added a mapping for the customer class, NHibernate will allow you to generate the table creation script. You just script the addition of the if-then wrapper and you're automated on the feature branch.
The integration branch and the release candidate branch have some special requirements that will require wiping and recreating the db if you are resetting those branches. That's easy to do in a hook by ensuring that the new revision contains the old revision (git branch --contains); if not, wipe and regenerate.
I hope that's clear. This has worked well in the past and requires each developer to be able to create and destroy their own db instances on their machines, although it could work on a central one with an additional instance-naming convention.

Move schema changes and data from Publisher to subscriber in MergeReplication

I have a corporate server and around 50 remote clients. Images are added on the remote clients and are merge replicated to the corporate server. Initially all these images were stored as BLOBs. We have decided to use FILESTREAM and create a new table containing the image binary, so we have partitioned the original Images table into Images and a new table, Images_Source. This is on production, and the corporate data size is around 250 GB.
Now we have the following tables:
Images
Images_Source
I have to do the following things.
Add this new table to the publisher and merge replicate it to the subscribers.
Copy all image BLOBs from Images into Images_Source.
To achieve this I will do the following:
Add the new table to the publisher on corporate and set Replicate Schema Changes to True. This way the schema will be synced across corporate and the remote clients.
Now on corporate, I will disable the triggers on the Images_Source table and move the data from Images to Images_Source using a job.
Once all the data is in the Images_Source table, all subscribers will sync.
Now I want some expert advice on the correct procedure for making this kind of change. If you could share your experiences and things to remember before performing such a change, that would be appreciated.
I have never done this with images, but adding new objects to a publication usually follows this script:
Create the table(s) on the publisher.
Stop the replication processes between your publisher and subscribers.
Add these tables to the corresponding publications via sp_addmerge... The parameters should include a request for snapshot reinitialisation (it does not mean that the whole snapshot will be retransferred to the subscribers, but the new objects have to be added to the snapshot before being added to the subscribers' databases).
At this stage, a new snapshot will be built.
Replication can be launched again.
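As a rough sketch of the sp_addmerge... step in T-SQL (publication and table names are placeholders, and the exact options will depend on your publication):

-- Run on the publisher, in the published database ('CorporatePub' is a placeholder name).
EXEC sp_addmergearticle
    @publication               = N'CorporatePub',
    @article                   = N'Images_Source',
    @source_object             = N'Images_Source',
    @source_owner              = N'dbo',
    @force_invalidate_snapshot = 1,   -- the existing snapshot is no longer valid
    @force_reinit_subscription = 0;   -- existing subscriptions only receive the new article's snapshot

-- Rebuild the snapshot so the new article can be delivered to the subscribers.
EXEC sp_startpublication_snapshot @publication = N'CorporatePub';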
Hope it helps