Microsoft Sync Framework mark client database as up to date - sql-server-2008

I've developed an application using the Microsoft Sync Framework 2.1 SDK and my current deployment method has been:
Make a backup of the unprovisioned database from a development machine and restore it on the server.
Provision the server followed by provisioning the client
Sync the databases
Take a backup of the synced database on the development machine and use that for the client installations. It is included in an InstallShield package as a SQL Server backup that I restore on the client machine.
That works, but on the client machine I would now also like to create a separate test database from the same SQL Server backup without doubling the size of the installation. That also works, but because the client test database is no longer synced with the test database on the server, it attempts to download all records, which takes many hours over slower Internet connections.
Because the integrity of the test database is not critical, I'm wondering if there's a way to essentially mark it as 'up to date' on the client machine without too much network traffic?
After looking at the way the tracking tables work, I'm not sure this is even possible without causing other clients to either upload or download everything. Maybe there is an upload-only option for a client that I've missed? That would suit this purpose fine.

Every time you take a backup of a provisioned database and restore it to initialize another client or replica, make sure you run PerformPostRestoreFixup after you restore and before you sync it for the first time.

After further analysis of the data structures used by Sync Framework, I determined there was no acceptable way to achieve the result I was seeking without sending a significant amount of data between the client and server, approaching what a 'proper' sync would have required.
Instead I ended up including a separate test database backup with the deployment, so that the usual PerformPostRestoreFixup could be performed, followed by a sync in the normal manner, the same as for the live database.

Related

How to properly use databases in development?

I'm struggling to figure out how to properly test things on my local PC and then transfer that over to production.
So here is my situation:
I have a project in Node.js/TypeScript, and I'm using Prisma in it for managing my database. On my server I just run a MySQL database, and for testing on my PC I have always just used SQLite.
But now that I want to use Prisma Migrate (because it's highly recommended for production), I can't, because I use different databases on my PC and on my server. So here is my question: what is the correct way to test with a database during development?
Should I just connect to my server and create a test database there? Use VS Code's remote SSH feature to code directly on the server and connect to the database? Install MySQL on my PC? What's the correct way to do it?
Always use the same brand and same version of database in development and testing that you will eventually deploy to. There are compatibility differences between brands: an SQL query that works on SQLite does not necessarily work the same on MySQL, and vice versa. Even data types and schema definitions aren't all the same between different SQL products.
If you use different SQL databases in development and production, you will waste a bunch of time and increase your gray hair debugging problems in production, as you insist, "it works on my machine."
This is avoidable!
When I develop on my local computer, I usually have an instance of MySQL Server running in a Docker container on my laptop.
I assume any test data on my laptop is temporary. I can easily recreate schema and data at any time, using scripts that are checked into my source control repo, so I don't worry about losing any data. In fact, I feel no hesitation to drop it and recreate it several times a week.
So if I need to upgrade the local database version to match an upgrade on production, I just delete the Docker container and its data, pull the new Docker image version, initialize a new data dir, and reload my test data again.
Every step is scripted, even the Docker pull.
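For example, the whole recreate cycle can be one small script. Here is a minimal sketch in TypeScript/Node (the stack from the question); the container name, password, MySQL version, and the existence of a "prisma db seed" script are my own assumptions for illustration, and it assumes a Unix-like shell and a DATABASE_URL in .env pointing at the local container.

// recreate-dev-db.ts - hedged sketch of a "throw it away and rebuild" script.
// Container name, credentials, and the seed script are assumptions, not from the post.
import { execSync } from "node:child_process";

const run = (cmd: string) => {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
};

// Throw away the old container and its data, then pull the same MySQL
// version that production runs.
run("docker rm -f dev-mysql || true");
run("docker pull mysql:8.0");
run(
  "docker run -d --name dev-mysql -e MYSQL_ROOT_PASSWORD=devpass " +
  "-e MYSQL_DATABASE=app -p 3306:3306 mysql:8.0"
);

// Wait until the server inside the container accepts connections.
for (let i = 0; i < 30; i++) {
  try {
    execSync("docker exec dev-mysql mysqladmin ping -uroot -pdevpass --silent");
    break;
  } catch {
    execSync("sleep 2");
  }
}

// Recreate the schema and test data from what is checked into source control.
run("npx prisma migrate dev");
run("npx prisma db seed");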
The caveat to my practice is that you can't necessarily duplicate the software if you use cloud databases, for example Amazon Aurora. There's no way to run an Aurora-compatible instance on your laptop (and don't believe the salespeople that Aurora is fully compatible with MySQL; it's not). So you could run a small Aurora instance in a development VPC and connect to that from your app development environment. At least if your internet connection is reliable enough.
By the way, a similar rule applies to all the other technology you use in development: the version of Node.js, Prisma, other NPM dependencies, HTTP and cache servers, etc. Even the operating system can be a source of compatibility issues, though you may have to develop in a virtual machine to match the OS to production exactly.
At one past job, I helped the developer team create what we called the "golden image": a pre-configured VM with all our software dependencies installed. We used this golden image both for the developer sandbox VMs and as the AMI from which we launched the production Amazon EC2 instances, so all the developers were guaranteed a test environment that matched production exactly. After that, if they had code problems, they could fix them in development with much higher confidence that the fix would work after deploying to production.

Best solution for automated collection of data from remote MySQL servers

I have done extensive research and I feel that I have good candidates, but I still lack enough knowledge to decide which one I should implement. Ideally I would like to hear from someone who has actually implemented a solution to a similar problem.
The Problem
Our project consists of a community of 25 distributed nodes. The nodes run on Linux computers and are installed in typical residential settings (behind NAT), widely dispersed geographically and across ISPs.
Our software on each node collects a variety of its own unique data, which is logged to a MySQL database on the local host (the node) that is not directly WAN-accessible. We also have a web interface for each node that uses the local node DB to let the local user visualize certain data and parameters; this is only accessible on the LAN.
We typically set up and maintain an open SSH port from our labs to each node. All node databases have exactly the same schema but completely different data. We need an automated way to collect all data from all the nodes and get it to our WAN-accessible lab servers (Windows 7 servers, but they can be Linux if that provides a better solution). We have narrowed the options down as follows:
Solutions:
Create a .bat script that sequentially connects to each node over SSH to import data.
Use the web interface that runs on each node to periodically query the local DB and save that data to a central MySQL server. I know I can connect to two databases in PHP, so this seems doable (see the sketch after this list).
Use MySQL's supported master-slave replication setup, which will duplicate all remote databases on the server.
Use MySQL's FEDERATED storage engine, which links local tables to remote ones.
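To make option 2 concrete: whether the copy runs in the node's PHP web interface or as a scheduled job in the lab, it boils down to reading rows the central server has not seen yet and appending them. A rough sketch in TypeScript with the mysql2 client, assuming the node DB is reachable through an SSH tunnel on localhost:3307 and a hypothetical "readings" table; all table, column, and connection names are illustrative only, not from the actual schema.

// collect-node.ts - hedged sketch: pull rows the central server has not seen yet.
import { createConnection } from "mysql2/promise";

const NODE_ID = 7; // which node this run collects from (illustrative)

async function collect() {
  const nodeDb = await createConnection({
    host: "127.0.0.1", port: 3307, user: "reader", password: "secret", database: "nodedata",
  });
  const centralDb = await createConnection({
    host: "lab-server", user: "collector", password: "secret", database: "central",
  });

  // High-water mark: the last row id already copied for this node.
  const [mark] = await centralDb.query(
    "SELECT COALESCE(MAX(source_id), 0) AS last FROM readings WHERE node_id = ?",
    [NODE_ID]
  );
  const last = (mark as any[])[0].last;

  // Fetch anything newer from the node and append it centrally.
  const [rows] = await nodeDb.query(
    "SELECT id, recorded_at, value FROM readings WHERE id > ? ORDER BY id",
    [last]
  );
  for (const r of rows as any[]) {
    await centralDb.query(
      "INSERT INTO readings (node_id, source_id, recorded_at, value) VALUES (?, ?, ?, ?)",
      [NODE_ID, r.id, r.recorded_at, r.value]
    );
  }

  await nodeDb.end();
  await centralDb.end();
}

collect().catch(console.error);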
Questions:
Are all of these viable solutions?
Are there any major cons I should be aware of for the viable ones?
Are there better solutions available (paid or otherwise)?

Single Store CRM application to Multi Store CRM - Single Local database server to Multi Store Multi location

My company has a desktop application developed in VB.NET using DevExpress controls. The back-end database is MySQL.
The company is in retail and has two stores in the same city. Both stores are always busy and customers are always waiting at the counter. It is a desktop-based CRM application with many modules: apart from the invoice/receipt module, it has delivery, installation, service/repair, accounts receivable, and many other modules used by the company's various back-office departments. Hardware such as barcode printers, receipt printers, and barcode scanners is connected to the CRM on the desktop PCs.
Currently, around 55 clients are always connected to the server and using the application.
Problem:
Until a couple of weeks ago, the company had no issue using this desktop application with a single MySQL server, as all clients were connected via LAN or WLAN.
Now the situation has changed and a new requirement has arisen: the company plans to open new stores at a great distance. Such stores cannot be connected to the current central database via LAN or WLAN. Each new branch would have around 20-30 clients, call them "branch clients".
There will also be field executives working from their laptops, call them "remote clients". They will have only a 3G internet connection.
Thought 1: Install the desktop application on all branch PCs and connect them to the central MySQL database server over the internet.
Not possible: The connection over the internet would be very slow for fetching such large amounts of data. For example, if a client opens "Customer Master", more than 600,000 rows are loaded, which takes a lot of bandwidth and time over the internet, and there are many more modules that load similar amounts of data.
Also, if the internet connection is lost, clients would not be able to operate the application. Customers waiting in line for a receipt would go crazy if they had to wait long.
Thought 2: Install a new MySQL server at each branch store; all the desktop PCs would then connect to that local branch server, and the local branch server would be connected to the central server via MySQL replication.
Not possible: Since MySQL replication is limited to one-way replication, we cannot implement this structure; the application requires moving data from the central server to the branch server and from branch to central in real time. MySQL replication also limits a server to replicating with only one other server, so we could not replicate with multiple branch stores. There is a cluster server option, but the company cannot afford the licensing cost.
Thought 3: Somebody suggested that I transfer the entire desktop application into a web application and get a cloud server for the database.
Not possible: Looking at the current requirements (fast access), environment (retail store POS) and hardware (printers, scanners) connected to the clients, I don't think a web application with a cloud database server is advisable. Also, in the event of no internet, the entire store would go down.
Thought 4: Somebody suggested that I move from MySQL to MSSQL and keep the desktop application as it is. MSSQL can sync with multiple servers in real time over the internet; it has no limitations like MySQL's one-way replication and single replication connection.
I guess that, to get a faster and constant database connection, installing a local branch server is required. But I don't know how those different branch servers could be connected to the central server.
My Questions:
• What is the best way to resolve the above issues under the given conditions and successfully fulfill the company's requirements? A faster and constant connection to the database server, and also real-time updates between all branches and the central server. If the internet connection is down, a delay in the real-time updates is acceptable, but clients should not be prevented from working.
• Would migrating from MySQL to MSSQL resolve the issue? Data migration is not the issue, as there are many tools available that convert a database from one platform to another. The issue is that the application is very large, with hundreds of queries written for MySQL. I assume I would have to change those queries as well, because queries are not the same for MySQL and MSSQL. Do I have to change all the queries or just a small percentage? Or is there a tool available that converts queries from MySQL to MSSQL?
• In general, how do such small-to-medium retail store companies set up their infrastructure and applications? Let me know some ideas.

Java EE application deployment on Amazon EC2

We have a Java EE application (EAR file deployed on JBoss, MySQL, MongoDB) which we would like to deploy on an Amazon EC2 instance. I have several questions regarding deployment best practices.
What is the most commonly used Linux AMI which we can rely on for a robust deployment? (There are so many Linux variants, and I am not sure which AMI is commonly used: Fedora, CentOS, Red Hat, SUSE, ...)
How do we handle production upgrades (EAR file modifications or schema upgrades)? Are there any tools available to handle the installation or rollback of these changes?
What kind of data backup capability is available for the database?
Should I rely on Amazon RDS for MySQL support?
How should I handle support for MongoDB?
This is the first time I am hosting a web app, and I would appreciate some input on how to manage the production instance.
I agree with Mark Robinson's answer: Use whichever Unix variant you're most comfortable with. It may pay to pick one with decent cloud support. For my site I use Ubuntu.
I have a common image which is the base of every version I deploy. I have www.mysite.com pointing to an Elastic IP so I can decide which instance it goes to. The common image has all the software I need installed (Postgres/PostGIS/Tomcat/etc.), but the database and web server data folders are symlinked to Elastic Block Store (EBS) volumes.
When it comes time to do a deploy I start a new instance up, freeze and snapshot the EBS volumes on production and make new volumes. I point my new instance at the new volumes and then install whatever I need to onto that. Once I've smoke tested everything successfully I can switch the Elastic IP to point to the new instance and everything keeps on going.
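The Elastic IP switch at the end of that flow is a single EC2 API call. A hedged sketch with the AWS SDK for JavaScript (v3) in TypeScript; the allocation ID, instance ID, and region below are placeholders.

// cutover.ts - point the site's Elastic IP at the freshly smoke-tested instance.
// IDs below are placeholders; in a VPC the Elastic IP is referenced by AllocationId.
import { EC2Client, AssociateAddressCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

async function cutOver(newInstanceId: string) {
  await ec2.send(new AssociateAddressCommand({
    AllocationId: "eipalloc-0123456789abcdef0", // the Elastic IP for www.mysite.com
    InstanceId: newInstanceId,
    AllowReassociation: true, // take it over from the old instance
  }));
  console.log(`Elastic IP now points at ${newInstanceId}`);
}

cutOver("i-0123456789abcdef0").catch(console.error);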
I'll note that I currently have the advantage where only I can modify the database; no users can. This will become a problem shortly.
If you use the XFS filesystem on top of the EBS volume, you can tell XFS to freeze the filesystem (so no updates happen), call the EC2 API to snapshot the volume, and then unfreeze the filesystem. The result is that the snapshot is taken quickly and sent to S3. I have a nightly script which does this.
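Roughly, that nightly script does freeze, snapshot, thaw. A sketch of the same sequence in TypeScript with the AWS SDK v3, assuming the data volume is mounted at /data; the volume ID, mount point, and region are placeholders, and the thaw sits in a finally block so a failed snapshot call can't leave the filesystem frozen.

// nightly-snapshot.ts - freeze XFS, snapshot the EBS volume, unfreeze.
// Volume ID, mount point, and region are placeholders for illustration.
import { execSync } from "node:child_process";
import { EC2Client, CreateSnapshotCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });
const MOUNT = "/data";
const VOLUME_ID = "vol-0123456789abcdef0";

async function snapshot() {
  execSync(`xfs_freeze -f ${MOUNT}`); // block writes so the snapshot is consistent
  try {
    const res = await ec2.send(new CreateSnapshotCommand({
      VolumeId: VOLUME_ID,
      Description: `nightly ${new Date().toISOString()}`,
    }));
    console.log(`snapshot started: ${res.SnapshotId}`);
  } finally {
    execSync(`xfs_freeze -u ${MOUNT}`); // always thaw, even if the API call failed
  }
}

snapshot().catch(console.error);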
If RDS looks like it will suit your needs then use it. Amazon is building lots of solid tools quickly and this will ease your scalability issues if you have any.
I'm sorry, I have no idea.
Good question!
1) I would recommend going with whatever Linux variant you are most comfortable with. If you have someone who is really keen on CentOS, go with that. Once you have selected your AMI, customize it by configuring it how you want, then save that AMI as your base layout. It will make rolling out new machines much easier and save your bacon if EC2 goes down.
2) Upgrades with EC2 can be très cool. Instead of upgrading a live system, take your pre-configured AMI, update that, and save it as myAMI-1.1 (or whatever). That way, you can flip over to the new system almost instantly AND roll back to a previous version in case something breaks. You can also back up DB instances to S3. It's cheap at about $0.10/GB/month.
3) It depends where you are storing your DB. If you are storing it on your EC2 instance you are in trouble: the EC2 instances have no persistent storage, so if your machine crashes, you lose everything. I'm not familiar with Amazon's DB offerings, but you should also look into Elastic Block Store. It's basically an actual hard drive you can write to. When you want to upgrade your schema, do a full DB dump to S3 and then upgrade your actual schema. If something goes wrong, you can pull the previous version out of S3.
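For the "dump to S3 before a schema upgrade" step, something like the following works. A sketch in TypeScript, assuming mysqldump is on the path and the AWS SDK v3 is available; the bucket, database name, and credentials are placeholders.

// backup-to-s3.ts - dump the database and park the dump in S3 before upgrading.
// Bucket, database name, and credentials are placeholders for illustration.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function backup() {
  const file = `/tmp/appdb-${Date.now()}.sql`;
  execSync(`mysqldump -uroot -psecret appdb > ${file}`); // full logical dump

  await s3.send(new PutObjectCommand({
    Bucket: "my-db-backups", // placeholder bucket
    Key: `pre-upgrade/${file.split("/").pop()}`,
    Body: readFileSync(file),
  }));
  console.log(`uploaded ${file}`);
}

backup().catch(console.error);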
4) & 5) I have never used those so I can't help you.
What is the most commonly used Linux AMI which we can rely on for a robust deployment? (There are so many Linux variants, and I am not sure which AMI is commonly used: Fedora, CentOS, Red Hat, SUSE, ...)
How do we handle production upgrades (EAR file modifications or schema upgrades)? Are there any tools available to handle the installation or rollback of these changes?
What kind of data backup capability is available for the database?
Should I rely on Amazon RDS for MySQL support?
How should I handle support for MongoDB?
Any Linux AMI will do the job; all you need is a JRE (assuming no development work is required on the server). If you need to monitor JVM behavior, install JConsole.
The easiest and most painless way is to SSH into the home directory, transfer the updated class files or EAR file (depending on the number of changes applied), copy and replace them in the Tomcat deployment directory, and restart Apache. (Make sure you have tested locally before uploading to production.)
It depends on which database you are using; if you are using MySQL, just set up a scheduled backup that writes to your home directory, so that from time to time you can SSH in and download a copy for backup purposes.
I would not rely on Amazon RDS for MySQL support, for two reasons: MySQL is small enough to manage yourself, and I would want total control of the database. Why pay more when you can do it yourself free of charge?
MongoDB usage should be aligned with the purpose of your application and the benefits you gain from it. I would recommend using MongoDB for static data retrieval such as state, country, area, etc., and MySQL for transactional data only.
If you can live with deploying your Java EE application on TomEE instead of JBoss, Boxfuse does what you want.
For your Java EE application you literally only have to execute (TomEE uses WAR files instead of EAR files):
boxfuse run my-tomee-app-1.0.war -env=prod
This will
Create an AMI containing TomEE and your application, ready to boot
Create an Elastic IP or ELB
Create a security group with the correct ports defined
Create an auto-scaling group
Launch your instance(s)
Any subsequent update will be done as a zero downtime blue/green deployment.
More info: https://boxfuse.com/blog/javaee-aws

How to manage a test database in SQL Server

I am currently working on an application that uses a SQL Server 2008 database that sits internally on a LAN. I am having two problems related to managing the database:
Currently, I have two databases in SQL Server, one for test and one for production, and I copy tables, views, etc. between these two databases when deploying changes. I'm assuming there is a better way to manage pushing changes from the test database to the production database; can anyone point me in the right direction here?
I do a good portion of my work remotely, so I have installed SQL Server 2008 Express on my laptop and run a third copy of the database locally. Is this the best option for doing remote work? The solution I've been considering for this situation is to expose my test database to the web with a limited user that I could use when developing remotely. Is this feasible/recommended?
I have found that using my own local copy of SQL Server Developer Edition on my notebook is the best way to do dev work overall, with separate test and production databases on servers. I like keeping my local dev server so that I am never at the mercy of a connection to do dev work.
As a principle, I never expose SQL servers publicly, so working through a VPN is the only way I can access my typical test/production servers. If my dev server were there too, I would often be unable to do dev work when, for example, I am at a location where VPN pass-through is not permitted.
As for updating the production/test databases, I always generate change scripts whenever I change the dev server, and keep them organized so they can be applied to the test and, later, production servers. You can generate those scripts via SQL Server Management Studio or Visual Studio.
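If you want the "keep them organized and apply them in order" part to be repeatable, a small runner helps. A rough sketch in TypeScript using the mssql npm package; the ./migrations folder layout, the _applied_scripts bookkeeping table, and the connection details are all my own assumptions, not a standard tool.

// apply-changes.ts - run any change scripts not yet applied, in filename order.
// Folder layout, table name, and connection details are assumptions.
import { readdirSync, readFileSync } from "node:fs";
import sql from "mssql";

async function applyChanges() {
  const pool = await sql.connect({
    server: "testserver", database: "AppDb",
    user: "deploy", password: "secret",
    options: { trustServerCertificate: true },
  });

  // Bookkeeping table so each script runs exactly once.
  await pool.request().query(
    "IF OBJECT_ID('_applied_scripts') IS NULL " +
    "CREATE TABLE _applied_scripts (name NVARCHAR(260) PRIMARY KEY, applied_at DATETIME DEFAULT GETDATE())"
  );

  const files = readdirSync("./migrations").filter(f => f.endsWith(".sql")).sort();
  for (const f of files) {
    const done = await pool.request()
      .input("name", sql.NVarChar, f)
      .query("SELECT 1 AS x FROM _applied_scripts WHERE name = @name");
    if (done.recordset.length) continue; // already applied

    // Scripts generated by SSMS often contain GO separators; split on them.
    const batches = readFileSync(`./migrations/${f}`, "utf8").split(/^\s*GO\s*$/gim);
    for (const batch of batches) {
      if (batch.trim()) await pool.request().batch(batch);
    }
    await pool.request().input("name", sql.NVarChar, f)
      .query("INSERT INTO _applied_scripts (name) VALUES (@name)");
    console.log(`applied ${f}`);
  }
  await pool.close();
}

applyChanges().catch(console.error);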
Probably the cleanest, most repeatable way is to use a real build process for your database code and objects. First put all your database code and objects in source control. Then use DBGhost to create upgrade scripts to get your production database upgraded. As part of this, DBGhost can also produce output that creates an empty dev database matching any given release. We have been using it for about three years now and wouldn't do it any other way. Check out their site for a full walkthrough. Well worth the money. Did I say it's well worth the money?
http://www.innovartis.co.uk/