I'm planning to build a system that will use a central corporate database and several local in-house databases.
The connection between the central database and the local databases would depend on each local site's availability (the central database is always online). So I'm thinking of synchronizing them by "pushing" updates from the local databases to the central database through a website user interface, and vice versa.
The central database is expected to have a decent and stable internet connection, so it just sits there waiting for updates from the local databases. Local systems should also be able to download updates for certain tables that are only ever modified by the central database. A local system should only contain its own local information plus general settings set by the central system, and it has no way to see information belonging to other branches (only the central system can do that).
So basically, the information pushed to the central database is only ever modified in the local system, and the data downloaded from the central database is only ever updated centrally. The local system can therefore live and operate on its own, and the only real purpose of the central database is to provide an overview of all the updates.
I've already looked at Microsoft Sync Framework and it looks promising; I just can't find a tutorial that demonstrates it end to end. I'm hoping for a solution that can be triggered from a website interface, just a nice button on my local system's page.
If anyone could point me to a good source or starting point, it would be really helpful.
I have just finished my own Microsoft Sync Framework research, and MSDN actually provides everything you need. In my opinion, you don't have to pay for a commercial sync solution; you can build it yourself. Start by understanding the overall structure of the framework: Concept of Microsoft Sync Framework. There are two kinds of sync topology available, 2-Tier and N-Tier Synchronization; read up on both if you're not familiar with them. A good starting point for N-Tier Synchronization is N-Tier Sync. You can also set the synchronization direction to match different situations: Sync Direction. Finally, data conflicts can occur in certain situations and need to be handled: Data Conflict Handling.
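To give a feel for the code side, here is a minimal sketch of one sync run between a branch database and the central database using the SQL Server providers (Sync Framework 2.x). The connection strings, the "BranchScope" scope name, and the conflict policy are assumptions, and both databases are assumed to have already been provisioned for that scope.

    using System;
    using System.Data.SqlClient;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class BranchSync
    {
        static void Main()
        {
            // Hypothetical connection strings -- adjust to your environment.
            using (var localConn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=BranchDb;Integrated Security=True"))
            using (var centralConn = new SqlConnection(@"Data Source=central-server;Initial Catalog=CentralDb;Integrated Security=True"))
            {
                // "BranchScope" is an assumed scope name; both databases must already be
                // provisioned for this scope (e.g. with SqlSyncScopeProvisioning) before syncing.
                var localProvider   = new SqlSyncProvider("BranchScope", localConn);
                var centralProvider = new SqlSyncProvider("BranchScope", centralConn);

                // Conflicts surface on the provider that applies the changes (the central side here).
                centralProvider.ApplyChangeFailed += (sender, e) =>
                {
                    if (e.Conflict.Type == DbConflictType.LocalUpdateRemoteUpdate)
                        e.Action = ApplyAction.RetryWithForceWrite; // let the branch change win
                };

                var orchestrator = new SyncOrchestrator
                {
                    LocalProvider  = localProvider,
                    RemoteProvider = centralProvider,
                    // Upload pushes branch changes to the central database only;
                    // use UploadAndDownload for the tables that must flow both ways.
                    Direction = SyncDirectionOrder.Upload
                };

                SyncOperationStatistics stats = orchestrator.Synchronize();
                Console.WriteLine("Uploaded {0} changes, downloaded {1}.",
                    stats.UploadChangesTotal, stats.DownloadChangesTotal);
            }
        }
    }

The same Synchronize() call could sit behind the "nice button" on your local system's page (for example in an ASP.NET handler), as long as that page can reach both databases.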
I am developing an application which uses an MS Access database (.mdb, not my decision) as its back-end. Recently I came across someone suggesting that using the JET engine over a WAN is not really a good idea, with a high risk of data corruption. Since my application would be doing just that (connecting to a database on a NAS (EDIT: not a NAS, a shared network drive)), I got worried. Is it really that risky? If so, is there any workaround, or is an MS Access database just unusable for that kind of application?
EDIT
The front end is a .NET Windows desktop application in C# (WPF). The system does not have many users, ten at most. Most of the time they will access the database from the LAN, and 99% of the writes will be done within the LAN (from the company premises). However, there are some cases where they will connect to the NAS (EDIT: not a NAS, a shared network drive) from outside the company, over the network from their homes.
If you have a 100 Mb/s fibre, it will be OK, but if your line is, say, an xDSL line, it is generally an absolute no-no.
Convince the powers that be to move the backend to a server engine like SQL Server where the Express version is free.
The scenario you describe is not a good fit for having an Access database as the back-end. The WAN users could very well find the application slow, but the NAS is the real cause for concern regarding corruption, and that would affect both LAN and WAN users.
Many (most?) NAS devices run on Linux and use Samba to provide Windows file-sharing services. The Access Database Engine apparently uses some low-level features of "real" Windows file sharing that Samba does not always fully implement (ref: here).
In fact, the only time I've seen repeated corruption problems with a shared Access back-end (and a properly distributed front-end) was when a client moved their file shares from an older Windows server to a newer NAS device. The Access application continued to work for the most part, but every few months they would find that the primary keys of some tables would disappear after they did a Compact and Repair on the back-end database file. That never happened while their file share was on the Windows server.
Splitting the front-end from the back-end removes most of the risk of corruption. Of course, with Access there's always some possibility of it, and if you're looking for something that reduces the risk to close to nil then you might want to consider SQL Server or MySQL. However, using Access is fine as long as you take proper precautions.
For example, you might want to look into record-locking on tables that will get edited, to prevent multiple simultaneous writes. Backing up your DB on a regular basis is always good, too.
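Since your front-end is C#/WPF, one precaution you can take from code is asking Jet for row-level (record) locking in the OLE DB connection string. A rough sketch, where the UNC path and the Customers table are made up:

    using System;
    using System.Data.OleDb;

    class AccessBackendDemo
    {
        static void Main()
        {
            // Hypothetical UNC path to the split back-end .mdb on the file share.
            // "Jet OLEDB:Database Locking Mode=1" requests row-level rather than page-level locks.
            const string connStr =
                @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                @"Data Source=\\fileserver\share\backend.mdb;" +
                @"Jet OLEDB:Database Locking Mode=1;";

            using (var conn = new OleDbConnection(connStr))
            using (var cmd = new OleDbCommand("UPDATE Customers SET Phone = ? WHERE CustomerID = ?", conn))
            {
                // OLE DB parameters are positional; the names are only labels.
                cmd.Parameters.AddWithValue("@phone", "555-0100");
                cmd.Parameters.AddWithValue("@id", 42);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

Regular backups (and the occasional Compact and Repair) remain the main safety net either way.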
We currently have an application located on a remote server, and our call center uses this application to perform customer transactions.
We plan to set up Asterisk on a local server to handle all the call routing and recording. For Asterisk to work smoothly, we have to move our application from the remote server to the local one.
It would be easy to move all the data to the local server and do transactions locally, but users also have the option of doing transactions online, which hits the remote server's database.
The reason we still keep the remote application is the reliable infrastructure and backup solution provided by Rackspace.
If we move the application to the local server, I am looking for a reliable solution for syncing the remote and local databases so that we can handle local as well as online transactions.
Why not use MySQL master-master replication and hold definitive data at both ends? (Note that you'll have to do some reading on auto_increment_increment and auto_increment_offset; a sketch of those settings follows below.)
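As a rough illustration (server IDs and values here are made up), the point of those two variables is to keep auto-generated primary keys from colliding when both masters accept writes: each server hands out a different interleaved sequence of IDs.

    # my.cnf on server A (local) -- hypothetical values
    [mysqld]
    server-id                = 1
    log_bin                  = mysql-bin
    auto_increment_increment = 2   # step by 2 because there are two masters
    auto_increment_offset    = 1   # A generates 1, 3, 5, ...

    # my.cnf on server B (remote) -- hypothetical values
    [mysqld]
    server-id                = 2
    log_bin                  = mysql-bin
    auto_increment_increment = 2
    auto_increment_offset    = 2   # B generates 2, 4, 6, ...

On top of that, each server is configured as a replica of the other (the usual CHANGE MASTER TO / START SLAVE setup), which is what makes the replication run in both directions.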
symcbean's answer is basically correct. I'd add this article as a good starting place to understand master-master replication. I'd further recommend High Performance MySQL as a good reference for a deeper understanding of the techniques and issues.
There are some issues you will have to face when doing writes to two non-colocated MySQL servers. You'll have replication lag to deal with, so the databases won't necessarily be completely in sync, only "eventually consistent". Also, if both sides update the same content, you can end up with data integrity issues. If your write operations lean towards INSERTs rather than UPDATEs, you're less likely to run into problems, and if the subset of data that gets modified tends to be localized around one or the other of the servers, you'll run into fewer issues still.
Otherwise, you'll probably want to roll your own solution that is designed towards the specific use cases of your application.
I am developing a database with about 10 tables in it. Basically it will be used in 2 or 3 distant geographical locations (let's call them A, B and C). The desired workflow is as follows:
A, B and C should always have the same database, so when A makes any changes it should be able to send those changes over to B and C. Emailing the entire mdb file doesn't make sense since it's 15+ MB in size, so I would only like to send the new records and changes to B and C. The changes B and C make should likewise be reflected to the other respective parties. How can I do this?
I have a few ideas in mind but don't know how to implement them.
Solution 'A' - export only the data tables into an xls file and email that. But importing the tables back into the mdb file could be a bit complex, right? And the xls file will also keep growing over time.
Solution 'B' - try to extract just the changes and email only the new parts? (But how do I extract just those?)
Solution 'C' - find some way of getting all users onto the same database (storage) location. I was thinking of a front-end/back-end split, storing the tables on a shared drive on the parent company's server (which is also overseas). But the network connection between locations is very slow, and I don't know how much bandwidth this needs.
Any recommendations would be most welcome!
In regard to sources for information on replication, start with my Jet Replication Wiki.
But I would never recommend Jet replication for your scenario. The only environment where I currently recommend it (and I've been doing replicated apps since 1997 and still have several in production use) is for supporting laptop users who have to work with live data in the field disconnected from any network, and return to the home office and synch direct with the mother ship.
The easiest solutions with an Access application would be hosting the app on Windows Terminal Server/Citrix, where the users would run it over a Remote Desktop Connection, or using Sharepoint. The Terminal Server/Citrix solution has no accommodation for disconnected users, but Sharepoint can accommodate offline usage and synch the changes when connected. Access 2010 and Sharepoint 2010 provide a host of new features, including better schema design, the equivalent of triggers, and greatly improved performance for large Sharepoint lists, so it's a no-brainer to me that if you choose Sharepoint you'd want to use A2010 and Sharepoint 2010.
While it's possible to do what you want with Jet Replication, it requires a lot of setup on the server and client ends, and is relatively fragile (not in terms of data integrity if you're using indirect replication (as you should), but in terms of network reliability) -- there are too many moving parts and too many failure points.
Windows Terminal Server/Citrix is by far the simplest, with the fewest moving parts and completely centralized administration, and works very well for a relatively small investment.
Sharepoint is more complicated than WTS/Citrix, but is less complex and more centralized than a Jet Replication solution.
If it were me, I'd probably go with WTS/Citrix if there was no need for disconnected usage, but I'd be salivating over trying out A2010/Sharepoint 2010. If there was a need for disconnected usage, then I'd definitely go the Sharepoint route.
You want to use "Jet Replication". See
MSDN Search for jro at http://social.msdn.microsoft.com/Search/en-US?query=jro&ac=8
MSDN Search for access replication at http://social.msdn.microsoft.com/Search/en-US?query=access%20replication&ac=3
It's been some time since I did it, but the indirect method of replication worked well for me in a similar situation.
It takes something to set up. The documentation used to be appalling for it, but I found articles written by Michael Kaplan (aka Michka) that walked me through how to do it.
If your final environment is going to be fairly stable, then use Access the whole way. If not, then I'd urge you to take HansUp's advice and go with SQL Server or SharePoint.
Do note: if you're working in Access 2007 or later, replication is not directly supported, and you'll have to roll your own bits and pieces. If you're using an earlier version, you'll be fine, but allow time for some head-scratching.
I would like to know how we are supposed to do integration between different Perforce servers/depots.
I'm looking for a solution that would allow us to do two-way integrations.
This Using Remote Depots article describes how to map the remote depot as read-only. Is this the only solution, i.e. to set up mappings on both servers? If so, it means I could not use a single branch spec to do two-way integrations.
From reading the Perforce knowledge base, I believe the preferred/suggested solution is for each server to do the integrate from the read-only remote depot.
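For illustration, the setup on each side looks roughly like this; the depot name, server address, and paths are invented, and each server defines its own remote depot spec pointing at the other:

    # On serverB, define a read-only remote depot that points at serverA
    p4 depot from-serverA        # opens the depot spec in your editor; fill in:
    #   Depot:   from-serverA
    #   Type:    remote
    #   Address: serverA.example.com:1666
    #   Map:     //depot/...

    # Integrate from the read-only remote depot into a local path, then resolve and submit
    p4 integrate //from-serverA/projectX/... //depot/projectX/...
    p4 resolve -am
    p4 submit -d "Code drop from serverA"

serverA would mirror this with its own remote depot pointing at serverB, which is why a single shared branch spec can't drive both directions.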
This is a by-design limitation of Perforce, because the metadata is only available to the local server; e.g. serverA:1666 does not know about commands performed by a user on serverB:1666 (as explained in the case study at the bottom of this article).
Also the point regarding performance is absolutely true; our server was hammered this afternoon during a code drop from a remote depot. All we could do was wait until the integrate/diff was complete.
To find out what is happening on your server, use the command p4 monitor show to display its current workload.
I have a website using cPanel on a dedicated account, and I would like to be able to automatically sync the website to a second hosting company, or perhaps to a local (in-house) server.
Basically this is a type of replication. The website is database-driven (MySQL), so ideally it would sync everything (content, database, emails, etc.), but the most important parts are the website files and the database.
I'm not so much looking for a fail-over solution as an automatic replication solution, so that if the primary site (server) goes offline, I can manually bring up the replicated site quickly.
I'm familiar with tools like unison and rsync, but these only sync files and don't cope well with database files that are open and being written to.
Don't use one tool when two are better: use rsync for the files, but use replication for MySQL.
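As a minimal sketch (hostnames and paths are made up), the file side can be a cron-driven rsync push from the primary to the standby, while MySQL replication keeps the database current on its own:

    # Push the site files to the standby over SSH; -a preserves permissions and timestamps,
    # -z compresses in transit, --delete removes files that no longer exist on the primary.
    rsync -az --delete -e ssh /home/site/public_html/ backup@standby.example.com:/home/site/public_html/

The MySQL side is then a standard master-to-replica setup (the replica on the standby host replays the primary's binary log), so you never rsync the live database files themselves.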
If, for some reason, you don't want to use replication, you might want to consider using DRBD. This is of course only applicable if you're running Linux. DRBD has been part of the mainline kernel since version 2.6.33.
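For orientation, a DRBD resource is defined with the same config on both nodes, along these lines (hostnames, devices, and addresses below are purely illustrative); the replicated block device is then mounted on the active node and used for the MySQL data directory:

    resource r0 {
        protocol C;                    # synchronous replication between the two nodes
        on web1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.10:7789;
            meta-disk internal;
        }
        on web2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.11:7789;
            meta-disk internal;
        }
    }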
And yes, I am aware of at least one large enterprise deployment of DRBD which is used, among other things, to store MySQL database files. In fact, the MySQL website even has a relevant page on this topic.
You might also want to Google for articles against the DRBD/MySQL combination; I remember reading a few posts along those lines.