Couchbase continuous replication limitation - couchbase

I'm using Couchbase Lite in a mobile app. Just wondering:
is there any limitation if I create a continuous replication to sync data between the mobile device and the server?
How many concurrent continuous replications can be handled by one CouchDB or Couchbase server?
If the limit is reached, what will happen on the mobile side and on the server side?
Thanks! Frank

Related

Is it mandatory to use Couchbase at both client and server end for seamless sync operations?

I want to know how to sync Couchbase with other databases seamlessly. Can we use different databases with Couchbase in the same project?
As you haven't specified which databases you have in mind, I will give you a broad answer:
Mobile: Couchbase can be synced with Couchbase Lite (https://www.couchbase.com/products/lite) via Sync Gateway, the middleware between Couchbase Lite and Couchbase Server. Sync Gateway is mandatory in this case for security reasons, as you should not simply expose your database on the web.
Xamarin: https://blog.couchbase.com/synchronized-drawing-apps-with-couchbase-mobile/
Android: https://docs.couchbase.com/couchbase-lite/current/java-android.html
Swift: https://docs.couchbase.com/couchbase-lite/current/swift.html
Java: https://docs.couchbase.com/couchbase-lite/current/java-platform.html
Others: https://docs.couchbase.com/couchbase-lite/current/index.html
Couchbase Lite 1.x could also be synced with PouchDB, but we dropped this support in Couchbase Lite 2.x as we rewrote the whole thing, and this is a feature yet to come back.
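To make the continuous replication part concrete, here is a minimal sketch using the Couchbase Lite 2.x Java API; the Sync Gateway URL, database name and credentials are placeholders, not values from this thread:

import java.net.URI;
import com.couchbase.lite.*;

// Minimal sketch: continuous push/pull replication between a local Couchbase Lite
// database and Sync Gateway. Endpoint, database name and credentials are placeholders.
public class ContinuousSync {
    public static void main(String[] args) throws Exception {
        CouchbaseLite.init();                       // one-time initialization (Java platform)
        Database db = new Database("localdb");

        ReplicatorConfiguration config = new ReplicatorConfiguration(
                db, new URLEndpoint(new URI("ws://sync-gateway.example.com:4984/mybucket")));
        config.setContinuous(true);                 // keep replicating while the app runs
        config.setReplicatorType(ReplicatorConfiguration.ReplicatorType.PUSH_AND_PULL);
        config.setAuthenticator(new BasicAuthenticator("sync_user", "secret"));

        Replicator replicator = new Replicator(config);
        replicator.addChangeListener(change ->      // observe offline/online transitions
                System.out.println("Replicator state: " + change.getStatus().getActivityLevel()));
        replicator.start();
    }
}

In practice a continuous replicator simply drops to an offline state when the server is unreachable and keeps retrying with a backoff, which is also the short answer to "what happens on the mobile side" when the server is saturated or down.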
Server: One of the most common ways to sync Couchbase Server with another database is through the Kafka Connector https://docs.couchbase.com/kafka-connector/current/index.html
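As a rough illustration of that pattern (this is not the connector's own configuration), a downstream consumer can read the change events the Couchbase source connector publishes to a Kafka topic and apply them to the other database; the broker address, topic name and group id below are assumptions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch: consume the change events written to Kafka by the Couchbase source
// connector and hand them to whatever target database needs to stay in sync.
public class CouchbaseChangeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092");
        props.put("group.id", "couchbase-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("couchbase-mutations"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // record.key() is the document ID, record.value() the mutation payload;
                    // upsert or delete the corresponding row/document in the target store here.
                    System.out.printf("doc %s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}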

Highly available LAMP-based web service to cope with link failure

Recently I started a project that aims at decentralizing a Moodle e-learning web server to cope with link failures. Here's a detailed description:
The connection here (a rural area in Africa) is fragile (60-70% uptime), which is the main problem in this project. Our goal is to enable students to access course content as much as possible.
Thus, I'm thinking of having a local server that constantly caches web content and provides access during downtime. However, due to the interactive nature of online learning (discussion boards, quizzes, etc.), the synchronization should be bi-directional between master and slaves. In addition, a slave server should be transparent to end users, record all interactions locally, and update the master server once the link has recovered (race conditions and conflicts need to be resolved intelligently). These slave servers will be deployed on either a Raspberry Pi or another low-power platform powered by solar. Load balancing would be a bonus.
In short, the system should share characteristics of a web cluster and of database replication, but with an emphasis on disconnected operation. Weak consistency is acceptable.
I've been looking into these areas:
CODA file system
Content Delivery Network
Apache2 web server cluster
MySQL cluster
However, most of them focus mainly on scalability and increasing throughput, which are the current trends in networking but not the main concerns in my project.
I'm still struggling to find a suitable mechanism/scheme and would appreciate any advice!
Thanks in advance!

Single-store CRM application to multi-store CRM - single local database server to multi-store, multi-location

My company has a desktop application developed in VB.NET using DevExpress controls. The back-end database is MySQL.
The company is in retail and has 2 retail stores in the same city. Both stores always stay busy and customers are always waiting at the counter. Basically, it is a desktop-based CRM application with many modules: apart from the invoice/receipt module, it has modules for delivery, installation, service/repair, accounts receivable, and many more used by various back-office departments of the company. Other resources/hardware, such as a barcode printer, receipt printer, and barcode scanner, are connected to the CRM on the desktop PC.
Currently, there are around 55 clients constantly connected to the server and using the application.
Problem:
Until a couple of weeks ago, the company had no issues using this desktop application and a single MySQL server, as all clients were connected via LAN or WLAN.
Now the situation has changed and a new requirement has arisen: the company has planned to open new stores very far away. Such stores cannot be connected to the current central database via LAN or WLAN. Each new branch would have around 20-30 clients, say "Branch Clients".
Also, there would be field executives working from their laptops, say "Remote Clients". They will have just a 3G internet connection on their laptops.
Thought 1: Install the desktop application on all branch PCs, and connect them to the central MySQL database server over the internet.
Not possible: a connection over the internet would be very slow for fetching such huge amounts of data. The data really is huge: for example, if a client opens "Customer Master", more than 600,000 rows are loaded, which takes a lot of bandwidth and time over the internet. And there are many more such modules which load a lot of data.
Also, if the internet connection is lost, clients would not be able to operate the application. Customers waiting in line for a receipt would go crazy if they had to wait long.
Thought 2: Install a new MySQL server at the branch store; all the desktop PCs would then be connected to that local branch server, and the local branch server would be connected to the central server via MySQL replication.
Not possible: since MySQL replication is limited to one-way replication, we cannot implement this structure. The application requires moving data from the central server to the branch server and from branch to central in real time. Also, MySQL replication can replicate with only one server, so we cannot replicate with multiple branch stores. There is the option of a cluster server, but the company cannot afford the licensing cost.
Thought 3: Somebody suggested that I should convert the entire desktop application into a web application and move the database to a cloud server.
Not possible: looking at the current requirements (fast access), environment (retail store POS) and hardware (printers, scanners) connected to the clients, I think it is not advisable to have a web application and a cloud database server. Also, in the event of no internet, the entire store would go down.
Thought 4: Somebody suggested that I should move from MySQL to MS SQL Server and keep the desktop application as it is. MS SQL Server has the capability to sync with multiple servers in real time over the internet, without limitations like MySQL's one-way replication and single replication connection.
I guess that, to get a faster and constant database connection, installing a local branch server is essential. But I don't know how those different branch servers could be connected to the central server.
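To make the branch-server idea more concrete, here is a minimal, hypothetical sketch in Java/JDBC of the "write locally, forward to central when the link is up" pattern; the table names, columns and credentials are made up for illustration:

import java.sql.*;

// Hypothetical sketch: every write is committed to the local branch database
// immediately and a copy is queued in an "outbox" table; a background worker
// forwards queued rows to the central server whenever the link is available,
// so counter staff never wait on the WAN. All names here are illustrative.
public class BranchSyncWorker {

    static final String CENTRAL_URL = "jdbc:mysql://central.example.com:3306/central_pos";

    // Called by the POS screen: commit locally and queue the row for upload.
    public static void saveReceiptLocally(Connection local, long receiptId, String payload) throws SQLException {
        local.setAutoCommit(false);
        try (PreparedStatement ins = local.prepareStatement(
                 "INSERT INTO receipts (id, payload) VALUES (?, ?)");
             PreparedStatement box = local.prepareStatement(
                 "INSERT INTO outbox (receipt_id, payload, synced) VALUES (?, ?, 0)")) {
            ins.setLong(1, receiptId); ins.setString(2, payload); ins.executeUpdate();
            box.setLong(1, receiptId); box.setString(2, payload); box.executeUpdate();
            local.commit();
        } catch (SQLException e) {
            local.rollback();
            throw e;
        }
    }

    // Background job: push unsynced rows to the central server; tolerate link failures.
    public static void pushOutbox(Connection local) {
        try (Connection central = DriverManager.getConnection(CENTRAL_URL, "sync_user", "secret");
             Statement sel = local.createStatement();
             ResultSet rs = sel.executeQuery("SELECT receipt_id, payload FROM outbox WHERE synced = 0")) {
            while (rs.next()) {
                try (PreparedStatement up = central.prepareStatement(
                         "INSERT INTO receipts (id, payload, branch) VALUES (?, ?, 'BRANCH_1')");
                     PreparedStatement mark = local.prepareStatement(
                         "UPDATE outbox SET synced = 1 WHERE receipt_id = ?")) {
                    up.setLong(1, rs.getLong(1)); up.setString(2, rs.getString(2)); up.executeUpdate();
                    mark.setLong(1, rs.getLong(1)); mark.executeUpdate();
                }
            }
        } catch (SQLException e) {
            // The internet link is down or the central server is unreachable:
            // leave the rows queued and let the next run retry.
        }
    }
}

The same idea in the opposite direction (central to branch) would cover price and stock updates; a real solution would also need conflict handling for rows edited in two places.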
My Questions:
• What is the best way to resolve the above issues under the given conditions and successfully fulfill the company's requirements: a fast and constant connection to the database server, plus real-time updates between all branches and the central server? If the internet connection is down, a delay in the real-time updates is acceptable, but clients should not be prevented from working.
• Would migrating from MySQL to MS SQL Server resolve the issue? Data migration itself is not a problem, as there are many tools available that convert a database from one platform to the other. But the issue is that the application is very large, with hundreds of queries written for MySQL. I guess I would have to change all those queries as well, because queries are not the same for MySQL and MS SQL. Do I have to change all the queries or just a small percentage of them? Or is there any tool available which converts queries from MySQL to MS SQL?
• In general, how do such small-to-medium retail companies set up their infrastructure and applications? Let me know some ideas.

Primary Server and Hot Standby Server architecture

I am now starting to look into building a proper architecture for an intranet network with one primary server and a secondary server that I would like to operate as a hot standby.
My knowledge of this is quite minimal and I am looking for guidelines and articles that would get me started.
The Server that needs to be replicated will run the following:
- Windows Server 2008 R2 OS
- MS SQL 2008 R2 Std
- IIS 7.0 that will run a web application built in ASP.NET
- Several background services, some of which write data to the database. These are .NET applications that were written in house, with no replication methodology.
My goal is to have the primary server's data constantly replicated to the secondary server so that, in case of failure, the secondary server can start acting as the main server ASAP.
My questions are:
1. What is the recommended hardware topology in this case? Besides the two server machines, do I need any extra hardware that will act as a DNS server to resolve routing to the correct server?
If not, how can this be done with software?
2. Database replication - I understand that I will need to use some sort of log shipping in order to synchronize the databases. What are the limitations and guidelines? I need to know if there is a tradeoff between good performance and having an up-to-date replica of the database. A good article would be helpful.
3. Considering that rewriting the services application to support running in some sort of "passive" mode and transmitting state-data between the servers is probably not possible, what should be done with those services on the secondary machine?
I think you have the wrong approach to this: instead of using a hot standby, you should use load balancing and clustering to provide availability.
My recommendation is to run the web application on both servers and use an IP load balancer to distribute requests between the two servers. If one of the servers becomes unavailable, user requests will no longer be routed to that server and users will not really notice that a disruption has occurred. You should try to make use of an existing load balancer in your company's infrastructure.
If you have more than two servers available, I would also recommend that you look at Windows Network Load Balancing (NLB), which is a feature included in Windows Server; read more about NLB at http://technet.microsoft.com/en-us/library/cc725691.aspx. But as NLB and failover clustering are not supported on the same servers, I cannot recommend that if you only have two servers.
For the database I would recommend that you use a 2-node active-passive database cluster, instead of deploying two separate SQL instances with replication between them. In a cluster configuration, SQL Server runs on a single server, but if that server has a problem SQL Server automatically switches over to the other server. Read more about SQL Server clustering at http://sql.starwindsoftware.com/sql-server-clustering-technology.
Implementing a clustering solution will require some sort of shared disk between the two servers: because both servers can be active instances, they have to be able to write to the same disks. If your organization has a SAN available, then that is the preferred choice for the shared disk.
But now comes the problem with the background services. If they cannot be modified you just have to come up with some mechanism to move them if a server fails. If the servers are monitored you could have a technician initiate a script which starts the services on the other server. Manual operations are never reliable, but if you cannot rewrite them you don't have much choice.
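If you do end up with that manual fallback, a tiny helper (just a sketch; the machine and service names are hypothetical) can wrap the remote start commands so the technician only has to run a single program:

import java.io.IOException;

// Sketch of the technician-initiated fallback: start the in-house background
// services on the standby machine using the standard Windows "sc" command.
// The machine name and service names below are hypothetical.
public class StartServicesOnStandby {
    private static final String STANDBY = "\\\\SERVERB";
    private static final String[] SERVICES = { "InHouseSyncService", "InHouseReportService" };

    public static void main(String[] args) throws IOException, InterruptedException {
        for (String service : SERVICES) {
            Process p = new ProcessBuilder("sc", STANDBY, "start", service)
                    .inheritIO()   // show sc output directly in the console
                    .start();
            System.out.println(service + " -> sc exit code " + p.waitFor());
        }
    }
}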
If you have two servers I recommend:
              HW IP Load Balancer
                       |
          -----------------------------------
          |                                 |
      SERVER A                          SERVER B
      ASP.NET web app                   ASP.NET web app
      SQL Server (active)               SQL Server (passive)
      Bg services (not running)         Bg services (running)
If you have four servers I would recommend:
              HW IP LB or Windows NLB
                       |
          -----------------------------------
          |                                 |
      SERVER A                          SERVER B
      ASP.NET web app                   ASP.NET web app
          |                                 |
          -----------------------------------
                       |
          -----------------------------------
          |                                 |
      SERVER C                          SERVER D
      SQL Server (active)               SQL Server (passive)
      Bg services (not running)         Bg services (running)

Hosting a MySQL DB on the cloud

I used to develop apps with classical hosting. Let's say we have an MVC app running on a MySQL database, hosted with a classical hosting company like GoDaddy.
My question is: my application seems very fast and handles concurrent connections well, but it will probably grow exponentially. So I am wondering whether keeping my application layer (app files) on GoDaddy and moving the database to the cloud, for example Amazon RDS, is possible. And if it is possible, will it make my app faster than it is now?
It is definitely possible, but it might not be the best solution: it depends on whether you are having a latency problem or a database query performance issue. Using AWS to host MySQL gives you an almost unlimited ability to scale up the performance of your DB operations, but if that is not the bottleneck, it won't do you much good. Hosting the web on GoDaddy and the DB on AWS will introduce additional latency between the web and the DB.
Personally, if you are thinking of moving part of your stack to AWS, you might as well move the web layer too: you'll get fast and scalable DB performance with none of the additional latency caused by using two hosts in disparate locations.
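Before moving anything, a quick measurement helps tell the two cases apart; this is only a rough sketch, and the JDBC URL, credentials and table name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Rough probe: compare the round trip of a trivial query (mostly network latency)
// with a representative heavy query (latency plus real query cost).
// URL, credentials and table name are placeholders.
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://your-db-host:3306/app_db";
        try (Connection con = DriverManager.getConnection(url, "app_user", "secret");
             Statement st = con.createStatement()) {

            long t0 = System.nanoTime();
            st.executeQuery("SELECT 1").close();
            System.out.printf("Round-trip latency: %.1f ms%n", (System.nanoTime() - t0) / 1e6);

            long t1 = System.nanoTime();
            st.executeQuery("SELECT COUNT(*) FROM your_largest_table").close();
            System.out.printf("Heavy query:        %.1f ms%n", (System.nanoTime() - t1) / 1e6);
        }
    }
}

If the trivial query already takes tens of milliseconds, moving only the database further from the web tier will make things worse; if the heavy query dominates, a larger RDS instance may actually help.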
I recommend deploying your web application on the same infrastructure where you are going to host your MySQL database. Amazon RDS provides a great way to scale your MySQL database up or down. As far as I know, the hosting provider you mention has its own infrastructure, so I would rather deploy your application in the same Amazon data center as the database.
However, you have more options than using AWS directly, as there are some PaaS (Platform as a Service) offerings which use Amazon as their IaaS. The benefit of using a PaaS is that you can scale your application up or down in the same way that you are thinking about scaling your MySQL database.