How to overcome Network Failure error in MySQL - mysql

I am developing a CI app for a client with MySQL as back end.
The client has 8 shops. For each shop, there is a local server, and additionally, there is one central server, which is placed at the Head Quarters (HQ).
The problem I am facing is:
At the time of a network failure at a shop, the billing and other
processes should keep working without the central server. Once the network is
back, they need it to sync with the HQ server.
To those voting to close: can you please say what details you need? I don't understand which part is unclear; please add it as a comment and I will provide it.

This is a common problem in retail environments. You should meet these requirements by keeping the basic data (e.g. items, promotions, parameters) in each store and setting up database synchronization between the local stores and the central DB.
If you have MySQL in each store and as the central DB, you can set up MySQL replication; otherwise take a look at SymmetricDS, which is, in short, the missing component that fits your scenario perfectly, since:
SymmetricDS is open source software for both file and database
synchronization with support for multi-master replication, filtered
synchronization, and transformation across the network in a
heterogeneous environment. It supports multiple subscribers with one
direction or bi-directional, asynchronous data replication. It uses
web and database technologies to replicate data as a scheduled or near
real-time operation. The software was designed to scale for a large
number of nodes, work across low-bandwidth connections, and withstand
periods of network outage. It works with most operating systems, file
systems, and databases, including Oracle, MySQL, MariaDB, PostgreSQL,
MS SQL Server (including Azure), IBM DB2, H2, HSQLDB, Derby, Firebird,
Interbase, Informix, Greenplum, SQLite (including Android), Sybase
ASE, and Sybase ASA (SQL Anywhere) databases.
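If you do go the plain MySQL replication route for pushing HQ base data (items, promotions, parameters) down to each shop, the shop-side setup is roughly the following sketch. Host, user, and password are placeholders, and it assumes GTID-based replication is enabled on both servers:

```sql
-- On each shop's local MySQL server (8.0.22+ syntax; older versions
-- use CHANGE MASTER TO / START SLAVE instead):
CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'hq.example.com',   -- placeholder HQ address
    SOURCE_USER = 'repl',
    SOURCE_PASSWORD = '...',
    SOURCE_AUTO_POSITION = 1;         -- requires GTIDs on both servers

START REPLICA;

-- Verify that both replication threads are running:
SHOW REPLICA STATUS\G
```

Note that classic MySQL replication is one-way: local sales written at the shop still need a separate upward sync to HQ, which is exactly where SymmetricDS's bi-directional support helps.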

Related

Applications down due to heavy MySQL server load

We have a 2GB DigitalOcean server dedicated to running MySQL for two other PHP servers. We are using Percona Server for MySQL 5.6 on this machine. We configured MySQL replication, and this configuration is working fine.
Our issue is that sometimes our site monitoring tools report that some of the URLs hosted on this server are down (maybe once every week or two). When I check, I can see that the MySQL master server load is very high (maybe 35-40), so the MySQL server is not responding. After that I usually do a MySQL service restart; this brings the server load back to normal, and the sites start working again after the restart.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal, and some custom applications).
Here are my questions,
Why does the server load go back down by itself after a spike happens?
Is there a way to tell whether the database is causing the issue, so that I can also identify the offending application?
How can I identify the root cause of this issue?
Depending upon your working dataset, a 2GB server providing access for 20-25 PHP applications (WordPress, Drupal, and some custom applications) could itself be the issue.
For example, if you have a 1.4GB buffer pool (assuming all tables are InnoDB) and 10GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
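To see whether the working set actually fits in memory, you can compare the configured buffer pool size against the on-disk size of the InnoDB data, for example:

```sql
-- Current buffer pool size (bytes):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Approximate InnoDB data + index size per schema (statistics-based,
-- so treat the numbers as estimates):
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024) AS total_mb
FROM information_schema.tables
WHERE engine = 'InnoDB'
GROUP BY table_schema
ORDER BY total_mb DESC;
```

If the total is several times the buffer pool size, page evictions and the resulting I/O are a plausible source of the load spikes.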
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something that you should concern yourself with when viewed in isolation.
The use of the uninterruptible state has since grown in the Linux
kernel, and nowadays includes uninterruptible lock primitives. If the
load average is a measure of demand in terms of running and waiting
threads (and not strictly threads wanting hardware resources), then
they are still working the way we want them to.
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
If the issue is happening once per week then it is starting to sound like a batch process, or cache expiration issue - too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM should give you the insight needed to find it (it also works with Oracle MySQL, MariaDB, Aurora, etc.). You can try a demo at https://pmmdemo.percona.com to see the insights that you can gain. The software is open source and free to use.
You can look in QAN (Query Analytics) to find the most expensive queries, whilst the Prometheus data gives an insight into the host itself. There are some recommendations for getting the most from PMM, depending upon your flavour of MySQL.
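Even before installing PMM, the server itself can point at the expensive queries. A sketch using the slow query log and the 5.6 performance_schema (the 1-second threshold is just an example):

```sql
-- Log statements slower than 1 second:
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;

-- Top 10 normalized statements by total execution time
-- (sum_timer_wait is in picoseconds, hence the 1e12 divisor):
SELECT digest_text,
       count_star,
       ROUND(sum_timer_wait / 1e12, 2) AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 10;
```

If the weekly spike correlates with one digest dominating total_seconds, that query (or the batch job issuing it) is your prime suspect.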

MySQL Group Replication or a Single Server is enough?

I'm planning to create a system which tracks visitors clicks into the database. I'm expecting around 1M inserts/day into the Database.
On the backend, I'll have an analytics system which will analyze all the data that's been collected over the days/weeks/months/years.
My question is: is it a practical approach to have 2 different MySQL servers + 1 web server? MySQL Server A would insert the clicks into its DB and would be connected to MySQL Server B by group replication, so that whenever I create reports etc. on MySQL Server B, it doesn't load Server A heavily.
These 2 database servers would then be connected to the web server, which would handle all the click requests and also display the backend reports.
Is it a practical solution, or is it better to have one bigger server to handle all the MySQL data? Or have multiple MySQL servers load balancing each other? Anything else perhaps?
1M inserts/day is not a high load by modern standards. That's less than 12 per second on average.
On sufficiently powerful servers with fast storage and proper tuning of MySQL options, you can expect to support at least 100x that load with a single MySQL server.
A better reason to use multiple MySQL servers is redundancy. Inevitably, any MySQL server needs to be upgraded, or you might have hardware failures and need to replace a disk, or other components. To avoid downtime, you should have a standby database server, which stays in sync with the primary server, either using MySQL replication or by disk-level replication like DRBD.
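A minimal sketch of keeping such a standby in sync with classic MySQL replication (pre-8.0.22 syntax; host, user, password, and binlog coordinates are placeholders):

```sql
-- On the standby, point replication at the primary:
CHANGE MASTER TO
    MASTER_HOST = 'primary.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS
    MASTER_LOG_POS  = 4;
START SLAVE;

-- Check health and lag:
SHOW SLAVE STATUS\G
-- Slave_IO_Running / Slave_SQL_Running should both be 'Yes';
-- Seconds_Behind_Master shows the current replication lag.
```

Monitoring Seconds_Behind_Master tells you how much data you would lose if you had to fail over to the standby at any given moment.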

Sync data from local db to Central SQL Server

I have a requirement to sync local DB data with a central SQL Server. The remote users (around 10 people) will be using laptops which will host the application and the local DB. The internet connection is not 24x7. During periods of no connectivity, the laptop user should be able to make changes in the local DB, and once the connection is restored, the data should be synced with the central SQL Server automatically. The sync usually involves just data updates. I have looked at two options: Sync Framework and merge replication. I can't use Sync Framework as I am not a C# expert. For merge replication, I believe additional hardware is required, which is not possible. The solution should be easy to develop and maintain.
Are there any other options available? Is it possible to use SSIS in this scenario?
I would use Merge replication for this scenario. I'm unaware of any "additional hardware" requirements.
SSIS could do this job but it does not give you any help out-of-the-box - you would be reinventing the wheel for a very common and complex scenario.
An idea...
The idea requires an intermediate database (an exchange database).
In the exchange database you have tables with data for each direction of synchronization, and you use change tracking on both the exchange DB and the central DB.
On the local database side, this could mean rows with flags:
the row was created in the local DB
the row came from the exchange DB
the row requires resynchronisation (when it is updated, etc.)
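The flags above could be modelled as a status column on each local table. A hypothetical sketch (table and column names are made up for illustration):

```sql
-- Hypothetical local table; sync_status encodes the three flags:
CREATE TABLE sale (
    sale_id     INT PRIMARY KEY,
    amount      DECIMAL(10,2) NOT NULL,
    updated_at  DATETIME NOT NULL,
    sync_status TINYINT NOT NULL DEFAULT 0
    -- 0 = created in the local DB, not yet sent
    -- 1 = received from the exchange DB
    -- 2 = updated locally, requires resynchronisation
);

-- Rows to push to the exchange DB on the next sync run:
SELECT * FROM sale WHERE sync_status IN (0, 2);
```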
Synchronisation between the local DB and the exchange DB:
When synchronizing, first send the data from the local DB (marked as created locally or requiring resynchronisation), then download the data from the exchange DB (marked by change tracking as changed).
Synchronisation between the exchange DB and the central DB is simple, based on change tracking in the database engine.
Read about Change Tracking here!
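Change tracking on the exchange and central databases can be enabled with built-in T-SQL. A sketch, with database and table names as placeholders:

```sql
-- Enable change tracking at the database level:
ALTER DATABASE ExchangeDb
SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

-- And per table:
ALTER TABLE dbo.Orders
ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = OFF);

-- On each sync run, read everything changed since the version your
-- sync job recorded last time (@last_sync_version):
SELECT ct.SYS_CHANGE_OPERATION, ct.OrderId, o.*
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId;

-- Then store the current version for the next run:
SELECT CHANGE_TRACKING_CURRENT_VERSION();
```

Deleted rows show up in CHANGETABLE with SYS_CHANGE_OPERATION = 'D' and no matching base row, which is why the LEFT JOIN matters.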

Is it possible to real-time synchronize 2 SQL Server databases

I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A goes down.
The application will remain unchanged, but the data in the DB changes constantly.
Is there a way to synchronize those 2 databases in real time automatically?
Currently I wait until all the users are gone so that I can manually back up and restore on the backup server.
Edit: When I said real-time I didn't mean it literally, I can handle up to one hour delay but the faster sync the better.
My databases are located on 2 servers on the same local network.
Both of them are SQL Server 2008; the main DB is on Windows Server 2008,
and the backup is on Windows Server 2003.
A web application (intranet) is using the DB.
I can use SQL Server Agent (if that can help).
I don't know what kind of details could be useful to solve this; kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer to Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability
solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of
transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time. This enables
applications that require scale-out of read operations to distribute
the reads from clients across multiple nodes. Because data is
maintained across the nodes in near real-time, peer-to-peer
replication provides data redundancy, which increases the availability
of data.
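Peer-to-peer replication is usually configured through the Configure Peer-to-Peer Topology wizard, but at its core it is an ordinary transactional publication with the peer-to-peer option enabled. A rough, hedged sketch (Enterprise Edition only; the database and publication names are placeholders):

```sql
USE MyDatabase;
EXEC sp_addpublication
    @publication = N'P2P_Publication',
    @enabled_for_p2p = N'true',
    @allow_initialize_from_backup = N'true';
-- Articles are then added with sp_addarticle, the same publication is
-- created on every node, and the wizard (or scripts) wires the peers
-- together.
```

Given your one-hour tolerance, also note the simpler alternatives the wizard-free world offers: log shipping or database mirroring would cover a warm standby without the conflict-handling caveats of peer-to-peer.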

Configuring Web Apps for Distributed Database

I have read MongoDB's replication docs and MySQL's Cluster page; however, I cannot figure out how to configure my web apps to connect to the database.
My apps will have connection information: database host, username, password, etc. However, even with multi-server support, do I need a big master with a fixed IP that distributes the load to the servers? And then, how can I prevent a single point of failure? Are there any common approaches to that problem?
Features such as MongoDB's replica sets are designed to enable automatic failover and recovery. These will help avoid single points of failure at the database level if properly configured. You don't need a separate "big master" to distribute the load; that is the gist of what replica sets provide. Your application connects using a database driver and generally does not need to be aware of the status of individual replicas. For critical writes in MongoDB you can request that the driver does a "safe" commit which requires data to be confirmed written to a minimum number of replicas.
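With MongoDB, for example, the application simply lists the replica-set members in its connection string; the driver handles discovery and failover, and the w=majority option requests the "safe" write acknowledgement mentioned above. A sketch with placeholder hosts and names:

```
mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017/mydb?replicaSet=rs0&w=majority
```

There is no fixed "big master" address here: if the current primary fails, the replica set elects a new one and the driver reconnects to it automatically.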
To be comprehensively insulated from server failures, you still have to consider other factors such as physical failure of disks, machines, or networking equipment and provision with appropriate redundancy. For example, your replica sets should be distributed across more than one server or instance. If all of those instances are in the same physical colocation facility, your single point of failure could still be the hopefully unlikely (but possible) case where the colocation facility loses power or network.