Syncing two databases - yii2

For example, I have 2 web sites on 2 different hosting accounts, and when a new record is inserted into the first database it should also be inserted into the second database.
The problem is that these can be completely different hosting accounts and there can be problems with access; the main purpose is to see sales statistics. Or maybe there are other alternatives for how to do this? Thanks in advance.

You can do this by setting up a master-slave MySQL replication pair;
any change on the master will automatically be reflected on the slave - you don't have to do this manually.
Follow the docs:
https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
https://support.rackspace.com/how-to/set-up-mysql-master-slave-replication/
https://documentation.red-gate.com/sc9/worked-examples/worked-example-comparing-and-synchronizing-two-databases
and many more; a quick search will turn up others.
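As a rough illustration of what those tutorials walk through (the host name, account, password, and log coordinates below are placeholders, not values from this question):

    -- Both servers need server-id set, and the master needs log_bin enabled
    -- in my.cnf before this will work (see the linked tutorials).

    -- On the master: create an account the slave will replicate as.
    CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-password';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- Note the current binary log file and position.
    SHOW MASTER STATUS;

    -- On the slave: point it at the master using those coordinates.
    CHANGE MASTER TO
        MASTER_HOST     = 'master.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'choose-a-password',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;

    START SLAVE;

    -- Check that both replication threads are running.
    SHOW SLAVE STATUS\G

Once this is in place, every insert on the first site's database shows up on the second one without any application code.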

Related

How to use more than one MySQL master server in Google Cloud Platform?

I'm looking for the best solution that suits my requirements. I would like to use MySQL with a lot of instances, so I need to be able to add as many master servers with slave servers as might be needed in the future. There will also be sharding. Currently I've found out that GCP doesn't allow you to add more than one master server to a running instance. If so, what can I do then? I need to create 3 or more master servers and add slave servers to them. And if there is a new row in one of the master servers, the 3 slaves will receive that row and everything will be synchronized, so I'll be able to do a simple SELECT query on one of these slaves to get the actual data. I'm sorry for my English, I'm not a native speaker :)
What you are looking for is called a read replica. Using Google Cloud SQL for MySQL will let you implement a setup like the one you are describing, deploying multiple read replicas really fast.
For the sharding part, you just need to deploy multiple masters, each with its own read replicas, and implement the logic in your application to find the data in the right instance, as sketched below.
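One common pattern for that application-side lookup (the directory table, instance names, and ID ranges below are assumptions for illustration, not part of Cloud SQL itself) is a small shard directory that maps the sharding key to the instance holding that row:

    -- Hypothetical shard directory kept on one small, central instance.
    CREATE TABLE shard_directory (
        shard_id      INT PRIMARY KEY,
        instance_name VARCHAR(64) NOT NULL,  -- e.g. a Cloud SQL connection name
        min_customer  BIGINT NOT NULL,
        max_customer  BIGINT NOT NULL
    );

    INSERT INTO shard_directory VALUES
        (1, 'project:region:master-1',       1, 1000000),
        (2, 'project:region:master-2', 1000001, 2000000),
        (3, 'project:region:master-3', 2000001, 3000000);

    -- The application asks which master holds customer 1234567, then sends
    -- writes to that master and SELECTs to one of its read replicas.
    SELECT instance_name
    FROM shard_directory
    WHERE 1234567 BETWEEN min_customer AND max_customer;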

how to easily replicate mysql database to and from google-cloud-sql?

Google says NO triggers, NO stored procedures, No views. This means the only thing I can dump (or import) is just a SHOW TABLES and SELECT * FROM XXX? (!!!).
Which means for a database with 10 tables and 100 triggers, stored procedures and views I have to recreate, by hand, almost everything? (either for import or for export).
(My boss thinks I am tricking him. He cannot understand how my previous employers did that replication to a bunch of computers with two clicks, while I personally need hours (or even days) to do this with an internet giant like Google.)
EDIT:
We have applications which are being created on local computers, where we use our local MySQL. These applications use MySQL DBs which consist of, say, n tables and 10*n triggers. For the moment we cannot even evaluate google-cloud-sql, since that means almost everything (except the n almost empty tables) must be "uploaded" by hand. And we also cannot evaluate an existing google-cloud-sql DB, since that means almost everything (except the n almost empty tables) must be "downloaded" by hand.
Until now we have done these "up-down"-loads by taking a decent mysqldump from the local or the "cloud" MySQL.
It's unclear what you are asking for. Do you want "replication" or "backups"? These are different concepts in MySQL.
If you want to replicate data to another MySQL instance, you can set up replication. This replication can be from a Cloud SQL instance, or to a Cloud SQL instance using the external master feature.
If you want to back up data to or from the server, check out these pages on importing data and exporting data.
As far as I understood, you want to Create Cloud SQL Replicas. There are a bunch of replica options in the docs; use the one that fits you best.
However, if by "replica" you meant Cloning a Cloud SQL instance, you can follow the steps to clone your instance into a new and independent instance.
Some of these tasks can be done from the GCP Console and can be scheduled.

2 tables in 2 different databases with different structures but the same type of data to be synced

My problem is I have a website that customers place orders on. That information goes into the orders, ordersProducts, etc. tables. I have a reporting database on a DIFFERENT server where my staff will be processing the orders. The tables on that server will need the order information AND additional columns so staff can add extra information and update current information.
What is the best way to get information from the one server (order website) to the other (reporting website) efficiently without the risk of data loss? Also I do not want the reporting database to be connecting to the website to get information. I would like to implement a solution on the order website to PUSH data.
THOUGHTS
MySQL replication - Problem: replicated tables are strictly for reporting and not manipulation. For example, what if a customer address changes, or products need to be added to an order? This would mess up the replicated table.
Double inserts - Insert into the local tables and then insert into the reporting database. Problem: if for whatever reason the reporting database goes down, there is a chance I lose data because the MySQL connection won't be able to push the data. Implement some sort of query log? (A rough sketch of this idea follows below.)
Both servers use MySQL and PHP.
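A minimal sketch of the query-log idea from the second thought, assuming a hypothetical outbox table and example order columns (none of these names come from the question's schema): each order write also records a row locally, and a background job pushes unsent rows to the reporting server whenever it is reachable, so an outage cannot lose data.

    -- Local outbox table on the order website's database (MySQL 5.7+ for JSON).
    CREATE TABLE reporting_outbox (
        id         BIGINT AUTO_INCREMENT PRIMARY KEY,
        order_id   BIGINT NOT NULL,
        payload    JSON NOT NULL,                           -- row data to push
        created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
        sent_at    DATETIME NULL                            -- NULL until pushed
    );

    -- The order and its outbox entry are written in one local transaction.
    START TRANSACTION;
    INSERT INTO orders (customer_id, total) VALUES (17, 99.95);  -- example columns
    INSERT INTO reporting_outbox (order_id, payload)
        VALUES (LAST_INSERT_ID(), JSON_OBJECT('status', 'new'));
    COMMIT;

    -- A cron job pushes rows WHERE sent_at IS NULL to the reporting server,
    -- then marks each one as delivered:
    UPDATE reporting_outbox SET sent_at = NOW() WHERE id = 42;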
MySQL replication sounds exactly like what you are looking for; I'm not sure I understand what you've listed as a disadvantage there.
The solution to me sounds like a master with a read-only slave, where the slave is the reporting database. If your concern is that changes to the master would put the slave out of sync, this shouldn't be much of an issue: all changes will be synced over. If connectivity is lost, the slave tracks how many seconds it is behind the master and replays the changes until the two are back in sync.
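That lag figure is visible directly on the slave; a quick way to check it, and, if you go with the read-only slave, to enforce that on the reporting copy (standard MySQL, nothing specific to this setup):

    -- Run on the reporting (slave) server.
    -- Seconds_Behind_Master shows the replication lag in seconds;
    -- Slave_IO_Running and Slave_SQL_Running should both say "Yes".
    SHOW SLAVE STATUS\G

    -- Optional: block ordinary accounts from modifying the replicated copy
    -- (the replication threads are not affected by this flag).
    SET GLOBAL read_only = 1;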

MySQL joins across databases on different servers

So, I have an existing DB with some tables for a class of users. We're building a more general app to handle multiple things the company does, and this class of users, call them hosts, is a general type used by multiple programs in our company. We want to (eventually) migrate into a centralized app, as we now have several. However, we don't have the time to do it completely right now. I need to build a login system for these hosts and I'd like to begin migrating to this new system with that. I can't figure out a reasonable way to move those tables from the legacy DB to the new DB, which (of course) resides on a different server, without wanting to stab my own eyes out after 30 seconds of dealing with this. The legacy DB has many reports that rely on joining against the current hosts tables.
The only things I can come up with don't seem like very good ideas: writing to both DBs from both apps (pointless data duplication prone to syncing problems), or providing an API from the new app and mashing the data coming back together with record sets (which just seems... wrong).
Anyone have any ideas how to deal with this?
It has its limitations, but the FEDERATED storage engine might be of assistance.
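A minimal sketch of what that looks like, assuming a hosts table in the legacy database and a local table to join against (the host, credentials, and columns below are placeholders): the FEDERATED table on the new server is just a proxy whose definition must match the remote table column for column.

    -- The FEDERATED engine is disabled by default; enable it first
    -- (e.g. add "federated" to my.cnf and restart mysqld).

    -- On the new server: a local handle for the hosts table that actually
    -- lives in the legacy database on the other server.
    CREATE TABLE hosts (
        id   INT NOT NULL,
        name VARCHAR(255) NOT NULL,
        PRIMARY KEY (id)
    )
    ENGINE = FEDERATED
    CONNECTION = 'mysql://app_user:app_pass@legacy-db.example.com:3306/legacy_db/hosts';

    -- Queries touching the federated table fetch rows from the remote server;
    -- the join itself runs locally on the new server.
    SELECT h.name
    FROM hosts AS h
    JOIN logins AS l ON l.host_id = h.id;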

Can I set up a filtered, star-pattern database replication?

We have a client that needs to set up N local databases, each one containing one site's data, and then have a master corporate database containing the union of all N databases. Changes in an individual site database need to be propagated to the master database, and changes in the master database need to be propagated to the appropriate individual site database.
We've been using MySQL replication for a client that needs two databases that are kept simultaneously up to date. That's a bidirectional replication. If we tried exactly the same approach here we would wind up with all N local databases equivalent to the master database, and that's not what we want. Not only should each individual site not be able to see data from the other sites, but sending that data N times from the master instead of just once is probably a huge waste.
What are my options for accomplishing this new star pattern with MySQL? I know we can replicate only certain tables, but is there a way to filter the replication by records?
Are there any tools that would help or competing RDBMSes that would be better to look at?
SymmetricDS would work for this. It is web-enabled, database independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale for a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.
We have used it to synchronize 1000+ MySQL retail store databases to an Oracle corporate database.
I've done this before, and AFAIK this is the easiest way. You should look into using Microsoft SQL Server Merge Replication with Row Filtering. Your row filtering would be set up to have a column that states which individual site destination it should go to.
For example, your tables might look like this:
ID_column | column2 | destination
The data in the column might look like this:
12345 | 'data' | 'site1'
You would then set your merge replication "subscriber" site1 to filter on column 'destination' and value 'site1'.
This article will probably help:
Filtering Published Data for Merge Replication
There is also an article on msdn called "Enhancing Merge Replication Performance" which may help - and also you will need to learn the basics of setting up publishers and subscribers in SQL Server merge replication.
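For reference, the per-row filter described above ends up as a subset filter clause when the article is added to the merge publication; a rough sketch along those lines (the publication and object names are assumptions, and parameterized filters based on HOST_NAME() are the more common way to filter per subscriber rather than one static publication per site):

    -- SQL Server merge replication: publish the table with a static row filter
    -- so that this publication (one per site in this static approach) only
    -- carries site1's rows.
    EXEC sp_addmergearticle
        @publication         = N'SitesPublication',
        @article             = N'site_table',
        @source_object       = N'site_table',
        @subset_filterclause = N'destination = ''site1''';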
Good luck!
It might be worth a look at mysql-table-sync from Maatkit, which lets you sync tables with an optional --where clause.
If you need unidirectional replication, then use multiple copies of the databases replicated at the center of the star, and a custom "bridge" application to move the data further into the final one.
Just a random pointer: Oracle Lite supports this. I evaluated it once for a similar task; however, it needs something installed on all clients, which was not an option.
A rough architecture overview can be found here
Short answer: no, you should redesign.
Long answer: yes, but it's pretty crazy and will be a real pain to set up and manage.
One way would be to round-robin the main database's replication among the sites: use a script to replicate from a site for, say, 30 seconds, record how far it got, and then move on to the next site. You may wish to look at replicate-do-db and friends to limit what is replicated.
Another option, which I'm not sure would work, is to have N MySQL instances in the main office, each replicating from one of the site offices, and then use the FEDERATED storage engine to provide a common view from the main database into the per-site slaves. The site slaves can replicate from the main database and pick up whichever changes they need.
Sounds like you need some specialist assistance - and I'm probably not it.
How 'real-time' does this replication need to be?
Some sort of ETL process (or processes) is possibly an option. We use MS SSIS and Oracle in-house; SSIS seems to be fairly good for ETL-type work (but I don't work on that specific coal face so I can't really say).
How volatile is the data? Would you say the data is mostly operational / transactional?
What sort of data volumes are you talking about?
Is the central master also used as a local DB for the office where it is located? If it is, you might want to change that - have head office work just like a remote office - that way you can treat all offices the same; you'll often run into problems / anomalies if different sites are treated differently.
It sounds like you would be better served by stepping outside of a direct database structure for this.
I don't have a detailed answer for you, but this is the high level of what I would do:
I would select from each database a list of changes during the past (reasonable) time frame, construct the insert and delete statements that would unify all of the data on the 'big' database, and then construct separate, smaller sets of insert and delete statements for each of the specific databases.
I would then run these.
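A minimal sketch of that extraction-and-apply step, assuming each table carries a last-modified timestamp and rows are keyed by id (the table and column names here are invented for illustration):

    -- On a site database: pull rows changed since the last successful sync.
    SELECT id, site_id, payload, updated_at
    FROM site_data
    WHERE updated_at > '2024-01-01 00:00:00';  -- last sync time, recorded per site

    -- On the central database: insert new rows or overwrite existing ones.
    INSERT INTO central_data (id, site_id, payload, updated_at)
    VALUES (101, 3, 'example payload', '2024-01-02 10:30:00')
    ON DUPLICATE KEY UPDATE
        site_id    = VALUES(site_id),
        payload    = VALUES(payload),
        updated_at = VALUES(updated_at);

Deletes are the awkward part of this approach; they usually need a soft-delete flag or a separate deletions log, which is one source of the merge issues mentioned below.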
There is a potential for 'merge' issues with this setup if there is any overlap with data coming in and out.
There is also the issue of data being lost or duplicated because the time frames were not constructed properly.