Google says NO triggers, NO stored procedures, NO views. Does this mean that the only thing I can dump (or import) is essentially a SHOW TABLES plus SELECT * FROM XXX? (!!!)
Which means that for a database with 10 tables and 100 triggers, stored procedures and views, I have to recreate almost everything by hand (either for import or for export)?
(My boss thinks I am tricking him. He cannot understand how my previous employers did that replication to a bunch of computers with two clicks, while I personally need hours (or even days) to do it with an internet giant like Google.)
EDIT:
We have applications which are being developed on local computers, where we use our local MySQL. These applications use MySQL databases which consist of, say, n tables and 10*n triggers. For the moment we cannot even evaluate google-cloud-sql, since that means almost everything (except the n almost-empty tables) must be "uploaded" by hand. Nor can we evaluate working against a google-cloud-sql DB, since that means almost everything (except the n almost-empty tables) must be "downloaded" by hand.
Until now we have done these uploads and downloads by taking a full mysqldump from the local or the "cloud" MySQL.
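For reference, this is roughly the dump step we run today: a minimal sketch (wrapped in Python purely for illustration) with placeholder host, user and database names. The --routines and --events flags pull stored procedures/functions and events into the dump on top of the tables and triggers, which is exactly the part we would otherwise have to recreate by hand:

    # Rough sketch of our current local dump step (placeholder credentials/names).
    # mysqldump includes triggers by default; --routines and --events add stored
    # procedures/functions and scheduled events, which plain table dumps omit.
    import subprocess

    def dump_database(host, user, password, database, out_file):
        cmd = [
            "mysqldump",
            "-h", host,
            "-u", user,
            f"-p{password}",          # sketch only; prefer a .my.cnf in practice
            "--routines",             # stored procedures and functions
            "--events",               # scheduled events
            "--triggers",             # explicit, although this is the default
            "--single-transaction",   # consistent snapshot for InnoDB tables
            database,
        ]
        with open(out_file, "w") as fh:
            subprocess.run(cmd, stdout=fh, check=True)

    if __name__ == "__main__":
        dump_database("localhost", "appuser", "secret", "appdb", "appdb-dump.sql")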
It's unclear what you are asking for. Do you want "replication" or "backups"? These are different concepts in MySQL.
If you want to replicate data to another MySQL instance, you can set up replication. This replication can be from a Cloud SQL instance, or to a Cloud SQL instance using the external master feature.
If you want to back up data to or from the server, check out these pages on importing data and exporting data.
As far as I understand, you want to create Cloud SQL replicas. There are several replica options described in the docs; use the one that fits you best.
However, if by "replica" you meant cloning a Cloud SQL instance, you can follow the steps to clone your instance into a new and independent instance.
Some of these procedures can be done through the GCP Console and can be scheduled.
Related
I want to fetch data from multiple MySQL databases which are on multiple servers.
I'm using phpMyAdmin (MySQL). All the databases are MySQL (same vendor) and sit on multiple servers. First I want to connect to those server databases, then fetch data from them, and then put the result into a central database.
For example: remote_db_1 on server 1, remote_db_2 on server 2, remote_db_3 on server 3, and I have a central database where I want to store the data that comes from the different databases.
Query: select count(user) from user where profile != 2; (the same query will be run against all the databases).
central_db.school_distrct_info_table:

    id | school_district_id | total_user
     1 |                  2 |         50
     2 |                 55 |        100
     3 |                100 |        200
I've tried the FEDERATED engine but it doesn't fit our requirements. What can be done in this situation: any tool, any alternative method, or anything else?
In the future the number of databases on different servers will grow; it might be 50, 100, maybe more, and exporting the tables from each source server and then loading them into the central DB will be a hard task. So I'm also looking for some kind of ETL tool which can fetch data directly from multiple source databases and then send the data to the destination database. In the central DB the table structure, data types and columns will all be different, and sometimes we might need to add an extra column to store some data. I know this can be achieved with an ETL tool; in the past I've used SSDT, which works with SQL Server, but here it is MySQL.
The easiest way to handle this problem is with federated servers. But, you say that won't work for you.
So, your next best way to handle the problem is to export the tables from the source servers and then load them into your central server. But that's much harder. This sort of operation is sometimes called extract / transform / load or ETL.
You'll write a program in the programming language of your choice (Python, PHP, Java, Perl, Node.js?) to connect to each database separately, query it, and then put the information into a central database.
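As a rough sketch of what such a program could look like, here is a Python version using PyMySQL; the hosts, credentials and the central table layout are placeholders based on the question, so adapt them to your own setup:

    # Sketch of a tiny ETL job: run the same count on every source server and
    # store the results centrally. Hosts, credentials and table names are
    # placeholders; adapt to your own schema.
    import pymysql

    SOURCES = [
        {"host": "server1", "db": "remote_db_1", "school_district_id": 2},
        {"host": "server2", "db": "remote_db_2", "school_district_id": 55},
        {"host": "server3", "db": "remote_db_3", "school_district_id": 100},
    ]

    def fetch_count(host, db):
        conn = pymysql.connect(host=host, user="etl", password="secret", database=db)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT COUNT(user) FROM user WHERE profile != 2")
                return cur.fetchone()[0]
        finally:
            conn.close()

    def store_centrally(rows):
        conn = pymysql.connect(host="central-server", user="etl",
                               password="secret", database="central_db")
        try:
            with conn.cursor() as cur:
                # assumes a UNIQUE key on school_district_id in the central table
                cur.executemany(
                    """INSERT INTO school_distrct_info_table
                           (school_district_id, total_user)
                       VALUES (%s, %s)
                       ON DUPLICATE KEY UPDATE total_user = VALUES(total_user)""",
                    rows,
                )
            conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        results = [(s["school_district_id"], fetch_count(s["host"], s["db"]))
                   for s in SOURCES]
        store_centrally(results)

You can run something like this from cron and simply extend SOURCES as new servers are added; any transformation (different columns, extra columns) goes between the fetch and the insert.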
Getting this working properly is, sad to say, incompatible with "really urgent". It's tricky to get working and to test.
May I suggest you write another question explaining why server federation won't meet your needs, and asking for help? Maybe somebody can help you configure it so it does. Then you'll have a chance to finish this project promptly.
I need to migrate part of a service from one back-end to another.
Company A has an application that connects to a .NET server on Windows with a SQL Server database. Company B has an application that does something very similar, but connects to a Node.js server on Debian with a MySQL database.
Initially some of the services of the application will be transferred from A to B. The structure of the two databases is similar, but not quite the same. The basis of the solution I am preparing is:
Copying current data from A to B (with appropriate modifications, since there are differences in tables).
Creating triggers for create, update and delete operations on database A, so that all changes on database A will be reflected in database B
Step 1 I have mostly figured out. There will be some work to do so that the differences between the tables are handled. I will be using Navicat to export tables from DB A into Excel spreadsheets. These spreadsheets will be modified to fit the structure of DB B and then imported into DB B, keeping table dependencies in mind.
For step 2, as I mentioned, I was considering using triggers. From what I have read about triggers, they would require both databases to be SQL Server or both to be MySQL, which is not the case here. I also tried this article, but could not make it work (server A is Windows and server B is Linux; the article does not seem to cover this case).
Any idea, correction or lead for performing this task? I have been thinking and googling but without making anything work.
PS: I am not asking for help writing the actual queries or statements that will replicate/update data from A to B, only for a general method that will work given that the databases are different products (SQL Server to MySQL), on servers with different operating systems (Windows to Debian), and are used by code in different languages (.NET + C# to Node.js).
PPS: Altering the code in server A to execute queries in both server A and server B is not an option.
PPPS: In the title I mention real-time replication. That would be ideal, but in the absence of an appropriate solution, replication at intervals is also an option, as long as it is reasonably correct. Triggers would manage real-time transfer, but in the worst case I could have a cron job on server B fetch the data from A, as long as I can tell which data to handle.
I'm researching something that I'd like to call replication, but there is probably some other technical word for it, since as far as I know "replication" means a complete replication of the structure and its data to slaves. I only want to replicate the structure. My terminology is probably wrong, which is why I can't seem to find answers on my own.
Is it possible to set up a mysql environment that replicates a master structure to multiple local databases when a change, addition or drop has been made? I'm looking for a solution where each user gets its own database instance with their own unique data but with the same structure of tables. When an update is being made to the master structure, the same procedure should be replicated by each user database.
E.g. a column is added to master.table1 and the same change is replicated to user1.table1 and user2.table1.
My first idea was to write an update procedure in PHP, but it feels like this would be a fairly fundamental function built into the database. My reasoning is that index lookups would be much faster with less data (roughly the total data divided by the number of users) and the setup probably more secure (no unfortunate leaks between users, if any).
I solved this problem with a simple set of SQL scripts, one for every change to the database, named year-month-day-description.sql, which I run in lexicographical order (that's why the name begins with the date).
Of course you do not want to run them all every time. So, to know which scripts still need to be executed, each script has a simple INSERT at its end, which records the script's filename in a table in the database. The updater PHP script then simply builds the list of script files, removes the ones already present in that table and runs the rest.
A good thing about this solution is that you can include data transformations too. It can also be fully automatic, and as long as the scripts are OK, nothing bad will happen.
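The same idea in a rough Python sketch (my real updater is PHP); the schema_changes table name, the scripts directory and the credentials are just placeholders:

    # Minimal migration runner: run every *.sql script that has not yet recorded
    # itself in the tracking table. Table/column names here are assumptions, e.g.:
    #   CREATE TABLE schema_changes (filename VARCHAR(255) PRIMARY KEY);
    # ...and every script ends with something like:
    #   INSERT INTO schema_changes (filename) VALUES ('2015-03-01-add-index.sql');
    import os
    import subprocess
    import pymysql

    SCRIPT_DIR = "db/changes"
    DB = {"host": "localhost", "user": "deploy", "password": "secret",
          "database": "appdb"}

    def already_applied():
        conn = pymysql.connect(**DB)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT filename FROM schema_changes")
                return {row[0] for row in cur.fetchall()}
        finally:
            conn.close()

    def run_script(path):
        # Use the mysql client so scripts may contain DELIMITER blocks
        # (triggers, procedures) that most drivers cannot parse.
        with open(path) as fh:
            subprocess.run(
                ["mysql", "-h", DB["host"], "-u", DB["user"],
                 f"-p{DB['password']}", DB["database"]],
                stdin=fh, check=True)

    if __name__ == "__main__":
        done = already_applied()
        for name in sorted(os.listdir(SCRIPT_DIR)):   # lexicographical = date order
            if name.endswith(".sql") and name not in done:
                run_script(os.path.join(SCRIPT_DIR, name))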
You will probably need to look into incorporating the use of database "migrations", something popularized by the Ruby on Rails framework. This Google search for PHP database migrations might be a good starting point for you.
The concept is that as you develop your application and make schema changes, you create SQL migration scripts to roll forward or roll back the schema changes. This makes it easy to "migrate" your database schema to work with a particular code version (for example, if you have branched code being worked on in multiple environments that each need a different version of the database).
That isn't going to automatically push updates the way you suggest, but it is certainly a step in the right direction. There are also tools like Toad for MySQL and Navicat which have some level of support for schema synchronization, but again these would be manual comparisons/syncs.
We are running a Java PoS (Point of Sale) application at various shops, with a MySql backend. I want to keep the databases in the shops synchronised with a database on a host server.
When some changes happen in a shop, they should get updated on the host server. How do I achieve this?
Replication is not very hard to create.
Here are some good tutorials:
http://www.ghacks.net/2009/04/09/set-up-mysql-database-replication/
http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html
http://www.lassosoft.com/Beginners-Guide-to-MySQL-Replication
Here are some simple rules you will have to keep in mind (there are more, of course, but this is the main concept):
Set up one server (the master) for writing data.
Set up one or more servers (slaves) for reading data.
This way, you will avoid errors.
For example:
If your script inserts into the same tables on both the master and a slave, you will get duplicate primary key conflicts.
You can view the "slave" as a "backup" server which holds the same information as the master but does not accept data directly; it only follows the master server's instructions.
NOTE: Of course you can read from the master, and you can write to the slave, but make sure you don't write to the same tables from both sides (master to slave and slave to master).
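For reference, the slave side boils down to a few statements. Here is a minimal sketch driving them from Python with PyMySQL; the host names, credentials and binlog coordinates are placeholders you would take from SHOW MASTER STATUS on the master:

    # Sketch: point a fresh slave at the master and start replicating.
    # On the master you would first create the replication account, e.g.:
    #   CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
    #   GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
    #   SHOW MASTER STATUS;   -- note File and Position for use below
    import pymysql

    conn = pymysql.connect(host="slave-host", user="root", password="secret")
    try:
        with conn.cursor() as cur:
            # if the slave was already replicating, run STOP SLAVE first
            cur.execute(
                "CHANGE MASTER TO"
                " MASTER_HOST='master-host',"
                " MASTER_USER='repl',"
                " MASTER_PASSWORD='repl_password',"
                " MASTER_LOG_FILE='mysql-bin.000001',"  # File from SHOW MASTER STATUS
                " MASTER_LOG_POS=4"                     # Position from SHOW MASTER STATUS
            )
            cur.execute("START SLAVE")
            cur.execute("SHOW SLAVE STATUS")
            print(cur.fetchone())   # check Slave_IO_Running / Slave_SQL_Running
    finally:
        conn.close()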
I would recommend monitoring your servers to make sure everything is fine.
Let me know if you need additional help.
Three different approaches:
Classic client/server approach: don't put any database in the shops; simply have the applications access your server. Of course it's better if you set up a VPN, but simply wrapping the connection in SSL or ssh is reasonable. Pro: it's the way client/server databases were originally meant to be used. Con: if you have high latency, complex operations could get slow; you might have to use stored procedures to reduce the number of round trips.
Replicated master/master: as @Book Of Zeus suggested. Cons: somewhat more complex to set up (especially if you have several shops), and a break-in on any shop machine could potentially compromise the whole system. Pros: better responsiveness, as read operations are entirely local and write operations are propagated asynchronously.
Offline operations + sync step: do all the work locally and, from time to time (might be once an hour, daily, weekly, whatever), write a summary with all the new/modified records since the last sync operation and send it to the server (a rough sketch follows below). Pros: can work without a network connection, fast, easy to check (if the summary is readable). Cons: you don't have real-time information.
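For the third option, here is a rough sketch of such a sync job, assuming each synced table has a primary key and an updated_at timestamp that is maintained on every insert/update, and that the shop and host schemas match; table names, columns and credentials are placeholders:

    # Sketch of a periodic "push new/changed rows to the host" job for one table;
    # run it from cron at whatever interval suits you.
    import pymysql

    def sync_table(shop_conn, central_conn, table, columns, last_sync):
        col_list = ", ".join(columns)
        placeholders = ", ".join(["%s"] * len(columns))
        updates = ", ".join(f"{c}=VALUES({c})" for c in columns)

        with shop_conn.cursor() as cur:
            cur.execute(
                f"SELECT {col_list} FROM {table} WHERE updated_at > %s",
                (last_sync,))
            rows = cur.fetchall()

        if rows:
            with central_conn.cursor() as cur:
                cur.executemany(
                    f"INSERT INTO {table} ({col_list}) VALUES ({placeholders}) "
                    f"ON DUPLICATE KEY UPDATE {updates}",
                    rows)
            central_conn.commit()
        return len(rows)

    if __name__ == "__main__":
        shop_conn = pymysql.connect(host="localhost", user="pos",
                                    password="secret", database="shopdb")
        central_conn = pymysql.connect(host="head-office", user="sync",
                                       password="secret", database="centraldb")
        n = sync_table(shop_conn, central_conn, "sales",
                       ["id", "shop_id", "amount", "updated_at"],
                       "2015-01-01 00:00:00")  # persist the real watermark somewhere
        print(f"pushed {n} rows")

Deletes need extra handling (soft deletes or a journal table), and the watermark should be persisted per table so a failed run can be retried safely.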
SymmetricDS is the answer. It supports multiple subscribers with one direction or bi-directional asynchronous data replication. It uses web and database technologies to replicate tables between relational databases, in near real time if desired.
It also has a comprehensive and robust Java API to suit your needs.
Have a look at the Schema and Data Comparison tools in dbForge Studio for MySQL. These tools will help you compare two databases, see the differences, generate a synchronization script and synchronize them.
We have a client that needs to set up N local databases, each one containing one site's data, and then have a master corporate database containing the union of all N databases. Changes in an individual site database need to be propagated to the master database, and changes in the master database need to be propagated to the appropriate individual site database.
We've been using MySQL replication for a client that needs two databases kept simultaneously up to date. That's bidirectional replication. If we tried exactly the same approach here, we would wind up with all N local databases equivalent to the master database, and that's not what we want. Not only should each individual site be unable to see data from the other sites; sending that data N times from the master instead of just once is probably also a huge waste.
What are my options for accomplishing this new star pattern with MySQL? I know we can replicate only certain tables, but is there a way to filter the replication by records?
Are there any tools that would help or competing RDBMSes that would be better to look at?
SymmetricDS would work for this. It is web-enabled, database independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale for a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.
We have used it to synchronize 1000+ MySQL retail store databases to an Oracle corporate database.
I've done this before, and AFAIK this is the easiest way. You should look into using Microsoft SQL Server Merge Replication with Row Filtering. Your row filtering would be set up on a column that states which individual site destination the row should go to.
For example, your tables might look like this:
ID_column | column2 | destination
The data in the table might look like this:
12345 | 'data' | 'site1'
You would then set your merge replication "subscriber" site1 to filter on column 'destination' and value 'site1'.
This article will probably help:
Filtering Published Data for Merge Replication
There is also an article on msdn called "Enhancing Merge Replication Performance" which may help - and also you will need to learn the basics of setting up publishers and subscribers in SQL Server merge replication.
Good luck!
It might be worth a look at mysql-table-sync from Maatkit, which lets you sync tables with an optional --where clause.
If you need unidirectional replication, then use multiple copies of the databases, replicated at the center of the star, and a custom "bridge" application to move the data further into the final one.
Just a random pointer: Oracle Lite supports this. I evaluated it once for a similar task; however, it needs something installed on all clients, which was not an option.
A rough architecture overview can be found here
Short answer: no, you should redesign.
Long answer: yes, but it's pretty crazy and will be a real pain to set up and manage.
One way would be to round-robin the main database's replication among the sites: use a script to replicate from one site for, say, 30 seconds, record how far it got, and then move on to the next site (sketched below). You may wish to look at replicate-do-db and friends to limit what is replicated.
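To illustrate the idea only (this is not production code), a sketch of what such a round-robin script could look like, using PyMySQL with placeholder hosts and credentials and tracking per-site binlog coordinates via SHOW SLAVE STATUS:

    # Sketch: rotate one central slave between N site masters, replicating from
    # each for a fixed window and remembering where it stopped. Hosts,
    # credentials and the state file are placeholders.
    import json
    import time
    import pymysql
    import pymysql.cursors

    SITES = {
        "site1": {"host": "site1.example.com"},
        "site2": {"host": "site2.example.com"},
    }
    STATE_FILE = "replication_positions.json"   # per-site binlog coordinates
    WINDOW_SECONDS = 30

    def load_state():
        try:
            with open(STATE_FILE) as fh:
                return json.load(fh)
        except FileNotFoundError:
            return {}

    def save_state(state):
        with open(STATE_FILE, "w") as fh:
            json.dump(state, fh)

    def replicate_round(central, state):
        for name, site in SITES.items():
            pos = state.get(name)
            with central.cursor() as cur:
                cur.execute("STOP SLAVE")   # harmless if the slave threads are stopped
                change = (f"CHANGE MASTER TO MASTER_HOST='{site['host']}', "
                          f"MASTER_USER='repl', MASTER_PASSWORD='repl_password'")
                if pos:
                    # resume where we left off on the previous pass
                    change += (f", MASTER_LOG_FILE='{pos['file']}', "
                               f"MASTER_LOG_POS={int(pos['pos'])}")
                cur.execute(change)
                cur.execute("START SLAVE")
            time.sleep(WINDOW_SECONDS)
            with central.cursor(pymysql.cursors.DictCursor) as cur:
                cur.execute("STOP SLAVE")
                cur.execute("SHOW SLAVE STATUS")
                status = cur.fetchone()
            state[name] = {"file": status["Relay_Master_Log_File"],
                           "pos": status["Exec_Master_Log_Pos"]}
            save_state(state)

    if __name__ == "__main__":
        central = pymysql.connect(host="localhost", user="root", password="secret")
        replicate_round(central, load_state())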
Another option, which I'm not sure would work, is to have N MySQL instances in the main office, each replicating from one of the site offices, and then use the FEDERATED storage engine to provide a common view from the main database into the per-site slaves. The site slaves can replicate from the main database and pick up whichever changes they need.
Sounds like you need some specialist assistance - and I'm probably not it.
How 'real-time' does this replication need to be?
Some sort of ETL process (or processes) is possibly an option. We use MS SSIS and Oracle in-house; SSIS seems to be fairly good for ETL-type work (but I don't work at that specific coal face, so I can't really say).
How volatile is the data? Would you say the data is mostly operational / transactional?
What sort of data volumes are you talking about?
Is the central master also used as a local DB for the office where it is located? If it is, you might want to change that: have head office work just like a remote office, so that you can treat all offices the same; you'll often run into problems/anomalies if different sites are treated differently.
It sounds like you would be better served by stepping outside of a direct database structure for this.
I don't have a detailed answer for you, but this is the high level of what I would do:
I would select from each database a list of changes during the past (reasonable) time frame, construct the insert and delete statements that would unify all of the data in the "big" database, and then build separate, smaller sets of insert and delete statements for each of the specific databases.
I would then run these.
There is a potential for 'merge' issues with this setup if there is any overlap with data coming in and out.
There is also the issue of data being lost or duplicated because your time frames were not constructed properly.