Database structure replication in a "one-to-many database" - mysql

I'm researching something that I'd like to call replication, but there is probably some other technical word for it - since as far as I know "replication" is a complete replication of structure and its data to slaves. I only want the structure replication. My terminology is probably wrong which is why I can't seem to find answers on my own.
Is it possible to set up a MySQL environment that replicates a master structure to multiple local databases whenever a change, addition or drop is made? I'm looking for a solution where each user gets their own database instance with their own unique data, but with the same table structure. When an update is made to the master structure, the same change should be replicated to each user database.
E.g. a column is being added to master.table1 that is replicated by user1.table1 and user2.table1.
My first idea was to write an update procedure in PHP, but this feels like it should be a fairly fundamental function built into the database. My reasoning is that index lookups would be much faster with less data per database (roughly the total data divided by the number of users) and probably more secure (no unfortunate leaks between users, if any).

I solved this problem with a simple set of SQL scripts, one for every change to the database, named year-month-day-description.sql, which I run in lexicographical order (that's why the name begins with the date).
Of course you don't want to run them all every time. So to know which scripts need executing, each script ends with a simple INSERT that records its own filename in a table in the database. The updater PHP script then simply builds the list of script files, removes those already recorded in that table, and runs the rest.
A nice property of this solution is that you can include data transformations too. It can also be fully automatic, and as long as the scripts are OK, nothing bad will happen.
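The updater's core logic can be sketched in a few lines (Python here for brevity, filenames are illustrative; the real version would pipe each file through the mysql client and read the "applied" set from the bookkeeping table):

```python
import os
import tempfile

def pending_migrations(script_dir, applied):
    """Return the .sql scripts not yet applied, in lexicographic order
    (the year-month-day prefix makes that chronological)."""
    scripts = sorted(f for f in os.listdir(script_dir) if f.endswith(".sql"))
    return [s for s in scripts if s not in applied]

# Demo with throwaway files; in reality 'applied' comes from the table
# each script inserts its own filename into.
d = tempfile.mkdtemp()
for name in ("2012-01-05-create-users.sql",
             "2012-02-10-add-email-column.sql",
             "2012-03-01-drop-temp-table.sql"):
    open(os.path.join(d, name), "w").close()

applied = {"2012-01-05-create-users.sql"}
todo = pending_migrations(d, applied)
```

Because the filtering happens against the recorded filenames, rerunning the updater is idempotent.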

You will probably need to look into incorporating database "migrations", something popularized by the Ruby on Rails framework. A Google search for PHP database migrations might be a good starting point for you.
The concept is that as you develop your application and make schema changes, you create SQL migration scripts to roll the schema changes forward or back. This makes it easy to "migrate" your database schema to match a particular code version (for example, if you have branched code being worked on in multiple environments that each need a different version of the database).
That isn't going to automatically make updates like you suggest, but it is certainly a step in the right direction. There are also tools like Toad for MySQL and Navicat which have some level of support for schema synchronization, but again these would be manual comparisons/syncs.
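The roll-forward/roll-back idea can be illustrated with hand-written up/down pairs, here run against SQLite as a stand-in for MySQL (table names are made up; real frameworks generate and track these migrations for you):

```python
import sqlite3

# Each migration is an (up, down) pair of SQL statements.
migrations = [
    ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
     "DROP TABLE users"),
    ("CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER)",
     "DROP TABLE posts"),
]

def migrate(conn, target):
    """Roll the schema forward until 'target' migrations are applied."""
    for up, _ in migrations[:target]:
        conn.execute(up)

def rollback(conn, applied):
    """Roll back, undoing the applied migrations in reverse order."""
    for _, down in reversed(migrations[:applied]):
        conn.execute(down)

conn = sqlite3.connect(":memory:")
migrate(conn, 2)
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
rollback(conn, 2)
tables_after = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```

The key property is that rollbacks run in reverse order, so dependent objects are undone before the objects they depend on.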

Related

how to easily replicate mysql database to and from google-cloud-sql?

Google says NO triggers, NO stored procedures, No views. This means the only thing I can dump (or import) is just a SHOW TABLES and SELECT * FROM XXX? (!!!).
Which means for a database with 10 tables and 100 triggers, stored procedures and views I have to recreate, by hand, almost everything? (either for import or for export).
(My boss thinks I am tricking him. He cannot understand how my previous employers did that replication to a bunch of computers with two clicks, while I personally need hours (or even days) to do it with an internet giant like Google.)
EDIT:
We have applications being created on local computers, where we use our local MySQL. These applications use MySQL databases consisting of, say, n tables and 10*n triggers. For the moment we cannot even evaluate google-cloud-sql, since that would mean "uploading" almost everything (except the n almost-empty tables) by hand. Nor can we evaluate an existing google-cloud-sql DB, since that would mean "downloading" almost everything (except the n almost-empty tables) by hand.
Until now we do these "up-down"-loads by taking a decent mysqldump from the local or the "cloud" MySQL.
It's unclear what you are asking for. Do you want "replication" or "backups" because these are different concepts in MySQL.
If you want to replicate data to another MySQL instance, you can set up replication. This replication can be from a Cloud SQL instance, or to a Cloud SQL instance using the external master feature.
If you want to backup data to or from the server, checkout these pages on importing data and exporting data.
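For the local mysqldump side at least, stored routines and events have to be requested explicitly or they are silently left out of the dump (a sketch building the invocation; the database name is illustrative, and whether the target instance accepts the objects is a separate question):

```python
def dump_command(db, schema_only=False):
    """Build a mysqldump invocation that keeps routines, events and
    triggers in the dump. Triggers are included by default; --routines
    and --events are not, which is a common reason they 'disappear'."""
    cmd = ["mysqldump", "--routines", "--events", "--triggers"]
    if schema_only:
        cmd.append("--no-data")   # structure only, no row data
    cmd.append(db)
    return cmd

cmd = dump_command("legacy_app", schema_only=True)
print(" ".join(cmd))
```

Running the printed command against the local server gives a single file containing both the tables and the programmable objects.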
As far as I understood, you want to Create Cloud SQL Replicas. There are a number of replica options in the documentation; use the one that fits you best.
However, if by "replica" you meant cloning a Cloud SQL instance, you can follow the steps to clone your instance into a new, independent instance.
Some of these procedures can be done through the GCP Console and can be scheduled.

How to keep two databases with different schemas up-to-date

Our company has a really old legacy system with a very bad database design (no foreign keys, columns with serialized PHP arrays, etc. :(). We decided to rewrite the system from scratch with a new database schema.
We want to rewrite the system in parts, splitting the old monolithic application into many smaller ones.
The problem is: we want to have live data in two databases, with the old and the new schema.
I'd like to ask if any of you knows best practices for doing this.
What we think of:
asynchronous data synchronization with a message queue
a REST API in the new system, with the legacy application calling it instead of issuing DB calls
some kind of table replication
Thank you very much
I had to deal with a similar problem in the past. There was a system that was no longer supported, but people kept using it because it had some features (security holes) that allowed them certain functionality. However, they also needed new functionality.
I selected the tables involved in the new system and created triggers to cross-update them, so that when a record was created in the old system, the trigger created a copy in the new system, and vice versa. If you design this properly, you have both systems working at the same time, in real time.
The drawback is that while both systems are running, everything becomes slower, since you have to maintain the integrity of two databases on every operation.
I would start by adding a database layer to accept API calls from the business layer, then write to both the old schema and the new. This adds complexity up front, but it lets you guarantee that the data stays in sync.
This would require changing the legacy system to call an API instead of issuing SQL statements. If they did not have the foresight to do that originally, you may not be able to take my approach. But, you should do it going forward.
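A toy sketch of such a dual-write layer, with two SQLite databases standing in for the old and new schemas (table and column names here are invented for illustration):

```python
import sqlite3

old = sqlite3.connect(":memory:")
new = sqlite3.connect(":memory:")
# Old schema: one denormalized table; new schema: a cleaner design.
old.execute("CREATE TABLE user (id INTEGER, data TEXT)")
new.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def create_user(uid, name):
    """The single entry point the business layer calls. Because every
    write goes through here, the two schemas cannot drift apart."""
    old.execute("INSERT INTO user VALUES (?, ?)", (uid, "name=" + name))
    new.execute("INSERT INTO users VALUES (?, ?)", (uid, name))

create_user(1, "alice")
old_count = old.execute("SELECT COUNT(*) FROM user").fetchone()[0]
new_name = new.execute("SELECT name FROM users WHERE id = 1").fetchone()[0]
```

In a real system the two writes would also need to share a transaction boundary (or a compensating mechanism) so a failure cannot leave only one side updated.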
Triggers may or may not work out. In older versions of MySQL, there can be only one trigger of a given type on a given table. This forces you to lump unrelated things into a single trigger.
Replication can solve some changes -- engine, datatypes, etc. But it cannot help with splitting one table into two. Be careful with the replication of triggers and with where a trigger takes effect (master versus slave). In general, a stored routine should run on the master, letting its effects be replicated to the slave. But it may be worth considering having the trigger run on the slave instead, or different triggers on the two servers.
Another thought is to do the transformation in stages. By careful planning of schema changes versus application of triggers versus code changes versus database layer, you can do partial transformations one at a time, sometimes without having a big outage to update everything simultaneously (with your fingers crossed). A simple example: (1) change code to dynamically handle either new or old schema, (2) change the schema, (3) clean up the code (remove handling of old schema).
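Step (1) of that example, code that tolerates either schema, can be sketched by probing for the new columns before querying (SQLite and the first/last-name split are stand-ins for illustration):

```python
import sqlite3

def full_name(conn, uid):
    """Works against either schema by checking whether the split
    first_name/last_name columns exist yet."""
    cols = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
    if "first_name" in cols:                      # new schema
        row = conn.execute(
            "SELECT first_name || ' ' || last_name FROM users WHERE id = ?",
            (uid,)).fetchone()
    else:                                         # old schema
        row = conn.execute(
            "SELECT name FROM users WHERE id = ?", (uid,)).fetchone()
    return row[0]

old = sqlite3.connect(":memory:")
old.execute("CREATE TABLE users (id INTEGER, name TEXT)")
old.execute("INSERT INTO users VALUES (1, 'Ada Lovelace')")

new = sqlite3.connect(":memory:")
new.execute("CREATE TABLE users (id INTEGER, first_name TEXT, last_name TEXT)")
new.execute("INSERT INTO users VALUES (1, 'Ada', 'Lovelace')")
```

Once step (2), the schema change, is complete everywhere, the probe and the old branch can be deleted in step (3).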
Doing a database migration can be a tedious task given the complexity of the data and the structure of the tables, which are of course without any constraints or a proper design. But given that your legacy application was doing its job, the amount of corrupt or unusable data should be minimal.
For this problem I would suggest a DB migration task that converts all the old legacy data into the new form, and then developing the new application. The advantages:
1) There is no need to keep two different applications.
2) No need to change code in the legacy application, which can get messy.
3) The DB migration gives you a chance to correct any corrupt data (if needed).
A DB migration may not be practical in every scenario, but if you can do it with less effort than building database sync and new APIs for the legacy application, I would suggest going for it.

Keep MySQL schema in sync between computers

When developing, I can use SVN or Git to keep code in sync between machines. However, I have been unable to find something similar for MySQL. Does anyone know of anything?
Update: I am trying to get the schema changes across machines. Getting the data to sync as well would be great but is not as important at the moment.
Data is not considered part of your "application source". The schema (i.e. the definitions of tables, indexes, etc.) should be considered part of your source, although many people do not bother when it comes to MySQL.
If you need to keep data synchronised, you should look at replication scenarios.
See this about replication for MySQL
MySQL Replication
How to Set Up Replication
Unless you want to use replication (I haven't personally dealt with it in MySQL before), you're better off using some sort of script to do a mysqldump and commit the backup each time.
You might even be able to do it with a git hook that automatically dumps and commits before a push.
MySQL Replication is the way to go here.
Keep an eye on the data both sides of replication as MySQL doesn't offer synchronous replication (yet).
There's a comprehensive toolset to monitor your replication (and more) called Maatkit. This allows you to compare checksums of your data and sync if there are discrepancies.
https://github.com/reduardo7/db-version-updater
A script to keep the version of the database model updated, using a simple Linux script.
The script creates an auxiliary database table to record the executed scripts and prevent any script from being executed twice.
--
https://github.com/reduardo7/db-version-updater-mysql
Keeps the version of the database model updated, using a MySQL script.
With this script, you can track the version of the database model without needing any tool other than the MySQL console.
It is the simplest and easiest way to perform this common task, without extra tools.
It works by creating an auxiliary database table (database_version) to record the executed scripts and prevent any script from being executed twice.

How to replicate two different database systems?

I'm not sure if this fits Stack Overflow exactly; however, as I'm looking for code rather than a tool, I think it does.
I'm looking for a way to replicate/synchronize different database systems -- in this case MySQL and MongoDB. We run both for different purposes. We started with a MySQL database and added MongoDB later for special applications. There is data we would like to have in both databases, with constraints in MySQL and DBRefs in MongoDB, respectively. For example: we need a user record in MySQL, but also in MongoDB, for references between tables and objects respectively. At the moment we have a cronjob which dumps the MySQL data and imports it into MongoDB. Although it works quite well, that's not the solution we would like to have.
I think for the moment a one-way replication would be enough -- mysql->mongodb. The important part is that the replication works in "real time", much like MySQL master->slave replication.
Are there already any solutions for this problem or ideas anyone of how to achieve this?
Thanks!
SymmetricDS is open source, Java-based, web-enabled, database independent, data synchronization/replication software that might do the trick with a few tweaks. It has an extension point called IDataLoaderFilter which you could use to implement a MongodbDataLoader.
This would help with one-way database replication. It might be a little more difficult to synchronize from MongoDB to a relational database, but the SymmetricDS team would be very helpful in trying to find a solution.
What you're looking for is called EAI (Enterprise Application Integration). There are a lot of commercial tools around, but under the provided link you'll also find a couple of OSS solutions. The basis of EAI is that you have data sources and data sinks, and the EAI framework offers tools to build custom pumps between the two.
I suggest either using a DB trigger to start the synchronization or sending a trigger signal from your applications. Note that there is no turnkey solution, since synchronization can become arbitrarily complex (for example, how do you make sure that all rows are copied?).
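One way to make the "are all rows copied?" question concrete is a periodic reconciliation pass that diffs the relational rows against the documents and emits the operations needed to converge (a pure-Python sketch; real code would fetch via the MySQL and MongoDB drivers and apply the ops through them):

```python
def sync_ops(mysql_rows, mongo_docs):
    """Compare rows and documents, both keyed by id, and return the
    insert/update/delete operations needed to make mongo match mysql."""
    ops = []
    for key, row in mysql_rows.items():
        if key not in mongo_docs:
            ops.append(("insert", key, row))
        elif mongo_docs[key] != row:
            ops.append(("update", key, row))
    for key in mongo_docs:
        if key not in mysql_rows:
            ops.append(("delete", key, None))
    return ops

rows = {1: {"name": "alice"}, 2: {"name": "bob"}}
docs = {1: {"name": "alice"}, 3: {"name": "stale"}}
ops = sorted(sync_ops(rows, docs))
```

Running such a pass on a schedule catches anything the real-time path missed, at the cost of scanning both sides.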
As far as I can see, you need to develop some sort of "control program" that has drivers for each DBMS, and run it as a daemon. The daemon should react to a trigger, or use a very short recheck interval, to keep the DBs synchronized.
Technically, you could set up a process which parses the binary log of the MySQL server and replays the relevant SQL queries. I've never done such a thing with a different database as the slave, but maybe it is worth a shot?

MS SQL - MySQL Migration in a legacy webapp

I wish to migrate the database of a legacy web app from SQL Server to MySQL. What are the limitations of MySQL that I must look out for ? And what all items would be part of a comprehensive checklist before jumping into actually modifying the code ?
The first thing I would check is the data types -- the exact definition of data types varies from database to database. I would create a mapping list that tells me what to map each data type to; that will help in building the new tables. I would also check for tables or columns that are not being used now -- no point in migrating them. Do the same with functions, jobs, stored procedures, etc. Now is the time to clean out the junk.
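A starting point for such a mapping list, as a simple lookup table (these are the usual equivalents, but verify each against your MySQL version; the list is far from exhaustive):

```python
# Common SQL Server -> MySQL type equivalents. Some types, e.g.
# UNIQUEIDENTIFIER, have no native MySQL counterpart and need a
# convention such as storing the GUID as text.
TYPE_MAP = {
    "NVARCHAR":         "VARCHAR",        # Unicode handled via charset
    "NTEXT":            "LONGTEXT",
    "DATETIME2":        "DATETIME",
    "BIT":              "TINYINT(1)",
    "MONEY":            "DECIMAL(19,4)",
    "UNIQUEIDENTIFIER": "CHAR(36)",       # GUID as text
    "IMAGE":            "LONGBLOB",
}

def map_type(mssql_type):
    """Translate a SQL Server column type, passing through anything
    (INT, DATE, ...) that already exists under the same name."""
    return TYPE_MAP.get(mssql_type.upper(), mssql_type)
```

Driving the new CREATE TABLE statements from a table like this keeps the mapping decisions in one reviewable place.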
How are you accessing the data -- through stored procedures or dynamic queries? Check each query by running it against a new dev database and make sure it still works. Again, there are differences between how the two flavors of SQL work. I've not used MySQL, so I'm not sure what the common failure points are. While you are at it, you might want to time the new queries and see if they can be optimized. Optimization also varies from database to database, and there are probably some poorly performing queries right now that you can fix as part of the migration.
User defined functions will need to be looked at as well. Don't forget these if you are doing this.
Don't forget scheduled jobs; these will need to be checked and recreated in MySQL as well.
Are you importing any data on a regular schedule? All imports will have to be rewritten.
Key to everything is to use a test database and test, test, test. Test everything especially quarterly or annual reports or jobs that you might forget.
Another thing you want to do is do everything through scripts that are version controlled. Do not move to production until you can run all the scripts in order on dev with no failures.
One thing I forgot: make sure the dev database you are running the migration from (the SQL Server database) is refreshed from production immediately before each test run. You'd hate to have something fail on prod because you were testing against outdated records.
Your client code is almost certain to be the most complex part to modify. Unless your application has a very high quality test suite, you will end up having to do a lot of testing. You can't rely on anything working the same, even things which you might expect to.
Yes, things in the database itself will need to change, but the client code is where the main action is, it will need heaps of work and rigorous testing.
Forget migrating the data, that is the last thing which should be on your mind; the database schema can probably be converted without too much difficulty; other database objects (SPs, views etc) could cause issues, but the client code is where the focus of the problems will be.
Almost every routine which executes a database query will need to be changed, but absolutely all of them will need to be tested. This will be nontrivial.
I am currently looking at migrating our application's main database from MySQL 4.1 to 5, that is much less of a difference, but it will still be a very, very large task.