I have a case with several databases running on the same server. There is one database for each client (company1, company2, etc.). The structure of each of these databases should be identical, with the same tables, etc., but the data contained in each database will be different.
What I want to do is keep a master db that will contain no data, but manage the structure of all the other databases, meaning if I add, remove or alter any tables in the master db the changes will also be mirrored out to the other databases.
Example: If a table named Table1 is created in the master DB, the other databases (company1, company2 etc) will also get a table1.
Currently this is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand it will also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I wish (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
I hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA = '<master schema name>';
Use this list in a script or stored procedure along the lines of:
create database if not exists <name>;
use <name>;
for each ( table_name in list )
    create table if not exists <name>.<table_name> like <master_schema>.<table_name>;
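A minimal sketch of such a stored procedure, assuming the master schema is named master_db and that the target database already exists (the procedure and all names here are illustrative, not from the original post):

DELIMITER //
CREATE PROCEDURE clone_master_structure(IN target_db VARCHAR(64))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE tbl VARCHAR(64);
  -- every table currently defined in the master schema
  DECLARE cur CURSOR FOR
    SELECT TABLE_NAME FROM information_schema.tables
    WHERE TABLE_SCHEMA = 'master_db';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO tbl;
    IF done THEN LEAVE read_loop; END IF;
    -- CREATE TABLE ... LIKE copies columns and indexes, but no data
    SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', target_db, '`.`', tbl,
                      '` LIKE `master_db`.`', tbl, '`');
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;

CALL clone_master_structure('company1');  -- repeat for company2, company3, ...

Note that this only creates tables that are missing in a client database; changes to existing tables would still need separate ALTER statements.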
Now that I'm thinking about it, you might be able to put a trigger on the 'information_schema.tables' table that would call the 'create/maintain' script: look for inserts and react accordingly.
I have three identical tables: one in MySQL, one linked to it in Access via ODBC, and a native table in the same Access database.
When I update the table in MySQL, the linked table in Access updates, and vice versa. But I would like to know whether it is possible for the linked table to also update the native table (and vice versa)?
(screenshots: Access table and MySQL table)
It really depends on how the local Access table is being updated. If it is ALWAYS updated by, say, a few forms, then you could add an After Update event to those few forms and put in code to update the MySQL table.
Another approach (again, assuming you only/always update the local table) is to add a table trigger to the local table. In that table event you can have it call some VBA code, and that VBA code can then update/insert into the linked MySQL table. Once again, the two tables will automatically remain in sync.
The other possibility would be to add a time + date stamp column to the tables (both on the MySQL side and on the Access side). You could then write some VBA code to sync up the tables. Such code is not too hard, but in a multi-user setting this can become quite a challenge, since while you are syncing the data other users might also update the MySQL tables, and thus your sync routines might well miss some changes. Database sync software and this subject can fill a few books the size of medical texts, and it is a VERY complex subject.
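On the MySQL side, such a column can maintain itself; a minimal sketch, with the table and column names chosen only for illustration:

ALTER TABLE mytable
  ADD COLUMN last_modified TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP
    ON UPDATE CURRENT_TIMESTAMP;

-- the sync routine then only has to look at rows changed since its last run
SELECT * FROM mytable WHERE last_modified > '2020-01-01 00:00:00';

The Access-side column would have to be set by whatever forms or code do the updating there.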
However, why not just always use linked tables to MySQL and be done with any requirement to sync data? Access makes a great client for SQL Server or MySQL. If you eliminate the local tables, then you eliminate the need to sync your data.
I'm developing a web platform to manage student registrations in the schools of my region. For that I have 17 databases running on MySQL (5.7.19): one is the main database and the other 16 represent schools. The school databases (must) have exactly the same schema, each containing the data for the associated school. I separated them this way to avoid latency, as each school can register many applications (16k on average), so the queries could get heavier over time.
Now I have a serious problem: when I change the schema of one school's database, I have to do the same manually for the other schools to keep the schemas consistent, because my SQL queries are written independently of the school. For example, if I add a new field to table_b of database_school5, I have to manually do the same to table_b of all the remaining databases.
What can I do to manage these changes efficiently? Is there an automatic solution? Is there a DBMS suited to this problem?
Somebody told me that PostgreSQL can achieve this easily with INHERITANCE, but that only concerns tables, unless my research was poor.
I want that, every time I make a change to a database schema, whether it is adding a table, adding a field, removing a field, adding a constraint, etc., the change is automatically propagated to the other databases.
Thanks in advance.
SELECT ... FROM information_schema.schemata
WHERE schema_name LIKE 'database_school%'
  AND schema_name != 'the main (17th) database'
  AND schema_name != 'database_school5' -- since it has already been altered.
That will find the 16 names. What you put into ... is a CONCAT(...) to construct the ALTER TABLE ... statements.
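For example, a sketch of that CONCAT for the change mentioned in the question (the new column is purely illustrative):

SELECT CONCAT('ALTER TABLE `', schema_name, '`.`table_b` ',
              'ADD COLUMN new_field VARCHAR(50);') AS ddl
FROM information_schema.schemata
WHERE schema_name LIKE 'database_school%'
  AND schema_name != 'database_school5';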
Then you do one of these:
Plan A: Manually copy/paste those ALTERs into the mysql command-line tool to perform them.
Plan B: Wrap all of it in a Stored Procedure that will loop through the results of the SELECT and prepare+execute each one.
First of all, I'd like to start by saying that I've checked these two questions:
Sync 2 tables of different databases - MySQL
How to synchronize two tables of different databases on the same machine (MySql)
But while similar, they are not what I need.
I have 2 databases on the same server:
Db1 and Db2
Both databases have an exact copy of a single table called "user", with the columns:
userid
login
name
lastname
password
level
How can I achieve some sort of redundancy between these two tables in different databases?
If db1.user gets a new record then db2.user has to get that record; if a record is modified then the other one is modified; and if one is deleted then the other one gets deleted too.
To be more specific, db2.user needs to be a reflection of db1.user using triggers.
EDIT: There is this question: Mysql replication on single server, and that is not even remotely close to what I want to do. I updated the very end of my post a little with how I'd like to achieve this, thanks to a suggestion.
As proposed, you can use triggers, as described in the standard documentation.
You define AFTER INSERT, AFTER UPDATE and AFTER DELETE triggers on db1.user, and within these triggers you use the NEW row (or OLD, for deletes) to pass the information into the db2.user table.
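A minimal sketch of what those triggers could look like, using the column list from the question (untested, so adjust to your actual schema):

CREATE TRIGGER db1.user_ai AFTER INSERT ON db1.user
FOR EACH ROW
  -- mirror new rows into db2
  INSERT INTO db2.user (userid, login, name, lastname, password, level)
  VALUES (NEW.userid, NEW.login, NEW.name, NEW.lastname, NEW.password, NEW.level);

CREATE TRIGGER db1.user_au AFTER UPDATE ON db1.user
FOR EACH ROW
  -- mirror changes, matching on the old primary key in case userid itself changed
  UPDATE db2.user
     SET userid = NEW.userid, login = NEW.login, name = NEW.name,
         lastname = NEW.lastname, password = NEW.password, level = NEW.level
   WHERE userid = OLD.userid;

CREATE TRIGGER db1.user_ad AFTER DELETE ON db1.user
FOR EACH ROW
  DELETE FROM db2.user WHERE userid = OLD.userid;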
I have many users on my server; each user has their own database with a set of tables. I also have a template database with a set of tables that I duplicate when someone signs up.
What I need to do is: if I make a change to the template database, such as adding a table, adding a column or deleting a column, I need to sync that structure change to the other databases.
Is there any way of doing this without writing a special script to check formats and make the appropriate changes?
I've been asked to build a module for a web application, which will also be used as a stand-alone website. Since this is the case, I wanted to use a separate database, and wondered if there was a way of having a table in one database be a "pointer" to a table in another database.
For example, I have databases db1 and db2
db1 has table users, so I want to have db2.users point to db1.users.
I know I could set up triggers and whatnot to sync two separate tables, but this sounds cooler :)
EDIT
So in my code I'm using SQL such as
select * from users
Now, at the database level, I want "users" to actually be db1.users. Then, if I want to, I can remove the alias/pointer and "select * from users" will point to the users table in the current database. I guess what I'm looking for is a "global alias" type of thing.
Just use it directly from another database?
SELECT ... FROM `db1`.`users` LEFT JOIN `db2`.`something`
The federated storage engine offers something similar to the feature you asked for.
And if your databases are on the same database server, the federated storage engine sounds a bit like overkill to me. You may want to create a view instead.
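A minimal sketch of that view, assuming db2 is the application's current database (the view simply forwards to db1.users; drop it again if you later want a real local table):

CREATE VIEW db2.users AS
SELECT * FROM db1.users;

With db2 as the current database, select * from users then resolves to the view and therefore to db1.users. Because it is a plain SELECT * over a single table, MySQL also treats the view as updatable, so inserts and updates pass through to db1.users.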
Neither method will be useful if db1 is not available. As Emmerman already points out, you need to store the data in db2 if you want to prepare for the case of db1 being unavailable.