Many tables, one shared structure - MySQL

For example, I have 10 databases that all have the same structure; only the data differs.
When I make a change to a table in one database (for example, adding a new column or changing/renaming one), it should affect the matching tables in all the other databases.
Can I create some kind of view to make this easier, or is there a way to do it automatically?
Thanks.

You can create a trigger that observes changes in the database structure. Note, though, that MySQL triggers fire on row-level DML (INSERT/UPDATE/DELETE), not on DDL statements such as ALTER TABLE, so this only works if the structure changes are themselves recorded in a table.
See MySQL trigger.

Related

Add an attribute to many tables automatically

I have a database diagram in Workbench, and this diagram is synchronized with my database. The database has more than 500 tables, and I need to add a new column to most of them; adding it one by one and then synchronizing the changes with the database is not a viable option. Is there any Workbench option that allows me to add the field to many tables in bulk?
The only way I can think of is to write a script that automatically adds that attribute to the tables and then perform the inverse synchronization, that is, update the diagram from the database.
What do you think? Any better solution?
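One way to write that script is to let MySQL generate the ALTER statements for you. A sketch, where mydb and the new column created_by are placeholder names:

SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` ADD COLUMN `created_by` VARCHAR(64) NULL;') AS ddl_stmt
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_TYPE = 'BASE TABLE';

Run the generated statements, then do the reverse synchronization to pull the new column back into the Workbench diagram.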

Get created datetime of new columns added to existing tables

Sorry if this is a simple question, but I have a problem.
I have been adding new columns to many tables in my local DB (MySQL).
I want to deploy the changes to the production database, but I have not maintained any text file recording the changes I made.
So how can I get the created or updated datetime of the columns added to existing tables?
The table that might contain this information would be the INFORMATION_SCHEMA.COLUMNS table. The only problem is that it doesn't record a timestamp when a column is added or altered. I can offer a workaround that might be just as fast: run SHOW CREATE TABLE on the table in production, then do the same on your dev version, and use any reputable diff-checking tool (e.g. DiffChecker.com) to look for the differences.
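If you'd rather do the comparison inside MySQL itself, a query against INFORMATION_SCHEMA can produce a similar diff. A sketch, assuming both schemas are visible from one server under the placeholder names dev_db and prod_db:

-- Columns present in dev_db but missing from prod_db.
SELECT d.TABLE_NAME, d.COLUMN_NAME
FROM information_schema.columns AS d
LEFT JOIN information_schema.columns AS p
       ON p.TABLE_SCHEMA = 'prod_db'
      AND p.TABLE_NAME   = d.TABLE_NAME
      AND p.COLUMN_NAME  = d.COLUMN_NAME
WHERE d.TABLE_SCHEMA = 'dev_db'
  AND p.COLUMN_NAME IS NULL;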
Moving forward, you should keep better track of the changes you make to your tables during development. A much better approach, I think, would be to keep a record of the ALTER statements you run, then deploy those changes when you push everything to production.
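One low-tech way to keep that record (an assumption about your workflow; the names are placeholders) is a migrations table that you append to every time you run an ALTER, then replay against production at deploy time:

CREATE TABLE schema_migrations (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  applied_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  statement  TEXT NOT NULL
);

-- Example entry; the ALTER shown is a placeholder.
INSERT INTO schema_migrations (statement)
VALUES ('ALTER TABLE orders ADD COLUMN shipped_at DATETIME NULL');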

Setting up a master database to control the structure of other databases

I have a case where several databases run on the same server. There is one database for each client (company1, company2, etc.). The structure of each of these databases should be identical, with the same tables and so on, but the data contained in each DB will be different.
What I want to do is keep a master DB that contains no data but manages the structure of all the other databases, meaning that if I add, remove, or alter any tables in the master DB, the changes will also be mirrored out to the other databases.
Example: if a table named Table1 is created in the master DB, the other databases (company1, company2, etc.) will also get a Table1.
Currently this is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand it would also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I want (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know. :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master schema name>';
Use this list in a script or stored procedure along these lines:
create database if not exists <target_db>;
use <target_db>;
for each ( table_name in list )
    create table if not exists <target_db>.table_name like <master_db>.table_name;
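As a concrete (untested) sketch of that loop, here is a stored procedure that clones every table of a hard-coded master schema, master_db, into a target schema passed in as a parameter. CREATE TABLE ... LIKE copies columns, indexes, and defaults, but no data:

DELIMITER //
CREATE PROCEDURE clone_structure(IN target_schema VARCHAR(64))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE tbl VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT TABLE_NAME FROM information_schema.tables
    WHERE TABLE_SCHEMA = 'master_db' AND TABLE_TYPE = 'BASE TABLE';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO tbl;
    IF done THEN LEAVE read_loop; END IF;
    -- Build and run the DDL dynamically; LIKE copies structure only, never rows.
    SET @ddl = CONCAT('CREATE TABLE IF NOT EXISTS `', target_schema,
                      '`.`', tbl, '` LIKE `master_db`.`', tbl, '`');
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;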
Now that I'm thinking about it, you might want to watch information_schema.tables for new rows and call the create/maintain script when one appears. (MySQL won't let you put a trigger on information_schema tables, though, so in practice you would have to poll it rather than react to inserts.)

SQL Server - update schema of one db from another

I have two databases on a SQL Server -- one for development (call it "TestData") and one for production (call it "LiveData"). I make changes to TestData -- typically adding tables or adding new fields to existing tables (rarely dropping anything) and creating or modifying stored procedures. At some point, I would like to update the LiveData tables, stored procedures, etc. with the changes made to TestData. I only want this to affect the schema, not the actual data. What is the best way to do this? I am new to SQL Server, so the more detailed the explanation, the better.
edit: I know there are third-party programs out there, but I'm looking into ways to do this without separate software, just using scripts, etc.
You might want to take a look at redgate SQL Compare.
DBComparer is a great free utility to compare schemas. It is a little buggy and crashes sometimes, but other than that it works great.
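If you do want to stay script-only, as per the edit, a plain query can at least show which columns exist in TestData but not yet in LiveData. A sketch: it assumes both databases are on the same server, and it does not cover stored procedures or dropped objects:

SELECT t.TABLE_NAME, t.COLUMN_NAME
FROM TestData.INFORMATION_SCHEMA.COLUMNS AS t
LEFT JOIN LiveData.INFORMATION_SCHEMA.COLUMNS AS l
       ON l.TABLE_NAME  = t.TABLE_NAME
      AND l.COLUMN_NAME = t.COLUMN_NAME
WHERE l.COLUMN_NAME IS NULL;

You would still have to write the corresponding ALTER TABLE statements yourself.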

Is there any way to automatically create a trigger on creation of new table in MySQL?

As I've pointed out in your other question, I think a process and security review is in order here. It's an audited database, so nobody (especially third-party service providers) should be creating tables in your database without your knowledge.
The issue you've got is that, as well as the new table being created, you will also need another table created to store the audited/changed records; it must have an identical structure to the original table, plus (probably) a date/time and a user column. If a third-party provider is creating the original table, they won't know to create the auditing table, so even if you could generate your triggers dynamically, they wouldn't work.
It's impossible to create a single table that will hold all changed records for all other tables in your database, because the structure inevitably differs from table to table.
Therefore: have all change requests come to you and/or your team (e.g. the provider wants to create TableX, so they submit a change request, including the SQL script, explaining the reason for the change).
You execute the SQL on a test copy of your database, and use the same structure to create another table to hold the modified records.
You then create and test the necessary triggers, generate a new SQL script to create the two tables and your triggers and execute that on your live database. You give your provider permissions to use the new table and away they go.
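To make that concrete, here is a minimal sketch of the two tables and one trigger for the TableX example. All names and columns are hypothetical, and a real setup would need INSERT and DELETE triggers as well:

CREATE TABLE TableX (
  id     INT PRIMARY KEY,
  amount DECIMAL(10,2)
);

-- Same structure as TableX, plus when and who.
CREATE TABLE TableX_audit (
  id         INT,
  amount     DECIMAL(10,2),
  audit_ts   DATETIME     NOT NULL,
  audit_user VARCHAR(128) NOT NULL
);

DELIMITER //
CREATE TRIGGER TableX_after_update
AFTER UPDATE ON TableX
FOR EACH ROW
BEGIN
  -- Snapshot the row as it looked before the change.
  INSERT INTO TableX_audit (id, amount, audit_ts, audit_user)
  VALUES (OLD.id, OLD.amount, NOW(), CURRENT_USER());
END //
DELIMITER ;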
Everyone's happy. Yes, it may take a little while longer, and yes, you'll have more work to do, but that's a hell of a lot less work than trying to parse query logs to re-create records that have already been changed or deleted, or parsing the binary log, keeping up to date with every change, and modifying your code whenever the format of the log file changes, etc.