Run Update statement on information_schema.COLUMNS - mysql

A previous DBA made many spurious design decisions when creating the schema for the database that I am now administering. Basically, every column in the database that has a default value is also not nullable. This plays havoc with just about any ORM. I'd like to be able to run an UPDATE statement on the COLUMNS table of the information_schema database and set nullable to YES if the column has a default value, but naturally I don't have access to that table, nor does root. Is this even possible, or do I need to manually alter thousands of columns?

Without root privileges and with no other way to modify the columns (other than manually touching each one), you could do the following:
get a backup of the database
restore the backup to a new database that you have full access to
make the changes to the newly restored database (see the sketch after this list for generating the ALTER statements)
delete the original
rename the restored database to the original databases name
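For the "make the changes" step, rather than touching thousands of columns by hand, you can have information_schema generate the ALTER statements for you. A rough sketch only, assuming the restored copy is named restored_db; defaults such as CURRENT_TIMESTAMP or expressions, character sets, ON UPDATE clauses and column comments would need extra handling:

-- generate one ALTER ... MODIFY per NOT NULL column that has a default (review before running)
SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
       '` MODIFY COLUMN `', COLUMN_NAME, '` ', COLUMN_TYPE,
       ' NULL DEFAULT ', QUOTE(COLUMN_DEFAULT), ';') AS alter_stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'restored_db'
  AND IS_NULLABLE = 'NO'
  AND COLUMN_DEFAULT IS NOT NULL;

Save the output to a file, review it, and then run it against the restored database.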
I would do this a few times in your dev environment first and feel very confident of the changes before doing it in a production environment.
Also, depending on the size of your database this could be quite an expensive action to perform on the database. If it takes hours or days to run then this might not be a viable solution.
Good luck.

Related

Get Created date time of new columns added to existing tables

Sorry if this is a simple question, but I have a problem.
I have been adding new columns to many tables in my local MySQL database.
I want to deploy the changes to the production database, but I have not maintained any text file listing the changes I have made.
So how can I get the created or updated datetime of columns added to existing tables?
The table which might contain this information would be the INFORMATION_SCHEMA.COLUMNS table. The only problem is that it doesn't appear to record a timestamp when a column was added/altered. I can offer a workaround which might be just as fast. You may run SHOW CREATE TABLE on the table running in production, and then do the same on your dev version. Then, just use any reputable diff checking tool (e.g. DiffChecker.com) and look for the differences.
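If many tables have changed, running SHOW CREATE TABLE one table at a time gets tedious; a quicker variation is to diff schema-only dumps of the whole database instead. A sketch, assuming you can run mysqldump against both servers (host names and the database name are placeholders):

mysqldump --no-data -h prod-host -u user -p mydb > prod_schema.sql
mysqldump --no-data -h dev-host -u user -p mydb > dev_schema.sql
diff prod_schema.sql dev_schema.sql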
Moving forward, you should keep better track of the changes you make to your table during development. A much better approach, I think, would be to just keep track of the alter statements which you run on the table. Then, deploy these changes when you send everything to production.
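For example, keeping one small, numbered SQL file per change makes the production deployment a matter of replaying them in order; a sketch with hypothetical file, table and column names:

-- 001_add_last_login.sql
ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
-- 002_add_orders_status_index.sql
ALTER TABLE orders ADD INDEX idx_status (status);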

mysqldump: how to fetch dependent rows

I'd like a snapshot of a live MySQL DB to work with on my development machine. The problem is that the DB is too large, so my thought was to execute:
mysqldump [connection-info-here] --no-autocommit --where="1 limit 1000" mydb > /dump.sql
I think this will give me the first thousand rows of every table in database mydb. I anticipate that the resulting dataset will break a lot of foreign key constraints since some records will be missing. As a result the application I mean to run on the dev machine will fail.
Is there a way to mysqldump a sample of the database while ensuring that all records dumped abide by key constraints? (For instance, if a row containing a foreign key is dumped, the matching record in the referenced table will also be dumped.)
If that isn't possible, how do you guys deal with this problem?
No, there's no option for mysqldump to dump only rows that match in foreign key relationships. You already know about the --where option, and that won't do it.
I've had the same task as you, to dump a subset of data but only data that is related. For example, for creating a test instance.
I've been using MySQL for many years, I've worked as a MySQL consultant and trainer, and I try to keep up with current tools. I have never heard of any MySQL tool that does this operation.
The only solution I can suggest is to write your own script to dump table by table using SELECT...INTO OUTFILE.
It's sometimes easier to write a custom script just for your specific schema, than for someone to write a general-purpose tool that works for everyone's schema.
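As a rough illustration of such a script, for a hypothetical schema where orders.customer_id references customers.customer_id, you could dump a slice of the parent table and only the child rows that point at it (file paths and the sampling condition are placeholders; the server writes the files, so the FILE privilege and secure_file_priv apply):

-- parent table: take the sample
SELECT * INTO OUTFILE '/tmp/customers.csv'
FROM customers
WHERE customer_id <= 1000;

-- child table: only rows whose parent is in the sample
SELECT o.* INTO OUTFILE '/tmp/orders.csv'
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE c.customer_id <= 1000;

The files can then be loaded into the dev copy with LOAD DATA INFILE, working from parent tables down to children so the constraints are satisfied.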
How I have dealt with this problem in the past is I don't copy data from the live database. I find some other way to create a subset of fake data for testing. It's probably better to create synthetic data anyway, because then you don't risk accidentally using live data in your dev/test environment, in case some of it is private data.

Setting up a master database to control the structure of other databases

I got a case where I have several databases running on the same server. There's one database for each client (company1, company2 etc). The structure of each of these databases should be identical with the same tables etc, but the data contained in each db will be different.
What I want to do is keep a master db that will contain no data, but manage the structure of all the other databases, meaning if I add, remove or alter any tables in the master db the changes will also be mirrored out to the other databases.
Example: If a table named Table1 is created in the master DB, the other databases (company1, company2 etc) will also get a table1.
Currently it is done by a script that monitors the database logs for changes made to the master database and runs the same queries on each of the other databases. I looked into database replication, but from what I understand this will also bring along the data from the master database, which is not an option in this case.
Can I use some kind of logic against database schemas to do it?
So basically what I'm asking here is:
How do I make this sync happen in the best possible way? Should I use a script monitoring the logs for changes or some other method?
How do I avoid existing data getting corrupted if a table is altered? (data getting removed if a table is dropped is okay)
Is syncing from a master database considered a good way to do what I wish (having an easily maintainable structure across several databases)?
How will making updates like this affect the performance of the databases?
Hope my question was clear and that this is not a duplicate of some other thread. If more information and/or a better explanation of my problem is needed, let me know. :)
You can get the list of tables for a given schema using:
select TABLE_NAME from information_schema.tables where TABLE_SCHEMA='<master schema name>';
Use this list in a script or stored procedure, along the lines of:
create database if not exists <name>;
use <name>;
-- for each table_name in the list:
create table if not exists <name>.table_name like <master_schema>.table_name;
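If you'd rather not write the loop at all, the same list can be turned into ready-to-run DDL directly. A sketch, assuming the master schema is named master_db and one target database is company1:

-- emits one CREATE TABLE ... LIKE statement per table in the master schema
SELECT CONCAT('CREATE TABLE IF NOT EXISTS `company1`.`', TABLE_NAME,
       '` LIKE `master_db`.`', TABLE_NAME, '`;') AS ddl
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'master_db';

Run the generated statements against each company database; CREATE TABLE ... LIKE copies the column definitions and indexes, but no rows.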
Now that I'm thinking about it, you might be tempted to put a trigger on information_schema.tables that would call the create/maintain script, looking for inserts and reacting accordingly. (Note, though, that MySQL does not allow triggers on information_schema tables, so in practice this would have to be a scheduled event or an external script watching for changes.)

Large scale MySQL changes to active sites

Just some pointers here.
I am making fairly extensive modifications to a site, including the MySQL database.
My plan is to do everything on my development server, export the new MySQL structure for the db, and import it onto the client's server.
Basically I need to know that performing a structure-only import will not overwrite/delete existing data. I am not making changes to the data types or field lengths.
In my experience, when you export a database (through phpMyAdmin for instance), part of the SQL script that is created includes a "DROP TABLE IF EXISTS 'table_name';" before doing a "CREATE TABLE 'table_name'...;" to build the new table.
My guess is that this is not what you want to do! Certainly use the dev system to alter the structure in order to make everything correct, but then look around for a database synchronisation routine where you can provide the old structure, the new structure, and the software will create the appropriate "ALTER TABLE 'table_name'...;" scripts to make the required changes.
You should then really examine these change files before executing them on the live database, and of course BACKUP the live database, and ensure you are able to fully recover from the backup before starting any of the alterations!
I've had to do this a lot, and it always goes like this:
Make a backup of the live database, complete with data.
Make a backup of the live database schema only.
Calculate the differences between the old (live) schema and the new (devel) schema.
Create all of the 'ALTER TABLE ...' DDL statements necessary to upgrade from the old schema to the new one. Keep in mind that if you rename a field, you probably won't be able to just rename it -- you'll need to create the new field, copy the data from the old field, and then drop the old field (see the sketch after these steps).
If you changed relationships between tables, you'll probably need to drop indexes and foreign key relationships first, and then add them back afterwards.
You'll need to populate any new fields based upon their default values, if any.
Once you've got all the pieces working, you'll need to combine them into one large script, and then run it on a copy of the live database.
Dump the schema and compare it against the desired new schema -- if they don't match, go back to step 3 and repeat.
Dump the data and compare it against the expected changes -- again, if they don't match, go back to step 3 and repeat.
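As a sketch of that rename case, with a hypothetical users.fullname being renamed to display_name (an ALTER TABLE ... CHANGE would do the rename in one step if applied by hand, but diff-based tools usually see a rename as a drop plus an add):

-- add the new field, copy the data across, then drop the old field
ALTER TABLE users ADD COLUMN display_name VARCHAR(255) NULL;
UPDATE users SET display_name = fullname;
ALTER TABLE users DROP COLUMN fullname;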
You're going to learn a lot more about SQL DDL/DML during this process than you ever thought you'd learn. (For one project, where we were switching from natural keys to UUID keys for 50+ tables, I ended up writing programs to generate all of the DDL/DML.)
Good luck, and make frequent backups.
I'd recommend preparing an SQL script for every change you make on the development server, so you will be able to reproduce it on the production server. You shouldn't get to the point where you need to calculate differences between database structures.
This is how I do it: all changes are reflected in SQL scripts, and I can reconstruct the history of my database by running all these files if needed.
Test the final release version on a "staging" mysql server. Make a copy of your production server on another machine and test your script to make sure everything's ok.
Of course, preliminary database backup is a must.

question about MySQL database migration

I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. Of course, the migration I mean here involves changes to some of the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP/Python script (the two languages I know) to connect to the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail: I have FOUR old databases, which I can name 'DB_a', 'DB_b', 'DB_c', 'DB_d'. The old table A has 28 columns, and I want to add each row of table A into the new database with a new column identifying the source database, 'DB_x' (x indicating which database it comes from). Since I can't differentiate the source database from a row's content, the only way I can identify it is through some user-supplied parameter.
Are there any tools, or a better method, than writing a script yourself? Here I don't need to worry about concurrent-write problems and the like; the old database will be down for a while (not open to public usage, only for the upgrade).
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
Use mysqldump to dump the data, then load it on the new server with mysql < output.sql. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump complete-insert option (see link above)
5) Import your data, using mysql < data.sql
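Put together as commands, the whole sequence might look like this (a sketch only; database names, file names and connection options are placeholders):

mysqldump -u user -p --no-data old_db > schema.sql
# edit schema.sql to add the new columns, then load it into the new database
mysql -u user -p new_db < schema.sql
mysqldump -u user -p --no-create-info --complete-insert old_db > data.sql
mysql -u user -p new_db < data.sql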
This should do the job for you, good luck!
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will apply the default value to all existing rows.
So here is what I would do:
Make a copy of your old database with the mysqldump command.
Run the resulting SQL file against your new database; now you have an exact copy.
Write a migration.sql file that will modify your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
Test your script (when it fails, go to (2)).
If all is OK, then go to (1) and go live with your new database.
These are all valid approaches, but I believe what you really want is an SQL statement that generates the INSERT statements for you, with the new columns included.
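As a sketch of that idea, for a hypothetical two-column table_a, you could generate INSERTs that carry the extra source-database label (run once per old database, changing the 'DB_a' literal each time):

-- generates one INSERT per row, tagging each with its source database
SELECT CONCAT('INSERT INTO new_db.table_a (col1, col2, source_db) VALUES (',
       QUOTE(col1), ', ', QUOTE(col2), ', ''DB_a'');') AS stmt
FROM DB_a.table_a;

Save the output to a file and replay it against the new server.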