How to make MySQL Workbench forget I renamed a table?

Early on in my design phase I made the mistake of renaming a table and then creating a new table with the old name. For some reason, MySQL Workbench decided to remember this fact very well. Now every time I try to synchronize with a database, MySQL Workbench wants to rename the (new) table back to its old name.
I did this:
Rename my_table => my_new_table
Create new table called 'my_table'
And now MySQL Workbench insists this would be the right way to proceed:
Rename my_new_table => my_table
Drop my_new_table
That's clearly a bug in its own right, but how is MySQL Workbench keeping track of what it has done with the tables of a random DB? Is there some metadata somewhere I can clear to make MySQL Workbench believe that this change has already taken place and need not be repeated? Clearly Workbench is doing more than just a diff between the model and the existing database...
(I realize I could just ignore that particular step in the synchronization process, but I don't want to do it every time I synchronize, and of course any changes to these two particular tables would never be carried out...)

Managed to work around the issue by running synchronization once with "skip DB changes and update model only". It's unclear whether parts of the schema are still out of sync, but at least new changes now carry over.

Renaming an object and creating a new one with the same name is a special case that makes things quite difficult for the synchronization process. The reason is that Workbench uses names to find objects (as there is no other identification mechanism like UUIDs or similar), so extra steps are taken to detect such a rename situation. Workbench explicitly keeps the old name so it has a way to determine what to sync and how.

Related

How to properly wipe a database, and re-import?

I am unsure about the best way to do this. As I'm getting ready to put a new database into production, I need to import data that has accumulated in the old database while I was working on the new one. The new database also contains a lot of fake data that was used for testing, which I have to get rid of, so a fresh, complete re-import seems reasonable.
Now, truncating all the tables in the new database won't go through, because the foreign keys prevent it. Simply deleting the data instead would solve that problem, but it leaves the AUTO_INCREMENT counters at the values they had reached, so it's not a "proper" wipe. There could be more leftover properties like that one, but this is the only one I'm aware of.
So my question now is, how much of a problem could these "leftover" pieces of data pose to performance, if I were to go with the simple DELETE solution?
And also: is there a way that would be more thorough in cleaning it out, while of course keeping the defined constraints?
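For reference, the trade-off described above can be sketched like this (parent_table and child_table are made-up names): TRUNCATE resets the AUTO_INCREMENT counter where DELETE does not, and disabling foreign key checks is what normally lets the truncation go through.
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE TABLE child_table;   -- resets AUTO_INCREMENT; blocked by FKs unless checks are off
TRUNCATE TABLE parent_table;
SET FOREIGN_KEY_CHECKS = 1;
-- by contrast, DELETE FROM child_table; removes the rows
-- but leaves the AUTO_INCREMENT counter where it was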
First, I would use some GUI tool to create a dump of the old DB (like MySQL Workbench, or whatever you prefer). Check the option "Export to self-contained file", and check "Dump stored procedures and functions", "Dump events" and "Dump triggers".
Then get CREATE scripts for all tables that are not included in the old DB.
You can do this via the "reverse engineer" option.
If you have trouble with this part, this post will help.
How to get a table creation script in MySQL Workbench?
When you have the old DB dump and the CREATE scripts for the new tables, combine them into a single SQL file.
On the first row add:
SET FOREIGN_KEY_CHECKS = 0;
On the last row add:
SET FOREIGN_KEY_CHECKS = 1;
Run the script. As a result you should have all tables (new ones without data and old ones with data), with all relations set properly. Hope it works for you.
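A minimal sketch of what the combined file might look like (the table names here are invented for illustration):
SET FOREIGN_KEY_CHECKS = 0;

-- 1. the self-contained dump of the old DB (data, procedures, events, triggers)
-- INSERT INTO customers VALUES (...); etc.

-- 2. CREATE scripts for the tables that exist only in the new DB
CREATE TABLE IF NOT EXISTS new_feature (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50)
);

SET FOREIGN_KEY_CHECKS = 1;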

Get created datetime of new columns added to existing tables

Sorry if this is a simple question, but I have a problem.
I have been adding new columns to many tables in my local DB (MySQL).
I want to deploy the changes to the production database, and I have not maintained any text file noting the changes I have made.
So how do I get the created or updated datetime of columns added to existing tables?
The table which might contain this information would be the INFORMATION_SCHEMA.COLUMNS table. The only problem is that it doesn't appear to record a timestamp when a column was added/altered. I can offer a workaround which might be just as fast. You may run SHOW CREATE TABLE on the table running in production, and then do the same on your dev version. Then, just use any reputable diff checking tool (e.g. DiffChecker.com) and look for the differences.
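If your dev and production schemas happen to live on the same server, the comparison can also be done directly against INFORMATION_SCHEMA.COLUMNS; a hedged sketch, with dev_db and prod_db as placeholder schema names:
-- columns present in dev_db but missing from prod_db
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dev_db'
  AND (TABLE_NAME, COLUMN_NAME) NOT IN (
    SELECT TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA = 'prod_db'
  );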
Moving forward, you should keep better track of the changes you make to your table during development. A much better approach, I think, would be to just keep track of the alter statements which you run on the table. Then, deploy these changes when you send everything to production.

MySQL weird behavior on databases with the same name

I have a database that I create through phpMyAdmin. Then I update it by creating tables in it via a script. All fine till now; then I use it. Two days later I drop this database altogether. Then I create another one with a different name and create something in it as well. Drop it too. Then I create a database with the same name as the first one, again creating tables in it. I leave it that way, only to find that the next day when I log in I get the first database (with its contents) rather than what I expected.
This kind of weird behaviour happens no matter what I do. The same name is not the problem, I think, since I know it can happen with databases that carry different names (I've tried it).
Basically it's like this thing forgets about its state. What is happening? Any ideas?
This thing is driving me nuts since I can't do anything without fearing that I lose data.
I'm using windows with Zend Server.
I am NOT accidentally dropping tables or something like that.
EDIT: It seems that an immediate restart does not have any effect (meaning it keeps things as they are); the random changes happen only after a certain period of time (more than a day).
Databases are stored on disk as folders containing (the exact layout depends on the storage engine):
a file describing the structure of each table
a file containing the table's rows
a file holding the indexes/keys you define
some files (just allocated blocks) intended for storing large data such as TEXT or BLOB columns that do not reside in the table's rows (the rows only hold a reference to where the blob resides).
When dropping a table the OS deletes some files. When dropping a database the OS deletes a folder.
It is possible (though very unlikely) that the database folder wasn't correctly deleted in the first place -- perhaps a permission issue with how your server was installed, a server bug, or a phpMyAdmin bug.
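If you want to check this yourself, two standard MySQL commands show where the folders live and what the server thinks exists:
SHOW VARIABLES LIKE 'datadir';  -- the directory where MySQL keeps one folder per database
SHOW DATABASES;                 -- compare this list against the folders actually on disk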

How do you version and sync your MySQL data model?

What's the best way to save my MySQL data model and automatically apply changes to my development database server as they are made (or at least nightly)?
For example, today I'm working on my project and I create this table in my database, and save the statement to a SQL file to deploy to production later:
create table dog (
uid int,
name varchar(50)
);
And tomorrow, I decide I want to record the breed of each dog too. So I change the SQL file to read:
create table dog (
uid int,
name varchar(50),
breed varchar(30)
);
That script will work in production for the first release, but it won't help me update my development database, because it fails with ERROR 1050 (42S01): Table 'dog' already exists. Furthermore, it won't work in production if this change was made after the first release. So I really need to ALTER the table now.
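For this particular change, the incremental statement is a one-liner:
ALTER TABLE dog ADD COLUMN breed varchar(30);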
So now I have two concerns:
1. Is this how I should be saving my data model (a bunch of CREATE statements in a SQL file), and
2. How should I be applying changes like this to my database?
My goal is to release changes accurately and enable continuous integration. I use a tool called DDLSYNC to find and apply differences in an Oracle database, but I'm not sure what similar tools exist for MySQL.
At work, we developed a small script to manage our database versioning. Every change to any table or set of data gets its own SQL file.
The files are numbered sequentially. We keep track of which update files have been run by storing that information in the database. The script inserts a row with the filename when the file is about to be executed, and updates the row with a completion timestamp when the execution finishes. This is wrapped inside a transaction. (It's worth remembering that DDL commands in MySQL cannot occur within a transaction. Any attempt to perform DDL in a transaction causes an implicit commit.)
Because the SQL files are part of our source code repository, we can make running the update script part of the normal rollout process. This makes keeping the database and the code in sync easy as pie. Honestly, the hardest part is making sure another dev hasn't grabbed the next number in a pending commit.
We combine this update system with an (optional) nightly wipe of our dev database, replacing the contents with last night's live system backup. After the backup is restored, the update gets run, with any pending update files getting run in the process.
The restoration occurs in such a way that only tables that were in the live database get overwritten. Any update that adds a table therefore also has to be responsible for only adding it if it doesn't exist. DROP TABLE IF EXISTS is handy. Unfortunately not all databases support that, so the update system also allows for execution of scripts written in our language of choice, not just SQL.
All of this in about 150 lines of code. It's as easy as reading a directory, comparing the contents to a table, and executing anything that hasn't already been executed, in a deterministic order.
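The answer doesn't include the script itself, but a minimal sketch of the bookkeeping it describes might look like this (table and file names are invented):
CREATE TABLE IF NOT EXISTS schema_updates (
  filename     VARCHAR(255) NOT NULL PRIMARY KEY,
  started_at   DATETIME NOT NULL,
  completed_at DATETIME NULL
);

-- before executing update file 0042_add_breed.sql:
INSERT INTO schema_updates (filename, started_at) VALUES ('0042_add_breed.sql', NOW());
-- ...run the file's statements...
-- after it finishes:
UPDATE schema_updates SET completed_at = NOW() WHERE filename = '0042_add_breed.sql';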
There are standard tools for this in many frameworks: Rails has something called Migrations, something that's easily replicated in PHP or any similar language.

Large scale MySQL changes to active sites

Just some pointers here.
I am making fairly extensive modifications to a site, including the MySQL database.
My plan is to do everything on my development server, export the new MySQL structure for the db and import it onto the clients server.
Basically I need to know that performing a structure-only import will not overwrite or delete existing data. I am not making changes to data types or field lengths.
In my experience, when you export a database (through phpMyAdmin for instance), part of the SQL script that is created includes a "DROP TABLE IF EXISTS 'table_name';" before doing a "CREATE TABLE 'table_name'...;" to build the new table.
My guess is that this is not what you want to do! Certainly use the dev system to alter the structure in order to make everything correct, but then look around for a database synchronisation routine where you can provide the old structure, the new structure, and the software will create the appropriate "ALTER TABLE 'table_name'...;" scripts to make the required changes.
You should then really examine these change files before executing them on the live database, and of course BACKUP the live database, and ensure you are able to fully recover from the backup before starting any of the alterations!
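To make the difference concrete: instead of DROP/CREATE pairs, such a generated change script would consist of statements of this shape (table and column names are placeholders):
ALTER TABLE orders ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'new';
ALTER TABLE orders MODIFY COLUMN note TEXT NULL;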
I've had to do this a lot, and it always goes like this:
Make a backup of the live database, complete with data.
Make a backup of the live database schema only.
Calculate the differences between the old (live) schema and the new (devel) schema.
Create all of the 'ALTER TABLE ...' DDL statements necessary to upgrade from the old schema to the new one. Keep in mind that if you rename a field, you probably won't be able to just rename it -- you'll need to create the new field, copy the data from the old field, and then drop the old field (see the sketch after this list).
If you changed relationships between tables, you'll probably need to drop indexes and foreign key relationships first, and then add them back afterwards.
You'll need to populate any new fields based upon their default values, if any.
Once you've got all the pieces working, you'll need to combine them into one large script, and then run it on a copy of the live database.
Dump the schema and compare it against the desired new schema -- if they don't match, go back to step 3 and repeat.
Dump the data and compare it against the expected changes -- again, if they don't match, go back to step 3 and repeat.
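As mentioned in the rename step above, a sketch of the add/copy/drop pattern (names invented for illustration):
ALTER TABLE customer ADD COLUMN full_name VARCHAR(100);
UPDATE customer SET full_name = old_name;   -- copy the data across
ALTER TABLE customer DROP COLUMN old_name;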
You're going to learn a lot more about SQL DDL/DML during this process than you ever thought you'd learn. (For one project, where we were switching from natural keys to UUID keys for 50+ tables, I ended up writing programs to generate all of the DDL/DML.)
Good luck, and make frequent backups.
I'd recommend preparing an SQL script for every change you make on the development server, so you will be able to reproduce it on production. You shouldn't get to the point where you need to calculate differences between database structures.
This is how I do it: all changes are reflected in SQL scripts, and I can reconstruct the history of my database by running all these files if needed.
Test the final release version on a "staging" MySQL server. Make a copy of your production server on another machine and test your script there to make sure everything's OK.
Of course, a preliminary database backup is a must.