phpMyAdmin - deploying the most recent database changes - MySQL

I'm wondering if there is any better way than going through the tables one by one, adding the missing columns, whenever fields or tables need to be added because of the most recent changes in the app.
For example, I work on localhost, and when I finish a new version of my app I upload all the files to my FTP. Sometimes I have also made changes to my local database, which means I also need to update the database on my server.
Is there any better way to add/edit the columns/tables without losing the existing data? Some of the columns also get deleted, etc.

Hopefully you've thought your database design through so that making changes to the structure is a rare occurrence. If you're making regular changes to the number of columns or adding tables, it's likely a sign that you haven't normalized your database structure sufficiently.
Anyway, I'd script it as an SQL file that you deploy (which you can then run through phpMyAdmin, the command line, or any other means you prefer for executing SQL queries). This has the added advantage of being something you can easily duplicate across your development and production databases, send to customers, and, if you wish, store in version control so you know exactly when you made the changes to the database.
This way, you'll end up with an SQL file that has a couple of statements like
ALTER TABLE `foo` ADD `new` INT NOT NULL;
or something similar.
As for how you'd make the file, probably the easiest way is just copying and pasting the generated SQL statement from phpMyAdmin after modifying the table -- the SQL code used to make the change is shown near the top of the screen on the next page. You can paste that into a new text file to create your SQL file. You may wish to add, as the first line,
use `baz`;
using your database name instead of "baz". That way you don't have to specify on import which database the changes are meant for.
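As a rough illustration, a small deployment file built this way might look like the following (the database, table, and column names are just placeholders):
USE `baz`;
ALTER TABLE `foo` ADD `new` INT NOT NULL;
ALTER TABLE `foo` DROP COLUMN `obsolete_column`;
ALTER TABLE `bar` MODIFY `price` DECIMAL(10,2) NOT NULL;
Because it is plain SQL, the same file can be run through phpMyAdmin's Import tab or from the command line with something like: mysql -u someuser -p baz < changes.sql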
Hope this helps.

Related

How to properly wipe a database, and re-import?

I am unsure about the best way to do this. As I'm getting ready to put a new database into production, I need to import data from the old database, which has kept changing while I was working on the new one. The new database also contains a lot of fake data that was used for testing, which I have to get rid of, so a fresh, complete re-import seems reasonable.
Now, truncating all the tables in the new database doesn't work, because the foreign key constraints prevent it. Simply deleting the data instead would solve that problem, but it leaves the AUTO_INCREMENT counters at the values they had reached, so it's not a "proper" wipe. There could be more properties like that one that would be left over, but this is the only one I'm aware of.
So my question now is, how much of a problem could these "leftover" pieces of data pose to performance, if I were to go with the simple DELETE solution?
And also: is there a way that would be more thorough in cleaning it out, while of course still letting me keep the defined constraints?
First I would use some GUI tool to create the dump for the old DB (like MySQL Workbench, or whatever you prefer). Check the option "Export to self-contained file", and check "Dump stored procedures and functions", "Dump events" and "Dump triggers".
Then get the CREATE scripts for all tables that are not included in the old DB.
You can do this via the "reverse engineer" option.
If you have trouble with this part, this post will help:
How to get a table creation script in MySQL Workbench?
When you have the old DB dump and the CREATE scripts for the new SQL tables, combine them into a single SQL file.
On the first line, add:
SET FOREIGN_KEY_CHECKS = 0;
On the last line, add:
SET FOREIGN_KEY_CHECKS = 1;
Run the script. As a result you should have all the tables (new ones without data and old ones with data), with all relations set properly. Hope it works for you.
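For reference, a minimal sketch of how the combined file could be structured (the table shown here is only a placeholder for your real CREATE statements and the pasted dump):
SET FOREIGN_KEY_CHECKS = 0;
-- CREATE statements for the tables that exist only in the new DB, e.g.:
CREATE TABLE `new_feature` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(100) NOT NULL,
  PRIMARY KEY (`id`)
);
-- ...followed by the full dump of the old DB (tables, data, triggers, routines)...
SET FOREIGN_KEY_CHECKS = 1;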

If I update a SQL table schema, do I have to update all users' linked tables in their DBs?

I updated the SCHEMA of a live table in MySQL for use in my multi-user database. Each user has their own db and links to the production tables through ODBC.
I have been receiving a write error while trying to test my schema updates, and I cannot find the core reason. My hypothesis is that because the other users are in the production table but have not been relinked to pick up the updated schema, this is causing a conflicting write error on my relinked table.
I added a TINYINT column with no NULLs and a default value of 0.
I double-checked all data types for incompatibilities, and I tested the "non-relinked" tables in an older version of the DB and confirmed they work as intended with no errors.
I expect/want to be able to edit records without a write error, but I'm hesitant to update the other users to the new table while it is still producing write errors.
After changing the schema of a linked table, it's required to refresh the link on all Access databases connected to it.
You can do this on the ribbon through external data -> linked table manager.
Unfortunately, every user that has a database needs to do this manually, unless you automate the task on startup through VBA.
You have two separate issues. To "see" new columns, then yes, you must re-link the tables.
(So the above is a separate question and a separate issue.) As a general rule, you can add new columns to the database, even while it is in use. However, the client-side linked tables will not see the new columns until you re-link. This approach (adding new columns, but not yet re-linking from Access) is certainly ok and fine - the only downside is that end users can't see or use the new columns until you re-link. From a developer's point of view this is good, since your users will not see or find the new columns until you roll out a new front end to each workstation.
Ok, now for problem and issue number two.
Adding a new column, re-linking, and THEN having some issue is really a separate matter. In most cases, if you are attempting to use a TINYINT as a Boolean (and I think that is your case), then you need to ensure several things (a DDL sketch follows this list):
Do not allow nulls (you seem to have this ok).
Make sure you set a default of 0 (server side) for this column. You might not have allowed nulls, but without a default Access will likely still complain. This default is also important at creation time, since the new column needs to be "filled" with zeros.
Make sure the table has a PK defined.
Consider adding a row version column (in MySQL an automatically updated TIMESTAMP column can serve this purpose; it can help immensely).
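A rough MySQL sketch of the points above, using made-up table and column names:
ALTER TABLE `my_table`
  ADD COLUMN `is_active` TINYINT NOT NULL DEFAULT 0;
-- an automatically updated TIMESTAMP can act as a row version column for Access
ALTER TABLE `my_table`
  ADD COLUMN `row_version` TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
After running something like this on the server, each user's front end still has to re-link the table before the new columns show up.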

Making sure that a table is constructed correctly

I have a database schema and a web application. I want the web application to be able to select, insert and remove rows in a table, but the table may not exist (in a testing environment, for example), or the table may be missing columns, most likely because the web application has been updated.
I want to be able to make sure that the table is ready to accept the data that the web application sends to it during the time the application is alive.
The idea I had is the application (written in Java) will have a table structure embedded into it, and when the application starts, just copy all of the data in the table (if it exists) to a temporary table, delete the old table and make a new one with the temporary table's data, and then drop the temporary table. As you can tell, it's nowhere near innovative.
Another idea I had is to use the SHOW COLUMNS command to correct any missing columns, in parallel with SHOW TABLES LIKE to check whether the table exists, but I feel like Stack Overflow would have a better solution. Is that all I can do?
There are many ways to solve the problem of keeping the database version consistent with the version of the application.
In a production database, however, such a mismatch is unacceptable.
I think that the simplest ways are the best.
To ensure such consistency, it is enough to execute a script that updates the database before performing the testing.
START TRANSACTION;
DROP TABLE IF EXISTS ...;
CREATE TABLE ...;
COMMIT;
Remember about IF EXISTS and about having the DROP privilege!
Such a script can easily be managed by placing it in version control (RCS) and tracking the version number required by the application.
You can also store this version number in a table in the database itself and check, when the application starts, whether the number matches the expected one; if it does not, call the database update script.
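A minimal sketch of that version check, assuming a single-row table called schema_version (the name and the version number are just placeholders):
CREATE TABLE IF NOT EXISTS `schema_version` (
  `version` INT NOT NULL
);
-- the update script records the version it brings the schema to
DELETE FROM `schema_version`;
INSERT INTO `schema_version` (`version`) VALUES (42);
-- the application runs this at startup and compares the result
-- with the version it expects; if they differ, it runs the update script
SELECT `version` FROM `schema_version`;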
Have a look at JPA and Hibernate. There is the hbm2ddl.auto property; it looks like the "update" option does what you want.
For more details:
What are the possible values of the Hibernate hbm2ddl.auto configuration and what do they do

Large scale MySQL changes to active sites

Just looking for some pointers here.
I am making fairly extensive modifications to a site, including the MySQL database.
My plan is to do everything on my development server, export the new MySQL structure for the db and import it onto the client's server.
Basically I need to know that performing a structure-only import will not overwrite/delete existing data. I am not making changes to the data type or field length.
In my experience, when you export a database (through phpMyAdmin for instance), part of the SQL script that is created includes a "DROP TABLE IF EXISTS 'table_name';" before doing a "CREATE TABLE 'table_name'...;" to build the new table.
My guess is that this is not what you want to do! Certainly use the dev system to alter the structure in order to make everything correct, but then look around for a database synchronisation routine where you can provide the old structure, the new structure, and the software will create the appropriate "ALTER TABLE 'table_name'...;" scripts to make the required changes.
You should then really examine these change files before executing them on the live database, and of course BACKUP the live database, and ensure you are able to fully recover from the backup before starting any of the alterations!
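To make the difference concrete, here is a rough contrast using a made-up orders table:
-- what a structure-only export typically contains (recreates the table and loses its data):
DROP TABLE IF EXISTS `orders`;
CREATE TABLE `orders` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `status` VARCHAR(20) NOT NULL,
  PRIMARY KEY (`id`)
);
-- what a synchronisation tool should generate instead (existing rows are preserved):
ALTER TABLE `orders` ADD COLUMN `status` VARCHAR(20) NOT NULL DEFAULT 'new';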
I've had to do this a lot, and it always goes like this:
1. Make a backup of the live database, complete with data.
2. Make a backup of the live database schema only.
3. Calculate the differences between the old (live) schema and the new (devel) schema.
4. Create all of the 'ALTER TABLE ...' DDL statements necessary to upgrade from the old schema to the new one. Keep in mind that if you rename a field, you probably won't be able to just rename it -- you'll need to create the new field, copy the data from the old field, and then drop the old field (see the sketch after this list).
5. If you changed relationships between tables, you'll probably need to drop indexes and foreign key relationships first, and then add them back afterwards.
6. You'll need to populate any new fields based upon their default values, if any.
7. Once you've got all the pieces working, you'll need to combine them into one large script, and then run it on a copy of the live database.
8. Dump the schema and compare it against the desired new schema -- if they don't match, go back to step 3 and repeat.
9. Dump the data and compare it against the expected changes -- again, if they don't match, go back to step 3 and repeat.
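As a sketch of the "rename by copy" step mentioned in the list, with made-up table and column names:
-- "rename" customer_name to full_name without losing data
ALTER TABLE `customers` ADD COLUMN `full_name` VARCHAR(255) NULL;
UPDATE `customers` SET `full_name` = `customer_name`;
ALTER TABLE `customers` MODIFY COLUMN `full_name` VARCHAR(255) NOT NULL;
ALTER TABLE `customers` DROP COLUMN `customer_name`;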
You're going to learn a lot more about SQL DDL/DML during this process than you ever thought you'd learn. (For one project, where we were switching from natural keys to UUID keys for 50+ tables, I ended up writing programs to generate all of the DDL/DML.)
Good luck, and make frequent backups.
I'd recommend preparing an SQL script for every change you make on the development server, so you will be able to reproduce it on production. You shouldn't get to the point where you need to calculate differences between database structures.
This is how I do it: all changes are reflected in SQL scripts, and I can reconstruct the history of my database by running all these files if needed.
Test the final release version on a "staging" MySQL server. Make a copy of your production server on another machine and test your script to make sure everything's ok.
Of course, preliminary database backup is a must.

MySQL to SQL Server transferring data

I need to convert data that already exists in a MySQL database, to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in proper 3NF. Does anyone have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
Congrats on moving up in the RDBMS world!
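A rough sketch of that approach on the SQL Server side, assuming a MySQL ODBC DSN is already configured (the DSN, linked server, and table/column names below are placeholders):
-- register the MySQL database as a linked server through the ODBC provider
EXEC sp_addlinkedserver
    @server = 'MYSQL_LINK',
    @srvproduct = 'MySQL',
    @provider = 'MSDASQL',
    @datasrc = 'my_mysql_dsn';
-- then copy data across with INSERT ... SELECT through OPENQUERY
INSERT INTO dbo.Customers (Name, Email)
SELECT Name, Email
FROM OPENQUERY(MYSQL_LINK, 'SELECT name, email FROM customers');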
SSIS is designed to do this kind of thing. The first step is to map out manually where each piece of data will go in the new structure. So your old table had four fields; in your new structure, fields 1 and 2 go to table A and fields 3 and 4 go to table B, but you also need the autogenerated id from table A. Make notes as to where data types have changed and you may need to make adjustments, or where you have required fields where the data was not required before, etc.
What I usually do is create staging tables. Put the data in denormalized form in one staging table, then move it to normalized staging tables and do the clean-up there, adding the new ids to the staging tables as soon as you have them. One thing you will need to do if you are moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data clean-up, as there may be required fields in the new structure that were not required in the old, or data conversion issues because of moving to better data types (for instance, if you stored dates in the old database in varchar fields but properly move to datetime in the new db, you may have some records which don't have valid dates).
Another issue you need to think about is how you will convert from the old record ids to the new ones.
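A rough T-SQL sketch of that staging flow, including carrying the old id along so it can be mapped to the new one (all table and column names here are made up, and dbo.Customers is assumed to have an identity CustomerId):
-- denormalized rows land in a staging table first
CREATE TABLE STG_Orders (
    OldOrderId    INT,
    CustomerName  NVARCHAR(200),
    CustomerEmail NVARCHAR(200),
    OrderDate     VARCHAR(50)    -- kept as text until it has been cleaned up
);
-- deduplicate the parent rows before loading the real Customers table
INSERT INTO dbo.Customers (Name, Email)
SELECT DISTINCT CustomerName, CustomerEmail
FROM STG_Orders;
-- map old ids to the new identity values while loading the child table
INSERT INTO dbo.Orders (CustomerId, OrderDate, LegacyOrderId)
SELECT c.CustomerId, CONVERT(datetime, s.OrderDate), s.OldOrderId
FROM STG_Orders s
JOIN dbo.Customers c
    ON c.Name = s.CustomerName AND c.Email = s.CustomerEmail;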
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you, depending on the original and new database designs. In my career I've had to do more than a few data migrations, and we almost always had to design a special utility that would update a fresh database with records from the old database -- and yes, we coded it complete with all the update/insert statements that would transform the data.
I don't know how many tables your database has, but if there are not too many then you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click, under tasks should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run for you automatically or which you can save and then alter as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to do to change it from the old format to the new. An alternative, though, would be to just import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, then the import process that I mentioned above in SSMS could be used. It will even create the destination tables for you. Just be sure that you give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way that you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first since linked servers can sometimes have weirdness, especially when it comes to data types not mapping between different providers. Having the tables all in MS SQL will also probably be a bit faster and saves time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.