I have two active database connections, and I need to replace a number of tables in 'connection1' with those from 'connection2'. The structures may or may not be the same (depending on whether we have made changes to the connection1 tables).
I would assume I should do a complete table dump and replace keys where necessary, but I really have no idea how to do this :)
Any help?
Have a look at the Schema and Data sync tools in dbForge Studio for MySQL. They will help you compare two databases on different servers, map tables and fields, and generate and run a synchronization script.
I ended up using the built-in system() command in PHP and mysqldump to first dump (export) the data to a file, then used system() again with mysql to import it and replace the old table.
Works like a charm :)
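For anyone after the concrete commands, the two system() calls wrapped something roughly like this; the database names db1 and db2, the table name mytable, and the credentials are placeholders rather than anything from the original setup:

# dump the table from connection2's database to a file
mysqldump -u user -p db2 mytable > /tmp/mytable.sql
# load that file into connection1's database
mysql -u user -p db1 < /tmp/mytable.sql

By default mysqldump writes DROP TABLE IF EXISTS / CREATE TABLE statements into the dump, so the import replaces the old table's structure and data in one step.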
I have a .sql file from Oracle which contains create table/index statements and a lot of insert statements (around 1M inserts).
I can manually modify the create table/index part (it's not too much), but the insert part uses some Oracle functions like to_date.
I know MySQL has a similar function, STR_TO_DATE, but its format parameter works differently.
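For example, an Oracle insert like the first statement below would need its date call rewritten into MySQL's format syntax (the table and column names here are just for illustration):

-- Oracle version, as it appears in the dump
INSERT INTO orders (id, created) VALUES (1, to_date('2009-01-15', 'YYYY-MM-DD'));
-- MySQL equivalent
INSERT INTO orders (id, created) VALUES (1, STR_TO_DATE('2009-01-15', '%Y-%m-%d'));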
I can connect to MySQL, but the .sql file is the only thing I got from Oracle.
Is there any way I can import this Oracle .sql file into MySQL?
Thanks.
Although the above job can be done by manually editing the script appropriately, there are products available which can be of use. Refer to the link for more information on one such product.
P.S. I am not affiliated in any way to the product
Since you mention an insert script, I think you will basically be inserting data. For that you can use any ETL tool, for example an open-source one like Pentaho Data Integration; it is pretty simple to do. Just search YouTube for a table-to-table transformation between different database connections to learn how. You need to be able to connect to both the MySQL and Oracle databases, otherwise this won't help. All the table structures you should create manually in the target (MySQL) database; the data you can then just load with the ETL tool, so there is no need to edit every single insert line, which would be a very painful thing to do if there are more than a hundred of them.
We are handling a data aggregation project in which several Microsoft SQL Server databases are combined into one MySQL database. All the MSSQL databases have the same schema.
The requirements are :
each MSSQL database can be imported into MySQL independently
before importing each record into MySQL, we need to validate it against specific criteria via PHP
each imported MSSQL database can be rolled back; that is, even after it has been imported into MySQL, all of that MSSQL database's data can be removed again
we still need to know which MSSQL database each record imported into MySQL came from
all import processing will be done with PHP
We are having difficulty in many respects and don't know what the best approach is to solve our problem.
Your help will be highly appreciated.
PS: each MSSQL database has around 60 tables, and each table can have a few hundred thousand records.
Don't use PHP as a database administration utility. Any time you build a quick PHP script to transfer records directly from one database to another, you're going to cause yourself a world of hurt when that script becomes required for production operation.
You have a number of problems that you need solved:
You have multiple MSSQL databases with similar if not identical tables.
You have a single MySQL database that you want to merge the data into.
The imported data must be altered in a specific way before being merged.
You want to prevent all duplicate records in your import.
You want to know what database each record originally came from.
The solution?
1. Analyze the source MSSQL databases and create a merge strategy for them.
2. Create a database structure on the MySQL database that fits the merge strategy in #1, including all the new key constraints (like unique and foreign keys) required for the consolidation.
At this point you have two options left:
Dump the data from each of the source databases into raw data using your RDBMS administration utility of choice. Alter that data to fit your merge strategy and constraints. Document this, and then merge all of the data into your new database structure.
Use a tool like opendbcopy to map columns from one database to another and run a mass import.
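As a rough sketch of the kind of consolidated structure step 2 describes, assuming a hypothetical customers table, with a source_db column plus the original key used to track provenance (all names here are illustrative, not from the question):

CREATE TABLE customers (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
    source_db   VARCHAR(32) NOT NULL,   -- which MSSQL database the row came from
    source_id   INT NOT NULL,           -- the row's primary key in that source database
    name        VARCHAR(255) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_source (source_db, source_id)  -- blocks importing the same source row twice
) ENGINE=InnoDB;

The unique key on (source_db, source_id) covers both the duplicate-prevention and the provenance requirements, and rolling back one source database becomes a single DELETE FROM customers WHERE source_db = 'server1'.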
Hope this helps.
Was wondering if anyone had any insight or recommended tools for exporting the records from a PostgreSQL database and importing them into a MySQL database. I believe the table structure is 100% identical.
Thoughts? Thanks!
The command
pg_dump --data-only --column-inserts <database_name>
will generate SQL-standard-compliant INSERT statements with all column names listed and one VALUES clause per INSERT. This is the most portable way of moving data from PostgreSQL to any other SQL database.
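For example, assuming a users table with a couple of columns (purely for illustration), the generated file contains statements of this shape, which MySQL can usually load directly with mysql dbname < data.sql (you may need to strip the PostgreSQL-specific SET lines at the top of the file first):

INSERT INTO users (id, name, created_at) VALUES (1, 'Alice', '2009-02-11 10:30:00');
INSERT INTO users (id, name, created_at) VALUES (2, 'Bob', '2009-02-12 08:15:00');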
Check out SquirrelSQL, it can pump data from one database brand into another via the DBCopy plugin. When the table structures are really identical it works quite well.
There is a ruby app called Taps that will do it. I've used it before with great success:
http://adam.heroku.com/past/2009/2/11/taps_for_easy_database_transfers/
I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here also involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP/Python script (the two languages I know), connect to the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have a default value of 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. on its own won't work. Here is the detail: I have FOUR old databases, which I will call 'DB_a', 'DB_b', 'DB_c', and 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database together with a new column identifying which database it came from ('DB_x', where x indicates the source database). Since I can't tell the source database from a row's content, the only way I can identify them is through some user input parameters.
Is there any tool, or a better method, than writing a script yourself? I don't need to worry about multithreaded writes etc.; the old database will be down (not open to public usage, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
Use mysqldump to dump the data to a file, then load it on the new server with mysql < output.sql. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a premier example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump --complete-insert option (see link above)
5) Import your data, using mysql < data.sql
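Put together, the sequence looks roughly like this (mydb, newdb, and the file names are placeholders; adjust the user/host options to your own setup):

mysqldump --no-data -u user -p mydb > schema.sql
# edit schema.sql by hand here to add the new columns (step 2)
mysql -u user -p newdb < schema.sql
mysqldump --no-create-info --complete-insert -u user -p mydb > data.sql
mysql -u user -p newdb < data.sql

Because --complete-insert lists every column name in each INSERT, the rows load cleanly into tables that have gained extra columns, and the new columns simply take their default values.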
This should do the job for you, good luck!
Adding extra columns can be done on a live database:
ALTER TABLE table_name ADD COLUMN column_name MEDIUMINT(8) DEFAULT 0;
MySQL will fill the new column with the default value for all existing rows.
So here is what I would do:
1. Make a copy of your old database with the mysqldump command.
2. Run the resulting SQL file against your new database; now you have an exact copy.
3. Write a migration.sql file that modifies your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL stored procedures.
4. Test your script (if it fails, go back to step 2).
5. If all is OK, go back to step 1 and go live with your new database.
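For the DB_x requirement in the question, the migration.sql could contain something along these lines (the column names are illustrative, and you would change the literal per source database before each merge):

-- record which old database every existing row came from
ALTER TABLE A ADD COLUMN source_db VARCHAR(8) NOT NULL DEFAULT 'DB_a';
-- example of one of the other structural changes (the 29th column with default 0)
ALTER TABLE A ADD COLUMN extra_col INT NOT NULL DEFAULT 0;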
These are all valid approaches, but I believe what you really want is to write a SQL statement that generates the INSERT statements for you, including the new columns you have added.
I need to convert data that already exists in a MySQL database, to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in proper 3NF. Does anyone have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
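A rough sketch of that setup, assuming the MySQL ODBC driver is installed and an ODBC DSN named MySQL_DSN has been created (the linked server, database, and table names below are placeholders):

-- register the MySQL server as a linked server via the ODBC provider
EXEC sp_addlinkedserver
    @server = 'MYSQL_LINK',
    @srvproduct = 'MySQL',
    @provider = 'MSDASQL',
    @datasrc = 'MySQL_DSN';

-- pull rows from MySQL into a SQL Server staging table
SELECT *
INTO dbo.STG_customers
FROM OPENQUERY(MYSQL_LINK, 'SELECT * FROM mydb.customers');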
Congrats on moving up in the RDBMS world!
SSIS is designed to do this kind of thing. The first step is to map out manually where each piece of data will go in the new structure. Say your old table had four fields: in your new structure, fields 1 and 2 go to table A and fields 3 and 4 go to table B, but table B also needs the autogenerated id from table A. Make notes of where data types have changed and adjustments may be needed, or where fields are now required that held optional data before, etc.
What I usually do is create staging tables. Put the data in its denormalized form into one staging table, then move it into normalized staging tables and do the clean-up there, adding the new ids to the staging tables as soon as you have them. One thing you will need to do when moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data clean-up, since there may be required fields in the new structure that were not required in the old one, or data conversion issues caused by moving to better data types (for instance, if the old database stored dates in varchar fields and you properly move to datetime in the new db, you may have some records that don't contain valid dates).
Another issue you need to think about is how you will map the old record ids to the new ones.
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
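As a small sketch of the de-duplication step described above, assuming a denormalized staging table STG_raw and a normalized customers parent table (all of these names are made up for illustration):

-- load each distinct customer exactly once into the parent table
INSERT INTO dbo.customers (customer_name, customer_email)
SELECT DISTINCT customer_name, customer_email
FROM dbo.STG_raw;

-- carry the newly generated customer ids back into the staging data
UPDATE s
SET s.customer_id = c.customer_id
FROM dbo.STG_raw AS s
JOIN dbo.customers AS c
    ON c.customer_name = s.customer_name
   AND c.customer_email = s.customer_email;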
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you; it depends on the original and new database designs. In my career I've had to do more than a few data migrations, and we almost always had to build a special utility that would populate a fresh database with records from the old one, and yes, we coded it complete with all the update/insert statements needed to transform the data.
I don't know how many tables your database has, but if there are not too many then you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click, under tasks should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run for you automatically or which you can save and then alter as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to change the data from the old format to the new. An alternative, though, would be to import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, the import process that I mentioned above in SSMS could be used. It will even create the destination tables for you; just be sure to give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way that you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first since linked servers can sometimes have weirdness, especially when it comes to data types not mapping between different providers. Having the tables all in MS SQL will also probably be a bit faster and saves time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.