Migrate from Oracle to MySQL
We ran into serious performance problems with our Oracle database and we would like to try to migrate it to a MySQL-based database (either MySQL directly or, preferably, Infobright).
The thing is, we need to let the old and the new system overlap for at least a few weeks, if not months, before we actually know whether all the features of the new database match our needs.
So, here is our situation:
The Oracle database consists of multiple tables, each with millions of rows. During the day, there are literally thousands of statements, which we cannot stop for the migration.
Every morning, new data is imported into the Oracle database, replacing a few thousand rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel.
But, and here lies the challenge, for this to work we need an export from the Oracle database in a consistent state from a single day. (We cannot export some tables on Monday and others on Tuesday, etc.) This means that at least the export should finish in less than one day.
Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables to CSV files might work, but I'm afraid it could take too long.
So my question now is:
What should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have any experience with such a large-scale migration?
PS: Please, don't suggest performance optimization techniques for Oracle, we already tried a lot :-)
Edit: We already tried some ETL tools before, only to find out that they were not fast enough: exporting just one table already took more than 4 hours ...
2nd Edit: Come on folks ... did nobody ever try to export a whole database as fast as possible and convert the data so that it can be imported into another database system?
Oracle does not supply an out-of-the-box unload utility.
Keep in mind that without comprehensive info about your environment (Oracle version? server platform? how much data? what datatypes?), everything here is YMMV, and you would want to try it on your own system to gauge performance and timing.
My points 1-3 are just generic data movement ideas. Point 4 is a method that will reduce downtime or interruption to minutes or seconds.
1) There are 3rd-party utilities available. I have used a few of them, but it's best for you to check them out yourself for your intended purpose. A few 3rd-party products are listed here: OraFAQ. Unfortunately, a lot of them run on Windows, which would slow down the data unload process unless your DB server is on Windows and you can run the load utility directly on the server.
2) If you don't have any complex datatypes like LOBs, you can roll your own with SQL*Plus. If you do it a table at a time, you can easily parallelize it. The topic has been visited on this site probably more than once; here is an example: Linky. (See the sketch after this list.)
3) If you are on 10g+, then external tables might be a performant way to accomplish this task. If you create some blank external tables with the same structure as your current tables and copy the data to them, the data will be converted to the external table format (a text file). Once again, OraFAQ to the rescue.
4) If you must keep systems in parallel for days/weeks/months then use a change data capture/apply tool for near-zero downtime. Be prepared to pay $$$. I have used Golden Gate Software's tool that can mine the Oracle redo logs and supply insert/update statements to a MySQL database. You can migrate the bulk of the data with no downtime the week before go-live. Then during your go-live period, shut down the source database, have Golden Gate catch up the last remaining transactions, then open up access to your new target database. I have used this for upgrades and the catch-up period was only a few minutes. We already had a site license for Golden Gate so it wasn't anything out of pocket for us.
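For point 2, here is a minimal sketch (my own illustration, not from the answer above) of what "rolling your own" could look like: a small Python driver that writes a SQL*Plus spool script per table and runs several unloads in parallel. The connect string, table list and CSV formatting are placeholder assumptions; for clean CSV you would normally concatenate the columns with a delimiter in the SELECT instead of relying on COLSEP.

# Sketch only: per-table SQL*Plus spool-to-CSV unload, run in parallel.
# Connect string and table names are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CONNECT = "user/password@ORCL"              # placeholder
TABLES = ["ORDERS", "CUSTOMERS", "ITEMS"]   # placeholder

SPOOL_TEMPLATE = """SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMSPOOL ON
SET TERMOUT OFF
SET COLSEP ','
SPOOL {table}.csv
SELECT * FROM {table};
SPOOL OFF
EXIT
"""

def unload(table):
    script = f"{table}_unload.sql"
    with open(script, "w") as f:
        f.write(SPOOL_TEMPLATE.format(table=table))
    # -S runs SQL*Plus silently; the spooled output lands in <table>.csv
    subprocess.run(["sqlplus", "-S", CONNECT, f"@{script}"], check=True)

# one SQL*Plus session per table, several tables at once
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(unload, TABLES))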
And I'll play the role of cranky DBA here and say that if you can't get Oracle performing well, I would love to see a write-up of how MySQL fixed your particular issues. If you have an application where you can't touch the SQL, there are still lots of possible ways to tune Oracle. /soapbox
I have built a C# application that can read an Oracle dump (.dmp) file and pump its tables of data into a SQL Server database.
This application is used nightly on a production basis to migrate a PeopleSoft database to SQL Server. The PeopleSoft database has 1100+ database tables and the Oracle dump file is greater than 4.5GB in size.
This application creates the SQL Server database and tables and then loads all 4.5GB of data in less than 55 minutes running on a dual-core Intel server.
I don't believe it would be too difficult to modify this application to work with other databases provided they have an ADO.NET provider.
yeah, Oracle is pretty slow. :)
You can use any number of ETL tools to move data from Oracle into MySQL. My favourite is SQL Server Integration Services.
If you have Oracle 9i or higher, you can implement Change Data Capture. Read more here http://download-east.oracle.com/docs/cd/B14117_01/server.101/b10736/cdc.htm
Then you can take a delta of changes from Oracle to your MySQL or Infobright using any ETL technologies.
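As a rough illustration of the "take a delta and apply it" idea (this is not Oracle CDC itself, just a hand-rolled sketch), the following Python assumes each table has a LAST_MODIFIED timestamp column and an ID primary key, which are assumptions on my part; connection details and names are placeholders.

# Hypothetical delta copy: pull yesterday's changes from Oracle and upsert into MySQL.
import cx_Oracle
import pymysql

ora = cx_Oracle.connect("user", "password", "oracle-host/ORCL")
my = pymysql.connect(host="mysql-host", user="user", password="password", database="db_name")

with ora.cursor() as src, my.cursor() as dst:
    src.execute("""
        SELECT id, col1, col2
        FROM some_table
        WHERE last_modified >= TRUNC(SYSDATE) - 1
    """)
    while True:
        rows = src.fetchmany(10000)   # stream in batches to keep memory flat
        if not rows:
            break
        dst.executemany(
            "INSERT INTO some_table (id, col1, col2) VALUES (%s, %s, %s) "
            "ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2)",
            rows,
        )
        my.commit()

ora.close()
my.close()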
We had the same issue: we needed to get tables and data from an Oracle DBMS to a MySQL DBMS.
We used this tool, which we found online... It worked well.
http://www.sqlines.com/download
This tool will basically help you:
Connect to your source DBMS (Oracle)
Connect to the destination DBMS (MySQL)
Specify the schema and tables in the Oracle DBMS you want to migrate
Press a "Transfer" button to run the migration process (running built-in migration queries)
Get a transfer log, which tells how many records were read from the source and written to the destination database, and which queries failed.
Hope this will help others who land on this question.
I've used Pentaho Data Integration to migrate from Oracle to MySQL (I also migrated the same data to PostgreSQL, which was about 50% quicker, which I guess was largely due to the different JDBC drivers being used). I followed Roland Bouman's instructions here, almost to the letter, and was very pleasantly surprised at how easy it was:
Copy Table data from one DB to another
I don't know whether it will be appropriate for your data load, but it's worth a shot.
I recently released etlalchemy to accomplish this task. It is an open-source solution which allows migration between any two SQL databases with 4 lines of Python, and was initially designed to migrate from Oracle to MySQL. Support has been added for MySQL, PostgreSQL, Oracle, SQLite and SQL Server.
This will take care of migrating schema (arguably the most challenging), data, indexes and constraints, with many more options available.
To install:
$ pip install etlalchemy
On El Capitan: pip install --ignore-installed etlalchemy
To run:
from etlalchemy import ETLAlchemySource, ETLAlchemyTarget
orcl_db_source = ETLAlchemySource("oracle+cx_oracle://username:password@hostname/ORACLE_SID")
mysql_db_target = ETLAlchemyTarget("mysql://username:password@hostname/db_name", drop_database=True)
mysql_db_target.addSource(orcl_db_source)
mysql_db_target.migrate()
Concerning performance, this tool utilizes bulk import tools across various RDBMSs, such as mysqlimport and COPY FROM (PostgreSQL), to carry out migrations efficiently. I was able to migrate a 5GB SQL Server database with 33,105,951 rows into MySQL in 40 minutes, and a 3GB, 7,000,000-row Oracle database to MySQL in 13 minutes.
To get more background on the origins of the project, check out this post. If you get any errors running the tool, open an issue on the github repo and I'll patch it up in less than a week!
(To install the "cx_Oracle" Python driver, follow these instructions)
You can use a Python, SQL*Plus and mysql.exe (MySQL client) script to copy a whole table or just query results.
It will be portable because all those tools exist on Windows and Linux.
When I had to do it, I implemented the following steps using Python:
Extract the data into a CSV file using SQL*Plus.
Load the dump file into MySQL using mysql.exe.
You can improve performance by loading in parallel, by table/partition/sub-partition.
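A minimal sketch of the load step (step 2), assuming the CSV from step 1 is already on disk and LOAD DATA LOCAL is enabled; host, credentials, file and table names are placeholders:

# Sketch only: load the CSV produced by the SQL*Plus export via the mysql command-line client.
import subprocess

load_sql = (
    "LOAD DATA LOCAL INFILE 'some_table.csv' "
    "INTO TABLE some_table "
    "FIELDS TERMINATED BY ',' "
    "OPTIONALLY ENCLOSED BY '\"' "
    "LINES TERMINATED BY '\\n'"
)

subprocess.run(
    ["mysql", "--local-infile=1", "-h", "mysql-host",
     "-u", "user", "-psecret", "db_name", "-e", load_sql],
    check=True,  # raise if the load fails
)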
Disclosure: Oracle-to-MySQL-Data-Migrator is the script I wrote for data integration between Oracle and MySQL on Windows OS.
Related
SSMA mySQL to SQLServer. Migrate only part of the table
I need to migrate a series of tables from MySQL 5.6 -> SQL Server 2016. A couple of these tables are extremely large (6×10^9 rows, i.e. billions of records). I've tried to migrate them using SSMA but, unfortunately, the process takes over a week and frequently crashes before it is completed. My questions are:
1) Is there a better alternative, e.g. free software tools? Any suggestions welcome.
2) If not: is it possible to configure SSMA to migrate only part of the table (e.g. 10%)? An option is to migrate the table to a .SQL file, then run it in SQL Server as a query. The problem then is opening a file that can easily be 50-100 GB.
3) Are there any SSMA tweaks that can speed up the migration? E.g. the options file has parameters such as the buffer size and executing from server side/client side, but no documentation explains the advantages.
How to make a database migration where the target remains fully operational during it
At our company we are trying to migrate data from an old local SQL Server database to an RDS MySQL database using SSIS. The original database is roughly 4GB in size and we are required to do the migration without taking down the production servers. The dev team reports that the migration runs fine with data being transferred, but after several hours (roughly 8 hours, but it's not exact; sometimes it's less, sometimes it's more) the connection abruptly closes. We have tried everything we can possibly think of on our side but we don't know what else could be going wrong. Based on their tests and ours, we think the instance could be closing the connection after being open for too long. Does anyone know what could be causing this? Is there an alternative tool we could use so that the target database remains fully operational during the migration?
I recommend you try MySQL Workbench 6.3, which Oracle has out and which has a piece precisely designed for your purpose. It is under the GNU license, so there is a community version which is free. There is also Data Loader, which has a free trial version; the standard version is only $99. You can use a logical export and convert it, so there will be no downtime. GoldenGate would be perfect, but it is crazy expensive. I know people who have used Kettle to do what you are doing. Kettle is open source but you will have to write transforms, so it will be a bit more tedious. With SQL Server you can clone the database and then use the cloned version to do whatever you need to do to get it converted to MySQL, bring it down, whatever, while the original stays up. Cheers.
Why can't a 4GB database be brought down for a bit? And why would a 4GB database take 8 hours using SSIS? I commonly move terabytes around in less time than that. That is in an Oracle shop, but still...
Seeking a Faster way to update MySql database
I have a database with data that is read-only as far as the application using it is concerned. However, different groups of tables within the database need to be refreshed weekly or monthly (the organization generating the data provides entirely new flat files for this purpose). Most of the updates are small but some are large (more than 5 million rows with a large number of fields). I like to load the data into a test database and then just replace the entire database in production. So far I have been doing this by exporting the data using mysqldump and then importing it into production. The problem is that the import takes 6-8 hours and the system is unusable during that time. I would like to get the downtime as short as possible. I've tried all the tips I could find to speed up mysqldump, such as those listed here: http://vitobotta.com/smarter-faster-backups-restores-mysql-databases-with-mysqldump/#sthash.cQ521QnX.hpG7WdMH.dpbs. I know that many people recommend Percona's XtraBackup, but unfortunately I'm on a Windows 8 Server and Percona does not run on Windows. Other fast backup/restore options are too expensive (e.g., MySQL Enterprise). Since my test server and production server are both 64-bit Windows machines and are both running the same version of MySQL (5.6), I thought I could just zip up the database files and copy them over to swap out the whole database at once (all are InnoDB). However, that didn't work. I saw the tables in MySQL Workbench, but couldn't access them. I'd like to know if copying the database files is a viable option (I may have done it wrong) and, if it is not, what low-cost options are available to reduce my downtime?
How to dump data from Oracle to MySql [duplicate]
Replicating data from mySQL to Hbase using flume: how?
I have a large MySQL database with heavy load and would like to replicate the data in this database to HBase in order to do analytical work on it.
Edit: I want the data to replicate relatively quickly, and without any schema changes (no timestamped rows, etc.).
I've read that this can be done using Flume, with MySQL as a source, possibly the MySQL bin logs, and HBase as a sink, but I haven't found any detail (high or low level). What are the major tasks to make this work? Similar questions were asked and answered earlier but didn't really explain how or point to resources that would:
Flume to migrate data from MySQL to Hadoop
Continuous data migration from mysql to Hbase
You are better off using Sqoop for this, IMHO; it was developed for exactly this purpose. Flume was made for a rather different purpose, like aggregating log data, data generated from sensors, etc. See this for more details.
So far there are three options worth considering:
Sqoop: After the initial bulk import, it supports two types of incremental update imports: APPEND and LAST-MODIFIED. That being said, it won't give you real-time or even near-real-time replication. It's not because Sqoop can't run that fast; it's because you don't want to plug a Sqoop pipe into your MySQL server and pull data every 1 or 2 minutes.
Trigger: This is a quick-and-dirty solution: add triggers to the source RDBMS and update your HBase accordingly. This one gives you real-time satisfaction, but you have to mess with the source DB by adding triggers. It might be OK as a temporary solution, but long term it just won't do.
Flume: This one requires the most development effort. It doesn't need to touch the DB, nor does it add read traffic to the DB (it tails the transaction logs).
Personally I'd go for Flume: not only does it channel the data from the RDBMS to your HBase, but you can also do something with the data while it is streaming through your Flume pipe (e.g. transformation, notification, alerting, etc.).