I want to migrate the data source of my Java web application from Oracle to either MySQL or PostgreSQL. The Oracle database is huge and contains large objects as well. Please suggest which would be the better choice, and explain with some issues, examples and points.
PostgreSQL is much more similar to Oracle than MySQL is. Significant differences still exist, but you'd probably find it to be an easier migration, with more similar behaviour from the database.
Large objects such as BLOBs might give you some problems, as Oracle has a vendor-specific implementation that is often used in Java code. I.e., check your import statements wherever BLOB is used. For some more see here: http://www-css.fnal.gov/dsg/external/freeware/mysql-vs-pgsql.html
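For instance, code written against Oracle's driver often imports the vendor class oracle.sql.BLOB directly and will break on another database, while the standard java.sql.Blob interface stays portable across Oracle, PostgreSQL and MySQL. A minimal sketch (the documents table and payload column are made up for illustration):

```java
import java.sql.Blob;               // portable JDBC type, not oracle.sql.BLOB
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobCheck {
    // Portable: behaves the same on Oracle, PostgreSQL and MySQL.
    static byte[] readBlob(Connection conn, long id) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT payload FROM documents WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                Blob blob = rs.getBlob("payload");  // java.sql.Blob interface
                return blob.getBytes(1, (int) blob.length());  // positions are 1-based
            }
        }
    }
}
```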
Related
We are currently using an Oracle 10g database as the backend of our application. We need to migrate the entire Oracle database schema to a MySQL database, including all existing tables, views, procedures, triggers, sequences, etc.
Can anyone kindly guide me through the steps of the migration, without losing any schema definitions, keys, constraints, etc.?
Also, I came to know that MySQL does not support sequences. In that case, how can we convert the sequences that are present in the Oracle database?
Please don't just mention a tool name; I found a few tools online, but they are really lengthy and cumbersome processes to follow. Kindly describe the steps, so that it's easy to understand.
I used the SQL Developer IDE earlier, but it supports migration in the reverse direction, that is, from MySQL to Oracle, not the one I need. Hence, I could not use it.
There is Oracle Doc ID 1477151.1 for this case.
Though you asked not to mention any tool names, in that document Oracle advises using the MySQL Migration Wizard, and it provides some script examples for manual migration in case automation doesn't work.
Check those out. I hope that'll help.
UPD: Again, I'm aware you asked not to mention any tool, but here's another excerpt from that doc where even Oracle clearly says you have to use a third-party tool:
Migration of Stored Procedures, Functions, Packages, Triggers, Views, Sequences must be performed using third party tools and needs manual effort. This document highlights method to perform data migration.
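On the sequences question specifically: since MySQL has no sequences, the usual manual conversion is to turn each sequence-fed primary key into an AUTO_INCREMENT column and seed the counter from the sequence's current value. A rough sketch over JDBC (connection details, the ORDERS_SEQ sequence and the orders table are illustrative; note that LAST_NUMBER is only approximate when the sequence uses caching):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SequenceToAutoIncrement {
    public static void main(String[] args) throws Exception {
        try (Connection ora = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//orahost:1521/ORCL", "scott", "tiger");
             Connection my = DriverManager.getConnection(
                 "jdbc:mysql://myhost:3306/appdb", "app", "secret");
             Statement oraStmt = ora.createStatement();
             Statement myStmt = my.createStatement()) {

            // Snapshot the value the Oracle sequence would hand out next.
            ResultSet rs = oraStmt.executeQuery(
                "SELECT last_number FROM user_sequences WHERE sequence_name = 'ORDERS_SEQ'");
            rs.next();
            long next = rs.getLong(1);

            // Make the (already migrated) MySQL primary key auto-incrementing
            // and seed its counter so new rows continue where Oracle left off.
            myStmt.executeUpdate("ALTER TABLE orders MODIFY id BIGINT NOT NULL AUTO_INCREMENT");
            myStmt.executeUpdate("ALTER TABLE orders AUTO_INCREMENT = " + next);
        }
    }
}
```

Sequences used for things other than primary keys don't map this cleanly and typically need a counter table or application-side logic instead.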
There are a host of third party tools, some of which are open source. For example:
http://www.sqlines.com/oracle-to-mysql
http://kettle.pentaho.com/
http://www.convert-in.com/ora2sql.htm
http://www.ispirer.com/products/oracle-to-mysql-migration
We are in the process of migrating a fairly large Oracle database (approx. 200 tables) to MariaDB. We used CA ERwin to help create the MariaDB schema objects, and are now trying to figure out the best way to migrate the data. For example, if I use SQL Developer's export utility, the resulting Insert statements are created with Oracle syntax, which can differ quite a bit from MariaDB's syntax. Rather than handling every set of Inserts on an individual basis, I'm hoping somebody has been through this before and can suggest an alternative.
We looked at the MySQL Workbench Migration Wizard, but it doesn't directly support migration from Oracle to MariaDB/MySQL (no surprise as Oracle owns MySQL).
Has anybody come across a utility that will make this task easier and save time? Thanks.
Has anyone been successful in creating DDL for Amazon Redshift using erwin? If not, does anyone know of a way to convert, say, a MySQL DDL from erwin to Amazon Redshift-compliant DDL?
I understand that Redshift is based on PostgreSQL 8.0.2. However, there are numerous PostgreSQL features that are not compatible with Redshift. So, if I use a tool to convert MySQL DDL to PostgreSQL DDL and try to execute it against Redshift, I always run into issues.
I would appreciate any help.
One approach that works (with limited features) is to forward engineer the erwin model into an ODBC 3.x compliant schema, i.e.:
Select the Target Database (Actions ---> Target Database) as ODBC/Generic, with the version set to 3.0.
This works because ODBC/Generic SQL can be executed on Redshift without any changes.
NOTE: Features like
Identity
Encode
may need manipulation of the FET template or more.
However, just setting the target database to ODBC may suffice in general.
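To illustrate why the generic output works: it is plain ANSI-style DDL that Redshift's PostgreSQL-derived parser accepts as-is, while Redshift-only options such as IDENTITY, ENCODE, DISTKEY or SORTKEY have to be added by hand afterwards. A hedged sketch (cluster URL, credentials and the table are placeholders, and the Redshift JDBC driver is assumed to be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RunGenericDdlOnRedshift {
    public static void main(String[] args) throws Exception {
        // Redshift speaks the PostgreSQL wire protocol; the Amazon Redshift
        // JDBC driver (jdbc:redshift:) or a plain PostgreSQL driver works.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:redshift://examplecluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
                 "awsuser", "password");
             Statement stmt = conn.createStatement()) {

            // Generic ODBC 3.0 DDL as forward engineered from the model:
            // no vendor extensions, so Redshift accepts it unchanged.
            stmt.executeUpdate(
                "CREATE TABLE customer (" +
                "  customer_id INTEGER NOT NULL," +
                "  name        VARCHAR(100)," +
                "  PRIMARY KEY (customer_id))");

            // Redshift-specific tuning (IDENTITY columns, ENCODE, DISTKEY,
            // SORTKEY) is not part of the generic output and must be added
            // manually, e.g. in a hand-edited script.
        }
    }
}
```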
Update:
this link suggests that newer versions of erwin Data Modeler (DM) may have further support for Redshift.
I found a lot of questions on Stack Overflow about converting MySQL to MSSQL, but I would like to convert the other way:
from MS SQL Server to MySQL.
Is there a (free) tool to do that without connecting to the databases?
I have an SQL dump and I want to convert it by pasting that code into a tool.
Thanks.
You're probably best off doing this yourself to ensure everything is correct, rather than relying on a third-party tool (with the additional benefit of understanding the differences between the two dialects). However, you could use this SQL to MySQL tool:
http://download.cnet.com/SQL-To-MySQL-Converter/3000-10254_4-75693763.html
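If you do go the do-it-yourself route, many of the mechanical differences in a dump can be handled with plain text substitutions before worrying about the semantics. A very rough sketch (file names are placeholders; it covers only a few common T-SQL-isms and will need extending for a real dump):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TsqlToMysqlDump {
    public static void main(String[] args) throws IOException {
        String sql = new String(Files.readAllBytes(Paths.get("mssql-dump.sql")));

        String converted = sql
            // [bracketed] identifiers -> `backticked` identifiers
            .replaceAll("\\[([^\\]]+)\\]", "`$1`")
            // common type and function renames
            .replaceAll("(?i)\\bNVARCHAR\\b", "VARCHAR")
            .replaceAll("(?i)\\bDATETIME2\\b", "DATETIME")
            .replaceAll("(?i)\\bGETDATE\\(\\)", "NOW()")
            // IDENTITY(1,1) -> AUTO_INCREMENT (only the simple seed/step case)
            .replaceAll("(?i)IDENTITY\\s*\\(\\s*1\\s*,\\s*1\\s*\\)", "AUTO_INCREMENT")
            // T-SQL batch separators become plain statement terminators
            .replaceAll("(?im)^GO\\s*$", ";");

        Files.write(Paths.get("mysql-dump.sql"), converted.getBytes());
    }
}
```

Anything procedural (stored procedures, TOP vs LIMIT, MERGE statements) still needs manual rewriting; regexes only get you through the bulk data and simple DDL.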
Are there any ways to import data from databases such as MS SQL or MySQL into in-memory databases like HSQLDB, H2, etc.?
H2 supports a special database URL that initializes the database from a SQL script file:
"jdbc:h2:mem:test;INIT=RUNSCRIPT FROM '~/create.sql'"
HSQLDB and Apache Derby don't support such a feature as far as I know.
I think you need to:
query the data out from MS SQL
import the data into the in-memory DB with its API,
using either SQL expressions or DB-related APIs (see the sketch below)
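A hedged sketch of that two-step copy over plain JDBC, batching the inserts in chunks (connection URLs, credentials and the people table are placeholders; the SQL Server and H2 drivers are assumed to be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CopyToInMemory {
    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection(
                 "jdbc:sqlserver://dbhost:1433;databaseName=app", "user", "pass");
             Connection dst = DriverManager.getConnection(
                 "jdbc:h2:mem:app;DB_CLOSE_DELAY=-1")) {

            // Target table must exist first; DDL kept deliberately simple.
            try (Statement ddl = dst.createStatement()) {
                ddl.execute("CREATE TABLE people (id INT PRIMARY KEY, name VARCHAR(100))");
            }

            // Step 1: query the data out; step 2: insert it via the in-memory
            // DB's JDBC API, flushing in batches rather than row by row.
            try (Statement q = src.createStatement();
                 ResultSet rs = q.executeQuery("SELECT id, name FROM people");
                 PreparedStatement ins = dst.prepareStatement(
                     "INSERT INTO people (id, name) VALUES (?, ?)")) {
                int n = 0;
                while (rs.next()) {
                    ins.setInt(1, rs.getInt("id"));
                    ins.setString(2, rs.getString("name"));
                    ins.addBatch();
                    if (++n % 500 == 0) ins.executeBatch();  // flush a chunk
                }
                ins.executeBatch();  // flush the remainder
            }
        }
    }
}
```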
In Hibernate: adding import.sql to the classpath works great; hbm2ddl checks if the file exists and executes it. The only detail is that every SQL command must be on one row, otherwise it will fail to execute.
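A minimal sketch of wiring that up against an in-memory H2 database (property values are illustrative; note that import.sql is only executed when hbm2ddl.auto is create or create-drop):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class InMemoryHibernateSetup {
    public static SessionFactory build() {
        // import.sql at the classpath root is executed automatically after
        // schema creation, one statement per line.
        return new Configuration()
            .setProperty("hibernate.connection.driver_class", "org.h2.Driver")
            .setProperty("hibernate.connection.url", "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
            .setProperty("hibernate.connection.username", "sa")
            .setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect")
            .setProperty("hibernate.hbm2ddl.auto", "create-drop")
            .buildSessionFactory();
    }
}
```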
You could dump the data as SQL INSERT statements, then read it back.
You could read it into a temporary object (like a struct), then write it back to the internal DB.
Look at the free "universal database converter" http://eva-3-universal-database-converter-udc.optadat-com.qarchive.org/ -- it does claim to support MySQL, MS-SQL, and HSQLDB, among others.
It really depends on what approach you have in mind.
Is there a tool that could do it automatically, without programming? Maybe.
Do you want to develop it yourself? Then find out whether your favorite language supports both database engines (standard and in-memory), and if it does, just write a script that does it.
Process everything in chunks (fetch n rows at a time, then insert them; repeat). How big should the chunk size be? It's up to you: try different sizes (say 100, 500, 1k, etc.), see which one performs better on your hardware, and fine-tune to the sweet spot.
If, on the other hand, your favorite language doesn't support both of them, try using something that does.
You can use DbUnit to dump the database to XML files and import it back into another RDBMS.
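A hedged sketch of that round trip using DbUnit's flat XML format (connection details are placeholders, and the target schema is assumed to exist already):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class DbUnitRoundTrip {
    public static void main(String[] args) throws Exception {
        // Export: dump every table of the source database to flat XML.
        IDatabaseConnection src = new DatabaseConnection(
            DriverManager.getConnection("jdbc:mysql://dbhost:3306/app", "user", "pass"));
        IDataSet full = src.createDataSet();
        FlatXmlDataSet.write(full, new FileOutputStream("dump.xml"));

        // Import: clean-insert the same data into an in-memory H2 database.
        IDatabaseConnection dst = new DatabaseConnection(
            DriverManager.getConnection("jdbc:h2:mem:app;DB_CLOSE_DELAY=-1"));
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dump.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(dst, dataSet);
    }
}
```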
Recent versions of HSQLDB allow you to open a CSV (comma-separated values) or other delimiter-separated data file as a TEXT TABLE, even with mem: databases; the data can then be copied to other tables.
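A rough sketch of that flow (the CSV file, its columns and the target table are made up; depending on where the file lives, HSQLDB may also require the textdb.allow_full_path connection property):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvIntoHsqldb {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hsqldb:mem:app", "SA", "");
             Statement stmt = conn.createStatement()) {

            // Declare a text table whose rows live in a CSV file...
            stmt.execute("CREATE TEXT TABLE people_csv (id INT, name VARCHAR(100))");
            stmt.execute("SET TABLE people_csv SOURCE 'people.csv'");

            // ...then copy the rows into an ordinary in-memory table.
            stmt.execute("CREATE TABLE people (id INT PRIMARY KEY, name VARCHAR(100))");
            stmt.execute("INSERT INTO people SELECT id, name FROM people_csv");
        }
    }
}
```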
As others have pointed out, there are also capable and well maintained third party tools for this purpose.