MySQL to Oracle

I've googled this but can't get a straight answer. I have a MySQL database that I want to import into Oracle. Can I just use the MySQL dump?

Nope. You need to use some ETL (Extract, Transform, Load) tool.
Oracle SQL Developer has a built-in feature for migrating a MySQL database to Oracle.
Try this link on migrating MySQL to Oracle - http://forums.oracle.com/forums/thread.jspa?threadID=875987&tstart=0

If the dump is a SQL script, you will need to do a lot of search & replace to make that script work on Oracle.
Things that come to mind (a before/after sketch follows the list):
remove the dreaded backticks
remove all ENGINE=... options
remove all DEFAULT CHARSET=xxx options
remove all UNSIGNED options
convert all DATETIME types to DATE
replace BOOLEAN columns with e.g. an INTEGER or a CHAR(1) (Oracle does not support BOOLEAN)
convert all INT(x), SMALLINT, TINYINT data types to simply INTEGER
convert all MEDIUMTEXT, LONGTEXT data types to CLOB
convert all VARCHAR columns defined with more than 4000 bytes to CLOB
remove all SET ... commands
remove all USE commands
remove all ON UPDATE options for columns
rewrite all triggers
rewrite all procedures
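To make this concrete, here is a minimal before/after sketch (table and column names are hypothetical, purely for illustration):

-- MySQL dump fragment
CREATE TABLE `users` (
  `id` int(11) unsigned NOT NULL,
  `name` varchar(100) DEFAULT NULL,
  `created` datetime,
  `active` boolean,
  `bio` mediumtext,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- The same table after applying the list above, for Oracle
CREATE TABLE users (
  id INTEGER NOT NULL,
  name VARCHAR2(100) DEFAULT NULL,
  created DATE,
  active CHAR(1),
  bio CLOB,
  PRIMARY KEY (id)
);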

The answer depends on which MySQL features you use. If you don't use stored procedures, triggers, views, etc., chances are you will be able to use the MySQL export without major problems.
Take a look at:
mysqldump --compatible=oracle
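A minimal invocation might look like this (database and file names hypothetical; newer mysqldump releases may no longer accept oracle as a --compatible value):

mysqldump --compatible=oracle mydb > mydb_oracle.sql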
If you do use these features, you might want to try an automatic converter (Google offers some).
In any case, some knowledge of both syntaxes is required to debug the problems (there will almost certainly be some). Also remember to test everything thoroughly.

Related

Sqoop compatibility with TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT

For a project of mine, I would like to transfer multiple tables from a MySQL database into Hive using Sqoop. Because I have a few columns that use the MEDIUMTEXT datatype, I'd like to check the compatibility with someone who has experience, to prevent sudden surprises down the road.
According to the latest Sqoop user guide (1.4.6), there is no support for BLOB, CLOB, or LONGVARBINARY columns in direct mode.
Given that there is no mention of incompatibilities with "TEXT" datatypes, will I be able to import them from MySQL without problems?
In MySQL, TEXT is essentially the same as CLOB, so whatever limitations the user guide mentions for CLOB apply to the TEXT types as well.
Unlike typical datatypes, CLOB and TEXT need not store their data inline in the record; the contents can instead be stored separately, with only a pointer in the record. That is why the direct path does not work for special types like CLOB/TEXT and BLOB in most databases.
I finally got around to setting up my Hadoop cluster for my project. I am using Hadoop 2.6.3 with Hive 1.2.1 and Sqoop 1.4.6.
It turns out that there is no problem with importing TEXT datatypes from MySQL into Hive using Sqoop. You can even supply the --direct parameter, which makes use of the mysqldump tool for quicker transfers. In my project I had to import multiple tables containing two MEDIUMTEXT columns each. The tables were only about 2 GB each, so not that massive.
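For reference, a minimal invocation along the lines of what I used (connection string, credentials, and table name are hypothetical):

sqoop import --connect jdbc:mysql://dbhost/mydb --username myuser -P --table articles --hive-import --direct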
I hope this helps someone who is in the same situation I was in.

Function similar to to_char(datetime) that can be used in both Oracle and MySQL?

I want to generate an ANSI SQL script that runs on both Oracle and MySQL. The generated ANSI SQL works well, except for the errors from to_char(). Is there a similar function that can be used in both databases?
The two databases' date formatting facilities couldn't be more different. I think your best bet is to pick one of these:
Run an ALTER SESSION statement when you connect to Oracle to replicate the MySQL default date format (see the sketch below this list) and do all date formatting in your client app.
Write a custom wrapper function and use it in your queries. You have to fork function code and maintain two versions.
You still have DBMS-dependent code but it's isolated in your initialisation code (option #1) or your installation script (option #2).
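As a sketch of option #1, the following makes Oracle render DATE values in MySQL's default format (run it once per session after connecting):

ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';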
Perhaps there's a third option: tweak your database abstraction library to detect column types in result sets and convert dates to custom objects (e.g., DateTime if you use PHP, Date if you use JavaScript, etc.).
MySQL and Oracle use different syntax for converting a date to a string, so you will need different queries.
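For example, to render the same date as text (table and column names hypothetical):

-- Oracle
SELECT TO_CHAR(order_date, 'YYYY-MM-DD') FROM orders;
-- MySQL
SELECT DATE_FORMAT(order_date, '%Y-%m-%d') FROM orders;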

Create a data field with a GMT-timezone timestamp in MySQL?

I'm trying to convert a PostgreSQL SQL query to MySQL, using a translator.
this is the query in postgres:
comment_date_gmt timestamp without time zone DEFAULT timezone('gmt'::text, now()) NOT NULL,
it's converted to
comment_date_gmt timestamp DEFAULT timezone('gmt',
The unclosed parenthesis is a sign that everything isn't right. I'm trying to figure out what this query should look like. Any suggestions?
The only reliable SQL query dialect converter is the human brain.
Tools can be useful for the basics, like data type renaming, but lots of that sort of thing can be avoided by just writing the queries using standard types in the first place.
You'll have a very hard time converting a MySQL query that uses query variables to a PostgreSQL query, or converting a PostgreSQL (well, SQL-standard) recursive common table expression to something MySQL understands. The two have totally different stored procedure languages, different built-in functions, and all sorts of things. array_agg, unnest, etc ... most of that stuff would require translation to queries using MySQL variables where it's possible to do it at all. Then you've got window functions like row_number, lead, lag, and aggregates used as running windows like sum(blah) OVER (...). A generic converter would need to "understand" the query to actually do the job.
A specific answer for the named problem isn't really possible since you haven't identified the converter tool.
At a guess, try changing the PostgreSQL query to:
comment_date_gmt timestamp without time zone DEFAULT (current_timestamp AT TIME ZONE 'utc') NOT NULL,
which is the standard phrasing understood by PostgreSQL and other compliant databases.
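On the MySQL side, the closest equivalent is probably something like the following, since MySQL stores TIMESTAMP values in UTC internally (a sketch of the intent, not the output of any particular converter):

comment_date_gmt TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,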

Switching from MySQL to PostgreSQL for Ruby on Rails for the sake of Heroku

I'm trying to push a brand new Ruby on Rails app to Heroku. Currently, it sits on MySQL. It looks like Heroku doesn't really support MySQL and so we are considering using PostgreSQL, which they DO support.
How difficult should I expect this to be? What do I need to do to make this happen?
Again, please note that my databases as of right now (both development and production) are completely empty.
Common issues:
GROUP BY behavior. PostgreSQL has a rather strict GROUP BY. If you use a GROUP BY clause, then every column in your SELECT must either appear in your GROUP BY or be used in an aggregate function.
Data truncation. MySQL will quietly truncate a long string to fit inside a char(n) column unless your server is in strict mode; PostgreSQL will complain and make you truncate your string yourself.
Quoting is different: MySQL uses backticks for quoting identifiers, whereas PostgreSQL uses double quotes.
LIKE is case-insensitive in MySQL but not in PostgreSQL. This leads many MySQL users to use LIKE as a case-insensitive string equality operator.
(1) will be an issue if you use AR's group method in any of your queries or GROUP BY in any raw SQL. Do some searching for column "X" must appear in the GROUP BY clause or be used in an aggregate function and you'll see some examples and common solutions.
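For example (hypothetical table; MySQL historically accepted the first form, PostgreSQL rejects it):

SELECT customer_id, name, COUNT(*) FROM orders GROUP BY customer_id; -- error in PostgreSQL
SELECT customer_id, name, COUNT(*) FROM orders GROUP BY customer_id, name; -- portable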
(2) will be an issue if you use string columns anywhere in your application and your models aren't properly validating the length of all incoming string values. Note that creating a string column in Rails without specifying a limit actually creates a varchar(255) column, so there is an implicit :limit => 255 even though you didn't specify one. An alternative is to use t.text for your strings instead of t.string; this will let you work with arbitrarily large strings without penalty (for PostgreSQL at least). As Erwin notes below (and every other chance he gets), varchar(n) is a bit of an anachronism in the PostgreSQL world.
(3) shouldn't be a problem unless you have raw SQL in your code.
(4) will be an issue if you're using LIKE anywhere in your application. You can fix this one by changing a like b to lower(a) like lower(b) (or upper(a) like upper(b) if you like to shout) or a ilike b but be aware that PostgreSQL's ILIKE is non-standard.
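For example, a portable case-insensitive match (table and column names hypothetical):

SELECT * FROM users WHERE lower(name) LIKE lower('%Smith%');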
There are other differences that can cause trouble but those seem like the most common issues.
You'll have to review a few things to feel safe:
group calls.
Raw SQL (including any snippets in where calls).
String length validations in your models.
All uses of LIKE.
If you have no data to migrate, it should be as simple as telling your Gemfile to use the pg gem instead, running bundle install, and updating your database.yml file to point to your PostgreSQL databases. Then just run your migrations (rake db:migrate) and everything should work great.
Don't feel you have to migrate to Postgres - there are several MySQL Addon providers available on Heroku - http://addons.heroku.com/cleardb is the one I've had the most success with.
It should be simplicity itself: port the DDL from MySQL to PostgreSQL.
Does Heroku have any schema creation scripts? I'd depend on those if they were available.
MySQL and PostgreSQL are different (e.g. AUTO_INCREMENT columns in MySQL, sequences in PostgreSQL). But the port shouldn't be too hard. How many tables? Tens are doable.
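To illustrate the auto-increment difference (hypothetical table):

-- MySQL
CREATE TABLE posts (id INT AUTO_INCREMENT PRIMARY KEY, title VARCHAR(200));
-- PostgreSQL
CREATE TABLE posts (id SERIAL PRIMARY KEY, title VARCHAR(200));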

How to load column names, data from a text file into a MySQL table?

I have a dataset with a lot of columns that I want to import into a MySQL database, and I want to be able to create tables without specifying the column headers by hand. Rather, I want to supply a file containing the column labels to (presumably) the MySQL CREATE TABLE command. I'm using the standard MySQL Query Browser tools in Ubuntu, but I didn't see an option for this in the create table dialog, nor could I figure out how to write a query to do this from the CREATE TABLE documentation page. But there must be a way...
A CREATE TABLE statement includes more than just column names:
Table name*
Column names*
Column data types*
Column constraints, like NOT NULL
Column options, like DEFAULT, character set
Table constraints, like PRIMARY KEY* and FOREIGN KEY
Indexes
Table options, like storage engine, default character set
* mandatory
You can't get all of this from just a list of column names. You should write the CREATE TABLE statement yourself.
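For instance, a minimal hand-written statement covering the mandatory parts plus a few of the common extras (table and column names hypothetical):

CREATE TABLE measurements (
  id INT NOT NULL AUTO_INCREMENT,
  label VARCHAR(100) NOT NULL,
  value DECIMAL(10,2) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;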
Re your comment: many software development frameworks support ways to declare tables without using SQL DDL. E.g. Hibernate uses XML files; YAML is supported by Rails ActiveRecord, PHP Doctrine, and Perl's SQLFairy. There are probably other tools that use other formats such as JSON, but I don't know one offhand.
But in the end, all these "simplified" interfaces are no less complex to learn than SQL, while failing to represent exactly what SQL does. See also The Law of Leaky Abstractions.
Check out SQLFairy, because that tool might already convert from files to SQL in a way that can help you. And FWIW, MySQL Query Browser (or, under its current name, MySQL Workbench) can read SQL files, so you probably don't have to copy & paste manually.