Converting PostgreSQL database to MySQL

I've seen questions for doing the reverse, but I have an 800MB PostgreSQL database that needs to be converted to MySQL. I'm assuming this is possible (all things are possible!), and I'd like to know the most efficient way of going about it and any common mistakes to look out for. I have next to no experience with Postgres. Any links to guides on this would be helpful too! Thanks.

One piece of advice is to start with a current version of MySQL, otherwise you will not have sub-queries, stored procedures or views. The other obvious difference is auto-increment fields (a short sketch follows below). Check out:
pg2mysql
/Allan
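To illustrate the auto-increment point, here is a minimal sketch (table and column names are made up): a PostgreSQL SERIAL column becomes an AUTO_INCREMENT column in MySQL.
-- PostgreSQL
CREATE TABLE items (id SERIAL PRIMARY KEY, name TEXT);
-- MySQL equivalent
CREATE TABLE items (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name TEXT);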

You should not convert to a new database engine based solely on the fact that you do not know the old one. These databases are very different - MySQL is about speed and simplicity, Postgres is about robustness and concurrency. It will be easier for you to learn Postgres; it is not that hard.

pg_dump can do the dump as INSERT statements along with CREATE TABLE statements. That should get you close. The bigger question, though, is why you want to switch. You may do a lot of work and not get any real gain from it.
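For example (the database and output file names here are hypothetical), a dump made of plain INSERT statements, which is easier to replay against MySQL, can be produced with:
pg_dump --inserts --no-owner --no-privileges mydb > mydb.sql
The output will still need some manual fixing (SERIAL columns, boolean literals, and other type differences) before MySQL will accept it.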

querying for data from two tables in different databases

Initially I thought this would be a stupid question, but now I am inspired by the following question.
Background: I have a lot of data in MySQL, but MySQL's spatial support is terrible. Ideally I would like to migrate everything to Postgres, but converting from MySQL to Postgres is a massive ball of hurt (I've already wasted close to a week struggling with it). Now I am thinking: if only I could keep just the spatial portion in Pg, do the spatial queries in Pg, then use those row ids to query the non-spatial data from MySQL.
I am a Perl DBI person. My question is thus -- can I create a single database handle that actually allows querying by JOINing a table from Pg with a table from MySQL, assuming they have a common id column?
No, you will need to query both separately and combine the data at the application layer. See a more informed answer here:
How to create linked server MySQL
No, I don't think you could do it that way. You would have to query the data separately and combine the results in your code. I believe no real RDBMS can do what you want.

How many of you have gone from MySQL to Postgresql? Was it worth it?

I'm thinking about moving from MySQL to Postgres for Rails development and I just want to hear what other developers that made the move have to say about it.
I'm looking for personal experiences, not a MySQL vs Postgres shootout, just the pros and cons that you yourself have arrived at. Stuff that folks might not necessarily think of.
Feel free to explain why you moved in the first place as well.
I made the switch and frankly couldn't be happier. While Postgres lacks a few things MySQL has (INSERT IGNORE, REPLACE, upsert stuff, and LOAD DATA INFILE for me mainly), the features it does have more than make up for it. Its stored procedures are much more powerful and it's far easier to write complex functions and aggregates in Postgres.
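To illustrate the upsert gap (table and column names invented): MySQL offers forms like these,
INSERT IGNORE INTO users (id, email) VALUES (1, 'a@example.com');
REPLACE INTO users (id, email) VALUES (1, 'a@example.com');
whereas at the time Postgres had no direct equivalent; PostgreSQL 9.5 later added one:
INSERT INTO users (id, email) VALUES (1, 'a@example.com')
ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email;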
Performance-wise, if you're comparing to InnoDB (which is only fair because of MVCC), then it feels at least as fast, possibly faster - we weren't able to do some real measurements here due to some constraints, but there certainly hasn't been a performance issue. The complex queries with several joins are certainly faster, MUCH faster.
I find you're more likely to get the correct answer to your issue from the Postgres community. Everybody and their grandmother has 50 different ways to do something in MySQL. With Postgres, hit up the mailing list and you're likely to get lots of very very good help.
The syntax differences and the like are fairly trivial.
Overall, Postgres feels a lot more "grown-up" to me. I used MySQL for years and I now go out of my way to avoid it.
Oh dear, this could end in tears.
Speaking from personal experience only, we moved from MySQL solely because our production system (Heroku) runs PostgreSQL. We had custom-built-for-MySQL queries which were breaking on PostgreSQL. So I guess the moral of the story here is to run the same DBMS everywhere, otherwise you may run into problems.
We also sometimes need to insert records über-quickly. For this, we use PostgreSQL's built-in COPY, used similarly to this in our app:
query = "COPY users(email) FROM STDIN WITH CSV"
values = users.map! do |user|
# Be wary of the types of the objects here, they matter.
# For instance if you set the id to a string it will error.
%Q{#{user["email"]}}
end.join("\n")
raw_connection.exec(query)
raw_connection.put_copy_data(values)
raw_connection.put_copy_end
This inserts ~500,000 records into the database in just under two minutes. It takes roughly the same time even if we add more fields.
Another couple of nice things PostgreSQL has over MySQL:
Full text searching
Geographical querying (PostGIS)
Regular expression matching: email ~ 'hotmail|gmail' matches, and email !~ 'hotmail|gmail' negates the match. The | indicates an "or".
In summary: PostgreSQL is like bricks & mortar, where MySQL is Lego. Go with whatever "feels" right to you. This is only my personal opinion.
We switched to PostgreSQL for several reasons in early 2007 (or was it the year before?). The main reasons were:
SQL support - PostgreSQL is much better for complex SQL-queries, for example with lots of joins and aggregates
MySQL's stored procedures didn't feel very mature
MySQL license changes - dual licensed, open source and commercial, a split that made me wonder about the future. With PG's BSD license you can do whatever you want.
Faulty behaviour - when counting rows, MySQL sometimes returned an approximate value rather than the actual count.
Constraints behaved a bit oddly, silently inserting truncated/adapted values (see the sketch after this list). See http://use.perl.org/~Smylers/journal/34246
The administrative interface PgAdminIII felt more stable and mature than the MySQL counterpart
PostgreSQL is very solid and crash safe in case of an outage
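As a rough illustration of the constraint point above (table and column names invented), with the non-strict sql_mode MySQL shipped with back then:
CREATE TABLE t (name VARCHAR(5));
INSERT INTO t (name) VALUES ('truncated-value');  -- succeeds, stores 'trunc', only a warning
SET sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO t (name) VALUES ('truncated-value');  -- ERROR 1406: Data too long for column 'name'
PostgreSQL rejects the oversized value outright.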
// John
Haven't made the switch myself, but I got bitten a few times by MySQL's lack of transactional schema changes, which Postgres apparently supports.
This would solve those nasty problems you get when you move from your dev environment with sqlite to your MySQL server and realise your migrations screwed up and were left half-done! (No I didn't do this on a production server but it did make a mess of our shared testing server!)
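A minimal sketch of what transactional DDL buys you in Postgres (table names are made up):
BEGIN;
CREATE TABLE widgets (id SERIAL PRIMARY KEY);
ALTER TABLE widgets ADD COLUMN name TEXT;
ROLLBACK;  -- in PostgreSQL nothing persists; in MySQL most DDL statements force an implicit commit, so a failed migration is left half-applied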

Is there some kind of "strict performance mode" for MySQL?

I'd like to setup one instance of MySQL to flat-out reject certain types of queries. For instance, any JOINs not using an index should just fail and die and show up on the application stack trace, instead of running slow and showing up on the slow_query_log with no easy way to tie it back to the actual test case that caused it.
Also, I'd like to disallow "*" (as in "SELECT * FROM ...") and have that throw essentially a syntax error. Anything which is questionable or dangerous from a MySQL performance perspective should just cause an error.
Is this possible? Other than hacking up MySQL internals... is there an easy way?
If you really want to control what users/programmers do via SQL, you have to put a layer between MySQL and your code that restricts access, like an ORM that only allows for certain tables to be accessed, and only certain queries. You can then also check to make sure the tables have indexes, etc.
You won't be able to know for sure whether a query uses an index, though. That's decided by the query optimizer layer in the database, and the logic can get quite complex.
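If you only need to spot-check individual queries rather than enforce anything, EXPLAIN at least shows you what the optimizer decided (table names here are hypothetical):
EXPLAIN SELECT u.id, o.total
FROM users u JOIN orders o ON o.user_id = u.id
WHERE u.email = 'a@example.com';
-- the "key" column shows which index (if any) was chosen; type = ALL means a full table scan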
Impossible.
What you could do to make things work better is create views optimized by you and give users access only to those views. Then you're sure the relevant SELECTs will use indexes.
But they can still destroy performance: one crazy JOIN across a few views and performance is gone.
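A minimal sketch of that approach, with invented names (and assuming the report_user account already exists):
CREATE VIEW active_users AS SELECT id, email FROM users WHERE active = 1;
GRANT SELECT ON mydb.active_users TO 'report_user'@'%';
-- no grant on the underlying users table, so only this pre-shaped query is available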
As far as I'm aware there's nothing baked into MySQL that provides this functionality, but any answer of "Impossible", or similar, is incorrect. If you really want to do this then you could always download the source and add the functionality yourself, unfortunately this would certainly class as "hacking up the MySQL internals".

MySQL to PostgreSQL

We are trying to switch our application from talking to MySQL to Postgres. I wish we had some kind of subroutine (in VB6) where I could input the MySQL query and get back a string containing the equivalent Postgres query.
Does anybody have a solution, or is anybody interested in working with us on that?
I would consult the "Comparison of Different SQL Implementations;" it's a most useful reference when converting queries from one RDBMS to another. WikiBooks also has a page entitled "Converting MySQL to PostgreSQL." It has a short list of the big differences between the two.
I don't know of any (free/open source) utility to translate queries, but unless you have really big, complicated queries, you shouldn't have much difficulty translating them (and, if you do have big, complicated queries, an automated tool probably won't help).
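As a small illustration of the sort of mechanical differences you will hit (invented table and column names):
-- MySQL flavour
SELECT `name`, IFNULL(score, 0) FROM `results` ORDER BY score DESC LIMIT 10, 5;
-- PostgreSQL equivalent
SELECT "name", COALESCE(score, 0) FROM "results" ORDER BY score DESC LIMIT 5 OFFSET 10;
Identifier quoting, NULL-handling functions and LIMIT/OFFSET syntax account for a lot of the routine edits.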
I don't believe that there's any 'silver bullet' tool that would convert all of your queries from being MySQL to Postgres compatible.
What you do need is:
a reference on the differences between the two RDBMSs (see the answer by James McNellis above)
a good test plan for your application that will put it through the paces to ensure that your converted database backend works
a good reason to go through all this trouble; performance? management edict? etc.

Coming from MySQL, Going to Oracle: the Pitfalls

All my development life I have only worked with MySQL for extended periods of time, and for a client we now need to work with an Oracle database for some performance testing and tuning.
Any obvious pitfalls in moving from working with MySQL to Oracle I should watch out for?
The things I discovered so far:
There is only one database
What MySQL calls a database is a Schema in Oracle
A user and a schema are almost the same thing (I'm still unclear on the differences here)
There is no auto-increment. Instead you need to create your own sequence (see the sketch after this list)
Inserting multiple rows in a single statement through multiple literal value tuples is not possible.
Numeric formats are localized, which can cause headaches when importing from CSV files.
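A minimal sketch of the sequence and multi-row insert points above (all names invented):
CREATE SEQUENCE users_seq;
INSERT INTO users (id, email) VALUES (users_seq.NEXTVAL, 'a@example.com');
-- one workaround for multi-row inserts is INSERT ALL (literal ids here to keep it simple)
INSERT ALL
  INTO users (id, email) VALUES (1, 'a@example.com')
  INTO users (id, email) VALUES (2, 'b@example.com')
SELECT * FROM dual;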
Other tips would be greatly appreciated. Any good resources/documented past experiences with the transition would also be welcome. Note that we're not actually migrating a database from one to the other, it is more about the adjustments I'd have to make personally in my way of thinking etc.
See these resources:
Migrating MySQL to Oracle Part I
Migrating MySQL to Oracle Part 2
A great tool
Oracle Migration Workbench
Bye.
If you insert or update a string longer than the length of a varchar2 column, Oracle will throw an exception, whereas MySQL will silently truncate it. Better double-check that your code doesn't (even inadvertently) depend on this behavior.
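A small sketch of the difference (assuming a column declared VARCHAR2(5) in Oracle and VARCHAR(5) in MySQL; names are invented):
INSERT INTO t (name) VALUES ('much-too-long');
-- Oracle: ORA-12899: value too large for column (actual: 13, maximum: 5)
-- MySQL (with the default non-strict sql_mode of the time): stores 'much-' and only raises a warning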