All my development life I have worked only with MySQL, and for a client we now need to work with an Oracle database for some performance testing and tuning.
Any obvious pitfalls in moving from working with MySQL to Oracle I should watch out for?
The things I discovered so far:
There is only one database per instance
What MySQL calls a database is a Schema in Oracle
A user and a schema are almost the same thing (I'm still unclear on the differences here)
There is no auto-increment; instead you need to create your own sequence (see the sketch after this list)
Inserting multiple rows in a single statement through multiple literal VALUES tuples is not possible (also covered in the sketch below)
Numeric formats are localized, which can cause headaches when importing from CSV files.
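For instance, the sequence and multi-row insert points look roughly like this in Oracle (a minimal sketch, using a hypothetical people(id, name) table):

CREATE SEQUENCE people_seq;

-- MySQL's AUTO_INCREMENT becomes an explicit NEXTVAL
-- (or a BEFORE INSERT trigger that assigns it):
INSERT INTO people (id, name) VALUES (people_seq.NEXTVAL, 'Alice');

-- MySQL's multi-row "VALUES (...), (...)" form is not accepted;
-- one workaround is INSERT ... SELECT over UNION ALL:
INSERT INTO people (id, name)
SELECT people_seq.NEXTVAL, t.name
FROM (SELECT 'Bob' AS name FROM dual
      UNION ALL
      SELECT 'Carol' FROM dual) t;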
Other tips would be greatly appreciated. Any good resources/documented past experiences with the transition would also be welcome. Note that we're not actually migrating a database from one to the other; it's more about the adjustments I'd have to make personally in my way of thinking, etc.
See these resources:
Migrating MySQL to Oracle Part I
Migrating MySQL to Oracle Part 2
A great tool
Oracle Migration Workbench
Bye.
If you insert or update a string longer than the length of a VARCHAR2 column, Oracle will throw an exception, whereas MySQL will silently truncate it. Better double-check that your code doesn't (even inadvertently) depend on this behavior.
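A quick illustration (hypothetical table; note that MySQL in strict mode, the default since 5.7, raises an error as well):

CREATE TABLE t (name VARCHAR2(5));

INSERT INTO t (name) VALUES ('abcdefgh');
-- Oracle: ORA-12899: value too large for column
-- MySQL (traditional non-strict mode): stores 'abcde' with only a warning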
Related
I'm currently using a MySQL DB to pull/run queries in conjunction with Tableau. Given the amount of data, the queries are taking hours to run. I'm thinking of switching to PostgreSQL but am new to it. Would this be a good idea, or can I optimize MySQL for my needs? I will also be adding various data sources as we grow.
It's hard to answer definitively without knowing your schema, your indexes, and the queries Tableau sends.
MySQL (and MariaDB) are excellent databases for certain use cases. Postgres is excellent for most of those use cases, and also others. [risk of generalizing alert]: Postgres can utilize complex indexes better, and also can be finer tuned.
Your statement "Based on the amount of data" suggests indexes are not aligned with the info you want to pull. I know from experience, an index that supports my data pulls makes queries run like a hot knife through butter, no matter what db is used.
Eight times out of ten, either MySQL or Postgres will suffice. This Tableau page suggests a conversation with your DBA would help you.
If you are your own DBA as is often the case, I'd go with Postgres.
I have a bunch of data from a scientific experiment stored in a MySQL database, but I want to use MongoDB to take advantage of its map/reduce functionality to power some web charts. What is the best way to have new writes to MySQL replicate into Mongo? Some solution where I inspect the binary MySQL log and update accordingly, just like standard MySQL replication?
Thanks!
Alex
MySQL and MongoDB use very different data and query models, so you can't transfer data directly.
Alas, moving data between the two must be done manually, and doing that efficiently depends very much on your data. E.g., you could transfer each table to a separate collection (roughly the MongoDB equivalent of a table), mapping each table's unique attribute to the _id attribute. Alternatively, you can make the _id tablename+unique_id.
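For example, the second scheme could be produced on the MySQL side with a projection like this (hypothetical users table):

SELECT CONCAT('users:', id) AS _id, email, created_at
FROM users;
-- each row becomes one document, with an _id that stays unique
-- even if several tables land in the same collection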
Basically, as document databases are essentially free-form, you are free to invent your own schemes ad infinitum (as long as the _id attributes are unique within the collection).
Tungsten Replicator is a data replication engine for MySQL.
Using heterogeneous replication, you may be able to set up MySQL to MongoDB replication.
I am not familiar with MongoDB, but my quick look shows it is incompatible with MySQL, so unless someone has written something to import from MySQL, you are out of luck.
You could write your own import function.
Assuming your MySQL tables use an incrementing unique 'id' field, you could track the last row copied from MySQL and send newer rows to MongoDB as they appear.
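A sketch of that polling approach (hypothetical measurements table; the bookmark variable is whatever your copy script persists):

SELECT *
FROM measurements
WHERE id > @last_copied_id   -- user variable holding the last id sent to MongoDB
ORDER BY id ASC
LIMIT 1000;                  -- copy in batches, then advance the bookmark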
Updates and deletions would be much more difficult to deal with. If this is important, then inserting the data into both stores at the source is probably the best bet.
Do you need to insert the data into MySQL at all? Could you do it all in MongoDB and save all the trouble?
DC
I'm thinking about moving from MySQL to Postgres for Rails development and I just want to hear what other developers that made the move have to say about it.
I'm looking for personal experiences, not a MySQL vs. Postgres shootout, just the pros and cons that you yourself have arrived at. Stuff that folks might not necessarily think of.
Feel free to explain why you moved in the first place as well.
I made the switch and frankly couldn't be happier. While Postgres lacks a few things MySQL has (INSERT IGNORE, REPLACE, upsert stuff, and LOAD DATA INFILE for me mainly), the features it does have MORE than make up for it. Its stored procedures are so much more powerful, and it's far easier to write complex functions and aggregates in Postgres.
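For what it's worth, those MySQL idioms map roughly like this (hypothetical users table; note that PostgreSQL later gained a native upsert in 9.5 via ON CONFLICT):

-- MySQL:
INSERT IGNORE INTO users (id, email) VALUES (1, 'a@example.com');

-- Postgres 9.5+:
INSERT INTO users (id, email) VALUES (1, 'a@example.com')
ON CONFLICT (id) DO NOTHING;

-- MySQL's REPLACE (which really deletes and re-inserts) is close to:
INSERT INTO users (id, email) VALUES (1, 'a@example.com')
ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email;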
Performance-wise, if you're comparing to InnoDB (which is only fair because of MVCC), then it feels at least as fast, possibly faster - we weren't able to do real measurements here due to some constraints, but there certainly hasn't been a performance issue. Complex queries with several joins are certainly faster, MUCH faster.
I find you're more likely to get the correct answer to your issue from the Postgres community. Everybody and their grandmother has 50 different ways to do something in MySQL. With Postgres, hit up the mailing list and you're likely to get lots of very very good help.
The syntax differences and the like are fairly trivial.
Overall, Postgres feels a lot more "grown-up" to me. I used MySQL for years and I now go out of my way to avoid it.
Oh dear, this could end in tears.
Speaking from personal experience only, we moved from MySQL solely because our production system (Heroku) is running PostgreSQL. We had custom-built-for-MySQL queries which were breaking on PostgreSQL. So I guess the moral of the story here is to run the same DBMS everywhere; otherwise you may run into problems.
We also sometimes need to insert records über-quick. For this, we use PostgreSQL's built-in COPY, used something like this in our app:
query = "COPY users(email) FROM STDIN WITH CSV"
values = users.map! do |user|
# Be wary of the types of the objects here, they matter.
# For instance if you set the id to a string it will error.
%Q{#{user["email"]}}
end.join("\n")
raw_connection.exec(query)
raw_connection.put_copy_data(values)
raw_connection.put_copy_end
This inserts ~500,000 records into the database in just under two minutes, and takes about the same time if we add more fields.
Another couple of nice things PostgreSQL has over MySQL:
Full text searching
Geographical querying (PostGIS)
Regular-expression matching: email ~ 'hotmail|gmail' matches, and email !~ 'hotmail|gmail' negates the match. The | indicates an "or" (alternation). Standard LIKE exists too; ~ is Postgres's regex operator.
In summary: PostgreSQL is like bricks & mortar, where MySQL is Lego. Go with whatever "feels" right to you. This is only my personal opinion.
We switched to PostgreSQL for several reasons in early 2007 (or was it the year before?). The main reasons were:
SQL support - PostgreSQL is much better for complex SQL-queries, for example with lots of joins and aggregates
MySQL's stored procedures didn't feel very mature
MySQL license changes - dual licensed, open source and commercial, a split that made me wonder about the future. With PG's BSD license you can do whatever you want.
Faulty behaviour - when MySQL counted rows, sometimes it just returned an approximate value, not the actual number of rows.
Constraints behaved a bit oddly, inserting truncated/adapted values. See http://use.perl.org/~Smylers/journal/34246
The administrative interface PgAdminIII felt more stable and mature than the MySQL counterpart
PostgreSQL is very solid and crash safe in case of an outage
// John
Haven't made the switch myself, but I got bitten a few times by MySQL's lack of transactional schema changes, which Postgres apparently supports.
This would solve those nasty problems you get when you move from your dev environment with SQLite to your MySQL server and realise your migrations screwed up and were left half-done! (No, I didn't do this on a production server, but it did make a mess of our shared testing server!)
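A quick illustration of what transactional DDL buys you (hypothetical table):

BEGIN;
ALTER TABLE users ADD COLUMN age integer;
-- ... a later step of the migration blows up ...
ROLLBACK;   -- in Postgres the new column is cleanly gone

-- In MySQL, most DDL statements cause an implicit commit, so the
-- ALTER sticks even though the migration as a whole failed.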
We are trying to switch our application from talking to MySQL to Postgres. I wish we had some kind of subroutine (in VB6) where I could input the MySQL query and get a string out containing the appropriate Postgres query.
Does anybody have a solution or is interested working with us on that?
I would consult the "Comparison of Different SQL Implementations;" it's a most useful reference when converting queries from one RDBMS to another. WikiBooks also has a page entitled "Converting MySQL to PostgreSQL." It has a short list of the big differences between the two.
I don't know of any (free/open source) utility to translate queries, but unless you have really big, complicated queries, you shouldn't have much difficulty translating them (and, if you do have big, complicated queries, an automated tool probably won't help).
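To give a feel for the routine, mechanical part of the translation (hypothetical users table):

-- Identifier quoting: backticks become standard double quotes
SELECT `name` FROM `users`;               -- MySQL
SELECT "name" FROM "users";               -- Postgres

-- Two-argument LIMIT becomes LIMIT/OFFSET
SELECT * FROM users LIMIT 10, 20;         -- MySQL: skip 10, return 20
SELECT * FROM users LIMIT 20 OFFSET 10;   -- Postgres

-- String concatenation
SELECT CONCAT(first, ' ', last) FROM users;   -- MySQL
SELECT first || ' ' || last FROM users;       -- Postgres (CONCAT also exists in newer versions)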
I don't believe that there's any 'silver bullet' tool that would convert all of your queries from being MySQL to Postgres compatible.
What you do need is:
a reference for the differences between the two RDBMSs (see James McNellis's answer above)
a good test plan for your application that will put it through the paces to ensure that your converted database backend works
a good reason to go through all this trouble; performance? management edict? etc.
I've seen questions about doing the reverse, but I have an 800MB PostgreSQL database that needs to be converted to MySQL. I'm assuming this is possible (all things are possible!), and I'd like to know the most efficient way of going about this and any common mistakes to look out for. I have next to no experience with Postgres. Any links to guides on this would be helpful also! Thanks.
One piece of advice is to start with a current version of MySQL; otherwise you will not have sub-queries, stored procedures, or views. The other obvious difference is auto-increment fields. Check out:
pg2mysql
/Allan
You should not convert to a new database engine based solely on the fact that you do not know the old one. These databases are very different: MySQL is about speed and simplicity, Postgres about robustness and concurrency. It will be easier for you to just learn Postgres; it is not that hard.
pg_dump can produce the dump as INSERT statements along with the CREATE TABLE statements. That should get you close. The bigger question, though, is why you want to switch; you may do a lot of work and not get any real gain from it.
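If you go that route, the invocation is roughly this (hypothetical database name; the output will still need hand-editing for MySQL's type names, SERIAL columns, and the like):

pg_dump --inserts --no-owner mydb > mydb.sql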