Two others and I are working on a university project in which we are building a prototype of an MMORPG.
We have decided to use PostgreSQL as our database. The other databases we considered were MS SQL-server and MySQL.
Does anybody have a good reference that would justify our choice? (Preferably one written within the last year.)
Someone recently recommended wikivs.com: MySQL vs. PostgreSQL to me - it is quite a detailed comparison of the two and might be of help to you.
The most-mentioned difference between MySQL and PostgreSQL concerns your read/write ratio. If you read a lot more than you write, MySQL is usually faster; but if you do a lot of heavy updates to a table, as often as other threads have to read it, then MySQL's default locking is not the best, and PostgreSQL can be a better choice performance-wise.
In other words, PostgreSQL scales better with respect to DB writes.
That's why it's usually said that MySQL is best for web apps, while PostgreSQL is more 'enterprisey'.
Of course, the picture is not so simple:
InnoDB tables in MySQL have very different performance behaviour from the default MyISAM tables.
At the load levels where PostgreSQL's better locks overtake MySQL's, other parts of your platform could be the bottlenecks.
PostgreSQL complies more closely with the SQL standard, so it can be easier to replace later.
In the end, the choice involves so many variables that no matter which way you go, you'll find some important issue that makes it the right choice.
Go with something that someone in your team has actual experience of using in production. All databases have issues which frequent users are aware of.
I cannot stress enough that someone in the team needs PRODUCTION experience of using it. Not using it for their homework, or to keep their list of CDs in.
All of these databases have their advantages and disadvantages. Which is better is dependent on:
Your team's experience
Your exact requirements
Your current environment, e.g. what is your app written in and where is it going to be hosted?
SQL Server's main problem is the cost, unless you use the Express edition, which has performance limitations; however, it is very easy to use and has a number of good tools.
There is a comparison of the different SQL Server editions at:
http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
You could then compare these with MySQL and PostgreSQL.
If the purpose of this comparison is a theoretical one for your essay, then you can reference web pages such as the Microsoft link above and compare performance, cost, etc.
Postgresql has a page of case studies that you can quote and link to.
Really, any of the above would have worked for you. I personally like PostgreSQL. One solid advantage it has over MSSQL (even assuming you can get it for "free") is that PostgreSQL is non-proprietary. If you're going to introduce a dependency into your project (and re-inventing an RDBMS would be crazy), you don't want it to be a black box.
EDIT: I've learnt that YouTube apparently uses MySQL, but it would probably be the enterprise edition and not the free edition. The only alternative seems to be PostgreSQL. Long question short: can PostgreSQL be used instead of MySQL? Is it a good alternative in any case?
Firstly, I noticed that these are the most common names when it comes to (relational) database management systems: DB2 (IBM), Oracle Database, Microsoft SQL Server, Ingres, MySQL, PostgreSQL and Firebird. So, should I presume these are the best?
Okay, of the above, DB2 (IBM), Oracle Database and Microsoft SQL Server, the so-called enterprise DBMSs, come with a bill, while MySQL (excluding the enterprise version), PostgreSQL and Firebird are open source and free.
As should be clear from my previous questions here, I plan to build a photo-sharing site (something like Flickr, Picasa), and like any other, it's going to be database-heavy and (hopefully) busy.
Here's what I would love to know: (1) does any one of the free DBMSs stand up to the mark with the paid enterprise DBMSs? (2) Can any of the free DBMSs scale and perform well for enormous and busy databases without too much headbanging and facepalming?
Things in my mind w.r.t the DB:
Mature
Fast
Perform great/fine under heavy load
Perform great/fine as database grows
Scalable (smooth transition)
Support for languages (preferably Python, PHP, JS, C++)
Feature-rich
etc (whatever I am missing)
PLEASE NOTE: I know Facebook, Twitter, etc. use (or at least used) MySQL, and I see reports from time to time about how their sysadmins cry over that decision. So please don't say "XXX uses it, so why can't you?" They started small; so am I. They made mistakes; I don't want to. I want to keep the scaling transition smooth. I hope I am not asking too much. Thanks.
"Which is the best database" is a huge question and is the subject of much contention. I've noticed on StackOverflow there is a tendency to close such questions; although the question is interesting, it is also quite unresolvable ;-)
FWIW, I would go with this:
Use what you know
If it doesn't conflict too heavily with the first rule, use something that is free of charge
Use what works with other parts of your stack
Use what you can hire for at reasonable cost (so, maybe not Oracle unless you really have to)
Don't optimise too early. A website that works slowly is much better than an efficient one that is unfinished.
Also, scalability is not really to do with your db platform, but to do with how you design your site. Note also that some platforms scale better when adding more servers (MySQL) and others do better when increasing your server resources (PostgreSQL).
Please note that, as of today, MySQL is not as free a project as PostgreSQL. That is one of the main reasons why I had to switch over to PG. (Thanks to Npgsql and pgAdmin III, it was a lot easier than it was rumoured to be.)
However, MySQL does have a number of advantages related to applications, add-ons and forums, and it looks good on a resume.
PostgreSQL is a much more mature DBMS. It is an object-relational DBMS and has been around for more than 15 years. It is not known to have failed on any major issues, and it is well known to handle transactions over millions of rows successfully. Most important is its high rate of compliance with the SQL standards. In fact, in professional circles it is more the Oracle of free RDBMSs than the MySQL of popular applications.
Every single book that teaches programming (or almost anything else) starts off with a whole bunch of spiel on why what it's about (C++, MySQL, waterskiing, skydiving, dentistry, whatever) is the greatest thing in the world. So I open the MySQL O'Reilly book, and read the intro, and get the traditional sermon. The main points that the book mentioned were:
MySQL has been shown to have tied Oracle as the fastest and most scalable database software.
It's free and open source.
Sounds pretty convincing, but I know there are always at least two sides to every story. I knew I needed to be disillusioned when I saw someone suggest that someone use Oracle instead of MySQL and I thought, "Why in the world would you want to do that?!", just because of the few paragraphs I'd read, with no other justification. So let's investigate the other side of the story:
What are some reasons NOT to use MySQL?
Here's just a random list of stuff that popped into my head. It's CW, so feel free to add to it as necessary.
Oracle provides a top notch ERP built on their database. If your company is subject to Sarbanes-Oxley regulations, this is quite a bit above "crucial."
SQL Server licenses come with Analysis Services, Integration Services, and Reporting Services. If you want to do anything with OLAP, ETL, or reporting, these three are great applications that are built on the SQL Server stack.
SQL Server has native .NET data types (in 2008). Absolutely brilliant for .NET shops dealing with geospatial datasets.
MySQL does not support check constraints (it accepts the syntax but silently ignores it; see the sketch after this list).
SQL Server includes the OVER clause, which helps when dealing with the "top n rows in each group" problem; essentially, you can partition aggregate functions over the dataset any way you'd like (also sketched after this list).
SQL Server uses Kerberos and Windows authentication natively. MySQL does not tie into Active Directory.
Superior performance on subqueries (almost any database has subquery performance that is superior to MySQL's)
Oracle, SQL Server, PostgreSQL and others have a richer set of join algorithms available to them; this means joins can often be performed faster, especially when large tables are involved.
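As a rough illustration of the two SQL features called out in the list above, here is a hedged sketch; the accounts and orders tables are invented for the example, and the MySQL behaviour described is that of the versions discussed in this thread.

    -- Check constraint (standard SQL; MySQL of this era accepts the syntax
    -- but silently ignores it, so the rule is never enforced there):
    CREATE TABLE accounts (
        id      INT PRIMARY KEY,
        balance DECIMAL(12,2) CHECK (balance >= 0)
    );

    -- "Top n rows per group" using the OVER clause (SQL Server, PostgreSQL, Oracle):
    SELECT customer_id, order_id, amount
    FROM (
        SELECT customer_id, order_id, amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY amount DESC) AS rn
        FROM orders
    ) ranked
    WHERE rn <= 3;  -- the three largest orders per customer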
MySQL has been shown to have tied Oracle as the fastest and most scalable database software.
Making that statement about any two database systems is probably enough to throw the book away without reading the rest. Database systems are not commodities that can be compared with a couple lines of information, and will not be for the foreseeable future.
One reason that the statement is obviously false is that MySQL has very limited plan choices available. For instance, MySQL can't use merge join or hash join -- two fundamental algorithms that have useful performance characteristics. That's pretty much the end of the story for many query workloads. It is trivial to show a reasonable query that is orders of magnitude faster with a merge join.
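To make that concrete, here is a hedged sketch of the kind of query involved; the orders/customers tables are invented, and the point is only that a whole-table join is exactly where a hash or merge join shines while repeated nested-loop lookups do not.

    -- A full join and aggregation over two large tables.
    -- A planner with hash/merge joins can scan or hash each input once;
    -- a nested-loop-only planner re-probes one table per outer row.
    SELECT c.region, SUM(o.amount) AS revenue
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    GROUP BY c.region;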
There are plenty of other criticisms of MySQL versus XYZ and vice-versa. My point is that this is a complex issue, and the book is drastically oversimplifying. If you're getting involved in databases at all, you need to spend time diversifying your knowledge and understanding fundamentals.
My personal opinion is that MySQL and SQLite are the worst places to start. Pick something like Oracle (which can be downloaded free of charge for learning/evaluation, which many don't realize), PostgreSQL (BSD license), or MS SQL. FirebirdSQL might be good, too. Once you familiarize yourself with a few systems, you'll be able to make an informed choice about whether the trade-offs MySQL makes are right for you.
Everyone seems to be missing one of the main reasons to stick with Oracle/MS. You've already got a stable full of DBAs that know those products inside and out.
The default collation in mysql is case-insensitive. This is not a problem per se, but I think this strange default is an indication that it was targeted at hobby-developers, rather than professionals. This is a big assumption, but I'd think any professional would expect a database to compare strings for identity by default (i.e. using a binary collation).
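For anyone who hasn't run into this, a quick hedged illustration of the default behaviour (exact results depend on the server's configured collation):

    -- With MySQL's historical default collations (e.g. latin1_swedish_ci,
    -- utf8_general_ci), string comparison is case-insensitive:
    SELECT 'MySQL' = 'mysql';          -- returns 1 (true)

    -- The BINARY operator (or a *_bin column collation) makes the
    -- comparison case-sensitive:
    SELECT 'MySQL' = BINARY 'mysql';   -- returns 0 (false)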
Manipulating tables during a transaction causes implicit COMMITs. While this might not look grievous at first glance, you will notice that you cannot work under ACID conditions if altering/creating tables is an inherent part of your application.
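A minimal sketch of that pitfall (table names invented): the DDL statement commits the open transaction, so the later ROLLBACK has nothing left to undo.

    START TRANSACTION;
    INSERT INTO audit_log (message) VALUES ('step 1');
    CREATE TABLE scratch (id INT);   -- DDL: implicitly commits the transaction
    ROLLBACK;                        -- too late, the INSERT is already committed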
MySQL can certainly match or beat Oracle in speed. I've done it numerous times myself. OK, so I had to use various table types like BLACKHOLE, MERGE, InnoDB and MyISAM in just the right places, and it took me a few days to get everything working just right. The Oracle DBA got things working in an hour or two.
MySQL is fine for 98% of the sites out there, maybe more. But it is fairly easy to bring it to a crawl without a lot of data if you don't know what you are doing. Oracle is quite a bit harder to bring to a crawl, but it can still be done. I've worked with both with datasets in the hundreds of millions of records (tiny by some measures). MySQL takes quite a bit more attention.
No database can scale indefinitely, which is why nosql "databases" are becoming so popular. I think the real question is if MySQL is "good enough" for what you need to do. The price is certainly right. The same could be said about PHP.
Why does Facebook use MySQL? Could you imagine what it would cost them to buy enough Oracle licenses!? It's good enough.
The future of Sun (the company behind MySQL) is unclear, and you don't know whether there will be a company to back the product.
MySQL is very tolerant of ambiguities -- something you don't want in a database system. Here are a few examples off the top of my head:
As another poster stated, CHAR and VARCHAR columns are case-insensitive, already a pretty bad sign.
You can INSERT into a table that has a NOT NULL column without a default value. Yes, really! Instead of throwing an error, MySQL will pick a value for you based on the data type, e.g. 0 for numbers.
You can use a GROUP BY statement while some selected columns neither use an aggregate function nor appear in the GROUP BY clause. The value returned for those columns is essentially arbitrary, and there are no warnings or errors here either, in my experience. (Both behaviours are sketched after this list.)
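A hedged sketch of both behaviours; this assumes MySQL's historical defaults (strict SQL mode and ONLY_FULL_GROUP_BY disabled), and recent versions reject both statements:

    CREATE TABLE t (id INT NOT NULL, name VARCHAR(20) NOT NULL);

    -- No values supplied for two NOT NULL columns without defaults:
    -- instead of an error, MySQL silently inserts 0 and '' respectively.
    INSERT INTO t () VALUES ();

    -- 'name' is neither aggregated nor in the GROUP BY: MySQL returns an
    -- arbitrary name from each id group, with no warning.
    SELECT id, name, COUNT(*) FROM t GROUP BY id;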
MySQL is also far from rock-solid. Just this month, I discovered a bug in the (admittedly old, but a "stable release") version of MySQL used by DreamHost that results in data loss. (Certain conditions when creating a table with variable-length rows.)
I've been using MySQL for many years and still do, but would never dream of using it for anything serious, where data loss would be a big problem. It's great for non-mission-critical web sites and blogs though.
I knew I needed to be disillusioned when I saw someone suggest that someone use Oracle instead of MySQL and thought, "Why in the world would you want to do that?!"
Because your company has been using Oracle for the past ten years, or because you equate enterprise usage with 'must be good' and open source with 'free crap'. That's just about the only reason. Everyone I know who has worked with Oracle loathes it. Everyone I know who has worked with MySQL, if they don't love it, at least considers it a better alternative to Oracle in almost every regard.
SQL RDBMSs are so complex, though, that in almost every respect there's something one DB does that another doesn't. It is also, unfortunately, a fact of comparing databases that people quote statistics without using properly configured servers. If you have two default configurations for a server, one might be better than the other, but that's about as far as the comparisons usually go. They don't reflect the fact that these gigantic applications have a million little switches and toggles you can use to speed certain things up or increase reliability, which generally turns naive comparisons into bad science.
MySQL tends to be a very general-purpose database system; you can use it for almost anything that you'd use Oracle, SQL Server, PostgreSQL, DB2, etc. for.
However, these different systems have different strengths. PostgreSQL has a ton more functionality than MySQL and can handle some very specific tasks that MySQL struggles with. SQL Server usually integrates with Microsoft products very easily, whereas with MySQL you'd have to do some extra work to make them play together. Oracle is MASSIVE; they're not just databases, and when you're dealing with large, expansive systems Oracle probably has the gear to cover everything under one roof, whereas you'd need to tie a bunch of disparate systems together to have MySQL as your database system.
Whether or not to use MySQL should be based upon whether or not it is reasonable to use MySQL.
Disclaimer: I have been using MySQL since 2001 and still love it, but here are a few reasons that make me doubt my fidelity...
There are some false arguments (it was true a few years ago) in some of the answers I read. Before making a choice, check MySQL documentation and its up-to-date list of features. You could be surprised.
Every DB server lacks some functionality. This is not a real blocking issue if you do not specifically need it.
For me, the main issues are elsewhere:
The time needed to have a bug fixed and published in a stable release. It is a shame. (For some bugs... it takes years (no kidding)!)
The frequency of stable releases.
But since this year, the new issues are:
The increasing number of branches (Percona, Google, Facebook, etc.).
Sun is unclear about its strategy.
Many MySQL employees have left the company.
It's free and open source.
True. But keep in mind that MySQL is, in many cases, not free for commercial use. MySQL and the connectors (the official drivers for various languages) are GPL licensed.
If you use, say, the Connector/.NET to connect to MySQL, your code has to be GPL-compatible. It's dual-licensed, though, so you can buy an enterprise version under another license - and I believe they have a (either free or just very cheap) program that lets you license the connectors under a different license.
Everyone I know using MySQL is unaware of this :-)
Basically, there are several choices for a database. Frankly, in today's world, DB choice is less important than it was a few years ago. Here are a few issues to consider.
Most of the current database systems in widespread use, such as SQL Server (and SQL Server Express), Oracle, MySQL, SQLite, etc., are relatively standards-compliant and can be used somewhat interchangeably. Some serve different niche markets. For example, SQL Server, MySQL, and Oracle are all good choices for large enterprise applications. SQLite is very good for applications which deploy on a client and need a local database with a small footprint and minimal configuration. (In my opinion, Oracle is extremely over-priced and is backed by an arrogant, unresponsive company. It would never be my first choice on any project. I would only use it if it was mandated by the client or by necessity.)
A high percentage of top-end developers are using tools such as Hibernate (Java)/NHibernate (.NET) to build their data access layers. Hibernate variants strongly encourage developers to start with development of the object model rather than the database model. The Hibernate application then generates the data model automatically--and even handles data model updates. Hibernate variants can be used with any of the major database vendors. Changing your database choice can be as simple and painless as selecting a different database type in your configuration. On a side note, I should mention that while Hibernate and NHibernate are cross-database-compatible, they do not work on the lowest common denominator. The data access code in these applications is often designed to take advantage of special features within a given database engine. For example, NHibernate supports access to the NVarchar(Max) data type in SQL Server, which allows for very long strings.
In most applications, issues with database performance do not derive directly from the speed of reads and writes. Most of the issues relate to how the application manages the caching of frequently accessed data. For example, on a blog site, it makes sense to cache blog posts once they have been read so they are not repeatedly fetched from the database. This caching mechanism is almost always primarily handled by the application code rather than the database server--though database servers do provide some caching. Hibernate/NHibernate have excellent caching support built in, as do Microsoft's ASP.NET and their new MVC framework built on top of ASP.NET.
Enterprise databases (SQL Server, Oracle, MySQL) are best for situations where functionality such as replication, clustering, huge datasets, etc. is required.
I don't like the MySQL licence: Firebird's and PostgreSQL's are better.
There is no real hot backup included in the MySQL distribution from Sun.
You can also look here, which is an interesting link with comments!
MySQL is free, but it takes an expert to maintain. Someone who naturally uses the command prompt and is not afraid to experiment. In some cases, MySQL problems are too complex, and the right people to troubleshoot them may not be available for any amount of money.
SQL Server is priced in the middle range. It can be maintained by "normal people", the kind who go home every day at 17:00 and have a natural disinclination towards fifty-page HOW-TOs. SQL Server performs well in most instances but can break down in specific scenarios.
Oracle is the most expensive and requires highly paid operators. If you have the money, Oracle is a "safe" choice, because there's nothing Oracle won't do for money.
Three products, three markets!
A couple of pages listing gotchas (such as this and this) make me want to stay as far away from MySQL as possible. Here's a more neutral comparison of Postgres and MySQL.
As for the open source aspect others mentioned: MySQL is open source and free, only if your application is, too. If it's not, you need a commercial license.
My personal story: adding a new index to a table of about 10k rows.
MySQL: about 30 seconds.
Postgres: about 1 second.
I've worked with MySQL for years, and SQL Server only over the past year. I don't really see one being any easier or harder to use than the other in most cases. I do wish, however, that MSSQL had some of the features that MySQL possesses (e.g. being able to insert multiple rows on a single INSERT statement).
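For readers who haven't seen it, this is the multi-row INSERT syntax being referred to (MySQL; the table and columns are invented, and newer SQL Server versions have since gained a similar row-constructor syntax):

    INSERT INTO tags (post_id, tag)
    VALUES (1, 'mysql'),
           (1, 'performance'),
           (2, 'postgresql');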
Also, if you don't have to use an RDBMS, check out Redis. It is basically memcached with persistence, using asynchronous write-through. Its performance is not even on the same scale as MySQL's.
Well... I guess the comparison isn't really fair to MySQL, since Redis isn't an RDBMS...
I am developing a Rails application that will access a lot of RSS feeds or crawl sites for data (mostly news). It will be something like Google News but with a different approach, so I'll store a lot of news (or news summaries), classify them in different categories and use ranking and recommendation techniques.
Should I go with MySQL?
Is it worthwhile using IBM DB2 pureXML to store the documents?
Also, Ruby search implementations (Ferret, Ultrasphinx and others) are not needed if I choose DB2. Is that correct?
What are the advantages of PostgreSQL in this?
Does it make sense to use CouchDB in this scenario?
I'd like to choose the best option without over-complicating the solution, so I discarded the idea of using two different storage solutions (one for the news documents and another for the rest of the data). I'm also considering only "free" options, so I didn't look at Oracle or MS SQL Server.
pureXML is heavier than plain SQL, so you pay more for each round trip between the web server and the DB. If you plan to have lots of users I'd avoid it; you're better off letting your web server cache the requests, thus avoiding generating the XML (RSS) every time, if that is what you are thinking about.
I'd go with MySQL because it's really good at serving and totally free. Well, PostgreSQL is too, but I haven't used it, so I can't say.
CouchDB could make sense, but not if you plan on doing OLAP-style (offline) analysis of your data; a normal RDBMS will be better at that.
Admitting firstly that I generally don't like MySQL, I will say that there has been some writing on this topic regarding Postgres:
http://oldmoe.blogspot.com/2008/08/101-reasons-why-postgresql-is-better.html
This is always my choice when I need a pure relational database. I don't know whether a document database would be more appropriate for your application without knowing more about it. It does sound like it's something you should at least investigate.
MySQL is probably one of the best options out there; light, easy to install and maintain, multiplatform and free. On top of that there are some good free client tools.
Something to think about; because of the nature of your system you will probably have some tables that will grow quite a lot very quickly so you might want to think about performance.
On that note, MySQL does support table partitioning, but only from version 5.1 onwards (a minimal sketch follows).
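A hedged sketch of MySQL 5.1+ range partitioning for a table that grows quickly; the table, columns and year ranges are invented for illustration:

    CREATE TABLE news_items (
        id           INT NOT NULL,
        published_at DATE NOT NULL,
        title        VARCHAR(255)
    )
    PARTITION BY RANGE (YEAR(published_at)) (
        PARTITION p2008 VALUES LESS THAN (2009),
        PARTITION p2009 VALUES LESS THAN (2010),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );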
It sounds to me the application you will build can easily become a large-scale web app. I would suggest PostgreSQL, for it has been known for its reliability.
You can check out the following link - Bob Ippolito from MochiMedia tells us why they ditched MySQL for PostgreSQL. Although the posts are more than three years old, the issues MySQL 5.1 has had recently tend to prove that they are still relevant.
http://bob.pythonmac.org/archives/category/sql/mysql/
MySQL is good in production. I haven't used PostgreSQL for rails, but it's a good solution as well.
In the dev and test environments I'd start out with SQLite (default), and perhaps migrate to your target DB in the test environment as you move closer to completion.
Are there any formal performance and stress test reports for open source databases, especially SQLite, MySQL and PostgreSQL?
I want to use SQLite on a server for its simple structure and easy embeddability, but I cannot find any pros and cons (by Googling and Yahoo!ing) regarding the performance of these database systems.
Please suggest.
I found this article. It has a disclaimer at the top about the age of the information; however, it may be of some help to you.
Here is another article that seems a little more recent and up to date.
It seems from reading these that SQLite is quite adequate in terms of performance.
Sysbench is a great utility for benchmarking MySQL, and I believe it has plugins or the capability to test PostgreSQL. Keep in mind that you're not going to get a simple number that says "DBMS A is faster than DBMS B" -- at best you can hope to get an idea of what kind of scaling you'll get for a particular type of workload that is hopefully similar to whatever workload you'll end up throwing at your system.
Regardless of performance, if you really know what you are doing with RDBMS software and need an open source solution, you'll probably want to go with PostgreSQL -- otherwise, stick with MySQL.
Benchmarks are not the most important factor in database choice.
I think SQLite and MySQL are quicker than Postgres or Firebird, but if you need specific features like CTEs, only a few databases have them, even though they are part of the SQL standard.
Benchmarking is hard. And expensive. And in the installations where most benchmarks are done, SQLite won't even be tested, because it's designed for completely different workloads and simply doesn't deal with that situation. (For example, any real benchmark will have clients running on different machines from the server, which SQLite AFAIK doesn't really do - whereas it does do very well in the case where you have a single local client.)
You can always look at something like SPEC, for example http://www.spec.org/jAppServer2004/results/jAppServer2004.html, which shows both PostgreSQL and MySQL at least. But beware that the hardware platforms are different (and that these tests are also not recent).
But the bottom line is that if you want to compare performance for your application, the only really relevant benchmark you can run is your own application in a testing environment.
Why don't databases automatically index tables based on query frequency? Do any tools exist to analyze a database and the queries it is receiving, and automatically create, or at least suggest which indexes to create?
I'm specifically interested in MySQL, but I'd be curious for other databases as well.
That is one of the best questions I have seen on Stack Overflow. Unfortunately I don't have an answer. Google's BigTable does automatically index the right columns, but BigTable doesn't allow arbitrary joins, so the problem space is much smaller.
The only answer I can give is this:
One day someone asked, "Why can't the computer just analyze my code and compile & statically type the pieces of code that run most often?"
People are solving this problem today (e.g. Tamarin in FF3.1), and I think "auto-indexing" relational databases is the same class of problem, but it isn't as much a priority. A decade from now, manually adding indexes to a database will be considered a waste of time. For now, we are stuck with monitoring slow queries and running optimizers.
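For MySQL, "monitoring slow queries" typically means turning on the slow query log and then explaining the offenders by hand; a minimal sketch (the one-second threshold and the example query are arbitrary):

    -- Log statements slower than 1 second, plus those not using any index
    -- (standard MySQL server variables, settable at runtime in 5.1+):
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;
    SET GLOBAL log_queries_not_using_indexes = 'ON';

    -- Then inspect a logged query's plan to see whether an index would help:
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;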
There are database optimizers that can be enabled or attached to databases to suggest (and in some cases perform) indexes that might help things out.
However, it's not actually a trivial problem, and when these aids first came out users sometimes found it actually slowed their databases down due to inferior optimizations.
Lastly, there's a LOT of money in the industry for database architects, and they prefer the status quo.
Still, databases are becoming more intelligent. If you use SQL Server Profiler with Microsoft SQL Server, you'll find ways to speed your server up. Other databases have similar profilers, and there are third-party utilities to do this work.
But if you're the one writing the queries, hopefully you know enough about what you're doing to index the right fields. If not then having the right indexes is likely the least of your problems...
-Adam
MS SQL 2005 also maintains an internal record of suggested indexes to create based on usage data. It's not as complete or accurate as the Tuning Advisor, but it is automatic. Research dm_db_missing_index_groups for more information.
There is, I think, an MS SQL blog with a script for suggesting indexes in SQL 2005, but I can't find the exact script right now! From the description, it's just the thing, as I recall. Here's a link to some more info: http://blogs.msdn.com/bartd/archive/2007/07/19/are-you-using-sql-s-missing-index-dmvs.aspx
PS: this is just for SQL Server 2005+.
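As a hedged sketch (not the exact script from that blog post), a query along these lines joins the missing-index DMVs to list what the optimizer has recorded:

    -- SQL Server 2005+: index suggestions recorded by the query optimizer.
    SELECT  d.statement            AS table_name,
            d.equality_columns,
            d.inequality_columns,
            d.included_columns,
            s.user_seeks,
            s.avg_user_impact      -- estimated % improvement if the index existed
    FROM    sys.dm_db_missing_index_details     AS d
    JOIN    sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
    JOIN    sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;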
There are tools out there for this.
For MS SQL, use the SQL Profiler (to record activity against the database), and the Database Engine Tuning Advisor (SQL 2005) or the Index Tuning Wizard (SQL 2000) to analyze the activities and recommend indexes or other improvements.
Yes, some engines DO support automatic indexing. One such example for MySQL is Infobright: their engine does not support "conventional" indexes and instead implicitly indexes everything - it is a column-based storage engine.
The behaviour of such engines tends to be very different from what developers expect (and yes, you need to be a DEVELOPER to even be thinking about using Infobright; it is not a drop-in replacement for a standard engine).
I agree with what Adam Davis says in his comment. I'll add that if such a mechanism existed to create indexes automatically, the most common reaction to this feature would be, "That's nice... How do I turn it off?"
Part of the reason may be that indexes don't just give a small speedup. If you don't have a suitable index on a large table, queries can run so slowly that the application is entirely unusable, and if it is interacting with other software it may simply not work. So you really need the indexes to be right before you start trying to use the application.
Also, rather than building an index in the background, and slowing things down further while it's being built, it is better to have the index defined before you start adding significant amounts of data.
I'm sure we'll get more tools that take sample queries and work out what indexes are necessary; also probably we will eventually get databases that do as you suggest and monitor performance and add indexes they think are necessary, but I don't think they will be a replacement for starting off with the right indexes.
It seems that MySQL doesn't have a user-friendly profiler. Maybe you want to try something like this, a PHP class based on the MySQL profiler.
Amazon's SimpleDB has automatic indexing on all columns based on your usage:
http://aws.amazon.com/simpledb/
It has other limitations though:
It's a key-value store, not an RDB. Obviously that means slow joins (and no built-in join support).
It has a 10gb limit on table size. There are libraries that will handle partitioning big data for you although this locks you into that library's way of doing things, which can have its own problems.
It stores all values as strings, even numbers, which makes sorting a column containing 1, 9 and 10 come out as 1, 10, 9 unless you use a library that works around this by zero-padding (illustrated below). This also impacts negative numbers.
The 10gb limit is bigger than many might assume, so you could proceed with this for a simple site that you plan on rewriting if it ever hits big.
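The underlying gotcha is just lexicographic string ordering, and it is easy to reproduce in any SQL database (the VALUES-list syntax below is PostgreSQL/SQL Server; the padding width is arbitrary):

    SELECT val FROM (VALUES ('1'), ('9'), ('10')) AS t(val) ORDER BY val;
    -- lexicographic order: '1', '10', '9'

    SELECT val FROM (VALUES ('001'), ('009'), ('010')) AS t(val) ORDER BY val;
    -- zero-padded strings now sort in numeric order: '001', '009', '010'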
It's unfortunate this kind of automatic indexing didn't make it into DynamoDB, which appears to have replaced it - they don't even mention SimpleDB in their product list anymore; you have to find it through old links.
Google App Engine does that (see the index.yaml file).