There is a growing tendency to shift from MySQL to NoSQL, SQLite, etc. I have read many blogs and articles comparing the speed of MySQL with other types of DBMS. However, I believe that speed is not the problem with MySQL, as it is really fast; the problem is more connected with resource usage. It is common to face extreme server load due to slow MySQL queries. For instance, an advantage of Oracle over MySQL is said to be that it has fewer problems with memory leaks.
Is it true that MySQL consumes significantly more resources (CPU and memory) compared with other databases such as SQLite, non-relational databases, and key/value stores? By "significantly" I mean: is this the main reason for not using MySQL for large databases (to save server costs)?
If yes (to 1), what would be a rough estimate of how much less resource a similar system such as SQLite uses compared with MySQL?
Note: Consider a simple system, as the advanced features of MySQL are not needed; we are just comparing the performance of simple queries.
If you're only using "simple" queries, I don't think there's much of a difference in resource usage between MySQL and, e.g., Oracle.
Those "professional" DBMS do a lot of "magic" regarding caching, prefetching and data maintanance.
Of course MySQL does that as well, but it might not be as efficient for really complex databases and advanced queries.
Your choice of DBMS depends highly on what you're planning to do, especially if you're choosing between SQL, NoSQL, key-value stores, etc., which are meant for completely different scenarios; that's not so much a question of memory and CPU usage.
CPU and memory are never the reason, as they are cheap. The problem is I/O speed. NoSQL databases are used in write-intensive applications, as well as in applications that need a schema-less database (because changing a table's schema in MySQL involves rewriting the whole table, which may be extremely slow). So some trade-offs are made to optimize disk operations, which often leads to consuming more CPU, memory, or disk space.
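To illustrate the schema-change point, a minimal sketch (the table and column names are hypothetical):

    -- On MySQL versions without online DDL, adding a column rebuilds
    -- the entire table and blocks writes while it runs -- painful on
    -- a multi-gigabyte table:
    ALTER TABLE user_events
        ADD COLUMN source VARCHAR(32) NOT NULL DEFAULT 'web';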
Another reason could be pessimistic vs. optimistic locking, but that is another topic.
But since the answer to the question "Is it true that MySQL consumes significantly more resources (CPU and memory) compared with other databases" is NO, it is pointless to discuss it further :)
Let's imagine a very simple task to be done on a server. There are many users chatting on our site, and we would like to know whether each of them is online or not.
There are two obvious approaches: use a MySQL database, or use a memcached NoSQL solution.
But why should memcached perform faster? If I understand it correctly, MySQL will also read data from memory, not from disk (if set up and tuned correctly). Persistence costs a few resources, but not too many: just a few memory pages to flush to disk.
The main question: is there a strong reason to go NoSQL for such a task, or will MySQL also perform OK?
For such a trivial task, you're right: it won't significantly change the performance, as the data will remain in memory and I/O won't be an issue.
Your question seems to imply that memcached is a typical NoSQL engine; let me emphasize that memcached is an entity of its own and is usually not thought of as a NoSQL database, but rather as a fast, volatile key-value store, more often than not backed by a disk-bound database.
SQL and NoSQL each have strong points and weaknesses that are beyond the scope of your question; more info about that is available in another thread.
NoSQL in general is for the analysis of Big Data. Memcached is for building a fast caching layer.
A chat doesn't need Big Data analysis or a caching system, because you only need to display a small amount of data, and that data is updated often. So a relational DBMS is the best choice.
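To make that concrete, here is a minimal sketch of the relational approach to the online-status question above (table and column names are made up):

    CREATE TABLE user_presence (
        user_id   INT UNSIGNED NOT NULL PRIMARY KEY,
        last_seen TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
                               ON UPDATE CURRENT_TIMESTAMP
    ) ENGINE=InnoDB;

    -- Touch the row on every request or heartbeat:
    INSERT INTO user_presence (user_id) VALUES (42)
        ON DUPLICATE KEY UPDATE last_seen = CURRENT_TIMESTAMP;

    -- "Online" = seen within the last 5 minutes:
    SELECT user_id FROM user_presence
    WHERE  last_seen > NOW() - INTERVAL 5 MINUTE;

A table this small lives entirely in the buffer pool, which is exactly the "data stays in memory" point made above.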
Imagine you have a complex site which rarely changes. Imagine that your pages are complex, and several complicated queries must be executed to compose each page. In that case, using memcached makes sense, because you can compose the pages and store them in memory.
Imagine you have enormous business intelligence data. You need some aggregating operations, like averages, standard deviations, sums... well, a Big Data solution may perform better than MySQL. That said, there are a great number of caveats.
Conclusion: NoSQL is not for chats :)
In our (currently MySQL) database there are over 120 million records, and we make frequent use of complex JOIN queries and application-level logic in PHP that touch the database. We're a marketing company that does data mining as our primary focus, so we have many large reports that need to be run on a daily, weekly, or monthly basis.
Concurrently, customer service operates on a replicated slave of the same database.
We would love to be able to make these reports happen in real time on the web instead of having to manually generate spreadsheets for them. However, many of our reports take a significant amount of time to pull data for (in some cases, over an hour).
We do not operate in the cloud, choosing instead to operate using two physical servers in our server room.
Given all this, what is our best option for a database?
I think you're going about the problem the wrong way.
Thinking that you'll get better performance just by dropping in NoSQL is not really true. At the lowest level, you're writing and retrieving a fair chunk of data, which implies that your bottleneck is (most likely) HDD I/O, the most common bottleneck.
Sticking with the hardware you currently have and using monolithic data storage isn't scalable, and, as you've noticed, has implications when you want to do something in real time.
What are your options? You need to scale your server and software setup (which is what you'd have to do with any NoSQL solution anyway; stick in faster hard drives at some point).
You also might want to look into alternative storage engines (other than MyISAM and InnoDB); for example, one of the better engines that seemingly turns random I/O into sequential I/O is TokuDB.
Implementing a faster disk subsystem would also help (FusionIO, if you have the resources to get it).
Without more information on your end (what the server setup is, what MySQL version you're using and what storage engines + data sizes you're operating with), it's all speculation.
Cassandra still needs Hadoop for MapReduce, and MongoDB has limited concurrency with regard to MapReduce...
... so ...
... 120 million records is not that much, and MySQL should easily be able to handle it. My guess is an I/O bottleneck, or lots of random reads instead of sequential reads. I'd rather hire a MySQL techie for a month or so to tune your schema and queries than invest in a new solution.
If you provide more information about your cluster, we might be able to help you better. "NoSQL" by itself is not the solution to your problem.
As much as I'm not a fan of MySQL once your data gets large, I have to say that you're nowhere near needing to move to a NoSQL solution. 120M rows is not a big deal: the database I'm currently working with has ~600M in one table alone and we query it efficiently. Managing that much data from an ops perspective is the problem; querying it isn't.
It's all about proper indexes and their correct use when joining, and secondarily about memory settings. Find your slow queries (the MySQL slow query log FTW!), and learn to use the EXPLAIN keyword to understand why they are slow. Then tweak your indexes so your queries are efficient. Also make sure you understand MySQL's memory settings; there are great pages in the docs explaining how they work, and they aren't that hard to understand.
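A sketch of that workflow (the variables are real, but the threshold and the table/column names in the EXPLAIN are made up):

    -- Turn on the slow query log at runtime (MySQL 5.1+):
    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second

    -- Then take a logged query and ask MySQL how it executes it:
    EXPLAIN
    SELECT o.id, c.name
    FROM   orders o
    JOIN   customers c ON c.id = o.customer_id
    WHERE  o.created_at > '2012-01-01';
    -- In the output, type=ALL with a huge "rows" estimate means a full
    -- table scan, i.e. a missing or unusable index on that join/filter.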
If you've done both of those things and you're still having problems, make sure disk I/O isn't an issue. If it is, then look into another solution for querying your data.
NoSQL solutions like Cassandra have a lot of benefits. Cassandra is fantastic at writing data, and scaling your writes is very easy: just add more nodes! But the tradeoff is that it's harder to get the data back out. From a cost perspective, if you have expertise in MySQL, it's probably better to leverage that and scale your current solution until it hits a limit before completely switching your underlying architecture.
Could SQLite be an alternative to MySQL in high-traffic web sites?
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
Source: http://www.sqlite.org/whentouse.html
The short answer is: SQLite is an embedded database. Its purpose is different from that of a standalone RDBMS. While it is quicker than MySQL for simple queries, keep in mind that SQLite has:
no good networking support (SQLite's purpose is different), so replication is a PITA
coarse-grained locking (only one write at a time)
no advanced table statistics
no sophisticated query optimizer
high memory consumption with large databases (a 100 GB database would require about 25 MB of RAM before each transaction)
So if you do not plan to use SQLite over a network, your database is quite small, your queries are rather simple, and you have lots of reads (and a really small number of writes), then SQLite may be the better choice.
About MySQL: optimizing and using MySQL on super-high-traffic sites is not for the faint of heart. I recommend some good reading:
http://oreilly.com/catalog/9780596003067
https://www.packtpub.com/high-availability-mysql-cookbook/book
No way. SQLite deals terribly with concurrency; the database would become a huge performance bottleneck.
No! It cannot be!
Only if you push your data to a cache and read from the cache. SQLite can be used as the persistence layer for a cache, but it's really not recommended.
How do I know when a project is just too big for MySQL and I should use something with a better reputation for scalability?
Is there a max database size for MySQL before degradation of performance occurs? What factors contribute to MySQL not being a viable option compared to a commercial DBMS like Oracle or SQL Server?
Google uses MySQL. Is your project bigger than Google?
Smart-alec comments aside, MySQL is a professional level database application. If your application puts a strain on MySQL, I bet it'll do the same to just about any other database.
If you are looking for a couple of examples:
Facebook moved to Cassandra only after it was storing over 7 Terabytes of inbox data. (Source: Lakshman, Malik: Cassandra - A Decentralized Structured Storage System.) (... Even though they were having quite a few issues at that stage.)
Wikipedia also handles hundreds of Gigabytes of text data in MySQL.
I work for a very large Internet company. MySQL can scale very, very large with very good performance, with a couple of caveats.
One problem you might run into is that an index greater than 4 gigabytes can't go into memory. I once spent a lot of time trying to improve MySQL's full-text performance by fiddling with index parameters, but you can't get around the fundamental problem that if your query has to hit disk for an index, it gets slow.
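One way to check where you stand (the schema name is a placeholder, and the information_schema numbers are approximate for InnoDB):

    SELECT table_name,
           ROUND(index_length / 1024 / 1024) AS index_mb,
           ROUND(data_length  / 1024 / 1024) AS data_mb
    FROM   information_schema.TABLES
    WHERE  table_schema = 'your_db'   -- replace with your schema
    ORDER  BY index_length DESC;

Compare index_mb against your key buffer / buffer pool size to see which indexes can realistically stay resident.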
You might find some helper applications that can help solve your problem. For the full-text problem, there is Sphinx: http://www.sphinxsearch.com/
Jeremy Zawodny, who now works at Craigslist, has a blog on which he occasionally discusses the performance of large databases: http://blog.zawodny.com/
In summary, your project probably isn't too big for MySQL. It may be too big for some of the ways that you've used MySQL before, and you may need to adapt them.
Mostly it is table size.
I am assuming here that you will use Oracle's InnoDB plugin for MySQL as your engine. If you do not, that probably means you're using a commercial engine such as InfiniDB, Infobright, or Tokutek, in which case your questions should be sent to them.
InnoDB gets a bit nasty with very large tables, so you are advised to partition your tables if at all possible with very large instances. Essentially, if your (frequently used) indexes don't all fit into RAM, inserts will be very slow, as they need to touch a lot of pages that are not in RAM. This cannot be worked around.
You can use the MySQL 5.1 partitioning feature if it does what you want, or partition your tables at the application level if it does not. If you can get your tables' indexes to fit in RAM, and only load one table at a time, then you're on a winner.
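A sketch of what the 5.1 feature looks like (table, columns, and date ranges are made up); each partition carries its own index trees, so only the partition you're loading into needs its index hot in RAM:

    CREATE TABLE events (
        id         BIGINT UNSIGNED NOT NULL,
        created_at DATE            NOT NULL,
        payload    VARCHAR(255),
        PRIMARY KEY (id, created_at)  -- partition key must be in every unique key
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2011 VALUES LESS THAN (TO_DAYS('2012-01-01')),
        PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );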
You can use the plugin's compression to make your RAM go a bit further (as the pages are compressed in RAM as well as on disk), but it cannot beat the fundamental limitation.
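For reference, enabling that compression looks roughly like this (it requires the InnoDB plugin with innodb_file_per_table=1 and the Barracuda file format; the table name is hypothetical):

    ALTER TABLE events
        ROW_FORMAT=COMPRESSED
        KEY_BLOCK_SIZE=8;   -- compressed 8 KB pages instead of 16 KB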
If your table's indexes don't all fit in RAM (or at least mostly; if you have a few indexes that are NULL in 99.99% of cases, you might get away without those), insert speed will suck.
Database size is not a major issue, provided your tables individually fit in RAM while you're doing bulk loading (and, of course, you only load one at a time).
These limitations really happen with most row-based databases. If you need more, consider a column database.
Infobright and InfiniDB both use a MySQL-based core and are column-based engines that can handle very large tables.
Tokutek is quite interesting too - you may want to contact them for an evaluation.
When you evaluate an engine's suitability, be sure to load it with very large data on production-grade hardware. There's no point in testing it with, say, a 10 GB database; that won't prove anything.
MySQL is a commercial DBMS too; you just have the option of buying the support/monitoring offered by Oracle or Microsoft, or of using community support and community-provided monitoring software.
Things you should look at are not only size but also operations. Also critical are:
Scenarios for backup and restore.
Maintenance. Example: SQL Server Enterprise can rebuild an index WHILE THE OLD ONE IS STILL AVAILABLE, transparently. This means no downtime for an index rebuild (see the sketch after this list).
Availability. Basically, you do not want to have to restore a 5,000 GB database if a server dies; mirroring is preferred, and replication "sucks" (technically).
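For reference, the kind of statement meant in the maintenance point above (SQL Server Enterprise syntax; the index and table names are made up):

    -- The old index remains readable and writable while the new copy
    -- is built in the background:
    ALTER INDEX IX_orders_created ON dbo.Orders
        REBUILD WITH (ONLINE = ON);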
Whatever you go for, be careful with Oracle RAC (their cluster); it is known to be "problematic", to put it politely. SQL Server is known to be a lot cheaper and to scale a lot worse (no "RAC" option), but it basically works without making admins want to commit suicide every hour (the "RAC" option seems to do that). Scalability "a lot worse" is still good enough for the TerraServer (http://msdn.microsoft.com/en-us/library/aa226316(SQL.70).aspx).
There were some questions here recently from people having problems rebuilding indexes on a 10 GB database or so.
So much for my 2 cents. I am sure some MySQL specialists will jump in on the issues here.
What are the best practices for optimizing a MySQL installation for best performance when handling somewhat larger tables (> 50k records, with a total of around 100 MB per table)? We are currently looking into rewriting DelphiFeeds.com (a news site for the Delphi programming community) and noticed that simple UPDATE statements can take up to 50 ms. This seems like a lot. Are there any recommended configuration settings that we should enable or set that are typically disabled on a standard MySQL installation (e.g. to take advantage of more RAM to cache queries and data, and so on)?
Also, what performance implications does the choice of storage engines have? We are planning to go with InnoDB, but if MyISAM is recommended for performance reasons, we might use MyISAM.
The "best practice" is:
Measure performance, isolating the relevant subsystem as well as you can.
Identify the root cause of the bottleneck. Are you I/O bound? CPU bound? Memory bound? Waiting on locks?
Make changes to alleviate the root cause you discovered.
Measure again, to demonstrate that you fixed the bottleneck and by how much.
Go to step 2 and repeat as necessary until the system works fast enough.
Subscribe to the RSS feed at http://www.mysqlperformanceblog.com and read its historical articles too. It's a hugely useful resource for performance-related wisdom. For example, you asked about InnoDB vs. MyISAM. Their conclusion: InnoDB has ~30% higher performance than MyISAM on average, though there are also a few usage scenarios where MyISAM outperforms InnoDB.
InnoDB vs. MyISAM vs. Falcon benchmarks - part 1
The authors of that blog are also co-authors of "High Performance MySQL," the book mentioned by @Andrew Barnett.
Re comment from @ʞɔıu: How to tell whether you're I/O bound versus CPU bound versus memory bound is platform-dependent. The operating system may offer tools such as ps, iostat, vmstat, or top, or you may have to get a third-party tool if your OS doesn't provide one.
Basically, whichever resource is pegged at 100% utilization/saturation is likely to be your bottleneck. If your CPU load is low but your I/O load is at its maximum for your hardware, then you are I/O bound.
That's just one data point, however. The remedy may also depend on other factors. For instance, a complex SQL query may be doing a filesort, and this keeps I/O busy. Should you throw more/faster hardware at it, or should you redesign the query to avoid the filesort?
There are too many factors to summarize in a StackOverflow post, and the fact that many books exist on the subject supports this. Keeping databases operating efficiently and making best use of the resources is a full-time job requiring specialized skills and constant study.
Jeff Atwood just wrote a nice blog article about finding bottlenecks in a system:
The Computer Performance Shell Game
Go buy "High Performance MySQL" from O'Reilly. It's almost 700 pages on the topic, so I doubt you'll find a succinct answer on SO.
It's hard to broadbrush things, but a moderately high-level view is possible.
You need to evaluate read:write ratios. For tables with ratios lower than about 5:1, you will probably benefit from InnoDB, because then inserts won't block selects. But if you aren't using transactions, you should change innodb_flush_log_at_trx_commit to 2 to get performance back over MyISAM.
Look at the memory parameters. MySQL's defaults are very conservative and some of the memory limits can be raised by a factor of 10 or more on even ordinary hardware. This will benefit your SELECTs rather than INSERTs.
MySQL can log things like queries that aren't using indexes, as well as queries that just take too long (the threshold is user-definable).
The query cache can be useful, but you need to instrument it (i.e. see how much it is actually used). Cacti can do that, as can Munin.
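To make the memory and query-cache points concrete, a hedged sketch (the my.cnf values are placeholders, not recommendations; the status variables are real):

    -- my.cnf idea (values are illustrative only; tune to your RAM):
    --   key_buffer_size         = 256M   -- MyISAM index cache
    --   innodb_buffer_pool_size = 1G     -- InnoDB data + index cache
    --   query_cache_size        = 32M

    -- Instrumenting the query cache from the mysql console:
    SHOW GLOBAL STATUS LIKE 'Qcache%';
    -- A hit count that stays near zero, or lots of Qcache_lowmem_prunes,
    -- means the cache isn't earning its keep.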
Application design is also important:
Lightly caching frequently fetched but smallish datasets will make a big difference (i.e. a cache lifetime of a few seconds).
Don't re-fetch data that you already have to hand.
Multi-step storage can help with a high volume of inserts into tables that are also busily read. The basic idea is that you have one table for ad-hoc inserts (INSERT DELAYED can also be useful), plus a batch process that moves the rows, within MySQL, from there to where all the reads are happening. There are variations of this; see the sketch below.
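A sketch of the two-table idea (all names are hypothetical; note that INSERT DELAYED only works with MyISAM):

    -- Ad-hoc inserts land in a cheap staging table:
    INSERT DELAYED INTO hits_staging (url, hit_at)
        VALUES ('/home', NOW());

    -- A periodic batch job moves them to the reporting table:
    INSERT INTO hits_reporting (url, hit_at)
        SELECT url, hit_at FROM hits_staging;
    TRUNCATE TABLE hits_staging;
    -- In practice you'd guard against rows arriving between those two
    -- statements, e.g. by using RENAME TABLE to swap in an empty
    -- staging table first.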
Don't forget that perspective and context are important, too: what you might think is a long time for an UPDATE to happen might actually be quite trivial if that "long" update only happens once a day.
There are tons of best practices that have been discussed previously, so there is no reason to repeat them. For actual concrete advice on what to do, I would try running MySQL Tuner. It's a Perl script that you can download and then run on your database server; it will give you a bunch of statistics on how your database is performing (e.g. cache hits), along with concrete recommendations for which issues or config parameters should be adjusted to improve performance.
While these statistics are all available in MySQL itself, I find that this tool presents them in a much easier-to-understand fashion. Note that YMMV with respect to the recommendations, but I have found them to generally be pretty accurate. Just make sure that you have done a good job of exercising the database with realistic traffic beforehand.