I want to create a 'Google Analytics'-type application for the web - i.e. a web-based tool to do some reporting and graphing for my database. The problem is that the database is HUGE, so I can't run the queries in real time; they would take too long and the tool would be unresponsive.
How can I use a cron job to help me? What is the best way to make my graphs responsive? I think I will need to denormalize some of my database tables, but how do I make these queries faster? What intermediate values can I store in another database table to speed things up?
Thanks!
Business Intelligence (BI) is a pretty mature discipline - and you'll find answers to your questions in any book on scaling databases for reporting & data warehousing.
A high-level list of tactics would include:
partitioning (because indexes are of little help for most reporting)
summary tables (usually generated by a batch process submitted via cron - see the sketch at the end of this answer)
a good optimizer (some databases, like MySQL, don't have one, and so make poor join decisions)
query parallelism (some databases provide linear speedups just by splitting your query across multiple threads)
a star schema - a good data model is crucial to good performance
In general, dynamic reporting beats the pants off static reporting - so if you're after powerful reporting I'd copy the data into an appropriate model, use aggregates, and possibly change databases to get a good optimizer and the appropriate features, rather than running reports in batch.
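To make the summary-table tactic concrete, here's a minimal sketch of the kind of batch job you could run nightly from cron. Everything here is a hypothetical placeholder - the raw `page_views` table, the `daily_page_stats` summary table, and the connection settings - the point is just that your graphs then read a few precomputed rows instead of scanning millions.

```python
# refresh_summaries.py - run from cron, e.g.:
#   0 2 * * * /usr/bin/python /opt/reporting/refresh_summaries.py
# A sketch only: all table/column names and credentials are placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="report", password="secret", database="analytics"
)
cur = conn.cursor()

# Recompute yesterday's aggregates into a small summary table.
# Assumes daily_page_stats has PRIMARY KEY (day, page_id), so REPLACE
# makes the job safe to re-run.
cur.execute("""
    REPLACE INTO daily_page_stats (day, page_id, views, unique_visitors)
    SELECT DATE(viewed_at), page_id, COUNT(*), COUNT(DISTINCT visitor_id)
    FROM page_views
    WHERE viewed_at >= CURDATE() - INTERVAL 1 DAY
      AND viewed_at <  CURDATE()
    GROUP BY DATE(viewed_at), page_id
""")
conn.commit()
cur.close()
conn.close()
```

The web tool then charts straight out of daily_page_stats, which stays small no matter how big the raw table grows.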
As a beginner starting out in data analytics, I would like to know whether MySQL and BigQuery are similar (or different versions of the same thing), or whether I have confused two entirely different concepts.
Similarities
What the two have in common is that you can use SQL to query data stored in both MySQL and BigQuery.
Differences
The two technologies have completely different use cases, so their philosophy, design, and internal architecture differ as well.
You can use MySQL to store data for a transactional system, or OLTP. For example, if you have an e-commerce website, you can use a MySQL database to store data about users, orders, payments, and so on. You may have a lot of transactions per second, but each transaction usually involves only one or a few rows in your database. MySQL and other relational database engines are good for that: they use some form of normalization to make write operations efficient and keep data consistent.
Now imagine you need to analyze the data of your e-commerce website over the last 5 years. Such a query involves all of your entries (or rows), but usually only some of the columns. And you don't have the same number of queries per second as in the previous situation. You can see that the two workloads are different, and here MySQL is no longer an optimal choice; an OLAP system is. BigQuery is an example of an OLAP system. With BigQuery, you store data for analysis, not for operational purposes.
Now that you see the two technologies serve different purposes, you can understand the differences in their design and architecture. For example, with BigQuery you're encouraged to denormalize data to avoid expensive JOIN operations. Internally, BigQuery stores data by column rather than by row as MySQL does. These decisions share a common goal: making analytic queries run efficiently.
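To illustrate the two access patterns - a sketch with a made-up `orders` table, and the BigQuery client call is just one way to run the analytic query:

```python
# Sketch contrasting OLTP and OLAP query shapes; 'orders' and all
# connection settings are hypothetical.
import mysql.connector                 # OLTP side
from google.cloud import bigquery      # OLAP side (pip install google-cloud-bigquery)

# OLTP (MySQL): a point lookup by key. Touches one row, but the site
# may run thousands of these per second.
db = mysql.connector.connect(host="localhost", user="shop",
                             password="secret", database="shop")
cur = db.cursor()
cur.execute("SELECT status, total FROM orders WHERE order_id = %s", (42,))
print(cur.fetchone())

# OLAP (BigQuery): scans 5 years of rows but reads only two columns -
# exactly the pattern a column-oriented store is built for.
client = bigquery.Client()  # uses your default GCP project/credentials
job = client.query("""
    SELECT EXTRACT(YEAR FROM created_at) AS yr, SUM(total) AS revenue
    FROM `myproject.shop.orders`
    GROUP BY yr
    ORDER BY yr
""")
for row in job.result():
    print(row.yr, row.revenue)
```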
You can research OLTP vs. OLAP further :).
MySQL is a free RDBMS that runs everywhere; it is extremely popular, general purpose, really well supported, and extremely flexible.
BigQuery is a proprietary, Google-owned, Cassandra-like, expensive database that uses SQL but is more limited in features; on the other hand, it can be easier to scale for certain types of problems and is more deeply embedded in the Google ecosystem.
You should default to MySQL or Postgres unless you have a specific reason to use something like BigQuery; if you don't know which one to use, use MySQL or Postgres.
I am working on a project using the ZK Framework, Hibernate, Spring, and MySQL.
I need to generate some charts from the MySQL database, but when I counted the objects whose values I need to calculate for those charts, I found more than 1,400 objects, and just as many queries and transactions.
So I thought of using stored procedures in MySQL to calculate those values and save them in separate tables (an architecture close to a data warehouse), and then having my web application just read the values from those tables and display them as charts.
In terms of speed and performance, which of these methods is better?
Thank you!
No way to tell, really, without many more details. However:
What you want to do is called Denormalisation. This is a recognised technique for speeding up reporting and making it easier. (If it doesn't, your denormalisation has failed!) When it works it has the following advantages:
Reports run faster
Report code is easier to write
On the other hand:
Report data is out of date, containing only data as at the time you last did the calculations
An extreme form of doing this is to take the OLTP database (a standard database) and export it into an Analysis database (aka a Cube or an OLAP database).
One of the problems of Denormalisation is that a) it is usually a significant effort, b) it adds extra code which adds complexity and thus increases support costs, and c) it might not make enough (or any) difference. Because of this, it is usual not to do it until you know you have a problem. This will happen when you have done your reports on the basic database and have found that they either are too difficult to write and/or run too slowly. I would strongly suggest that only when you reach that point do you go for Denormalisation.
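If you do reach that point, the asker's stored-procedure idea is one reasonable shape for the denormalisation. A hedged sketch (every name here is invented): a procedure rebuilds a small precomputed table on a schedule, and the web application only ever reads that table.

```python
# Sketch of the precomputed-table approach; all names are hypothetical.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="charts")
cur = conn.cursor()

# One-time setup: a stored procedure that rebuilds the chart values.
# (Sent as a single statement, so no DELIMITER tricks are needed.)
cur.execute("DROP PROCEDURE IF EXISTS refresh_chart_stats")
cur.execute("""
    CREATE PROCEDURE refresh_chart_stats()
    BEGIN
        DELETE FROM chart_stats;
        INSERT INTO chart_stats (category, total, avg_value)
        SELECT category, COUNT(*), AVG(value)
        FROM big_source_table
        GROUP BY category;
    END
""")

# Run from a scheduler instead of issuing 1,400 live queries per page.
cur.callproc("refresh_chart_stats")
conn.commit()

# The ZK page then only ever needs this cheap read:
cur.execute("SELECT category, total, avg_value FROM chart_stats")
print(cur.fetchall())
```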
There can be times when you don't need to wait for that, but I've only seen one such case in over 25 years of development; and that decision was helped along by Management's desire to use an OLAP database for political purposes.
I am developing an application to provide daily dynamic information like prices, availability, etc. for around 50,000 objects. I need to store data for about the next 200 days, which means a total of 10 million rows. The prices will be batch updated and new data will be added once daily; say about 10,000 existing rows get updated and 50,000 rows are inserted each day. What is the best backend framework I can use?
Can MySQL scale with limited hardware capacity, or is a NoSQL database the way to go? If so, which NoSQL database would be best suited for fast fetching and updating of the data?
I would recommend Cassandra, as you need to write more than you read, and Cassandra is optimized for high write throughput.
It provides scalability, no single point of failure, and high throughput, and you can update records as well.
Cassandra also supports batch operations for DML (data manipulation language), i.e. insert, update, and delete, and its batch operations provide atomicity as well.
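For illustration, an atomic batch write with the DataStax Python driver could look like this (keyspace, table, and column types are assumptions):

```python
# Sketch using the DataStax driver (pip install cassandra-driver).
# Keyspace/table/column names and types are hypothetical; 'price'
# is assumed to be a double column.
import datetime
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement

session = Cluster(["127.0.0.1"]).connect("pricing")
insert = session.prepare(
    "INSERT INTO daily_prices (object_id, day, price) VALUES (?, ?, ?)"
)

# A logged batch is atomic: either every statement applies or none does.
batch = BatchStatement()
batch.add(insert, (101, datetime.date(2024, 1, 15), 19.99))
batch.add(insert, (102, datetime.date(2024, 1, 15), 4.50))
session.execute(batch)
```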
This type of volume is well within the capabilities/capacities of traditional RDBMS. I would say that if you are familiar with MySQL you will be safe to stick with it. A lot also depends on what kind of queries you want to run. With a properly structured, denormalized setup, you can run ad hoc queries in an RDBMS, whereas with document stores, you need to think quite carefully about structure up front -- embedding versus referencing, see: MongoDB relationships: embed or reference?. MongoDB has added a very nice aggregation framework, which goes a long way towards being able to query data as you would in an RDBMS, but in many other NoSQL systems, queries are essentially map-reduce jobs and joins are either painful or impossible.
It sounds like your data is structured around dates/days. One thing you can do that will yield dramatic speed improvements on queries is partitioning by date ranges. I have worked on MySQL databases of over 100 million rows where historical data had to be kept for auditing purposes but most of the reads/writes were on current data, and partitioning led to truly dramatic read-query improvements.
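For reference, date-range partitioning in MySQL could be set up roughly like this - a sketch, with a schema invented to match the question's numbers:

```python
# Sketch of date-range partitioning in MySQL; schema is hypothetical.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="pricing")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE daily_prices (
        object_id  INT NOT NULL,
        price_date DATE NOT NULL,
        price      DECIMAL(10, 2),
        PRIMARY KEY (object_id, price_date)  -- partition column must be in the PK
    )
    PARTITION BY RANGE (TO_DAYS(price_date)) (
        PARTITION p2024h1 VALUES LESS THAN (TO_DAYS('2024-07-01')),
        PARTITION p2024h2 VALUES LESS THAN (TO_DAYS('2025-01-01')),
        PARTITION pmax    VALUES LESS THAN MAXVALUE
    )
""")

# Queries bounded by date only touch the matching partitions
# (partition pruning) instead of all 10M rows.
cur.execute("""
    SELECT price_date, price
    FROM daily_prices
    WHERE object_id = %s AND price_date BETWEEN %s AND %s
""", (123, "2024-06-01", "2024-06-30"))
print(cur.fetchall())
```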
You might be interested in this link, which shows what some very high volume sites are using: What databases do the World Wide Web's biggest sites run on? Anecdotally, I know that Facebook had trillions of rows in MySQL across various clusters before they started hitting real bottlenecks, but it is no surprise that Cassandra ultimately came out of Facebook engineering, given the truly colossal data volumes they now handle.
Cassandra, Riak, CouchDB, MongoDB, etc all arose to solve very real problems, but these come with tradeoffs, both in terms of the CAP theorem, and in terms of ad hoc queries being more difficult than in RDBMS. Having said that, MongoDB and Cassandra (which I have most experience with) are easy to set up and fun to work with, so if you want to give them a go, I'm sure you will have no problems, but I would say your usage requirements are well within the capabilities of MySQL. Just my 2c.
I am building a data warehouse in the range of 15+ TB. While storage is cheap, due to a limited budget we have to squeeze as much data as possible into that space while maintaining performance and flexibility, since the data format changes quite frequently.
I tried Infobright (Community Edition) as a SQL solution, and it works wonderfully in terms of storage and performance, but its limitations on data/table alteration make it almost a no-go, and Infobright's pricing for the Enterprise Edition is quite steep.
After checking out MongoDB, it seems promising except for one thing. I was chatting with a 10gen guy, and he said they don't really give much thought to storage space, since they flatten out the data to achieve performance and flexibility, and in their opinion storage is too cheap nowadays to be bothered with.
So can any experienced Mongo users out there comment on its storage usage vs. MySQL (the standard we are comparing against right now)? If it's larger or smaller, can you give a rough ratio? I know it's very dependent on what sort of data you put in SQL and how you define the fields, indexing, and such, but I am just trying to get a general idea.
Thanks for the help in advance!
MongoDB is not optimized for small disk space - as you've said, "disk is cheap".
From what I've seen and read, it's pretty difficult to estimate the required disk space due to:
Padding of documents to allow in-place updates
Attribute names are stored in each document, so you might save quite a bit by using abbreviations
No built in compression (at the moment)
...
IMHO the general approach is to build a prototype, insert data, and see how much disk space your specific use case requires. The more realistically you can model your workload (inserts and updates), the better your estimate will be.
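That experiment is quick to run. A sketch with pymongo - database, collection, and document shape are all made up; substitute a realistic sample of your own data:

```python
# Sketch: measure MongoDB's actual storage cost for your documents.
# Names and document shape are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.warehouse_test
coll = db.events

# Insert a realistic sample (the document shape matters more than volume).
docs = [{"ts": i, "source": "sensor-%d" % (i % 50), "value": i * 0.1}
        for i in range(100000)]
coll.insert_many(docs)

# collStats reports space on disk, including padding and indexes.
stats = db.command("collstats", "events")
print("data size:    %d bytes" % stats["size"])
print("storage size: %d bytes" % stats["storageSize"])
print("index size:   %d bytes" % stats["totalIndexSize"])
```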
For more details see http://www.mongodb.org/display/DOCS/Excessive+Disk+Space as well.
Pros and Cons of MongoDB
For the most part, users seem to like MongoDB. Reviews on TrustRadius give the document-oriented database 8.3 out of 10 stars.
Some of the things that authenticated MongoDB users say they like about the database include its:
Scalability.
Readable queries.
NoSQL.
Change streams and graph queries.
A flexible schema for altering data elements.
Quick query times.
Schema-less data models.
Easy installation.
Users also have negative things to say about MongoDB. Some cons reported by authenticated users include:
User interface, which has a fairly steep learning curve.
Lack of joins, which can make some data retrieval projects difficult.
Occasional slowness in the cloud environment.
High memory consumption.
Poorly structured documentation.
Lack of built-in analytics.
Pros and Cons of MySQL
MySQL gets a slightly higher rating (8.6 out of 10 stars) on TrustRadius than MongoDB. Despite the higher rating, authenticated users still mention plenty of pros and cons of choosing MySQL.
Some of the positive features that users mention frequently include MySQL’s:
Portability that lets it connect to secondary databases easily.
Ability to store relational data.
Fast speed.
Excellent reliability.
Exceptional data security standards.
User-friendly interface that helps beginners complete projects.
Easy configuration and management.
Quick processing.
Of course, even people who enjoy using MySQL find features that they don’t like. Some of their complaints include:
Reliance on SQL, which creates a steeper learning curve for users who do not know the language.
Lack of support for full-text searches in InnoDB tables.
Occasional stability issues.
Dependence on add-on features.
Limitations on fine-tuning and common table expressions.
Difficulties with some complex data types.
MongoDB vs MySQL Performance
When comparing the performance of MongoDB and MySQL, you must consider how each database will affect your projects on a case-by-case basis. While some performance features may appear to be objectively promising, your team members may never use the features that drew you to a database in the first place.
MongoDB Performance
Many people claim that MongoDB outperforms MySQL because it allows them to create queries in multiple ways. To put it another way, MongoDB can be used without knowing SQL. While the flexibility improves MongoDB's performance for some organizations, SQL queries will suffice for others.
MongoDB is also praised for its ability to handle large amounts of unstructured data. Depending on the types of data you collect, this feature could be extremely useful.
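For example, a lookup that would be a SELECT in SQL is written directly in the driver's language; a sketch with invented collection and field names:

```python
# Sketch: querying MongoDB without SQL (pymongo); names are hypothetical.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").shop

# Roughly equivalent to:
#   SELECT name, total FROM orders
#   WHERE status = 'shipped' AND total > 100
#   ORDER BY total DESC
cursor = (db.orders
          .find({"status": "shipped", "total": {"$gt": 100}},
                {"name": 1, "total": 1, "_id": 0})
          .sort("total", -1))
for order in cursor:
    print(order)
```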
MongoDB does not bind you to a single vendor, giving you the freedom to improve its performance. If a vendor fails to provide you with excellent customer service, look for another vendor.
MySQL Performance
MySQL performs extremely well for teams that want an open-source relational database that can store information in multiple tables. The performance that you get, however, depends on how well you configure the MySQL database. Configurations should differ depending on the intended use. An e-commerce site, for example, might need a different MySQL configuration than a team of research scientists.
No matter how you plan to use MySQL, the database’s performance gets a boost from full-text indexes, a high-speed transactional system, and memory caches that prevent you from losing crucial information or work.
If you don’t get the performance that you expect from MySQL data warehouses and databases, you can improve performance by integrating them with an excellent ETL tool that makes data storage and manipulation easier than ever.
MySQL vs MongoDB Speed
In most speed comparisons between MySQL and MongoDB, MongoDB is the clear winner. MongoDB is much faster than MySQL at accepting large amounts of unstructured data. When dealing with large projects, it's difficult to say how much faster MongoDB is than MySQL. The speed you get depends on a number of factors, including the bandwidth of your internet connection, the distance between your location and the database server, and how well you organise your data.
If all else is equal, MongoDB should be able to handle large data projects much faster than MySQL.
Choosing Between MySQL and MongoDB
Whether you choose MySQL or MongoDB probably depends on how you plan to use your database.
Choosing MySQL
For projects that require a strong relational database management system, such as storing data in a table format, MySQL is likely to be the better choice. MySQL is also a great choice for cases requiring data security and fault tolerance. MySQL is a good choice if you have high-quality data that you've been collecting for a long time.
Keep in mind that to use MySQL, your team members will need to know SQL. You'll need to provide training to get them up to speed if they don't already know the language.
Choosing MongoDB
When you want to use data clusters and search languages other than SQL, MongoDB may be a better option. Anyone who knows how to code in a modern language will be able to get started with MongoDB. MongoDB is also good at scaling quickly, allowing multiple teams to collaborate, and storing data in a variety of formats.
Because MongoDB does not use data tables to make browsing easy, some people may struggle to understand the information stored there. Users can grow accustomed to MongoDB's document-oriented storage system over time.
For a bit of background - this question deals with a project running on a single small EC2 instance that is about to migrate to a medium one. The main components are Django, MySQL, and a large number of custom analysis tools written in Python and Java, which do the heavy lifting. The same machine is running Apache as well.
The data model looks like the following: a large amount of real-time data comes in, streamed from various networked sensors, and ideally I'd like to establish a long-poll approach rather than the current poll-every-15-minutes approach (a limitation of computing stats and writing into the database itself). Once the data comes in, I store the raw version in MySQL, let the analysis tools loose on this data, and store statistics in another few tables. All of this is rendered using Django.
Relational features I would need -
Order by [SliceRange in Cassandra's API seems to satisfy this]
Group by
Many-to-many relations between multiple tables [Cassandra SuperColumns seem to do well for one-to-many]
Sphinx on this gives me a nice full-text engine, so that's a necessity too. [On Cassandra, the Lucandra project seems to satisfy this need]
My major problem is that data reads are extremely slow (and writes aren't that hot either). I don't want to throw a lot of money and hardware on it right now, and I'd prefer something that can scale easily with time. Vertically scaling MySQL is not trivial in that sense (or cheap).
So essentially, after having read a lot about NoSQL and experimented with things like MongoDB, Cassandra and Voldemort, my questions are:
On a medium EC2 instance, would I gain any benefits in reads/writes by shifting to something like Cassandra? This article (pdf) definitely seems to suggest that. Currently, I'd say a few hundred writes per minute would be the norm. For reads - since the data changes every 5 minutes or so, cache invalidation has to happen pretty quickly. At some point, it should be able to handle a large number of concurrent users as well. The app performance currently gets killed by MySQL doing some joins on large tables even when indexes are created - on the order of 32k rows takes more than a minute to render. (This may be an artifact of EC2 virtualized I/O as well.) The tables are around 4-5 million rows each, and there are about 5 such tables.
Everyone talks about using Cassandra on multiple nodes, given the CAP theorem and eventual consistency. But for a project that is just beginning to grow, does it make sense to deploy a one-node Cassandra server? Are there any caveats? For instance, can it replace MySQL as a backend for Django? [Is this recommended?]
If I do shift, I'm guessing I'll have to rewrite parts of the app to do a lot more "administrivia" since I'd have to do multiple lookups to fetch rows.
Would it make any sense to just use MySQL as a key-value store rather than a relational engine, and go with that? That way I could utilize the large number of stable APIs available, as well as a stable engine (and go relational as needed). (Bret Taylor's post from FriendFeed on this - http://bret.appspot.com/entry/how-friendfeed-uses-mysql)
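For what it's worth, the FriendFeed pattern boils down to a schema like this. A hedged sketch - the table name and the JSON encoding are illustrative choices (FriendFeed itself used pickled blobs):

```python
# Sketch of MySQL as a key/value store (the FriendFeed pattern).
# Table name and serialization format are illustrative choices.
import json
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="kv")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS entities (
        id   VARCHAR(64) PRIMARY KEY,
        body BLOB NOT NULL  -- opaque serialized document
    )
""")

# Write: serialize the whole object under one key.
doc = {"sensor": "s-17", "readings": [3.2, 3.4], "updated": "2010-04-01"}
cur.execute("REPLACE INTO entities (id, body) VALUES (%s, %s)",
            ("s-17", json.dumps(doc).encode()))
conn.commit()

# Read: one primary-key lookup, then deserialize in the app.
cur.execute("SELECT body FROM entities WHERE id = %s", ("s-17",))
print(json.loads(cur.fetchone()[0]))
```

Schema changes then become application-level changes, at the cost of doing your own indexing (FriendFeed maintained separate index tables for that).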
Any insights from people who've done a shift would be greatly appreciated!
Thanks.
Cassandra and the other distributed databases available today do not provide the kind of ad-hoc query support you are used to from SQL. This is because you can't distribute queries with joins performantly, so the emphasis is on denormalization instead.
However, Cassandra 0.6 (beta officially out tomorrow, but you can build from the 0.6 branch yourself if you're impatient) supports Hadoop map/reduce for analytics, which actually sounds like a good fit for you.
Cassandra provides excellent support for adding new nodes painlessly, even to an initial group of one.
That said, at a few hundred writes/minute you're going to be fine on MySQL for a long, long time. Cassandra is much better at being a key/value store (even better, key/columnfamily), but MySQL is much better at being a relational database. :)
There is no Django support for Cassandra (or any other NoSQL database) yet. They are talking about doing something for the next version after 1.2, but based on talking to Django devs at PyCon, nobody is really sure what that will look like yet.
If you're a relational database developer (as I am), I'd suggest/point out:
Get some experience working with Cassandra before you commit to its use on a production system... especially if that production system has a hard deadline for completion. Maybe use it as the backend for something unimportant first.
It's proving more challenging than I'd anticipated to do simple things that I take for granted about data manipulation using SQL engines. In particular, indexing data and sorting result sets is non-trivial.
Data modelling has proven challenging as well. As a relational database developer you come to the table with a lot of baggage... you need to be willing to learn how to model data very differently.
These things said, I strongly recommend building something in Cassandra. If you're like me, then doing so will challenge your understanding of data storage and make you rethink a relational-database-fits-all-situations outlook you may not even realize you hold (I didn't realize I held it).
Some good resources I've found include:
Dominic Williams' Cassandra blog posts
Secondary Indexes in Cassandra
More from Ed Anuff on indexing
Cassandra book (not fantastic, but a good start)
"WTF is a SuperColumn" pdf
Django-cassandra support is in early beta. Also, Django wasn't made for NoSQL databases: the Django ORM is based on SQL (Django recommends using PostgreSQL). If you need to use ONLY NoSQL (you can mix SQL and NoSQL in the same app), you'll have to take the risk of using a NoSQL ORM (which is significantly slower than a traditional SQL ORM or than using the NoSQL storage directly), or you'll need to completely rewrite the Django ORM. But in that case I can't see why you'd need Django at all. Maybe you could use something else, like Tornado?