Best storage solution for data samples - MySQL

I'm developing a system that will collect user activity samples (opened a window, scrolled, entered a page, left a page, etc.) and I'm looking for the best way to store these samples and query them.
I'd prefer something smart where I can execute SQL-like GROUP BY queries (for example, give me all the window-open events grouped by date and hour), and of course something flexible enough in case I need to add columns in the future.
I'm trying to avoid having to anticipate every query I might need and saving a pre-aggregated version of the data by time, since I'd like to do drill-downs (for example, count all the window-open events by date and hour, then see every event in a given time frame, or regroup by unique userId).
Thanks.
PS - I currently use MySQL for this task, but the data is expected to grow rapidly. I've experimented with MongoDB as well.

I believe MongoDB can be a good solution. First of all, it's designed to hold big data, and it's really easy to use and scale (replica sets or sharding). The expression language is also solid. I mean, it's not as powerful as SQL, but still good enough. Here is a good link about mapping SQL commands to MongoDB.
There are other alternatives, but I think they are either too complex or their expression language is not powerful enough.
Have a look at this link too, which can help you find the right solution for you.


Business Intelligence: Live reports from MySQL

I wanted to create a (nearly) live dashboard from MySQL databases. I tried Power BI, SSRS and other similar tools, but they were not as fast as I wanted. What I have in mind is for the data to be updated every minute or even more frequently. Is that possible? And are there any free (or inexpensive) tools for this?
Edit: I want to build a wallboard to show some data on a big TV screen, and I need it to be real-time. I tried SSRS auto-refresh as well, but it shows a loading sign and is very slow; plus, Power BI uses Azure, which is very complex to configure and is blocked in my country.
This is a topic with many more layers than just asking which tool is best for this case.
You have to consider
Velocity
Veracity
Variety
Kind
Use Case
of the data. Sure, these terms are usually only recited when talking about Big Data, but they will give you a feeling for the size and complexity of the data.
Loading
Is the data loaded once and you "just" use it? Or do you also need to load it in real time or near-real time (for clarification, read this answer here)?
Polling/Pushing
Do you want to poll data every x seconds or minutes, or do you want to work event-based? What requirements make it necessary to show data this fast?
Use case
Do you want to show financial data? Do you need to show data from error and system logs of servers and applications? Do you want to generate insights as soon as a visitor of a webpage makes a request?
Conclusion
When thinking about those questions, keep in mind this should just be a hint to go into one direction or another. Depending on the data and the use case, you might use an ELK stack (for logs), Power BI (for financial data) or even some scripts (for billing).

Database for counting page accesses

So let's say I have a site with approx. 40,000 articles.
What I'm hoping to do is record the number of page visits per article over time.
Basically the end goal is to be able to visualize via graph the number of lookups for any article between any period of time.
Here's an example: https://books.google.com/ngrams
I've begun thinking about a MySQL data structure -> but my brain tells me it's probably not the right task for MySQL. It almost seems like I'd need some specific NoSQL analytics solution.
Could anyone advise which DB is the right fit for this job?
SQL is fine. It supports UPDATE statements that guarantee your count is correct, rather than just eventually consistent.
Although most people would just use a log file and process it on demand. Unless you are at Google scale, that will be fast enough.
There are many tools for this, often including some very efficient specialized data structures, such as Spark's RDDs, that you won't find in any database. Why don't you just use one of them?

How to: log and analyze clicks, pageviews and sessions to optimize conversion

We have a medium-size e-commerce site. We sell books. On said site we have promotions, user recommendations, regular book pages, related books, etcetera. Quite similar to amazon.com, except of course for the volume of the site.
We have a traditional LAMP setup, where the M still stands for MariaDB.
TPTB (the powers that be) want to log and analyze user behaviour in order to optimize conversion.
Bottom line, each click has to be logged, I think. (I fear)
This will add up to a few million clicks every month. The system has to be able to go back in time at least 3 years.
Questions that might be asked of the system are: given a page (e.g. the homepage) and clicks on a promotional banner, which colour of said banner gives the best conversion? Now split that question into new vs. returning customers (multi-dimensional or A/B testing). Or: given a view of book A and book B, which books do users buy next? The range of queries is going to be very wide, so pre-aggregating the data will be pointless.
I have serious doubts about MySQL's ability to provide a good platform for storing, analyzing and querying this data. We could store the rows, feeding them to MySQL via RabbitMQ so as to avoid delays, but querying and analyzing this data efficiently in MySQL might not work well, given 50M rows.
There have been a number of articles about using MongoDB to store analytical data. But all the posts seem to increment a counter in a document (pre-aggregating the data), which is not good enough for us.
The big question is: is there any database (or other system) that is particularly well suited to storing and analyzing data like this? Might MySQL still do the trick? Am I correct in my assessment that MongoDB probably won't be of any added value here?
If I understand correctly, you only want reports with aggregated data done, say, once a day (as opposed to "live")? If that's the case, I would suggest Hadoop, as it allows you to run massive Map/Reduce jobs that do these aggregations for you and then present you with a report. At this amount of data, any "live" solution will just not work.
If you don't want to mess with the complexity of Hadoop and Map/Reduce, then perhaps MongoDB might work. It has quite a powerful aggregation framework that can be tasked with many aggregations in a sort-of-live environment. It's not really meant to run at every pageview, but it's also not a "let's do this once a day" kind of thing. Whether the Aggregation Framework can help depends a little on your data aggregation requirements; if it can't, MongoDB also supports Map/Reduce for more complex tasks (at a slower pace). MongoDB is quite a good fit, as it gives you high write performance; if one node isn't enough, you can always shard for even higher write throughput.
If your primary concern is to offer recommendations based on past user choices, you may also consider a graph database like Neo4j or FlockDB.
Those databases would allow you to build relationships between buyers and the items they bought (which should be a lot less data to store, since you will have far fewer user data redundancies), which you can use for triadic-closure processes: in other words, finding out what similar users bought that user 'A' did not buy yet.
I cannot say I have done it yet, but I am also seriously looking into this.
Otherwise, MongoDB, in addition to the Map/Reduce paradigm, now (v2.4.6) has an Aggregation Pipeline Framework that I have found very powerful.

MongoDB, Mysql and relationships

I'm creating an online chat.
Context (if needed):
So far I have been using PHP/MySQL and AJAX to do the job, but this is not a healthy solution, as I'm stuck with a "pull"-type application and have concerns about scalability.
I read about the "push" method alternatives and it seems that my choices are limited and exclude PHP.
Websockets could be a very interesting option if they were integrated into every browser, but that's not the case (and it seems that most browsers implementing them have them disabled by default).
Long polling would also be a candidate, but it involves other issues, like the number of concurrent open connections, which may kill your web app too.
This is why, against my will, I think that my only viable option is to use server-side javascript (node.js + now.js would be my choice then).
This said, I may need to rethink the use of a database too.
I need to keep stored data for each user and link these users to their submitted messages.
In case of a chat engine driven by a push system, would MySQL still be a valuable choice then?
I read about NoSQL data management and it seems that MongoDB would be a good addition to node.js.
My two questions:
Is there a reason I'm better off moving to a NoSQL system (which I need to learn from scratch) instead of MySQL (which I know already) in case of a real time web app?
Let's say that in MySQL:
I have a table called user (user_id_p, username)
I have a table called messages (message_id, message, user_id_f)
I want to make a single query to get all the messages associated with the username "omgtheykilledkenny".
Simple enough but how can I achieve that with MongoDB and its collections philosophy?
Thank you for your help.
Working with node.js/MongoDB is cool because Mongo's document structure is already JSON-ish, so you don't have to convert your queries to JSON. If you already know JavaScript, you have a head start learning MongoDB. Mongo scales pretty easily for writes and reads, and the speed is pretty awesome, although I've seen some MySQL benchmarks on a single system that compare well to Mongo; it really shines when you start needing multiple boxes.
Assuming you have a separate messages collection, and you already know the id of the user, you could just do: db.messages.find({ user_id: ObjectId(...) });
Update: If you don't know the user id, then yes, you need to do two queries (unless you use an embedded array as recommended in the other answer; I would advise against that for this sort of use case, though, because you'll end up fetching the entire document/list of messages even to display just a subset). Depending on your use case, if you have the username you could also keep the user id handy for situations like this. If the username comes from client input, that wouldn't work.
Update2: If you have unique usernames, you could make the username the _id for the users collection to avoid this issue. Most people would probably advise against this, and it has some definite drawbacks, such as making it harder to change a username.
You can't perform joins in MongoDB, so you can't achieve your second requirement directly. The Mongo way to do this would be either to nest messages within the user collection:
{ username: 'abc', messages: [...]}
Or use DBRefs, which are a kind of halfway house between joins and nested documents:
http://uk3.php.net/manual/en/class.mongodbref.php
In terms of switching from MySQL to Mongo, you don't necessarily need to ditch MySQL entirely. There are use cases where one is more appropriate than the other. You could use both for different parts of the system if it's appropriate to do so. Personally, I've used MySQL for a lot of things in the past, and I'm using MongoDB for a big project at the moment. I found the move very easy to make, because it's so easy to use the MongoDB driver, and the MongoDB site is very good for documentation on the whole.
You can convert to and from JSON with json_encode and json_decode on the front end, and you query and insert/update using arrays with MongoDB's PHP driver, so it's arguably more intuitive and easier to use than MySQL. It's just a question of getting used to it.

Which is the right database for the job?

I am working on a feature and could use opinions on which database I should use to solve this problem.
We have a Rails application using MySQL. We have no issues with MySQL and it runs great. But for a new feature, we are deciding whether to stay with MySQL or not. To simplify the problem, let's assume there is a User and a Message model. A user can create messages. The message is delivered to other users based on their association with the poster.
Obviously there is an association based on friendship, but there are many, many more associations based on the user's profile. I plan to store some metadata about the poster along with the message. This way I don't have to pull the metadata each time I query the messages.
Therefore, a message might look like this:
{
  id: 1,
  message: "Hi",
  created_at: 1234567890,
  metadata: {
    user_id: 555,
    category_1: null,
    category_2: null,
    category_3: null,
    ...
  }
}
When I query the messages, I need to be able to query based on zero or more metadata attributes. This call needs to be fast and occurs very often.
Due to the number of metadata attributes and the fact any number can be included in a query, creating SQL indexes here doesn't seem like a good idea.
Personally, I have experience with MySQL and MongoDB. I've started research on Cassandra, HBase, Riak and CouchDB. I could use some help from people who might have done the research as to which database is the right one for my task.
And yes, the messages table can easily grow into millions of rows.
This is a very open-ended question, so all we can do is give advice based on experience. The first thing to consider is whether it's a good idea to use something you haven't used before, instead of MySQL, which you are familiar with. It's boring not to use shiny new things when you have the opportunity, but believe me, it's terrible when you've painted yourself into a corner because you thought the new toy would do everything it said on the box. Nothing ever works the way it says in the blog posts.
I mostly have experience with MongoDB. It's a terrible choice unless you want to spend a lot of time trying different things and realizing they don't work. Once you scale up a bit, you basically can't use things like secondary indexes, updates, and other things that make Mongo an otherwise awesomely nice tool (most of this has to do with its global write lock and the on-disk database format; it basically sucks at concurrency and fragments really easily if you remove data).
I don't agree that HBase is out of the question. It doesn't have secondary indexes, but you can't use those anyway once you get above a certain traffic load. The same goes for Cassandra (which is easier to deploy and work with than HBase). Basically, you will have to implement your own indexing whichever solution you choose.
What you should consider are things like whether you need consistency over availability, or vice versa (e.g. how bad is it if a message is lost or delayed vs. how bad is it if a user can't post or read a message), and whether you will do updates to your data (e.g. data in Riak is an opaque blob; to change it you need to read it and write it back, whereas in Cassandra, HBase and MongoDB you can add and remove properties without first reading the object). Ease of use is also an important factor: Mongo is certainly easy to use from the programmer's perspective, and HBase is horrible, but just spend some time making your own library that encapsulates the nasty stuff; it will be worth it.
Finally, don't listen to me, try them out and see how they perform and how it feels. Make sure you try to load it as hard as you can, and make sure you test everything you will do. I've made the mistake of not testing what happens when you remove lots of data in MongoDB, and have paid for that dearly.
I would recommend looking at the presentation Why databases suck for messaging, which mainly targets why you shouldn't use databases such as MySQL for messaging.
I think in this scenario CouchDB's changes feed may come in quite handy, although you would probably also have to create some more complex views based on querying message metadata. If speed is critical, also take a look at Redis, which is really fast and comes with pub/sub functionality. MongoDB, with its ad hoc query support, may also be a decent solution for this use case.
I think you're spot-on in storing metadata along with each message! Sacrificing storage for faster retrieval time is probably the way to go. Note that it could get complicated if you ever need to change a user's metadata and propagate that change to all of their messages. You should consider how often that might happen, whether you'll actually need to update all the message records, and based on that whether it's worth paying the price for the sake of fewer queries (it probably is, but that depends on the specifics of your system).
I agree with @Andrej_L that HBase isn't the right solution for this problem. Cassandra is out for the same reason.
CouchDB could solve your problem, but you're going to have to define views (materialized indices) for any metadata you're going to want to query. If the whole point of not using MySQL here is to avoid indexing everything, then Couch is probably not the right solution either.
Riak would be a much better option, since it queries your data using map/reduce. That allows you to build any query you like without the need to pre-index all your data as in Couch. Millions of rows are not a problem for Riak; no worries there. Should the need arise, it also scales very well by simply adding more nodes (and it can balance itself too, so this is really a non-issue).
So based on my own experience, I'd recommend Riak. However, unlike you, I have no direct experience with MongoDB, so you'll have to judge it against Riak yourself (or maybe someone else here can answer that).
From my experience, HBase is not a good solution for your application.
Because:
It doesn't have secondary indexes by default (you would have to install plugins or build something like that yourself), so you can effectively search only by primary key. I have implemented secondary indexes with HBase and additional tables, but you can't use that approach in an online application, because getting a result means running a map/reduce job, which takes a long time over millions of rows.
It's very difficult to support and tune this DB. To work effectively you would use HBase together with Hadoop, which requires powerful machines, or several of them.
HBase is very useful when you need to produce aggregation reports over big amounts of data. It seems that you don't.
Due to the number of metadata attributes and the fact any number can be included in a query, creating SQL indexes here doesn't seem like a good idea.
It sounds like you need a join, so you can mostly forget about CouchDB until they sort out the multiview code that was being worked on (I'm not actually sure it is still being worked on).
Riak can query as fast as you make it; it depends on the nodes.
Mongo will let you create an index on any field, even if that field is an array.
CouchDB is very different: it builds indexes using a stored map/reduce (but without the reduce) that they call a "view".
RethinkDB will give you SQL-like querying, but a little faster.
TokuDB (a MySQL storage engine) will too.
Redis will kill them all in speed, but it's entirely stored in RAM.
Single-level relations can be done in all of them, but differently for each.