I have built an app which does random stuff, and I want to collect various statistics to display in graphs. My problem is that I'm not sure how to store the data in a database other than writing each log entry into a new row, which seems very inefficient.
Example Data (5 minute averages):
Number of Users
Number of Online Users
Number of Actions
What would be the best way to store this information? Do I need a separate table for each thing I'm logging, or could they all go into one table?
Often you don't need the full-resolution data kept for all time, and you can re-process it periodically into lower-resolution data to save space. For example, you could store one day of full resolution (5-minute averages), but periodically re-average that data into 1-hour bins, 1-day bins, 1-month bins, etc., while also culling the older data.
This allows you to have the data you need to display nice graphs of the activity over different time ranges (hour, day, week, month, etc) while limiting the number of rows to just what your application requires.
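For illustration, here is a rough sketch of that periodic re-averaging job, written in Python with the built-in sqlite3 module so it runs standalone. The table and column names (stats_5min, stats_hourly, ts, users, users_online, actions) are made up for the example, and in MySQL you would use DATE_FORMAT(ts, '%Y-%m-%d %H:00:00') and NOW() - INTERVAL 1 DAY instead of the SQLite date functions:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Hypothetical schema: one row per 5-minute sample.
    cur.execute("""CREATE TABLE stats_5min (
                       ts           TEXT,     -- e.g. '2024-01-01 10:05:00'
                       users        INTEGER,
                       users_online INTEGER,
                       actions      INTEGER)""")
    cur.execute("""CREATE TABLE stats_hourly (
                       hour         TEXT,
                       users        REAL,
                       users_online REAL,
                       actions      REAL)""")

    # Periodic job: re-average everything older than one day into hourly
    # bins, then cull the full-resolution rows.
    cur.execute("""INSERT INTO stats_hourly (hour, users, users_online, actions)
                   SELECT strftime('%Y-%m-%d %H:00:00', ts),
                          AVG(users), AVG(users_online), AVG(actions)
                   FROM stats_5min
                   WHERE ts < datetime('now', '-1 day')
                   GROUP BY 1""")
    cur.execute("DELETE FROM stats_5min WHERE ts < datetime('now', '-1 day')")
    conn.commit()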
There are also excellent applications to store and display time-series data. MRTG and RRDTool come to mind. See this tutorial for good food for thought:
rrdtutorial
So I am trying to work out a database design to keep track of views, likes and downloads. The number of entries to track is expected to be 1M or more, so normally I would just track each entry daily, but at 1M I have concerns about performance and maybe even size on the hard disk.
The customer's wish is to be able to show top statistics for ranges like last week, last month and last year. So I am not sure whether I should roll the data up, adding the daily numbers into weeks or months and deleting everything that is no longer relevant, or keep it more flexible by tracking all of it and being able to query the needed statistics freely.
Database: MySQL
I have to save the counters once a day.
I think you need to track it daily, then at the end of the week roll the daily stats up and delete the daily rows; do the same for the weekly data the following week, and so on for months and the year, depending on how far back you want to keep statistics. You can create a scheduled job to do this. I hope this helps.
Since the operation only appends data, store it in a time-series format.
Whenever click/view data comes in, count it and store the result at the smallest dimension you can. If you can summarize it hourly, that's the best choice and good for performance. Whenever you need data at a higher dimension, just sum the smaller buckets up. Don't compute the statistic on demand; for example, scanning 10 million rows just to count one day's data is a heavy operation.
With this approach you gain two things:
Storage: you can archive older data past some threshold (for example, older than 3 months), so the database size stays compact.
Performance: reports read small summary rows instead of scanning millions of raw rows.
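As a concrete sketch of "count into the smallest bucket, sum up for larger ranges", here is a small example using Python's built-in sqlite3 module (the hourly_counts table and item_id column are just illustrative; the upsert would be INSERT ... ON DUPLICATE KEY UPDATE in MySQL, and the SQLite form shown needs SQLite 3.24+):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # One counter row per item per hour bucket, the smallest dimension we keep.
    cur.execute("""CREATE TABLE hourly_counts (
                       item_id INTEGER,
                       hour    TEXT,     -- e.g. '2024-01-01 13'
                       views   INTEGER NOT NULL DEFAULT 0,
                       PRIMARY KEY (item_id, hour))""")

    def record_view(item_id, hour):
        # Count at write time into the hourly bucket.
        cur.execute("""INSERT INTO hourly_counts (item_id, hour, views)
                       VALUES (?, ?, 1)
                       ON CONFLICT(item_id, hour) DO UPDATE SET views = views + 1""",
                    (item_id, hour))

    def views_for_day(item_id, day):
        # Higher dimensions (day/week/month) are just sums over the hourly
        # rows, so nothing ever scans millions of raw click rows on demand.
        cur.execute("""SELECT COALESCE(SUM(views), 0) FROM hourly_counts
                       WHERE item_id = ? AND hour LIKE ?""", (item_id, day + "%"))
        return cur.fetchone()[0]

    record_view(42, "2024-01-01 13")
    record_view(42, "2024-01-01 13")
    record_view(42, "2024-01-01 17")
    print(views_for_day(42, "2024-01-01"))   # 3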
We have a website with many users. To track users who transacted on a given day, we use Redis and store a string of bits as the value. For instance, if our system had five users, and users 2 and 5 transacted on 2nd January, our key for 2nd January would look like '01001'. This also helps us determine unique users over a given period and new users using simple bit operations. However, with a growing number of users, we are running out of memory to store all these keys.
Is there any alternative database that we can use to store the data in a similar manner? If not, how should we store the data to get similar performance?
Redis' memory usage can be affected by many parameters, so I would also try looking at INFO ALL for starters.
With every user represented by a bit, 400K daily visitors should take at least 50KB per value, but due to sparsity in the bitmap that could be much larger. I'd also suspect that since newer users are more active, the majority of your bitmaps' "active" flags are towards their ends, causing each bitmap to approach its maximal size (i.e. one bit per registered user). So the question you should be trying to answer is how to store these 400K visits efficiently without sacrificing the functionality you're using. That really depends on what you're doing with the recorded visits.
For example, if you're only interested in total counts, you could consider using the HyperLogLog data structure to count your transacting users with a low error rate and small memory/resources footprint. On the other hand, if you're trying to track individual users, perhaps keep a per user bitmap mapped to the days since signing up with your site.
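For example, a minimal sketch of the HyperLogLog approach with redis-py (it assumes a local Redis instance, and the txn:<date> key naming is just an example):

    import redis

    r = redis.Redis()  # assumes a local Redis instance

    def record_transaction(user_id, day):
        # A HyperLogLog key stays around 12KB regardless of cardinality
        # (~0.81% error), instead of a bitmap that grows with the user base.
        r.pfadd(f"txn:{day}", user_id)

    def unique_transacting_users(*days):
        # Approximate count of distinct users across one or more days.
        return r.pfcount(*[f"txn:{d}" for d in days])

    record_transaction(2, "2024-01-02")
    record_transaction(5, "2024-01-02")
    record_transaction(5, "2024-01-02")            # duplicates don't inflate the count
    print(unique_transacting_users("2024-01-02"))  # ~2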
Furthermore, there are bitmap compression techniques that you could consider implementing in your application code/Lua scripting/hacking Redis. The best answer would depend on what you're trying to do of course.
My application tracks clicks from adverts shown on remote sites, and redirects users to a product sales page.
I'm currently using MySQL to store click information (date, which link was used, ip address, custom data sent from the advertiser etc). The table is getting so big that it no longer fits our needs, which are:
High throughput (the app is processing 5 - 10M clicks per day and this is projected to grow)
Ability to report on the data by date range (e.g. how many clicks for link 1 over the past month grouped by country)
My initial idea was to move clicks into Redis (we only need to store them for 30 days, at which point they expire if they don't lead to a sale) and then make a new MySQL table to store generated stats by day, where we just update a counter per link when it's clicked.
When we started using the statistics table, the database quickly fell over because of the number of queries hitting that table.
Would it be best to keep the clicks in Redis and have a separate MongoDB (or other NoSQL DB) for the reporting? Or could Mongo be used to store the whole click record (just as we've been doing in MySQL), or is the volume too high?
Also I remember reading that MongoDB is not good at reclaiming space from deleted records, would this cause us issues since 90% of the clicks would be deleted after 30 days anyway?
Thanks
MongoDB alone is enough to solve this problem, compared to storing in Redis and then moving to MongoDB. Since the amount of data is very large, you can create indexes on the timestamp or on fields with high cardinality. That makes your queries fast, and MongoDB also provides aggregation, which helps in generating the reports. I don't think there is any issue with deletion.
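For example, with pymongo (the database, collection and field names here are made up, and the TTL index is one way to get the 30-day expiry mentioned in the question without manual deletes):

    from datetime import datetime, timedelta, timezone
    from pymongo import MongoClient

    client = MongoClient()            # assumes a local mongod
    clicks = client.tracking.clicks   # hypothetical database/collection

    # Raw clicks expire automatically after 30 days via a TTL index.
    clicks.create_index("ts", expireAfterSeconds=30 * 24 * 3600)
    # Compound index to support per-link, date-range reports.
    clicks.create_index([("link_id", 1), ("ts", 1)])

    clicks.insert_one({"link_id": 1, "country": "DE",
                       "ts": datetime.now(timezone.utc)})

    # Report: clicks for link 1 over the past month, grouped by country.
    since = datetime.now(timezone.utc) - timedelta(days=30)
    pipeline = [
        {"$match": {"link_id": 1, "ts": {"$gte": since}}},
        {"$group": {"_id": "$country", "clicks": {"$sum": 1}}},
        {"$sort": {"clicks": -1}},
    ]
    for row in clicks.aggregate(pipeline):
        print(row)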
I have a database called RankHistory that is populated daily with each user's username and rank for the day (rank as in 1,2,3,...). I keep logs going back 90 days for every user, but my user base has grown to the point that the MySQL database holding these logs is now in excess of 20 million rows.
This data is recorded solely for the use of generating a graph showing how a user's rank has changed for the past 30 days. Is there a better way of doing this than having this massive database that will keep growing forever?
How great is the need for historic data in this case? My first thought would be to truncate data older than a certain threshold, or move it to an archive table that doesn't require as frequent or fast access as your current data.
You also mention keeping 90 days of data per user, but the data is only used to show a graph of changes to rank over the past 30 days. Is the extra 60 days' data used to look at changes over previous periods? If it isn't strictly necessary to keep that data (or at least not keep it in your primary data store, as per my first suggestion), you'd neatly cut the quantity of your data by two-thirds.
Do we have the full picture, though? If you have a daily record per user, and keep 90 days on hand, you must have on the order of a quarter-million users if you've generated over twenty million records. Is that so?
Update:
Based on the comments below, here are my thoughts: If you have hundreds of thousands of users, and must keep a piece of data for each of them, every day for 90 days, then you will eventually have millions of pieces of data - there's no simple way around that. What you can look into is minimizing that data. If all you need to present is a calculated rank per user per day, and assuming that rank is simply a numeric position for the given user among all users (an integer between 1 - 200000, for example), storing twenty million such records should not put unreasonable strain on your database resources.
So, what precisely is your concern? Sheer data size (i.e. hard-disk space consumed) should be relatively manageable under the scenario above. You should be able to handle performance via indexes, to a certain point, beyond which the data truncation and partitioning concepts mentioned can come into play (keep blocks of users in different tables or databases, for example, though that's not an ideal design...)
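To make that concrete, here is a minimal sketch of the table shape and composite index (sqlite3 so it runs as-is; the column names are assumed, and "rank" is a reserved word in MySQL 8, so quote or rename it there):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Assumed shape of RankHistory: one small row per user per day.
    cur.execute("""CREATE TABLE RankHistory (
                       user_id INTEGER,
                       day     TEXT,
                       "rank"  INTEGER)""")

    # A composite index lets the per-user 90-day graph read only that user's
    # rows in day order instead of scanning 20M+ rows.
    cur.execute("CREATE INDEX idx_user_day ON RankHistory (user_id, day)")

    rows = cur.execute("""SELECT day, "rank" FROM RankHistory
                          WHERE user_id = ? AND day >= date('now', '-90 days')
                          ORDER BY day""", (1234,)).fetchall()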
Another possibility, though the specifics are somewhat beyond my realm of expertise: you seem to have an ideal candidate for an OLAP cube here. You have a fact (rank) that you want to view in the context of two dimensions (user and date). There are tools out there for managing this sort of scenario efficiently, even on very large datasets.
Could you run an automated task like a cron job that checks the database every day or week and deletes entries that are more than 90 days old?
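Something along these lines, run from cron, would do it (sketch only: the database file, table and day column names are assumed, and MySQL allows DELETE ... LIMIT directly, whereas the subquery form below is the SQLite way):

    import sqlite3

    conn = sqlite3.connect("rankhistory.db")   # hypothetical database
    cur = conn.cursor()

    # Purge rows older than 90 days in small batches so the table isn't
    # locked for long; schedule this script daily or weekly from cron.
    while True:
        cur.execute("""DELETE FROM RankHistory
                       WHERE rowid IN (SELECT rowid FROM RankHistory
                                       WHERE day < date('now', '-90 days')
                                       LIMIT 10000)""")
        conn.commit()
        if cur.rowcount == 0:
            break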
Another option: can you create some "roll-up" aggregate per user based on whatever the criteria are (counts, sales, whatever), all stored by user + date of activity? Then you could keep your pre-aggregated roll-ups in a much smaller table for however far back in history you need. Triggers or nightly procedures can run a query for the day and append the results to the daily summary. Then your queries and graphs can go against that without running into performance issues. This would also make it easier to move such records to a historical archive database.
-- uh... oops... that's what it sounded like you WERE doing and STILL had 20 million+ records... is that correct? That would mean you're dealing with about 220,000+ users???
20,000,000 records / 90 days = about 222,222 users
EDIT -- from feedback.
Having 222k+ users, I would seriously consider how important "ranking" is when you have someone in 222,222nd place. I would pare the daily ranking down to, say, the top 1,000. Again, I don't know the importance, but if someone doesn't make the top 1,000, does it really matter?
I've created an affiliate system that tracks leads and conversions. The lead and conversion records will run into the millions, so I need a good way to store them. Users will need to track the stats hourly, daily, weekly and monthly.
What's the best way to store the leads and conversions?
For this type of system, you need to keep all of the detail records. Reason being at some point someone is going to contest an invoice.
However, you should also have some roll-up tables. Each hour, compute the current hour's totals and store the results. Do the same daily, weekly, and monthly.
If some skew is okay, you can compute the daily amounts off of the 24 hourly records, and the weekly amounts off of the last 7 daily records. For monthly you might want to compute back off of the hourly records, because a month doesn't add up to exactly 4 full weeks. It also helps reduce noise from any averaging you might be doing.
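For example, the daily-from-hourly step of those roll-ups might look like this (sqlite3 for a self-contained sketch; the table and column names are made up, and the detail lead/conversion records stay in their own table):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Hypothetical roll-up tables.
    cur.execute("""CREATE TABLE conversions_hourly (
                       hour        TEXT,     -- e.g. '2024-01-01 13'
                       link_id     INTEGER,
                       conversions INTEGER,
                       PRIMARY KEY (hour, link_id))""")
    cur.execute("""CREATE TABLE conversions_daily (
                       day         TEXT,
                       link_id     INTEGER,
                       conversions INTEGER,
                       PRIMARY KEY (day, link_id))""")

    # Nightly job: yesterday's daily row is just the sum of its 24 hourly rows.
    cur.execute("""INSERT INTO conversions_daily (day, link_id, conversions)
                   SELECT substr(hour, 1, 10), link_id, SUM(conversions)
                   FROM conversions_hourly
                   WHERE substr(hour, 1, 10) = date('now', '-1 day')
                   GROUP BY 1, 2""")
    conn.commit()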
I'd recommend a two-step archival process. The first step should run once a day and move the records into a separate "hot" database; try to keep 3 months hot for any research queries you need to do.
The second archive process is up to you. You could move any records older than 3 months into some kind of CSV file and back it up. After some period of time (a year?), delete them, depending on your data retention agreements.
Depending on the load, you may need multiple web servers handling the firing of the lead and conversion pixels. One option is to store the raw data records on each web/MySQL server, and then run an archival process every 5-10 minutes that moves them into a highly normalized table structure and performs any required roll-ups to achieve the performance you are looking for.
Make sure you keep the row size as small as possible: store IPs as unsigned ints, store referrers as INTs that reference lookup tables, etc.
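For example, packing IPv4 addresses into 4-byte unsigned ints in application code (MySQL's INET_ATON/INET_NTOA do the same conversion server-side):

    import ipaddress

    def ip_to_uint(ip):
        # 4-byte unsigned int instead of a string of up to 15 characters.
        return int(ipaddress.IPv4Address(ip))

    def uint_to_ip(n):
        return str(ipaddress.IPv4Address(n))

    print(ip_to_uint("192.168.0.1"))   # 3232235521
    print(uint_to_ip(3232235521))      # 192.168.0.1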