Large JSON Storage - json

Summary
What is the "best practice" way to store large JSON arrays on a remote web service?
Background
I've got a service, "service A", that generates JSON objects ("items") no larger than 1 KiB. Every time it emits an item, the item needs to be appended to a JSON array. Later, a user can retrieve all of these arrays of items, which can be tens of MiB or more.
Performance
What is the best way to store JSON so that appending and retrieval are performant? Ideally, insertion would be O(1) and retrieval would be fast enough that we don't need to tell the user to wait for their files to download.
The downloads have never been so large that the constraint is the time to transfer them from the server (even as a 10 MiB file). The constraint has always been the time to compute the file.
Stack
Our current stack is Django + PostgreSQL running on Elastic Beanstalk. New services are acceptable (e.g. S3, if append were supported).
Attempted Solutions
When we try to store all JSON in a single row in the database, performance is understandably slow.
When we try to store each JSON object in a separate row, it takes too long to aggregate the separate rows into a single array of items. In addition, a user requests all item arrays in their account every time they visit the main screen of the app, so it is inefficient to recompute the aggregated array of items each time.
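For concreteness, the "separate rows" variant looks roughly like this when the aggregation is pushed into Postgres itself with json_agg, so only one pre-built array crosses the wire (a sketch only; the items table and its account_id / created_at / payload jsonb columns are illustrative names, not the actual schema):

from django.db import connection

def get_item_array(account_id):
    # Let Postgres build the JSON array server-side instead of aggregating in Python.
    with connection.cursor() as cursor:
        cursor.execute(
            """
            SELECT coalesce(json_agg(payload ORDER BY created_at), '[]'::json)
            FROM items
            WHERE account_id = %s
            """,
            [account_id],
        )
        # Depending on the driver, this is a JSON string or an already-parsed list.
        return cursor.fetchone()[0]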

Related

How to index a 1 billion row CSV file with Elasticsearch?

Imagine you had a large CSV file - let's say 1 billion rows.
You want each row in the file to become a document in elastic search.
You can't load the file into memory - it's too large, so it has to be streamed or chunked.
The time taken is not a problem. The priority is making sure ALL data gets indexed, with no missing data.
What do you think of this approach:
Part 1: Prepare the data
Loop over the CSV file in batches of 1k rows
For each batch, transform the rows into JSON and save them into a smaller file
You now have 1m files, each with 1000 lines of nice JSON
The filenames should be incrementing IDs. For example, running from 1.json to 1000000.json
Part 2: Upload the data
Start looping over each JSON file and reading it into memory
Use the bulk API to upload 1k documents at a time
Record the success/failure of the upload in a result array
Loop over the result array and if any upload failed, retry
The steps you've mentioned above look good. A couple of other things will make sure ES does not come under load:
From what I've experienced, you can increase the bulk request size to a greater value as well, say somewhere in the range of 4k-7k documents (start with 7k and, if it causes pain, experiment with smaller batches; going lower than 4k probably won't be needed).
Ensure refresh_interval is set to a large value. This will ensure that the documents are not made searchable too frequently during the load. IMO the default value will also do; read more in the Elasticsearch documentation.
As the above comment suggests, it'd be better to start with a smaller batch of data. Of course, if you use constants instead of hardcoding the values, tuning becomes easier.
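To make that concrete, here is a rough sketch of the upload loop under those suggestions, assuming the elasticsearch-py client (exact keyword arguments differ between client versions), an index that already exists, and a CSV with a header row:

import csv
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # placeholder address
INDEX = "rows"

def actions(path):
    # Stream the CSV so the full file never sits in memory.
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            yield {"_index": INDEX, "_id": i, "_source": row}

# Refresh less often while bulk loading (restore the setting afterwards).
es.indices.put_settings(index=INDEX, body={"index": {"refresh_interval": "30s"}})

failed = []
for ok, result in helpers.streaming_bulk(
    es, actions("big.csv"), chunk_size=5000, raise_on_error=False
):
    if not ok:
        failed.append(result)  # collect failures for a retry pass

print(f"{len(failed)} documents need a retry")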

How to store historic time series data

We're storing a bunch of time-series data from several measurement devices.
All devices may provide different dimensions (energy, temp, etc.)
Currently we're using MySQL to store all this data in different tables (according to the dimension) in the format
idDevice, DateTime, val1, val2, val3
We're also aggregating this data from min -> Hour -> Day -> Month -> Year each time we insert new data
This runs quite well, but we're running out of disk space as we grow, and in general I doubt that an RDBMS is the right answer for keeping archive data.
So we're thinking of moving old/cold data to Amazon S3 and writing some fancy getter that can retrieve this data.
So here's my question: what would be a good data format to support the following needs?
The data must be extensible: once in a while a device will provide more data than in the past, so the number of rows/columns can grow.
The data must be updatable. When a customer delivers historic data, we need to be able to update past values.
We're using PHP -> would be nice to have connectors/classes :)
I've had a look at HDF5, but it seems there is no PHP lib.
We're also willing to have a look at cloud-based time-series databases.
Thank you in advance!
B
You might consider moving to a dedicated time-series database. I work for InfluxDB and our product meets most of your requirements right now, although it is still pre-1.0 release.
We're also aggregating this data from min -> Hour -> Day -> Month -> Year each time we insert new data
InfluxDB has built-in tools to automatically downsample and expire data. All you do is write the raw points and set up a few queries and retention policies; InfluxDB handles the rest internally.
The data must be extensible: once in a while a device will provide more data than in the past, so the number of rows/columns can grow.
As long as historic writes are fairly infrequent, they are no problem for InfluxDB. If you frequently write non-sequential data, write performance can slow down, but only while the non-sequential points are being replicated.
InfluxDB is not quite schema-less, but the schema cannot be pre-defined, and is derived from the points inserted. You can add new tags (metadata) or fields (metrics) simply by writing a new point that includes them, and you can automatically compose or decompose series by excluding or including the relevant tags when querying.
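As an illustration of that point (the measurement, tag and field names below are made up, and the 0.9-era HTTP write endpoint is my recollection rather than something from this thread), adding a tag or field is just a matter of writing a point that carries it:

import requests

# Two points in InfluxDB line protocol; the second simply adds a "room" tag and a
# "temp" field, with no schema change required beforehand.
points = [
    "energy,device=dev42 value=1.5 1434055562000000000",
    "energy,device=dev42,room=basement value=1.7,temp=21.3 1434055622000000000",
]

# Writes are a plain HTTP POST of line-protocol text to the /write endpoint.
requests.post("http://localhost:8086/write",
              params={"db": "metrics"},
              data="\n".join(points))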
The data must be updatable. When a customer delivers historic data, we need to be able to update past values.
InfluxDB silently overwrites points when a new matching point comes in. (Matching means same series and timestamp, to the nanosecond)
We're using PHP -> would be nice to have connectors/classes :)
There are a handful of PHP libraries out there for InfluxDB 0.9. None are officially supported but likely one fits your needs enough to extend or fork.
You haven't specified your requirements in enough detail.
Do you care about latency? If not, just write all your data points to per-interval files in S3, then periodically collect them and process them. (No Hadoop needed, just a simple script downloading the new files should usually be plenty fast enough.) This is how logging in S3 works.
The really nice part about this is you will never outgrow S3 or do any maintenance. If you prefix your files correctly, you can grab a day's worth of data or the last hour of data easily. Then you do your day/week/month roll-ups on that data, then store only the roll-ups in a regular database.
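A small sketch of that layout with boto3 (the bucket name and key scheme are made up): writers upload one object per device per interval, and the roll-up job lists a whole hour by prefix.

import datetime
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-metrics-bucket"  # placeholder

def put_interval(device_id, points, ts=None):
    # One object per device per hour, keyed so that prefixes select time ranges.
    ts = ts or datetime.datetime.utcnow()
    key = ts.strftime(f"raw/%Y/%m/%d/%H/{device_id}.json")
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(points))

def read_hour(ts):
    # Grab everything written in a given hour; a day is just a shorter prefix.
    # (list_objects_v2 returns at most 1000 keys per call; paginate for bigger hours.)
    prefix = ts.strftime("raw/%Y/%m/%d/%H/")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    for obj in resp.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        yield from json.loads(body)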
Do you need the old data at high resolution? You can use Graphite to roll up your data automatically. The downside is that it loses resolution as the data ages. But the upside is that your data is a fixed size and never grows, and writes can be handled quickly. You can even combine the above approach and send data to Graphite for quick viewing, but keep the data in S3 for other uses down the road.
I haven't researched the various TSDBs extensively, but here is a nice HN thread about it. InfluxDB is nice, but new. Cassandra is more mature, but the tooling to use it as a TSDB isn't all there yet.
How much new data do you have? Most tools will handle 10,000 datapoints per second easily, but not all of them can scale beyond that.
I'm with the team that develops Axibase Time-Series Database. It's a non-relational database that allows you to efficiently store timestamped measurements with various dimensions. You can also store device properties (id, location, type, etc) in the same database for filtering and grouped aggregations.
ATSD doesn't delete raw data by default. Each sample takes 3.5+ bytes per time:value tuple. Period aggregations are performed at request time, and the list of functions includes MIN, MAX, AVG, SUM, COUNT, PERCENTILE(n), STANDARD_DEVIATION, FIRST, LAST, DELTA, RATE, WAVG, WTAVG, as well as some additional functions for computing threshold violations per period.
Backfilling historical data is fully supported except that the timestamp has to be greater than January 1, 1970. Time precision is milliseconds or seconds.
As for deployment options, you could host this database on AWS. It runs on most Linux distributions. We could run some storage efficiency and throughput tests for you if you want to post sample data from your dataset here.

socket.io performance one emit per database row

I am trying to understand the best way to read and send a huge number of database rows (50K-100K) to the client.
Should I simply read all the rows at once from the database on the backend and then send them all in JSON format? This isn't very responsive, since the user just waits for a long time, but it is faster for a small number of rows.
Should I stream the rows from the database and call socket.emit() for each row I read? This causes too many socket emits; it is more responsive, but slow...
I am using node.js, socket.io
Rethink the Interface
First off, a user interface design that shows 50-100k rows on a client is probably not the best user interface in the first place. Not only is that a large amount of data to send down to the client and for the client to manage (and perhaps impractical on some mobile devices), but it's obviously far more rows than any single user is actually going to read in any given interaction with the page. So, the first order of business might be to rethink the user interface design and create some sort of more demand-driven interface (paged, virtual scroll, keyed by letter, etc...). There are lots of different possibilities for a different (and hopefully better) user interface design that lessens the amount of data transferred. Which design would be best depends entirely upon the data and the likely usage models of the user.
Send Data in Chunks
That said, if you were going to transfer that much data to the client, then you're probably going to want to send it in chunks (groups of rows at a time). The idea with chunks is that you send a consumable amount of data in one chunk such that the client can parse it, process it, show the results and then be ready for the next chunk. The client can stay active the whole time since it has cycles available between chunks to process other user events. But sending it in chunks reduces the overhead of sending a separate message for each single row. If your server is using compression, then chunks give compression a greater chance to be efficient too. How big a chunk should be (i.e. how many rows of data it should contain) depends upon a bunch of factors and is best determined through experimentation with likely clients or the lowest-powered client you expect. For example, you might want to send 100 rows per message.
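A minimal sketch of the chunking idea (the emit callback stands in for whatever socket.io-style send function you use, and 100 rows per message is just a starting point to tune):

def send_in_chunks(rows, emit, chunk_size=100):
    # Group rows into fixed-size batches so the client can parse and render each
    # batch before the next one arrives.
    chunk, sent = [], 0
    for row in rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            emit("rows", chunk)
            sent += len(chunk)
            chunk = []
    if chunk:
        emit("rows", chunk)
        sent += len(chunk)
    emit("rows_done", {"count": sent})  # tell the client the stream is complete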
Use an Efficient Transfer Format for the Data
And, if you're using socket.io to transfer large amounts of data, you may want to revisit how you use the JSON format. For example, sending 100,000 objects that all repeat exactly the same property names is not very efficient. You can often invent your own optimizations that avoid repeating property names that are exactly the same in every object. For example, rather than sending 100,000 of these:
{"firstname": "John", "lastname": "Bundy", "state": "Az", "country": "US"}
if every single object has the exact same properties, then you can either code the property names into your own code or send the property names once and then just send a comma separated list of values in an array that the receiving code can put into an object with the appropriate property names:
["John", "Bundy", "Az", "US"]
Data size can sometimes be reduced by 2-3x by simply removing redundant information.
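For example, a small helper (a sketch; it assumes every record has the same keys in the same order) that sends the column names once and the values as plain arrays:

def compact(records):
    # Turn a list of identically-shaped dicts into one header plus rows of values.
    if not records:
        return {"columns": [], "rows": []}
    columns = list(records[0])
    return {"columns": columns,
            "rows": [[r[c] for c in columns] for r in records]}

# compact([{"firstname": "John", "lastname": "Bundy", "state": "Az", "country": "US"}])
# -> {"columns": ["firstname", "lastname", "state", "country"],
#     "rows": [["John", "Bundy", "Az", "US"]]}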

redis as write-back view count cache for mysql

I have a very high throughput site for which I'm trying to store "view counts" for each page in a mySQL database (for legacy reasons they must ultimately end up in mySQL).
The sheer number of views makes it impractical to issue SQL statements like "UPDATE ITEM SET VIEW_COUNT=VIEW_COUNT+1" on every hit. There are millions of items, but most are only viewed a small number of times, while others are viewed many times.
So I'm considering using Redis to gather the view counts, with a background thread that writes the counts to mySQL. What is the recommended method for doing this? There are some issues with the approach:
how often does the background thread run?
how does it determine what to write back to mySQL?
should I store a Redis KEY for every ITEM that gets hit?
what TTL should I use?
is there already some pre-built solution or powerpoint presentation that gets me halfway there, etc.
I have seen very similar questions on StackOverflow but none with a great answer...yet! Hoping there's more Redis knowledge out there at this point.
I think you need to step back and look at some of your questions from a different angle to get to your answers.
"how often does the background thread run?"
To answer this you need to answer these questions: How much data can you lose? What is the reason for the data being in MySQL, and how often is that data accessed? For example, if the DB is only needed to be consulted once per day for a report, you might only need it to be updated once per day. On the other hand, what if the Redis instance dies? How many increments can you lose and still be "ok"? These will provide the answers to the question of how often to update your MySQL instance and aren't something we can answer for you.
I would use a very different strategy for storing this in redis. For the sake of the discussion let us assume you decide you need to "flush to db" every hour.
Store each hit in hashes with a key name structure along these lines:
interval_counter:DD:HH
interval_counter:total
Use the page id (such as MD5 sum of the URI, the URI itself, or whatever ID you currently use) as the hash key and do two increments on a page view; one for each hash. This provides you with a current total for each page and a subset of pages to be updated.
You would then have your cron job run a minute or so after the start of the hour to pull down all pages with updated view counts by grabbing the previous hour's hash. This provides you with a very fast means of getting the data to update the MySQL DB with, while avoiding any need to do math or play tricks with timestamps. By pulling data from a key which is no longer being incremented you avoid race conditions due to clock skew.
You could set an expiration on the daily key, but I'd rather use the cron job to delete it when it has successfully updated the DB. This means your data is still there if the cron job fails or fails to be executed. It also provides the front-end with a full set of known hit counter data via keys that do not change. If you wanted, you could even keep the daily data around to be able to do window views of how popular a page is. For example if you kept the daily hash around for 7 days by setting an expire via the cron job instead of a delete, you could display how much traffic each page has had per day for the last week.
Executing the two hincr operations, whether solo or pipelined, still performs quite well and is more efficient than doing calculations and munging data in code.
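For example, with redis-py the per-hit path reduces to a small pipeline (key names follow the pattern above; page_id is whatever identifier you already use):

import datetime

import redis

r = redis.Redis()

def record_hit(page_id):
    now = datetime.datetime.utcnow()
    hour_key = now.strftime("interval_counter:%d:%H")   # e.g. interval_counter:27:08
    pipe = r.pipeline()
    pipe.hincrby(hour_key, page_id, 1)                  # the subset the cron will flush
    pipe.hincrby("interval_counter:total", page_id, 1)  # running total for the front end
    pipe.execute()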
Now for the question of expiring the low-traffic pages vs. memory use. First, your data set doesn't sound like one which will require huge amounts of memory. Of course, much of that depends on how you identify each page. If you have a numerical ID, the memory requirements will be rather small. If you still wind up with too much memory use, you can tune it via the config, and if need be could even use a 32-bit compile of Redis for a significant memory reduction. For example, the data I describe in this answer was used to manage one of the ten busiest forums on the Internet and it consumed less than 3GB. I also stored the counters in far more "temporal window" keys than I am describing here.
That said, in this use case Redis is the cache. If you are still using too much memory after the above options, you could set an expiration on keys and add an expire command to each hit. More specifically, if you follow the above pattern you will be doing the following per hit:
hincr -> total
hincr -> daily
expire -> total
This lets you keep anything that is actively used fresh by extending its expiration every time it is accessed. Of course, to do this you'd need to wrap your display call to catch a null answer for hget on the totals hash and populate it from the MySQL DB, then increment. You could even do both as an increment. This would preserve the above structure and would likely be the same codebase needed to repopulate the Redis server from the MySQL DB if the Redis node needed repopulation. For that you'll need to consider and decide which data source will be considered authoritative.
You can tune the cron job's performance by modifying the interval in accordance with the data-integrity parameters you determined from the earlier questions. To get a faster-running cron job you decrease the window. With this method, decreasing the window means you should have a smaller collection of pages to update. A big advantage here is you don't need to figure out what keys you need to update and then go fetch them: you can do an hgetall and iterate over the hash's keys to do the updates. This also saves many round trips by retrieving all the data at once. In either case, you will likely want to consider a second Redis instance slaved to the first to do your reads from. You would still do deletes against the master, but those operations are much quicker and less likely to introduce delays in your write-heavy instance.
If you need disk persistence of the Redis DB, then certainly put that on a slave instance. Otherwise if you do have a lot of data being changed often your RDB dumps will be constantly running.
I hope that helps. There are no "canned" answers, because to use Redis properly you need to think first about how you will access the data, and that differs greatly from user to user and project to project. Here I based the route taken on this description: two consumers accessing the data, one for display only and the other for updating another data source.
Consolidation of my other answer:
Define a time interval in which the transfer from Redis to MySQL should happen, e.g. minute, hour or day. Define it in a way that an identifying key can be obtained quickly and easily. This key must be ordered, i.e. an earlier time should give a smaller key.
Let it be hourly and the key be YYYYMMDD_HH for readability.
Define a prefix like "hitcount_".
Then for every time-interval you set a hash hitcount_<timekey> in redis which contains all requested items of that interval in the form ITEM => count.
There are two parts to the solution:
The actual page that has to count:
a) get the current $timekey, e.g. via date functions
b) get the value of $ITEM
c) send the redis command HINCRBY hitcount_$timekey $ITEM 1
A cronjob which runs within that interval, not too close to the interval boundary (for example, not exactly on the full hour). This cronjob (sketched after the steps below):
a) Extracts the current time-key (for now it would be 20130527_08)
b) Requests all matching keys from redis with KEYS hitcount_* (those should be a small number)
c) Compares every such key against the current hitcount_<timekey>
d) If that key is smaller than the current key, process it as $processing_key:
read all pairs ITEM => counter with HGETALL $processing_key as $item, $cnt
update the database with UPDATE ITEM SET VIEW_COUNT=VIEW_COUNT+$cnt WHERE ITEM=$item
delete that key from the hash by HDEL $processing_key $item
no need to del the hash itself - there are no empty hashes in redis as far as I tried
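A rough sketch of that cronjob with redis-py (the MySQL update is reduced to a placeholder so the Redis flow stays in focus):

import datetime

import redis

r = redis.Redis()

def update_view_count(item, cnt):
    # Placeholder for: UPDATE ITEM SET VIEW_COUNT = VIEW_COUNT + %s WHERE ITEM = %s
    ...

def flush_old_hitcounts():
    current_key = "hitcount_" + datetime.datetime.utcnow().strftime("%Y%m%d_%H")
    for key in r.keys("hitcount_*"):            # few keys: one per unflushed hour
        key = key.decode()
        if key >= current_key:                  # skip the interval still being written
            continue
        for item, cnt in r.hgetall(key).items():
            update_view_count(item.decode(), int(cnt))
            r.hdel(key, item)                   # drop each field once it is persisted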
If you want a TTL involved, say because the cleanup cronjob may not be reliable (it might not run for many hours), then you could have the cronjob create the future hashes with an appropriate TTL. For now that would mean creating hash 20130527_09 with a TTL of 10 hours, 20130527_10 with a TTL of 11 hours, and 20130527_11 with a TTL of 12 hours. The problem is that you would need a pseudo-key, because empty hashes seem to be deleted automatically.
See EDIT3 for the current state of the answer.
I would write a key for every ITEM. A few tens of thousands of keys are definitely no problem at all.
Do the pages change very much? I mean do you get a lot of pages that will never be called again? Otherwise I would simply:
add the value for an ITEM on page request.
Every minute or every 5 minutes, run a cronjob that reads the redis keys, reads each value (say 7) and reduces it with DECRBY ITEM 7. In MySQL you then increment the value for that ITEM by 7.
If you have a lot of pages/ITEMs which will never be called again, you could run a cleanup job once a day to delete keys with value 0. This should be locked against incrementing that key again from the website.
I would set no TTL at all, so the values live forever. You could check the memory usage, but with today's GBs of memory a lot of different pages will fit.
EDIT: incr is very nice for that, because it sets the key if not set before.
EDIT2: Given the large number of different pages, instead of the slow "KEYS *" command you could use HASHES with hincrby (http://redis.io/commands/hincrby). Still, I am not sure if HGETALL is much faster than KEYS *, and a hash does not allow a TTL for individual fields.
EDIT3: Oh well, sometimes the good ideas come late. It is so simple: just prefix the key with a timeslot (say day-hour), or make a HASH with a name like "count_<timeslot>". Then no overlap of delete and increment can happen! Every hour you take the keys with older "day_hour_*" values, update MySQL and delete those old keys. The only condition is that your servers' clocks are not too far apart, so use UTC and synchronized servers, and don't start the cron at x:01 but at x:20 or so.
That means: a called page converts a call of ITEM1 at 23:37, May 26 2013 to Hash 20130526_23, ITEM1. HINCRBY count_20130526_23 ITEM1 1
One hour later the list of keys count_* is checked, and everything up to the previous hour's key is processed (read the key's values with hgetall, update MySQL), with each field deleted after processing (hdel). After finishing that you check whether hlen is 0 and, if so, del the count_... hash.
So you only have a small number of keys (one per unprocessed hour), which makes KEYS count_* fast, and then you process the actions of that hour. You can give a TTL of a few hours if your cron is delayed, time-jumped, or down for a while, or something like that.

Using Memcache as a counter for multiple objects

I have a photo-hosting website, and I want to keep track of views to the photos. Due to the large volume of traffic I get, incrementing a column in MySQL on every hit incurs too much overhead.
I currently have a system implemented using Memcache, but it's pretty much just a hack.
Every time a photo is viewed, I increment its photo-hits_uuid key in Memcache. In addition, I add a row containing the uuid to an invalidation array also stored in Memcache. Every so often I fetch the invalidation array, and then cycle through the rows in it, pushing the photo hits to MySQL and decrementing their Memcache keys.
This approach works and is significantly faster than directly using MySQL, but is there a better way?
I did some research and it looks like Redis might be my solution. It seems like it's essentially Memcache with more functionality - the most valuable to me is listing, which pretty much solves my problem.
There is a way that I use.
Method 1: (Size of a file)
Every time someone hits the page, I append one more byte to a file. Then after x seconds or so (I use 600), I count how many bytes are in the file, delete the file, and then add the count to the MySQL database. This also allows scalability if multiple servers are appending to a small file on a cache server. Use fwrite to append to the file and you will never have to read that cache file.
Method 2: (Number stored in a file)
Another method is to store a number in a text file that contains the number of hits, but I recommend against using this because if two processes were updating it simultaneously, the data might be off (maybe the same applies to method 1).
I would use method 1 because, although the file is bigger, it is faster.
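Method 1 in code might look roughly like this (a toy sketch; the counter path and the MySQL hook are placeholders, and hits arriving between the size check and the delete can be lost, as noted above):

import os

def record_hit(counter_path):
    # Hot path: append one byte, never read the file.
    with open(counter_path, "ab") as f:
        f.write(b".")

def flush_hits(counter_path, update_mysql):
    # Periodic job: the file size is the hit count since the last flush.
    try:
        hits = os.path.getsize(counter_path)
    except FileNotFoundError:
        return
    os.remove(counter_path)   # hits appended between getsize() and remove() are lost
    if hits:
        update_mysql(hits)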
I'm assuming you're keeping access logs on your server for this solution (a rough sketch follows the steps below).
Keep track of the last time you checked your logs.
Every n seconds or so (where n is less than the time it takes for your logs to be rotated, if they are), scan through the latest log file, ignoring every hit until you find a timestamp after your last check time.
Count how many times each image was accessed.
Add each count to the count stored in the database.
Store the timestamp of the last log entry you processed for next time.
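A rough sketch of that loop (the log-line format and regex are assumptions; adapt them to whatever your server actually writes, and pass a timezone-aware last_checked):

import re
from collections import Counter
from datetime import datetime

# Assumed combined-log-style lines; adjust the pattern to your actual log format.
LINE_RE = re.compile(r'\[(?P<ts>[^\]]+)\] "GET (?P<path>/photos/\S+)')

def count_new_hits(log_path, last_checked):
    hits = Counter()
    newest = last_checked
    with open(log_path) as f:
        for line in f:
            m = LINE_RE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
            if ts <= last_checked:
                continue                 # already counted on a previous run
            hits[m.group("path")] += 1
            newest = max(newest, ts)
    # Add `hits` to the DB counters and persist `newest` for the next run.
    return hits, newest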