Should I use JSONField or FileField to store JSON data?

I am wondering how I should store my JSON data to get the best performance and scalability.
I have two options:
The first one would be to use JSONField, which would probably give me an advantage in simplicity and performance when handling the data, since I don't have to read it out of a file each time.
My second option would be to store my JSON data in FileFields as .json files. This seems like the better option for scalability, since the large quantity of JSON wouldn't be stored in the database (only the location of the file), but maybe not for user-facing performance, since the file has to be read each time before the data can be displayed in the template.
Am I thinking about this reasonably? What is the best way to store JSON data so it can be reused as quickly as possible, without complicating the database or hurting scalability?

A JSONField will obviously give good performance because it can be indexed. A very useful feature is native data access: you don't have to load and parse the JSON before querying it, you can query keys directly through the model field. Since you have a huge amount of JSON data, a file might seem like the better option, but a file's only real advantage is storage.
Quoting from a random article found via a Google search:
A Postgres json field takes almost 11% extra space compared to the json file on your file system; in a test, a 268 MB (formatted) json file stored in a json field came out at 233 MB.
Storing the data in a file has some cons, including reading the file, parsing the JSON, and then querying it, which is time-consuming since these are disk-based operations. Scalability will not be an issue with a JSON field, although your database size will grow, so moving the data around might become harder for you.
So unless you are short on database space, you should choose JSONField.
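To make the difference concrete, here is a minimal Django sketch (model and field names are hypothetical) contrasting the two approaches: with JSONField you can filter on keys directly through the ORM, while with FileField every access means opening and parsing the file yourself.

# Hypothetical Django models illustrating the two options.
import json
from django.db import models

class ReportWithJSONField(models.Model):
    # JSON stored in the database; keys are queryable (and indexable on PostgreSQL).
    payload = models.JSONField(default=dict)

class ReportWithFileField(models.Model):
    # JSON stored as a file; only its location lives in the database.
    payload_file = models.FileField(upload_to="reports/")

# JSONField: the database performs the key lookup.
done_reports = ReportWithJSONField.objects.filter(payload__status="done")

# FileField: the file must be read and parsed on every access.
report = ReportWithFileField.objects.first()
if report is not None:
    with report.payload_file.open("r") as fh:
        data = json.load(fh)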

Related

What's stopping me from using a standalone JSON file instead of a local db?

I need to store data for a native mobile app I'm writing, and I was wondering: why do I need to bother with DB setup when I can just read/write a JSON file? All the interactions are basic and could most likely be handled by parsing JSON objects rather than running queries.
What are the advantages?
DBs are intended to work with standardized data or large data sets. If you know there are only a few properties to read and they don't change, JSON may be easier; but if you have a list of items, a DB can optimize queries with indexes and ensure consistency across multiple tables.
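As a rough illustration of that trade-off (file name, schema, and query below are made up): a handful of static properties is trivial to read from a JSON file, while a growing, frequently queried list of items benefits from an indexed table.

import json
import sqlite3

# A few rarely-changing properties: a plain JSON file is simple and sufficient.
with open("settings.json") as fh:           # hypothetical file
    settings = json.load(fh)
theme = settings.get("theme", "light")

# A list of items that grows and is queried: an indexed table scales better.
conn = sqlite3.connect("app.db")            # hypothetical local DB
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_items_name ON items (name)")
rows = conn.execute("SELECT id, name FROM items WHERE name = ?", ("widget",)).fetchall()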

How to store JSON in DB without schema

I have a requirement to design an app that stores JSON via a REST API. I don't want to put limitations on the JSON size (number of keys, etc.). I see that MySQL supports storing JSON, but we have to create a table/schema first and then store the records.
Is there any way to store JSON in some type of DB and query the data by keys?
EDIT: I don't want to use any in-memory DB like Redis.
Use ElasticSearch. In addition to schemaless JSON, it supports fast search.
The tagline of ElasticSearch is "You know, for search".
It is built on top of a text-indexing library called Apache Lucene.
The advantages of using ElasticSearch are:
Clusters scale to petabytes of data.
Fully open source, at no cost.
Enterprise support is available with a Platinum license.
Comes with additional benefits such as analytics via Kibana.
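As a minimal sketch of that workflow using the official Python client (index name and document contents are made up, and call signatures vary slightly between client versions):

# pip install elasticsearch  -- assumes the 8.x Python client and a local node
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index an arbitrary JSON document without defining a schema first;
# Elasticsearch infers a dynamic mapping from the fields it sees.
es.index(index="json-store", document={"user": "alice", "tags": ["a", "b"], "score": 42})

# Query by key/value.
hits = es.search(index="json-store", query={"match": {"user": "alice"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"])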
I believe a NoSQL database is the best solution, i.e. MongoDB. I have tested MongoDB; it looks good and has a Python module that makes it easy to interact with. For a quick overview of the pros, see https://www.studytonight.com/mongodb/advantages-of-mongodb
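For example, a short PyMongo sketch (database and collection names are made up) that stores arbitrary JSON documents and queries them by key:

# pip install pymongo  -- assumes a MongoDB instance on localhost
from pymongo import MongoClient

collection = MongoClient("mongodb://localhost:27017")["json_store"]["docs"]

# Insert arbitrary JSON documents; no schema has to be declared up front.
collection.insert_one({"user": "alice", "payload": {"score": 42, "tags": ["a", "b"]}})

# Query by nested key; an index on that path keeps lookups fast as data grows.
collection.create_index("payload.score")
print(collection.find_one({"payload.score": {"$gte": 40}}))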
I've had great results with Elasticsearch, so I second this approach as well. One question to ask yourself is how you plan to access the JSON data once it is in a repository like Elasticsearch: will you simply store the JSON doc, or will you flatten out the properties so that they can be aggregated individually? But yes, it is indeed fully scalable, whether by increasing your compute capacity via instance size, expanding your disk space, or implementing index sharding if you have billions of records in a single index.

Process and query a large amount of large files in JSON Lines format

Which technology would be best to import a large number of large JSON Lines format files (approx. 2 GB per file)?
I am thinking about Solr.
Once the data is imported, it will have to be queryable.
Which technology would you suggest to import and then query JSON Lines data in a timely manner?
You can start prototyping with whatever scripting language you prefer: read the lines, massage the format as needed to get valid Solr JSON, and send it to Solr via HTTP. That would be the fastest way to get going.
Longer term, SolrJ will allow you to get maximum performance (if you need it), since you can:
hit the leader replica in a SolrCloud environment directly
use multiple threads to ingest and send docs (you can also use multiple processes); this is not impossible with other technologies, but with some of them it is harder
use the full flexibility of the SolrJ API
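A rough Python prototype of the scripting approach described above (collection name, file path, and batch size are assumptions), reading a JSON Lines file in chunks and posting them to Solr's JSON update handler over HTTP:

# pip install requests  -- assumes a local Solr collection named "jsonlines"
import json
import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/jsonlines/update/json/docs"

def send_batch(docs):
    # The /update/json/docs handler accepts a JSON array of documents.
    # Committing per batch is simplistic; for real loads, commit once at the end.
    resp = requests.post(SOLR_UPDATE_URL, json=docs, params={"commit": "true"})
    resp.raise_for_status()

BATCH_SIZE = 1000
batch = []
with open("data.jsonl", "r", encoding="utf-8") as fh:   # hypothetical input file
    for line in fh:
        if line.strip():
            batch.append(json.loads(line))               # massage fields here if needed
        if len(batch) >= BATCH_SIZE:
            send_batch(batch)
            batch = []
if batch:
    send_batch(batch)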

Loading a JSON file vs querying MongoDB

This is a performance question: I created a web app (in Node.js) that loads a JSON file with around 10,000 records and then displays that data to the user. I'm wondering if it would be faster to use (for example) MongoDB (or another NoSQL database, such as CouchDB) instead? And how much faster would it be?
If you are looking for speed, JSON is quite specifically "not fast". JSON involves sending the keys along with the values, and it requires some heavy parsing on the receiving end. Reading the data from a file can also be slower than reading from the DB. I wouldn't like to say which is better, so you'll have to test it.
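If you do want to measure it, a quick (and admittedly crude) timing sketch along these lines would settle it for your data; the file name, database, and collection are placeholders, and it assumes the same ~10,000 records live in both places:

import json
import time
from pymongo import MongoClient

start = time.perf_counter()
with open("data.json") as fh:
    records = json.load(fh)
print(f"JSON file: {len(records)} records in {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
coll = MongoClient("mongodb://localhost:27017")["app"]["records"]
records = list(coll.find({}))
print(f"MongoDB:   {len(records)} records in {time.perf_counter() - start:.3f}s")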

Using mongodb to store a single but complex JSON object

I want to store a single, big, and complex JSON object in MongoDB, and I want to be able to retrieve and modify specific parts of it. A simple solution would be to store it in a single document, but I'm not sure how that would play with multiple write requests. Another option would be to keep every node of the JSON in a different document, kind of like a pattern explained here in the MongoDB documentation. That way I can retrieve only parts of the whole object and work on them.
My question is: do I gain anything from the latter approach? I'm kind of new to MongoDB, but from what I've read it takes a database-level lock on write requests, so it seems that taking my JSON apart like this would achieve nothing when it comes to scaling.
If you are considering storing data larger than 16 MB, you should definitely use some sort of hashing/splitting scheme, as MongoDB has a 16 MB size limit on its documents.
From MongoDB Limits and Thresholds:
The maximum BSON document size is 16 megabytes.
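A small sketch of the "one document per node" idea from the question (collection name and key scheme are my own choices): each top-level section of the big JSON object becomes its own document, keyed by its path, so individual parts can be fetched and updated without rewriting the whole object and without any single document approaching the 16 MB limit.

# Assumes pymongo and a local MongoDB; all names below are illustrative only.
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["app"]["json_parts"]

big_object = {
    "settings": {"theme": "dark"},
    "inventory": {"items": [1, 2, 3]},
    "profile": {"name": "alice"},
}

# Store each top-level section as its own document, keyed by its path.
for path, value in big_object.items():
    coll.replace_one({"_id": path}, {"_id": path, "value": value}, upsert=True)

# Read or modify just one section without touching the rest of the object.
inventory = coll.find_one({"_id": "inventory"})["value"]
coll.update_one({"_id": "settings"}, {"$set": {"value.theme": "light"}})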