I am trying to learn HTML5's IndexedDB with Mozilla's tutorial, Using IndexedDB.
I understand that IndexedDB is an object store implementation, but all the examples I have tried store simple objects with key:value pairs. How would I save nested or hierarchical objects, for example a parent object that has a list of child objects? What is the best way to deal with complex object structures in IndexedDB?
I know the OOP representation or the XML representation of parent-child objects.
How would I achieve this in IndexedDB? Any tutorial or other source would be very helpful.
They are only storing key:value pairs. But how would I save a nested object?
What is a nested object? You can store any object that can be represented as JSON (or, more correctly, anything serializable by the structured clone algorithm). Is that what you mean by a nested object? You can convert any OOP object graph into JSON and reconstruct it on the way back. For XML, just store the serialized string.
If you mean relationships, that is a different question. I have written a bit about IndexedDB relationships. Modeling a relationship in IndexedDB is not a problem; in fact, it is supported very well.
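To illustrate the point that a nested parent/child structure is just a serializable tree, here is a minimal round-trip sketch. It is written in Scala with play-json only because later questions in this thread use that library, and the Parent/Child names are made up; in the browser you would keep the same shape as a plain JavaScript object and store it in a single object store entry with objectStore.put(), since the structured clone algorithm handles the nesting for you.

    import play.api.libs.json._

    // Hypothetical parent/child shape: a parent object holding a list of children.
    case class Child(id: Int, name: String)
    case class Parent(id: Int, title: String, children: Seq[Child])

    object NestedJsonExample extends App {
      // Macro-generated formats handle the nesting automatically.
      implicit val childFormat: OFormat[Child]   = Json.format[Child]
      implicit val parentFormat: OFormat[Parent] = Json.format[Parent]

      val parent = Parent(1, "order-42", Seq(Child(10, "line A"), Child(11, "line B")))

      val json: JsValue = Json.toJson(parent) // one nested JSON tree, no flattening needed
      val back: Parent  = json.as[Parent]     // the object graph reconstructed from JSON

      println(Json.prettyPrint(json))
      println(back == parent)                 // true
    }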
Related
I want to use Apache Avro schemas for data serialization and deserialization.
I want to use them with JSON encoding.
I want to put several of these serialized objects, using different schemas, into the same "source" (a Kafka topic).
When I read them back, I need to be able to resolve the right schema for each data entry.
But the serialized data doesn't carry any schema information, and testing every possible schema for compatibility (a kind of duck-typing approach) would be unclean and error-prone (for data that fits multiple schemas it would be unclear which one to pick).
I'm currently thinking about programmatically putting the namespace and record name inside the JSON data. But such a solution would not be part of the Avro standard, and it would open a new error scenario where the wrong schema namespace and/or record name could end up inside the data.
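A minimal sketch of that envelope idea, shown here in Scala with play-json; the field names "schema" and "payload" are arbitrary choices of mine, not part of any Avro standard:

    import play.api.libs.json._

    object EnvelopeExample extends App {
      // Wrap an already JSON-encoded Avro record with its schema's full name.
      def wrap(schemaFullName: String, avroJson: JsValue): JsValue =
        Json.obj("schema" -> schemaFullName, "payload" -> avroJson)

      // Consumer side: read the full name first, then decode "payload"
      // with the matching Avro schema.
      def schemaNameOf(envelope: JsValue): Option[String] =
        (envelope \ "schema").asOpt[String]

      val record   = Json.obj("id" -> 42, "email" -> "a@example.com")
      val envelope = wrap("com.example.User", record)

      println(Json.stringify(envelope))  // what would be written to the Kafka topic
      println(schemaNameOf(envelope))    // Some(com.example.User)
    }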
I'm wondering whether there is a better way, or whether there is a general flaw in my approach.
Background: I want to use this for Kafka messages but don't want to use the Schema Registry (I don't want a new single point of failure). I also still want KSQL support, which is only available for the JSON format or for Avro with the Schema Registry.
I have a huge flat JSON string with some 1000+ fields. I want to restructure the JSON into a nested/hierarchical structure based on certain business logic, without doing a lot of object-to-JSON or JSON-to-object conversions, so that performance is not affected.
What are the ways to achieve this in Scala?
Thanks in advance!
I suggest you have a look at the JSON transformers provided by the play-json library. They allow you to manipulate JSON (moving fields, creating nested objects) without doing any object mapping.
Check this out: https://www.playframework.com/documentation/2.5.x/ScalaJsonTransformers
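As a rough sketch of what such a transformer looks like (field names invented for illustration, assuming play-json as documented at the link above):

    import play.api.libs.json._
    import play.api.libs.functional.syntax._

    object TransformerExample extends App {
      val flat = Json.parse("""{ "name": "Jo", "street": "Main St", "city": "Berlin" }""")

      // Build { "name": ..., "address": { "street": ..., "city": ... } }
      // without mapping anything to case classes.
      val nest: Reads[JsObject] = (
        (__ \ "name").json.pickBranch and
        (__ \ "address").json.copyFrom(
          ((__ \ "street").json.pickBranch and (__ \ "city").json.pickBranch).reduce
        )
      ).reduce

      flat.transform(nest) match {
        case JsSuccess(nested, _) => println(Json.prettyPrint(nested))
        case JsError(errors)      => println(errors)
      }
    }

The transformer builds a new JsObject straight from the input JsValue, so the original flat street/city fields are simply not carried over into the result.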
I read JSON from a file, and I'm using SwiftyJSON for that.
My code is aware of the structure and will never access any wrong keys. However, it will access some keys a large number of times (I keep my app's strings there).
My question is: should I convert my data structure to an array when I read the JSON, or will the SwiftyJSON object be good enough?
Ideally you'll have a struct that can serialize/deserialize to/from JSON or [String: AnyObject].
If your question is primarily driven by performance considerations, I'd say it is good enough until proven otherwise.
Is there a proper way to store generic JSON in MongoDB? By 'generic' I mean any JSON, including hashes with keys that are restricted in MongoDB documents.
For example, we want to store JSON schemas, which use the key $ref, which is not allowed in a MongoDB document. This means that a JSON schema as such cannot be stored as a MongoDB document.
Is there a smart way around this? The only options I've come up with are to do back-and-forth deep key replacements or to store the schema as JSON text.
We're using Morphia, so the solution should be compatible with it.
The solutions you have already thought of are probably the best: store the schemas as JSON strings, then parse them back to JSON on retrieval.
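If you do need real documents rather than strings, the back-and-forth deep key replacement you mention is just a small recursive walk. A sketch, using play-json here for brevity (with Morphia you would apply the same recursion to the document's keys before saving and after loading); the U+FF04 escape character is one arbitrary choice:

    import play.api.libs.json._

    object KeyEscaping {
      // U+FF04 (fullwidth dollar sign) as a stand-in for '$' -- an arbitrary choice.
      private val Escaped = "\uff04"

      def escape(value: JsValue): JsValue =
        rewriteKeys(value, k => if (k.startsWith("$")) Escaped + k.drop(1) else k)

      def unescape(value: JsValue): JsValue =
        rewriteKeys(value, k => if (k.startsWith(Escaped)) "$" + k.drop(1) else k)

      // Walk the whole JSON tree and rewrite every object key.
      private def rewriteKeys(value: JsValue, f: String => String): JsValue = value match {
        case obj: JsObject => JsObject(obj.fields.map { case (k, v) => f(k) -> rewriteKeys(v, f) })
        case arr: JsArray  => JsArray(arr.value.map(rewriteKeys(_, f)))
        case other         => other
      }
    }

    object KeyEscapingDemo extends App {
      val schema = Json.parse("""{ "properties": { "child": { "$ref": "#/definitions/child" } } }""")
      val stored = KeyEscaping.escape(schema)    // no '$'-prefixed keys left, safe for a document
      val back   = KeyEscaping.unescape(stored)  // identical to the original schema
      println(back == schema)                    // true
    }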
I have run into a lot of trouble serializing/deserializing Scala data types to/from JSON objects and then storing/retrieving them in MongoDB in BSON form.
1st question: why does Play Framework use JSON while MongoDB uses BSON?
2nd question: if I am not wrong, JavaScript does not have readers and writers for serializing/deserializing BSON from MongoDB. How can that be? JavaScript handles JSON seamlessly, but for BSON I would expect it to need some sort of readers and writers.
3rd question: (I read somewhere that) Salat and ReactiveMongo use different mechanisms to talk to MongoDB. Why is that?
JSON is a widely used format for transferring data these days, so it is good to have it available out of the box in a web framework. That is why Play has it.
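For the serialization troubles mentioned in the question, Play's macro-generated formats usually cover plain case classes out of the box. A minimal sketch (the User type is made up for illustration):

    import play.api.libs.json._

    // Hypothetical Scala data type for the example.
    case class User(name: String, age: Int)

    object PlayJsonExample extends App {
      // One line gives both Reads and Writes for the case class.
      implicit val userFormat: OFormat[User] = Json.format[User]

      val json: JsValue        = Json.toJson(User("Ada", 36))   // {"name":"Ada","age":36}
      val user: JsResult[User] = Json.parse("""{"name":"Ada","age":36}""").validate[User]

      println(Json.stringify(json))
      println(user) // JsSuccess(User(Ada,36),)
    }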
Mongo uses it for the same reason: it is a good idea to store data in the same format the user queries and saves it in. So why does Mongo use BSON rather than JSON? BSON is essentially JSON with additional properties on every value: the data length and the data type. The reason is that when you scan a lot of data (as a DB query does), with plain JSON you have to read an entire object just to get to the next one; if you know the length of the data, you can skip over it.
So you just do not need BSON readers in JS (they may exist somewhere, but they are rarely used) because BSON is a format for use inside the DB.
You can read this article for more information.