JSON query matching

For the given input JSON:
{
"person": {
"name": "John",
"age": 25
},
"status": {
"title": "assigned",
"type": 3
}
}
I need to build a string query that I can use to answer whether the given JSON matches it or not. For example: the given person's name is "John", his age is in the range 20..30, and his status type is not 4.
I need the query to be expressed as a string, plus a commonly known library that can run it, and I need it on multiple platforms (iOS, Android, Xamarin). I've tried JSON Path and JSON Schema, but couldn't really figure out whether this can be achieved with them: JSON Path seems to be focused on finding a single value in the JSON by a certain condition, and JSON Schema mostly checks the data structure and types.

Ok, found the solution. If I format the whole input object as a single-element array like that:
[
{
"person": {
"name": "John",
"age": 25
},
"status": {
"title": "assigned",
"type": 3
}
}
]
That will allow me to use JSON Path expressions:
$[?(@.person.name == 'John' && @.person.age >= 20 && @.person.age <= 30 && @.status.type != 4)]
Basically, if the object doesn't match, the expression returns no result.
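For example, on Xamarin/.NET this kind of filter can be evaluated with Newtonsoft Json.NET's SelectTokens (a minimal sketch, assuming Json.NET; comparable JSONPath libraries exist for iOS and Android):

using System;
using System.Linq;
using Newtonsoft.Json.Linq;

class JsonMatch
{
    static void Main()
    {
        // The input object wrapped in a single-element array, as described above.
        string json = @"[ { ""person"": { ""name"": ""John"", ""age"": 25 },
                            ""status"": { ""title"": ""assigned"", ""type"": 3 } } ]";

        string query =
            "$[?(@.person.name == 'John' && @.person.age >= 20 && @.person.age <= 30 && @.status.type != 4)]";

        // SelectTokens returns the matching elements; an empty result means "no match".
        bool matches = JArray.Parse(json).SelectTokens(query).Any();
        Console.WriteLine(matches);   // True
    }
}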

Related

Pentaho Kettle: How to dynamically fetch JSON file columns

Background: I work for a company that basically sells passes. Every order that is placed by the customer will contain N number of passes.
Issue: I have these JSON event-transaction files coming into an S3 bucket on a daily basis from DocumentDB (MongoDB). Each JSON file is associated with the relevant type of event (insert, modify or delete) for every document key (which is an order in my case). The example below illustrates an "insert" type of event that came through to the S3 bucket:
{
"_id": {
"_data": "11111111111111"
},
"operationType": "insert",
"clusterTime": {
"$timestamp": {
"t": 11111111,
"i": 1
}
},
"ns": {
"db": "abc",
"coll": "abc"
},
"documentKey": {
"_id": {
"$uuid": "abcabcabcabcabcabc"
}
},
"fullDocument": {
"_id": {
"$uuid": "abcabcabcabcabcabc"
},
"orderNumber": "1234567",
"externalOrderId": "12345678",
"orderDateTime": "2020-09-11T08:06:26Z[UTC]",
"attraction": "abc",
"entryDate": {
"$date": 2020-09-13
},
"entryTime": {
"$date": 04000000
},
"requestId": "abc",
"ticketUrl": "abc",
"tickets": [
{
"passId": "1111111",
"externalTicketId": "1234567"
},
{
"passId": "222222222",
"externalTicketId": "122442492"
}
],
"_class": "abc"
}
}
As we see above, every JSON file might contain N passes, and every pass is, in turn, associated with an external ticket id, which is a different column (as seen above). I want to use Pentaho Kettle to read these JSON files and load the data into the DW. I am aware of the Json input step and Row Normalizer that could then transpose the "PassID 1", "PassID 2", "PassID 3"..."PassID N" columns into one "Pass" column, and I would have to apply similar logic to the other column, "External ticket id". The problem with that approach is that it is quite static: I need to "tell" Pentaho in advance, in the Json input step, how many passes are coming. But what if tomorrow I have an order with 10 different passes? How can I do this dynamically to ensure the job will not break?
If you want a tabular output like
TicketUrl   Pass            ExternalTicketID
---------   -------------   ----------------
abc         PassID1Value1   ExTicketIDvalue1
abc         PassID1Value2   ExTicketIDvalue2
abc         PassID1Value3   ExTicketIDvalue3
And make incoming value dynamic based on JSON input file values, then you can download this transformation Updated Link
I found that everything works dynamically in the Json input step.
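For reference, the gist of keeping the step dynamic (a sketch, assuming the standard Json input step; field names are illustrative) is to point each field's path at the tickets array with a wildcard, so the step emits one row per array element regardless of how many passes an order contains:

passId              $.fullDocument.tickets[*].passId
externalTicketId    $.fullDocument.tickets[*].externalTicketId

If the step complains about paths returning different numbers of values, single-value fields such as $.fullDocument.ticketUrl may need to be read in a separate Json input step and joined back onto the ticket rows.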

Get value from exact JSONpath level

I want to extract values from a JSON document using path operators.
For example, I get all the product IDs in the file via $..product_id.
But when I use $..id to get the "id", I get a result for every id element, no matter at which level of the JSON it occurs.
For example, my output contains a row for the id "12345678" as well as one for "11223344", which it should not, because the latter is nested inside the first record.
{
"next_offset": 20,
"records": [
{
"id": "12345678",
"date": "2020-02-14",
"product_id": "asdf1234",
"product_name": "Product_test^_1",
"template_link": {
"name": "aassddff",
"id": "11223344",
"_acl": {
"fields": [],
"_hash": "345thvz356b56v456b"
}
},
....
}
]
}
How can I set the path operator to only access the "id" fields of one specific level?
For the JSON shown in your question, use $.records.*.id (or the equivalent $.records[*].id). Because the path is anchored at the root rather than using the recursive-descent operator .., it only returns the id fields directly under each element of records.
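For instance, with Json.NET (an assumption; most JSONPath libraries behave the same way), the root-anchored path returns only the top-level ids:

using System.Linq;
using Newtonsoft.Json.Linq;

string json = @"{
  ""next_offset"": 20,
  ""records"": [ {
    ""id"": ""12345678"",
    ""product_id"": ""asdf1234"",
    ""template_link"": { ""name"": ""aassddff"", ""id"": ""11223344"" }
  } ]
}";

// Matches only the id fields directly under each element of records;
// the nested template_link.id ("11223344") is not returned.
var recordIds = JObject.Parse(json)
    .SelectTokens("$.records[*].id")
    .Select(t => (string)t)
    .ToList();   // ["12345678"]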

Selector query comparing two fields Cloudant/CouchDB/Mango

I have a CouchDB database which uses the Mango query language, which seems to be the same as Cloudant's query language.
I'm trying to search and compare two fields to each other and only return the relevant results when they're equal.
For example:
{
"_id": "ACCEPT0",
"_rev": "1-92ea4e727271aefd0a2befed0d4bb736",
"OfferID": "OFFER0"
}
{
"_id": "ACCEPT1",
"_rev": "3-986ca6e717b225ac909d644de54d5f7d",
"OfferID": "OFFER3"
}
{
"_id": "OFFER0",
"_rev": "1-2af5f5c7b1c59dd3f0997f748a367cb2",
"From": "merchant1",
"To": "customer1"
}
{
"_id": "OFFER1",
"_rev": "6-f0927c5d4f9fd8a2d2b602f1c265d6d5",
"From": "merchant1",
"To": "customer2"
}
I'm trying to come up with a query which will, in this example, return "OFFER0", since "OFFER0" exists in an "OfferID" field.
EDIT (clarification): The query needs to be able to select all the _id's which begin with OFFER and which exist in an OfferID field.
I know I can set this up with a view (as seen from: Cloudant query to return records where 2 fields are equal), but I need this in a Mango query as it'll be running over Hyperledger
You can easily return documents whose _id field starts with "OFFER" with the following query:
{
"selector": {
"_id": {
"$regex": "^OFFER"
}
}
}
but this is likely to be inefficient, because Cloudant has to scan the whole database, testing each document's _id field against that regular expression.
A better way to design your data may be to have a type field which distinguishes between the document types in your database e.g.
{
"_id": "OFFER0",
"_rev": "1-2af5f5c7b1c59dd3f0997f748a367cb2",
"type": "offer",
"From": "merchant1",
"To": "customer1"
}
and then a query to return all documents where type = 'offer' becomes:
{
"selector": {
"type": "offer"
}
}
I don't fully understand the part of the question where you say "which exist in an OfferID field", but it's important to note that Cloudant Query & Mango can only query single documents: you can't say "get me all the documents which are offers, where another document has a certain property". Include all the data you need in each document and then you'll be able to query it cleanly.
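For what it's worth, one workaround is two round trips from the application (a sketch; the field names and _id values are taken from the example above): first collect the OfferID values from the accepting documents,

{
  "selector": { "OfferID": { "$exists": true } },
  "fields": ["OfferID"]
}

and then fetch those offers by _id using the collected values:

{
  "selector": { "_id": { "$in": ["OFFER0", "OFFER3"] } }
}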

Update MongoDB document using JSON

Is there a way to update a complex MongoDB document from C# using JSON? For example, suppose I have the following document:
{
"name": "John Smith",
"age": 35,
"readingList":
[{
"title": "Title1",
"ISBN": 6246246426724,
"author":
{
"name": "James Johnson",
"age": 40
}
},
{
"title": "Title2",
"ISBN": 3513531513551,
"author":
{
"name": "Sam Hill",
"age": 20
}
}]
}
Now I want to update the age of the second book's author (Sam Hill) from 20 to 21. Suppose I have the following JSON representation:
{
"readingList":
[
{
"title": "Title2",
"author":
{
"age": 21
}
}]
}
Basically, the second JSON string is like the first one, minus all the fields and array elements that don't change, except that each array element being looked at keeps one field that uniquely identifies it. In this case, the "age" field is included since it is being updated with the given value, and the "title" field is given to locate the right array element while searching for the field to update. There may also be even more subdocuments and arrays to go through, and the format is not static (it may change at a later time). This is just a simplified example.
Is it possible to pass in something like this to some function and update the correct field that way? Is there something at least similar to this, so I can just pass in some JSON to do the update?
The reason I am looking to do it this way, rather than through simpler means, is because I want to keep track of a history of changes to documents, and if I want to backtrack to an earlier version, I want an easy way to do so that can handle this level of complexity.
UPDATE:
I have some clarifications to make. In this particular scenario I have no way to predict what kinds of changes would need to be made. A change could be made to any field at any time, and that field could be anywhere in the document, possibly at the top level, or within multiple nested subdocuments/arrays. The data we're dealing with is for a separate party that may use it and modify it at will, so we have no control over what they choose to do with it. In addition, there is no fixed schema. The other party could add new fields, including new subdocuments or arrays, or delete them.
The reason I'm asking this question is that I would like to store a history of changes to documents in such a way that I could revert to an older snapshot of the document by applying the changes in reverse. In this case, changing the age from 20 to 21 would revert the document to an older state (assuming that someone messed with the age beforehand and made it 20, and I wanted to fix it back to 21). Since somebody could make any change they wanted to the system, including to the underlying structure of the data itself, I can't just come up with my own schema, or hardcode a solution that changes specific fields using this specific schema.
In this example, the change in age from 20 to 21 would be from a record in the history whose structure I couldn't predict beforehand. So I am looking for an efficient solution to apply an unpredictable update to a document given a simplified JSON representation of the change to be made.
I am also open to alternatives that don't involve JSON if they are fairly efficient. I brought up JSON because I figured that, given MongoDB's usage of JSON to structure documents, it would make the most sense, and perhaps be superior to something like string manipulation. Another alternative I considered would involve storing the change using some kind of custom dot notation, like this: readingList[ISBN:3513531513551].author.age=21
This would require me to create a custom function to interpret the string and turn it into something useful though, so it doesn't sound like the best solution.
Hi friend, I used the JSON document below:
{
"_id" : ObjectId("56a99c121f25cc3a3c709151"),
"name" : "John Smith",
"age" : 35,
"readingList" : [
{
"title" : "Title1",
"ISBN" : NumberLong(6246246426724),
"author" : {
"name" : "James Johnson",
"age" : 40
}
},
{
"title" : "Title2",
"ISBN" : NumberLong(3513531513551),
"author" : {
"name" : "Sam Hill",
"age" : "25"
}
}
]
}
I just used the condition that the author name is "Sam Hill" and executed the query below in C#, and it works.
IMongoQuery query = Query.And(Query.EQ("name", "John Smith"), Query.EQ("readingList.author.name", "Sam Hill"));
var result =collection.Update(query,
MongoDB.Driver.Builders.Update.Set("readingList.$.author.age", "21"));
You can query your main document. Let's assume your main collection is named "books" and this is the structure:
{
"id":"123",
"name": "John Smith",
"age": 35,
"readingList":
[{
"title": "Title1",
"ISBN": 6246246426724,
"author":
{
"name": "James Johnson",
"age": 40
}
},
{
"title": "Title2",
"ISBN": 3513531513551,
"author":
{
"name": "Sam Hill",
"age": 20
}
}]
}
You need a query that returns the main document, by id for example. When you have the main document you can pick the element you want to modify in the list and assign it to a variable, let's say readItem, then make the modifications you need. After that you can update only the fields you need using Set, and only the element in the array using the positional operator "$", something like:
readItem.title = "some new title";
readItem.author.age++;
var update = MongoDB.Driver.Builders.Update.Set("readingList.$",
    BsonDocumentWrapper.Create(readItem));
collection.Update(query, update);
Actually I would not advise you to choose this kind of data model because in my experience it will get pretty messy. Still, you might have some very specific requirements which might force you to have this and only this data model.
I would create two collections: persons and readinglists.
persons would look like:
{
"id":"123",
"name": "John Smith",
"age": 35
}
and readinglists would look like (note that it has a compound natural id):
{
"_id": { "personid":"123", "title": "Title1"},
"ISBN": 6246246426724,
"author":
{
"name": "James Johnson",
"age": 40
}
}
Then you can easily update the readinglist:
var query = Query.EQ("_id", new BsonDocument(new BsonElement[] {
    new BsonElement("personid", "123"),
    new BsonElement("title", "Title1") }));
readingListCollection.Update(query, Update.Set("author.age", 22));
In your data model you need to know the array index of the second document. It is better to model the readingList attribute as a map. In the following example I used the ISBN as the map key:
{
"id":"123",
"name":"John Smith",
"age":35,
"readingList":{
"6246246426724":{
"title":"Title1",
"ISBN":6246246426724,
"author":{
"name":"James Johnson",
"age":40
}
},
"3513531513551":{
"title":"Title2",
"ISBN":3513531513551,
"author":{
"name":"Sam Hill",
"age":20
}
}
}
}
In this data model you can access the second book directly, for instance by dot notation:
db.authors.update(
  { id: "123" },
  { $set: { "readingList.3513531513551.author.age": 22 } }
)
Unfortunately I don't know the C# notation for that, but it should be straightforward.
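For reference, a rough C# equivalent using the 2.x driver's Builders API might look like the sketch below (the connection string, database and collection names are assumptions for illustration):

using MongoDB.Bson;
using MongoDB.Driver;

class Program
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var db = client.GetDatabase("test");
        var collection = db.GetCollection<BsonDocument>("authors");

        // Same dot-notation path as the shell example: the ISBN is the map key.
        var filter = Builders<BsonDocument>.Filter.Eq("id", "123");
        var update = Builders<BsonDocument>.Update
            .Set("readingList.3513531513551.author.age", 22);

        collection.UpdateOne(filter, update);
    }
}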

Obtain a different JSON object structure in AngularJS

I'm working on AngularJS.
In this part of the project my goal is to obtain a JSON structure after filling a form with some particular values.
Here's the fiddle of my simple form: Fiddle
With the form I will query KairosDB, which is my NoSQL database; I will query data from it with a JSON object. The form is structured in this way:
a Name
a certain Number of Tags, with Tag Id ("ch" for example) and tag value ("932" for example)
a certain Number of Aggregators to manipulate data coming from DB
Start Timestamp and End Timestamp (now they are static and only included in the final JSON Object)
After filling this form, with my code I'll obtain for example this JSON object:
{
"metrics": [
{
"tags": [
{
"id": "ch",
"value": "932"
},
{
"id": "ch",
"value": "931"
}
],
"aggregators": {
"name": "sum",
"sampling": [
{
"value": "1",
"unit": "milliseconds",
"type": "SUM"
}
]
}
}
],
"cache_time": 0,
"start_absolute": 123,
"end_absolute": 1234
}
Unfortunately, KairosDB expects a different structure, and as you can see, the tag id "ch" doesn't have an "id" key before it, and tag values coming from the same tag id are grouped together:
{
"metrics": [
{
"tags": {
"ch": [
"932",
"931"
]
},
"name": "AIENR",
"aggregators": [
{
"name": "sum",
"sampling": {
"value": "1",
"unit": "milliseconds"
}
}
]
}
],
"cache_time": 0,
"start_absolute": 1367359200000,
"end_absolute": 1386025200000
}
My question is: is there a way to obtain a JSON structure like the one accepted by KairosDB with an AngularJS form? Thanks to everyone.
I've seen this topic as the one most similar to mine, but it isn't in AngularJS.
Personally, I'd do the restructuring work in the backend: have whatever server interface sends and receives the data do the manipulation. Otherwise you'll end up needing to restructure your data inside Angular anywhere you want to use that dataset.
Doing it in the backend puts it in a single access point.
Of course, you could do it in Angular: just replace userString in the submitData method with a copy of the array, replace the tags section with data in the new format, and likewise refactor the returned result to the correct format when you get a reply.