N1QL index for multiple JSON objects - Couchbase

I would like to get the digest field from the test attribute. The test attribute can change for each document.
The attachments field contains a single JSON object rather than an array of JSON objects.
I would like to create an index that returns the digest field located at _sync.attachments.test.digest.
Please help with the index and the query for getting the digest field.
{
  "_sync": {
    "attachments": {
      "test": {
        "content_type": "text/html",
        "digest": "sha1-CMKBiqaqoO++IxKQxH5ZW5smywY=",
        "length": 10240,
        "revpos": 3,
        "stub": true
      }
    },
    "time": "2019-10-14T05:38"
  }
}

Related

How to remove one property from a JSON input message using the replace() expression in an Azure Logic App?

{
  "metadata": {
    "id": "2",
    "uri": "3",
    "type": "2"
  },
  "Number": "2323600002913",
  "Date": "04/21/2009",
  "postingDate": "00/00/0000",
  "ata": {
    "results": [
      {
        "metadata": {
          "id": "r",
          "uri": "e2",
          "type": "s2"
        },
        "item": "000010",
        "data": "ad"
      }
    ]
  }
}
I want to remove the metadata property from the above JSON message, so that the output looks like this:
{
  "Number": "2323600002913",
  "Date": "04/21/2009",
  "postingDate": "00/00/0000",
  "ata": {
    "results": [
      {
        "item": "000010",
        "data": "ad"
      }
    ]
  }
}
I tried removeProperty(), which works for the root-level metadata, but the nested metadata is not removed.
How can I use replace() in this case, or anything else, to remove only the metadata properties?
The simplest way is to use inline code, because even if you use the removeProperty() expression to remove the metadata under results, it returns only the results array, not the whole JSON, and you would then have to combine the pieces again, which is not convenient.
With inline code, refer to the picture below. The json variable holds the value from the trigger body; just delete the node or key and return the json variable. This way, even if you want to delete many metadata properties in the array, you can add a for loop to delete them; just treat it as plain JavaScript code.
Update: if you want to get the value from a variable, since there is no supported expression to read the value from the variable directly, use the expression below.
var json = workflowContext.actions.Initialize_variable.inputs.variables[0].value;
As for how to loop over the array in the JSON, refer to the picture below.
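For reference, here is a minimal sketch of what that inline code might look like, assuming the JSON arrives in the trigger body (the exact workflowContext path is an assumption and depends on how the Logic App is built):
// Sketch of the inline code (Execute JavaScript Code) body for this scenario.
// The path into workflowContext is an assumption; adjust it to wherever the JSON actually lives.
var json = workflowContext.trigger.outputs.body;

// Remove the root-level metadata property.
delete json.metadata;

// Remove the metadata property from every item in the ata.results array.
for (var i = 0; i < json.ata.results.length; i++) {
    delete json.ata.results[i].metadata;
}

return json;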

Rename invalid keys in JSON

I have the following flow in NiFi, and the JSON has 1000+ objects in it:
InvokeHTTP -> SplitJson -> PutMongo
The flow works fine until I receive keys in the JSON with "." in the name, e.g. "spark.databricks.acl.dfAclsEnabled".
My current solution is not optimal: I have jotted down the bad keys and use multiple ReplaceText processors to replace "." with "_". I am not using regex; I am using literal find/replace. So each time the PutMongo processor fails, I insert a new ReplaceText processor.
This is not maintainable. I am wondering if I can use JOLT for this. A couple of notes about the input JSON:
1) There is no set structure; the only thing that is confirmed is that everything will be in the events array, but the event object itself is free-form.
2) The maximum list size is 1000.
3) It is third-party JSON, so I can't ask for a change in the format.
Also, a key with "." can appear anywhere, so I am looking for a JOLT spec that can cleanse keys at every level and rename them.
{
  "events": [
    {
      "cluster_id": "0717-035521-puny598",
      "timestamp": 1531896847915,
      "type": "EDITED",
      "details": {
        "previous_attributes": {
          "cluster_name": "Kylo",
          "spark_version": "4.1.x-scala2.11",
          "spark_conf": {
            "spark.databricks.acl.dfAclsEnabled": "true",
            "spark.databricks.repl.allowedLanguages": "python,sql"
          },
          "node_type_id": "Standard_DS3_v2",
          "driver_node_type_id": "Standard_DS3_v2",
          "autotermination_minutes": 10,
          "enable_elastic_disk": true,
          "cluster_source": "UI"
        },
        "attributes": {
          "cluster_name": "Kylo",
          "spark_version": "4.1.x-scala2.11",
          "node_type_id": "Standard_DS3_v2",
          "driver_node_type_id": "Standard_DS3_v2",
          "autotermination_minutes": 10,
          "enable_elastic_disk": true,
          "cluster_source": "UI"
        },
        "previous_cluster_size": {
          "autoscale": {
            "min_workers": 1,
            "max_workers": 8
          }
        },
        "cluster_size": {
          "autoscale": {
            "min_workers": 1,
            "max_workers": 8
          }
        },
        "user": ""
      }
    },
    {
      "cluster_id": "0717-035521-puny598",
      "timestamp": 1535540053785,
      "type": "TERMINATING",
      "details": {
        "reason": {
          "code": "INACTIVITY",
          "parameters": {
            "inactivity_duration_min": "15"
          }
        }
      }
    },
    {
      "cluster_id": "0717-035521-puny598",
      "timestamp": 1535537117300,
      "type": "EXPANDED_DISK",
      "details": {
        "previous_disk_size": 29454626816,
        "disk_size": 136828809216,
        "free_space": 17151311872,
        "instance_id": "6cea5c332af94d7f85aff23e5d8cea37"
      }
    }
  ]
}
I created a template using ReplaceText and RouteOnContent to perform this task. The loop is required because the regex only replaces the first . in the JSON key on each pass. You might be able to refine this to perform all substitutions in a single pass, but after fuzzing the regex with the look-ahead and look-behind groups for a few minutes, re-routing was faster. I verified this works with the JSON you provided, and also with JSON that has the keys and values on different lines (with the : on either line):
...
"spark_conf": {
"spark.databricks.acl.dfAclsEnabled":
"true",
"spark.databricks.repl.allowedLanguages"
: "python,sql"
},
...
You could also use an ExecuteScript processor with Groovy to ingest the JSON, quickly filter all JSON keys that contain ., perform a collect operation to do the replacement, and re-insert the keys in the JSON data if you want a single processor to do this in a single pass.
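To make that idea concrete, here is the recursive key-renaming logic as a plain JavaScript sketch (illustration only; the answer suggests Groovy inside an ExecuteScript processor, and the function name here is made up):
// Recursively rebuild the JSON, replacing "." with "_" in every key at every level.
function renameBadKeys(node) {
    if (Array.isArray(node)) {
        return node.map(renameBadKeys);
    }
    if (node !== null && typeof node === 'object') {
        var out = {};
        for (var key in node) {
            out[key.replace(/\./g, '_')] = renameBadKeys(node[key]);
        }
        return out;
    }
    return node; // primitive values pass through unchanged
}

// e.g. renameBadKeys(JSON.parse(text)) turns "spark.databricks.acl.dfAclsEnabled"
// into "spark_databricks_acl_dfAclsEnabled" wherever it appears in the document.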

Ngx-translate: How to access JSON array value directly in Angular with key reference

I am doing a POC on translating my app using ngx-translate. I get a JSON response from my API URL. Can someone help me access the JSON array values in that response without referring to array indexes? My JSON response is given below.
{
  "Data": {
    "FirstData": [
      {
        "key": "FirstKey",
        "value": "FirstValue"
      },
      {
        "key": "SecondKey",
        "value": "SecondValue"
      }
    ]
  },
  "IsSuccessful": true,
  "HttpStatusCode": 200,
  "Exception": null
}
So, in my view, to refer to "FirstValue" I have to do something like this (which I don't want to do):
<h1> {{'Data.FirstData.0.value' | translate }} </h1> <!--First Value -->
Here the "0" is tightly coupled. Is there another way to access the value via its key?
Is that possible?
Change the response of your API to contain only the key/value pairs.
This is the desired JSON:
{
  "FirstKey": "FirstValue",
  "SecondKey": "SecondValue"
}
Then use it like this:
<h1> {{'FirstKey' | translate }} </h1>
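If changing the API is not an option, the same flat shape can also be built on the client before handing it to ngx-translate. Here is a minimal sketch in plain JavaScript, where apiResponse is assumed to hold the JSON shown in the question:
// Flatten the key/value array into the object shape ngx-translate expects.
var translations = apiResponse.Data.FirstData.reduce(function (acc, entry) {
    acc[entry.key] = entry.value; // { FirstKey: "FirstValue", SecondKey: "SecondValue" }
    return acc;
}, {});

// The result could then be registered with something like:
// this.translate.setTranslation('en', translations);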

How to Retrieve and Query JSON type fields in Apache Solr 6.5

My goal is to retrieve JSON-type fields from a Solr index and also perform search queries on such fields.
I have the following documents in the Solr index and am using the auto-generated schema produced by Solr's schemaless feature.
POST http://localhost:8983/solr/test1/update?commitWithin=1000
[
  {
    "id": "1", "type_s": "book", "title_t": "The Way of Kings", "author_s": "Brandon Sanderson",
    "miscinfo": {"provider": "orielly", "site": "US"}
  },
  {
    "id": "2", "type_s": "book", "title_t": "The Game of Thrones", "author_s": "James Sanderson",
    "miscinfo": {"provider": "pacman", "site": "US"}
  }
]
I see that the JSON objects are stored with the "strings" field type, as shown in the output of the following:
GET http://localhost:8983/solr/test1/schema/fields
{
  "name": "miscinfo",
  "type": "strings"
}
I tried using srcField as mentioned in this post. However, a query to retrieve the JSON-type field returns an empty response. Below are the GET requests I used for this:
GET http://localhost:8983/solr/test1/select?q=1&fl=miscinfo&wt=json
GET http://localhost:8983/solr/test1/select?q=1&fl=miscinfo,source_s:[json]&wt=json
The search queries for values inside the JSON-type fields also return an empty response:
http://localhost:8983/solr/test1/select?q=pacman&wt=json
{
  "responseHeader": {
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "pacman",
      "json": "",
      "wt": "json"
    }
  },
  "response": {
    "numFound": 0,
    "start": 0,
    "docs": []
  }
}
Please help with searching object-type fields in Solr.
Have you checked this: https://cwiki.apache.org/confluence/display/solr/Response+Writers
JSON Response Writer: A very commonly used Response Writer is the JsonResponseWriter, which formats output in JavaScript Object Notation (JSON), a lightweight data interchange format specified in RFC 4627. Setting the wt parameter to json invokes this Response Writer. Here is a sample response for a simple query like q=id:VS1GB400C3&wt=json:

Obtain a different JSON object structure in AngularJS

I'm working on AngularJS.
In this part of the project my goal is to obtain a JSON structure after filling a form with some particular values.
Here's the fiddle of my simple form: Fiddle
With the form I will query KairosDB, which is my NoSQL database; I query data from it with a JSON object. The form is structured in this way:
a Name
a certain number of Tags, with a tag id ("ch" for example) and a tag value ("932" for example)
a certain number of Aggregators to manipulate the data coming from the DB
a Start Timestamp and an End Timestamp (for now they are static and only included in the final JSON object)
After filling in this form, my code produces, for example, this JSON object:
{
  "metrics": [
    {
      "tags": [
        {
          "id": "ch",
          "value": "932"
        },
        {
          "id": "ch",
          "value": "931"
        }
      ],
      "aggregators": {
        "name": "sum",
        "sampling": [
          {
            "value": "1",
            "unit": "milliseconds",
            "type": "SUM"
          }
        ]
      }
    }
  ],
  "cache_time": 0,
  "start_absolute": 123,
  "end_absolute": 1234
}
Unfortunately, KairosDB accepts a different structure. As you can see, the tag id "ch" is not preceded by an "id" key, and, for example, tag values coming from the same tag id are grouped together:
{
  "metrics": [
    {
      "tags": {
        "ch": [
          "932",
          "931"
        ]
      },
      "name": "AIENR",
      "aggregators": [
        {
          "name": "sum",
          "sampling": {
            "value": "1",
            "unit": "milliseconds"
          }
        }
      ]
    }
  ],
  "cache_time": 0,
  "start_absolute": 1367359200000,
  "end_absolute": 1386025200000
}
My question is: is there a way to obtain a JSON structure like the one accepted by KairosDB from an AngularJS form? Thanks to everyone.
I've seen this topic as the one most similar to mine, but it isn't in AngularJS.
Personally, I'd do the refactoring work in the backend: have whatever server interface sends and receives the data do the manipulation. Otherwise you'll end up needing to refactor your data inside Angular anywhere you want to use that dataset, whereas doing it in the backend puts it in a single access point.
Of course, you could do it in Angular: just replace userString in the submitData method with a copy of the array, replace the tags section with data in the new format, and likewise refactor the returned result into the correct format when you get a reply.
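For illustration, here is a minimal sketch of that client-side transformation, assuming the input matches the first JSON object in the question (the function and variable names are made up, and the metric name "AIENR" comes from the desired output):
// Convert the form's JSON (tags as [{id, value}], aggregators as an object)
// into the structure KairosDB accepts (tags grouped by id, aggregators as an array).
function toKairosQuery(formQuery, metricName) {
    var metric = formQuery.metrics[0];

    // Group tag values by tag id: [{id:"ch",value:"932"},{id:"ch",value:"931"}] -> {ch:["932","931"]}
    var groupedTags = {};
    metric.tags.forEach(function (tag) {
        (groupedTags[tag.id] = groupedTags[tag.id] || []).push(tag.value);
    });

    return {
        metrics: [{
            tags: groupedTags,
            name: metricName, // e.g. "AIENR"
            aggregators: [{
                name: metric.aggregators.name,
                sampling: {
                    value: metric.aggregators.sampling[0].value,
                    unit: metric.aggregators.sampling[0].unit
                }
            }]
        }],
        cache_time: formQuery.cache_time,
        start_absolute: formQuery.start_absolute,
        end_absolute: formQuery.end_absolute
    };
}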