Is it possible to parse the output of a MongoDB query as a JSON document?

First of all, I cannot use the Perl MongoDB driver, so I'm interacting with MongoDB via IPC::Run. I'd like to get the output from MongoDB as a hash ref.
Here is the code:
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::XS;
use Try::Tiny;
use IPC::Run 'run';
use Data::Dumper;
my @cmd = ('/opt/mongo/bin/mongo', '127.0.0.1:27117/service_discovery', '--quiet', '-u', 'test', '-p', 'test', '--eval', 'db.sit.find().forEach(function(x){printjson(x)})');
my $out;
run \@cmd, '>>', \$out;
my $coder = JSON::XS->new->ascii->pretty->allow_nonref;
my $dec = try {my $output = $coder->decode($out)} catch {undef};
print Dumper (\%$dec);
It is not working; %$dec is empty.
Here is the output of the MongoDB query (the value of $out):
{
    "_id" : ObjectId("5696787eb8e5e87534777c82"),
    "hostname" : "lab7n1",
    "services" : [
        {
            "port" : 9000,
            "name" : "ss-rest"
        },
        {
            "port" : 9001,
            "name" : "ss-rest"
        },
        {
            "port" : 8060,
            "name" : "websockets"
        },
        {
            "port" : 8061,
            "name" : "websockets"
        }
    ]
}
{
    "_id" : ObjectId("56967ab2b8e5e87534777c83"),
    "hostname" : "lab7n2",
    "services" : [
        {
            "port" : 8030,
            "name" : "cloud-rest for batch"
        },
        {
            "port" : 8031,
            "name" : "cloud-rest for batch"
        },
        {
            "port" : 8010,
            "name" : "cloud-rest for bespoke"
        },
        {
            "port" : 8011,
            "name" : "cloud-rest for bespoke"
        }
    ]
}
What should I do to make the parser treat this output as legitimate JSON?

As suggested by @Matt, I used the incr_parse method and omitted the _id field from the output.
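A minimal sketch of that combination, assuming the same connection details as above: the _id field is excluded via a projection so no ObjectId(...) reaches the parser, and incr_parse (called in list context) decodes each concatenated JSON document in turn:
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::XS;
use IPC::Run 'run';
use Data::Dumper;

# Exclude _id in the shell so the output is plain JSON documents.
my @cmd = ('/opt/mongo/bin/mongo', '127.0.0.1:27117/service_discovery',
           '--quiet', '-u', 'test', '-p', 'test', '--eval',
           'db.sit.find({}, {_id: 0}).forEach(function(x){printjson(x)})');
my $out;
run \@cmd, '>>', \$out;

# incr_parse in list context returns one Perl structure per
# complete JSON document found in the buffer.
my @docs = JSON::XS->new->incr_parse($out);
print Dumper(\@docs);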

Related

Configuring amq.topic binding with x-filter-jms-selector argument

Any idea how to configure an amq.topic binding with the x-filter-jms-selector argument?
I know how to do that in the web admin UI.
If we are amending the Qpid config file, how do we add this filter there?
The config JSON will have something like this:
{
    "id" : "1c91c97b-df6d-44e8-bf5d-673e7f0133b5",
    "name" : "amq.topic",
    "type" : "topic",
    "durableBindings" : [ {
        "arguments" : { },
        "bindingKey" : "*.*.event",
        "destination" : "test"
    }, {
        "arguments" : {
            "x-filter-jms-selector" : "event NOT IN ('location', 'weather')"
        },
        "bindingKey" : "*.*.tick",
        "destination" : "test"
    } ],
    "lastUpdatedBy" : "guest",
    "lastUpdatedTime" : 1590073211015,
    "createdBy" : null,
    "createdTime" : 1589575285215
}

How to sort and extract data from a JFrog JSON response using Groovy for Jenkins pipelining

I am using the open-source version of JFrog Artifactory for my CI/CD activities, which run via a Jenkins pipeline. I am a novice at Groovy/Java.
The REST APIs of open-source Artifactory do not support extracting the latest build from a repository. With the Jenkins pipeline in play, I was wondering if I could extract the data from the JSON response provided by Artifactory using Jenkins' native Groovy support (just to avoid an external service that would have to run via Python/Java/shell).
I am looking to put the extracted JSON response into a Map, sort it in descending order, and extract the first key-value pair, which contains the latest build info.
I end up getting "-1" as the response when I try to extract the data.
import groovy.json.JsonSlurper

def response = httpRequest authentication: 'ArtifactoryAPIKey', consoleLogResponseBody: false, contentType: 'TEXT_PLAIN', httpMode: 'POST', requestBody: '''
items.find({
    "$and": [
        {"repo": {"$match": "libs-snapshot-local"}},
        {"name": {"$match": "simple-integration*.jar"}}
    ]
})''', url: 'http://<my-ip-and-port-info>/artifactory/api/search/aql'

def jsonParser = new JsonSlurper()
Map jsonOutput = jsonParser.parseText(response.content)
List resultsInfo = jsonOutput['results']
print(resultsInfo[0].created)
def sortedResult = resultsInfo.sort { a, b -> b["created"] <=> a["created"] }
sortedResult.each {
    println it
}
The sample JSON to be parsed:
{
    "results" : [ {
        "repo" : "libs-snapshot-local",
        "path" : "simple-integration/2.5.150",
        "name" : "simple-integration-2.5.150.jar",
        "type" : "file",
        "size" : 1175,
        "created" : "2019-06-23T19:51:30.367+05:30",
        "created_by" : "admin",
        "modified" : "2019-06-23T19:51:30.364+05:30",
        "modified_by" : "admin",
        "updated" : "2019-06-23T19:51:30.368+05:30"
    }, {
        "repo" : "libs-snapshot-local",
        "path" : "simple-integration/2.5.140",
        "name" : "simple-integration-2.5.140.jar",
        "type" : "file",
        "size" : 1175,
        "created" : "2019-06-21T19:52:40.670+05:30",
        "created_by" : "admin",
        "modified" : "2019-06-21T19:52:40.659+05:30",
        "modified_by" : "admin",
        "updated" : "2019-06-21T19:52:40.671+05:30"
    }, {
        "repo" : "libs-snapshot-local",
        "path" : "simple-integration/2.5.150",
        "name" : "simple-integration-2.5.160.jar",
        "type" : "file",
        "size" : 1175,
        "created" : "2019-06-28T19:58:04.973+05:30",
        "created_by" : "admin",
        "modified" : "2019-06-28T19:58:04.970+05:30",
        "modified_by" : "admin",
        "updated" : "2019-06-28T19:58:04.973+05:30"
    } ],
    "range" : {
        "start_pos" : 0,
        "end_pos" : 3,
        "total" : 3
    }
}
// The output I am looking for: the latest build info, with the fields "created" and "name"
def jsonOutput = new groovy.json.JsonSlurper().parseText('''
{
    "results" : [ {
        "repo" : "libs-snapshot-local",
        "path" : "simple-integration/2.5.150",
        "name" : "simple-integration-2.5.150.jar",
        "type" : "file",
        "size" : 1175,
        "created" : "2019-06-23T19:51:30.367+05:30",
        "created_by" : "admin",
        "modified" : "2019-06-23T19:51:30.364+05:30",
        "modified_by" : "admin",
        "updated" : "2019-06-23T19:51:30.368+05:30"
    }, {
        "repo" : "libs-snapshot-local",
        "path" : "simple-integration/2.5.140",
        "name" : "simple-integration-2.5.140.jar",
        "type" : "file",
        "size" : 1175,
        "created" : "2019-06-21T19:52:40.670+05:30",
        "created_by" : "admin",
        "modified" : "2019-06-21T19:52:40.659+05:30",
        "modified_by" : "admin",
        "updated" : "2019-06-21T19:52:40.671+05:30"
    }, {
        "repo" : "libs-snapshot-local",
        "path" : "simple-integration/2.5.150",
        "name" : "simple-integration-2.5.160.jar",
        "type" : "file",
        "size" : 1175,
        "created" : "2019-06-28T19:58:04.973+05:30",
        "created_by" : "admin",
        "modified" : "2019-06-28T19:58:04.970+05:30",
        "modified_by" : "admin",
        "updated" : "2019-06-28T19:58:04.973+05:30"
    } ],
    "range" : {
        "start_pos" : 0,
        "end_pos" : 3,
        "total" : 3
    }
}
''')
def last = jsonOutput.results.sort{a, b -> b.created <=> a.created }[0]
println last.created
println last.name
The problem here is not with the Groovy code but with the Jenkins pipeline.
Both the code in the question and the solution provided by @daggett work like a charm in any Groovy IDE, but fail when run via a Jenkins pipeline.
The issue URL: https://issues.jenkins-ci.org/browse/JENKINS-44924
I hope they fix it soon. Thanks for your help, guys.
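Until it is fixed, a common workaround (a sketch, not part of the accepted answer) is to move the sort into a @NonCPS method so the closure runs outside Jenkins' CPS transform; the helper name and the plain-map return value are my own choices:
// @NonCPS methods run outside the CPS transform, so the sort
// closure behaves exactly as it does in a plain Groovy IDE.
@NonCPS
def latestBuild(String json) {
    def parsed = new groovy.json.JsonSlurper().parseText(json)
    def last = parsed.results.sort { a, b -> b.created <=> a.created }[0]
    // Return plain values rather than the parser's lazy map,
    // which is not serializable across pipeline steps.
    return [name: last.name, created: last.created]
}

// Usage in the pipeline:
// def last = latestBuild(response.content)
// echo "${last.name} created at ${last.created}"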

How to get entire parent node using jq json parser?

I am trying to find a value in the JSON file, and based on that I need to get the entire enclosing object rather than just that particular block.
Here is my sample JSON:
[{
    "name" : "Redirect to Website 1",
    "behaviors" : [ {
        "name" : "redirect",
        "options" : {
            "mobileDefaultChoice" : "DEFAULT",
            "destinationProtocol" : "HTTPS",
            "destinationHostname" : "SAME_AS_REQUEST",
            "responseCode" : 302
        }
    } ],
    "criteria" : [ {
        "name" : "requestProtocol",
        "options" : {
            "value" : "HTTP"
        }
    } ],
    "criteriaMustSatisfy" : "all"
},
{
    "name" : "Redirect to Website 2",
    "behaviors" : [ {
        "name" : "redirect",
        "options" : {
            "mobileDefaultChoice" : "DEFAULT",
            "destinationProtocol" : "HTTPS",
            "destinationHostname" : "SAME_AS_REQUEST",
            "responseCode" : 301
        }
    } ],
    "criteria" : [ {
        "name" : "contentType",
        "options" : {
            "matchOperator" : "IS_ONE_OF",
            "values" : [ "text/html*", "text/css*", "application/x-javascript*" ]
        }
    } ],
    "criteriaMustSatisfy" : "all"
}]
I am trying to match "name" : "redirect" inside each behaviors array, and if it matches I need the entire block, including the "criteria" section; as you can see, it is under the same enclosing {} block.
I managed to find the values using select, but I was not able to get the parent section.
https://jqplay.org/s/BWJwVdO3Zv
Any help is much appreciated!
To avoid unwanted duplication:
.[]
| first(select(.behaviors[].name == "redirect"))
Equivalently:
.[]
| select(any(.behaviors[]; .name == "redirect"))
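For example, assuming the sample above is saved as rules.json (the filename is made up), this prints each matching object exactly once, even when several of its behaviors match:
jq '.[] | select(any(.behaviors[]; .name == "redirect"))' rules.json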
You can try this jq command:
<file jq 'select(.[].behaviors[].name=="redirect")'

Search inside JSON with Elastic

I have an index/type in ES which has the following type of records:
body "{\"Status\":\"0\",\"Time\":\"2017-10-3 16:39:58.591\"}"
type "xxxx"
source "11.2.21.0"
The body field is a JSON string, so I want to search, for example, for the records whose JSON body has Status:0.
The query should look something like this (it doesn't work):
GET <host>:<port>/index/type/_search
{
    "query": {
        "match" : {
            "body" : "Status:0"
        }
    }
}
Any ideas?
You have to change the analyzer settings of your index.
For the JSON pattern you presented, you will need a char_filter and a tokenizer which remove the JSON punctuation and then tokenize according to your needs.
Your analyzer should contain a tokenizer and a char_filter like these:
{
    "tokenizer" : {
        "type": "pattern",
        "pattern": ","
    },
    "char_filter" : [ {
        "type" : "mapping",
        "mappings" : [ "{ => ", "} => ", "\" => " ]
    } ],
    "text" : [ "{\"Status\":\"0\",\"Time\":\"2017-10-3 16:39:58.591\"}" ]
}
Explanation: the char_filter will remove the characters: { } ". The tokenizer will tokenize by the comma.
These can be tested using the Analyze API. If you execute the above JSON against this API you will get these tokens:
{
    "tokens" : [ {
        "token" : "Status:0",
        "start_offset" : 2,
        "end_offset" : 13,
        "type" : "word",
        "position" : 0
    }, {
        "token" : "Time:2017-10-3 16:39:58.591",
        "start_offset" : 15,
        "end_offset" : 46,
        "type" : "word",
        "position" : 1
    } ]
}
The first token ("Status:0") which is retrieved by the Analyze API is the one you were using in your search.
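To use this at search time, the analyzer has to be registered in the index settings and applied to the body field. A sketch for an ES 5.x/6.x-style index with a single mapping type (the names strip_json, comma, and json_body are made up, and the mapping type name type mirrors the placeholder URL in the question):
PUT <host>:<port>/index
{
    "settings": {
        "analysis": {
            "char_filter": {
                "strip_json": {
                    "type": "mapping",
                    "mappings": [ "{ => ", "} => ", "\" => " ]
                }
            },
            "tokenizer": {
                "comma": {
                    "type": "pattern",
                    "pattern": ","
                }
            },
            "analyzer": {
                "json_body": {
                    "type": "custom",
                    "char_filter": [ "strip_json" ],
                    "tokenizer": "comma"
                }
            }
        }
    },
    "mappings": {
        "type": {
            "properties": {
                "body": { "type": "text", "analyzer": "json_body" }
            }
        }
    }
}
With this mapping in place, the match query from the question should find the documents whose body contains the token Status:0.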

Update same field in multiple documents with data from json

I have a MongoDB collection that looks like this:
[
    {
        "status" : 0,
        "name" : "Yaknow",
        "email" : "yaknow@not.this",
        "_id" : "5875a42ea469f40c684de385"
    },
    {
        "status" : 1,
        "name" : "johnk",
        "email" : "johnk@not#this",
        "_id" : "586e31c6ce07af6f891f80fd"
    }
]
Meanwhile, all the emails have changed, and I got a JSON file with the new ones:
[
    {
        "email" : "yaknow@gmai.new",
        "_id" : "5875a42ea469f40c684de385"
    },
    {
        "email" : "johnk@gmail.new",
        "_id" : "586e31c6ce07af6f891f80fd"
    }
]
How do I update all the emails?
There is no operator in MongoDB that allows you to modify a string value by replacing part of it. You should fetch the documents and then, for each document, prepare the updated value locally and update the document:
db.collection.find({}).forEach(function(doc){
    var newEmail = doc.email.substr(0, doc.email.indexOf('@')) + "@gmail.new";
    db.collection.update({_id: doc._id}, {$set: {email: newEmail}});
});
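If the intent is instead to apply the new addresses from the JSON verbatim, rather than rewriting the domain, a sketch in the mongo shell (3.2+) using bulkWrite, assuming the new-emails array has been loaded into a variable named updates:
// updates is the parsed JSON array of { email, _id } pairs.
var updates = [
    { "email" : "yaknow@gmai.new", "_id" : "5875a42ea469f40c684de385" },
    { "email" : "johnk@gmail.new", "_id" : "586e31c6ce07af6f891f80fd" }
];
// One updateOne operation per pair, sent as a single batch.
db.collection.bulkWrite(updates.map(function (u) {
    return { updateOne: { filter: { _id: u._id },
                          update: { $set: { email: u.email } } } };
}));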