.proj file to read and update data in a .json file

I'm writing a build.proj file to build and deploy an application. I have one application.configuration file which contains some project configuration. application.configuration is written in JSON format:
{
  "Clients": [
    {
      "Id": "xyz",
      "Name": "test",
      "Flow": "2"
    }
  ],
  "Scopes": [
    {
      "Name": "roles",
      "DisplayName": "roles",
      "Claims": [
        {
          "Name": "role"
        }
      ]
    }
  ]
}
In the build.proj file I want to read and update data from this file, so what should I use? The built-in MSBuild tasks only read and update data in .xml files.
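One possible workaround, sketched below, is to do the JSON work with jq (assumed to be installed on the build machine) and call it from an MSBuild `Exec` task in build.proj. The sample file is the one from the question; the new value "deployed" is just an illustration.

```shell
# Sketch: MSBuild's built-in tasks only understand XML, so shell out
# to jq for the JSON file. An <Exec Command="..."/> task in build.proj
# would run these commands. Sample data is the file from the question.
cat > application.configuration <<'EOF'
{
  "Clients": [ { "Id": "xyz", "Name": "test", "Flow": "2" } ],
  "Scopes": [ { "Name": "roles", "DisplayName": "roles",
                "Claims": [ { "Name": "role" } ] } ]
}
EOF

# Read a value:
jq -r '.Clients[0].Id' application.configuration    # -> xyz

# Update a value (jq cannot edit in place, so write a temp file first):
jq '.Clients[0].Name = "deployed"' application.configuration \
  > tmp.json && mv tmp.json application.configuration
```

On Windows build agents the same approach works with jq.exe, or with a small PowerShell step using `ConvertFrom-Json`/`ConvertTo-Json` instead.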

Related

Does bulk $import support transaction bundles (accessing individual resources after posting) in Azure FHIR service?

I have posted a bulk import with a file containing transaction bundles in ndjson format. After posting, I see them stored in the FHIR store as bundles themselves rather than as individual resources. According to the documentation, when data is stored as a transaction bundle the resources inside that bundle should be created individually, but the records are being created as a bundle.
I am using the following import body; let me know if any changes need to be made.
How do I post a transaction bundle using $import?
{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "mode",
      "valueString": "InitialLoad"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "Bundle"
        },
        {
          "name": "url",
          "valueUri": "https:xyz/fhir-sample-import-data/patient_bundles_ndjson.ndjson"
        }
      ]
    }
  ]
}
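For reference, this is roughly how the $import request is sent; the service URL and token below are placeholders, not values from the question, and no live service is contacted in this sketch. The `Prefer: respond-async` header is required for $import.

```shell
# Sketch: POST the Parameters body above to the $import endpoint.
# FHIR_URL and TOKEN are placeholders for this example.
cat > import-body.json <<'EOF'
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
    { "name": "mode", "valueString": "InitialLoad" },
    { "name": "input",
      "part": [
        { "name": "type", "valueString": "Bundle" },
        { "name": "url",
          "valueUri": "https:xyz/fhir-sample-import-data/patient_bundles_ndjson.ndjson" }
      ] }
  ]
}
EOF

FHIR_URL="https://example.azurehealthcareapis.com"   # placeholder
TOKEN="placeholder-token"                            # placeholder

curl -sS -X POST "$FHIR_URL/\$import" \
  -H "Prefer: respond-async" \
  -H "Content-Type: application/fhir+json" \
  -H "Authorization: Bearer $TOKEN" \
  --data @import-body.json || true   # no live service in this sketch
```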

Ambari REST API + set JSON configuration in Ambari

To create a new config group it is mandatory to provide a config group name, a tag, and the name of the cluster to which it belongs. The tag, as seen in this example, is the name of a service. Two config groups with the same tag cannot be associated with the same host.
How do I run the following JSON file with curl in order to set this config group in Ambari?
POST /api/v1/clusters/c1/config_groups
[
  {
    "ConfigGroup": {
      "cluster_name": "c1",
      "group_name": "hdfs-nextgenslaves",
      "tag": "HDFS",
      "description": "HDFS configs for rack added on May 19, 2010",
      "hosts": [
        {
          "host_name": "host1"
        }
      ],
      "desired_configs": [
        {
          "type": "core-site",
          "tag": "nextgen1",
          "properties": {
            "key": "value"
          }
        }
      ]
    }
  }
]
Reference: https://github.com/swagle/test/blob/master/docs/api/v1/config-groups.md
Is your question about how to send multiline JSON with curl? There are several methods for doing that.
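One common way, sketched below, sidesteps the multiline-quoting problem entirely by saving the payload to a file and passing it with `-d @file`. The host, port, and credentials are placeholders; Ambari also requires the `X-Requested-By` header on POST requests.

```shell
# Sketch: save the config-group JSON from the question to a file,
# then POST it. AMBARI_HOST and admin:admin are placeholders.
cat > config_group.json <<'EOF'
[
  {
    "ConfigGroup": {
      "cluster_name": "c1",
      "group_name": "hdfs-nextgenslaves",
      "tag": "HDFS",
      "description": "HDFS configs for rack added on May 19, 2010",
      "hosts": [ { "host_name": "host1" } ],
      "desired_configs": [
        { "type": "core-site", "tag": "nextgen1",
          "properties": { "key": "value" } }
      ]
    }
  }
]
EOF

curl -sS -u admin:admin \
  -H "X-Requested-By: ambari" \
  -X POST -d @config_group.json \
  "http://AMBARI_HOST:8080/api/v1/clusters/c1/config_groups" \
  || true   # placeholder host; no live Ambari in this sketch
```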

How to define a variable within a JSON file and use it within JSON file

I'm trying to find out whether a JSON file supports defining variables and using them within that same JSON file.
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": "${passedFromOutside}",
  "products": [
    {
      "name": "product_${definedVariable1}_common",
      "version": "1.1.0"
    },
    {
      "name": "product_{{passedVariable2}}_common",
      "version": 1.5.1
    }
  ]
}
I know YAML files allow this, but I'm not sure whether JSON allows this behavior or not. My plan is that a user will pass the "definedVariable" value from Jenkins and I'll create a target JSON file (after substitution).
This might help you:
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": `${passedFromOutside}`,
  "products": [
    {
      "name": `product_${definedVariable1}_common`,
      "version": "1.1.0"
    },
    {
      "name": `product_${passedVariable2}_common`,
      "version": 1.5.1
    }
  ]
}
Note the use of backticks (``) instead of quotes. Be aware, though, that this is template syntax, not valid JSON: a JSON parser will reject it, so the file must be preprocessed (the variables substituted) before it is parsed.
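Since JSON itself has no variable syntax, the usual approach is to keep a template file and substitute placeholders before parsing, e.g. in the Jenkins job. A minimal sketch with sed, assuming the `${passedFromOutside}` placeholder from the question and an illustrative variable name `PASSED`:

```shell
# Sketch: fill a JSON template with a value passed in from outside
# (e.g. a Jenkins parameter). PASSED is an assumed example variable.
PASSED="INSTANCE2"

cat > template.json <<'EOF'
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": "${passedFromOutside}"
}
EOF

# Substitute the placeholder and write the target file:
sed "s/\${passedFromOutside}/$PASSED/" template.json > target.json
cat target.json
```

Tools like envsubst or jq's `--arg` option do the same job more robustly when values may contain characters that are special to sed.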

Reading JSON file content in Linux

Dears,
Can someone help me read the content of a JSON file on a Linux machine without using jq, Python, or Ruby? I am looking for a pure shell-scripting solution. We need to iterate over the values if multiple records are found. In the case below there are two records, which need to be iterated over.
{
  "version": [
    "sessionrestore",
    1
  ],
  "windows": [
    {
      "tabs": [
        {
          "entries": [
            {
              "url": "http://orf.at/#/stories/2.../",
              "title": "news.ORF.at",
              "charset": "UTF-8",
              "ID": 9588,
              "docshellID": 298,
              "docIdentifier": 10062,
              "persist": true
            },
            {
              "url": "http://oracle.at/#/stories/2.../",
              "title": "news.at",
              "charset": "UTF-8",
              "ID": 9589,
              "docshellID": 288,
              "docIdentifier": 00062,
              "persist": false
            }
          ]
        }
      ]
    }
  ]
}
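With jq, Python, and Ruby all ruled out, about the best pure-shell option is pattern matching with grep and sed. The sketch below pulls the "url" values out of a trimmed-down copy of the file above; note that this is fragile (it assumes one key per line) and is not a real JSON parser.

```shell
# Sketch: extract "url" values from line-oriented JSON using only
# grep/sed. Data is a shortened version of the file in the question.
cat > session.json <<'EOF'
{
  "windows": [
    {
      "tabs": [
        {
          "entries": [
            { "url": "http://orf.at/#/stories/2.../", "title": "news.ORF.at" },
            { "url": "http://oracle.at/#/stories/2.../", "title": "news.at" }
          ]
        }
      ]
    }
  ]
}
EOF

# Iterate over every "url" entry:
grep -o '"url": *"[^"]*"' session.json |
  sed 's/.*: *"\(.*\)"/\1/' |
  while read -r url; do
    echo "found url: $url"
  done
# found url: http://orf.at/#/stories/2.../
# found url: http://oracle.at/#/stories/2.../
```

If installing a single static binary is acceptable, jq remains the far more reliable choice: it handles keys split across lines, escaped quotes, and nesting, none of which this grep/sed approach survives.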

Get formatted data in a shell script which is read from a file containing JSON data

I am writing a shell script to automatically get the list of names, current versions, and latest available versions from raw JSON data.
I am trying to format JSON data stored in a file using a shell script, and I tried the jq command-line JSON parser.
I want to get formatted JSON data in the script. jq provides advanced options for this scenario, but I am not able to use them properly.
Example: a file containing the following JSON
{
  "endpoint": {
    "name": "test-plugin",
    "version": "0.0.1"
  },
  "dependencies": {
    "plugin1": {
      "main": {
        "name": "plugin1name",
        "description": "Dummy text"
      },
      "pkgMeta": {
        "name": "plugin1name",
        "version": "0.0.1"
      },
      "dependencies": {},
      "versions": [
        "0.0.5",
        "0.0.4",
        "0.0.3",
        "0.0.2",
        "0.0.1"
      ],
      "update": {
        "latest": "0.0.5"
      }
    },
    "plugin2": {
      "main": {
        "name": "plugin2name",
        "description": "Dummy text"
      },
      "pkgMeta": {
        "name": "plugin2name",
        "version": "0.1.1"
      },
      "dependencies": {},
      "versions": [
        "0.1.5",
        "0.1.4",
        "0.1.3",
        "0.1.2",
        "0.1.1"
      ],
      "update": {
        "latest": "0.1.5"
      }
    }
  }
}
I am trying to get the result in this format:
[
  {
    name: "plugin1name",
    c_version: "0.0.1",
    n_version: "0.0.5"
  },
  {
    name: "plugin2name",
    c_version: "0.1.1",
    n_version: "0.1.5"
  }
]
Can someone suggest anything?
Your JSON file is not valid at .dependencies.pkgMeta.version. After fixing your JSON file, try this command:
jq '
.dependencies |
to_entries |
map(.value |
{
name: .main.name,
c_version: .pkgMeta.version,
n_version: .update.latest
}
)' input.json
The result is:
[
  {
    "name": "plugin1name",
    "c_version": "0.0.1",
    "n_version": "0.0.5"
  },
  {
    "name": "plugin2name",
    "c_version": "0.1.1",
    "n_version": "0.1.5"
  }
]
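Since the stated goal is a shell script that iterates over the plugins, the same query can emit tab-separated lines via jq's `@tsv` filter for a plain while-read loop to consume. A sketch, assuming a (fixed) input.json shortened to the fields the query uses:

```shell
# Sketch: turn the jq result into shell-iterable lines with @tsv.
# input.json is a trimmed, valid version of the file in the question.
cat > input.json <<'EOF'
{
  "dependencies": {
    "plugin1": {
      "main": { "name": "plugin1name" },
      "pkgMeta": { "name": "plugin1name", "version": "0.0.1" },
      "update": { "latest": "0.0.5" }
    },
    "plugin2": {
      "main": { "name": "plugin2name" },
      "pkgMeta": { "name": "plugin2name", "version": "0.1.1" },
      "update": { "latest": "0.1.5" }
    }
  }
}
EOF

jq -r '
  .dependencies | to_entries[] | .value |
  [.main.name, .pkgMeta.version, .update.latest] | @tsv
' input.json |
while IFS="$(printf '\t')" read -r name current latest; do
  echo "$name: $current -> $latest"
done
# plugin1name: 0.0.1 -> 0.0.5
# plugin2name: 0.1.1 -> 0.1.5
```

`-r` makes jq print raw strings instead of JSON-quoted ones, which is what makes the output safe for `read` to split on tabs.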