How to define a variable within a JSON file and use it within that JSON file

I'm trying to find out whether a JSON file supports defining variables and using them within that same file:
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": "${passedFromOutside}",
  "products": [
    {
      "name": "product_${definedVariable1}_common",
      "version": "1.1.0"
    },
    {
      "name": "product_{{passedVariable2}}_common",
      "version": "1.5.1"
    }
  ]
}
I know YAML files allow this, but I'm not sure whether JSON allows this behavior or not. My plan is that a user will pass the "definedVariable" value from Jenkins and I'll create a target JSON file (after substitution).

This might help you:
{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "definedVariable1": "INSTANCE1",
  "passedVariable2": `${passedFromOutside}`,
  "products": [
    {
      "name": `product_${definedVariable1}_common`,
      "version": "1.1.0"
    },
    {
      "name": `product_${passedVariable2}_common`,
      "version": "1.5.1"
    }
  ]
}
*Note the use of backticks (``) instead of quotes (''). This relies on template-literal syntax, so the result is a JavaScript object literal rather than strict JSON; a standard JSON parser will reject it.
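JSON itself has no variable or templating mechanism, so the substitution has to happen outside the file (Jenkins, a template engine, envsubst, etc.). A minimal sketch of the "generate the target JSON" step in Go, assuming the values arrive as Jenkins parameters or environment variables (PASSED_FROM_OUTSIDE is an illustrative name, not from the question):

package main

import (
	"os"
	"text/template"
)

// The JSON file is treated as a Go text template; {{.DefinedVariable1}} and
// {{.PassedVariable2}} are placeholders filled in before anything parses
// the output as JSON.
const jsonTemplate = `{
  "artifactory_repo": "toplevel_virtual_NonSnapshot",
  "products": [
    { "name": "product_{{.DefinedVariable1}}_common", "version": "1.1.0" },
    { "name": "product_{{.PassedVariable2}}_common", "version": "1.5.1" }
  ]
}`

func main() {
	t := template.Must(template.New("config").Parse(jsonTemplate))
	values := map[string]string{
		"DefinedVariable1": "INSTANCE1",
		"PassedVariable2":  os.Getenv("PASSED_FROM_OUTSIDE"), // e.g. set by Jenkins
	}
	if err := t.Execute(os.Stdout, values); err != nil {
		panic(err)
	}
}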

Related

How do I effectively write a common JSON patch to patch a JSON file in Go?

I have a JSON file like this:
{
  "metadata": {
    "groups": [
      {
        "name": "first",
        "version": "v1"
      },
      {
        "name": "second",
        "version": "v2"
      }
    ]
  }
}
I want to change the version of all the groups above.
I saw the JSON Patch RFC (https://jsonpatch.com/), which helps with patching, but the path syntax is very limited in this approach: I cannot select the versions of all groups and replace them in bulk. Something like this is not possible:
{
  "op": "replace",
  "path": "metadata.groups[*].version",
  "value": "v2"
}
Now the suggestion might be to write one patch per group. That will not suffice, because I am generating this patch object for many JSON files, and each JSON file can have a different number of groups.
Is there a common JSON patch object that I can build to make these kinds of changes to the target JSON? Preferably this should be doable in Go.
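RFC 6902 paths are plain JSON Pointers, so a wildcard like groups[*] is indeed not expressible in a standard patch document. One workaround is to skip JSON Patch for this case and apply the bulk change programmatically; a minimal sketch using only Go's standard library (it trades the declarative patch format for flexibility):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	src := []byte(`{"metadata":{"groups":[
		{"name":"first","version":"v1"},
		{"name":"second","version":"v2"}]}}`)

	var doc map[string]interface{}
	if err := json.Unmarshal(src, &doc); err != nil {
		panic(err)
	}

	// Walk metadata.groups and rewrite every "version", however many
	// groups this particular file happens to contain.
	if meta, ok := doc["metadata"].(map[string]interface{}); ok {
		if groups, ok := meta["groups"].([]interface{}); ok {
			for _, g := range groups {
				if group, ok := g.(map[string]interface{}); ok {
					group["version"] = "v2"
				}
			}
		}
	}

	out, err := json.MarshalIndent(doc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}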

Does bulk $import support transaction bundles (accessing individual resources after posting) in the Azure FHIR service?

I have posted a bulk $import with a file containing transaction bundles in NDJSON format. After posting, I see them stored as bundles in the FHIR store instead of as individual resources. But according to the documentation, when we store data as a transaction bundle, the resources inside that bundle should be created individually; instead, the records are being created as a bundle itself.
I am using the following import body; let me know if any changes need to be made to it.
How do I post a transaction bundle using $import?
{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "inputFormat",
      "valueString": "application/fhir+ndjson"
    },
    {
      "name": "mode",
      "valueString": "InitialLoad"
    },
    {
      "name": "input",
      "part": [
        {
          "name": "type",
          "valueString": "Bundle"
        },
        {
          "name": "url",
          "valueUri": "https://xyz/fhir-sample-import-data/patient_bundles_ndjson.ndjson"
        }
      ]
    }
  ]
}

Sending a request without specifying the fields in the body

I'm trying to send several PUT requests to a specific endpoint.
I am using a CSV file with the columns and values; the names of the different columns reference the different variables inside the body.
This is the endpoint:
{{URL_API}}/products/{{sku}} (sku is the ID of the product; I created that variable because I use it to pass the reference)
This is the body that I use:
{
  "price": "{{price}}",
  "tax_percentage": "{{tax_percentage}}",
  "store_code": "{{store_code}}",
  "markup_top": "{{markup_top}}",
  "status": "{{status}}",
  "group_prices": [
    {
      "group": "{{class_a}}",
      "price": "{{price_a}}",
      "website": "{{website_a}}"
    }
  ]
}
I don't want to use the body anymore; sometimes some products have more than one group of prices, I mean:
"group_prices": [
{
"group":"{{class_a}}",
"price":"{{price_a}}",
"website":"{{website_a}}"
},
{
"group":"{{class_b}}",
"price":"{{price_b}}",
"website":"{{website_b}}"
}
Is it possible to create something like this in the CSV file?
sku,requestBody
99RE345GT, {JSON Payload}
How should I declare the {JSON Payload}?
Can you help me?
EDIT:
This is the CSV file i used:
sku,price,tax_percentage,store_code,markup_top,status,class_a,price_a,website_a
95LB645R34ER,147000,US-21,B2BUSD,1.62,1,CLASS A,700038.79,B2BUSD
I want to pass the JSON within the CSV file, I mean:
sku,requestBody
95LB645R34ER,{"price":"147000","tax_percentage":"US-21","store_code":"B2BUSD","markup_top":"1.62","status":"1","group_prices":
[{ "group":"CLASS A","price":"700038.79","website":"B2BUSD"}]}
Is it okay? Should I specify anything in the request body or not? I read the documentation posted on the Postman website but I did not find an example like this.
As you're using JSON data as the payload, I would use a JSON file rather than a CSV file in the Collection Runner. Use this as a template and save it as data.json; it's in the format the Runner accepts:
[
  {
    "sku": "95LB645R34ER",
    "payload": {
      "price": "147000",
      "tax_percentage": "US-21",
      "store_code": "B2BUSD",
      "markup_top": "1.62",
      "status": "1",
      "group_prices": [
        {
          "group": "CLASS A",
          "price": "700038.79",
          "website": "B2BUSD"
        }
      ]
    }
  },
  {
    "sku": "MADEUPSKU",
    "payload": {
      "price": "99999",
      "tax_percentage": "UK-99",
      "store_code": "BLAH",
      "markup_top": "9.99",
      "status": "5",
      "group_prices": [
        {
          "group": "CLASS B",
          "price": "88888.79",
          "website": "BLAH"
        }
      ]
    }
  }
]
In the Pre-request Script of the request, add this code. It creates a new local variable from the data under the payload key in the data file. The data needs to be transformed into a string, so it uses JSON.stringify() to do this:
pm.variables.set("JSONpayload", JSON.stringify(pm.iterationData.get('payload'), null, 2));
In the Request Body, add this:
{{JSONpayload}}
In the Collection Runner, select the Collection you would like to run and then select the data.json file that you previously created. Run the Collection.
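If you do need to keep everything in a single CSV instead, the JSON payload would have to sit in one quoted field with every inner quote doubled (standard CSV escaping), which is exactly the noise the JSON data file avoids. A hypothetical row would look like the one below; it's worth verifying on a one-row file that the Runner parses it the way you expect:

sku,requestBody
95LB645R34ER,"{""price"":""147000"",""store_code"":""B2BUSD""}"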

How to use user variables with file provisioner in Packer?

I have a Packer JSON template like:
"builders": [{...}],
"provisioners": [
{
"type": "file",
"source": "packer/myfile.json",
"destination": "/tmp/myfile.json"
}
],
"variables": {
"myvariablename": "value"
}
and myfile.json is:
{
  "var": "{{ user `myvariablename`}}"
}
The variable in the file does not get replaced. Is a sed replacement with a shell provisioner after the file provisioner the only option available here?
Using Packer version 0.12.0.
You have to pass these as environment variables. For example:
"provisioners": [
{
"type": "shell"
"environment_vars": [
"http_proxy={{user `proxy`}}",
],
"scripts": [
"some_script.sh"
],
}
],
"variables": {
"proxy": null
}
And in the script you can use $http_proxy.
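A minimal sketch of what some_script.sh might then do with it (the curl call is purely illustrative):

#!/bin/sh
# http_proxy is injected by the environment_vars entry above.
echo "Using proxy: $http_proxy"
curl -x "$http_proxy" https://example.com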
So far the only solution I've come up with is to use the file & shell provisioners: upload the file, then replace the variables in it via the shell provisioner, which can be fed from template variables provided by e.g. HashiCorp Vault.
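A minimal sketch of that file-then-shell approach, assuming myfile.json contains a literal PLACEHOLDER token to swap out (the token name is illustrative):

"provisioners": [
  {
    "type": "file",
    "source": "packer/myfile.json",
    "destination": "/tmp/myfile.json"
  },
  {
    "type": "shell",
    "inline": [
      "sed -i 's/PLACEHOLDER/{{user `myvariablename`}}/g' /tmp/myfile.json"
    ]
  }
]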
You may use the OS export function to set an environment variable and pass it to Packer.
Here is a config that uses the OS ENV_NAME value to choose the local folder to copy from;
export ENV_NAME=dev will set the local folder to dev:
{
  "variables": {
    ...
    "env_folder": "{{env `ENV_NAME`}}"
  },
  "builders": [{...}],
  "provisioners": [
    {
      "type": "file",
      "source": "files/{{user `env_folder`}}/",
      "destination": "/tmp/"
    },
    {...}
  ]
}
User variables must first be defined in a variables section within your template. Even if you want a user variable to default to an empty string, it must be defined. This explicitness helps reduce the time it takes for newcomers to understand what can be modified using variables in your template.
The variables section is a key/value mapping of the user variable name to a default value. A default value can be the empty string. An example is shown below:
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}"
    // ...
  }]
}
Check the Packer documentation on user variables for more information.
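With the variables declared, values can then be supplied at build time on the command line, for example (the key values and template name are placeholders):

packer build \
  -var 'aws_access_key=YOUR_KEY' \
  -var 'aws_secret_key=YOUR_SECRET' \
  template.json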

Defining an Azure Stream Analytics IoT Hub input source through PowerShell

I'm trying to write a PowerShell script that creates a new Stream Analytics job in my Azure portal account, with IoT Hub as the input source and a blob storage account as the output.
To do so, I'm using the AzureRM command new-streamAnalyticsJob and JSON files.
My problem is: I have not seen any documentation or example of a JSON file where the input source is IoT Hub, only Event Hub.
What parameters do I need to provide in the JSON file? Can anyone show an example JSON file for a Stream Analytics job with IoT Hub as the input source?
I got the answer eventually: the required field I had to add to the inputs Olivier posted earlier is:
"endpoint": "messages/events"
I added it under the DataSource Properties section, and it works fine!
Thanks, Olivier.
To come back to the error message you are seeing: to add to Olivier's sample, you need a property named endpoint, which corresponds to the endpoint in IoT Hub. If you are looking for telemetry messages, this will be:
"endpoint": "messages/events"
This can be found in the schema for Azure ARM: https://github.com/Azure/azure-rest-api-specs/blob/current/specification/streamanalytics/resource-manager/Microsoft.StreamAnalytics/2016-03-01/examples/Input_Create_Stream_IoTHub_Avro.json
So to complete Olivier's example, when using API version 2016-03-01:
"Inputs": [
{
"Name": "Hub",
"Properties": {
"DataSource": {
"Properties": {
"consumerGroupName": "[variables('asaConsumerGroup')]",
"iotHubNamespace": "[parameters('iotHubName')]",
"sharedAccessPolicyKey": "[listkeys(variables('iotHubKeyResource'), variables('iotHubVersion')).primaryKey]",
"sharedAccessPolicyName": "[variables('iotHubKeyName')]",
"endpoint": "messages/events"
},
"Type": "Microsoft.Devices/IotHubs"
},
"Serialization": {
"Properties": {
"Encoding": "UTF8"
},
"Type": "Json"
},
"Type": "Stream"
}
}
],
That'd look like the following for the inputs part of the ASA resource:
"Inputs": [
{
"Name": "IoTHubStream",
"Properties": {
"DataSource": {
"Properties": {
"consumerGroupName": "[variables('CGName')]",
"iotHubNamespace": "[variables('iotHubName')]",
"sharedAccessPolicyKey": "[listkeys(variables('iotHubKeyResource'), variables('iotHubVersion')).primaryKey]",
"sharedAccessPolicyName": "[variables('iotHubKeyName')]"
},
"Type": "Microsoft.Devices/IotHubs"
},
"Serialization": {
"Properties": {
"Encoding": "UTF8"
},
"Type": "Json"
},
"Type": "Stream"
}
}
]
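As a rough sketch of the PowerShell side, assuming the AzureRM.StreamAnalytics module and an input definition saved to a file with literal values in place of the [variables()]/[parameters()] expressions (all names below are illustrative, and the exact parameter set is worth checking against your module version):

New-AzureRmStreamAnalyticsInput `
  -ResourceGroupName "myResourceGroup" `
  -JobName "myAsaJob" `
  -Name "IoTHubStream" `
  -File ".\iothub-input.json"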