I'm trying to use jq to parse some JSON (the output of the OpenShift oc process ... command, actually) and add/update the env array of a container with a new key/value pair.
Sample input:
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"description": "Exposes and load balances the node.js application pods"
},
"name": "myapp-web"
},
"spec": {
"ports": [
{
"name": "web",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"name": "myapp"
}
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "myapp-web"
},
"spec": {
"host": "app.internal.io",
"port": {
"targetPort": "web"
},
"to": {
"kind": "Service",
"name": "myapp-web"
}
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"annotations": {
"description": "Defines how to deploy the application server"
},
"name": "myapp"
},
"spec": {
"replicas": 1,
"selector": {
"name": "myapp"
},
"strategy": {
"type": "Rolling"
},
"template": {
"metadata": {
"labels": {
"name": "myapp"
},
"name": "myapp"
},
"spec": {
"containers": [
{
"env": [
{
"name": "A_ENV",
"value": "a-value"
}
],
"image": "node",
"name": "myapp-node",
"ports": [
{
"containerPort": 3000,
"name": "app",
"protocol": "TCP"
}
]
}
]
}
},
"triggers": [
{
"type": "ConfigChange"
}
]
}
}
]
}
In this JSON, I want to do the following:
Find the DeploymentConfig object
Check if it has the env array in the first container
If it does, add a new object {"name": "B_ENV", "value": "b-value"} in it
If it does not, add the env array, with the object {"name": "B_ENV", "value": "b-value"} in it
So far I can tackle part of this: I can find the object in question and add the new env var to its container:
oc process -f <dc.yaml> -o json | jq '.items | map(if .kind == "DeploymentConfig"
then .spec.template.spec.containers[0].env |= .+ [{"name": "B_ENV", "value": "b-value"}]
else .
end)'
This inserts the new env var as expected, but the output is a bare array, as shown below. It also doesn't handle the case where the env array isn't there at all.
I want to produce the same structure as the input, only with the new env var added.
Sample output:
[
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"description": "Exposes and load balances the node.js application pods"
},
"name": "myapp-web"
},
"spec": {
"ports": [
{
"name": "web",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"name": "myapp"
}
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "myapp-web"
},
"spec": {
"host": "app.internal.io",
"port": {
"targetPort": "web"
},
"to": {
"kind": "Service",
"name": "myapp-web"
}
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"annotations": {
"description": "Defines how to deploy the application server"
},
"name": "myapp"
},
"spec": {
"replicas": 1,
"selector": {
"name": "myapp"
},
"strategy": {
"type": "Rolling"
},
"template": {
"metadata": {
"labels": {
"name": "myapp"
},
"name": "myapp"
},
"spec": {
"containers": [
{
"env": [
{
"name": "A_ENV",
"value": "a-value"
},
{
"name": "B_ENV",
"value": "b-value"
}
],
"image": "node",
"name": "myapp-node",
"ports": [
{
"containerPort": 3000,
"name": "app",
"protocol": "TCP"
}
]
}
]
}
},
"triggers": [
{
"type": "ConfigChange"
}
]
}
}
]
Is this doable with jq, or is it too much, and should I do this in Python or Node instead?
EDIT 1:
I just realized that the conditional add/update of the env array is already handled by the |= syntax!
So I basically just need to get the same structure back as the input, with the relevant env var added to the array in question.
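For example, with a trimmed-down object (a minimal sketch; if env is missing, jq treats it as null, and null + [x] produces [x], so |= creates the array on the fly):
echo '{"image": "node"}' | jq -c '.env |= . + [{"name": "B_ENV", "value": "b-value"}]'
# => {"image":"node","env":[{"name":"B_ENV","value":"b-value"}]}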
You pretty much had it, but you'll want to restructure your filter to preserve the full result. You want to make sure that none of the filters changes the context. By starting off with .items, you changed the context from your root object to the items array. That in and of itself is not so much a problem, but what you do with it matters. Keep in mind that assignments/updates preserve the original context, yielding the whole input with the change applied. So if you write your filter in terms of an update, it will work for you.
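A tiny illustration of that behaviour (a minimal sketch with a made-up object):
echo '{"a": {"b": [1]}}' | jq -c '(.a.b) += [2]'
# => {"a":{"b":[1,2]}}   the whole object comes back, not just the updated array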
So to do this, you'll want to find the env array of the first container in the DeploymentConfig item. First let's find that:
.items[] | select(.kind == "DeploymentConfig").spec.template.spec.containers[0].env
You don't need to do anything else in terms of error handling since select will simply produce no result. From there, you just need to update the array by adding the new value.
(.items[] | select(.kind == "DeploymentConfig").spec.template.spec.containers[0].env) +=
[{name:"B_ENV",value:"b-value"}]
If the array exists, it will add the new item. If not, it will create a new env array. If env is not an array, that would be a different problem.
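Putting it together with your original command, the full pipeline looks something like this (same <dc.yaml> placeholder as in your question):
oc process -f <dc.yaml> -o json | jq '(.items[]
  | select(.kind == "DeploymentConfig").spec.template.spec.containers[0].env)
  += [{"name": "B_ENV", "value": "b-value"}]'
Because this is an update, the output keeps the original List structure (kind, apiVersion, metadata, items) intact, with only B_ENV appended to the container's env.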
I have the following output coming from a step function task: ListObjectsV2
{
"Contents": [
{
"ETag": "\"86c12c034bc6c30cb89b500b954c188f\"",
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_1.csv",
"LastModified": "2023-02-09T13:46:20Z",
"Size": 796014,
"StorageClass": "STANDARD"
},
{
"ETag": "\"58e4a770e0f66073b00d185df500f07f\"",
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_2.csv",
"LastModified": "2023-02-09T13:47:20Z",
"Size": 934038,
"StorageClass": "STANDARD"
},
{
"ETag": "\"460abd0de64d5cb67e8f0d46878cb1ef\"",
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_3.csv",
"LastModified": "2023-02-09T13:46:57Z",
"Size": 794264,
"StorageClass": "STANDARD"
},
{
"ETag": "\"1bfedc3dc92e4ba8d04e24b9b5a0ed58\"",
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_4.csv",
"LastModified": "2023-02-09T13:46:24Z",
"Size": 788756,
"StorageClass": "STANDARD"
},
{
"ETag": "\"9d6c434ce5ebdf203a790fbcf19338dc\"",
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_5.csv",
"LastModified": "2023-02-09T13:47:07Z",
"Size": 831156,
"StorageClass": "STANDARD"
}
],
"IsTruncated": false,
"KeyCount": 5,
"MaxKeys": 1000,
"Name": "vita-internal-text-classification-dev-183576513728",
"Prefix": "55271f52fffe4461a2ee3228ebb97157"
}
I want to have an array containing only the Key key, to pass to the next state, like so:
[
{
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_1.csv"
},
{
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_2.csv"
},
{
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_3.csv"
},
{
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_4.csv"
},
{
"Key": "55271f52fffe4461a2ee3228ebb97157/input/batch_5.csv"
}
]
So far I've tried setting the ResultPath to:
$.Contents[*].Key
$.Contents[*].['Key']
What I get is:
[
"55271f52fffe4461a2ee3228ebb97157/input/batch_1.csv",
"55271f52fffe4461a2ee3228ebb97157/input/batch_2.csv",
"55271f52fffe4461a2ee3228ebb97157/input/batch_3.csv",
"55271f52fffe4461a2ee3228ebb97157/input/batch_4.csv",
"55271f52fffe4461a2ee3228ebb97157/input/batch_5.csv",
]
But that's a flat array of strings, not the array of objects I need. Any help?
The way I've solved this is to use an Inline Map state with a Pass state to build the necessary format. You can see this pattern in an example here for how to use Step Functions Distributed Map to bulk delete objects from S3; look at the inner Create Object Identifier Array Map state. If you were doing this in Standard Workflows, this could be a cost concern given the number of state transitions involved. But since the Item Processor uses Express Workflows, which are billed by duration (and these are super fast), it works pretty well. The full state machine is below; a pared-down sketch for your specific case follows it.
{
"Comment": "A state machine to bulk delete objects from S3 using Distributed Map",
"StartAt": "Confirm Bucket Provided",
"States": {
"Confirm Bucket Provided": {
"Type": "Choice",
"Choices": [
{
"Not": {
"Variable": "$.bucket",
"IsPresent": true
},
"Next": "Fail - No Bucket"
}
],
"Default": "Check for Prefix"
},
"Check for Prefix": {
"Type": "Choice",
"Choices": [
{
"Not": {
"Variable": "$.prefix",
"IsPresent": true
},
"Next": "Generate Parameters - Without Prefix"
}
],
"Default": "Generate Parameters - With Prefix"
},
"Generate Parameters - Without Prefix": {
"Type": "Pass",
"Parameters": {
"Bucket.$": "$.bucket",
"Prefix": ""
},
"ResultPath": "$.list_parameters",
"Next": "Delete Objects from S3 Bucket"
},
"Fail - No Bucket": {
"Type": "Fail",
"Error": "InsuffcientArguments",
"Cause": "No Bucket was provided"
},
"Generate Parameters - With Prefix": {
"Type": "Pass",
"Next": "Delete Objects from S3 Bucket",
"Parameters": {
"Bucket.$": "$.bucket",
"Prefix.$": "$.prefix"
},
"ResultPath": "$.list_parameters"
},
"Delete Objects from S3 Bucket": {
"Type": "Map",
"ItemProcessor": {
"ProcessorConfig": {
"Mode": "DISTRIBUTED",
"ExecutionType": "EXPRESS"
},
"StartAt": "Create Object Identifier Array",
"States": {
"Create Object Identifier Array": {
"Type": "Map",
"ItemProcessor": {
"ProcessorConfig": {
"Mode": "INLINE"
},
"StartAt": "Create Object Identifier",
"States": {
"Create Object Identifier": {
"Type": "Pass",
"End": true,
"Parameters": {
"Key.$": "$.Key"
}
}
}
},
"ItemsPath": "$.Items",
"ResultPath": "$.object_identifiers",
"Next": "Delete Objects"
},
"Delete Objects": {
"Type": "Task",
"Next": "Clear Output",
"Parameters": {
"Bucket.$": "$.BatchInput.bucket",
"Delete": {
"Objects.$": "$.object_identifiers"
}
},
"Resource": "arn:aws:states:::aws-sdk:s3:deleteObjects",
"Retry": [
{
"ErrorEquals": [
"States.ALL"
],
"BackoffRate": 2,
"IntervalSeconds": 1,
"MaxAttempts": 6
}
],
"ResultSelector": {
"Deleted.$": "$.Deleted",
"RetryCount.$": "$$.State.RetryCount"
}
},
"Clear Output": {
"Type": "Pass",
"End": true,
"Result": {}
}
}
},
"ItemReader": {
"Resource": "arn:aws:states:::s3:listObjectsV2",
"Parameters": {
"Bucket.$": "$.list_parameters.Bucket",
"Prefix.$": "$.list_parameters.Prefix"
}
},
"MaxConcurrency": 5,
"Label": "S3objectkeys",
"ItemBatcher": {
"MaxInputBytesPerBatch": 204800,
"MaxItemsPerBatch": 1000,
"BatchInput": {
"bucket.$": "$.list_parameters.Bucket"
}
},
"ResultSelector": {},
"End": true
}
}
}
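Applied to your specific case, a pared-down sketch of the same Map-plus-Pass pattern could look like this (the state and field names here are hypothetical, and it assumes the ListObjectsV2 result is already in the state input under Contents):
{
  "StartAt": "Build Key Array",
  "States": {
    "Build Key Array": {
      "Type": "Map",
      "ItemsPath": "$.Contents",
      "ItemProcessor": {
        "ProcessorConfig": { "Mode": "INLINE" },
        "StartAt": "Pick Key",
        "States": {
          "Pick Key": {
            "Type": "Pass",
            "Parameters": { "Key.$": "$.Key" },
            "End": true
          }
        }
      },
      "ResultPath": "$.object_keys",
      "End": true
    }
  }
}
The inner Pass state turns each Contents entry into {"Key": "..."}, and the Map state collects those into an array at $.object_keys, which is the shape you're after.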
I am trying to dynamically render a spec (specification) of a collection from an RPC, but I can't get it to work. I have attached the code of both the module's mappable parameters and the remote procedure's communication below.
module -> mappable parameters
[
{
"name": "birdId",
"type": "select",
"label": "Bird Name",
"required": true,
"options": {
"store": "rpc://selectbird",
"nested": [
{
"name": "variables",
"type": "collection",
"label": "Bird Variables",
"spec": [
"rpc://birdVariables"
]
}
]
}
}
]
remote procedure -> communication
{
"url": "/bird/get-variables",
"method": "POST",
"body": {
"birdId": "{{parameters.birdId}}"
},
"headers": {
"Authorization": "Apikey {{connection.apikey}}"
},
"response": {
"iterate":{
"container": "{{body.data}}"
},
"output": {
"name": "{{item.name}}",
"label": "{{item.label}}",
"type": "{{item.type}}"
}
}
}
Thanks in advance.
I just tried the following and it worked. According to Integromat's docs, you can use the wrapper directive for the RPC like so:
{
"url": "/bird/get-variables",
"method": "POST",
"body": {
"birdId": "{{parameters.birdId}}"
},
"headers": {
"Authorization": "Apikey {{connection.apikey}}"
},
"response": {
"iterate":"{{body.data}}",
"output": {
"name": "{{item.name}}",
"label": "{{item.label}}",
"type": "{{item.type}}"
},
"wrapper": [{
"name": "variables",
"type": "collection",
"label": "Bird Variables",
"spec": "{{output}}"
}]
}
}
Your mappable parameters would then look like:
[
{
"name": "birdId",
"type": "select",
"label": "Bird Name",
"required": true,
"options": {
"store": "rpc://selectbird",
"nested": "rpc://birdVariables"
}
}
]
I need this myself. I'm pulling in custom fields that have different types, but I'd like them all to show so the user can update custom fields, or set them when creating a contact. I'm not sure whether it's best to show them all, or to use a select drop-down and let the user map more than one.
Here is my response from a GET for custom fields. Could you show how my code should look? I got a little confused, as I usually look to add a value in the output. And do you need two separate RPCs in Integromat? I noticed your store and nested were different.
{
"customFields": [
{
"id": "5sCdYXDx5QBau2m2BxXC",
"name": "Your Experience",
"fieldKey": "contact.your_experience",
"dataType": "LARGE_TEXT",
"position": 0
},
{
"id": "RdrFtK2hIzJLmuwgBtAr",
"name": "Assisted by",
"fieldKey": "contact.assisted_by",
"dataType": "MULTIPLE_OPTIONS",
"position": 0,
"picklistOptions": [
"Tom",
"Jill",
"Rick"
]
},
{
"id": "uyjmfZwo0PCDJKg2uqrt",
"name": "Is contacted",
"fieldKey": "contact.is_contacted",
"dataType": "CHECKBOX",
"position": 0,
"picklistOptions": [
"I would like to be contacted"
]
}
]
}
I am creating multiple resource policies (backup policies in a Recovery Services vault) for multiple environments. I was able to create them for one environment; how do I replicate them using a nested copy for QA?
They will have policy names such as AZR-QA-SQL-1HOUR-POLICY-001.
Any help is appreciated.
"variables": {
"sqlDevPolicyName": [
"[concat('AZR-DEV-SQL-1HOUR-POLICY-001')]",
"[concat('AZR-DEV-SQL-4HOUR-POLICY-001')]",
"[concat('AZR-DEV-SQL-8HOUR-POLICY-001')]"
]
},
"resources": [
{
"type": "Microsoft.RecoveryServices/vaults",
"apiVersion": "2018-01-10",
"name": "[parameters('vaultName')]",
"location": "[parameters('location')]",
"sku": {
"name": "RS0",
"tier": "Standard"
},
"properties": {}
},
{
"apiVersion": "2018-01-10",
"name": "[concat(parameters('vaultName'), '/', variables('sqlPolicyName')[copyIndex()])]",
"type": "Microsoft.RecoveryServices/vaults/backupPolicies",
"dependsOn": [
"[concat('Microsoft.RecoveryServices/vaults/', parameters('vaultName'))]"
],
"copy": {
"name": "policies",
"count": "[length(variables('sqlDevPolicyName'))]"
},
"location": "[parameters('location')]",
"properties": {
"backupManagementType": "AzureWorkload",
"protectedItemsCount": 0,
"settings": {
"isCompression": false,
"issqlcompression": false,
"timeZone": "[parameters('timeZone')]"
},
"subProtectionPolicy": [
{
"policyType": "Full",
"retentionPolicy": {
"retentionPolicyType": "LongTermRetentionPolicy",
"weeklySchedule": {
"daysOfTheWeek": [
"Sunday"
],
"retentionDuration": {
"count": 15,
"durationType": "Weeks"
},
"retentionTimes": "[parameters('scheduleRunTimes')]"
}
},
According to my understanding of what you are saying, you need to do this:
"variables": {
"sqlQAPolicyName": [ // you don't need concat() here
"AZR-QA-SQL-1HOUR-POLICY-001",
"AZR-QA-SQL-4HOUR-POLICY-001",
"AZR-QA-SQL-8HOUR-POLICY-001"
]
},
"resources": [
{
// the same backupPolicies resource as before, defined a second time, since you now have two sets of resources
// it needs to use your QA variable (sqlQAPolicyName) to create the backup policies for QA
...
}
]
At least, this is how I see it.
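Concretely, that second resource could look something like the sketch below (properties abbreviated; the copy loop is given its own name so it doesn't clash with the DEV loop, and the same vaultName and location parameters are assumed):
{
  "apiVersion": "2018-01-10",
  "name": "[concat(parameters('vaultName'), '/', variables('sqlQAPolicyName')[copyIndex('qaPolicies')])]",
  "type": "Microsoft.RecoveryServices/vaults/backupPolicies",
  "dependsOn": [
    "[concat('Microsoft.RecoveryServices/vaults/', parameters('vaultName'))]"
  ],
  "copy": {
    "name": "qaPolicies",
    "count": "[length(variables('sqlQAPolicyName'))]"
  },
  "location": "[parameters('location')]",
  "properties": {
    // same backupManagementType, settings and subProtectionPolicy as the DEV policies
  }
}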
I'm debugging log output from kubectl that states:
Error from server (BadRequest): a container name must be specified for pod postgres-operator-49202276-bjtf4, choose one of: [apiserver postgres-operator]
OK, so that's an explanatory error message, but looking at my JSON template it ought to just create both containers specified, correct? What am I missing? (please forgive my ignorance.)
I'm using just a standard kubectl create -f command on the JSON file, within a shell script. The JSON deployment file is as follows:
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "postgres-operator"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"name": "postgres-operator"
}
},
"spec": {
"containers": [{
"name": "apiserver",
"image": "$CCP_IMAGE_PREFIX/apiserver:$CO_IMAGE_TAG",
"imagePullPolicy": "IfNotPresent",
"env": [{
"name": "DEBUG",
"value": "true"
}],
"volumeMounts": [{
"mountPath": "/config",
"name": "apiserver-conf",
"readOnly": true
}, {
"mountPath": "/operator-conf",
"name": "operator-conf",
"readOnly": true
}]
}, {
"name": "postgres-operator",
"image": "$CCP_IMAGE_PREFIX/postgres-operator:$CO_IMAGE_TAG",
"imagePullPolicy": "IfNotPresent",
"env": [{
"name": "DEBUG",
"value": "true"
}, {
"name": "NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}, {
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.name"
}
}
}],
"volumeMounts": [{
"mountPath": "/operator-conf",
"name": "operator-conf",
"readOnly": true
}]
}],
"volumes": [{
"name": "operator-conf",
"configMap": {
"name": "operator-conf"
}
}, {
"name": "apiserver-conf",
"configMap": {
"name": "apiserver-conf"
}
}]
}
}
}
}
If a pod has more than one container, then you need to provide the name of the specific container.
In your case, there is a pod (postgres-operator-49202276-bjtf4) which has two containers (apiserver and postgres-operator).
The following commands will provide logs for the specific containers:
kubectl logs deployment/postgres-operator -c apiserver
kubectl logs deployment/postgres-operator -c postgres-operator
A container name must be given if the pod has more than one container (as mentioned in the above answers).
To see all the container images inside a pod, we can use:
kubectl -n <NAMESPACE> get pods <POD_NAME> -o jsonpath="{..image}"
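To get the container names themselves, and then fetch logs from one specific container of the pod in the error message (same <NAMESPACE> placeholder as above):
kubectl -n <NAMESPACE> get pod postgres-operator-49202276-bjtf4 -o jsonpath="{.spec.containers[*].name}"
kubectl -n <NAMESPACE> logs postgres-operator-49202276-bjtf4 -c apiserver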
I have a file containing the response of a curl request, in JSON. Each object has a set of values, but the keys are the same across all of the objects. See the code below.
These objects are part of a larger object called workflow. The Cleaning up object is the last process that runs in our workflow. For every video that passes through the workflow, a JSON file in this format is created. (There are more than just these three objects; this is only for illustrative purposes.)
I want to take the value of completed of the object with "description": "Cleaning up" and store it as a variable $end_time. Then I want to take the value of completed of the object with "description": "Ingest" and store it as a variable $start_time. These two values are then subtracted to give me an integer time in milliseconds so I can calculate the time it took for the video to go through this part of the process. With the maths part I am fine, and know how to do it. It is the extraction of the values that I am struggling with.
I hope this makes sense. Any help would be appreciated. Thank you in advance!
EDIT: Had to delete original code in post, due to character limitations
Here is a proper example of the file that I have to work with:
{
"workflows": {
"count": "20",
"searchTime": "1",
"startPage": "0",
"totalCount": "1",
"workflow": {
"configurations": {
"configuration": [
{
"$": "1409750880000",
"key": "schedule.start"
},
{
"$": "1409755980000",
"key": "schedule.stop"
},
{
"$": "Capture_agent",
"key": "schedule.location"
},
{
"$": "false",
"key": "trimHold"
},
{
"$": "true",
"key": "archiveOp"
},
{
"$": "false",
"key": "captionHold"
},
{
"$": "false",
"key": "videoPreview"
}
]
},
"creator": {
"organization": "mh_default_org",
"roles": [
"76b1bdde-a080-40a4-b929-bde89af6a0a8_Instructor",
"ROLE_ADMIN",
"ROLE_ANONYMOUS",
"ROLE_USER"
],
"userName": "user_name"
},
"description": "This workflow definition defines the steps involved in scheduling a recording, capturing it, and\n ingesting it, after which processing operations may be added.\n ",
"errors": "",
"id": "15518",
"mediapackage": {
"attachments": "",
"creators": {
"creator": "Name"
},
"id": "2d25ed19-2978-458d-a4a0-c9c56d791c68",
"license": "Creative Commons 3.0: Attribution-NonCommercial-NoDerivs",
"media": "",
"metadata": "",
"publications": {
"publication": {
"channel": "engage-player",
"id": "b7b68f91-2c33-4673-ba7c-2e9b891788f9",
"mimetype": "text/html",
"tags": "",
"url": "http://some.url.com:80/engage/ui/watch.html?id=2d25ed19-2978-458d-a4a0-c9c56d791c68"
}
},
"series": "76b1bdde-a080-40a4-b929-bde89af6a0a8",
"seriestitle": "Recording_Title_user_name",
"start": "2014-09-03T13:28:00Z",
"title": "Recording_Title"
},
"operations": {
"operation": [
{
"abortable": "false",
"completed": 1409750882092,
"configurations": {
"configuration": [
{
"$": "1409750880000",
"key": "schedule.start"
},
{
"$": "1409755980000",
"key": "schedule.stop"
},
{
"$": "Capture_agent",
"key": "schedule.location"
}
]
},
"continuable": "false",
"description": "Scheduled",
"execution-history": "",
"execution-host": "http://some.url.com:8080",
"fail-on-error": "true",
"failed-attempts": "0",
"hold-action-title": "View schedule",
"holdurl": "/workflow/hold/org.opencastproject.workflow.handler.scheduleworkflowoperationhandler",
"id": "schedule",
"job": "15519",
"max-attempts": "1",
"retry-strategy": "none",
"started": 1409750881745,
"state": "SUCCEEDED",
"time-in-queue": 0
},
{
"abortable": "false",
"configurations": "",
"continuable": "false",
"description": "Capture",
"execution-history": "",
"execution-host": "http://some.url.com:8080",
"fail-on-error": "true",
"failed-attempts": "0",
"hold-action-title": "Monitor capture",
"holdurl": "/workflow/hold/org.opencastproject.workflow.handler.captureworkflowoperationhandler",
"id": "capture",
"job": "42894",
"max-attempts": "1",
"retry-strategy": "none",
"started": 1409750884085,
"state": "SKIPPED",
"time-in-queue": 0
},
{
"completed": 1409756171224,
"configurations": "",
"description": "Ingest",
"execution-history": "",
"fail-on-error": "true",
"failed-attempts": "0",
"id": "ingest",
"max-attempts": "1",
"retry-strategy": "none",
"state": "SUCCEEDED"
},
{
"completed": 1409854379552,
"configurations": {
"configuration": {
"key": "preserve-flavors"
}
},
"description": "Cleaning up",
"execution-history": "",
"execution-host": "http://some.url.com:8080",
"fail-on-error": "false",
"failed-attempts": "0",
"id": "cleanup",
"job": "45113",
"max-attempts": "1",
"retry-strategy": "none",
"started": 1409854378128,
"state": "SUCCEEDED",
"time-in-queue": 0
}
]
},
"organization": {
"adminRole": "ROLE_ADMIN",
"anonymousRole": "ROLE_ANONYMOUS",
"id": "mh_default_org",
"name": "Opencast Project",
"properties": {
"property": [
{
"$": "true",
"key": "adminui.i18n_tab_episode.enable"
},
{
"$": "false",
"key": "adminui.i18n_tab_users.enable"
},
{
"$": "/engage/ui/img/mh_logos/OpencastLogo.png",
"key": "logo_small"
},
{
"$": "http://opencast.org/matterhorn/",
"key": "engageui.link_mobile_redirect.url"
},
{
"$": "false",
"key": "engageui.annotations.enable"
},
{
"$": "true",
"key": "engageui.links_media_module.enable"
},
{
"$": "2024",
"key": "adminui.chunksize"
},
{
"$": "false",
"key": "adminui.series_prepopulate.enable"
},
{
"$": "true",
"key": "engageui.link_download.enable"
},
{
"$": "false",
"key": "engageui.link_mobile_redirect.enable"
},
{
"$": "For more information have a look at the official site.",
"key": "engageui.link_mobile_redirect.description"
},
{
"$": "/engage/ui/img/mh_logos/MatterhornLogo_large.png",
"key": "logo_large"
}
]
},
"servers": {
"server": {
"name": "localhost",
"port": "8080"
}
}
},
"parent": {
"nil": "true"
},
"state": "SUCCEEDED",
"template": "full",
"title": "Scheduled Workflow"
}
}
}
Here is a jq example that should point you to getting what you want:
#!/bin/bash
# Assuming the json is in a file workflow.json
end_time=$( jq '.workflows.workflow.operations.operation[] | select(.description == "Cleaning up") | .completed' < workflow.json )
start_time=$( jq '.workflows.workflow.operations.operation[] | select(.description == "Ingest") | .completed' < workflow.json )
This assumes your input has the structure shown above, with the operation array nested under workflows.workflow.operations at the top level. Here's this on the command line:
$ jq '.workflows.workflow.operations.operation[] | select(.description == "Ingest") | .completed' < workflow.json
1406051539118
$ jq '.workflows.workflow.operations.operation[] | select(.description == "Cleaning up") | .completed' < workflow.json
1406051695440
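From there, the arithmetic part is just shell math (a plain bash sketch, assuming both variables were captured as above):
duration_ms=$(( end_time - start_time ))
echo "Processing took ${duration_ms} ms"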