Create a JSON file from other JSON files

I have 2 json files:
User.json:
{
"users": [
{
"username": "User1",
"app": "Git",
"role": "Manager"
},
{
"username": "user2",
"app": "Git",
"role": "Developer"
}
]
}
App.json:
{
"apps": [
{
"appName": "Git",
"repo": "http://repo1..."
},
{
"appName": "Jenkins",
"repo": "http://repo2..."
}
]
}
I'm working on an Angular-CLI application for the first time, and I want to generate a new JSON file called infos.json containing the content of the two files (User.json + App.json) without redundancy.
Expected file:
Infos.json:
{
"infos": [
{
"username": "User1",
"appName": "Git",
"role": "Manager",
"repo": "http://repo1..."
},
{
"username": "User2",
"appName": "Jenkins",
"role": "Developer",
"repo": "http://repo2..."
}
]
}
How can I do it in my Angular-CLI app?

You can do this with a task in a task runner such as Grunt or Gulp; both have ready-made packages for merging JSON files:
Grunt: npm i grunt-merge-json
Gulp: npm i gulp-merge-json
If you are using webpack, there is a plugin called merge-webpack-plugin.
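Alternatively, here is a minimal build-time sketch with jq (assuming jq is available on your machine; the two input files are recreated below for illustration). Note that joining on the app field, as the input data suggests, gives both users the Git repo, unlike the expected output shown in the question:

```shell
# Recreate the two input files from the question, then join users to
# apps on the app/appName field and write Infos.json.
cat > User.json <<'EOF'
{"users":[
  {"username":"User1","app":"Git","role":"Manager"},
  {"username":"user2","app":"Git","role":"Developer"}
]}
EOF
cat > App.json <<'EOF'
{"apps":[
  {"appName":"Git","repo":"http://repo1..."},
  {"appName":"Jenkins","repo":"http://repo2..."}
]}
EOF
jq -n --slurpfile u User.json --slurpfile a App.json '
  # build a lookup table {"Git": "http://repo1...", ...}
  ($a[0].apps | map({(.appName): .repo}) | add) as $repos
  | {infos: ($u[0].users
      | map({username, appName: .app, role, repo: $repos[.app]}))}
' > Infos.json
```

You could run this as an npm script before ng build so the generated Infos.json lands in your assets folder.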

Related

Parsing Git JSON with Regular Expressions

I am taking a GitHub JSON payload and parsing it with the Java library JsonPath. I am having a problem parsing arrays that do not have labels.
I need to send an email every time a particular file is changed in our repository.
Here is the Git Json:
{
"trigger": "push",
"payload": {
"type": "GitPush",
"before": "xxxxxxxx",
"after": "yyyyyyyy",
"branch": "branch-name",
"ref": "refs/heads/branch-name",
"repository": {
"id": 42,
"name": "repo",
"title": "repo",
"type": "GitRepository"
},
"beanstalk_user": {
"type": "Owner",
"id": 42,
"login": "username",
"email": "user@example.org",
"name": "Name Surname"
},
"commits": [
{
"type": "GitCommit",
"id": "ffffffff",
"message": "Important changes.",
"branch": "branch-name",
"author": {
"name": "Name Surname",
"email": "user@example.org"
},
"beanstalk_user": {
"type": "Owner",
"id": 42,
"login": "username",
"email": "user@example.org",
"name": "Name Surname"
},
"changed_files": {
"added": [
"NEWFILE"
],
"deleted": [
"Gemfile",
"NEWFILE"
],
"modified": [
"README.md",
"NEWFILE"
],
"copied": [
]
},
"changeset_url": "https://subdomain.github.com/repository-name/changesets/ffffffff",
"committed_at": "2014/08/18 13:30:29 +0000",
"parents": [
"afafafaf"
]
}
]
}
}
This is the expression I am using to get the commits:
$..changed_files
This returns the whole changed_files part, but I cannot explicitly select the name "NEWFILE".
I tried
$..changed_files.*[?(@.added == "NEWFILE")]
$..changed_files.*[?(@.* == "NEWFILE")]
It just returns an empty array.
I just want it to return NEWFILE and what type of change it was. Any ideas?
You can use the following JsonPath to retrieve the commits which list "NEWFILE" as an added file:
$.payload.commits[?(@.changed_files.added.indexOf("NEWFILE") != -1)]
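If the JsonPath filter keeps fighting you, the same selection is easy to sanity-check with jq from the shell (a sketch on a trimmed-down payload; the file names here are illustrative):

```shell
# Trimmed-down version of the webhook payload from the question.
cat > payload.json <<'EOF'
{"payload":{"commits":[
  {"id":"ffffffff","changed_files":{"added":["NEWFILE"],"modified":["README.md"]}},
  {"id":"eeeeeeee","changed_files":{"added":[],"modified":[]}}
]}}
EOF
# Keep only commits whose changed_files.added contains "NEWFILE".
# index() returns the position (or null), and any non-null/non-false
# value passes select().
jq '[.payload.commits[]
     | select(.changed_files.added | index("NEWFILE"))]' payload.json > hits.json
```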

How to find a key-value pair in JSON text using shell scripting with built-in Linux tools like sed?

I have a JSON file abc.json containing text:
{
"size": 3,
"limit": 25,
"isLastPage": true,
"values": [
{
"slug": "docker_apache_customised",
"id": 234889,
"name": "docker_apache_customised",
"scmId": "git",
"state": "AVAILABLE",
"statusMessage": "Available",
"forkable": true,
"project": {
"key": "UFD",
"id": 36239,
"name": "UF_docker",
"public": false,
"type": "NORMAL",
"links": {
"self": [{
"href": "https://rndwww.abc.xxx.net/git/projects/UFD"
}]
}
},
"public": false,
"links": {
"clone": [{
"href": "https://rndwww.abc.xxx.net/git/scm/ufd/docker_apache_customised.git",
"name": "http"
}, {
"href": "ssh://git@git.rnd.xxx.net/ufd/docker_apache_customised.git",
"name": "ssh"
}],
"self": [{
"href": "https://rndwww.abc.xxx.net/git/projects/UFD/repos/docker_apache_customised/browse"
}]
}
},
{
"slug": "web-software",
"id": 241533,
"name": "web-software",
"scmId": "git",
"state": "AVAILABLE",
"statusMessage": "Available",
"forkable": true,
"project": {
"key": "UFD",
"id": 36239,
"name": "UF_docker",
"public": false,
"type": "NORMAL",
"links": {
"self": [{
"href": "https://rndwww.abc.xxx.net/git/projects/UFD"
}]
}
},
"public": false,
"links": {
"clone": [{
"href": "https://rndwww.abc.xxx.net/git/scm/ufd/web-software.git",
"name": "http"
}, {
"href": "ssh://git@git.rnd.xxx.net/ufd/web-software.git",
"name": "ssh"
}],
"self": [{
"href": "https://rndwww.abc.xxx.net/git/projects/UFD/repos/web-software/browse"
}]
}
},
{
"slug": "web-loy-conf",
"id": 240959,
"name": "web-loy-conf",
"scmId": "git",
"state": "AVAILABLE",
"statusMessage": "Available",
"forkable": true,
"project": {
"key": "UFD",
"id": 36239,
"name": "UF_docker",
"public": false,
"type": "NORMAL",
"links": {
"self": [{
"href": "https://rndwww.abc.xxx.net/git/projects/UFD"
}]
}
},
"public": false,
"links": {
"clone": [{
"href": "ssh://git@git.rnd.xxx.net/ufd/web-loy-conf.git",
"name": "ssh"
}, {
"href": "https://rndwww.abc.xxx.net/git/scm/ufd/web-loy-conf.git",
"name": "http"
}],
"self": [{
"href": "https://rndwww.abc.xxx.net/git/projects/UFD/repos/web-loy-conf/browse"
}]
}
}
],
"start": 0
}
This text contains three repositories (named docker_apache_customised, web-software, web-loy-conf) in a Git project. There may be more repos containing web as a substring.
I want to perform some operation on the repositories whose names contain web as a substring, and for that I think I have to use a for loop in a shell script. I don't want to use the jq tool.
I wrote a script using the external tool jq, but I want to do it with built-in Linux tools only. The script using jq works fine:
for k in $(jq '.values | keys | .[]' abc.json); do
  value=$(jq -r ".values[$k]" abc.json)
  name=$(jq -r '.name' <<< "$value")
  if [[ $name == *"web"* ]]; then
    : # MYLOGIC
  fi
done
Expected results are the names (web-software, web-loy-conf), and being able to loop through them.
You can run jq from its current path in your Git repository; there's no need to copy it to a directory in the PATH. After adding execute permission:
value=$(<path to jq in git dir>/jq -r ".values[$k]" abc.json);
You can make it relative to git repository root
value=$(./<path to jq from git repo root>/jq -r ".values[$k]" abc.json);
Also, you can set the path to it in a variable
jqbin='./<path to jq from git repo root>/jq'
value=$($jqbin -r ".values[$k]" abc.json);
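If you really want to avoid jq entirely, a grep/sed-only loop is possible, though fragile: it relies on the pretty-printed one-pair-per-line layout shown above and will break on compact JSON. A trimmed-down abc.json is recreated here for illustration:

```shell
# Trimmed-down abc.json matching the layout from the question.
cat > abc.json <<'EOF'
{
  "values": [
    {
      "slug": "docker_apache_customised",
      "name": "docker_apache_customised"
    },
    {
      "slug": "web-software",
      "name": "web-software"
    },
    {
      "slug": "web-loy-conf",
      "name": "web-loy-conf"
    }
  ]
}
EOF
# Extract every "name" value containing "web" and loop over the results.
for name in $(grep -o '"name": "[^"]*web[^"]*"' abc.json \
                | sed 's/.*"name": "\([^"]*\)".*/\1/'); do
  echo "processing $name"   # MYLOGIC goes here
done
```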

Is it possible to create a cluster in EMR by giving all the configurations from a JSON file

I want to automate the cluster creation task in EMR. I have a JSON file
which contains the configurations that need to be applied to the new cluster, and I want to write a shell script which automates this task for me.
Is it possible to create a cluster in EMR by giving all the configurations from a JSON file?
For example I have this file
{
"Cluster": {
"Ec2InstanceAttributes": {
"EmrManagedMasterSecurityGroup": "sg-00b10b71",
"RequestedEc2AvailabilityZones": [],
"AdditionalSlaveSecurityGroups": [],
"AdditionalMasterSecurityGroups": [],
"RequestedEc2SubnetIds": [
"subnet-02291b3e"
],
"Ec2SubnetId": "subnet-02291b3e",
"IamInstanceProfile": "EMR_EC2_DefaultRole",
"Ec2KeyName": "perf_key_pair",
"Ec2AvailabilityZone": "us-east-1e",
"EmrManagedSlaveSecurityGroup": "sg-f2b30983"
},
"Name": "NitinJ-Perf",
"ServiceRole": "EMR_DefaultRole",
"Tags": [
{
"Value": "Perf-Nitink",
"Key": "Qubole"
}
],
"Applications": [
{
"Version": "3.7.2",
"Name": "Ganglia"
},
{
"Version": "2.7.3",
"Name": "Hadoop"
},
{
"Version": "2.1.1",
"Name": "Hive"
},
{
"Version": "0.16.0",
"Name": "Pig"
},
{
"Version": "0.8.4",
"Name": "Tez"
}
],
"MasterPublicDnsName": "ec2-34-229-254-217.compute-1.amazonaws.com",
"ScaleDownBehavior": "TERMINATE_AT_INSTANCE_HOUR",
"InstanceGroups": [
{
"RequestedInstanceCount": 4,
"Status": {
"Timeline": {
"ReadyDateTime": 1499150835.979,
"CreationDateTime": 1499150533.99
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "Core Instance Group",
"InstanceGroupType": "CORE",
"EbsBlockDevices": [],
"ShrinkPolicy": {},
"Id": "ig-34P3CVF8ZL5CW",
"Configurations": [],
"InstanceType": "r3.4xlarge",
"Market": "ON_DEMAND",
"RunningInstanceCount": 4
},
{
"RequestedInstanceCount": 1,
"Status": {
"Timeline": {
"ReadyDateTime": 1499150804.591,
"CreationDateTime": 1499150533.99
},
"State": "RUNNING",
"StateChangeReason": {
"Message": ""
}
},
"Name": "Master Instance Group",
"InstanceGroupType": "MASTER",
"EbsBlockDevices": [],
"ShrinkPolicy": {},
"Id": "ig-3V7EHQ36187PY",
"Configurations": [],
"InstanceType": "r3.4xlarge",
"Market": "ON_DEMAND",
"RunningInstanceCount": 1
}
],
"Configurations": [
{
"Properties": {
"hive.vectorized.execution.enabled": "true"
},
"Classification": "hive-site"
}
]
}
}
Can I create a cluster on EMR by using some command like
aws emr create-cluster --cli-input-json file://$(pwd)/emr_cluster_up.json
Not with that file as-is: the JSON you have is describe-cluster output, and its shape does not match what create-cluster accepts. The CLI's generic --cli-input-json option expects the request shape produced by aws emr create-cluster --generate-cli-skeleton, not the describe-cluster response. If you want to automate the EMR cluster creation from a JSON file, you can also use AWS CloudFormation.
Getting Started with AWS CloudFormation
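If you do want to drive create-cluster from the JSON you already have, one hedged approach is to reshape the describe-cluster output into a request with jq. The field names below follow the RunJobFlow request shape and should be verified against aws emr create-cluster --generate-cli-skeleton for your CLI version; a trimmed input file is recreated here for illustration:

```shell
# Trimmed-down describe-cluster output from the question.
cat > emr_describe.json <<'EOF'
{"Cluster":{
  "Name":"NitinJ-Perf",
  "ServiceRole":"EMR_DefaultRole",
  "Tags":[{"Value":"Perf-Nitink","Key":"Qubole"}],
  "Ec2InstanceAttributes":{"Ec2SubnetId":"subnet-02291b3e",
    "Ec2KeyName":"perf_key_pair","IamInstanceProfile":"EMR_EC2_DefaultRole"},
  "Applications":[{"Version":"2.7.3","Name":"Hadoop"}],
  "Configurations":[{"Classification":"hive-site",
    "Properties":{"hive.vectorized.execution.enabled":"true"}}],
  "InstanceGroups":[{"Name":"Master Instance Group","InstanceGroupType":"MASTER",
    "InstanceType":"r3.4xlarge","RequestedInstanceCount":1,"Market":"ON_DEMAND"}]
}}
EOF
# Reshape the describe-cluster response into a create-cluster request.
jq '{
  Name: .Cluster.Name,
  ServiceRole: .Cluster.ServiceRole,
  JobFlowRole: .Cluster.Ec2InstanceAttributes.IamInstanceProfile,
  Tags: .Cluster.Tags,
  Applications: [.Cluster.Applications[] | {Name: .Name}],
  Configurations: .Cluster.Configurations,
  Instances: {
    Ec2SubnetId: .Cluster.Ec2InstanceAttributes.Ec2SubnetId,
    Ec2KeyName: .Cluster.Ec2InstanceAttributes.Ec2KeyName,
    InstanceGroups: [.Cluster.InstanceGroups[] | {
      Name: .Name,
      InstanceRole: .InstanceGroupType,
      InstanceType: .InstanceType,
      InstanceCount: .RequestedInstanceCount,
      Market: .Market
    }]
  }
}' emr_describe.json > emr_cluster_up.json
# then: aws emr create-cluster --cli-input-json file://$(pwd)/emr_cluster_up.json
```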

Openshift: Unresolved Image

I'm stuck with OpenShift (Origin) and need some help.
Let's say I want to add a Grafana deployment via the CLI to a newly started cluster.
What I do:
Upload a template to my openshift cluster (oc create -f openshift-grafana.yml)
Pull the necessary image from the docker hub (oc import-image --confirm grafana/grafana)
Build a new app based on my template (oc new-app grafana)
These steps create the deployment config and the routes.
But then I'm not able to start a deployment via the CLI.
# oc deploy grafana
grafana deployment #1 waiting on image or update
# oc rollout latest grafana
Error from server (BadRequest): cannot trigger a deployment for "grafana" because it contains unresolved images
In the OpenShift web console, the image is there, and even the link is working. In the web console I can click "deploy" and it works. But nevertheless I'm not able to roll out a new version via the command line.
The only way it works is editing the deployment YAML so OpenShift recognizes a change and starts a deployment based on a "config change" (hint: I was not changing the image or image name).
There is nothing special in my template; it was just an export via oc export from a working config.
Any hint would be appreciated, I'm pretty much stuck.
Thanks.
I had this same issue, and I solved it by adding lastTriggeredImage to the ImageChange trigger in the deployment config YAML:
triggers:
  - type: ImageChange
    imageChangeParams:
      lastTriggeredImage: >-
        mydockerrepo.com/repo/myimage@sha256:xxxxxxxxxxxxxxxx
It looks like if it doesn't know what the last triggered image is, it won't be able to resolve it.
Included below is a template you can use as a starter. Just be aware that the grafana image appears to require running as root, else it will not start up. This means you have to override the default security model of OpenShift and allow running images as root in the project. This is not recommended. The grafana images should be fixed so as not to require root.
To enable running as root, you would need to run as a cluster admin:
oc adm policy add-scc-to-user anyuid -z default -n myproject
where myproject is the name of the project you are using.
I applied it to the default service account, but it would be better to create a separate service account, apply it to that, and then change the template so that only Grafana runs as that service account.
It is possible that the intent is that you override the default settings through the grafana.ini file so it uses your mounted emptyDir directories and then it isn't an issue. I didn't attempt to provide any override config.
The template for Grafana would then be as follows. Note I have used JSON, as I find it easier to work with and it avoids indentation being mangled, which would make the YAML unusable.
Before you use this template, you should obviously create the corresponding config map where name is of form ${APPLICATION_NAME}-config where ${APPLICATION_NAME} is grafana unless you override it when using the template. The key in the config map should be grafana.ini and then have as value the config file contents.
{
"apiVersion": "v1",
"kind": "Template",
"metadata": {
"name": "grafana"
},
"parameters": [
{
"name": "APPLICATION_NAME",
"value": "grafana",
"from": "[a-zA-Z0-9]",
"required": true
}
],
"objects": [
{
"apiVersion": "v1",
"kind": "ImageStream",
"metadata": {
"name": "${APPLICATION_NAME}-img",
"labels": {
"app": "${APPLICATION_NAME}"
}
},
"spec": {
"tags": [
{
"name": "latest",
"from": {
"kind": "DockerImage",
"name": "grafana/grafana"
}
}
]
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "${APPLICATION_NAME}",
"deploymentconfig": "${APPLICATION_NAME}"
},
"template": {
"metadata": {
"labels": {
"app": "${APPLICATION_NAME}",
"deploymentconfig": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"containers": [
{
"name": "grafana",
"image": "${APPLICATION_NAME}-img:latest",
"imagePullPolicy": "Always",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 3000,
"scheme": "HTTP"
},
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"ports": [
{
"containerPort": 3000,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/etc/grafana",
"name": "grafana-1"
},
{
"mountPath": "/var/lib/grafana",
"name": "grafana-2"
},
{
"mountPath": "/var/log/grafana",
"name": "grafana-3"
}
]
}
],
"volumes": [
{
"configMap": {
"defaultMode": 420,
"name": "${APPLICATION_NAME}-config"
},
"name": "grafana-1"
},
{
"emptyDir": {},
"name": "grafana-2"
},
{
"emptyDir": {},
"name": "grafana-3"
}
]
}
},
"test": false,
"triggers": [
{
"type": "ConfigChange"
},
{
"imageChangeParams": {
"automatic": true,
"containerNames": [
"grafana"
],
"from": {
"kind": "ImageStreamTag",
"name": "${APPLICATION_NAME}-img:latest"
}
},
"type": "ImageChange"
}
]
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"ports": [
{
"name": "3000-tcp",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"deploymentconfig": "${APPLICATION_NAME}"
},
"type": "ClusterIP"
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"host": "",
"port": {
"targetPort": "3000-tcp"
},
"to": {
"kind": "Service",
"name": "${APPLICATION_NAME}",
"weight": 100
}
}
}
]
}
For me, I had the image name incorrect under from:
triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
        - alcatraz-ha
      from:
        kind: ImageStreamTag
        name: 'alcatraz-haproxy:latest'
        namespace: alcatraz-ha-dev
I had name: 'alcatraz-ha:latest', so it could not find the image.
Make sure that spec.triggers.imageChangeParams.from.name exists as an image stream:
triggers:
  - imageChangeParams:
      from:
        kind: ImageStreamTag
        name: 'myapp:latest' # Does "myapp" exist if you run oc get is?

Add or Update object in array in JSON using jq

I'm trying to use jq to parse a JSON (the output of the OpenShift oc process ... command, actually), and add/update the env array of a container with a new key/value pair.
Sample input:
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"description": "Exposes and load balances the node.js application pods"
},
"name": "myapp-web"
},
"spec": {
"ports": [
{
"name": "web",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"name": "myapp"
}
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "myapp-web"
},
"spec": {
"host": "app.internal.io",
"port": {
"targetPort": "web"
},
"to": {
"kind": "Service",
"name": "myapp-web"
}
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"annotations": {
"description": "Defines how to deploy the application server"
},
"name": "myapp"
},
"spec": {
"replicas": 1,
"selector": {
"name": "myapp"
},
"strategy": {
"type": "Rolling"
},
"template": {
"metadata": {
"labels": {
"name": "myapp"
},
"name": "myapp"
},
"spec": {
"containers": [
{
"env": [
{
"name": "A_ENV",
"value": "a-value"
}
],
"image": "node",
"name": "myapp-node",
"ports": [
{
"containerPort": 3000,
"name": "app",
"protocol": "TCP"
}
]
}
]
}
},
"triggers": [
{
"type": "ConfigChange"
}
]
}
}
]
}
In this JSON, I want to do the following:
Find the DeploymentConfig object
Check if it has the env array in the first container
If it does, add a new object {"name": "B_ENV", "value": "b-value"} in it
If it does not, add the env array, with the object {"name": "B_ENV", "value": "b-value"} in it
So far, I'm able to tackle part of this, where I'm able to find the concerned object, and add the new env var to the container:
oc process -f <dc.yaml> -o json | jq '.items | map(if .kind == "DeploymentConfig"
then .spec.template.spec.containers[0].env |= .+ [{"name": "B_ENV", "value": "b-value"}]
else .
end)'
This is able to insert the new env var as expected, but the output is an array as shown below. Also, it doesn't handle the part where the env array may not be there at all.
I want to be able to produce the same output as the input, but with the new env var added.
Sample output:
[
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"description": "Exposes and load balances the node.js application pods"
},
"name": "myapp-web"
},
"spec": {
"ports": [
{
"name": "web",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"name": "myapp"
}
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "myapp-web"
},
"spec": {
"host": "app.internal.io",
"port": {
"targetPort": "web"
},
"to": {
"kind": "Service",
"name": "myapp-web"
}
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"annotations": {
"description": "Defines how to deploy the application server"
},
"name": "myapp"
},
"spec": {
"replicas": 1,
"selector": {
"name": "myapp"
},
"strategy": {
"type": "Rolling"
},
"template": {
"metadata": {
"labels": {
"name": "myapp"
},
"name": "myapp"
},
"spec": {
"containers": [
{
"env": [
{
"name": "A_ENV",
"value": "a-value"
},
{
"name": "B_ENV",
"value": "b-value"
}
],
"image": "node",
"name": "myapp-node",
"ports": [
{
"containerPort": 3000,
"name": "app",
"protocol": "TCP"
}
]
}
]
}
},
"triggers": [
{
"type": "ConfigChange"
}
]
}
}
]
Is this doable, or is it too much to do with jq, and should I do this in Python or Node instead?
EDIT 1:
I just realized that conditional add/update of env array is already handled by the |= syntax!
So, I basically just need to be able to get the same structure back as the input, with the relevant env var added in the concerned array.
You pretty much had it, but you'll want to restructure your filter to preserve the full result: make sure that none of the filters changes the context. By starting off with .items, you changed the context from the root object to the items array. That in and of itself is not a problem, but what you do with it matters. Keep in mind that assignments/updates preserve the original context in the result, with the change applied. So if you write your filter in terms of an update, it will work for you.
So you'll want to find the env array of the first container in the DeploymentConfig item. First let's find it:
.items[] | select(.kind == "DeploymentConfig").spec.template.spec.containers[0].env
You don't need to do anything else in terms of error handling since select will simply produce no result. From there, you just need to update the array by adding the new value.
(.items[] | select(.kind == "DeploymentConfig").spec.template.spec.containers[0].env) +=
[{name:"B_ENV",value:"b-value"}]
If the array exists, it will add the new item. If not, it will create a new env array. If env is not an array, that would be a different problem.
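Putting it together, here is a runnable sketch of the final filter on a trimmed-down List (file names are illustrative):

```shell
# Trimmed-down oc process output; the container already has one env var.
cat > list.json <<'EOF'
{"kind":"List","apiVersion":"v1","items":[
  {"kind":"Service","metadata":{"name":"myapp-web"}},
  {"kind":"DeploymentConfig","metadata":{"name":"myapp"},
   "spec":{"template":{"spec":{"containers":[
     {"name":"myapp-node","env":[{"name":"A_ENV","value":"a-value"}]}
   ]}}}}
]}
EOF
# Update-assignment on a path: the root object is preserved, and only
# the selected env array is modified (or created, if it was missing).
jq '(.items[]
     | select(.kind == "DeploymentConfig")
       .spec.template.spec.containers[0].env) +=
    [{name: "B_ENV", value: "b-value"}]' list.json > out.json
```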