How can I update the JSON output of sudo knife node edit fqdn -c /etc/chef/client.rb from a bash script?

Here is the command that I run:
sudo knife node edit fqdn -c /etc/chef/client.rb
After hitting enter, it shows the output below:
{
  "name": "test",
  "chef_environment": "standard_chef_environment",
  "normal": {
    "httpd": {
      "fips_mode_enable": "false"
    },
    "enable_fips_mode": false,
    "props": {
I wanted to add a few lines under props using the following command, but it fails:
sudo knife node edit fqdn -c /etc/chef/client.rb | jq '.props |= . + { "ParameterKey": "Foo4", "ParameterValue": "Bar4" }'

The props key is nested under normal, so you need .normal.props or similar.
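With the corrected path it would look like the following sketch (node.json is a trimmed stand-in for the JSON that knife prints; ParameterKey/ParameterValue are the keys from the question):

```shell
# Trimmed stand-in for the JSON that knife prints for the node.
cat > node.json <<'EOF'
{
  "name": "test",
  "normal": {
    "httpd": { "fips_mode_enable": "false" },
    "props": {}
  }
}
EOF

# props lives under normal, so address it as .normal.props.
jq '.normal.props += { "ParameterKey": "Foo4", "ParameterValue": "Bar4" }' node.json
```

Note that knife node edit normally opens an interactive editor, so piping its output through jq only transforms the printed JSON; it does not save anything back to the node.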


cbimport not importing file which is extracted from cbq command

I extracted data with the cbq command below, which was successful.
cbq -u Administrator -p Administrator -e "http://localhost:8093" --script='SELECT * FROM `sample` WHERE customer.id="12345"' -q | jq '.results' > temp.json;
However, when I try to import the same data in JSON format into the target cluster using the command below, I get an error.
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample -d file://C:\Users\{myusername}\Desktop\temp.json -f list -g %docId%
JSON import failed: 0 documents were imported, 0 documents failed to be imported
JSON import failed: input json is invalid: ReadArray: expect [ or , or ] or n, but found {, error found in #1 byte of ...|{
"requ|..., bigger context ...|{
"requestID": "2fc34542-4387-4643-8ae3-914e316|...],```
```{
"requestID": "6ef38b8a-8e70-4c3d-b3b4-b73518a09c62",
"signature": {
"*": "*"
},
"results": [
{
"{Bucket-name}":{my-data}
"status": "success",
"metrics": {
"elapsedTime": "4.517031ms",
"executionTime": "4.365976ms",
"resultCount": 1,
"resultSize": 24926
}
It looks like the file extracted from the cbq command has control fields like requestID, metrics, status, etc., and the JSON is pretty-printed. If I manually remove all fields except {my-data}, put that in a JSON file, and un-pretty it, then it works. But I want to automate this in a single run. Is there a way to do it in the cbq command?
I can't find any other utility, or a way to use a WHERE condition with cbexport, to do this on Couchbase; documents exported using cbexport can be imported using cbimport easily.
For the cbq command, you can use the --quiet option to disable the startup connection messages and --pretty=false to disable pretty-printing. Then, to extract just the documents in cbimport JSON lines format, I used jq.
This worked for me -- selecting documents from travel-sample._default._default (for the jq filter, where I have _default, you would put the Bucket-name, based on your example):
cbq --quiet --pretty=false -u Administrator -p password --script='select * from `travel-sample`._default._default' | jq --compact-output '.results|.[]|._default' > docs.json
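The jq step can be tried on its own against a captured response; a minimal sketch (the sample documents under _default are made up):

```shell
# One compact response object, as cbq --quiet --pretty=false emits it:
# control fields plus a "results" array of {keyspace: document} wrappers.
cat > response.json <<'EOF'
{"requestID":"abc","results":[{"_default":{"type":"airline","id":10}},{"_default":{"type":"airline","id":21}}],"status":"success"}
EOF

# Unwrap each result and print one compact document per line --
# exactly the shape cbimport's -f lines expects.
jq --compact-output '.results|.[]|._default' response.json > docs.json
cat docs.json
```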
Then, importing into test-bucket1:
cbimport json -c localhost -u Administrator -p password -b test-bucket1 -d file://./docs.json -f lines -g %type%_%id%
cbq documentation: https://docs.couchbase.com/server/current/tools/cbq-shell.html
cbimport documentation: https://docs.couchbase.com/server/current/tools/cbimport-json.html
jq documentation:
https://stedolan.github.io/jq/manual/#Basicfilters

How can I pass Rundeck variables to a JSON file?

I have a JSON with key pairs and I want to access the values from Rundeck Options dynamically during the job execution.
For shell script, we can do a $RD_OPTIONS_<>.
Similarly is there some format I can use in a JSON file?
Just use #option.myoption# in an inline-script step.
You need a tool to use on an inline script step to manipulate JSON files on Rundeck. I made an example using JQ. Alternatively, you can use bash script-fu to reach the same goal.
For example, using this JSON file:
{
  "books": [{
    "fear_of_the_dark": {
      "author": "John Doe",
      "genre": "Mistery"
    }
  }]
}
Update the file with the following jq call:
To test directly in your terminal
jq '.books[].fear_of_the_dark += { "ISBN" : "9999" }' myjson.json
On Rundeck Inline-script
echo "$(jq ''.books[].fear_of_the_dark += { "ISBN" : "#option.isbn#" }'' myjson.json)" > myjson.json
Check how it looks in an inline-script job (check here to learn how to import the job definition into your Rundeck instance).
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: d8f1c0e7-a7c6-43d4-91d9-25331cc06560
  loglevel: INFO
  name: JQTest
  nodeFilterEditable: false
  options:
  - label: isbn number
    name: isbn
    required: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: original file content
      exec: cat myjson.json
    - description: pass the option and save the content to the json file
      fileExtension: .sh
      interpreterArgsQuoted: false
      script: 'echo "$(jq ''.books[].fear_of_the_dark += { "ISBN" : "#option.isbn#"
        }'' myjson.json)" > myjson.json'
      scriptInterpreter: /bin/bash
    - description: modified file content (after jq)
      exec: cat myjson.json
    keepgoing: false
    strategy: node-first
  uuid: d8f1c0e7-a7c6-43d4-91d9-25331cc06560
Finally, check the result.
Here you can check more about executing scripts on Rundeck and here more about the JQ tool.

How to patch container env variable in deployment with kubectl?

When I want to extract the current value of some container env variable, I can use jsonpath syntax like:
kubectl get pods -l component='somelabel' -n somenamespace -o \
jsonpath='{.items[*].spec.containers[*].env[?(@.name=="SOME_ENV_VARIABLE")].value}'
That returns the value of the env variable named SOME_ENV_VARIABLE. The pod section with container env variables in JSON looks like this:
"spec": {
"containers": [
{
"env": [
{
"name": "SOME_ENV_VARIABLE",
"value": "some_value"
},
{
"name": "ANOTHER_ENV_VARIABLE",
"value": "another_value"
}
],
When I want to patch some value in my deployment I'm using commands with syntax like:
kubectl -n kube-system patch svc kubernetes-dashboard --type='json' -p="[{'op': 'replace', 'path': '/spec/ports/0/nodePort', 'value': $PORT}]"
But how can I patch a variable with 'op': 'replace' in cases where I need an expression like env[?(@.name=="SOME_ENV_VARIABLE")]? Which syntax should I use?
Rather than the kubectl patch command, you can use kubectl set env to update an environment variable of a k8s deployment.
envvalue=$(kubectl get pods -l component='somelabel' -n somenamespace -o jsonpath='{.items[*].spec.containers[*].env[?(@.name=="SOME_ENV_VARIABLE")].value}')
kubectl set env deployment/my-app-deploy op=$envvalue
Hope this helps.
Most of the answers here don't provide the exact command; it is as simple as:
kubectl set env deployment/deploy_name APP_VERSION=value -n namespace
Or, with a JSON patch, address the env entries by index:
- op: replace
  path: /spec/template/spec/containers/0/env/0/name
  value: YOUR_VARIABLE_NAME
- op: replace
  path: /spec/template/spec/containers/0/env/0/value
  value: YOUR_VARIABLE_VALUE
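JSON Patch paths cannot contain a filter expression like env[?(@.name=="...")]; they need a numeric index. One approach (a sketch; my-app and deploy.json are placeholders) is to compute the index with jq first and build the path from it:

```shell
# Trimmed sample of what `kubectl get deploy my-app -o json` would return;
# in a real script this would come from kubectl itself.
cat > deploy.json <<'EOF'
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "env": [
              { "name": "SOME_ENV_VARIABLE", "value": "some_value" },
              { "name": "ANOTHER_ENV_VARIABLE", "value": "another_value" }
            ]
          }
        ]
      }
    }
  }
}
EOF

# Find the array position of the env entry whose name matches.
idx=$(jq '.spec.template.spec.containers[0].env
          | map(.name == "ANOTHER_ENV_VARIABLE")
          | index(true)' deploy.json)
echo "$idx"   # the entry's position in the env array (1 here)

# With the index known, the JSON Patch path is unambiguous, e.g.:
# kubectl patch deploy my-app --type='json' \
#   -p="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/env/$idx/value\", \"value\": \"new_value\"}]"
```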

Need to import 2M of JSON into ONE Couchbase Document

I've been given an odd requirement to store an Excel spreadsheet in one JSON document within Couchbase. cbimport is saying that my document is not valid JSON, when it is, so I believe something else is wrong.
My document goes along the style of this:
[{ "sets": [
{
"cluster" : "M1M",
"type" : "SET",
"shortName" : "MARTIN MARIETTA MATERIALS",
"clusterName" : "MARTIN MARIETTA",
"setNum" : "10000163"
},
{
"shortName" : "STERLING INC",
"type" : "SET",
"cluster" : "SJW",
"setNum" : "10001427",
"clusterName" : "STERLING JEWELERS"
},
...
]}]
And my cbimport command looks like this:
cbimport json --cluster localhost --bucket documentBucket \
--dataset file://set_numbers.json --username Administrator \
--password password --format lines -e errors.log -l debug.log \
--generate-key 1
I've tried to format as lines as well as list. Both fail. What am I doing wrong?
I wrote your sample to a json file called set_numbers.json and tried it locally with list.
cbimport json --cluster localhost --bucket documentBucket \
  --dataset file://set_numbers.json --username Administrator \
  --password password --format list --generate-key 1
It imported successfully into a single document.
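That matches the difference between the two formats: -f list expects the whole file to be one JSON array, while -f lines expects one compact document per line. If you ever need the lines form, a sketch of converting with jq (file names are made up):

```shell
# A list-format file: one JSON array of documents, as in the question.
cat > set_numbers.json <<'EOF'
[{ "sets": [
  { "cluster": "M1M", "setNum": "10000163" },
  { "cluster": "SJW", "setNum": "10001427" }
]}]
EOF

# Emit each top-level array element as one compact line -- the shape
# that -f lines expects.
jq --compact-output '.[]' set_numbers.json > set_numbers_lines.json
```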
Use cbimport to upload the JSON data:
cbimport json -c couchbase://127.0.0.1 -b data -d file://data.json -u Administrator -p password -f list -g "%id%" -t 4

How to install Hadoop using Ambari setup?

I tried to install Hadoop on a 3-node cluster using ambari_setup.sh. I have successfully started ambari-server on NODE_1, and ambari-agent is running on all 3 nodes.
I have also pushed the blueprint using:
[root@host]# curl -H "X-Requested-By: ambari" -X POST -d @blueprint.json \
  -u admin:admin HOST_NAME:8080/api/v1/blueprints/blueprints-c1
But while installing with the command below, I get the following error.
[root@host]# curl -H "X-Requested-By: ambari" -X POST -d @hostmapping.json \
  -u admin:admin HOST_NAME:8080/api/v1/clusters/blueprints-c1
{
  "status" : 400,
  "message" : "The properties [host-groups] specified in the request or predicate are not supported for the resource type Cluster."
}
Given below is the hostmapping.json file I am using:
> { "blueprint":"blueprints-c1", "host-groups":[
> { "name":"host_group_1",
> "hosts":[ { "fqdn":"NODE_1" } ] },
> { "name":"host_group_2",
> "hosts":[ { "fqdn":"NODE_2" } ] },
> { "name":"host_group_3",
> "hosts":[ { "fqdn":"NODE_3" } ] } ] }
You made a mistake here: properties [host-groups].
It should be [host_groups], with an underscore instead of a hyphen.
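The rename can even be scripted; a sketch with jq that rebuilds the file under the correct key (node names kept as the placeholders from the question):

```shell
# hostmapping.json as posted in the question, with the bad hyphenated key.
cat > hostmapping.json <<'EOF'
{"blueprint":"blueprints-c1","host-groups":[
  {"name":"host_group_1","hosts":[{"fqdn":"NODE_1"}]},
  {"name":"host_group_2","hosts":[{"fqdn":"NODE_2"}]},
  {"name":"host_group_3","hosts":[{"fqdn":"NODE_3"}]}
]}
EOF

# Rebuild the object with host_groups (underscore) instead of host-groups.
jq '{blueprint, host_groups: .["host-groups"]}' hostmapping.json > hostmapping_fixed.json
```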
You need to push both the blueprint.json and the hostmapping.json. These need to be 2 separate files and 2 separate API calls. Did you do that?
Maybe you can specify what you have done so far before you get the exception. Then we can tell you if you missed something.