Bash: parsing JSON name after sorting

I'm trying to get the most recent (highest number prefix) CacheClusterId from my Elasticache using the AWS CLI in order to put it into a Chef recipe. This is what I've got so far:
aws elasticache describe-cache-clusters --region us-east-1 | grep CacheClusterId | sort -t : -rn
Which produces:
"CacheClusterId": "xy112-elasticache"
"CacheClusterId": "xy111-elasticache"
"CacheClusterId": "xy110-elasticache"
"CacheClusterId": "xy109-elasticache"
"CacheClusterId": "xy-elasticache"
How can I isolate just the "xy112-elasticache" portion (minus quotes)? Having read the man page for sort, I feel like it requires a -k option, but I haven't been able to work out the particulars.

A much better way is to handle the JSON with jq. To install it on Debian:
sudo apt-get install jq
I don't know exactly what your JSON looks like, but based on this XML example response for the aws elasticache describe-cache-clusters command, if your JSON response looked like:
{
"CacheClusters": [
{ "CacheClusterId": "xy112-elasticache" , ... },
{ "CacheClusterId": "xy111-elasticache" , ... },
...
]
}
then you'd write:
aws elasticache describe-cache-clusters --region us-east-1 | jq ".CacheClusters[].CacheClusterId"
For the two JSON objects in the array above, it would return:
"xy112-elasticache"
"xy111-elasticache"

Since the first part is the same for every entry, I just cut on the quote character and take the ID field:
aws elasticache describe-cache-clusters --region us-east-1 | grep CacheClusterId | cut -d'"' -f4
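Combined with the sort from the question, this isolates just the newest ID. A sketch, assuming the same output as above:

```shell
# LC_ALL=C makes the reverse sort bytewise (so "xy112" outranks "xy-"),
# head keeps the newest line, and cut takes field 4 when splitting on ".
# Caveat: lexicographic order only works while the numeric part keeps the
# same width (xy9 would wrongly outrank xy112).
aws elasticache describe-cache-clusters --region us-east-1 \
  | grep CacheClusterId | LC_ALL=C sort -r | head -n1 | cut -d'"' -f4
```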

Related

Azure JMESPath query for nested JSON

I'm trying to extract just the paths portion of the result of this AZ CLI query:
az network application-gateway show --query urlPathMaps --resource-group dev-aag --name dev-aag-gateway
I'm unsure if I should use az network application-gateway list instead
This is the result of the first query, however I'm not able to extract just the paths portion -- is this because paths is nested?
"backendAddressPool": {
"id": "/subscriptions/42xxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/dev-aag/providers/Microsoft.Network/applicationGateways/dev-aag-gateway/backendAddressPools/co20225020a-backend-pool",
"resourceGroup": "dev-aag"
},
"backendHttpSettings": {
"id": "/subscriptions/42xxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/dev-aag/providers/Microsoft.Network/applicationGateways/dev-aag-gateway/backendHttpSettingsCollection/dev-aag-httpsetting",
"resourceGroup": "dev-aag"
},
"etag": "W/\"9f2d3xxc-2cbd-49fr-8726-432c7ef00de7\"",
"firewallPolicy": null,
"id": "/subscriptions/42xxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/dev-aag/providers/Microsoft.Network/applicationGateways/dev-aag-gateway/urlPathMaps/dev-aag-https-routing-rule/pathRules/co20225020a-cqvgkj9xxxxx9bcu-url",
"loadDistributionPolicy": null,
"name": "co20225020a-cqvgkj9xxxxx9bcu-url",
"paths": [
"/co20225020a/cqvgkj9xxxxx9bcu/*"
],
"provisioningState": "Succeeded",
"redirectConfiguration": null,
"resourceGroup": "dev-aag",
"rewriteRuleSet": null,
"type": "Microsoft.Network/applicationGateways/urlPathMaps/pathRules"
}
I'm trying to use grep and cut like this, but maybe something else I should be using instead:
az network application-gateway show --query urlPathMaps --resource-group dev-aag --name dev-aag-gateway | grep paths | cut -d ":" -f1-19
what should I be using to make this work?
az network application-gateway list can be used if you want to list all the application gateways under a particular subscription.
az network application-gateway show should be used if you want to pull the properties of a particular application gateway.
Refer to these articles for more information about the Azure CLI application gateway commands.
To pull the list of paths under urlPathMaps in pathRules, use the following JMESPath query with the az network application-gateway show command (quoting the query protects the brackets from shell globbing):
az network application-gateway show -n <AppGatewayName> -g <ResourceGroupName> --query "urlPathMaps[].pathRules[].paths"
I have tested this in Azure Cloud Shell and it works fine; I would suggest you validate it from your end as well.
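If the JMESPath expression is awkward to iterate on, the same projection can also be done client-side with jq over the unfiltered urlPathMaps output. A sketch, assuming the response shape shown in the question (resource names are placeholders):

```shell
# For every URL path map, for every path rule, print each "paths" entry.
az network application-gateway show -n <AppGatewayName> -g <ResourceGroupName> \
  --query urlPathMaps | jq -r '.[].pathRules[].paths[]'
```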

Write Hashicorp Vault secret as multiline to YAML

Given this Vault secret:
{
"config": "test.domain.com:53 {errors cache 30 forward . 1.1.1.1 1.1.1.2}"
}
How do I retrieve it and write it to a YAML file so that is in the following format:
test.domain.com:53 {
errors
cache 30
forward . 1.1.1.1 1.1.1.2
}
Using the following command saves it on a single line, which won't work with our project.
vault kv get -format=json ${VAULT_PATH}/coredns-custom | jq -r .data.data >> coredns-custom.yaml
I've tried inserting linebreaks \n in the secret, but the retrieval command doesn't parse them.
Any help would be appreciated.
The \n should work in the value stored in Vault.
How about storing the value as follows into Vault directly? e.g.
test.domain.com:53 {\n
errors\n
cache 30\n
forward . 1.1.1.1 1.1.1.2\n
}\n
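The \n escapes do survive retrieval; the catch is that `jq -r .data.data` prints a JSON object, which re-encodes them. Selecting the string field itself lets -r decode the escapes into real newlines. A sketch (the config key name is taken from the secret above):

```shell
# Pull just the config string; with -r, jq decodes \n into real newlines
# as it writes the YAML file.
vault kv get -format=json ${VAULT_PATH}/coredns-custom \
  | jq -r '.data.data.config' > coredns-custom.yaml
```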

How to fix warning format JSON deprecation in cucumber 4.1.0 gem?

I've recently updated to the latest Ruby cucumber gem and now getting the following warning:
WARNING: --format=json is deprecated and will be removed after version 5.0.0.
Please use --format=message and stand-alone json-formatter.
json-formatter homepage: https://github.com/cucumber/cucumber/tree/master/json-formatter#cucumber-json-formatter.
I'm using json output later for my reporting. In my cucumber.yml I have the following default profile:
default:
  -r features
  --expand -f pretty --color
  -f json -o reports/cucumber.json
According to the reference https://github.com/cucumber/cucumber/tree/master/json-formatter#cucumber-json-formatter they say to use something like
cucumber --format protobuf:cucumber-messages.bin
cat cucumber-messages.bin | cucumber-json-formatter > cucumber-results.json
and
Trying it out
../fake-cucumber/javascript/bin/fake-cucumber \
  --results=random \
  ../gherkin/testdata/good/*.feature | \
  go/dist/json-formatter-darwin-amd64
But it's not really clear how to do that.
I guess you need to change your cucumber profile to produce protobuf output instead of json, and then add a step to post-process that file into the JSON you want.
I'd assumed that the 'Trying it out' section above was your output from actually trying it out, rather than a straight cut-and-paste from the formatter's GitHub page...
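Concretely, the default profile would swap the json formatter for message output, with the JSON report produced afterwards by the stand-alone formatter. A sketch (the .bin filename and an on-PATH cucumber-json-formatter binary are assumptions):

```yaml
# cucumber.yml — write the message stream instead of json
default:
  -r features
  --expand -f pretty --color
  -f message -o reports/cucumber.bin
```

Afterwards, `cat reports/cucumber.bin | cucumber-json-formatter > reports/cucumber.json` recreates the report your tooling expects.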

jq get all values in a tabbed format

I'm trying to convert JSON log lines to tab-formatted data:
{"level":"INFO", "logger":"db", "msg":"connection successful"}
{"level":"INFO", "logger":"server", "msg":"server started"}
{"level":"INFO", "logger":"server", "msg":"listening on port :4000"}
{"level":"INFO", "logger":"server", "msg":"stopping s ervices ..."}
{"level":"INFO", "logger":"server", "msg":"exiting..."}
to something like this:
INFO db connection successful
INFO server server started
INFO server listening on port :4000
INFO server stopping services ...
INFO server exiting...
I've tried this jq -r ' . | to_entries[] | "\(.value)"', but this prints each value on a separate line.
Assuming the keys are always in the same order, you could get away with:
jq -r '[.[]] | @tsv'
In any case, it would be preferable to use @tsv.
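If the key order can't be relied on, naming the fields explicitly is safer. A sketch (the app.log filename is just an example):

```shell
# Build the array in a fixed order, then emit tab-separated values.
jq -r '[.level, .logger, .msg] | @tsv' app.log
```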

Import a JSON file into CouchDB

If I have a json file that looks something like this:
{"name":"bob","hi":"hello"}
{"name":"hello","hi":"bye"}
Is there an option to import this into couchdb?
Starting from @Millhouse's answer, but with multiple docs in my file, I used:
cat myFile.json | lwp-request -m POST -sS "http://localhost/dbname/_bulk_docs" -c "application/json"
POST is an alias of lwp-request, but POST doesn't seem to work on Debian. If you use lwp-request you need to set the method with -m as above.
The trailing _bulk_docs allows multiple documents to be uploaded at once.
http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API
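One caveat: _bulk_docs expects a single object with a "docs" array, so a file of newline-delimited objects like myFile.json needs wrapping first. A sketch using jq's slurp mode (the bulk.json name is just an example):

```shell
# -s (slurp) reads all objects in the file into one array, which we
# then wrap in the {"docs": [...]} envelope that _bulk_docs expects.
jq -s '{docs: .}' myFile.json > bulk.json
```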
If you are on Linux, you could write a quick shell script to POST the contents of valid JSON files to Couch.
To test couch I did something like this:
cat myFile.json | POST -sS "http://myDB.couchone.com/testDB" -c "application/json"
myFile.json has the json contents I wanted to import into the database.
Another alternative, if you don't like the command line or aren't using Linux and prefer a GUI, is a tool like RESTClient.
Probably a bit late to answer, but if you can use Python then the couchdb module will do it:
import couchdb
import json

couch = couchdb.Server(<your server url>)
db = couch[<your db name>]
with open(<your file name>) as jsonfile:
    for row in jsonfile:
        db_entry = json.loads(row)  # json.loads, not json.load: each line is a string
        db.save(db_entry)
I created a Python script to do that (as I could not find one on the Internet).
The full script is here:
http://bitbucket.org/tdatta/tools/src/
(name --> jsonDb_to_Couch.py)
If you download the full repo and:
Text-replace all the "_id" keys in the JSON files with "id"
Run make load_dbs
It would create 4 databases in your local couch installation
Hope that helps newbies (like me)
Yes, this is not valid JSON ...
To import JSON-Objects I use curl (http://curl.haxx.se):
curl -X PUT -d @my.json http://admin:secret@127.0.0.1:5984/db_name/doc_id
where my.json is the file containing the JSON object.
Of course you can put your JSON-Object directly into couchdb (without a file) as well:
curl -X PUT -d '{"name":"bob","hi":"hello"}' http://admin:secret@127.0.0.1:5984/db_name/doc_id
If you do not have a doc_id, you can ask couchdb for it:
curl -X GET http://127.0.0.1:5984/_uuids?count=1
That JSON object will not be accepted by CouchDB. To store all the data with a single server request use:
{
"people":
[
{
"name":"bob",
"hi":"hello"
},
{
"name":"hello",
"hi":"bye"
}
]
}
Alternatively, submit a different CouchDB request for each row.
Import the file into CouchDB from the command-line using cURL:
curl -vX POST https://user:pass@127.0.0.1:1234/database \
  -d @- -# -o output -H "Content-Type: application/json" < file.json
It's not my solution but I found this to solve my issue:
A simple way of exporting a CouchDB database to a file, is by running the following Curl command in the terminal window:
curl -X GET http://127.0.0.1:5984/[mydatabase]/_all_docs\?include_docs\=true > /Users/[username]/Desktop/db.json
The next step is to modify the exported JSON file to look something like the below (note the _id):
{
"docs": [
{"_id": "0", "integer": 0, "string": "0"},
{"_id": "1", "integer": 1, "string": "1"},
{"_id": "2", "integer": 2, "string": "2"}
]
}
The main bit you need to look at is adding the documents in the "docs" block. Once this is done you can run the following curl command to import the data into a CouchDB database:
curl -d @db.json -H "Content-type: application/json" -X POST http://127.0.0.1:5984/[mydatabase]/_bulk_docs
Duplicating a database
If you want to duplicate a database from one server to another. Run the following command:
curl -H 'Content-Type: application/json' -X POST http://localhost:5984/_replicate -d '{"source": "http://example.com:5984/dbname/", "target": "http://localhost:5984/dbname/"}'
Original Post:
http://www.greenacorn-websolutions.com/couchdb/export-import-a-database-with-couchdb.php
http://github.com/zaphar/db-couchdb-schema/tree/master
My DB::CouchDB::Schema module has a script to help with loading a series of documents into a CouchDB Database. The couch_schema_tool.pl script accepts a file as an argument and loads all the documents in that file into the database. Just put each document into an array like so:
[
{"name":"bob","hi":"hello"},
{"name":"hello","hi":"bye"}
]
It will load them into the database for you. Small caveat though: I haven't tested my latest code against CouchDB's latest, so if you use it and it breaks, let me know. I probably have to change something to fit the new API changes.
Jeremy