Write HashiCorp Vault secret as multiline YAML

Given this Vault secret:
{
"config": "test.domain.com:53 {errors cache 30 forward . 1.1.1.1 1.1.1.2}"
}
How do I retrieve it and write it to a YAML file so that it is in the following format:
test.domain.com:53 {
errors
cache 30
forward . 1.1.1.1 1.1.1.2
}
Using the following command saves it on a single line, which won't work with our project.
vault kv get -format=json ${VAULT_PATH}/coredns-custom | jq -r .data.data >> coredns-custom.yaml
I've tried inserting linebreaks \n in the secret, but the retrieval command doesn't parse them.
Any help would be appreciated.

The \n escapes should work in the value stored in Vault.
How about storing the value like the following in Vault directly? e.g.
test.domain.com:53 {\n
errors\n
cache 30\n
forward . 1.1.1.1 1.1.1.2\n
}\n
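If the \n escapes are stored in the secret, the retrieval side also has to emit them as real newlines. A minimal sketch (assuming KV v2, so the string lives under .data.data.config; the sample JSON below stands in for the real vault kv get output):

```shell
# Stand-in for: vault kv get -format=json ${VAULT_PATH}/coredns-custom
response='{"data":{"data":{"config":"test.domain.com:53 {\nerrors\ncache 30\nforward . 1.1.1.1 1.1.1.2\n}\n"}}}'

# jq -r on the string itself (not the enclosing object) turns \n into real newlines
echo "$response" | jq -r '.data.data.config' >> coredns-custom.yaml
cat coredns-custom.yaml
```

Selecting .config matters: jq -r .data.data prints a JSON object, so the \n stays escaped, while jq -r applied to a string emits it raw.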


Why can IPFS's multihash be decoded?

I have read this stackoverflow post:
How to create an IPFS compatible multihash
$ echo "Hello World" | ipfs add -n
added QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u
Decoding the base58 string gives:
12 - 20 - 74410577111096cd817a3faed78630f2245636beded412d3b212a2e09ba593ca
<hash-type> - <hash-length> - <hash-digest>
Retrieving the content (the equivalent of ipfs cat):
$ curl "https://ipfs.infura.io:5001/api/v0/object/data?arg=QmWATWQ7fVPP2EFGu71UkfnqhYXDYH566qy47CnJDgvs8u"
Hello World
So I was wondering how does ipfs's decoding work?
Since as far as I know, sha-256 hash function is ONE-WAY hashing, right?
Basically, IPFS is a (key, value) storage service. The multihash you get from the ipfs add command is the multihash of the value, and it is also the key used to retrieve the value from the IPFS service with the ipfs get or ipfs object commands.
With the HTTP API of the IPFS service, curl "https://ipfs.infura.io:5001/api/v0/object/data?arg=key" works exactly the same as the ipfs object data command.
So it is not about decoding the hash; it is just getting the value with your key (the multihash).
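Content addressing can be sketched in miniature with plain sha256sum (this is not the actual multihash encoding, just the same idea): the key is derived from the value, so nothing is ever decoded back out of it.

```shell
# The "key" is just a hash of the value; lookup, not decoding, recovers the value.
value="Hello World"
key=$(printf '%s\n' "$value" | sha256sum | cut -d' ' -f1)
echo "$key"   # 64 hex chars identifying (not containing) the value
```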

How to run a cypher script file from Terminal with the cypher-shell neo4j command?

I have a cypher script file and I would like to run it directly.
All answers I could find on SO, to the best of my knowledge, use the command neo4j-shell, which in my version (Neo4j server 3.5.5) seems to be deprecated and replaced by the command cypher-shell.
Using the command sudo ./neo4j-community-3.5.5/bin/cypher-shell --help I got the following instructions.
usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD]
[--encryption {true,false}]
[--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS]
[--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher]
A command line shell where you can execute Cypher against an
instance of Neo4j. By default the shell is interactive but you can
use it for scripting by passing cypher directly on the command
line or by piping a file with cypher statements (requires Powershell
on Windows).
My file is the following which tries to create a graph from csv files and it comes from the book "Graph Algorithms".
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.latitude),
place.population = toInteger(row.population)
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination)
When I try to pass the file directly with the command:
sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher
first it asks for username and password but after typing the correct password (the wrong password results in the error The client is unauthorized due to authentication failure.) I get the error:
Invalid input 'n': expected <init> (line 1, column 1 (offset: 0))
"neo_4.cypher"
^
When I try piping with the command:
sudo cat neo_4.cypher | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd'
no output is generated and no graph either.
How to run a cypher script file with the neo4j command cypher-shell?
Use cypher-shell -f yourscriptname. Check with --help for more description.
I think the key is here:
cypher-shell --help
... Stuff deleted
positional arguments:
cypher an optional string of cypher to execute and then exit
This means that the positional parameter is actual Cypher code, not a file name. Thus, this works:
GMc#linux-ihon:~> cypher-shell "match(n) return n;"
username: neo4j
password: ****
+-----------------------------+
| n |
+-----------------------------+
| (:Job {jobName: "Job01"}) |
| (:Job {jobName: "Job02"}) |
But this doesn't, because the text "neo_4.cypher" isn't a valid Cypher query:
cypher-shell neo_4.cypher
The help also says:
example of piping a file:
cat some-cypher.txt | cypher-shell
So:
cat neo_4.cypher | cypher-shell
should work. Possibly your problem is all of the sudos, specifically the cat ... | sudo cypher-shell: it is possible that sudo is shielding cypher-shell from the piped input (although it doesn't seem to do so on my system).
If you really need to use sudo to run cypher, try using the following:
sudo cypher-shell arguments_as_needed < neo_4.cypher
Oh, also, your script doesn't have a return, so it probably won't display any data, but you should still see the summary reports of records loaded.
Perhaps try something simpler first such as a simple match ... return ... query in your script.
Oh, and don't forget to terminate the cypher query with a semi-colon!
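A minimal sketch of that fix (the file name and query are placeholders): terminate every statement with a semicolon, then run the file with -f.

```shell
# Each Cypher statement must end with ';' before cypher-shell will execute it
cat > match_all.cypher <<'EOF'
MATCH (n) RETURN count(n);
EOF
# then (credentials are placeholders):
#   cypher-shell -u neo4j -p 'pwd' -f match_all.cypher
cat match_all.cypher
```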
The problem was in the cypher file: each Cypher statement needs to end with a semicolon (;). I still need sudo to run the program.
The file taken from the book seems to contain other errors as well actually.

jq get all values in a tabbed format

I'm trying to convert JSON lines to tab-separated data:
{"level":"INFO", "logger":"db", "msg":"connection successful"}
{"level":"INFO", "logger":"server", "msg":"server started"}
{"level":"INFO", "logger":"server", "msg":"listening on port :4000"}
{"level":"INFO", "logger":"server", "msg":"stopping s ervices ..."}
{"level":"INFO", "logger":"server", "msg":"exiting..."}
to something like this:
INFO db connection successful
INFO server server started
INFO server listening on port :4000
INFO server stopping s ervices ...
INFO server exiting...
I've tried this jq -r ' . | to_entries[] | "\(.value)"', but this prints each value on a separate line.
Assuming the keys are always in the same order, you could get away with:
jq -r '[.[]] | @tsv'
In any case, it would be preferable to use @tsv.
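Selecting the keys explicitly is safer than [.[]] when key order isn't guaranteed; a sketch over the sample lines:

```shell
printf '%s\n' \
  '{"level":"INFO", "logger":"db", "msg":"connection successful"}' \
  '{"level":"INFO", "logger":"server", "msg":"server started"}' \
  | jq -r '[.level, .logger, .msg] | @tsv'
# INFO<TAB>db<TAB>connection successful
# INFO<TAB>server<TAB>server started
```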

Zabbix Discovery with External check JSON

Zabbix 3.2.5 in docker on alpine image (official build)
I have a problem with an external script and the JSON it returns.
The script json_data.sh is:
#!/bin/bash
# Generate JSON data for zabbix
declare -i i
fields=$1
data=($2)
json=""
i=0
while [ $i -lt ${#data[*]} ]; do
  row=""
  for f in $fields; do
    row+="\"{#$f}\":\"${data[$i]}\","
    i+=1
  done
  json+="{${row%,}},"
done
echo "{\"data\":[${json%,}]}"
key string is:
json_data.sh["IP", "127.0.0.1 127.0.0.2 127.0.0.3"]
I tested it with a text item and got this result:
2539:20170515:095829.375 zbx_popen(): executing script
{"data":[{"{#IP}":"127.0.0.1"},{"{#IP}":"127.0.0.2"},{"{#IP}":"127.0.0.3"}]}
So the script returns valid JSON, but I still get the error "Value should be a JSON object" in service discovery.
What's wrong with that JSON?
In the template settings screenshot, {$IPLIST} is just a macro = "127.0.0.1 127.0.0.2 127.0.0.3"
This is a bug: when DebugLevel is greater than 3, Zabbix mixes part of the debug output (something like zbx_popen(): executing script) into the value data.
The solution is to reduce DebugLevel to 3 or lower and wait until ZBX-12195 is fixed.
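The verbosity is set in the server config (the path below is the usual default; adjust for the docker image), followed by a server restart:

```
# /etc/zabbix/zabbix_server.conf
DebugLevel=3
```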

Bash Parsing JSON name after sorting

I'm trying to get the most recent (highest number prefix) CacheClusterId from my Elasticache using the AWS CLI in order to put it into a Chef recipe. This is what I've got so far:
aws elasticache describe-cache-clusters --region us-east-1 | grep CacheClusterId | sort -t : -rn
Which produces:
"CacheClusterId": "xy112-elasticache"
"CacheClusterId": "xy111-elasticache"
"CacheClusterId": "xy110-elasticache"
"CacheClusterId": "xy109-elasticache"
"CacheClusterId": "xy-elasticache"
How can I isolate just the "xy112-elasticache" portion (minus quotes)? Having read the man page for sort, I feel like it requires a -k option, but I haven't been able to work out the particulars.
I think a much better way is to handle the JSON using jq. To install it on Debian:
sudo apt-get install jq
I don't know exactly what your JSON looks like, but based on this XML example response for the aws elasticache describe-cache-clusters command, if your JSON response looked like:
{
"CacheClusters": [
{ "CacheClusterId": "xy112-elasticache" , ... },
{ "CacheClusterId": "xy111-elasticache" , ... },
...
]
}
then you'd write:
aws elasticache describe-cache-clusters --region us-east-1 | jq ".CacheClusters[].CacheClusterId"
For the two JSON objects in the array above, it would return:
"xy112-elasticache"
"xy111-elasticache"
Since the first part is the same on every line, I would just cut it and take the ID part like this:
aws elasticache describe-cache-clusters --region us-east-1 | grep CacheClusterId | cut -d'"' -f4