jq get all values in a tabbed format - json

I'm trying to convert JSON lines like these to tab-formatted data:
{"level":"INFO", "logger":"db", "msg":"connection successful"}
{"level":"INFO", "logger":"server", "msg":"server started"}
{"level":"INFO", "logger":"server", "msg":"listening on port :4000"}
{"level":"INFO", "logger":"server", "msg":"stopping s ervices ..."}
{"level":"INFO", "logger":"server", "msg":"exiting..."}
to something like this:
INFO db connection successful
INFO server server started
INFO server listening on port :4000
INFO server stopping services ...
INFO server exiting...
I've tried jq -r '. | to_entries[] | "\(.value)"', but this prints each value on a separate line.

Assuming the keys are always in the same order, you could get away with:
jq -r '[.[]] | @tsv'
In any case, it would be preferable to use @tsv.
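If you'd rather not rely on key order, a minimal sketch that selects the fields explicitly (assuming the three keys shown above; logs.json is a hypothetical file holding those lines):
jq -r '[.level, .logger, .msg] | @tsv' logs.json
@tsv also escapes any tabs or newlines embedded in the values.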

How to run a cypher script file from Terminal with the cypher-shell neo4j command?

I have a cypher script file and I would like to run it directly.
All answers I could find on SO, to the best of my knowledge, use the command neo4j-shell, which in my version (Neo4j server 3.5.5) seems to be deprecated and replaced by the command cypher-shell.
Using the command sudo ./neo4j-community-3.5.5/bin/cypher-shell --help I got the following instructions.
usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD]
[--encryption {true,false}]
[--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS]
[--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher]
A command line shell where you can execute Cypher against an
instance of Neo4j. By default the shell is interactive but you can
use it for scripting by passing cypher directly on the command
line or by piping a file with cypher statements (requires Powershell
on Windows).
My file, which comes from the book "Graph Algorithms", is the following; it tries to create a graph from CSV files:
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.latitude),
place.population = toInteger(row.population)
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination)
When I try to pass the file directly with the command:
sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher
it first asks for a username and password, but after typing the correct password (a wrong password results in the error The client is unauthorized due to authentication failure.) I get the error:
Invalid input 'n': expected <init> (line 1, column 1 (offset: 0))
"neo_4.cypher"
^
When I try piping with the command:
sudo cat neo_4.cypher | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd'
no output is generated and no graph either.
How to run a cypher script file with the neo4j command cypher-shell?
Use cypher-shell -f yourscriptname. See --help for more details.
I think the key is here:
cypher-shell --help
... Stuff deleted
positional arguments:
  cypher                an optional string of cypher to execute and then exit
This means that the parameter is actual Cypher code, not a file name. Thus, this works:
GMc@linux-ihon:~> cypher-shell "match(n) return n;"
username: neo4j
password: ****
+-----------------------------+
| n                           |
+-----------------------------+
| (:Job {jobName: "Job01"})   |
| (:Job {jobName: "Job02"})   |
But this doesn't, because the text "neo_4.cypher" isn't a valid Cypher query:
cypher-shell neo_4.cypher
The help also says:
example of piping a file:
cat some-cypher.txt | cypher-shell
So:
cat neo_4.cypher | cypher-shell
should work. Possibly your problem is all of the sudos, specifically the cat ... | sudo cypher-shell. It is possible that sudo is protecting cypher-shell from some arbitrary input (although it doesn't seem to do so on my system).
If you really need to use sudo to run cypher, try using the following:
sudo cypher-shell arguments_as_needed < neo_4.cypher
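For example (a sketch, reusing the install path and credentials from the question):
sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd' < neo_4.cypher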
Oh, also, your script doesn't have a RETURN clause, so it probably won't display any data, but you should still see the summary reports of the records loaded.
Perhaps try something simpler first such as a simple match ... return ... query in your script.
Oh, and don't forget to terminate the cypher query with a semi-colon!
The problem is in the cypher file: each statement should end with a semicolon (;). I still need sudo to run the program.
The file taken from the book actually seems to contain other errors as well.
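For reference, a sketch of the corrected file. The semicolons are the fix described above; the missing trailing slash on the first base URL and the row.latitude used to set place.longitude look like two of the "other errors", so both are repaired here as assumptions:
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.longitude),
place.population = toInteger(row.population);
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination);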

Zabbix Discovery with External check JSON

Zabbix 3.2.5 in Docker on an Alpine image (official build).
I have a problem with an external script and the JSON it returns.
The script json_data.sh is:
#!/bin/bash
# Generate JSON discovery data for Zabbix
declare -i i
fields=$1   # space-separated macro names, e.g. "IP"
data=($2)   # space-separated values, consumed in field order
json=""
i=0
while [ $i -lt ${#data[*]} ]; do
    row=""
    for f in $fields; do
        row+="\"{#$f}\":\"${data[$i]}\","
        i+=1
    done
    json+="{${row%,}},"
done
echo "{\"data\":[${json%,}]}"
The item key is:
json_data.sh["IP", "127.0.0.1 127.0.0.2 127.0.0.3"]
I tested it with a text item and got this result:
2539:20170515:095829.375 zbx_popen(): executing script
{"data":[{"{#IP}":"127.0.0.1"},{"{#IP}":"127.0.0.2"},{"{#IP}":"127.0.0.3"}]}
So the script returns valid JSON, but in service discovery I still get the error Value should be a JSON object.
What is wrong with that JSON?
Template settings: in the screenshot, {$IPLIST} is just a macro = "127.0.0.1 127.0.0.2 127.0.0.3".
This is a bug: when DebugLevel is greater than 3, Zabbix mixes part of the debug output (the zbx_popen(): executing script line above) into the value data.
The solution is to reduce DebugLevel to 3 or lower and wait until ZBX-12195 is fixed.
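A sketch of the change (assuming the stock config path, which may differ in the Docker image), followed by a server restart:
# /etc/zabbix/zabbix_server.conf
DebugLevel=3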

Bash Parsing JSON name after sorting

I'm trying to get the most recent (highest number prefix) CacheClusterId from my Elasticache using the AWS CLI in order to put it into a Chef recipe. This is what I've got so far:
aws elasticache describe-cache-clusters --region us-east-1 | grep CacheClusterId | sort -t : -rn
Which produces:
"CacheClusterId": "xy112-elasticache"
"CacheClusterId": "xy111-elasticache"
"CacheClusterId": "xy110-elasticache"
"CacheClusterId": "xy109-elasticache"
"CacheClusterId": "xy-elasticache"
How can I isolate just the "xy112-elasticache" portion (minus quotes)? Having read the man page for sort, I feel like it requires a -k option, but I haven't been able to work out the particulars.
I think a much better way is to handle the JSON using jq. To install it on Debian:
sudo apt-get install jq
I don't know exactly what your JSON looks like, but based on this XML example response for the aws elasticache describe-cache-clusters command, if your JSON response looked like:
{
    "CacheClusters": [
        { "CacheClusterId": "xy112-elasticache", ... },
        { "CacheClusterId": "xy111-elasticache", ... },
        ...
    ]
}
then you'd write:
aws elasticache describe-cache-clusters --region us-east-1 | jq ".CacheClusters[].CacheClusterId"
For the two JSON objects in the array above, it would return:
"xy112-elasticache"
"xy111-elasticache"
Since the first part is the same for all of them, I would just cut it off and take the ID part in the following way:
aws elasticache describe-cache-clusters --region us-east-1 | grep CacheClusterId | cut -d'"' -f4

how to capture the BitTorrent info_hash id in a network using tcpdump or any other open source tool?

I am working on a project where we need to collect the BitTorrent info_hash IDs seen in our small ISP network. Using port mirroring we can pass all WAN traffic to a server and run tcpdump or any other tool to find the info_hash IDs downloaded by BitTorrent clients. For example:
tcpflow -p -c -i eth1 tcp | grep -oE '(GET) .* HTTP/1.[01].*'
This command shows results like this:
GET /announce?info_hash=N%a1%94%17%2c%11%aa%90%9c%0a%1a0%9d%b2%cfy%08A%03%16&peer_id=-BT7950-%f1%a2%d8%8fO%d7%f9%bc%f1%28%15%26&port=19211&uploaded=55918592&downloaded=0&left=0&corrupt=0&key=21594C0B&numwant=200&compact=1&no_peer_id=1 HTTP/1.1
Now we need to capture only the info_hash and store it in a log or MySQL database.
Can you please tell me which tool can do something like this?
Depending on how rigorous you want to be, you'll have to decode the following protocol layers:
TCP: assemble the packets of a flow. You're already doing that with tcpflow; tshark, Wireshark's CLI, could do it too.
HTTP: extract the request target of the GET line. A simple regex would do the job here.
URI: extract the query string.
application/x-www-form-urlencoded: extract the info_hash key-value pair and handle the percent-encoding.
For the last two steps I would look for tools or libraries in your programming language of choice to handle them (see the sketch below).
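That said, for a quick-and-dirty log, a shell sketch built on the tcpflow pipeline from the question (the hashes stay percent-encoded; decoding them properly is the part best left to a library):
tcpflow -p -c -i eth1 tcp | grep -oE 'GET /announce\?[^ ]+' | grep -oE 'info_hash=[^&]+' | cut -d= -f2 >> infohash.log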

Zabbix Trapper: Cannot get data from orabbix

I am using Orabbix to monitor my DB. The data from the queries executed on this DB using Orabbix is sent to the Zabbix server. However, I am not able to see the data reaching Zabbix.
On my Zabbix web console, I see this message on the triggers I added: "Trigger expression updated. No status update so far."
Any ideas?
My update interval for the trigger is set to 30 sec.
Based on the screenshots you posted, your host is named "wfc1dev1" and you have items with keys "WFC_WFS_SYS_001" and "WFC_WFS_SYS_002". However, based on the Orabbix XML that it sends to Zabbix, the hostname and item keys are different. Here is the XML:
<req><host>V0ZDMURFVg==</host><key>V0ZDX0xFQUZfU1lTXzAwMg==</key><data>MA==</data></req>
From this, we can deduce the host:
$ echo V0ZDMURFVg== | base64 -d
WFC1DEV
The key:
$ echo V0ZDX0xFQUZfU1lTXzAwMg== | base64 -d
WFC_LEAF_SYS_002
The data:
$ echo MA== | base64 -d
0
It can be seen that neither the host name nor the item key match those configured on the Zabbix server. Once you fix that, it should work.
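To decode all three fields in one go (a sketch using the same base64 tool as above):
$ for s in V0ZDMURFVg== V0ZDX0xFQUZfU1lTXzAwMg== MA==; do echo "$s" | base64 -d; echo; done
WFC1DEV
WFC_LEAF_SYS_002
0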