I'm monitoring a web page that displays the status of a few hundred items. The page looks like this:
{"arrisId":"a000098","status":"Running","startTime":"2018-05-10T08:02:19.563Z"},{"arrisId":"a000101","status":"Running","startTime":"2018-05-10T08:02:19.892Z"},{"arrisId":"a000107","status":"Running","startTime":"2018-05-10T08:02:28.556Z"},...
What I want to do is trigger when one of the items is "Not Running", but display only the item that is not working rather than the entire page. I could use web.page.regexp and send a message that something is not running, but if I use web.page.get, is there a way to configure a trigger that displays the "Not Running" match plus the 25 or so characters in front of it?
I hope this question makes sense.
Your best course of action is to use Low Level Discovery (LLD).
Your LLD rule will run a script to ingest your main status page, then parse it and use the fields to create your items according to the "Item prototypes" you define.
The item prototypes themselves will need a script as well to get their respective information (unless you are willing to use the HTTP agent item type from the Zabbix 4.0 beta).
I've done a simple setup, using mock JSON from https://jsonplaceholder.typicode.com/users:
LLD script: parses the mock JSON and converts it into a Zabbix LLD-compliant format:
import requests
import json

jsonSource = "https://jsonplaceholder.typicode.com/users"

# Zabbix LLD expects a JSON object of the form {"data": [{"{#MACRO}": "value", ...}, ...]}
lld = {}
data = []
lld['data'] = data

session = requests.Session()
response = session.get(jsonSource)

for jsonObject in response.json():
    data.append({
        '{#NAME}': jsonObject['name'],
        '{#ID}': jsonObject['id'],
        '{#URL}': jsonSource + '/' + str(jsonObject['id'])
    })

print(json.dumps(lld))
Item GET script: gets a specific field of a specific item (it will become obsolete with the HTTP agent item from Zabbix 4.0):
import requests
import argparse

# Parse the user ID and the JSON field to fetch
parser = argparse.ArgumentParser()
parser.add_argument('-i', required=True, metavar='User ID')
parser.add_argument('-f', required=True, metavar='"Requested JSON Field"')
args = parser.parse_args()

jsonSource = "https://jsonplaceholder.typicode.com/users/" + args.i

session = requests.Session()
response = session.get(jsonSource)

# Print only the requested field so Zabbix receives a bare value
print(response.json()[args.f])
Command line usage sample:
$ jsonLLD.py
{"data": [{"{#ID}": 1, "{#URL}": "https://jsonplaceholder.typicode.com/users/1", "{#NAME}": "Leanne Graham"}, {"{#ID}": 2, "{#URL}": "https://jsonplaceholder.typicode.com/users/2", "{#NAME}": "Ervin Howell"},
[cut]
$ jsonGet.py -i 10 -f phone
024-648-3804
$ jsonGet.py -i 10 -f name
Clementina DuBuque
Then you have to set it up in Zabbix:
create a new template
create a Discovery rule of "Zabbix agent" type and set it to run system.run[/usr/bin/jsonLLD.py] (mind the path!)
create an item prototype for each JSON field you want to work on (e.g. Item name: {#NAME} telephone number, Item key: system.run[/usr/bin/jsonGet.py -i {#ID} -f phone])
create trigger prototypes accordingly
associate a host with the template
In your situation I'd use the Zabbix server itself as host, and install the scripts in its /usr/bin.
Watch the Zabbix Agent's log to see the discovery and item gathering process:
1972:20180519:121849.052 Executing command '/usr/bin/jsonGet.py -i 1 -f phone'
1971:20180519:121850.054 Executing command '/usr/bin/jsonGet.py -i 2 -f phone'
1974:20180519:121851.055 Executing command '/usr/bin/jsonGet.py -i 3 -f phone'
1974:20180519:121852.073 Executing command '/usr/bin/jsonGet.py -i 4 -f phone'
1974:20180519:121853.076 Executing command '/usr/bin/jsonGet.py -i 5 -f phone'
1973:20180519:121854.077 Executing command '/usr/bin/jsonGet.py -i 6 -f phone'
1972:20180519:121855.079 Executing command '/usr/bin/jsonGet.py -i 7 -f phone'
[cut]
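For the status page in the question, a minimal sketch along the same lines, assuming the page returns a JSON array of {"arrisId", "status", "startTime"} objects (the URL and the script name statusGet.py are placeholders, not something from the question), could combine discovery and per-item status lookup in one script:
#!/usr/bin/env python
import argparse
import json

import requests

# Placeholder URL for the status page described in the question; replace with the real one.
STATUS_URL = "http://example.local/status"

def fetch_items():
    # Assumes the page returns a JSON array of objects like
    # {"arrisId": "a000098", "status": "Running", "startTime": "..."}
    return requests.get(STATUS_URL).json()

parser = argparse.ArgumentParser()
parser.add_argument('-i', help='arrisId to report the status of; omit for LLD output')
args = parser.parse_args()

if args.i:
    # Item prototype mode: print the bare status of one item, e.g. "Running"
    status = next((item['status'] for item in fetch_items() if item['arrisId'] == args.i), 'Unknown')
    print(status)
else:
    # Discovery mode: emit the LLD JSON that Zabbix expects
    lld = {'data': [{'{#ARRISID}': item['arrisId']} for item in fetch_items()]}
    print(json.dumps(lld))
With an item prototype key like system.run[/usr/bin/statusGet.py -i {#ARRISID}], a trigger prototype along the lines of {YourTemplate:system.run[/usr/bin/statusGet.py -i {#ARRISID}].str(Running)}=0 would then fire per item, so the alert names only the item that is not running.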
My overall task is to constantly collect data from a UNIX system log file, filter it, prepare a JSON payload based on the filtered data, and process the data by sending a POST API call to another server.
I wonder if that can be done using, let's say, a shell script that monitors the log file with tail and filters with grep to get the specific lines dumped into another file, with a cronjob running another script which constructs the JSON and sends a curl request with it to the external server.
Some details:
In the log file - connector.log I am interested in lines like:
2020-09-16T15:14:37,337 INFO (tomcat-http--131) [tenant-test;-;138.188.247.4;] com.vmware.horizon.adapters.passwordAdapter.PasswordIdpAdapter - Login: user123 - SUCCESS
These lines, I can collect by the below command:
tailf connector.log | grep 'PasswordIdpAdapter - Login\|FAILURE\|SUCCESS'
and probably dump them into a file:
tailf connector.log | grep 'PasswordIdpAdapter - Login\|FAILURE\|SUCCESS' > log_data.txt
I wonder, at this point, whether it is possible to extract only specific fields from a line (not the whole line) of connector.log, so that one line in log_data.txt looks like this (fields 1, 4, 6, 7, 8):
1 2020-09-29T07:15:13,881 [tenant1;usrname#tenant1;10.93.231.5;] - username - SUCCESS
From that point, I need to write a script (maybe run by a cronjob every minute) or a command to construct the JSON below and send the request. One line - one request.
This is an example of the JSON:
{
"timestamp": "2020-09-16T15:24:35,377",
"tenant_name": "tenant-test",
"log_type": "SERVICE",
"log_entry": "Login: user123 - SUCCESS"
}
The field values that should be replaced already exist in the log line: timestamp (the 1st field, e.g. 2020-09-16T15:14:37,337), tenant_name (the 1st part of the 4th field, tenant-test) and log_entry (the last four fields, e.g. Login: user123 - SUCCESS).
When the json is constructed, I'll send it by:
curl --header "Content-Type: application/json" --request POST --data \
$payload http://myservert:8080/api/requests
What is not clear to me is how the script should read the data line by line from log_data.txt, populate some of the fields to create the JSON, and send it to the server.
Thanks for your answers in advance,
Petko
Thanks @shellter for the awk idea. So bash, awk, grep, cat, cut and curl did the job.
I've created a cronjob to execute the bash script on 5 min interval.
The script gets the last 5 minutes of log data, dumps it to another file, reads the filtered data, prepares the payload and then executes the API call. Maybe it is stupid, but it works.
#!/bin/bash
MONITORED_LOG="/var/logs/test.log"
FILTERED_DATA="/tmp/login/login_data.txt"
REST_HOST="https://rest-host/topics/logs-"
# dump the last 5 mins of log data(date format: 2020-09-28T10:52:28,334)
# to a file, filter for keywords FAILURE\|SUCCESS and NOT having 'lookup|SA'
# an example of data record taken: 1 2020-09-29T07:15:13,881 [tenant1;usrname#tenant1;10.93.231.5;] - username - SUCCESS
awk -v d1="$(date --date="-5 min" "+%Y-%m-%dT%H:%M:%S")" -v d2="$(date "+%Y-%m-%dT%H:%M:%S")" '$0 > d1 && $0 < d2' $MONITORED_LOG | grep 'FAILURE\|SUCCESS' | grep -v 'lookup\|SA-' | awk '{ print $2, $3, $5, $7}' | uniq -c > $FILTERED_DATA
## loop through all the filtered records and send an API call
cat $FILTERED_DATA | while read LINE; do
## preparing the variables
timestamp=$(echo $LINE | cut -f2 -d' ')
username=$(echo $LINE | cut -f5 -d' ')
log_entry=$(echo $LINE | cut -f7 -d' ')
# get the tenant name: take the 3rd field of the line, split by ; and remove the first char [
tenant_name=$(echo $LINE | cut -f3 -d' ' | cut -f1 -d';')
tenant_name="${tenant_name:1}"
# preparing the payload
payload=$'{"records":[{"value":{"timestamp":"'
payload+=$timestamp
payload+=$'","tenant_name":"'
payload+=$tenant_name
payload+=$'","log_entry":"'
payload+=$log_entry
payload+=$'"}}]}'
echo 'payload: ' $payload
# send the api call to the server with dynamic construction of tenant name
curl -i -k -u 'api_user:3494ssdfs3' --request POST --header "Content-type:application/json" --data "$payload" "$REST_HOST$tenant_name"
done
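One caveat with building the payload by string concatenation is that a quote or backslash in a field would break the JSON. If Python is available, a hedged sketch of the same per-line processing that lets json.dumps handle the quoting (the paths, field positions, credentials and endpoint are copied from the script above, not verified) could look like this:
#!/usr/bin/env python
import json
import subprocess

# Same filtered-data file and endpoint as in the bash script above
FILTERED_DATA = "/tmp/login/login_data.txt"
REST_HOST = "https://rest-host/topics/logs-"

with open(FILTERED_DATA) as f:
    for line in f:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip lines that do not match the expected layout
        timestamp = fields[1]
        log_entry = fields[6]
        # "[tenant1;usrname#tenant1;10.93.231.5;]" -> "tenant1"
        tenant_name = fields[2].lstrip('[').split(';')[0]

        # json.dumps takes care of quoting/escaping the field values
        payload = json.dumps({
            "records": [{"value": {
                "timestamp": timestamp,
                "tenant_name": tenant_name,
                "log_entry": log_entry,
            }}]
        })
        # Same curl call as in the bash version
        subprocess.run([
            "curl", "-i", "-k", "-u", "api_user:3494ssdfs3",
            "--request", "POST",
            "--header", "Content-type:application/json",
            "--data", payload,
            REST_HOST + tenant_name,
        ])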
I would like to check which runners are currently running jobs, but I fail to find anything that would give me this information using the API.
I know which ones are active and can take jobs, but not which ones are actually running them at the current time.
So my question is: how can I determine which runners are currently processing a job?
You can list all the runners, get their ids and then for each runner check if there are jobs with status running:
List all runners API using /runners/all
List Runner jobs using /runners/$runner_id/jobs?status=running
The following bash script uses curl and jq:
#!/bin/bash
token=YOUR_TOKEN
domain=your.domain.com
ids=$(curl -s -H "PRIVATE-TOKEN: $token" "https://$domain/api/v4/runners/all" | \
jq '.[].id')
set -- $ids
for i
do
result=$(curl -s \
-H "PRIVATE-TOKEN: $token" \
"https://$domain/api/v4/runners/$i/jobs?status=running" | jq '. | length')
if [ $result -eq 0 ]; then
echo "runner $i is not running jobs"
else
echo "runner $i is running $result jobs"
fi
done
Output:
runner 6 is not running jobs
runner 7 is running 1 jobs
runner 8 is not running jobs
Using Python:
import requests
import json
token = "YOUR_TOKEN"
domain = "your.domain.com"
r = requests.get(
f'https://{domain}/api/v4/runners/all',
headers = { "PRIVATE-TOKEN": token }
)
ids = [ i["id"] for i in json.loads(r.text) ]
for i in ids:
r = requests.get(
f'https://{domain}/api/v4/runners/{i}/jobs?status=running',
headers = { "PRIVATE-TOKEN": token }
)
num_jobs = len(json.loads(r.text))
if num_jobs > 0:
print(f'runner {i} is running {num_jobs} jobs')
else:
print(f'runner {i} is not running jobs')
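One caveat, as an assumption about scale rather than something from the question: /runners/all is paginated (20 results per page by default), so with many runners the listings above only see the first page. A hedged tweak for the Python version is to request the maximum page size, replacing the first requests.get call in the script above:
# Ask for up to 100 runners per page (the API maximum)
r = requests.get(
    f'https://{domain}/api/v4/runners/all',
    headers = { "PRIVATE-TOKEN": token },
    params = { "per_page": 100 }
)
For more than 100 runners you would have to follow the pagination headers as well.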
I have a json file that is formatted like so:
{
"ServerName1": {
"localip": "192.168.1.1",
"hostname": "server1"
},
"ServerName2": {
"localip": "192.168.1.2",
"hostname": "server2"
},
"ServerName3": {
"localip": "192.168.1.3",
"hostname": "server3"
}
}
And I am trying to write a shell script that uses Dialog to create a menu to run an SSH connection command. I'm parsing with jq, but can't get past the first object level. We have a lot of servers and this will make connecting to them a lot easier. I have the Dialog statement working fine with static data, but we are trying to populate it with the rest of the data from a JSON file. So I am killing myself trying to figure out how to get just the localip and hostname either into an array to loop into the Dialog command, or something that will effectively do the same thing, and all I can get it to do so far is spit out
Servername1 = {"localip":"192.168.1.1","hostname":"server1"}
on each line. I'm a shell script newbie, but this is messing with my sanity now.
This is the jq command that I've been working with so far:
jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" config.json
This is the Dialog command that works well with static data:
callssh(){
clear
ssh $1@$2
}
## Display Menu ##
dialog --clear --title "SSH Relayer"\
--menu "Please choose which server \n\
with which you would like to connect" 15 50 4 \
"Server 1" "192.168.1.1"\
"Server 2" "192.168.1.2"\
"Server 3" "192.168.1.3"\
Exit "Exit to shell" 2>"${INPUT}"
menuitem=$(<"${INPUT}")
case $menuitem in
"Server 1") callssh $sshuser 192.168.1.1;;
"Server 2") callssh $sshuser 192.168.1.2;;
"Server 3") callssh $sshuser 192.168.1.3;;
Exit) clear
echo "Bye!";;
esac
Thanks for any help or pointing in the right direction.
To create a bash array mapping hostnames to ip addresses based on config.json:
declare -A ip_of
# Emit lines of the form:
# hostname localip (without quotation marks)
function hostname_ip {
local json="$1"
jq -r '.[] | "\(.hostname) \(.localip)"' "$json"
}
while read -r hostname ip ; do
ip_of["$hostname"]="$ip"
done < <(hostname_ip config.json)
You can loop through this bash array like so:
for hostname in "${!ip_of[@]}" ; do
echo hostname=$hostname "=>" ${ip_of[$hostname]}
done
For example, assuming the "dialog" presents the hostnames,
you can replace the case statement by:
callssh "$sshuser" "${ip_of[$menuitem]}"
I have to go through and collect a few OIDs from some SNMP-enabled network printers with a Bash script I have been working on.
My Request:
snmpget -v2c -c public 192.168.0.77
.1.3.6.1.2.1.1.1
.1.3.6.1.2.1.1.2
My Actual Response:
.1.3.6.1.2.1.1.1 = Counter32: 1974
.1.3.6.1.2.1.1.2 = Counter32: 633940
The Desired Response:
1974
633940314
(just the oid values only)
I looked and tested several options using the resource from the site below:
http://www.netsnmp.org/docs/man/snmpcmd.html#lbAF
-Oq removes '=' so running
snmpget -v2c -c public -Oq 10.15.105.133
.1.3.6.1.2.1.1.1
.1.3.6.1.2.1.1.2
returns
.1.3.6.1.2.1.1.1 Counter32: 1974
.1.3.6.1.2.1.1.2 Counter 32: 633940314
so I know I am phrasing my request properly.
I am taking the values and writing them to a MySQL DB. I set the data types in my table schema, and the request is consistent, so I know the definition of each OID; I do not need all the information I am getting back, just the value of the OID itself, so I can write it to my DB without manipulating the response. I could probably manipulate the response by pulling the information to the right of ":" and writing that as the value of the OID.
I am relatively new to SNMP (http://www.net-snmp.org/), but I cannot see why this is not a more commonly asked question; I have been searching everywhere for an answer and this post is my last recourse...
You can tune the output with the -O argument:
snmpgetnext -Oqv -v 2c -c public 192.168.0.77 .1
2
See the --help:
q: quick print for easier parsing
v: print values only (not OID = value)
You can postprocess the output with a simple Awk or sed script, or even just grep (provided you have grep -P).
snmpget -v2c -c public 192.168.0.77 \
  .1.3.6.1.2.1.1.1 \
  .1.3.6.1.2.1.1.2 | awk '{ print $4 }'
or
.... | sed 's/.*: //'
or
.... | grep -oP ':\K[0-9]+'
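Since the end goal is writing the values into a MySQL table, here is a minimal Python sketch of the whole round trip, shelling out to snmpget with -Oqv so only the bare values come back. The OIDs and printer address are taken from the question; the mysql-connector-python library, connection details and table layout are placeholders to adapt to your schema:
import subprocess

import mysql.connector  # pip install mysql-connector-python (one option among several)

HOST = "192.168.0.77"
OIDS = [".1.3.6.1.2.1.1.1", ".1.3.6.1.2.1.1.2"]

# -Oqv prints only the value of each OID, one per line
output = subprocess.check_output(
    ["snmpget", "-v2c", "-c", "public", "-Oqv", HOST] + OIDS,
    text=True,
)
values = output.splitlines()

# Placeholder connection and table; adjust to your schema
db = mysql.connector.connect(host="localhost", user="snmp", password="secret", database="printers")
cursor = db.cursor()
for oid, value in zip(OIDS, values):
    cursor.execute("INSERT INTO oid_values (oid, value) VALUES (%s, %s)", (oid, value))
db.commit()
db.close()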
Is there any way to import a JSON file (containing 100 documents) into an Elasticsearch server? I want to import a big JSON file into es-server.
As dadoonet already mentioned, the bulk API is probably the way to go. To transform your file for the bulk protocol, you can use jq.
Assuming the file contains just the documents itself:
$ echo '{"foo":"bar"}{"baz":"qux"}' |
jq -c '
{ index: { _index: "myindex", _type: "mytype" } },
. '
{"index":{"_index":"myindex","_type":"mytype"}}
{"foo":"bar"}
{"index":{"_index":"myindex","_type":"mytype"}}
{"baz":"qux"}
And if the file contains the documents in a top level list they have to be unwrapped first:
$ echo '[{"foo":"bar"},{"baz":"qux"}]' |
jq -c '
.[] |
{ index: { _index: "myindex", _type: "mytype" } },
. '
{"index":{"_index":"myindex","_type":"mytype"}}
{"foo":"bar"}
{"index":{"_index":"myindex","_type":"mytype"}}
{"baz":"qux"}
jq's -c flag makes sure that each document is on a line by itself.
If you want to pipe straight to curl, you'll want to use --data-binary @-, and not just -d, otherwise curl will strip the newlines again.
You should use the Bulk API. Note that you will need to add a header line before each JSON document.
$ cat requests
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests; echo
{"took":7,"items":[{"create":{"_index":"test","_type":"type1","_id":"1","_version":1,"ok":true}}]}
I'm sure someone wants this so I'll make it easy to find.
FYI - This is using Node.js (essentially as a batch script) on the same server as the brand new ES instance. Ran it on 2 files with 4000 items each and it only took about 12 seconds on my shared virtual server. YMMV
var elasticsearch = require('elasticsearch'),
fs = require('fs'),
pubs = JSON.parse(fs.readFileSync(__dirname + '/pubs.json')), // name of my first file to parse
forms = JSON.parse(fs.readFileSync(__dirname + '/forms.json')); // and the second set
var client = new elasticsearch.Client({ // default is fine for me, change as you see fit
host: 'localhost:9200',
log: 'trace'
});
for (var i = 0; i < pubs.length; i++ ) {
client.create({
index: "epubs", // name your index
type: "pub", // describe the data thats getting created
id: i, // increment ID every iteration - I already sorted mine but not a requirement
body: pubs[i] // *** THIS ASSUMES YOUR DATA FILE IS FORMATTED LIKE SO: [{prop: val, prop2: val2}, {prop:...}, {prop:...}] - I converted mine from a CSV so pubs[i] is the current object {prop:..., prop2:...}
}, function(error, response) {
if (error) {
console.error(error);
return;
}
else {
console.log(response); // I don't recommend this but I like having my console flooded with stuff. It looks cool. Like I'm compiling a kernel really fast.
}
});
}
for (var a = 0; a < forms.length; a++ ) { // Same stuff here, just slight changes in type and variables
client.create({
index: "epubs",
type: "form",
id: a,
body: forms[a]
}, function(error, response) {
if (error) {
console.error(error);
return;
}
else {
console.log(response);
}
});
}
Hope I can help more than just myself with this. Not rocket science but may save someone 10 minutes.
Cheers
jq is a lightweight and flexible command-line JSON processor.
Usage:
cat file.json | jq -c '.[] | {"index": {"_index": "bookmarks", "_type": "bookmark", "_id": .id}}, .' | curl -XPOST localhost:9200/_bulk --data-binary @-
We’re taking the file file.json and piping its contents to jq first with the -c flag to construct compact output. Here’s the nugget: We’re taking advantage of the fact that jq can construct not only one but multiple objects per line of input. For each line, we’re creating the control JSON Elasticsearch needs (with the ID from our original object) and creating a second line that is just our original JSON object (.).
At this point we have our JSON formatted the way Elasticsearch’s bulk API expects it, so we just pipe it to curl which POSTs it to Elasticsearch!
Credit goes to Kevin Marsh
Import no, but you can index the documents by using the ES API.
You can use the index API to load each line (using some kind of code to read the file and make the curl calls) or the bulk API to load them all, assuming your data file can be formatted to work with it.
Read more here: ES API
A simple shell script would do the trick if you are comfortable with shell, something like this maybe (not tested):
while read line
do
curl -XPOST 'http://localhost:9200/<indexname>/<typeofdoc>/' -d "$line"
done <myfile.json
Personally, I would probably use Python, either pyes or the elasticsearch Python client.
pyes on github
elastic search python client
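For the elasticsearch-py route, a minimal sketch using its bulk helper (the index and type names are placeholders, and the file is assumed to contain one JSON document per line) might look like this:
import json

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def actions(path):
    # Yield one bulk action per JSON document (one document per line in the file)
    with open(path) as f:
        for line in f:
            yield {
                "_index": "myindex",
                "_type": "mytype",   # omit on Elasticsearch versions without mapping types
                "_source": json.loads(line),
            }

helpers.bulk(es, actions("file.json"))
helpers.bulk batches the documents for you, so you do not have to build the action/document line pairs by hand as in the curl examples above.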
Stream2es is also very useful for quickly loading data into ES and may have a way to simply stream a file in. (I have not tested it with a file, but I have used it to load Wikipedia docs for ES perf testing.)
Stream2es is the easiest way IMO.
e.g. assuming a file "some.json" containing a list of JSON documents, one per line:
curl -O download.elasticsearch.org/stream2es/stream2es; chmod +x stream2es
cat some.json | ./stream2es stdin --target "http://localhost:9200/my_index/my_type"
You can use esbulk, a fast and simple bulk indexer:
$ esbulk -index myindex file.ldj
Here's an asciicast showing it loading Project Gutenberg data into Elasticsearch in about 11s.
Disclaimer: I'm the author.
You can use the Elasticsearch Gatherer Plugin.
The gatherer plugin for Elasticsearch is a framework for scalable data fetching and indexing. Content adapters are implemented in gatherer zip archives which are a special kind of plugins distributable over Elasticsearch nodes. They can receive job requests and execute them in local queues. Job states are maintained in a special index.
This plugin is under development.
Milestone 1 - deploy gatherer zips to nodes
Milestone 2 - job specification and execution
Milestone 3 - porting JDBC river to JDBC gatherer
Milestone 4 - gatherer job distribution by load/queue length/node name, cron jobs
Milestone 5 - more gatherers, more content adapters
reference https://github.com/jprante/elasticsearch-gatherer
One way is to create a bash script that does a bulk insert:
curl -XPOST http://127.0.0.1:9200/myindexname/type/_bulk?pretty=true --data-binary @myjsonfile.json
After you run the insert, run this command to get the count:
curl http://127.0.0.1:9200/myindexname/type/_count