SNMP OUTPUT OPTIONS - How do I get the OID response value only?

I have to go through and collect a few OIDs from some SNMP-enabled network printers with a Bash script I have been working on.
My Request:
snmpget -v2c -c public 192.168.0.77 \
    .1.3.6.1.2.1.1.1 .1.3.6.1.2.1.1.2
My Actual Response:
.1.3.6.1.2.1.1.1 = Counter32: 1974
.1.3.6.1.2.1.1.2 = Counter32: 633940314
The Desired Response:
1974
633940314
(just the oid values only)
I looked and tested several options using the resource from the site below:
http://www.netsnmp.org/docs/man/snmpcmd.html#lbAF
-Oq removes '=' so running
snmpget -v2c -c public -Oq 10.15.105.133 \
    .1.3.6.1.2.1.1.1 .1.3.6.1.2.1.1.2
returns
.1.3.6.1.2.1.1.1 Counter32: 1974
.1.3.6.1.2.1.1.2 Counter32: 633940314
so I know I am forming my request properly.
I am taking the values and writing them to a MySQL DB. I set the data types in my table schema, and the request is consistent, so I already know the definition of each OID; I do not need all the information I am getting back, just the value itself, so I can write it to my DB without manipulating the response. I could probably manipulate the response by pulling the information to the right of ":" and writing that, but I would rather not have to.
I am relatively new to SNMP (http://www.net-snmp.org/), but I cannot see why this is not a more commonly asked question; I have been searching everywhere for an answer and this post is my last recourse...

You can tune the output with the -O argument:
snmpgetnext -Oqv -v 2c -c public 192.168.0.77 .1
2
See the --help:
q: quick print for easier parsing
v: print values only (not OID = value)
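Combining both flags with the question's own host and OIDs should therefore print just the numbers; a quick sketch (untested, reusing the values shown above):
snmpget -v2c -c public -Oqv 192.168.0.77 .1.3.6.1.2.1.1.1 .1.3.6.1.2.1.1.2
1974
633940314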

You can postprocess the output with a simple Awk or sed script, or even just grep (provided you have grep -P).
snmpget -v2c -c public 192.168.0.77 \
    .1.3.6.1.2.1.1.1 .1.3.6.1.2.1.1.2 | awk '{ print $4 }'
or
.... | sed 's/.*: //'
or
.... | grep -oP ': \K[0-9]+'
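And since the end goal is a MySQL insert, here is a minimal end-to-end sketch; the database, table, and column names are hypothetical placeholders, not from the original post:
# fetch both values, flatten them onto one line, then insert them
read -r val1 val2 <<< "$(snmpget -v2c -c public -Oqv 192.168.0.77 .1.3.6.1.2.1.1.1 .1.3.6.1.2.1.1.2 | tr '\n' ' ')"
mysql -u dbuser -p"$DB_PASS" printers_db \
  -e "INSERT INTO oid_readings (value1, value2) VALUES ($val1, $val2);"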

Related

Extract data from unix log file, construct JSON and perform post request using curl

My overall task is to continuously collect data from a UNIX system log file, filter it, prepare a JSON payload based on the filtered data, and send that payload in a POST API call to another server.
I wonder if that can be done with, say, a shell script that monitors the log file with tail, filters with grep to get the specific lines dumped into another file, and a cron job that runs another script to construct the JSON and send the curl request to the external server.
Some details:
In the log file - connector.log I am interested in lines like:
2020-09-16T15:14:37,337 INFO (tomcat-http--131) [tenant-test;-;138.188.247.4;] com.vmware.horizon.adapters.passwordAdapter.PasswordIdpAdapter - Login: user123 - SUCCESS
These lines, I can collect by the below command:
tailf connector.log | grep 'PasswordIdpAdapter - Login\|FAILURE\|SUCCESS'
and probably dump them into a file:
tailf connector.log | grep 'PasswordIdpAdapter - Login\|FAILURE\|SUCCESS' > log_data.txt
I wonder at this point: is it possible to extract only specific fields from a line (not the whole line) of connector.log, so that one line in log_data.txt looks like this (fields 1, 4, 6, 7, 8):
1 2020-09-29T07:15:13,881 [tenant1;usrname#tenant1;10.93.231.5;] - username - SUCCESS
From that point, I need to write a script (which could perhaps be run by a cron job every minute) or a command to construct the JSON below and send the request. One line - one request.
This is the example of the json:
{
"timestamp": "2020-09-16T15:24:35,377",
"tenant_name": "tenant-test",
"log_type": "SERVICE",
"log_entry": "Login: user123 - SUCCESS"
}
The field values that should be replaced already exist in the log line: timestamp(the 1st field, e.g. 2020-09-16T15:14:37,337), tenant_name(the 1st part of the 4th field, tenant-test) and the log_entry(the last four fields, e.g. Login: user123 - SUCCESS).
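For illustration, a minimal awk sketch of that extraction, based on the sample line above; the field numbers are assumptions, not from the original post:
# $1 = timestamp, $4 = [tenant;...;ip;], last four fields = log entry
awk '{ split($4, t, ";"); tenant = substr(t[1], 2);
       print $1, tenant, $(NF-3), $(NF-2), $(NF-1), $NF }' connector.log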
When the json is constructed, I'll send it by:
curl --header "Content-Type: application/json" --request POST --data \
"$payload" http://myservert:8080/api/requests
What is not clear to me is how this script should read the data line by line from log_data.txt, populate some of the fields to create the JSON, and send it to the server.
Thanks for your answers in advance,
Petko
Thanks @shellter for the awk idea. So, bash, awk, grep, cat, cut and curl did the job.
I've created a cronjob to execute the bash script on 5 min interval.
The script gets the last 5 minutes of log data, dumps it to another file, reads the filtered data, prepares the payload and then executes the API call. Maybe it is stupid, but it works.
#!/bin/bash
MONITORED_LOG="/var/logs/test.log"
FILTERED_DATA="/tmp/login/login_data.txt"
REST_HOST="https://rest-host/topics/logs-"
# dump the last 5 mins of log data(date format: 2020-09-28T10:52:28,334)
# to a file, filter for keywords FAILURE\|SUCCESS and NOT having 'lookup|SA'
# an example of data record taken: 1 2020-09-29T07:15:13,881 [tenant1;usrname#tenant1;10.93.231.5;] - username - SUCCESS
awk -v d1="$(date --date="-5 min" "+%Y-%m-%dT%H:%M:%S")" -v d2="$(date "+%Y-%m-%dT%H:%M:%S")" '$0 > d1 && $0 < d2' $MONITORED_LOG | grep 'FAILURE\|SUCCESS' | grep -v 'lookup\|SA-' | awk '{ print $2, $3, $5, $7}' | uniq -c > $FILTERED_DATA
## loop through all the filtered records and send an API call
cat $FILTERED_DATA | while read LINE; do
## preparing the variables
timestamp=$(echo $LINE | cut -f2 -d' ')
username=$(echo $LINE | cut -f5 -d' ')
log_entry=$(echo $LINE | cut -f7 -d' ')
# get the tenant name: take the 3rd field, split by ; and remove the first char [
tenant_name=$(echo $LINE | cut -f3 -d' ' | cut -f1 -d';')
tenant_name="${tenant_name:1}"
# preparing the payload
payload=$'{"records":[{"value":{"timestamp":"'
payload+=$timestamp
payload+=$'","tenant_name":"'
payload+=$tenant_name
payload+=$'","log_entry":"'
payload+=$log_entry
payload+=$'"}}]}'
echo 'payload: ' $payload
# send the api call to the server with dynamic construction of tenant name
curl -i -k -u 'api_user:3494ssdfs3' --request POST --header "Content-type:application/json" --data "$payload" "$REST_HOST$tenant_name"
done
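One design note on the script above: hand-concatenated JSON breaks as soon as a field contains a quote or backslash. If jq happens to be available on the box, a safer way to build the same payload might look like this (a sketch, not part of the original script):
payload=$(jq -n --arg ts "$timestamp" --arg tn "$tenant_name" --arg le "$log_entry" \
  '{records: [{value: {timestamp: $ts, tenant_name: $tn, log_entry: $le}}]}')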

mongoexport - issue with JSON query (extended JSON - Invalid JSON input)

I have started learning MongoDB recently. Today the instructor taught us the mongoexport command. While practicing it, I faced a peculiar issue that none of the other batchmates, including the instructor, faced. I use MongoDB version 4.2.0 on my Windows 10 machine.
If I use mongoexport for my collection without any -q parameter to specify any filtering condition, it works fine.
mongoexport -d trainingdb -c employee -f empId,name,designation -o \mongoexport\all-employees.json
2019-09-17T18:00:30.300+0530 connected to: mongodb://localhost/
2019-09-17T18:00:30.314+0530 exported 3 records
However, whenever I specify the JSON query as -q (or --query) it gives an error as follows.
mongoexport -d trainingdb -c employee -f empId,name,designation -q {'designation':'Developer'} -o \mongoexport\developers.json
2019-09-17T18:01:45.381+0530 connected to: mongodb://localhost/
2019-09-17T18:01:45.390+0530 Failed: error parsing query as Extended JSON: invalid JSON input
The same error persists in all the different variants of the query I attempted.
-q {'designation':'Developer'}
--query {'designation':'Developer'}
-q "{'designation':'Developer'}"
I even attempted a different query condition on 'empId' as -q {'empId':'1001'}, but no luck. I keep getting the same error.
As per one of the suggestions on the Stack Overflow website, I tried the following option, but got a different error.
-q '{"designation":"Developer"}'
The error is : 'query '[39 123 101 109 112 73 100 58 49 48 48 49 125 39]' is not valid JSON: json: cannot unmarshal string into Go value of type map[string]interface {}'.
2019-09-17T20:24:58.878+0530 query '[39 123 101 109 112 73 100 58 49 48 48 49 125 39]' is not valid JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
2019-09-17T20:24:58.882+0530 try 'mongoexport --help' for more information
I am really not sure what is missing here. I tried a bit of Googling and also went through the official MongoDB documentation for mongoexport, but no luck.
The employee collection in my system looks like the follows with 3 documents.
> db.employee.find().pretty()
{
    "_id" : ObjectId("5d80d1ae0d4d526a42fd95ad"),
    "empId" : 1001,
    "name" : "Raghavan",
    "designation" : "Developer"
}
{
    "_id" : ObjectId("5d80d1b20d4d526a42fd95ae"),
    "empId" : 1002,
    "name" : "Kannan",
    "designation" : "Architect"
}
{
    "_id" : ObjectId("5d80d1b40d4d526a42fd95af"),
    "empId" : 1003,
    "name" : "Sathish",
    "designation" : "Developer"
}
>
Update
As suggested by @NikosM, I have saved the query in a .json file (query.json) and tried the same mongoexport command with the new approach. Still, no luck. Same Marshal error.
cat query.json
{"designation":"Developer"}
mongoexport -d trainingdb -c employee -f empId,name,designation -q 'query.json' -o \mongoexport\developers.json
2019-09-17T21:16:32.849+0530 query '[39 113 117 101 114 121 46 106 115 111 110 39]' is not valid JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
2019-09-17T21:16:32.852+0530 try 'mongoexport --help' for more information
Any help on this will be highly appreciated.
The following approach made it work at last: specifying the JSON query with the double quotes escaped with backslashes: -q "{\"designation\":\"Developer\"}". (The byte dump in the earlier error actually shows the cause: 39 is the ASCII code for the single quote, so the Windows shell passed the single quotes through literally instead of stripping them, and mongoexport received the query wrapped in literal quote characters rather than bare JSON.)
mongoexport -d trainingdb -c employee -f empId,name,designation -q "{\"designation\":\"Developer\"}" -o \mongoexport\developers.json
2019-09-17T21:33:01.642+0530 connected to: mongodb://localhost/
2019-09-17T21:33:01.658+0530 exported 2 records
cat developers.json
{"_id":{"$oid":"5d80d1ae0d4d526a42fd95ad"},"empId":1001.0,"name":"Raghavan","designation":"Developer"}
{"_id":{"$oid":"5d80d1b40d4d526a42fd95af"},"empId":1003.0,"name":"Sathish","designation":"Developer"}
Thank you very much @Caconde. Your suggestion helped.
But I am really not sure why this fails on my machine alone, or the reason for this tweak in the format of the query.
There is another approach that I found to work, which uses triple double-quotes (""") as the outer quoting.
mongoexport -d trainingdb -c employee -f empId,name,designation -q """ {"designation":"Developer"} """ -o \mongoexport\developers.json
For me it was:
"{\"sensor_name\":\"Heat Recovery System Header Mass Flow\"}"
This answer solved my issue, thank you so much!

jq --arg variable used in quoted string within select()

I want to select() an object based on a string containing a jq variable ($ARCH), using jq's --arg argument. Here's the use case while looking for "/bin/linux/$ARCH/kubeadm" from Google...
# You may need to install `xml2json`, i.e.:
#   sudo gem install --no-rdoc --no-ri xml2json
# and run the script I wrote to do the xml2json:
#!/usr/bin/ruby
# Written by Jim Conner
require 'xml2json'

xml = ARGV[0]
begin
  if xml == '-'
    xdata = ARGF.read.chomp
    puts XML2JSON.parse(xdata)
  else
    puts XML2JSON.parse(File.read(xml).chomp)
  end
rescue => e
  $stderr.puts 'Unable to comply: %s' % [e.message]
end
Then run the following:
curl -sSL https://storage.googleapis.com/kubernetes-release/ | tee /var/tmp/k8s.xml | \
xml2json - | \
jq --arg ARCH amd64 '[.ListBucketResult.Contents[] | select(.Key | contains("/bin/linux/$ARCH/kubeadm"))]'
...which returns an empty set, because jq does not expand variables inside quoted strings. I know I can get around this by using multiple select/contains() calls, but I'd prefer not to if possible.
jq simply may not do it, but if someone knows a way to do it, I'd much appreciate it.
jq does support string interpolation, and in your case the string would be:
"/bin/linux/\($ARCH)/kubeadm"
Notice that this is not a JSON string: the occurrence of "\(" signals that the string is subject to interpolation. Very nifty.
(Alternatively, you could of course use string concatenation:
"/bin/linux/" + $ARCH + "/kubeadm")
Btw, you might wish to avoid contains here. Its semantics are quite complex and perhaps counter-intuitive. Consider using startswith, index, or (for regex matches) test.
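Putting the interpolation back into the original pipeline, the filter from the question would become (a sketch, untested against the real bucket listing):
jq --arg ARCH amd64 '[.ListBucketResult.Contents[] | select(.Key | contains("/bin/linux/\($ARCH)/kubeadm"))]'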

Zabbix - triggering on text, displaying only part of the text

I'm monitoring a web page that displays the status of a few hundred items. The page looks like this:
{"arrisId":"a000098","status":"Running","startTime":"2018-05-10T08:02:19.563Z"},{"arrisId":"a000101","status":"Running","startTime":"2018-05-10T08:02:19.892Z"},{"arrisId":"a000107","status":"Running","startTime":"2018-05-10T08:02:28.556Z"},...
What I want to do is trigger when one of the items is "Not Running", but display only the item that is not working, not the entire page. I could use web.page.regexp and send a message that something is not running, but if I use web.page.get, is there a way to configure a trigger to display the "Not Running" and the 25 or so characters in front of it?
I hope this question makes sense.
Your best course of action is to use Low Level Discovery.
Your LLD rule will run a script to ingest your main status page, then parse it and use the fields to create your items according to the "Item prototypes" you define.
The item prototypes themselves will need a script as well to get their respective information (unless you are willing to use the Zabbix 4.0 beta).
I've done a simple setup, using mock JSON from jsonplaceholder.typicode.com:
LLD script: parses the mock JSON and converts it into a Zabbix LLD compliant format:
import requests
import json

jsonSource = "https://jsonplaceholder.typicode.com/users"

lld = {}
data = []
lld['data'] = data

session = requests.Session()
response = session.get(jsonSource)

# build one LLD entry per JSON object returned by the mock API
for jsonObject in response.json():
    data.append({
        '{#NAME}': jsonObject['name'],
        '{#ID}': jsonObject['id'],
        '{#URL}': jsonSource + '/' + str(jsonObject['id'])
    })

print(json.dumps(lld))
Item GET script: gets a specific field of a specific item (will become obsolete with the http agent item in Zabbix 4.0):
import requests
import json
import sys, argparse
parser = argparse.ArgumentParser()
parser.add_argument('-i', required=True, metavar='User ID')
parser.add_argument('-f', required=True, metavar='\"Requested JSON Field\"')
args = parser.parse_args()
jsonSource = "https://jsonplaceholder.typicode.com/users/" + args.i
session = requests.Session()
response = session.get(jsonSource)
print (response.json()[args.f])
Command line usage sample:
$ jsonLLD.py
{"data": [{"{#ID}": 1, "{#URL}": "https://jsonplaceholder.typicode.com/users/1", "{#NAME}": "Leanne Graham"}, {"{#ID}": 2, "{#URL}": "https://jsonplaceholder.typicode.com/users/2", "{#NAME}": "Ervin Howell"},
[cut]
$ jsonGet.py -i 10 -f phone
024-648-3804
$ jsonGet.py -i 10 -f name
Clementina DuBuque
Then you have to set it up into Zabbix:
create a new template
create a Discovery rule of "Zabbix agent" type and set it to run system.run[/usr/bin/jsonLLD.py] (mind the path!)
create an item prototype for each json field you want to work on (ie: Item name: {#NAME} telephone number, Item key system.run[/usr/bin/jsonGet.py -i {#ID} -f phone] )
create trigger prototypes accordingly (see the sketch after this list)
associate an host to the template
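For instance, a trigger prototype for the question's "Not Running" case might look like the following sketch; the template and key names are hypothetical, it assumes an item prototype that fetches the status field, and it uses the pre-5.4 Zabbix expression syntax:
{Template JSON LLD:system.run[/usr/bin/jsonGet.py -i {#ID} -f status].str("Not Running")}=1
str() returns 1 when the last value contains the given substring, and the trigger prototype's name can embed {#NAME}, so the alert displays only the failing item.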
In your situation I'd use the Zabbix server itself as host, and install the scripts in its /usr/bin.
Watch the Zabbix Agent's log to see the discovery and item gathering process:
1972:20180519:121849.052 Executing command '/usr/bin/jsonGet.py -i 1 -f phone'
1971:20180519:121850.054 Executing command '/usr/bin/jsonGet.py -i 2 -f phone'
1974:20180519:121851.055 Executing command '/usr/bin/jsonGet.py -i 3 -f phone'
1974:20180519:121852.073 Executing command '/usr/bin/jsonGet.py -i 4 -f phone'
1974:20180519:121853.076 Executing command '/usr/bin/jsonGet.py -i 5 -f phone'
1973:20180519:121854.077 Executing command '/usr/bin/jsonGet.py -i 6 -f phone'
1972:20180519:121855.079 Executing command '/usr/bin/jsonGet.py -i 7 -f phone'
[cut]

How do I get field from HTTP GET JSON result to file?

I am trying to make an HTTP GET request to an API service and push one of the returned fields in the JSON result to a txt file.
Based on this previously asked question: (Getting JSON value from cURL in Linux Bash)
...I have a bash script as follows...
TOKEN_FILE="/myhome/project/resources/auto_token.txt"
AUTH_RESULT=$(curl -i -H "Content-Type: application/json" "https://access.mywebservice.com/access/oauth/token?grant_type=client_credentials&client_id=123456&client_secret=MySecretPassword");
RESULT_FIELDS=$( cat <<EOF | json_reformat | \
sed -rne '/:/s#^\s+"(\w+)":\s+"([^"]+)",?#json_\1="\2"#gp'
[$AUTH_RESULT]
EOF
)
if [ -f "$TOKEN_FILE" ]
then
echo "$RESULT_FIELDS" > "$TOKEN_FILE"
fi
The expected JSON result looks like this (copied from Postman):
{
"access_token": "eyJ5bGciOiJSUzI1NiJ6.eyJzY29wZSI6WyJDUl7iLCJNQVAiLCJQVFkiLCJ8R1QiLCJTVFMiLCJUVEwiXSwiaXNzIjoiaHR0cHM6Ly9hY2Nlc3MtdWF0LWFwaS5jb3JlbG9naWMuYXNpYSIsImVudl9hx2Nlc3NfcmVzdHJpY3QiOmZhbHNlLCJleHAiOjE0NjcyODMwODcsImNsaWVudF9pZCI6IjhhOTY4OGJjIn0.F2iQfVsi9zntOxKYrNRukSIwuQ_LGSi_WMIXKII2A3GOEaqs-WmFTi7az9rvvfDsOl9rHy_s_66A6PiCpPftyw21Fl0aZZRoFcKv2H_zDUHuxOEs8V36jHeLghV7pjHwYI_nG68CIGvfuRWFNzQuiMFWc_i8oB3n5noSd8fQqa4",
"token_type": "bearer",
"expires_in": 43199,
"scope": "PROD1 PROD2 PROD3",
"iss": "https://access.mywebservice.com",
"env_access_restrict": false
}
I get the following errors returned...
bash-4.1$ ./token_renewal_test_05.sh
: command not foundt_05.sh: line 2:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
115 576 0 576 0 0 2266 0 --:--:-- --:--:-- --:--:-- 30315
: command not foundt_05.sh: line 3:
: command not foundt_05.sh: line 4:
./token_renewal_test_05.sh: line 14: warning: here-document at line 10 delimited by end-of-file (wanted `EOF')
./token_renewal_test_05.sh: line 13: warning: here-document at line 9 delimited by end-of-file (wanted `EOF')
: command not foundt_05.sh: line 13:
lexical error: invalid char in json text.
sed -rne '/:/s#^\s+"(\w+)":\s+"
(right here) ------^
: command not foundt_05.sh: line 10:
./token_renewal_test_05.sh: line 16: syntax error: unexpected end of file
I'm a bit new to bash, and despite what appears to be a direct pointer to the issue, I am having problems resolving this one (note this is version 5 of the script)!
Can anyone offer any assistance with this one?
PS: I do not have jq either.
Thanks!
Regards,
Chris
Caveat emptor as per this comment on Parsing JSON with UNIX tools. (As an aside: the way the error messages overwrite the script name, as in ': command not foundt_05.sh', is the classic symptom of DOS line endings; running the script through dos2unix may clear up the stray ': command not found' errors.)
A working solution for your format:
eval $(cat <<EOF | \
sed -re 's/(,|\{|\})//g' | \
sed -re 's/"(\w+)":\s*"?([^"]*)"?$/json_\1='\''\2'\''/'
$JSON
EOF
)
set | grep '^json_'
json_access_token=eyJ5bGciOiJSUzI1NiJ6.eyJzY29wZSI6WyJDUl7iLCJNQVAiLCJQVFkiLCJ8R1QiLCJTVFMiLCJUVEwiXSwiaXNzIjoiaHR0cHM6Ly9hY2Nlc3MtdWF0LWFwaS5jb3JlbG9naWMuYXNpYSIsImVudl9hx2Nlc3NfcmVzdHJpY3QiOmZhbHNlLCJleHAiOjE0NjcyODMwODcsImNsaWVudF9pZCI6IjhhOTY4OGJjIn0.F2iQfVsi9zntOxKYrNRukSIwuQ_LGSi_WMIXKII2A3GOEaqs-WmFTi7az9rvvfDsOl9rHy_s_66A6PiCpPftyw21Fl0aZZRoFcKv2H_zDUHuxOEs8V36jHeLghV7pjHwYI_nG68CIGvfuRWFNzQuiMFWc_i8oB3n5noSd8fQqa4
json_env_access_restrict=false
json_expires_in=43199
json_iss=https://access.mywebservice.com
json_scope='PROD1 PROD2 PROD3'
json_token_type=bearer
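To mirror the question's original intent of writing the fields to auto_token.txt, the same grep marker can simply be redirected (a small sketch under that assumption):
[ -f "$TOKEN_FILE" ] && set | grep '^json_' > "$TOKEN_FILE"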
Thanks again Chepner and Drew.
I was having too many issues with sed (probably due to my lack of experience). As it turns out, I needed a lookbehind; sed doesn't have this, but grep does. Since the structure of my JSON response will never change, I was able to extract my token using the following grep instead...
grep -o -P '(?<="access_token":").*(?=","token_type")'
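A sketch of how that grep could slot back into the original script (same endpoint as the question; note that curl's -i is dropped so response headers don't pollute the JSON body being parsed):
TOKEN_FILE="/myhome/project/resources/auto_token.txt"
ACCESS_TOKEN=$(curl -s -H "Content-Type: application/json" \
  "https://access.mywebservice.com/access/oauth/token?grant_type=client_credentials&client_id=123456&client_secret=MySecretPassword" |
  grep -o -P '(?<="access_token":").*(?=","token_type")')
[ -f "$TOKEN_FILE" ] && echo "$ACCESS_TOKEN" > "$TOKEN_FILE"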