Zabbix API value is different from the graph value

I have Zabbix 5. I've been trying to write a shell script to get an item's trend for a range of time. The shell script works correctly, but the value it returns doesn't match what is shown on the graph.
for example:
I have an item with itemid "10234" which returns the "percentage of used CPU".
I want to get the Zabbix trend for this item from "2021/09/20 09:00:00" till "2021/09/21 09:00:00".
The Unix times for this range are 1632112200 and 1632198600.
I run this command to get the values:
curl -L -k -i -X POST -H 'Content-Type:application/json' -d '{"jsonrpc":"2.0","method":"trend.get","id":1,"auth":"1a543455bd48e6ddc222219acccb52e9","params": {"output": ["clock","value_avg","value_min","value_max","num","itemid"],"itemids":["10234"],"time_from": "1632112200","time_till": "1632198600", "limit": "1"}}' https://172.30.134.03:423//api_jsonrpc.php
output:
{"clock":"1632114000","value_avg":"14.968717529411 764","value_min":"12.683622999999997","value_max": "17.635707999999994"}
but the graph shows a different value.
Why does this happen, and how can I fix it?

In most cases, the graphs apply approximations. If you zoom in, you should see the same data you get from the API. The finest zoom you can apply is 1 minute, while the API gives you the exact point-in-time value.
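If you want to verify this, a minimal check (a sketch reusing the itemid, auth token, and endpoint from the question) is to request a single trend hour and compare its value_avg against the graph zoomed in to that same hour:
curl -L -k -X POST -H 'Content-Type:application/json' \
  -d '{"jsonrpc":"2.0","method":"trend.get","id":1,"auth":"1a543455bd48e6ddc222219acccb52e9","params":{"output":["clock","value_avg","value_min","value_max"],"itemids":["10234"],"time_from":"1632114000","time_till":"1632117600"}}' \
  https://172.30.134.03:423//api_jsonrpc.php
The returned hourly average should line up with the graph once it is zoomed in to that hour.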

Related

Increment bash variable while evaluating it

I'm using curl to send some JSON data. Part of the data is a request counter that needs to be incremented after each call.
I would like to reduce the code below by incrementing it right after evaluating it. I'm not sure how to format the variable within the JSON string, though.
Thank you in advance!
#!/bin/bash
reqcnt=0
curl http://myurl.com --data-binary '{"requestCounter":'${reqcnt}'}'
((reqcnt++))
Expected:
#!/bin/bash
reqcnt=0
curl http://myurl.com --data-binary '{"requestCounter":'${((reqcnt++)}'}'
Edit
Taking into account the great answer by Inian, I noticed there are cases where I need to save the output of curl. For some reason the arithmetic operation is not performed on the variable in that case:
res=$(curl http://myurl.com --data-binary '{"requestCounter":'"$((reqcnt++))"'}')
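For what it's worth, one likely reason the increment is lost in the res=$(...) form is that a command substitution runs in a subshell, so reqcnt++ evaluated inside it never reaches the parent shell. A minimal sketch (using the same placeholder URL) that keeps the counter in the parent shell:
#!/bin/bash
reqcnt=0
# expand the current counter into the JSON payload...
res=$(curl -s http://myurl.com --data-binary '{"requestCounter":'"$reqcnt"'}')
# ...then increment it in the parent shell so the new value persists
((reqcnt++))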

The forge script 'test-list-resources' only lists 10 items

The forge script 'test-list-resources' only lists 10 items. How do we list all the resources? Besides the command-line script, is it possible to view all resources somewhere online?
Also, I found that it's not listing the latest 10 items; it lists the first 10 items after sorting by the URN (which is very long and not human-readable). This is not very intuitive, because users typically upload a model, forget the URN, and then want to look it up by running this script.
Can you please clarify where the test-list-resource script came from?
Also, from my perspective, this script uses one of the following methods under the hood:
1. Get Buckets
2. Get Bucket by Key
You can use either of them to get bucket(s) with their contents. For both, you can specify 'limit' as a query string parameter; you are getting 10 now because that is the default value these GET methods use. To get more than 10, you just need to set a higher value, up to 100 (the maximum).
Updated
After checking the script source, I found that it uses the second of these GET methods, Get Bucket by Key. The quickest solution I can propose is to jump into the script code and edit one line: you only need to add the limit parameter to the query (the GET buckets/:bucketKey/objects curl request). You can do this in a few ways:
Hardcode 'limit' to 100
response=$(curl -H "Authorization: ${bearer}" -X GET ${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=100 -k -s)
Pass a value to the script via a shell environment variable (a variation with a default value is sketched after these options)
first
export BUCKET_LIMIT=<<YOUR LIMIT VALUE>>
then
response=$(curl -H "Authorization: ${bearer}" -X GET ${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=$BUCKET_LIMIT -k -s)
If you run the script with the 'sh' command, you can pass an inline parameter
first
response=$(curl -H "Authorization: ${bearer}" -X GET ${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=$1 -k -s)
then
sh test-list-resources 100
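A small variation on the environment-variable approach (a sketch, assuming the same script variables) falls back to the maximum of 100 when BUCKET_LIMIT is not set:
response=$(curl -H "Authorization: ${bearer}" -X GET "${ForgeHost}/oss/v2/buckets/${bucket}/objects?limit=${BUCKET_LIMIT:-100}" -k -s)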
Also, thank you for pointing out this case; I will contact the script's author and propose adding new functionality for limits and other parameters.

Shell script - math operations and loops

I'm writing an sh file to run via cron jobs, but I have no experience with shell scripting. I want to get the row count from a MySQL query, divide it by 200, ceil the result, and start a loop from 0 to that amount.
After a long search, I wrote this line to get the row count from the MySQL query, and it works fine.
total=`mysql database -uuser -ppassword -s -N -e "SELECT count(id) as total FROM users"`
but nothing I found on Google helps me complete the work. I tried things like "expr" and "let" for the math operations, but I don't know why they aren't working.
Even the loop examples I found on Google aren't working.
Can you help me with this script?
I guess you are using Bash. Here is how you divide:
#!/usr/bin/env bash
# ^
# |
# |
# --> This should be at the top of your script to make sure you run Bash
value=$((total / 200)) # total => value returned from mysql
for ((i = 0; i <= $value; i++)); do
    # your code here
done
Stack Overflow has an astounding number of examples. I recommend taking a look at the Bash documentation here: https://stackoverflow.com/tags/bash/info.
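Since the question asks to ceil the result, and $((total / 200)) is integer (floor) division, a common idiom is to add divisor-minus-one before dividing. A minimal sketch, reusing the mysql query from the question:
#!/usr/bin/env bash
total=$(mysql database -uuser -ppassword -s -N -e "SELECT count(id) as total FROM users")
pages=$(( (total + 199) / 200 ))   # ceil(total / 200) with integer arithmetic
for ((i = 0; i < pages; i++)); do
    echo "processing batch $i"     # replace with the real work for each batch of 200 rows
done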

Extract the value of a field of a top-level JSON object using command-line tools

I have JSON files on my server that need to be passed to several different Raspberry Pis running Debian. Each of the Pis has its own JSON feed that it will pull from, but essentially, I need to automatically take the value of one key-value pair and use it as an argument for a command that is run in the terminal.
For instance: Fetching https://www.example.com/api/THDCRUG2899CGF8&/manifest.json
{
  "version": "1.5.6",
  "update_at": "201609010000",
  "body": "172.16.1.1"
}
Then that value would be dynamically inserted into a command that uses the body's value as an argument, e.g.: ping [body value]
Edit:
The point of this is to have a task that executes every minute to receive weather updates.
You are looking for command substitution, specifically wrapped around a command that can extract values from a JSON value. First, you can use jq as the JSON-processing command.
$ jq -r '.body' tmp.json
172.16.1.1
Command substitution allows you to capture the output of jq to use as an argument:
$ ping "$(jq -r '.body' tmp.json)"
PING 172.16.1.1 (172.16.1.1): 56 data bytes
...
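Since the manifest lives on a server rather than in a local file, the same idea works by piping curl straight into jq (a sketch, assuming the example URL from the question; note the quotes around the URL because it contains an &):
body=$(curl -s 'https://www.example.com/api/THDCRUG2899CGF8&/manifest.json' | jq -r '.body')
ping -c 4 "$body"    # -c 4 just limits the ping to four packets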

Kibana - how to export search results

We've recently moved our centralized logging from Splunk to an ELK solution, and we have a need to export search results - is there a way to do this in Kibana 4.1? If there is, it's not exactly obvious...
Thanks!
This is a very old post, but I think people are still searching for a good answer.
You can easily export your searches from Kibana Discover.
Click Save first, then click Share
Click CSV Reports
Then click Generate CSV
After a few moments, you'll get a download option at the bottom right.
This works with Kibana v7.2.0 to export query results into a local JSON file. Here I assume that you have Chrome; a similar approach may work with Firefox.
Chrome - open Developer Tools / Network
Kibana - execute your query
Chrome - right click on the network call and choose Copy / Copy as cURL
command line - execute [cURL from step 3] > query_result.json . The query response data is now stored in query_result.json
Edit: To drill down into the source nodes in the resulting JSON file using jq:
jq '.responses | .[] | .hits | .hits | .[]._source ' query_result.json
If you want to export the logs (not just the timestamp and counts), you have a couple of options (tylerjl answered this question very well on the Kibana forums):
If you're looking to actually export logs from Elasticsearch, you
probably want to save them somewhere, so viewing them in the browser
probably isn't the best way to view hundreds or thousands of logs.
There are a couple of options here:
In the "Discover" tab, you can click on the arrow tab near the bottom to see the raw request and response. You could click "Request"
and use that as a query to ES with curl (or something similar) to
query ES for the logs you want.
You could use logstash or stream2es to dump out the contents of an index (with possible query parameters to get the
specific documents you want.)
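As a rough sketch of the first option (the hostname, index, and request body are placeholders, not values from this thread): save the JSON shown under "Request" in Discover to a file and replay it against Elasticsearch with curl:
# request.json holds the body copied from the "Request" tab in Discover
curl -s -H 'Content-Type: application/json' \
  -XPOST 'http://your-es-server:9200/filebeat-*/_search?size=1000' \
  --data-binary @request.json > kibana_logs.json
The size parameter caps how many hits come back in one response; the scroll-based script below removes that cap.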
@Sean's answer is right, but lacks specifics.
Here is a quick-and-dirty script that can grab all the logs from ElasticSearch via httpie, parse and write them out via jq, and use a scroll cursor to iterate the query so that more than the first 500 entries can be captured (unlike other solutions on this page).
This script is implemented with httpie (the http command) and fish shell, but could readily be adapted to more standard tools like bash and curl.
The query is set as per @Sean's answer:
In the "Discover" tab, you can click on the arrow tab near the bottom
to see the raw request and response. You could click "Request" and
use that as a query to ES with curl (or something similar) to query ES
for the logs you want.
set output logs.txt
set query '<paste value from Discover tab here>'
set es_url http://your-es-server:port
set index 'filebeat-*'
function process_page
    # You can do anything with each page of results here,
    # but writing to a TSV file isn't a bad example -- note
    # the jq expression here extracts a kubernetes pod name and
    # the message field, but can be modified to suit
    echo $argv | \
        jq -r '.hits.hits[]._source | [.kubernetes.pod.name, .message] | @tsv' \
        >> $output
end
function summarize_string
    echo (echo $argv | string sub -l 10)"..."(echo $argv | string sub -s -10 -l 10)
end
set response (echo $query | http POST $es_url/$index/_search\?scroll=1m)
set scroll_id (echo $response | jq -r ._scroll_id)
set hits_count (echo $response | jq -r '.hits.hits | length')
set hits_so_far $hits_count
echo "Got initial response with $hits_count hits and scroll ID "(summarize_string $scroll_id)
process_page $response
while test "$hits_count" != "0"
    set response (echo "{ \"scroll\": \"1m\", \"scroll_id\": \"$scroll_id\" }" | http POST $es_url/_search/scroll)
    set scroll_id (echo $response | jq -r ._scroll_id)
    set hits_count (echo $response | jq -r '.hits.hits | length')
    set hits_so_far (math $hits_so_far + $hits_count)
    echo "Got response with $hits_count hits (hits so far: $hits_so_far) and scroll ID "(summarize_string $scroll_id)
    process_page $response
end
echo Done!
The end result is all of the logs matching the query in Kibana, in the output file specified at the top of the script, transformed as per the code in the process_page function.
If you have trouble making your own request with curl, or you don't need an automated program to extract logs from Kibana, just click 'Response' and get what you need.
After running into problems like 'xsrf token missing' when using curl,
I found this way much easier and simpler!
As others have said, the Request button appears after clicking the arrow tab near the bottom.
Only the Timestamp and the count of messages at that time are exported, not the log information:
Raw:
1441240200000,1214
1441251000000,1217
1441261800000,1342
1441272600000,1452
1441283400000,1396
1441294200000,1332
1441305000000,1332
1441315800000,1334
1441326600000,1337
1441337400000,1215
1441348200000,12523
1441359000000,61897
Formatted:
"September 3rd 2015, 06:00:00.000","1,214"
"September 3rd 2015, 09:00:00.000","1,217"
"September 3rd 2015, 12:00:00.000","1,342"
"September 3rd 2015, 15:00:00.000","1,452"
"September 3rd 2015, 18:00:00.000","1,396"
"September 3rd 2015, 21:00:00.000","1,332"
"September 4th 2015, 00:00:00.000","1,332"
"September 4th 2015, 03:00:00.000","1,334"
"September 4th 2015, 06:00:00.000","1,337"
"September 4th 2015, 09:00:00.000","1,215"
"September 4th 2015, 12:00:00.000","12,523"
"September 4th 2015, 15:00:00.000","61,897"
Sure, you can export from Kibana's Discover (Kibana 4.x+).
1. On the Discover page, click the "up arrow" near the bottom.
Now, at the bottom of the page, you'll have two options to export the search results.
At logz.io (the company I work for), we'll be releasing scheduled reports based on specific searches.