Zabbix load/cpu roll-your-own formula

I know that newer versions of Zabbix (2.0 onward) have a simple way of determining average load per CPU via the introduction of the "percpu" parameter. Unfortunately, I'm using 1.8.
With 2.0 I would be able to create an item with this key: system.cpu.load[percpu,avg15]
How do I roll my own calculated item in 1.8? I have tried the following formulas (many are desperate and improbable, I know):
system.cpu.load[,avg15].last/system.cpu.num.last
Template_Linux:system.cpu.load[,avg15]/Template_Linux:system.cpu.num
{Template_Linux:system.cpu.load[,avg15]}/{Template_Linux:system.cpu.num}
{Template_Linux:system.cpu.load[,avg15].last}/{Template_Linux:system.cpu.num.last}
{Template_Linux:system.cpu.load[,avg15].last()}/{Template_Linux:system.cpu.num.last()}
{"Template_Linux:system.cpu.load[,avg15]".last()}/{"Template_Linux:system.cpu.num".last()}
"Template_Linux:system.cpu.load[,avg15]".last()/"Template_Linux:system.cpu.num".last()
"Template_Linux:system.cpu.load[,avg15].last()"/"Template_Linux:system.cpu.num.last()"
Thanks!

The Zabbix documentation page on item configuration describes the correct calculated item syntax.
In this case, the formula would be something like this:
last("system.cpu.load[,avg15]") / last("system.cpu.num")

Related

Filter on partial kernel name with Nsight Compute

I am trying to filter on a partial name when profiling kernels in my program using NVIDIA Nsight Compute 2021.2.1. I believe using substrings or regex to match more than one kernel has worked before. However, when I try it now I do not get any results unless I either leave the field blank or write the full kernel name.
How do I accomplish this through the GUI?
See the changelog for version 2021.1:
--kernel-regex and --kernel-regex-base options are deprecated and replaced by --kernel-name and --kernel-name-base, respectively.
All options which support regex need to provide regex: as a prefix before an argument to match per the regex, e.g. regex:expression
So you need to write regex:almostFullkernelnam in the field.
https://developer.nvidia.com/nsight-compute-2021_1-new-features
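For reference, the same prefix applies on the command line. Something along the lines of the following should work (ncu is the Nsight Compute CLI; the application name is just a placeholder):

ncu --kernel-name "regex:almostFullkernelnam" ./myApp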

jsonPath - problem with accessing a field by index

I am trying to access the value Carrier by taking the value from shipping_id. I am testing queries at https://www.jsonquerytool.com/. If I type the key by hand, $.shipping_methods["11"] or $.shipping_methods.11, I receive the correct result ["Carrier"]. But I have a problem taking the key value from the shipping_id field. I have tried many variations of $.shipping_methods[$.shipping_id] but without success. Is this possible with pure JSONPath?
{
  "shipping_id": "11",
  "shipping_methods": {
    "10": "Post",
    "11": "Carrier"
  }
}
Depending on the JSON-Path implementation/environment you are using, this may or may not be possible. This is because the feature you are asking for is not part of the proposed standard, though some libraries have features that enable queries like that, e.g. in JSONPath-Plus you could use @property and @parent (I had no success using @root) - but those are 'extensions':
$.shipping_methods[?(@property == @parent.shipping_id)]
You can test this online here.
The page you have linked is using JSPath under the hood, and I cannot see any of the required features mentioned in the readme. It would be simpler to drill down in a general programming language that hosts the JSON-Path engine, but I am not sure if this is an option here.
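To illustrate that last point, here is a minimal sketch of doing the dynamic lookup in a host language instead of inside the JSONPath expression (Python is used here purely as an example):

import json

doc = json.loads('{"shipping_id": "11", "shipping_methods": {"10": "Post", "11": "Carrier"}}')

# Resolve the dynamic key in the host language rather than in JSONPath.
print(doc["shipping_methods"][doc["shipping_id"]])  # -> Carrier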

Zabbix Agent 3.4.9 Active Monitoring Log file, Not supported: too many parameters

I'm trying to monitor the log file: /var/log/neo4j/debug.log
I'm looking for the text: Application threads blocked for ######ms
I have devised a regular expression for this as: Application threads blocked for (\d+)ms
We want to skip old info: skip must be set as the mode parameter.
I want to pull out the number of ms so that the trigger will alert on blockages > 150ms: \1 must be set as the output parameter.
I constructed the key as:
log[/var/log/neo4j/debug.log,Application threads blocked for (\d+)ms,,,skip,\1]
in accordance with
log[/path/to/file/file_name,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
Type of Information is: Log
Update interval: 30s
History storage period: 90d
Timestamps appear in the log file as: 2018-10-03 13:29:20.460+0000
My timestamp appears as: yyyypMMpddphhpmmpss
I have tried a bunch of different things over the past week trying to get it to stop showing a "Too Many Parameters" error in the GUI without success. I'm completely lost at this point. We have 49 other items working correctly (all others are passive). Active checks are enabled in zabbix_agentd.conf.
I know this is an old thread but it took me a while to solve this problem, so I'd like to share and hope it helps...
According to the official Zabbix documentation, the parameter usage for the log (and logrt) keys should be:
logrt[file_regexp,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
So, if we were to use only the "skip" parameter, the item key should look like:
logrt[MyLogFile.log,,,,skip,,]
Nevertheless, it triggers the error "too many parameters".
In fact, to solve this issue I configured this key in my environment with only one comma after the parameter, like this:
logrt["MyLogFile.log","MyFilter",,,skip,]
That's it... hope it helps someone else.
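If you also want to verify that the regular expression and the \1 output parameter extract the millisecond value you expect, a quick local test outside Zabbix can help. This is only a sketch in Python with a made-up log line:

import re

line = "2018-10-03 13:29:20.460+0000 WARN Application threads blocked for 312ms"
match = re.search(r"Application threads blocked for (\d+)ms", line)
if match:
    blocked_ms = int(match.group(1))       # the value \1 would output
    print(blocked_ms, blocked_ms > 150)    # e.g. the trigger condition > 150ms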

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code into the new 4.2 framework. I am facing a few difficulties:
How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
Where should I implement PostDedupProcessor: context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list or just reject the tuples? Here I am also updating columns for a few tuples.
My file is not moving into the archive. The temporary output file is getting generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it is stuck. I printed the TableRowGenerator stream in the logs (at the end of DataProcessor).
1. and 2.:
You need to select the type of deduplication. There is not a big difference whether you choose "table-" or "cdr-level-deduplication".
The ite.businessLogic.transformation.outputType setting affects this. There is only one dedup; you cannot have both.
Select recordStream for "cdr-level-deduplication" and do the transformation to table row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or the statistic tuple
An error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. The debug sinks also write punctuation markers to the debug files.

Migrating from graphite to graph-explorer

The graphite-webapp does not encourage ad-hoc graphing. Graphiti et al. are just fancy UIs that, while they improve the UI/UX, do not do much about the inherent linear metric search that plagues the graphite-webapp. Correct me if I am wrong here, but the only option I have come across that encourages ad-hoc graphing is Graph-Explorer. So I am assuming that Graph-Explorer is the way ahead.
I currently have some 1000 distinct metrics, named in the following fashion:
stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.total
stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.failed
stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.delivered
stats.dev.ganglia.ip-10-0-3-40.ink.web.pi.notification.android.total
stats.dev.ganglia.ip-10-0-3-40.ink.web.pi.notification.android.failed
stats.dev.ganglia.ip-10-0-3-40.ink.web.pi.notification.android.delivered
I understand that these will become:
metric=stats.env=dev.role=ganglia.server=ip-10-0-3-40.application=ink.endpoint=web.src=pi.metric=notification.what=total
Where do I insert unit and target_type tags?
Similarly, I have 500 timers.
How do I go about migrating from 'proto1' to 'proto2'?
Also where exactly does Carbon-Tagger come into the stack?
Do I rename my metrics at the source level?
Do I modify the structured_metrics/plugins/statsd.py file as we have fixed hierarchy across our distributed infrastructure?
Is there anything I am missing?
What will I have to change in my statsd? I quote the carbon-tagger documentation: "aggregators like statsd will need proto2 support."
The structured metrics plugins will set the tags for proto1 ("old style") metrics; see https://github.com/vimeo/graph-explorer/wiki/Structured-Metrics
If you want to stick to proto1, you just have to create a plugin to tag your metrics; see https://github.com/vimeo/graph-explorer/wiki/Structured-Metrics#writing-your-own-plugins and the existing plugins for examples.
You can basically ignore carbon-tagger if you want to stick with proto1, so 3 is not needed, but otherwise yes. The statsd plugin just converts statsd's internal metrics to proto2.
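To make the proto1-to-proto2 mapping concrete, here is a rough sketch (plain Python, not the actual graph-explorer plugin API; see the wiki pages linked above for that) of turning one of the fixed-hierarchy proto1 names from the question into proto2-style tags:

# Rough sketch: map a fixed-position proto1 metric name to proto2-style tags.
# The tag names follow the example mapping given in the question; adjust them
# to your own hierarchy (the unit and target_type tags the question asks about
# are not handled here).
def to_proto2(name):
    keys = ["metric", "env", "role", "server", "application",
            "endpoint", "src", "metric", "what"]
    return ".".join("%s=%s" % (k, v) for k, v in zip(keys, name.split(".")))

print(to_proto2("stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.total"))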