How to identify the cause of incorrect data in a calculated item? - Zabbix

I have two items in Zabbix version 5. The first item is a trapper; it is updated every minute and its value is always ascending. The second item is calculated from the first, and its formula is: (abschange("KeyName"))/60
For example:
if the values of the first item are 5, 8, 16, 34, 68, 71, 93, 102, ...
then the second item should return 3, 8, 18, 34, ..., each divided by 60.
What I get is completely different. I checked Latest data: the first item is always correct and ascending, but the second item does not seem to calculate the absolute change. How can I find the cause of this problem?
Here is an example of what I get:
First item values: 148822441, 148963574, 149106618, 149555694, ...
Second item values: 63498138, 2384, 63498138, 2384, ...
I should mention that I had these exact items in Zabbix 3 and they worked correctly; the problem appeared only after migrating to Zabbix 5.
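For what it's worth, a minimal sketch of an equivalent formula that computes the difference explicitly instead of relying on abschange() (this is only an illustration: "KeyName" stands for the trapper item's key, and the last("...",#2) form for "second most recent value" should be double-checked against the Zabbix 5.0 calculated-item documentation):

(last("KeyName") - last("KeyName",#2)) / 60

Comparing the output of such a formula with the trapper item's history in Latest data is one way to narrow down whether abschange() itself is what misbehaves after the upgrade.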

Related

How to get the Zabbix status flag for a "calculated" field item?

How do I get the value of the flag "The item is not discovered anymore and will be deleted" for an LLD item?
I need it to replace the item value with "0".
That 'flag' is not item data, so it cannot be used in a calculation. You may as well just delete the item manually. The underlying issue (why has the item disappeared?) may be what deserves the focus.

SSRS Matrix Last Column Item For Each Row

I need to find the last column item for each row in a matrix with multiple levels.
How could this be achieved?
If I use =Last(Fields!XXX.Value), I get the last item for everything.
If I use =Last(Fields!XXX.Value, "Level1"), I get the last item for Level1 only.
I need something like =Last(Fields!XXX.Value, "All Levels in Row")

Zabbix: Calculated items from calculated items

I have a problem creating calculated items from other calculated items.
For example, I have an item inside a host that is calculated from two different existing SNMP items.
When I create, on a different host, another calculated item that tries to perform the same operation using two calculated items from a different host, the item goes to "not supported" with this message:
Cannot evaluate function [avg(0)]: item [SMS_Reference:last("Host1:CalculatedItem"] not found
My formula:
last("Host_1:Calculated_item_in_Host_1")
In fact, Host_1:Calculated_item_in_Host_1 is another calculated item inside another host, and it works perfectly.
Does anyone know how I can fix this problem?

Accurate pagination by datetime field

I have a database table, for example 'items'. I have a timeline of these items, sorted by the field ascended_at (datetime), and I need to build a pagination API for this timeline. My first version was:
HTTP GET /items/timeline?page=[PAGE_NUM]
which fires
SELECT * FROM items ORDER BY ascended_at LIMIT 10 OFFSET [0, 10, 20, ...];
But here is the problem: when a new item arrives, every page shifts by one item. To avoid this, I added a from_asc_at parameter:
HTTP GET /items/timeline?page=[PAGE_NUM]&from_asc_at=123123123
which fires
SELECT * FROM items WHERE ascended_at <= [asc_at_parameter] ORDER BY ascended_at LIMIT 10 OFFSET [0, 10, 20, ...];
But this is not accurate either: two items can have the same ascended_at, so the same item can appear on two different pages (and it should not).
So, my question is: what are the possible solutions for this?
Use the ID (because it is unique)? But what if the timeline is not ordered by ID?
Any other ideas?
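Regarding the "use the ID" idea: one common approach is keyset (seek) pagination, where the unique ID is only a tie-breaker inside the ascended_at ordering, so the timeline does not need to be ordered by ID overall. A rough sketch, assuming an items(id, ascended_at) table and PostgreSQL/MySQL-style row-value comparison (the :last_ascended_at and :last_id placeholders are hypothetical):

-- Remember the (ascended_at, id) of the last row of the current page,
-- then fetch the rows that come strictly after it in that combined order.
SELECT *
FROM items
WHERE (ascended_at, id) > (:last_ascended_at, :last_id)
ORDER BY ascended_at, id
LIMIT 10;
-- Flip the comparison and the ORDER BY direction if the timeline is paged
-- from newest to oldest.

This avoids OFFSET entirely, so newly inserted rows cannot shift the existing pages.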
If your item IDs are auto-incremented, you could check what the next auto-increment value will be when retrieving items the first time (before pagination).
Store that value persistently (maybe in a session variable) until the next search, and add a < {maximumID} filter to your SQL query to improve the stability of the result set while the user paginates (all new items created between the initial search and subsequent page requests won't be retrieved).
EDIT
To handle item deletions, you will have to use "soft deletes": do not delete an item from the DB immediately, but store a deletion date in a datetime field, so that items still exist in the DB for a while.
When a new search is issued, store the current server time in the session and add a criterion (for example date_deleted IS NULL OR date_deleted > {searchDate}), so that items deleted after a search are still displayed for that specific search.
You will have to create a scheduled job to really delete items from the DB after some delay.
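Putting the answer above into one query, a rough sketch (the :max_id, :search_time, and :offset placeholders and the date_deleted column follow the naming used above; adapt to your schema):

SELECT *
FROM items
WHERE id < :max_id                                           -- hide items created after the initial search
  AND (date_deleted IS NULL OR date_deleted > :search_time)  -- items soft-deleted later stay visible
ORDER BY ascended_at
LIMIT 10 OFFSET :offset;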

RunningValue not working to give Cumulative Stacked Chart

I have a dataset that is showing the correct data, but putting it into a stacked bar chart and using the RunningValue function to try and plot it cumulatively is giving numbers that start way higher than they should.
My data is aggregated at the database, giving a dataset of:
Date of data
Count
Sum of Value
Filter Item 1
Time Since Date
Stacking Category
5 other fields
I am plotting with Time Since Date along the X axis, Stacking Category as my Series field (there are 4 possible options), and my Y value uses this expression:
=RunningValue(IIF(Parameters!VolumeOrValue.Value="Volume",
Fields!Count.Value,
Fields!SumValue.Value),Sum,Nothing)
This should show me, in the first X bar, only one of the series, with a count of 1 or a value of 100. Instead I get three of the series, with counts summed up to 2500, which is more than the total sum of all of the Count fields.
Can anyone point me to where my problem is?
Edit: Setting the CategoryField in the Series Properties dialog to match the Category that is set for the chart means that each bar increases by the right amount, but each stacked slice starts at the size of the entire value of the last bar. I need to get the reset to work properly, but I can't set any "Groupings" as normally recommended, and choosing any field name or Series name causes an error.
I managed to get it working quite simply...
Clicking the field on the right-hand side of the chart, in the Drop Series Fields Here section, opens Group properties, which when expanded shows the name of the grouping. Plugging this name into the RunningValue function as the last argument got it working properly (after removing the CategoryField setting).
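For reference, the resulting expression looks roughly like this (the scope name "Chart1_SeriesGroup" is only a placeholder; use the actual grouping name shown under Group properties as described above):

=RunningValue(IIF(Parameters!VolumeOrValue.Value = "Volume",
                  Fields!Count.Value,
                  Fields!SumValue.Value),
              Sum,
              "Chart1_SeriesGroup")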