Zabbix 2.2 calculated item issue

I need to aggregate power data from 8 different PDUs.
So far:
- I've created a template with an item that polls the desired value via SNMP (by OID), and I named the item: Amps Total.
- I've created 8 different hosts using the template and it works OK. I can see the Amps Total being graphed for all the devices.
- I've created a fake host to use as a "Data Centre" object.
- I've created a new template with 8 items of type "Calculated", with the formula:
last("PDU-B1-L:Amps Total")+last("PDU-B1-R:Amps Total")
(PDU-B1-L & PDU-B1-R are my PDU hostnames).
I was expecting to see the aggregated data (at least for these 2 PDUs), but nothing is shown. All the items have data type Numeric (unsigned).
These hosts are polled through a Zabbix proxy. I also tried replacing the hostname with proxy:host:key in the formula, with no luck (the host configuration shows proxy:host as the hostname).
Any clue?
Thanks!

You have to use item keys in the calculated item formula. Given that spaces are not supported in item keys, "Amps Total" is most likely the item name, not the key.
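For example, assuming the underlying SNMP item's key is something like ampsTotal (a hypothetical key - check the actual key in the item configuration on the PDU hosts), the formula would look like:
last("PDU-B1-L:ampsTotal")+last("PDU-B1-R:ampsTotal")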

Related

How can we create a block size in JMeter with CSV config? Each thread should pick up a specific set of values

I have 5 users and one Bulkuser.csv file with 4 columns.
The file has around 2000 rows of values.
I wish to create a block of 400 values for each of my 5 threads [users].
1st USER WILL USE THE 1st–400th VALUES (Values in ROW 1-400)
2nd USER WILL USE THE NEXT 400 VALUES (Values in ROW 401-800)
and so on..
How can we implement this? Is there a Beanshell PreProcessor script that, for each data read, decides which file to read based on the thread number?
As of JMeter 5.3 this functionality is not supported. The only stable option I can think of is splitting your Bulkuser.csv into 5 separate files like user1.csv, user2.csv, etc. and using the __threadNum() and __CSVRead() functions in combination to access the data, like:
${__CSVRead(user${__threadNum}.csv,0)} - reads the value from column 1 from user1.csv file for 1st thread (for 2nd thread it will be user2.csv file, etc)
${__CSVRead(user${__threadNum}.csv,1)} - reads the value from column 2
.....
${__CSVRead(user${__threadNum}.csv,next)} - proceeds to the next row
More information: How to Pick Different CSV Files at JMeter Runtime
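If it helps, here is a minimal sketch of the splitting step, assuming the Bulkuser.csv and user1.csv ... user5.csv names from above, 400 rows per user as described in the question, and no header row:

import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class SplitCsv {
    public static void main(String[] args) throws IOException {
        // Read all rows of the source file and write 400-row blocks to user1.csv .. user5.csv
        List<String> rows = Files.readAllLines(Paths.get("Bulkuser.csv"));
        int block = 400;
        for (int user = 0; user < 5; user++) {
            int from = user * block;
            int to = Math.min(from + block, rows.size());
            Files.write(Paths.get("user" + (user + 1) + ".csv"), rows.subList(from, to));
        }
    }
}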

Obtaining the Max in Business Objects

I've been stuck on this problem for a couple of days now.
The structure of the data I'm working with is that each quote has a web stage, and each client can have multiple quotes. I need to establish which quote(s) for each client have the highest web stage (web stage is a numerical field from 1-6) and remove from the data the quotes that aren't at the max stage (two or more quotes could be at the same web stage).
I need to do it this way because there is some information held at the quote level that I need to show at the client level, and if I let all the data in, my number of clients gets inflated.
Universe- or query-level solutions would be greatly appreciated.
The data structure and results I'm hoping to get look like this:
Data Structure & Results
Many thanks in advance for any help.
Tom
There are a couple of ways to do this: in the universe, via a subquery in the report, or with report variables. Here's the report variable method:
Create a new report variable named [IsMax], with this definition:
=If [Web Stage] = Max([Web Stage]) In ([Client ID]) Then 1 Else 0
Add a filter to the report where [IsMax] equals 1. (If two or more quotes share a client's maximum web stage, they all get [IsMax] = 1 and are all kept, which matches the requirement.)

JMeter loop through CSV Data Set Config - Ajax flow

I am new to JMeter and am trying to carry out the following flow:
User logs in with username and password
Page 1 is displayed with 10 invoices - the user selects ten invoices -
10 Ajax calls are executed (invoice1, invoice2, invoice3... a JSON file is generated with the invoices as the request)
Page 2 is displayed to view the invoices
User logs out
I have recorded the flow with the BlazeMeter plugin in Chrome.
The Thread Group in JMeter has the following tasks:
I have 10 users in a file called users.txt, and I am using a CSV Data Set Config to load them.
For each user I will load only 10 invoices from invoices.txt, using another CSV Data Set Config to load them.
Since I have 10 users and each user needs 10 invoices, my invoices.txt has 100 unique invoices.
Please find the CSV config for the invoices below:
The problem is that I need each user to be assigned 10 unique invoices, and those 10 invoices cannot be allocated to another user.
Any idea how I can load 10 unique invoices for each user and make sure those invoices are not assigned to another user again?
invoices.txt should contain only unique IDs before the test starts; you can share the IDs using:
A CSV Data Set Config inside the users' loop, with these attributes:
Sharing mode - All Threads - an ID won't be repeated
Recycle on EOF? - False - so you don't get an invalid ID (<EOF>)
Stop thread on EOF? - True - stop when the file of unique IDs runs out
You can also consider using the HTTP Simple Table Server instead of the 2nd CSV Data Set Config.
The HTTP Simple Table Server has a KEEP option; if you set it to FALSE, each used "invoice" is removed, which guarantees uniqueness even when you run your test in Distributed (Remote) mode.
You can install the HTTP Simple Table Server (as well as any other JMeter plugin) using the JMeter Plugins Manager.
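As a rough sketch of the Simple Table Server approach (port 9191 is the STS default, and invoices.csv is a hypothetical file placed in the STS data directory), you would load the file once and then read-and-consume one invoice per request:
http://localhost:9191/sts/INITFILE?FILENAME=invoices.csv - loads the file into the server (e.g. once, in a setUp Thread Group)
http://localhost:9191/sts/READ?READ_MODE=FIRST&KEEP=FALSE&FILENAME=invoices.csv - returns the first invoice and removes it, so it cannot be handed to another thread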

How to implement logging at the end of each job in Talend?

I am new to Talend OS.
However, I received a task:
Create File delimited .csv metadata (one for Lead and one for Opportunity).
Move files to your repository on the AWS server (the etl_process1 login).
Create two tables sfdc_leads_reporting_raw and sfdc_opp_reporting_raw.
Load the data from the files into the tables. Ensure the data types are correctly used when creating metadata schemas & tables.
I am done up to step 4.
Now the problem is:
How do I implement logging at the end of each job to report the number of leads (count of distinct IDs in the leads table) and the number of opportunities created (count of opportunity IDs) by stage (how many converted, qualified, closed won, and dead)?
Help would be appreciated.
You can get this data using global variables, in a subjob at the end of your job. Most components provide a global variable called tComponent_NB_LINE (or _NB_LINE_INSERTED for database components) that gives you the number of lines output by the component.
For instance tFileOutputDelimited_1_NB_LINE or tOracleOutput_1_NB_LINE_INSERTED.
Using these variables you can log to the console or to a file.
Here is a simple example. If you have a tOracleOutput_1 in your job you can do:
tPostJob -- OnComponentOk -- tFixedFlowInput -- Main -- tLogRow
Inside tFixedFlowInput you retrieve the variable:
(Integer)globalMap.get("tOracleOutput_1_NB_LINE_INSERTED")
If you need to log aggregated info, you can append a tAggregateRow to your output components and use tSetGlobalVar to store counts by certain criteria.
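For illustration, a minimal sketch of the logging code that could sit in a tJava inside the end subjob (the component names tMysqlOutput_1 and tMysqlOutput_2 are assumptions - use the names of your own output components):

// tJava in the end subjob; component names below are hypothetical
Integer leadCount = (Integer) globalMap.get("tMysqlOutput_1_NB_LINE_INSERTED");
Integer oppCount = (Integer) globalMap.get("tMysqlOutput_2_NB_LINE_INSERTED");
System.out.println("Leads loaded: " + leadCount);
System.out.println("Opportunities loaded: " + oppCount);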

Gerrit REST API: cannot use _sortkey to resume a query

I'm using the Gerrit REST API to query all changes whose status is "merged". My query is:
https://android-review.googlesource.com/changes/?q=status:merged&n=2
where "n=2" limits the size of query results to 2. So I got a JSON object like:
Of course there are more results. According to the REST document:
If the n query parameter is supplied and additional changes exist that match the query beyond the end, the last change object has a _more_changes: true JSON field set. Callers can resume a query with the N query parameter, supplying the last change’s _sortkey field as the value.
So I added the query parameter N with the _sortkey of the last change (100309). The new query is:
https://android-review.googlesource.com/changes/?q=status:merged&n=2&N=002e4203000187d5
With this new query, I was hoping to get the next 2 results, since I provided the _sortkey as a cursor from my previous search results.
However, it's really weird that this new query returns exactly the same results as the previous query, instead of the next 2 results as I expected. It seems like providing "N=002e4203000187d5" has no effect at all.
Does anybody know why using _sortkey to resume my query doesn't work?
I chatted with one of the developers at Google, and he confirmed that _sortkey has been removed from the newer versions of Gerrit they are running at android-review and gerrit-review. The N= parameter is no longer valid. The documentation will be updated to reflect this.
The alternative is to use &S=x to skip x results, which I tested and works well.
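For example, to get the next 2 results after the first 2 from the query above:
https://android-review.googlesource.com/changes/?q=status:merged&n=2&S=2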
sortkey is deprecated in Gerrit v2.9 -
see the (Gerrit) ReleaseNotes-2.9.txt, under REST API - Changes:
[[sortkey-deprecation]]
Results returned by the [query changes] endpoint are now paginated using offsets instead of sortkeys.
The sortkey and sortkey_prev parameters on the endpoint are deprecated.
The results are now paginated using the --limit (-n) option to limit the number of results, and the -S option to set the start point.
Queries with sortkeys are still supported against old index versions, to enable online reindexing while clients have an older JS version.
See also here -
PSA: Removing the "sortkey" field from the gerrit-on-borg query interface:
...
Our solution is to kill the sortkey field and its related search operators (sortkey_before, sortkey_after, and resume_sortkey).
There are two ways you can achieve similar functionality.
Add "&S=" to your query to skip a fixed number of results.
(Note that this redoes the search so new results may have jumped ahead and
you might process the same change twice.
This is true of the resume_sortkey implementation as well,
so your code should already be able to handle this.)
Use the before/after operators.
Instead of taking the sortkey field from the last returned change and
using it in a resume_sortkey operator, you take the updated field from
the last returned change and use it in a before operator.
(This has slightly different semantics than the sortkey field, which
uses the change number as a tiebreaker when changes have similar updated times.)
...
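As a sketch of the second approach (the date value here is purely illustrative - use the updated field of the last change you actually received), the query could look like:
https://android-review.googlesource.com/changes/?q=status:merged+before:2014-05-05&n=2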