Suppose I have a discovery rule that:
Gets model number via SNMP
Gets system.uname via the agent
Suppose I now want to create an action with the following condition:
Received value contains X456
AND
Received value contains Linux
Will that work? It seems that Zabbix may compare the first received value (say, the one from the SNMP check) with Linux, which will not match, and then the whole condition will fail?
I see this in the documentation: https://www.zabbix.com/documentation...ion/conditions
Service checks in a discovery rule, which result in discovery events,
do not take place simultaneously. Therefore, if multiple values are
configured for Service type, Service port or Received value conditions
in the action, they will be compared to one discovery event at a time,
but not to several events simultaneously. As a result, actions with
multiple values for the same check types may not be executed
correctly.
Is there a reliable way to do the above two conditions?
After analyzing the debug output of the Discoverer process, I now understand that the above will not work. That is because the discovery events (agent and SNMP in this case) fire one by one -- not at the same time. As a result, only one of those conditions will be true at a time, i.e. the received value will be either X456 or Linux.
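To make the behaviour concrete, here is a minimal sketch in Python (not Zabbix code; the received values are made up) of why an AND of two "Received value" conditions can never match when each discovery event is evaluated on its own:

```python
# Minimal illustration: each discovery check produces its own event, and the
# action conditions are evaluated against one event at a time.
discovery_events = [
    {"check": "SNMP", "received_value": "X456"},
    {"check": "Zabbix agent", "received_value": "Linux myhost 5.10 ..."},
]

def matches(event):
    # "Received value contains X456" AND "Received value contains Linux"
    value = event["received_value"]
    return "X456" in value and "Linux" in value

# Neither single event satisfies both substrings, so the action never fires.
for event in discovery_events:
    print(event["check"], matches(event))   # SNMP False, Zabbix agent False
```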
Is it possible to materialize the edits made as part of a scenario into a dataset in Foundry?
For each scenario, I want to write out the primary keys of the objects edited as part of that scenario.
The motivation is that I need to run multiple processes to compute metrics over the changed values for each scenario, at a scale and runtime that are not possible with Functions.
Edit with details:
The thing is that I am not making actual edits to the objects of the object type; I don't want to apply the scenario.
I tested out the "Action Log" and it does not seem to pick up "uncommitted" actions, meaning actions that are just run as part of a scenario. Also, there does not seem to be a link to the scenario an action was part of, even when the changes were committed.
The workflow is that I have Object Type A, and I define multiple scenarios S on a subset of the objects in A.
Each scenario might make something like 50k edits to a subset of A, through multiple Actions backed by a Function.
I save some of the scenarios. Now I am able to load these scenarios and "apply" them on A again in Workshop.
However, I need to get all the primary keys and edited values of A materialized into a dataset (for each scenario), because I need to run some transformation logic to compute a metric for the changes in each scenario (at a scale and execution time not possible in Functions).
The Action Log did not seem to help much here. How do I get the "edits" that are part of a saved scenario into a dataset?
The only logic you can run BEFORE applying a scenario is Functions.
Not sure about your exact logic, but Functions' Custom Aggregations can be very powerful: Docs here
This might not directly let you calculate the diff, but you could use the scenario comparison widgets in Workshop to compare your aggregation across multiple scenarios.
e.g. you have a function that sums(total profit)
Your Workshop could show:
Current Data:
$10k
Scenario A:
$5k
Scenario B:
$13k
instead of something like:
Scenario A:
-$5k
Scenario B:
+$3k
AFAIK there's no first-class way of doing this (yet).
"Applying" a scenario basically means you're submitting the actions queued on the scenario to the ontology. So neither the actions nor the ontology are aware that they came from a scenario.
What I've been doing to achieve what you're describing is using the "Action Log". It's still in Beta, so you might need to ask for it to be enabled. It lets you define, for each action, a "log" object to be created that tracks the primary keys of your edited objects per Action.
How I do the action log is:
My Action log has the "timestamp" of the action when they were run.
My Scenarios have the "timestamp" of when it was applied.
Since "Applying a Scenario" means -> actually running all actions on Ontology (underlying data) this gets me this structure if I sort everything by timestamp:
Action 1
Action 2
Scenario A applied
Action 3
Action 4
Scenario B applied
This allows you to do a mapping later on: Actions 1 and 2 must come from Scenario A, and Actions 3 and 4 come from Scenario B.
EDIT: Apparently you might be able to use the Scenario RID directly in the Action Logs too (a recent addition I haven't adopted yet).
This still won't allow you to compute anything (in transforms...) BEFORE applying a scenario, though.
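To turn that timestamp ordering into a dataset, a transform along the lines of the sketch below could work. This is only a sketch under assumptions: the dataset paths, the column names (action_rid, action_timestamp, edited_pk, scenario_name, applied_timestamp) and the exact schemas of the Action Log and of your scenario log are placeholders you would need to adapt.

```python
# Sketch of the timestamp mapping described above as a Foundry Python transform.
# All dataset paths and column names are placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/project/derived/scenario_edits"),
    action_log=Input("/project/ontology/action_log"),
    scenarios=Input("/project/ontology/applied_scenarios"),
)
def compute(action_log, scenarios):
    # For every action, find the earliest scenario applied at or after the
    # action's timestamp: by the ordering above, that is the scenario the
    # action belonged to. Actions with no later scenario drop out here.
    joined = action_log.join(
        scenarios,
        action_log["action_timestamp"] <= scenarios["applied_timestamp"],
        how="left",
    )
    earliest = (
        joined.groupBy("action_rid")
        .agg(F.min("applied_timestamp").alias("applied_timestamp"))
    )
    return (
        joined.join(earliest, ["action_rid", "applied_timestamp"])
        .select("scenario_name", "action_rid", "edited_pk", "action_timestamp")
    )
```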
I'm using Zabbix 3.2; I've configured a mail alert action for all triggers. My question is:
Say trigger A alerts (problem event) on escalation and returns to normal (OK event) after a few minutes. I need to stop the alert if the same trigger A fires again within the next few minutes. How can that be done?
I've tried following this documentation:
https://www.zabbix.com/documentation/3.2/manual/config/notifications/action/escalations
The question seems to be about preventing trigger flapping. In general, three methods are suggested:
use trigger functions - for example, instead of last() use avg(15m) - then the alerting will happen only after the average value for 15 minutes has exceeded the threshold. Other useful trigger functions might be min() and max()
use hysteresis - this makes trigger fire at one threshold but resolve on another. Before Zabbix 3.2 that was done in the trigger expression; since Zabbix 3.2 there is a separate "recovery" field
use action escalations that do nothing at first, and only send an alert when the problem has been there for some period of time - for example, sending out the alert on the second or third step
All three methods achieve a similar outcome, but the key differences are:
the first method - trigger functions - makes the trigger fire later, but reduces the number of events (the times trigger fires)
the second method - hysteresis - makes the trigger fire at the same time as the "flappy" trigger would, but delays the recovery event. It also reduces the number of events (the times the trigger fires) - see the sketch below
the third method - delayed escalation steps - does not affect the trigger at all, it can keep on flapping. It will only alert if the problem is there for a longer time, though.
Hysteresis will usually alert when a trigger would have flapped; delayed escalation steps will ignore short-lived problems.
Complexity-wise, I'd usually go with the first method - it is the easiest to configure, the hardest to misconfigure and the easiest to understand. I'd go with one of the two other methods if I specifically needed the way they make events/alerts behave - those methods have a bit higher potential to be misconfigured or misunderstood.
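For what it's worth, here is a minimal, Zabbix-agnostic Python sketch (thresholds and values are made up) of why hysteresis reduces the number of events compared to a single threshold:

```python
# Generic illustration (not Zabbix syntax): the problem starts above a high
# threshold and only recovers below a lower one, so small dips around the
# threshold do not generate extra problem/OK pairs.
HIGH, LOW = 90, 70                   # fire above 90, recover only below 70
values = [95, 88, 92, 86, 91, 60]    # a "flappy" series around the threshold

problem = False
events = []
for v in values:
    if not problem and v > HIGH:
        problem = True
        events.append(("PROBLEM", v))
    elif problem and v < LOW:
        problem = False
        events.append(("OK", v))

print(events)   # [('PROBLEM', 95), ('OK', 60)] - one problem/recovery pair
# With a single threshold of 90, the same series would have produced three
# problem events and three recoveries.
```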
Note that the item key reference in the comment is wrong - host is separated from key with colon, full key name is missing and the parameter is wrong. See the agent key page in the manual for correct key syntax.
In my current project I need to check whether a process instance already exists. I should be able to check this against a value which is stored in the pipeline.
In WmMonitor I tried a few services, but with none of them was I able to find the existing process.
Do I need to mark the pipeline variable as a logged field to be able to find the process instance?
Maybe someone can share an example?
Regards, Dirk
Yes, you have to mark the field as a 'logged field' in order to be able to search for process instances with a certain value in this field. Use the services from the WmMonitor package.
This is too long, so I posted an answer instead of a comment.
As I understand it, you have some data; for simplicity, assume that just one string represents the data. Let's say this is in a DB table. You should have a flag in the same table - processed, with values true/false (in a DB typically 0/1) - and there should be a scheduler creating processes for unprocessed data only.
So the pseudo code would be:
retrieve unprocessed data
for each record try to create process
if creation was successful, mark the data as processed and continue
With this approach you do not need to check whether a process was already started...
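For illustration only, here is a rough Python sketch of that scheduler pattern (table name, columns and start_process() are placeholders, not WmMonitor services):

```python
# Generic sketch of the "process only unprocessed rows" scheduler pattern.
import sqlite3

def start_process(payload):
    """Placeholder for whatever actually starts the process instance."""
    print("starting process for", payload)
    return True   # pretend the start succeeded

def run_scheduler(conn):
    cur = conn.cursor()
    # 1. retrieve unprocessed data
    cur.execute("SELECT id, payload FROM work_items WHERE processed = 0")
    for row_id, payload in cur.fetchall():
        # 2. for each record, try to create the process
        if start_process(payload):
            # 3. on success, mark the record as processed and continue
            cur.execute("UPDATE work_items SET processed = 1 WHERE id = ?", (row_id,))
            conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE work_items (id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER)")
    conn.execute("INSERT INTO work_items (payload, processed) VALUES ('order-42', 0)")
    run_scheduler(conn)
```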
Feel free to comment if something is unclear ;-)
Hi All,
I have a simple Lookup transformation which finds matched and unmatched records and creates a table for each. Now I want to send an email when the package finds any unmatched records, and the package should stop right there.
Thanks
Nick
Here are a few solutions:
1. In the Lookup Transformation Editor screen you can set the Specify how to handle rows with no matching entries field to Fail component and set the job that runs the package to send a notification on job failure. The downsides of this approach are that you must run the package from a job schedule, the email that is generated is non-descriptive, and you can only define one operator for this job, which may need to be different from your normal operator list (if you are notifying business users instead of IT folks).
2. In the Lookup Transformation Editor screen you can set the component to fail as described in 1 and set up a precedence constraint on the control flow that leads to sending the email on failure. You can also set the MaximumErrorCount for the parent object to 2. Some downsides to this approach are that other errors may occur in the package and the package will still succeed, or there may be an error on the source part of the data flow and you may want to handle those kinds of errors separately.
3. In the Lookup Transformation Editor screen you can select Redirect rows to no match output, create a variable called RowCount of data type Int32, direct the Lookup No Match Output to a Row Count transformation, and send the Lookup Match Output to the normal destination. In the Control Flow, set a precedence constraint to the next step where the Evaluation operation is Expression and Constraint, Value is Success, and the expression is @[User::RowCount] == 0. Then add a Send Mail Task to the Control Flow with a precedence constraint from the previous step where Value is Success and the expression is @[User::RowCount] > 0. This will allow you to let the package succeed and send only one email. The downside of this approach is that the data flow destination will still be populated by the matched source data, and it won't immediately stop once a no-match row is detected; it will only stop once the data flow itself completes.
I hope this meets your business needs. Let me know if you need any further assistance.
I am doing an SSIS Lookup transformation, looking up in a voyages table; however, some of my records don't have voyages, so I get errors. Is there any way I can skip the lookup for those records?
To expand on unclepaul84's answer, you can configure your lookup component to perform one of three actions on a failed lookup.
Fail Component - the default, and the behavior you have now going by your question. Fails the job step (and possibly the entire package) when there is no match for a row in a lookup attempt.
Ignore Failure - doesn't fail your job step; leaves a NULL in the field you brought in from the lookup (the voyage name, presumably).
Redirect Row - doesn't fail your job step; lets you direct rows with no voyage to a different processing flow for handling (e.g. if you want to put a default 'No Voyages' message in your Voyage Name field).
Alternatively, as John Saunders mentioned in his comment, you could test the VoyageID column and split your data flow into two paths depending on whether the VoyageID column is null. Since the Lookup component can handle this itself, I prefer using a single lookup rather than a conditional split followed by a lookup on one of the paths.
You could tell the lookup component to ignore lookup failures.