It seems that I cannot trigger an event in OpenNMS using a threshold...
First, the facts (with as much detail as I can give):
I want to monitor an HTML file, or more precisely, its content.
If a value is not what I expect, OpenNMS should notify me.
My HTML file contains:
Document Count: 5
In /var/lib/opennms/rrd/snmp/NODE there are two files named "documentCount" (.jrb & .meta),
created because of the http-datacollection-config.xml.
My log files contain:
INFO [LegacyScheduler-Thread-2-of-50] RrdUtils: updateRRD: updating RRD file /var/lib/opennms/rrd/snmp/21/documentCount.jrb with values '1385031023:5'
So the "5" is collected correctly.
Now I have created a threshold for this case:
<threshold type="high" ds-type="node"
value="4.0" rearm="2.0" trigger="1" triggeredUEI="uei.opennms.org/threshold/highThresholdExceeded"
filterOperator="or" ds-name="documentCount"
/>
In my collectd-configuration.xml, thresholding is also enabled.
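The relevant service entry looks something like this (a sketch of the shape, not a copy of my exact file; the collection name and interval are placeholders):
<service name="HTTP" interval="300000" user-defined="false" status="on">
    <parameter key="http-collection" value="default"/>
    <parameter key="thresholding-enabled" value="true"/>
</service>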
In my opinion the threshold of 4 is exceeded, because the value is 5, so the highThresholdExceeded event should be fired. BUT IT DOESN'T.
So I'm here to ask if someone has an idea.
Regards, dawn
Check collectd.log with the following:
tail -f collectd.log | grep -i thresholding
A while back, threshold checking was changed to evaluate the data as it is being collected, rather than as a post-processing step over the RRD files.
Even with the log level at INFO, you should find some clues as to why the threshold rule is not matching any data.
I'm relatively new to using Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I edit and test my code, I find it annoying to wait 30 seconds to see whether my updates work. Is there a way to run the code once to load the data I need, and then set up an artificial starting point, so that when I rerun the code (or type something into the command window) only the desired section (the part after the time-consuming one) runs, leaving the loaded data intact?
You may store the variable you want to keep in a global variable,
and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. Fortunately, it does not close the session: that job is still left to quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to recompute all your values each time. But you can also start Octave and then run your script from the interactive terminal. At the end of the script, the workspace will contain all the variables the script used. You can then type individual statements at the interactive prompt, which use and modify these variables, just like running the script one line at a time.
You can also set breakpoints. Set a breakpoint at any point in your script, then run the script. It will run until the breakpoint, at which point the interactive terminal becomes active and you can work with the variables as they are at that moment.
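For example, assuming your script lives in myscript.m and the slow part ends around line 25 (both the file name and the line number are placeholders), a session might look like this:
dbstop("myscript", 25)   % stop before line 25 of myscript.m
myscript                 % runs up to the breakpoint, then drops to the debug> prompt
% ...inspect or change variables at the debug> prompt...
dbcont                   % resume execution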
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
% Section 1
% ... do some computations here
save my_data
else
load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 runs and the results are saved to a file. Now change the 1 to 0 and run the script again. This time, Section 1 is skipped and the previously saved variables are loaded instead.
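If you'd rather not flip the 1 to 0 by hand, a variant of the same idea checks for the saved file automatically; a small sketch, with my_data as a placeholder name:
if exist("my_data.mat", "file")
  load my_data          % reuse the expensive results from the last run
else
  % ... do the slow computations here ...
  save my_data          % cache them for the next run
end
% Section 2
% ... do some more computations here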
I am trying to load multiple CSV files into a new DB using the neo4j-admin import tool on a machine running Debian 11. To try to ensure there are no collisions in the ID fields, I've given every one of my node and relationship files an ID space.
However, I'm getting this error:
org.neo4j.internal.batchimport.input.HeaderException: Group 'INVS' not found. Available groups are: [CUST]
This is super frustrating, as I know that the INVS group definitely exists. I've checked every file that uses that ID space, and they all include it. Another strange thing is that there are more ID spaces than just CUST and INVS. It feels like it's trying to load relationships before it finishes loading all of the nodes, for some reason.
Here is what I see when I search through my input files:
$ grep -r -h "(INV" ./import | sort | uniq
:ID(INVS),total,:LABEL
:START_ID(INVS),:END_ID(CUST),:TYPE
:START_ID(INVS),:END_ID(ITEM),:TYPE
The top line is from my $NEO4J_HOME/import/nodes folder; the other two are in my $NEO4J_HOME/import/relationships folder.
Is there a nice solution to this? Or have I just stumbled upon a bug here?
Edit: here's the command I've been using from within my $NEO4J_HOME directory:
neo4j-admin import --force=true --high-io=true --skip-duplicate-nodes --nodes=import/nodes/\.* --relationships=import/relationships/\.*
Indeed, such a thing would be great, but I don't think it's possible at the moment.
Anyway, it doesn't seem to be a bug.
I suppose it is intended behavior and/or a feature not yet implemented.
In fact, the documentation regarding regular expressions says:
Assume that you want to include a header and then multiple files that matches a pattern, e.g. containing numbers.
In this case a regular expression can be used
while the description of the --nodes option says:
Node CSV header and data. Multiple files will be
logically seen as one big file from the
perspective of the importer. The first line must
contain the header. Multiple data sources like
these can be specified in one import, where each
data source has its own header.
So it appears that neo4j-admin import treats --nodes=import/nodes/\.* as a single CSV file with the first header found, hence the error.
Conversely, with multiple --nodes options there are no problems, as sketched below.
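For example, splitting the import into one option per file (the file names here are hypothetical) gives every data source its own header, so each ID space is registered:
neo4j-admin import --force=true --high-io=true --skip-duplicate-nodes \
    --nodes=import/nodes/customers.csv \
    --nodes=import/nodes/invoices.csv \
    --nodes=import/nodes/items.csv \
    --relationships=import/relationships/invoice_customer.csv \
    --relationships=import/relationships/invoice_item.csv
This way the CUST, INVS, and ITEM groups are all known before the relationship files that reference them are processed.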
I thought I had covered both cases for the while loop, but the branch coverage report from genhtml doesn't show 100% because of a "#" marker next to the branch.
From the genhtml man page:
--branch-coverage
--no-branch-coverage
Specify whether to display branch coverage data in HTML output.
Use --branch-coverage to enable branch coverage display or
--no-branch-coverage to disable it. Branch coverage data display
is enabled by default
When branch coverage display is enabled, each overview page will
contain the number of branches found and hit per file or directory,
together with the resulting coverage rate. In addition,
each source code view will contain an extra column which lists
all branches of a line with indications of whether the branch
was taken or not. Branches are shown in the following format:
' + ': Branch was taken at least once
' - ': Branch was not taken
' # ': The basic block containing the branch was never executed
Note that it might not always be possible to relate branches to
the corresponding source code statements: during compilation,
GCC might shuffle branches around or eliminate some of them to
generate better code.
This option can also be configured permanently using the
configuration file option genhtml_branch_coverage.
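So the '#' does not mean one direction of the branch was missed; it means the basic block containing the branch was never executed at all (often a block GCC generated that your tests never reach). For reference, a typical branch-coverage run looks roughly like this; the file names are illustrative, and the --rc lcov_branch_coverage=1 switch is the lcov 1.x spelling:
gcc --coverage -O0 -o demo demo.c
./demo
lcov --capture --directory . --rc lcov_branch_coverage=1 --output-file coverage.info
genhtml --branch-coverage --output-directory out coverage.info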
I have an AMPL script that involves calling "solve" on a linear program many times. The solver I'm using is MINOS. After every solve, it outputs:
MINOS 5.51:
"option abs_boundtol 2.220446049250313e-16;" or "option
rel_boundtol 2.220446049250313e-16;" will change deduced dual values.
Is there a way to suppress this message?
I read this in the MINOS instructions:
For invocations from AMPL's solve command or of the form
minos stub ...
(where stub.nl is from AMPL's -ob or -og output options), you can use
outlev= to control the amount and kind of output:
outlev=0 no chatter on stdout
outlev=1 only report options on stdout
outlev=2 summary file on stdout
outlev=3 log file on stdout, no solution
outlev=4 log file, including solution, on stdout
which might be relevant, but I don't understand it.
I have included "option solver_msg 0;" in my script; it turns off the announcement from MINOS that it found such-and-such an optimal value in so many iterations, but it doesn't affect the message I'm asking about here.
You can redirect the remaining solver output to /dev/null (or the equivalent on your system) as follows:
solve > /dev/null;
As for the message about abs_boundtol and rel_boundtol, I think you can set them to a small positive value larger than 2.220446049250313e-16 to make the message go away. Note that this will affect the dual values computed for presolved constraints.
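For example (the exact tolerance is your choice, as long as it is a small positive value above the 2.220446049250313e-16 quoted in the message):
option abs_boundtol 1e-15;   # any value above machine epsilon silences the message
solve > /dev/null;           # discard the remaining solver chatter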
See also https://groups.google.com/d/msg/ampl/ERJ8nF_LnNU/75yWK9deBjUJ
for me "option show_boundtol 0;" worked. You can try this. By default it is "option show_boundtol 1;".
You can read about it here (http://ftp.icm.edu.pl/packages/netlib/ampl/changes)
I wanted to understand the example line of code given at perldoc.perl.org for getlogin:
$login = getlogin || getpwuid($<) || "Kilroy";
It seems like it tries to get the user name from getlogin or getpwuid, and if both fail, uses "Kilroy" instead. I might be wrong, so please correct me. Also, I've been using getlogin() in previous scripts; is there any difference between getlogin() and getlogin?
What is this code safeguarding against? Also, what purpose does $< serve? I'm not exactly sure what to search for when looking up what $< is and what it does.
EDIT
I found this in the special variables section, but I still don't know why it is needed or what it does in the example above:
$<
The real uid of this process.
(Mnemonic: it's the uid you came from,
if you're running setuid.) You can
change both the real uid and the
effective uid at the same time by
using POSIX::setuid(). Since changes
to $< require a system call, check $!
after a change attempt to detect any
possible errors.
EDIT x2
Is this line comparable to the above example? (It is currently what I use to avoid any potential problems with cron executing a script. I've never actually run into this problem, but I am trying to avoid any theoretical one.)
my $username = getlogin(); if(!($username)){$username = 'jsmith';}
You're exactly right. If getlogin returns false, it will try getpwuid($<); if that also returns false, it will set $login to "Kilroy".
$< is the real UID of the process. Even if you're running in a setuid environment, it will still contain the original UID the process was started with.
Edit to match your edit :)
getpwuid returns the user's name for a given UID (in scalar context, which is the case here). You would want $< as the argument in case the program switched UIDs at some point ($< is the original one it was started with).
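A quick sketch of the context difference, in case it helps (safe to paste at a prompt):
my $name   = getpwuid($<);   # scalar context: just the user name
my @fields = getpwuid($<);   # list context: (name, passwd, uid, gid, ...)
printf("real uid %d belongs to %s\n", $<, $name);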
The only thing it's guarding against is the fact that on some systems, in some circumstances, getlogin can fail to return anything useful. In particular, getlogin only does anything useful when the process it's in has a "controlling terminal", which non-interactive processes may not. See, e.g., http://www.perlmonks.org/?node_id=663562.
I think the fallback of "Kilroy" is just for fun, though in principle getpwuid can fail to return anything useful too. (You can have a user ID that doesn't have an entry in the password database.)