Zabbix action debugging

Is there any way to debug a Zabbix (4.0) custom script?
In the action log I see only a "timeout" message, but I would like to see exactly what went wrong. I do not have access to the Zabbix log.

Try Zabbix.Log; it is like console.log in JavaScript.
Example:
Zabbix.Log(4, '[ Custom Script Log ] Log Initiated.. ');
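For a rough idea of how that can be used, here is a minimal sketch of a script written for Zabbix's embedded JavaScript engine (the value variable and the try/catch structure are assumptions about your script, not something from the question). Keep in mind the messages go to the Zabbix server log, so this only helps if someone who can read that log checks it for you.
// Minimal sketch: wrap the script body and log each step with Zabbix.Log.
// "value" is assumed to be the parameter string Zabbix passes to the script.
try {
    Zabbix.Log(4, '[ Custom Script Log ] started, raw input: ' + value);
    var params = JSON.parse(value);
    Zabbix.Log(4, '[ Custom Script Log ] parsed parameters OK');
    // ... the actual work of the script goes here ...
    return 'OK';
} catch (error) {
    Zabbix.Log(4, '[ Custom Script Log ] failed: ' + error);
    throw error;
}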

Related

OCI SDK For DotNet Hangs When Doing Requests

I followed the example described by Oracle here: https://docs.oracle.com/en-us/iaas/tools/dot-net-examples/51.3.0/vault/ListSecrets.cs.html. When I try any of the commands, it sends a request and receives a response back with a response code of 200, but then it does nothing; it just hangs. I also only have 1 Vault and 1 Secret in that Vault. The logs stop with the line:
Setting Property Value from Header
Has anyone experienced this issue before?
I tried checking whether it was the authentication, tried googling, and tried other services like the KMSVaultService. I am expecting to get a value back when I call the ListSecrets method, and for it not to hang.
The solution was to update the dependency packages. After doing that, it started to work again.

Why does CSRF get validated when executing console command in Yii2

I'm trying to run a background process.
My idea is to execute a command in PHP which in turn runs the Yii2 console:
$result = exec('php yii controller/action param param > result.log &');
On localhost everything works great, but on the server it does not work :(
It shows me the message "Unable to verify the data sent", which should not happen since it is running from the console.
When I run the command directly from the console everything goes well, but when the command is launched at runtime it does not work.
Friends, thank you very much for your collaboration.
(This was translated from Spanish)

Tracing mysql-java-connector errors

We are using BirtActuate in our application to show reports.
Actuate -----> JDBC driver --------> MysqlDB
We are aiming to TRACE errors that appear while connecting to MySQL via JDBC.
We have followed the instructions available at http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html
and tried making a connection using the following connection string:
jdbc:mysql://192.168.0.1/TestDB?interactiveClient=true&autoReconnect=true&profileSQL=true&traceProtocol=true
As per the documentation of the logger parameter in the link mentioned above, we found that:
The name of a class that implements "com.mysql.jdbc.log.Log" that will
be used to log messages to. (default is
"com.mysql.jdbc.log.StandardLogger", which logs to STDERR)
We want to trap all errors in a file so we can send it to the support people to help us solve issues. I do not really know how to do that.
Adding &profileSQL=true&traceProtocol=true to the JDBC connection URL will cause extra traces to be logged by BirtActuate's default logger in its log directory, which on the present BirtActuate server is $BIRT_HOME/server/data/logs.
Go to the logs directory and run at the command prompt:
> grep -rl com.mysql.jdbc.exceptions .
This command should list the files in which it has found the "com.mysql.jdbc.exceptions" string.
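If the goal is to capture the driver's own messages in a file you control rather than digging through the server logs, the logger property quoted from the documentation can also be set explicitly on the URL. This is a rough sketch only: the Slf4JLogger class name below comes from Connector/J 5.1 and should be checked against your driver version, and since the default StandardLogger just writes to STDERR, redirecting the Java process's stderr to a file is the simplest alternative.
jdbc:mysql://192.168.0.1/TestDB?interactiveClient=true&autoReconnect=true&profileSQL=true&traceProtocol=true&logger=com.mysql.jdbc.log.Slf4JLogger
With that in place, whichever logging backend SLF4J is configured with (a file appender, for example) receives the driver traces.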

How to capture JSON result from Azure CLI within NodeJS script

Is there a way to capture the JSON objects from the Azure NodeJS CLI from within a NodeJS script? I could do something like exec( 'azure vm list' ) and write a promise to process the deferred stdout result, or I could hijack the process.stream.write method, but looking at the CLI code, which is quite extensive, I thought there might be a way to pass a callback to the cli function or some other option that might directly return the JSON result. I see you are using the winston logger module -- I might be familiar with this, but perhaps there is a hook there that could be used.
azure vm list does have a --json option:
C:\>azure vm list -h
help: List Azure VMs
help:
help: Usage: vm list [options]
help:
help: Options:
help: -h, --help output usage information
help: -s, --subscription <id> use the subscription id
help: -d, --dns-name <name> only show VMs for this DNS name
help: -v, --verbose use verbose output
help: --json use json output
You can get the JSON result in the callback of an exec(...) call. Would this work for you?
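As a rough sketch of that approach (it assumes the azure CLI is on the PATH and that the listing fits in exec's default output buffer):
var exec = require('child_process').exec;

exec('azure vm list --json', function (err, stdout, stderr) {
    if (err) {
        return console.error('azure vm list failed:', stderr || err);
    }
    var vms = JSON.parse(stdout); // --json makes stdout plain machine-readable JSON
    console.log('Found %d VMs', vms.length);
});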
Yes you can; check this gist: https://gist.github.com/4415326 and you'll see how to do it without using exec. You basically override the logger hanging off the CLI.
As a side note, I am about to publish a new module, azure-cli-buddy, that will make it easy to call the CLI using this technique and to receive results in JSON.

MySQL driver segfaulting under mod_perl - where to look for issue

I have a webapp that segfaults when the database is restarted and it tries to use the old connections. Running it under gdb --args apache -X leads to the following output:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1212868928 (LWP 16098)]
0xb7471c20 in mysql_send_query () from /usr/lib/libmysqlclient.so.15
I've checked that the drivers and database are all up to date (DBD::mysql 4.0008, MySQL 5.0.32-Debian_7etch6-log).
Annoyingly I can't reproduce this with a trivial script:
use DBI;
use Test::More tests => 2;
my $dbh = DBI->connect( "dbi:mysql:test", 'root' );
sub test_db {
    my ($number) = $dbh->selectrow_array("select 1 ");
    return $number;
}
is test_db, 1, "connected to db";
warn "restart db now";
getc;
is test_db, 1, "connected to db";
Which gives the following:
ok 1 - connected to db
restart db now at dbd-mysql-test.pl line 23.
DBD::mysql::db selectrow_array failed: MySQL server has gone away at dbd-mysql-test.pl line 17.
not ok 2 - connected to db
# Failed test 'connected to db'
# at dbd-mysql-test.pl line 26.
# got: undef
# expected: '1'
This behaves correctly, telling me why the request failed.
What stumps me is that it is segfaulting, which it shouldn't do. As it only appears to happen when the whole app is running (which uses DBIx::Class) it is hard to reduce it to a test case.
Where should I start to look to debug this? Has anyone else seen this?
UPDATE: further prodding showed that it being under mod_perl was a red herring. Having reduced it to a simple test script I've now posted to the DBI mailing list. Thanks for your answers.
What this probably means is that there's a difference between your mod_perl environment and the one you were testing via your script. Some things to check:
Was your mod_perl compiled with the same version of Perl?
Are the @INC paths the same for both?
Are you using threads in your mod_perl setup? I don't believe DBD::mysql is completely thread-safe.
I've seen this problem, but I'm not sure it had the same cause as yours. Are you by chance using a certain module for sending mails (forgot the name, sorry) from your application? When we had the problem in a project, after days of debugging we found that this mail module was doing strange things with open file descriptors, then forked off another process which called the console tool sendmail, which again did strange things with file descriptors. I guess one of the file descriptors it messed around with was the connection to the database, but I'm still not sure about that. The problem disappeared when we switched to another module for sending mails. Maybe it's worth a look for you too.
If you're getting a segfault, do you have a core file created? If not, check ulimit -c. If that returns 0, your system won't create core files and you'll have to change that. If you do have a core file, you can use gdb or similar tools to debug it. It's not particularly fun, but it's possible. The start of the command will look something like:
gdb /usr/bin/httpd core
There are plenty of tutorials for debugging core files scattered about the Web.
Update: Just found a reference for ensuring you get core dumps from mod_perl. That should help.
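Roughly, the steps look like this (the paths and the CoreDumpDirectory location are illustrative, not specific to your setup):
# in the shell that starts Apache, allow core files to be written
ulimit -c unlimited
# in httpd.conf, give Apache a writable place to drop cores
CoreDumpDirectory /tmp/apache-cores
# after the next segfault, load the core into gdb and get a backtrace
gdb /usr/bin/httpd /tmp/apache-cores/core
(gdb) bt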
This is a known problem in old DBD::mysql. Upgrade it (4.008 is not up to date).
There's a simple test script attached to https://rt.cpan.org/Public/Bug/Display.html?id=37027 that will trigger this bug.