Zabbix Trapper: Cannot get data from orabbix - zabbix

I am using Orabbix to monitor my DB. The data from the queries executed on this DB through Orabbix is sent to the Zabbix server. However, I am not able to see the data reaching Zabbix.
On my zabbix web console, I see this message on the triggers added - "Trigger expression updated. No status update so far."
Any ideas?
My update interval for the trigger is set to 30 sec.

Based on the screenshots you posted, your host is named "wfc1dev1" and you have items with keys "WFC_WFS_SYS_001" and "WFC_WFS_SYS_002". However, based on the Orabbix XML that it sends to Zabbix, the hostname and item keys are different. Here is the XML:
<req><host>V0ZDMURFVg==</host><key>V0ZDX0xFQUZfU1lTXzAwMg==</key><data>MA==</data></req>
From this, we can deduce the host:
$ echo V0ZDMURFVg== | base64 -d
WFC1DEV
The key:
$ echo V0ZDX0xFQUZfU1lTXzAwMg== | base64 -d
WFC_LEAF_SYS_002
The data:
$ echo MA== | base64 -d
0
It can be seen that neither the host name nor the item key matches those configured on the Zabbix server. Once you fix that, it should work.
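For reference, all three fields can be decoded in one pass with a small shell loop (a sketch using the sample payload from the question; the sed-based extraction assumes this exact single-line XML format):

```shell
# Decode each base64 field of the Orabbix request (sample payload from above)
xml='<req><host>V0ZDMURFVg==</host><key>V0ZDX0xFQUZfU1lTXzAwMg==</key><data>MA==</data></req>'
for field in host key data; do
  printf '%s: %s\n' "$field" \
    "$(printf '%s' "$xml" | sed -n "s:.*<$field>\(.*\)</$field>.*:\1:p" | base64 -d)"
done
# prints:
# host: WFC1DEV
# key: WFC_LEAF_SYS_002
# data: 0
```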

Related

OpenSearch Installation | securityadmin.sh | UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout

We installed OpenSearch on 4 VMs (1 coordinating node, 1 master node, and 2 data nodes) according to the documentation at https://opensearch.org/docs/latest/opensearch/cluster/
When we log in to the OpenSearch URL or query it via curl, we get the following message:
e.g.
[apm#IR-APM-DEV-MN1 config]$ curl -XGET https:// :9200/_cat/plugins?v -u 'admin:admin' --insecure
OpenSearch Security not initialized.
Based on that, and on the message "[opensearch-master] Not yet initialized (you may need to run securityadmin)", we executed the securityadmin script as follows:
./securityadmin.sh -cd ../securityconfig/ -nhnv -cacert ../../../config/root-ca.pem -cert ../../../config/kirk.pem -key ../../../config/kirk-key.pem -h -cn apm-cluster-1 -arc -diagnose
and got error messages such as the following:
Will update '_doc/config' with ../securityconfig/config.yml
FAIL: Configuration for 'config' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][config], source[n/a, actual length: [3.7kb], max length: 2kb]}] and a refresh]]
....
Can anyone suggest how to overcome these errors (the "primary shard is not active" timeout / increasing the max length)?
Thanks,
Noam
As a quick workaround, you can simply disable the security plugin:
cd /path/to/opensearch-1.2.4
sudo nano config/opensearch.yml
Add the following line:
plugins.security.disabled: true
If that is not an option, generate the certificates and follow the steps in the official documentation:
https://opensearch.org/docs/latest/opensearch/install/tar/
Thank You.
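Separately, the UnavailableShardsException means the .opendistro_security index's primary shard could not be allocated at all, which securityadmin.sh cannot fix by itself. If the REST layer responds, it may help to ask the cluster why before rerunning the script (a sketch; hostname, port, and admin credentials are assumptions matching the question's setup):

```shell
# List shard states, then ask the allocation-explain API why the
# .opendistro_security primary shard is unassigned (host/credentials assumed).
curl -k -u 'admin:admin' 'https://localhost:9200/_cat/shards?v'
curl -k -u 'admin:admin' -H 'Content-Type: application/json' \
  -d '{"index": ".opendistro_security", "shard": 0, "primary": true}' \
  'https://localhost:9200/_cluster/allocation/explain?pretty'
```

The explain output usually names the concrete cause, e.g. data nodes that never joined the cluster or disk-watermark limits.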

jq get all values in a tabbed format

I'm trying to convert lines of JSON into tab-separated data:
{"level":"INFO", "logger":"db", "msg":"connection successful"}
{"level":"INFO", "logger":"server", "msg":"server started"}
{"level":"INFO", "logger":"server", "msg":"listening on port :4000"}
{"level":"INFO", "logger":"server", "msg":"stopping s ervices ..."}
{"level":"INFO", "logger":"server", "msg":"exiting..."}
to something like this:
INFO db connection successful
INFO server server started
INFO server listening on port 4000
DEBUG server stopping s ervices ...
INFO server exiting...
I've tried this jq -r ' . | to_entries[] | "\(.value)"', but this prints each value on a separate line.
Assuming the keys are always in the same order, you could get away with:
jq -r '[.[]]|@tsv'
In any case, it would be preferable to use @tsv, since it escapes embedded tabs and newlines in the values.
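If you'd rather not rely on key order, you can list the keys explicitly (a sketch using the three field names from the sample input):

```shell
# Explicit keys, independent of their order within each JSON object
jq -r '[.level, .logger, .msg] | @tsv' <<'EOF'
{"level":"INFO", "logger":"db", "msg":"connection successful"}
{"msg":"server started", "level":"INFO", "logger":"server"}
EOF
```

This prints one tab-separated line per input object even when the keys appear in a different order.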

How to capture the BitTorrent info_hash ID on a network using tcpdump or any other open source tool?

I am working on a project where we need to collect the BitTorrent info_hash IDs seen in our small ISP network. Using port mirroring, we pass all WAN traffic to a server and run tcpdump or another tool to find the info_hash of torrents downloaded by BitTorrent clients. For example:
tcpflow -p -c -i eth1 tcp | grep -oE '(GET) .* HTTP/1.[01].*'
This command shows results like this:
GET /announce?info_hash=N%a1%94%17%2c%11%aa%90%9c%0a%1a0%9d%b2%cfy%08A%03%16&peer_id=-BT7950-%f1%a2%d8%8fO%d7%f9%bc%f1%28%15%26&port=19211&uploaded=55918592&downloaded=0&left=0&corrupt=0&key=21594C0B&numwant=200&compact=1&no_peer_id=1 HTTP/1.1
Now we need to capture only the info_hash and store it in a log file or a MySQL database.
Can you please tell me which tool can do something like this?
Depending on how rigorous you want to be, you'll have to decode the following protocol layers:
TCP: reassemble the packets of a flow. You're already doing that with tcpflow; tshark, Wireshark's CLI, could do it too.
HTTP: extract the request line of the GET request. A simple regex does the job here.
URI: extract the query string.
application/x-www-form-urlencoded: extract the info_hash key-value pair and handle its percent-encoding.
For the last two steps I would look for tools or libraries in your programming language of choice to handle them.
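The middle steps can also be sketched in plain shell on top of the tcpflow pipeline you already have (assuming tracker announces over plain HTTP as in your capture; the log path is an assumption, and the percent-decoding is left to whatever loads the log into MySQL):

```shell
# Extract the percent-encoded info_hash value from each announce request
# and append it to a log file (log path is an assumption).
tcpflow -p -c -i eth1 tcp \
  | grep -oE 'GET /announce\?[^ ]*' \
  | grep -oE 'info_hash=[^&]+' \
  | cut -d= -f2- \
  >> /var/log/infohash.log
```

Note this only sees HTTP trackers; UDP trackers and DHT traffic would need a dedicated decoder.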

MySQL login-path issues with clustercheck script used in xinetd

# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
It is kind of strange: the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the status for xinetd showed signal 13. After debugging, I found that even a simple command like this does not work:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
You can also test your binary's location with the Linux which command.
It has been a long time since this question was asked, but it just came to my attention.
First of all as mentioned, Percona Cluster Control script is called clustercheck, so make sure you are using the correct name and correct path.
Secondly, since the server script runs fine from command line, it seems to me that the path of mysql client command is not known by the xinetd when it runs the Cluster Control script.
Since the mysqlclustercheck script, as shipped by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud#gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
then edit script /usr/bin/mysqlclustercheck and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of mysql client command you found in the previous step.
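For example, if which reported /usr/local/mysql/bin/mysql as above, the start of that line would become (a sketch; the rest of the line and the variables come from the original script):

```shell
# Absolute path substituted for the bare "mysql" (path from `which mysql` above)
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
```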
I also see that you are not passing MySQL connection credentials. The mysqlclustercheck script, as shipped by Percona, uses a user/password pair to connect to the MySQL server.
So normally, you should execute the script in the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Here haproxy/haproxyMySQLpass is the MySQL user/password pair for the HAProxy monitoring user.
Additionally, you should pass them to your script in its xinetd settings like this:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting occurs because the script, when run by xinetd, tries to write to a pipe whose reading end has already been closed. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you will probably see the broken-pipe signal (13 in POSIX).
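The broken-pipe behaviour is easy to reproduce outside xinetd (a sketch; bash reports death by SIGPIPE as exit status 128 + 13 = 141):

```shell
# `yes` keeps writing after `head` exits, so `yes` is killed by SIGPIPE.
bash -c 'yes | head -n 1 > /dev/null; echo "yes exited with ${PIPESTATUS[0]}"'
# prints: yes exited with 141
```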
Finally, I had issues with this script on SLES 12.3, and I finally managed to run it not as 'nobody' but as 'root'.
Hope it helps

Active check if Debian is updated

I have a problem with active checking. I need to check whether my Debian servers are updated. Passive checks are already working, but I am trying active checks for the first time. I inserted this user parameter into zabbix_agentd.conf:
UserParameter=system.sw.debianupdates,apt-get dist-upgrade -s |sed -n 's/^\([0-9]\+\) upgraded.*/\1/p'
and set ServerActive (same as Server for the passive checks) and RefreshActiveChecks (120).
I created an item http://prntscr.com/6jyvww
If I check the latest data in the graph, there are no values (no data), but if I run
zabbix_get -s IP -k system.sw.debianupdates
on the Zabbix server, I get the value I expected.
Any idea what I forgot to set up, or where my mistake is?
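For what it's worth, the sed part of the user parameter can be sanity-checked in isolation by feeding it a sample apt-get summary line (a sketch; the sample line mimics apt-get's output format). The command itself can also be tested on the agent host with zabbix_agentd -t system.sw.debianupdates.

```shell
# Simulate apt-get's summary line and check that sed extracts the count
printf '%s\n' '5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.' \
  | sed -n 's/^\([0-9]\+\) upgraded.*/\1/p'
# prints: 5
```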