I would like to automate host creation on the Zabbix server without using an agent on the hosts. I tried using discovery rules and sending JSON data with zabbix_sender, but no luck: the server does not accept the data.
Environment:
Zabbix server 3.4 installed on CentOS 7. Hosts run Windows or Ubuntu.
On the server I created a host named zab_trap.
In that host I created a discovery rule with the key zab_trap.discovery and the type Zabbix trapper. Then, in the discovery rule, I created a host prototype named {#RH.NAME}.
Command line with JSON "data":
zabbix_sender.exe -z zab_server -s zab_trap -k zab_trap.discovery -o "{"data":[{"{#RH.NAME}":"HOST1"}]}"
I expected that "HOST1" would be created, but after execution I got:
info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.000188"
sent: 1; skipped: 0; total: 1
And there is no error in zabbix_server.log (with debug level 5). I only see this:
trapper got '{"request":"sender data","data":[{"host":"zab_trap","key":"zab_trap.discovery","value":"'{data:[{{#RH.NAME}:HOST1}]}'"}]}'
I think there may be something wrong with the JSON syntax.
Please help.
It seems I have found the solution. The problem is hidden in the way the JSON is sent: passing it directly on the command line does not work properly (the shell mangles the quotes), but it works if zabbix_sender sends a file containing the JSON.
Command line:
zabbix_sender -z zab_server -s zab_trap -i test.json
The file test.json contains the line:
- zab_trap.discovery {"data":[{"{#RH.NAME}":"HOST1"}]}
The host was created.
If you want to use the command line, without the JSON file, you need to clean the string:
zabbix_sender.exe -z zab_server -s zab_trap -k zab_trap.discovery -o "$(echo '{"data":[{"{#RH.NAME}":"HOST1"}]}' | tr -cd '[:print:]')"
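The input file can also be generated programmatically, which sidesteps the shell-quoting problem entirely. A minimal Python sketch (host and key names are taken from the example above; json.dumps guarantees the payload is valid JSON):

```python
import json

# Low-level discovery payload: a list of macro -> value objects.
macros = [{"{#RH.NAME}": "HOST1"}]
payload = json.dumps({"data": macros}, separators=(",", ":"))

# zabbix_sender -i expects one "<host> <key> <value>" triple per line;
# "-" tells it to take the host name from the -s option / config file.
with open("test.json", "w") as f:
    f.write("- zab_trap.discovery " + payload + "\n")

print(payload)
```

The resulting file can then be sent exactly as in the working command above: zabbix_sender -z zab_server -s zab_trap -i test.json.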
I am using Ansible to automate some network troubleshooting tasks, but when I try to ping all my devices as a sanity check I get the following error:
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\".
When I run the command in Ansible verbose mode, right before this error I get the following output:
<10.25.100.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" && echo ansible-tmp-1500330345.12-194265391907358="echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" ) && sleep 0'
I am an intern and thus have only read-only access to all devices; therefore, I believe the error is occurring because of the mkdir command. My two questions are:
1) Is there any way to configure Ansible not to create any temp files on the devices?
2) Is there some other factor that might be causing this error that I have missed?
I have tried searching the Ansible documentation for any relevant configuration options, but I do not have much experience with Ansible, so I have been unable to find anything.
The question does not make sense in a broader context: Ansible is a tool for server configuration automation, and without write access you can't configure anything on the target machine, so there is no use case for Ansible.
In a narrower context, although you did not post any code, you seem to be trying to ping the target server. Ansible's ping module is not an ICMP ping. Instead, it connects to the target server, transfers Python scripts, and runs them. The scripts produce a response, which means the target system meets the minimal requirements to run Ansible modules.
However, you seem to want to run a regular ping command using Ansible's command module on your control machine and check the status:
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - command: ping -c 4 {{ target_host }}
You might want to play with the failed_when, ignore_errors, or changed_when parameters. See "Error handling in playbooks" in the Ansible docs.
Note that I suggested running the whole play on localhost because, in your situation, it doesn't make sense to put target machines to which you have limited access rights into the inventory.
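As a sketch of how those error-handling parameters could fit together (reusing the names from the play above; this is illustrative, not a definitive recipe), an unreachable host then neither aborts the play nor reports "changed":

```yaml
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - command: ping -c 4 {{ target_host }}
      register: ping_result
      changed_when: false      # a read-only check never "changes" anything
      ignore_errors: true      # keep going even if the host is unreachable
    - debug:
        msg: "{{ target_host }} is {{ 'reachable' if ping_result.rc == 0 else 'unreachable' }}"
```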
Additionally:
Is there any way to configure Ansible not to create any temp files on the devices?
Yes. Running commands through the raw module will not create temporary files.
As you seem to have SSH access, you can use it to run a command and check its result:
- hosts: 192.168.1.1
  tasks:
    - raw: echo Hello World
      register: echo
    - debug:
        var: echo.stdout
If you have multiple nodes and sudo permission, and you want to bypass the read-only restriction, try using the raw module to remount the disk on the remote node with the read/write option. It was helpful for me.
Playbook example:
---
- hosts: bs
  gather_facts: no
  pre_tasks:
    - name: read/write
      raw: ansible bs -m raw -a "mount -o remount,rw /" -b --vault-password-file=vault.txt
      delegate_to: localhost
  tasks:
    - name: dns
      raw: systemctl restart dnsmasq
    - name: read only
      raw: mount -o remount,ro /
I know that tools like jq let you pipe a command's output through them and format it, but you can't use that for situations like displaying real-time JSON logs from a local server. Is there a tool that can achieve that?
I assume you use a default Linux distribution with systemd installed, so all services send their log entries to systemd.
Back in the day, Linux created plain log files only, but they got too big, so logrotate was added to keep things a bit cleaner.
An up-to-date Linux distribution uses systemd (the journal) to store log entries, which is much better.
To read a journal you first need to find the service you are looking for.
To list all services managed by systemd, call ...
systemctl list-units | grep service
You could use it without grep if you really can't use pipes.
Anyway, pick the service you like and read its journal via journalctl. Here are some examples ...
journalctl -u ssh.service -p warning -n 30 # (sudo) last 30 warnings and errors SSH created
journalctl -u apache2.service -p err --no-pager # (sudo) all Apache entries without using the pager (less)
journalctl -k -p err --since today # (sudo) kernel error messages of today
Add the option -o json or -o json-pretty and you will get the output you wanted. You can also add the option -f (or --follow) to show the most recent journal entries and continuously print new entries as they are appended to the journal. This way you could make cool backgrounds with conky, for example.
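Since every line of journalctl -o json output is a self-contained JSON object, any JSON-aware tool can consume the stream. A minimal Python sketch (the sample line is fabricated for illustration; in practice you would read the lines from a pipe such as journalctl -o json -f):

```python
import json

# One line as emitted by `journalctl -o json` (fabricated sample entry).
line = ('{"_SYSTEMD_UNIT":"apache2.service","PRIORITY":"4",'
        '"MESSAGE":"Request for the specified frontend is not possible."}')

entry = json.loads(line)
# Print selected fields, similar to what -o json-pretty shows.
print(entry["_SYSTEMD_UNIT"], "[prio", entry["PRIORITY"] + "]:", entry["MESSAGE"])
```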
You can also get the status of your service via systemctl ...
systemctl status apache2.service # (sudo) status of Apache
This way you can pretty-print "real time json logs from a local server" like you wanted, and without using jq ...
admin@suse:~$ journalctl -u apache2.service -p warning -n 1 -o json-pretty --no-pager
{
"__CURSOR" : "s=65cbb17b253f44c0b800cee690cfb9ee;i=187fd;b=d3effa777ddf4ba9b4f82f126be4439e;m=51cb2bb2d53;t=54b7c4c9ffbeb;x=9fd3b1bc1e50a798",
"__REALTIME_TIMESTAMP" : "1490372117134315",
"__MONOTONIC_TIMESTAMP" : "5620815834451",
"_BOOT_ID" : "d3effa777ddf4ba9b4f82f126be4439e",
"_TRANSPORT" : "syslog",
"_SYSTEMD_SLICE" : "system.slice",
"_MACHINE_ID" : "dc3b59f36cc34cfa8873c5b530577a2a",
"_HOSTNAME" : "suse",
"SYSLOG_FACILITY" : "1",
"_UID" : "33",
"_GID" : "33",
"PRIORITY" : "4",
"SYSLOG_IDENTIFIER" : "/usr/lib/cgi-bin/captainlog/search.sh",
"MESSAGE" : "Request for the specified frontend is not possible. REMOTE_ADDR: xxxx:xxx:xxxx:xxxx:xxxx:xxxx:xxx:xxxx",
"_COMM" : "logger",
"_CAP_EFFECTIVE" : "0",
"_SYSTEMD_CGROUP" : "/system.slice/apache2.service",
"_SYSTEMD_UNIT" : "apache2.service",
"SYSLOG_PID" : "21314",
"_PID" : "21314",
"_SOURCE_REALTIME_TIMESTAMP" : "1490372117133991"
}
Here is a good guide I found on the internet.
According to the Docker Remote API v1.24 documentation (https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/list-tasks), the filters parameter can be used to list only the running tasks of a particular service name. For some reason, I am getting the full list of all tasks regardless of their names or desired states. I can't find any proper examples of using curl with JSON requests against the Docker API.
I'm using the following command:
A)
curl -X GET -H "Content-Type: application/json" -d '{"filters":[{ "service":"demo", "desired-state":"running" }]}' https://HOSTNAME:2376/tasks --cert ~/.docker/cert.pem --key ~/.docker/key.pem --cacert ~/.docker/ca.pem
This returns everything.
B)
Here I am trying to get something working based on "Docker Remote API Filter Exited":
curl https://HOSTNAME:2376/containers/json?all=1&filters={%22status%22:[%22exited%22]} --cert ~/.docker/cert.pem --key ~/.docker/key.pem --cacert ~/.docker/ca.pem
This one returns "curl: (60) Peer's Certificate issuer is not recognized.", so I guess that curl request is malformed.
I have asked on the Docker forums and they helped a little. I'm amazed that there is no proper documentation anywhere on the internet on how to use the Docker API with curl; or is it so obvious that I just don't understand something?
I should preface this by saying that I have never seen curl erroneously report a certificate error when there was in fact some other issue in play, but I will trust your assertion that this is not a certificate problem.
I thought at first that your argument to filters was incorrect, because
according to the API reference, the filters parameter is...
a JSON encoded value of the filters (a map[string][]string) to process on the containers list.
I wasn't exactly sure how to interpret map[string][]string, so I set up a logging proxy between my Docker client and server and ran docker ps -f status=exited, which produced the following request:
GET /v1.24/containers/json?filters=%7B%22status%22%3A%7B%22exited%22%3Atrue%7D%7D HTTP/1.1\r
If we decode the argument to filters, we see that it is:
{"status":{"exited":true}}
Whereas you are passing:
{"status":["exited"]}
So that's different, obviously, and I assumed that was the source of the problem... but when trying to verify that, I ran into a curious issue: I can't even run your curl command line as written, because curl tries to perform globbing due to the braces:
$ curl http://localhost:2376/containers/json'?filters={%22status%22:[%22exited%22]}'
curl: (3) [globbing] nested brace in column 67
If I correctly quote your argument to filters:
$ python -c 'import urllib; print urllib.quote("""{"status":["exited"]}""")'
%7B%22status%22%3A%5B%22exited%22%5D%7D
It seems to work just fine:
$ curl http://localhost:2376/containers/json'?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D'
[{"Id":...
I can get the same behavior if I use your original expression and pass -g (aka --globoff) to disable the brace expansion:
$ curl -g http://localhost:2376/containers/json'?filters={%22status%22:[%22exited%22]}'
[{"Id":...
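The percent-encoding step can be reproduced without curl's quirks; a small Python 3 sketch (Python 3 syntax, unlike the Python 2 one-liner above; host and port are the same assumptions as in the curl examples):

```python
import json
from urllib.parse import quote

# filters is a map[string][]string, JSON-encoded and then percent-encoded.
filters = {"status": ["exited"]}
encoded = quote(json.dumps(filters, separators=(",", ":")))
print(encoded)  # percent-encoded form of {"status":["exited"]}

url = "http://localhost:2376/containers/json?filters=" + encoded
print(url)
```

Passing the resulting URL to curl needs no -g flag, because the braces are already encoded.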
One thing I would like to emphasize is the utility of sticking a proxy between the docker client and server. If you ever find yourself asking, "how do I use this API?", an excellent answer is to see exactly what the Docker client is doing in the same situation.
You can create a logging proxy using socat. Here is an example.
docker run -v /var/run/docker.sock:/var/run/docker.sock -p 127.0.0.1:1234:1234 bobrik/socat -v TCP-LISTEN:1234,fork UNIX-CONNECT:/var/run/docker.sock
Then run a command like so in another window.
docker -H localhost:1234 run --rm -p 2222:2222 hello-world
This example uses docker on ubuntu.
A Docker REST proxy can be as simple as this:
https://github.com/laoshanxi/app-mesh/blob/main/src/sdk/docker/docker-rest.go
Then you can curl like this:
curl -g http://127.0.0.1:6058/containers/json'?filters={%22name%22:[%22jenkins%22]}'
I want to ping an external IP from all of my servers that run the Zabbix agent.
I searched and found some articles about Zabbix user parameters.
In /etc/zabbix/zabbix_agentd.conf.d/ I created a file named userparameter_ping.conf with the following content:
UserParameter=checkip[*],ping -c4 8.8.8.8 && echo 0 || echo 1
I created an item named checkip on the Zabbix server, with a graph, but got no data. After some more digging I found zabbix_get and tested my user parameter, but I got the error ZBX_NOTSUPPORTED:
# zabbix_get -s 172.20.4.43 -p 10050 -k checkip
My Zabbix version:
Zabbix Agent (daemon) v2.4.5 (revision 53282) (21 April 2015)
Does anybody know what I can do to address this?
After some changes and talks with folks on the mailing list it finally worked. Here is how:
First I created a file in:
/etc/zabbix/zabbix_agentd.conf.d/
and added this line:
UserParameter=checkip[*],ping -W1 -c2 $1 >/dev/null 2>&1 && echo 0 || echo 1
and ran this command:
./sbin/zabbix_agentd -t checkip["8.8.8.8"]
checkip[8.8.8.8] [t|0]
So everything worked, but the Timeout option is very important for us:
add a timeout in /etc/zabbix/zabbix_agentd.conf:
Timeout=30
The Timeout default is 3 s, so if you run
time ping -W1 -c2 8.8.8.8
you may see that it takes more than 3 s, in which case you get the error:
ZBX_NOTSUPPORTED
It can be anything: for example the timeout (the default timeout is 3 seconds and ping -c4 requires at least 3 seconds), the permission/path to ping, an agent that wasn't restarted, ...
Increase the debug level, restart the agent and check the Zabbix logs. You can also test zabbix_agentd directly:
zabbix_agentd -t checkip[]
[m|ZBX_NOTSUPPORTED] [Timeout while executing a shell script.] => Timeout problem. Edit zabbix_agentd.conf and increase Timeout settings. Default 3 seconds are not the best for your ping, which needs 3+ seconds.
If you need more than 30 s for the execution, you can use the nohup (command...) & combo to sidestep the timeout restriction.
That way, if you generate a file with the results, on the next pass you can read the file and get the results back without any need to wait at all.
For those who may be experiencing other issues with the same error message.
It is important to run zabbix_agentd with the -c parameter:
./sbin/zabbix_agentd -c zabbix_agentd.conf --test checkip["8.8.8.8"]
Otherwise Zabbix might not pick up the command and will thus yield ZBX_NOTSUPPORTED.
It also helps to isolate the command into a script file, as Zabbix will butcher in-line commands in UserParameter= much more than you'd expect.
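For example, the checkip one-liner from the question could be moved into a standalone script (the script path and name here are illustrative, not from the original setup):

```
# /etc/zabbix/zabbix_agentd.conf.d/userparameter_ping.conf
UserParameter=checkip[*],/etc/zabbix/scripts/checkip.sh $1

# /etc/zabbix/scripts/checkip.sh  (make it executable with chmod +x)
#!/bin/sh
ping -W1 -c2 "$1" >/dev/null 2>&1 && echo 0 || echo 1
```

With the logic in a script, the UserParameter line contains no quotes or operators for Zabbix to mangle.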
I defined two user parameters like this for sync checking between two Samba DCs.
/etc/zabbix/zabbix_agentd.d/userparameter_samba.conf:
UserParameter=syncma, sudo samba-tool drs replicate smb1 smb2 cn=schema,cn=configuration,dc=domain,dc=com
UserParameter=syncam, sudo samba-tool drs replicate smb2 smb1 cn=schema,cn=configuration,dc=domain,dc=com
and also provided sudoer access for Zabbix user to execute the command. /etc/sudoers.d/zabbix:
Defaults:zabbix !syslog
Defaults:zabbix !requiretty
zabbix ALL=(ALL) NOPASSWD: /usr/bin/samba-tool
zabbix ALL=(ALL) NOPASSWD: /usr/bin/systemctl
And "EnableRemoteCommands" is enabled in my zabbix_agentd.conf. Sometimes when I run
zabbix_get -s CLIENT_IP -p10050 -k syncma or
zabbix_get -s CLIENT_IP -p10050 -k syncam
I get the error ZBX_NOTSUPPORTED: Timeout while executing a shell script.
but after executing /sbin/zabbix_agentd -t syncam on the client, the Zabbix server just responds normally:
Replicate from smb2 to smb1 was successful.
and when it has a problem I get the error below in my zabbix.log:
failed to kill [ sudo samba-tool drs replicate smb1 smb2 cn=schema,cn=configuration,dc=domain,dc=com]: [1] Operation not permitted
It seems like a permission error, but it just resolves after executing /sbin/zabbix_agentd -t syncam, and I am not sure whether the error is gone permanently or will happen again at the next Zabbix item check interval.
I have successfully installed a Zabbix environment. Now I want to use zabbix_sender to send data from a third-party program to Zabbix. I created a host "api_test" and an item "test item" with the key "java.test.item". Sending
zabbix_sender -z localhost -p 10051 -s "api_test" -k java.test.item -o 1234
from the Linux server works perfectly and adds a value as expected.
The problem is that I would like to use a discovery item, and I cannot find the right syntax for zabbix_sender. Here is how I configured the discovery rule:
And this is the item prototype:
I expected the following query to add an item based on the item prototype, but nothing happens:
zabbix_sender -z localhost -p 10051 -s "api_test" -k java.th.discovery -o '{"data":[{"{#THNAME}:"test_thread"}]}'
I also tried different quotation marks (single, double, none), but nothing seems to work.
Consequently, the following query I tried afterwards fails:
zabbix_sender -z localhost -p 10051 -s "api_test" -k java.th.ex["test_thread"] -o 98765
The question is: where am I mistaken? I guess it is the discovery rule, or the zabbix_sender syntax for discovery, but I cannot find anything in the documentation.
Any help is appreciated!
Steffen, your configuration in Zabbix frontend is correct.
However, there is a mistake in the JSON syntax that you use on the command line: the double quotes after {#THNAME} are missing:
{"data":[{"{#THNAME}:"test_thread"}]}
You should see an error message about it in the discovery list.
It should work after that issue is fixed. If not, please provide details about your "#Thread for discovery" macro and the error message that you get.
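The difference is easy to verify locally before sending anything; a quick Python check (both payload strings are copied from the question and from the fix above):

```python
import json

broken = '{"data":[{"{#THNAME}:"test_thread"}]}'   # quotes after {#THNAME} missing
fixed  = '{"data":[{"{#THNAME}":"test_thread"}]}'  # corrected payload

try:
    json.loads(broken)
    print("broken payload accepted")
except json.JSONDecodeError:
    print("broken payload rejected")               # this branch is taken

print(json.loads(fixed)["data"][0]["{#THNAME}"])   # the discovery macro value
```

With the quotes fixed, the original command line becomes: zabbix_sender -z localhost -p 10051 -s "api_test" -k java.th.discovery -o '{"data":[{"{#THNAME}":"test_thread"}]}'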