I would like to know if it's possible to post-process traps and events that the Zabbix server has received from Zabbix agents. I am hoping there is some configuration option which I don't know of.
Since you don't give more details, my assumption is that you want to do something in response to a certain event, most probably a trigger firing: a service went down, there are too many open connections, and so on. These can be handled by using Zabbix's Actions to intercept an event.
The operation that follows depends on what you have to do; it can be a remote command (executed on the remote agent) or a script executed by the server.
The remote command is a straightforward concept that works out of the box. Just follow the manual and the howtos.
Running something on the server isn't offered directly, but you can trick Zabbix into doing just that by using custom alert scripts, which are simply scripts launched by the server process. Then create a "send message" operation that uses your custom alert script, and off you go.
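As a minimal sketch of such an alert script, assuming a Node runtime on the server (the log path below is a placeholder): Zabbix invokes an alert script with three arguments, the "send to" value, the subject, and the message, so a script placed in the server's AlertScriptsPath could look like this:

    // alert-hook.ts - minimal custom alert script sketch (run via ts-node
    // or compile first). Zabbix passes alert scripts three arguments:
    // the recipient ("Send to"), the subject, and the message body.
    import { appendFileSync } from "fs";

    const [sendTo, subject, message] = process.argv.slice(2);

    // Example post-processing: append the event to a log file (placeholder path).
    appendFileSync(
      "/var/log/zabbix/custom-events.log",
      `${new Date().toISOString()} ${sendTo} ${subject}: ${message}\n`
    );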
I would like to use Dokku for deploying my Rails apps, but I cannot find any method that allows me to send the logs to Zabbix. Does anyone have ideas? Thanks!
You can't send logs directly to Zabbix, as it is not a log collector.
You need a Zabbix agent installed on your app machine to analyze logs and trigger events, or, if you are using a PaaS, you should implement web scenarios on your Zabbix server to check specific URLs.
If you want to collect logs instead, you could implement an ELK stack.
Elasticsearch has its own alerting module, but it's paid, and IMHO Zabbix's alerting is far better.
I'm writing an application which delivers data from remote devices over an HTTP API. These devices are on a mobile data connection and have limited resources.
I wish to receive custom monitoring data over the HTTP API, relying on the security model designed in the application, and push that data to Zabbix directly (or indirectly) from node.js. I do not wish to use Zabbix Agent on the remote devices.
I see that I can use zabbix_sender to send data to a Zabbix server for a pre-configured host. This works great. I intend to deliver monitoring data over my custom API and, when it is received, pass this data to zabbix_sender inside the server network.
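(For reference, a typical zabbix_sender invocation for a pre-configured host; the server, host name, and item key below are placeholders:)

    zabbix_sender -z zabbix.example.com -s "device-001" -k custom.metric -o 42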
The problem is there are many devices in the field and more are being added all the time.
TL;DR:
When zabbix_sender provides a custom hostname which doesn't exist in Zabbix already, it fails.
I would like to auto-add discovered hosts, based upon new hostnames from zabbix_sender. How would I do this?
Also, extra respect if anyone can give examples of how to avoid zabbix_sender and send data directly from node.js to the Zabbix server. I mean: suggest an NPM package that you have experience using. (Update: Found working node.js package here: https://www.npmjs.com/package/node-zabbix-sender)
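Based on that package's README, usage looks roughly like the following (server address, host name, and item key are placeholders; treat it as an untested sketch):

    // Untested sketch based on the node-zabbix-sender README.
    const ZabbixSender = require("node-zabbix-sender");

    const sender = new ZabbixSender({ host: "zabbix.example.com" });

    // Queue items (target host, item key, value), then flush them to the server.
    sender.addItem("device-001", "custom.metric", 42);
    sender.send((err: Error | null, res: unknown) => {
      if (err) throw err;
      console.dir(res);
    });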
Zabbix configuration: I'm learning with Zabbix 2.4 installed in Docker, with no custom configuration, from this Docker Hub image: https://hub.docker.com/r/zabbix/zabbix-2.4/
Probably the best option would be to use the Zabbix API to create the hosts directly.
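A minimal sketch of that approach against a 2.4-era JSON-RPC API (the URL, credentials, group ID, and interface details below are placeholders; check them against your installation):

    // Creating a host through the Zabbix JSON-RPC API (Node 18+ for fetch).
    const api = "http://zabbix.example.com/api_jsonrpc.php";

    async function rpc(method: string, params: unknown, auth?: string) {
      const res = await fetch(api, {
        method: "POST",
        headers: { "Content-Type": "application/json-rpc" },
        body: JSON.stringify({ jsonrpc: "2.0", method, params, auth, id: 1 }),
      });
      return (await res.json()).result;
    }

    async function createHost(hostName: string) {
      // user.login returns an auth token used by subsequent calls.
      const auth = await rpc("user.login", { user: "Admin", password: "zabbix" });
      return rpc("host.create", {
        host: hostName,
        groups: [{ groupid: "2" }], // placeholder host group ID
        interfaces: [
          { type: 1, main: 1, useip: 1, ip: "127.0.0.1", dns: "", port: "10050" },
        ],
      }, auth);
    }

    createHost("device-001").then(console.log).catch(console.error);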
Alternatively, you could set up an action and emulate an active agent connection, which would make Zabbix create the host via active agent auto-registration.
You could also use low-level discovery (LLD) to send in JSON, which would result in hosts/items being created based on prototypes.
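For the LLD route, the discovery rule would receive a JSON payload shaped roughly like this (the macro name is whatever you define in your prototypes):

    {
      "data": [
        { "{#DEVICEID}": "device-001" },
        { "{#DEVICEID}": "device-002" }
      ]
    }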
In all of these cases, you have to wait up to one minute (by default) for the hosts to appear in the Zabbix cache; then you can send the data.
Also note that Zabbix 2.4 is not supported anymore and will receive no fixes - it is not a "long-term support" release.
I've created an SSIS package which runs perfectly when executed manually. Now I have a requirement that a mail should be sent every time it runs, stating whether the package completed successfully or failed.
I've created an SMTP connection with the server name mx.xxxxxxxx (organization). I've checked neither the Windows Authentication nor the Enable Secure Sockets Layer option (as suggested in various blogs).
The package runs fine and sends the mail when run manually, but it fails when scheduled as a job.
I've tried running it by editing the command line, as suggested by many, but with no success.
Can you please suggest where I might be going wrong?
Below is the error:
Argument "SMTP" for option "connection" is not valid. The command line parameters are invalid. The step failed.
Since it fails when you run it from your production server, regardless of whether you run it manually or from the job, it is probably related either to connectivity (can your production server reach the SMTP server?) or to permissions, if you are using some kind of proxy account on the server that is different from the one you use in your local BIDS.
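Given the exact error text, it is also worth double-checking the command-line override itself: dtexec's /CONNECTION option expects the connection manager's name (or ID) and the new connection string separated by a semicolon, roughly like this (package and server names are placeholders):

    dtexec /F "MyPackage.dtsx" /CONNECTION SMTP;"SmtpServer=mx.example.org;UseWindowsAuthentication=False;EnableSsl=False;"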
G'Day,
Is anyone able to provide some pointers on how I can notify my Delphi application that a particular record in my MySQL database has changed? Something along the lines of the event system from InterBase?
Ideas I have looked at:
.: Q4M :. (http://q4m.31tools.com/)
Pros: Native MySQL solution requiring no external daemons
Cons: No Win32 build exists, due to it using POSIX calls specific to Linux
.: MySQL Message API :. (http://messagequeue.lenoxway.net/)
Pros: Robust (using spread.org)
Cons: No Win32 binary. Additional configuration and daemon(s) of spread.org required
.: Custom User Defined Function :.
I am attempting to write a UDF that can use the Win32 API PostMessage() to send a Windows message to a simple socket server.
Pros: Integrated (albeit with external DLL dependency) with MySQL. Can be customised to my needs
Cons: I cannot get it to work (see the post "MySQL User Defined Function to send a windows message"). This may be because MySQL is running as a service
Any pointers, ideas etc. greatly appreciated.
--D
As an option, you may consider using a middle-tier solution like RemObjects DataAbstract or kbmMW. AFAIK, they allow you to track changes on the middle layer and provide mechanisms to notify clients about them.
I ended up implementing this as follows:
Created a Windows app that listened on a TCP port as well as a Windows pipe (a rough sketch of this relay appears below)
Created a MySQL User Defined Function (UDF) that would connect to the above Windows pipe and send some information
Added triggers to the tables in the database to invoke the UDF with information about which table, which operation (insert, delete, update), and the primary key values
TCP clients can now connect to the Windows app to receive the information passed on from the UDF
The TCP clients can then refresh as needed using the information retrieved
Works well and is lightweight bandwidth-wise (as clients only refresh what they need). Also, keeping the TCP server on the same machine as the database and using a Windows pipe means the pipe can be kept open, and writing to the pipe avoids TCP stack overhead. This means the load on MySQL and the time taken to execute the UDF are very minor.
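The app itself was a native Windows program, but purely to illustrate the shape of that relay, a Node sketch might look like this (pipe name and port are placeholders):

    // Illustrative relay: receives UDF notifications on a Windows named pipe
    // and broadcasts each message to all connected TCP clients.
    import * as net from "net";

    const clients = new Set<net.Socket>();

    // TCP side: Delphi clients connect here to receive change notifications.
    net.createServer((sock) => {
      clients.add(sock);
      sock.on("close", () => clients.delete(sock));
    }).listen(5555);

    // Pipe side: the MySQL UDF writes one line per change event.
    net.createServer((pipe) => {
      pipe.on("data", (chunk) => {
        for (const c of clients) c.write(chunk);
      });
    }).listen("\\\\.\\pipe\\mysql-notify");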
I have different sites, with 4 to 5 servers at each location. Each location has one monitoring server running Nagios. Now I want to create a central location and combine all the Nagios services running at each location. Can anyone please point me to some documentation for this type of job?
There are two approaches that you can take.
Install a new Nagios core, as you did at each location, and perform active checks on each of the remote hosts. You'll likely end up installing NRPE on each of the remote hosts at each location; you can read this document for the details: http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf. If your remote servers are Windows servers, you can use NSClient to do much of the same things that NRPE does for Linux hosts. This effectively centralizes your monitoring. I also wrote some how-to style entries for using NRPE to run privileged commands (http://blog.gnucom.cc/?p=479) or to run event handlers (http://blog.gnucom.cc/?p=458). If you get tired of installing NRPE, you can use my script here: http://blog.gnucom.cc/?p=185. I also have instructions to install NSClient here: http://blog.gnucom.cc/?p=201.
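To give a flavour of this option, an NRPE command definition on a remote host and the matching check run from the central server look roughly like this (paths and thresholds are placeholders):

    # nrpe.cfg on the remote host
    command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20

    # run from the central Nagios server (e.g. via a check_nrpe service definition)
    /usr/local/nagios/libexec/check_nrpe -H remote-host -c check_load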
Install a new Nagios core, as you did at each location, and perform passive checks by instructing the remote Nagios cores to feed their results to the new central Nagios core's passive command file. I haven't done this myself, so I'm going to point you to the community's documentation here: http://nagios.sourceforge.net/docs/2_0/passivechecks.html. You could probably look at my event handler post to set up event handlers that send check results to the main server.
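For this option, results arrive through Nagios's external command file in a format roughly like this (host name, service name, and paths are placeholders; the status code is 0-3):

    # submitted by a remote core or an event handler to the central core
    echo "[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;web01;HTTP;0;OK - site responding" >> /usr/local/nagios/var/rw/nagios.cmd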
From my personal experience, the first option I mentioned is easier to implement and far easier to administer. However, as your server fleet grows, you'll start seeing major CPU bottlenecks on the main Nagios core. This is where passive checks become beneficial, as the main Nagios core simply waits for check results to be sent to it rather than having to run the checks itself.
Hope this helps. :)
A centralized view tool may be what you are looking for. There are a number of different options available.
Nagios Fusion
MK Livestatus
Nagcen
Thruk