Zabbix 5.2 storage monitoring less than n GB (OS independent)

I am new to Zabbix. I have set up monitoring for CPU and memory. Now I want to create a template that alerts when free storage drops below some number of GB on the Windows and Linux machines in my network; an OS-independent trigger would be great.
I am following this tutorial, but I think it covers an older version (I am using 5.2) and the triggers it shows do not appear in my interface:
https://www.youtube.com/watch?v=PS8nE2Zkal8&t=54s&ab_channel=AigarsKadikis
Is there an easy way to make this happen (maybe by importing some files)?

I would suggest taking advantage of the default pre-made templates officially provided by Zabbix SIA:
https://github.com/zabbix/zabbix/tree/master/templates/os/windows_agent
https://github.com/zabbix/zabbix/tree/master/templates/os/linux
Since the Zabbix agent binaries for Windows and Linux are different, the templates are mapped to each host at the host level in the Zabbix web interface.
Changing trigger thresholds can be done at the template level, in the low-level discovery trigger prototypes section, or individually directly on the host entity.
Providing a full-fledged tutorial on Zabbix triggers and thresholds is beyond the scope of a Stack Overflow answer.
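For illustration, a trigger prototype expression like the following (5.2 still uses the old expression syntax; the template name here is just a placeholder) fires when free space on a discovered filesystem drops below 10 GB. Since the vfs.fs.size item key is supported by both the Windows and Linux agents, the same prototype works for either OS:
{Template Module Filesystems:vfs.fs.size[{#FSNAME},free].last()}<10G
Adjust the 10G constant to whatever threshold you need.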

Related

How does Dynatrace OneAgent inject into Java

Classical Dynatrace monitoring worked by using an agent to monitor Java processes. You had to add the agent to the monitored VM and it worked.
Dynatrace OneAgent does this without manually adding an agent. But how does it work? No agent was added to the Java process; all that is needed is restarting the Java process. I tried it out with Liberty server and could find two Dynatrace threads called ruxitautosensor and ruxitsubpathsender, but I do not understand how the injection works.
Dynatrace OneAgent adds an entry to the "/etc/ld.so.preload" file in the OS:
/$LIB/liboneagentproc.so
"/etc/ld.so.preload" and env variable "LD_PRELOAD" are used to preload specified lib when starting new process.
It seems to me they are using the standard JVM Tool Interface (JVMTI) API, passing
-agentpath:<path-to-agent>=<options>
to the JVM. Full documentation is here: https://docs.oracle.com/javase/8/docs/platform/jvmti/jvmti.html
Example:
-agentpath:C:/PROGRA~2/DYNATR~1/oneagent/agent/lib64/oneagentloader.dll=isjdwppresent=true,loglevelcon=none,tenant=00000000-0000-0000-0000-000000000000,tenanttoken=XXXXXXXXXXXXXXXX,server=https://10.10.10.10:8443/communication
Note: Some strings have been obfuscated.
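For background, an agent loaded via -agentpath implements the JVMTI Agent_OnLoad entry point. A minimal skeleton in C (illustrative, unrelated to Dynatrace's implementation) looks like this:
/* agent.c - minimal JVMTI agent skeleton, illustrative only.
   Build (Linux): gcc -shared -fPIC -I$JAVA_HOME/include -I$JAVA_HOME/include/linux -o libagent.so agent.c
   Load with: java -agentpath:/path/to/libagent.so=myoptions ... */
#include <jvmti.h>
#include <stdio.h>

JNIEXPORT jint JNICALL
Agent_OnLoad(JavaVM *vm, char *options, void *reserved)
{
    jvmtiEnv *jvmti = NULL;
    /* Obtain the JVMTI environment; a real agent would now request
       capabilities and register event callbacks for instrumentation. */
    if ((*vm)->GetEnv(vm, (void **)&jvmti, JVMTI_VERSION_1_2) != JNI_OK)
        return JNI_ERR;
    printf("agent loaded with options: %s\n", options ? options : "");
    return JNI_OK;
}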
At a very high level, the installed OS-level agent runs processes that use OS-level functionality to iterate over the processes on the machine and inject the agent, via various techniques, into all the technologies supported for "deep monitoring", e.g. Java, .NET and a number of others.
More details are likely not published, for obvious reasons: all this gives a clear advantage over the traditional approach of injecting agents manually by adjusting startup scripts, especially if you are deploying into a very large environment.

How best to deploy this multi-tier app?

We currently have an application that runs on one dedicated server. I'd like to move it to OpenShift. It has:
A public-facing web app written in PHP
A Java app for administrators running on WildFly
A MySQL database
A filesystem containing lots of images and documents that must be accessible to both the Java and PHP apps. A third party FTPs a data file to the server every day, and a Perl script loads it into the database and the filesystem.
A Perl script occasionally runs ffmpeg to generate videos, reading images from and writing videos to the filesystem.
Is OpenShift a good solution for this, or would it be better to use AWS directly (for instance, because it has dedicated file-system components)?
Thanks
Michael Davis
Ottawa
The shared file system will definitely be the biggest issue here. You could get around it fairly easily, though, by setting up your applications to use Amazon S3 or some other shared cloud file system.
As for the rest of the application, if I were setting this up I would:
Set up a scaled PHP application. Even if you set the scaling to just 1 gear, this allows you to put the MySQL database on its own gear and even choose a different size for it, such as medium web gears (running PHP) and a large gear running the MySQL database. It also allows your WildFly gear to reach the database, since the database gear gets a FQDN (fully qualified domain name) that any application on your account can reach. Keep in mind, however, that it will use a non-standard port instead of 3306.
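On OpenShift Online (v2), creating such a scaled application might look like this with the rhc client (the application name and cartridge versions are just examples; run rhc cartridge list to see what is available to you):
rhc app create myapp php-5.4 --scaling
rhc cartridge add mysql-5.5 -a myapp
With --scaling enabled, the MySQL cartridge is placed on its own gear rather than sharing the web gear.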
Then you can set up your WildFly server at whatever gear size you want, but keep in mind that the MySQL environment variables will not be present there; you will have to put the connection details into your Java application manually.
As for the Perl script, depending on how intensive it is, you could run it on its own gear of whatever size with some extra storage, or you could co-locate it with either the PHP or Java application as a cron job. You can have it store the files on Amazon S3 and pull them down/upload them as it performs the ffmpeg operations on them. Since OpenShift is also hosted on Amazon (in the US-EAST region), these operations should be pretty fast, as long as you also put your S3 bucket in the US-EAST region.
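If you go the cron route, OpenShift v2 offered a cron cartridge; roughly (the cartridge version is an example):
rhc cartridge add cron-1.4 -a myapp
Scripts committed under .openshift/cron/daily/ (or minutely/, hourly/, etc.) in the application repository are then run on that schedule.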
Those are my thoughts; hope it helps. Feel free to ask questions if you have them. You can also visit http://help.openshift.com and, under "Contact Us", click on "Submit a request"; make sure you reference this Stack Overflow question so I know what you are talking about, and we can discuss solutions for any questions you might have.

WAMP MySQL or MySQL service on the client side?

We have designed an application using the .NET Framework. There is a client application and a server application. The client applications, web pages, and Android/iPhone applications fetch data from the server using a WCF service.
My issue is that some of the data the user sets in the application is saved on the server but does not appear on the client side once the application is restarted. We have designed the application in such a way that every change on the client side is reflected on the server side, to make this a cloud-based application.
Some of the settings changed or values input on the client side update on the server successfully, but do not reflect on client machines that use the standalone MySQL service. However, there are absolutely no issues when using WAMP's MySQL service, i.e. clients using the WAMP server can see the changes made. We have tried matching the versions and have also tried new and old versions of the standalone MySQL. The firewall settings all seem fine. Since we prefer to install standalone MySQL rather than WAMP on our customers' machines, it would be great if you could shed some light on the possible issues. Is there any difference between the initial config of standalone MySQL and the default config of WAMP's MySQL?
If there is anything in particular to note or tweak in this regard, it would be really helpful.
Thanks in advance.
I don't think you'll get a really good answer to your question, because the term "WAMP" does not refer to a specific package, but rather any package that includes Windows versions of Apache, MySQL and PHP (and sometimes Perl instead of or in addition to PHP). See this link for a list of some of the available packages:
http://en.wikipedia.org/wiki/Comparison_of_WAMPs
Those packages come with different standard configurations for MySQL. Since you didn't say which WAMP package (and version) you are using, there is no way of knowing how its standard configuration differs from a plain MySQL installation (where the version is also important).
Have you tried just running a diff on the respective config files?
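For example, on Windows you could compare the two files with the built-in fc command (the paths below are typical defaults and will vary with your installation and versions):
fc "C:\wamp\bin\mysql\mysql5.6.17\my.ini" "C:\Program Files\MySQL\MySQL Server 5.6\my.ini"
Differences in settings such as bind-address, port or other networking options would be the first things to look at.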

Server Monitoring tools Apache/MySQL

My boss has asked me to find a tool that will monitor our server health: preferably some kind of desktop application that we can keep an eye on, and that will alert us when capacity goes over a certain level, when we approach maximum storage, etc.
We need to monitor both MySQL and Apache. I'm guessing I might need two tools.
Thanks in advance
Have you looked at Munin? It's not a desktop application, but I don't know why you want a desktop solution.
You can monitor Apache with an SNMP module like mod_apache_snmp and tools like OpenNMS and Nagios. Nagios supports monitoring MySQL as well.
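As a sketch, a Nagios service check for MySQL might look like the following object definition (the host name is a placeholder, and check_mysql, which ships with the standard Nagios Plugins package, must be wired up as a command in your configuration):
define service {
    use                   generic-service
    host_name             dbserver
    service_description   MySQL
    check_command         check_mysql
}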
You might also like to look at Megamon (http://www.megamon.com). Megamon is a complete monitoring solution capable of graphing numerous system performance aspects as well as escalating alerts and much more.
Megamon is not a desktop solution but runs as a Virtual Appliance. However, since it has a web interface, it is just as easy to use as a desktop application.
Have a look at SeaLion. SeaLion is a cloud-based Linux server monitoring tool. Getting started is as easy as executing a command. It installs an agent at /usr/local/sealion-agent that runs as an unprivileged user (sealion). This agent collects data at regular intervals across servers, and the data is made available in your workspace. The latest version ships with NGINX, Apache, MySQL, MongoDB and Redis monitoring capability. It is free for 1 server with a 12-hour data retention policy.

How to run OpenERP 6.1 Web on a different machine

How do I run OpenERP Web 6.1 on a different machine than OpenERP server?
In 6.0 this was easy, there were 2 config files and 2 servers (server and "web client") and they communicated over TCP/IP.
I am not sure how to set up something similar for 6.1.
I was not able to find helpful documentation on this subject. Do they still communicate over TCP/IP? How do I configure the "web client" to use a different server machine? I would like to understand the new concept here.
tl;dr answer
It's meant only for debugging, but you can.
Use the openerp-web startup script that is included in the openerp-web project, which you can install from source. There's no separate installer for it, as it's not meant for production. You can pass parameters to set the remote OpenERP server to connect to, e.g. --server-host, --server-port, etc. Use --help to see the options.
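For example (the host is a placeholder; 8069 is the default OpenERP server port):
openerp-web --server-host=192.168.1.10 --server-port=8069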
Long answer
OpenERP 6.1 comes with a series of architectural changes that allow:
running many OpenERP server processes in parallel, thanks to improved statelessness. This makes distributed deployment a breeze, and gives load-balancing/fail-over/high-availability capabilities. It also allows OpenERP to benefit from multi-processor/multi-core hardware.
deploying the web interface as a regular OpenERP module, relieving you from having to deploy and maintain two separate server processes. When it runs embedded the web client can also make direct Python calls to the server API, avoiding unnecessary RPC marshalling, for an extra performance boost.
This change is explained in greater detail in this presentation, along with all the technical reasons behind it.
A standalone mode is still available for the web client with the openerp-web script provided in the openerp-web project, but it is meant for debugging purposes rather than production. It runs in mono-thread mode by default (see the --multi-thread startup parameter), in order to serialize all RPC calls and make debugging easier. In addition to being slower, this mode will also break all modules that have a web part, unless all regular OpenERP addons are also copied in the --addons-path of the web process. And even then, some will be broken because they may still partially depend on the embedded mode.
Now if you were simply looking for a distributed deployment model, stop looking: just run multiple OpenERP (server) processes with the full stack. Have a look at the presentation mentioned above to get started with Gunicorn, WSGI, etc.
Note: Due to these severe limitations and its relative uselessness (versus its maintenance cost), the standalone mode for the web client has been completely removed (see rev. 3200 on Launchpad) in OpenERP 7.0.