PostgreSQL monitoring with Zabbix

In Zabbix, is it possible to monitor many PostgreSQL databases? Only one database server can be defined in odbc.ini.
Thanks.

If you look at the Zabbix share site, you will see several templates that provide monitoring via the Zabbix agent. By using the agent instead of the server-side ODBC option, you push the actual connectivity out to the agent (which could, of course, be on the same server as Zabbix). This allows you to do separate discovery and even use separate credentials for each server (e.g., in user macros).
I have not experimented with the templates there; I merely offer them as examples. Determining what exactly to monitor on a database is the harder question, of course, since it can be highly variable by site -- what is "normal" at that site, and therefore what is worthy of an alert. But with LLD (low-level discovery) and the ability to run arbitrary queries, you can build any items and triggers you may need. Effectively, your DBA (which might also be you, of course) can craft the criteria, and you can put them wholesale into the template.
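As an illustration of the agent-side approach, the sketch below runs one arbitrary query against a local PostgreSQL instance and prints a single value for the agent to collect (e.g., wired up as a UserParameter). The host, database, credentials, and the query itself are hypothetical placeholders, and it assumes the psycopg2 driver is installed on the agent host.

```python
#!/usr/bin/env python3
# Agent-side check: run one query and print one value for Zabbix.
# All connection details and the query below are placeholders.
import sys
import psycopg2

def main() -> int:
    conn = psycopg2.connect(host="localhost", dbname="appdb",
                            user="zbx_monitor", password="secret")
    try:
        with conn.cursor() as cur:
            # Example metric: client connections to this database.
            cur.execute("SELECT count(*) FROM pg_stat_activity "
                        "WHERE datname = current_database()")
            print(cur.fetchone()[0])
    finally:
        conn.close()
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Because each agent (or each such check) carries its own connection details, the one-server-per-odbc.ini limitation on the Zabbix server no longer applies.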

Related

Is there a good solution for sharing secret variables and configurations between different servers, programs and processes?

Let's say you have a complex system that consists of several programs and processes running on several servers.
Now you want to change some important configuration variable, like the password to a database that is used by several of these processes.
Is there a standardized solution that stores these configurations in a central spot and pushes changes to the different servers on update?
I imagine something like environment variables, but ones that can be read and changed quickly at runtime.
You can use MQs (short for message queues) for distributed application communication.
Rackspace has a short article about the different MQs; check below:
Using Message Queues
Most of these systems use SSL and HTTPS, so you can have peace of mind about the channels being secure.
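As a sketch of what the message-queue approach looks like in practice, the snippet below broadcasts a changed setting to every subscribed process over one channel. Redis pub/sub stands in for the broker purely for brevity (any MQ with TLS support would do), and the host, channel name, and payload format are all made up.

```python
import json
import redis  # third-party client (pip install redis); any broker would do

r = redis.Redis(host="config-hub.example.com", ssl=True)  # TLS-protected channel

def push_update(key: str, value: str) -> None:
    """Publisher: broadcast a changed setting to all subscribed processes."""
    r.publish("config-updates", json.dumps({"key": key, "value": value}))

def listen_for_updates(config: dict) -> None:
    """Subscriber: each process updates its in-memory config at runtime."""
    pubsub = r.pubsub()
    pubsub.subscribe("config-updates")
    for message in pubsub.listen():
        if message["type"] == "message":
            update = json.loads(message["data"])
            config[update["key"]] = update["value"]  # e.g., a new DB password
```

Each process keeps its configuration in memory, much like environment variables, but picks up changes as soon as they are published.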

Mail, website, and forum under one database

I've tried to find an answer to my question but couldn't find the right one yet (I'd be glad if you pointed me to one). I'm a newbie when it comes to running services (websites, forums, wikis, email); I'm rather experimenting.
I have a couple of websites (mainly WordPress), a mail server, a forum, wikis, and file sharing (ownCloud) hosted on one server.
Until now, every time I installed a new service I created a new database (MySQL), just as the install READMEs advise. I would now like to connect some of the services together, mainly through a unified user database.
What is the best way to do it? Is having multiple databases versus one DB heavier on my server's CPU load? Is it secure? Is it easy to administer?
If CPU load isn't an issue with multiple DBs, is it possible to create a user database and link it to the databases of the services I'd like to connect?
Having multiple applications (forum, wiki, ...) access the same database is not likely to have any effect on CPU usage, but there are other drawbacks:
Table names used by the applications might conflict (many of them will have a "session" or "posts" table). Some web apps can prefix their table names with a string, e.g. "wp_session" and "wp_posts", to get around such conflicts.
Yes, it's less secure. When one of the applications has a security hole and someone manages to access its database, the data of all applications is compromised.
Multiple databases are likely to be easier to manage when doing application upgrades, backups, and adding or removing applications from the mix.
With a single shared database, accidentally break it and you break all the apps at once.
To get the applications to use the same authentication database, it's usually not enough to point them at the same database: they're likely to use different schemas for storing user information (different columns in the auth tables), different password hashing, and so on.
The question is quite broad, and the specific answer depends a lot on the actual applications you're using. The best approach in general is probably to pick applications that support a protocol such as OpenID or OAuth, or an authentication backend such as an LDAP directory or PAM (Pluggable Authentication Modules). These methods let you keep a single user database managed in a single place; the apps all need to work with the same backend. In any case, it's likely to be quite a learning experience to get it running smoothly.
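For instance, with an LDAP backend every application validates credentials against the same directory instead of its own user table. The sketch below uses the third-party Python ldap3 package; the server address and DN layout are hypothetical.

```python
from ldap3 import Server, Connection

# One directory acts as the single user database for every application.
# The server address and DN layout below are placeholders.
server = Server("ldaps://ldap.example.com")

def check_login(username: str, password: str) -> bool:
    """Authenticate by binding to the directory as the user."""
    user_dn = f"uid={username},ou=people,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    ok = conn.bind()  # True only if the directory accepts the credentials
    conn.unbind()
    return ok
```

Any app (forum, wiki, ownCloud, ...) that performs the same bind, or supports LDAP natively, shares one set of accounts.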

Alternative to cgi-bin

This question asks about the disadvantages of 'cgi-bin based' services. As far as I can ascertain, apart from perhaps the naming convention, nothing much has changed over the years as far as web-based client/server interaction is concerned. There is of course now the option to use AJAX clients, but ultimately they are still stateless, and code on the server, whatever language it is written in, still waits for input to be sent via 'GET' or 'POST' methods.
Having been out of the loop as far as web programming is concerned for quite a while, am I missing something obvious?
To clarify my question: the question I referred to suggests that 'cgi-bin' based systems are no longer in use, so what is the new alternative?
@sarnold: Thank you for your answer. Just so I am 100% certain about this: even if a system is developed using the 'latest and greatest' server platform (I guess this would be a .NET-based system or a Linux equivalent), it is still, ultimately, just a program or programs running (if using FastCGI) or waiting to be started on a server, so there really hasn't been any change over the years. If that is the case, what alternative is Brian referring to in his question?
The largest changes have been in tools like mod_php, which executes the code directly in the address space of the web server, and FastCGI, which implements something very nearly identical to the CGI protocol but with a handful of long-lived processes rather than a fork(2) + execve(2) of a new interpreter for every single request.
Of course, both approaches have problems. Executing the interpreter directly in the address space of the web server is potentially horrible for reliability and security: the server (typically) runs with the same privileges all the time, so separating users is (typically) impossible. Further, flaws in the interpreter can be quite common, so it isn't a good solution for shared hosting environments, because any user could run arbitrary code with the privileges required to access the data of all the other users on the system.
The FastCGI approach keeps almost the same speed; it sacrifices some speed to copying data between processes, but this isn't a big deal for anyone except huge-volume sites. And you can run multiple FastCGI systems as different user accounts attached to different locations of the single web server (e.g., http://example.com/public/ runs under account www-public and http://example.com/private/ runs under account www-private), and the FastCGI systems don't need to run with the same privileges as the web server.
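To make the contrast concrete, here is roughly what a minimal FastCGI worker looks like: one long-lived process handling request after request, rather than a fresh interpreter per hit. This sketch assumes the third-party flup package, which speaks the FastCGI protocol for Python WSGI applications; the bind address is arbitrary.

```python
# A long-lived FastCGI worker: started once, it then serves many requests.
# Assumes the third-party 'flup' package; the front-end web server is
# configured to forward FastCGI requests to the bind address below.
from flup.server.fcgi import WSGIServer

REQUESTS_SERVED = 0  # survives across requests -- the process stays alive

def app(environ, start_response):
    global REQUESTS_SERVED
    REQUESTS_SERVED += 1
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"request #{REQUESTS_SERVED} from this one process\n".encode()]

if __name__ == "__main__":
    # This worker could run as an unprivileged account (e.g., www-private),
    # separate from the web server's own user.
    WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()
```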
Of course, there are also servlet systems, where the server calls into compiled callback code (frequently compiled to bytecode) that is linked into the server process. Much less of a "scripting" feel.

How can I sync a database-driven website to a different server?

I have a website using cPanel on a dedicated account. I would like to be able to automatically sync the website to a second hosting company, or perhaps to a local (in-house) server.
Basically, this is a type of replication. The website is database-driven (MySQL), so ideally it would sync everything (content, database, emails, etc.), but the most important parts are the website files and the database.
I'm not so much looking for a failover solution as for an automatic replication solution, so that if the primary server goes offline, I can manually bring up the replicated site quickly.
I'm familiar with tools like unison and rsync, but most of these only sync files and do not cope well with open database connections.
Don't use one tool when two are better: use rsync for the files, but use replication for MySQL.
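As a sketch of that two-tool approach, the script below pushes the web files with rsync and then sanity-checks MySQL replication on the standby. Hosts, paths, and credentials are hypothetical, and it assumes the standby is already configured as a MySQL replica and that mysql-connector-python is installed.

```python
#!/usr/bin/env python3
# Two tools: rsync for the files, MySQL replication for the data.
# All hosts, paths, and credentials below are placeholders.
import subprocess
import mysql.connector

# 1. Push the web files to the standby (archive mode, delete extras).
subprocess.run(
    ["rsync", "-az", "--delete",
     "/home/site/public_html/",
     "standby.example.com:/home/site/public_html/"],
    check=True,
)

# 2. Verify that replication on the standby is healthy.
conn = mysql.connector.connect(host="standby.example.com",
                               user="monitor", password="secret")
cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()
if not status or status["Slave_IO_Running"] != "Yes" \
        or status["Slave_SQL_Running"] != "Yes":
    raise SystemExit("replication is broken -- fix it before you need it")
print("files synced; replication lag:",
      status["Seconds_Behind_Master"], "seconds")
```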
If, for some reason, you don't want to use replication, you might want to consider DRBD. This is of course only applicable if you're running Linux; DRBD has been part of the mainline kernel since version 2.6.33.
And yes, I am aware of at least one large enterprise deployment of DRBD that is used, among other things, to store MySQL database files. In fact, the MySQL website even has a relevant page on this topic.
You might also want to Google for articles against the DRBD/MySQL combination; I remember reading a few posts of that kind.

Should I move client configuration data to the server?

I have a client software program used to launch alarms through a central server. At first it stored configuration data in registry entries; now it uses a configuration XML file. This configuration information consists of the alarm number, alarm group, hotkey combinations, and such.
The client connects to a server using a TCP socket, which it uses to communicate this configuration to the server. In the next generation of this program, I'm considering moving all configuration information to the server, which stores all of its information in a SQL database.
I envision using some form of web interface to communicate with the server and set up the clients, rather than the current method, which is either to configure the client software on the machine through a control panel, or, at install time, to either push out an XML file or pass command-line parameters to the MSI. I'm thinking the only information I would then want to specify at install time is the path to the server. Each workstation would be identified by computer name and configured through the server.
Are there any problems or potential drawbacks of this approach? The main goal is to centralize configuration and make it easier to make changes later, because our software is usually managed by one or two people at most.
Other than allowing the client to function offline (if such a possibility makes sense for your application), there doesn't appear to be any drawback to moving the configuration to a centralized location. Indeed, even with a centralized location, a feature can be added to the client to cache the last known configuration for use when the client is offline.
If you implement a centralized database design, I suggest storing the configuration parameters in an Entity-Attribute-Value (EAV) structure, as this schema is particularly well suited to parameters. In particular, it allows easy addition and removal of individual parameters, and it lets you handle the parameters as a list (paving the way for a list-oriented display in the UI as well, so no UI changes are needed when new types of parameters are introduced).
Another reason configuration parameter collections and EAV schemas work well together is that, even with very many users and configuration points, the configuration data remains small enough that it doesn't suffer from some of the limitations of EAV with "big" tables.
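A minimal sketch of such an EAV table, using SQLite for brevity (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE client_config (
        entity    TEXT NOT NULL,   -- workstation, identified by computer name
        attribute TEXT NOT NULL,   -- parameter name, e.g. 'alarm_group'
        value     TEXT NOT NULL,   -- parameter value, stored as text
        PRIMARY KEY (entity, attribute)
    )
""")

# A new kind of parameter is just another row -- no schema change needed.
conn.executemany("INSERT INTO client_config VALUES (?, ?, ?)", [
    ("WS-0042", "alarm_number", "17"),
    ("WS-0042", "alarm_group",  "night-shift"),
    ("WS-0042", "hotkey",       "Ctrl+Alt+A"),
])

# Fetching one workstation's configuration as a list suits a
# list-oriented UI: one row per parameter.
for attribute, value in conn.execute(
        "SELECT attribute, value FROM client_config WHERE entity = ?",
        ("WS-0042",)):
    print(attribute, "=", value)
```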
The only thing that comes to mind is the security of the information, but you probably have that issue in either case. A database would probably also be easier to interface with, since everything would be in one spot.