What's the difference between Prometheus and Zabbix?

What are the differences between Prometheus and Zabbix?

Both Zabbix and Prometheus can be used in a wide range of monitoring scenarios; neither is specialized for one particular niche. Zabbix is older than Prometheus and probably more stable, with more ready-to-use solutions.
Zabbix has a core written in C and a web UI based on PHP. It also uses "agents" (client-side programs) written in C.
Prometheus is written in the Go language.
Zabbix stores data in an RDBMS (MySQL, PostgreSQL, Oracle, or SQLite) of the user's choice. Prometheus uses its own database embedded into the backend process (it is a non-relational database specially designed for storing monitoring data in a similar fashion to OpenTSDB's data model).
Zabbix by default uses a "pull" model, in which the server connects to agents on each monitored machine; the agents periodically gather the information and send it to the server. The alternative is "active checks" mode, in which agents establish the connection to the server themselves and send data to it as needed.
Prometheus uses a "pull" model when a server gathers information from client machines. But Prometheus Push Gateway may be used in cases when a "push" model is needed.
Prometheus requires an application to be instrumented with a Prometheus client library (available for different programming languages) to prepare metrics. For monitoring a system or software that can't be instrumented, there is an official "blackbox exporter" that can probe endpoints over a range of protocols; in addition, a wide range of third-party "exporters" and tools is available to help expose metrics for Prometheus (similar to "agents" for Zabbix). One such tool is Telegraf.
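As a rough sketch of what such instrumentation looks like with the official Python client, prometheus_client (the metric names and port are illustrative assumptions):

    import random, time
    from prometheus_client import start_http_server, Counter, Gauge

    REQUESTS = Counter("myapp_requests_total", "Total requests handled")
    TEMPERATURE = Gauge("myapp_room_temperature_celsius", "Current room temperature")

    start_http_server(8000)  # serves the text exposition format at :8000/metrics

    while True:
        REQUESTS.inc()                           # count an event
        TEMPERATURE.set(random.uniform(18, 25))  # stand-in for a real reading
        time.sleep(5)                            # Prometheus scrapes on its own schedule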
Zabbix uses its own TCP-based communication protocol between agents and a server.
Prometheus uses HTTP with Protocol Buffers (+ text format for ease of use with curl).
Zabbix offers its own web UI for visualization. Prometheus offers a basic tool for exploring gathered data and visualizing it in simple graphs on its native server, plus a minimal dashboard builder, but it is designed to be paired with modern visualization tools like Grafana.
Zabbix has support for alerting in its core.
Prometheus offers a solution for alerting that is separated from its core into the Alertmanager application.

Zabbix thinks in terms of machines, so you're limited to thinking about things in those terms. Alerts can be triggered based on simple math.
Prometheus doesn't have that restriction, and you're free to think in terms of services or datacenters. Alerts can be triggered by any valid expression, such as "the average latency is too high" or "the disks will fill up in four hours".
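For instance, the "disks will fill up in four hours" case can be written as a PromQL expression using predict_linear. A sketch of evaluating it against the Prometheus HTTP API from Python (the server address and node_exporter metric name are assumptions; check your own metric names):

    import requests

    # Linear regression over the last hour, extrapolated 4 hours ahead:
    # negative predicted free space means the disk will fill up.
    query = "predict_linear(node_filesystem_free_bytes[1h], 4 * 3600) < 0"
    resp = requests.get("http://localhost:9090/api/v1/query",
                        params={"query": query})
    for result in resp.json()["data"]["result"]:
        print(result["metric"], result["value"])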
Evolving from Machines to Services explains more about the difference between machine-based and service-based monitoring.

Zabbix is written in C and PHP. It's more of a classic monitoring solution.
Prometheus is written in Go, and it's recommended for cloud, SaaS, and OpenStack monitoring.
But you can use both. Prometheus is faster because of its purpose-built time-series database, and Zabbix has a smaller footprint (because it's written in C). In Zabbix you can do most things in the web GUI, but in Prometheus you must edit configuration files, as in Nagios.
Here is a German article about Prometheus: "Prometheus für das Cloud- und Enterprise-Monitoring" ("Prometheus for cloud and enterprise monitoring").

Related

Use MQTT or remote MySQL to get data on a server?

I have set up several Raspberry Pis in different locations that collect data (temperature, humidity) every 5 seconds. To visualize that data, I'm wondering whether it's better to send it via MQTT to a VPS and save it to a local MySQL database there, or to just use a remote MySQL connection and insert the data directly into the database.
Currently, I can't really see any advantages of MQTT. Do any of you have other opinions?
Thanks and greetings,
Jonas.
Any answer to this is going to be somewhat opinion based but I believe that the following are some areas where MQTT may provide you with benefits:
Message delivery (especially over poor-quality links) - with MQTT (and QoS > 0), once the message has been accepted, delivery is guaranteed (within limits!); the client/broker will handle network issues, etc. Note that some MQTT clients do not support offline buffering (for example the Paho .NET client), but you can resolve this by running a broker on each Pi, set up as a bridge (see the publishing sketch after this list). If you use MySQL, then you will need to deal with connectivity issues yourself (and handle data persistence during network outages, if that is important to you).
Bandwidth - MQTT messages are likely to be smaller (this depends upon how you pack your messages but the protocol adds very little overhead).
Security - the MySQL security guidelines state that the MySQL "port should not be accessible from untrusted hosts". Any product can have security issues but MySQL is a much larger and more complex system than an MQTT broker so has a larger attack surface.
Loose Coupling - Connecting directly to the database from your remote nodes locks you into that database & schema. Using MQTT allows you to rearchitect your backend (including moving to, say, PostgreSQL) without pushing out any changes to the client (important when you have a lot of remote devices or cannot remotely update them).
Pub/Sub model - The Publish/Subscribe model used by MQTT offers a number of advantages such as subscribing to live data from your test system.
Bi-Directional - If you need to, for example, control a relay on your remote device then it's easy to send a message to it from the server (this can also be used to do things like request logs for diagnostics).
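To make the MQTT side concrete, here is a minimal publishing sketch for one Pi, assuming the paho-mqtt 1.x Python client and a broker reachable at broker.example.com (both assumptions); the sensor reads are stubbed out:

    import json, random, time
    import paho.mqtt.client as mqtt

    def read_temp():        # stand-in for the real sensor code
        return round(random.uniform(18.0, 25.0), 1)

    def read_humidity():
        return round(random.uniform(40.0, 60.0), 1)

    client = mqtt.Client(client_id="pi-livingroom")
    client.connect("broker.example.com", 1883)
    client.loop_start()     # background thread handles reconnects and acks

    while True:
        payload = json.dumps({"temp": read_temp(), "humidity": read_humidity()})
        # qos=1: the broker must acknowledge receipt, so delivery survives
        # flaky links between the Pi and the VPS.
        client.publish("sensors/livingroom", payload, qos=1)
        time.sleep(5)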

In terms of performance, is a mORMot REST connection better than a direct Oracle connection?

I work in a company on a huge application made with Delphi 10 Seattle (VCL), using a direct connection to an Oracle database server (2-tier).
One of the experts proposed migrating to a 3-tier architecture using mORMot (or another library/components).
One of his arguments is that the new 3-tier architecture will provide better performance (data-exchange time), because HTTPS is faster than a direct Oracle connection and JSON objects are much lighter (even without a caching policy on the REST side), and afterwards we can also build web clients.
I didn't understand:
Why is HTTPS faster than the Oracle connection protocol? (If that's true, why doesn't Oracle use HTTPS and JSON as its protocol for exchanging data?)
Isn't it a security problem if we keep all the functions and queries on the client side (even if we later build web clients)?
Cordially
In the context of any n-Tier framework, remote SOA access via a REST solution could have performance benefits.
The mORMot documentation tries to introduce all those concepts (SOA, REST, MVC, n-Tier, Clean Architecture, DDD...) and only then speaks about the performance impact. Don't take a documentation sentence in isolation. Consider the whole picture, and the use cases.
Why is https faster than oracle connection protocol?
HTTPS is not "faster" in se. What was meant is that it could be more efficient for a remote connection, especially over the Internet.
The Oracle Connection protocol was designed to run on a local network, whereas HTTP is a query/answer model.
The main PRO of the Oracle protocol is that it is more complete (complex?) than a simple query/answer model: it can dialogue with the Oracle server to cache statements and data, it can have real-time notifications, it can prepare the data in binary form ready to be encoded.
In terms of performance, the main CON of the Oracle protocol is that it requires more round-trips over the wire: it was designed to work on a local network, with low latency. Over an Internet connection it will be much slower and, for security reasons, is very likely to be encapsulated in a VPN - reducing the performance even more.
We should not speak of "performance" in an abstract way. There are several ways to skin a cat... If you want raw query performance, use another kind of database, like Redis.
But for business applications, the main "performance" point is perhaps more about scaling. And here, the Oracle protocol carries a bigger cost for its database connections. Maintaining a connection, and especially maintaining transactions, can be very demanding. You can maintain up to a few dozen or a few hundred simultaneous DB connections on a typical Oracle server, whereas it is very easy to have a REST server maintain thousands of simultaneous clients. And even if you currently expect only a few clients, how do you imagine your future? All serious applications expect a REST-like interface nowadays. And keep the database connection local, on the server side.
Isn't a security problem if we let all the functions and queries on the client side (even if we will do web clients)?
Security is another concern, and here a REST web client has known and proven strategies, with a proper audit methodology. You will never "leave all functions on the client" in a REST interface. The framework offers several ways of authentication and authorization - check the documentation, from URI signatures to JWT.
Opening an Oracle endpoint to the whole Internet is not a good idea, in terms of security and scalability. Even Oracle is offering dedicated solutions for proper networking.
Anyway, a framework like mORMot was designed to offer REST, SOA, ORM and web MVC in a single package, and performance was driven from the ground up - as a bonus. If you expect to design a RAD VCL/FMX application, go with a direct database connection, and be data-centric. If you want something more open and maintainable, consider SOA, and be service-centric. Today, I develop all my SOA solutions as microservices, with stand-alone databases, and mORMot as tooling, with huge performance - up to millions of data items per second.
Sounds like a vague story to me, and I'd have a hard time believing this unless I got well-documented proof. And even then, it's easy to compare apples and oranges when comparing the performance of such different architectures.
I don't think that https in general is faster than a direct connection, but it depends on a lot of variables.
Moreover, mORMot itself needs to connect to the database as well, for which it can use Direct Oracle Access (I assume the same one you are comparing it with) or OleDB (aka ADO), which is in general the same speed or slower than DOA, so there is no gain there. There is only the extra overhead of mORMot. See the software architecture of mORMot.
So, how can it be better?
If the middle tier uses connection pooling when the client cannot, for instance. In that case, having a middle tier for pooling connections can lead to better performance, because no new connection needs to be established on every request. This can save a lot of overhead.
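A sketch of what that looks like in a middle tier, using SQLAlchemy's built-in pool (the connection URL, pool sizes, and query are assumptions, not part of the answer):

    from sqlalchemy import create_engine, text

    # The pool keeps up to 10 connections open; requests borrow one instead
    # of paying the connect/login cost every time.
    engine = create_engine("oracle+oracledb://app:secret@dbhost/orcl",
                           pool_size=10, max_overflow=5)

    def handle_request(customer_id):
        with engine.connect() as conn:   # borrow a pooled connection
            row = conn.execute(text("SELECT name FROM customers WHERE id = :id"),
                               {"id": customer_id}).fetchone()
        return row                       # connection goes back to the pool here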
The same goes for caching. If you want to build a web site or web app, having a middle tier can greatly improve caching, if query results can be cached regardless of user. If you have client side caching, you cache it for that user/browser/session only, while in the middle tier, you can cache some data for all your visitors, which is a great benefit.
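A toy version of such middle-tier caching, shared across all visitors (in production you would more likely use Redis or memcached; the TTL value is arbitrary):

    import time

    _cache = {}  # query text -> (expiry timestamp, result)

    def cached_query(sql, run_query, ttl=60):
        now = time.time()
        hit = _cache.get(sql)
        if hit and hit[0] > now:    # a hit here serves *every* visitor,
            return hit[1]           # not just one user's session
        result = run_query(sql)     # miss: hit the database once
        _cache[sql] = (now + ttl, result)
        return result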
If there is a lot of processing needed before the database is involved. This can be faster on a high end middle tier. Then again, if you have lots of users, you might run out of hardware capacity (or cloud budget), and you might consider doing part of the legwork on the client.
So there are all kinds of benefits to 3-tier. whosrdaddy already named scalability, portability and redundancy. Performance can be one of the benefits as well, if you tick any of the points listed above, but in general it's not the main reason for going from 2-tier to N-tier.

What is Client/Server Technology in DBMS?

I really want to know what client/server means in a DBMS, from a hardware, software, and architectural point of view.
What's the difference between client/server technology and a file handling system?
A Client and a Server are two separate entities -- hardware and/or software. The Client asks a question; the Server sits around waiting for questions and provides answers.
The "separate entities" is to emphasize they are logically separate, even though you may put them on the same hardware.
In databases, the Client says "SELECT ..."; the Server says "here's the result set for that query". Or it might say "there are no database rows satisfying that query". Or the Client says "Please INSERT ..."; the Server says "OK, that's done". Note, in this latter example, the "result" is more of just an "acknowledgement".
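In code, the client role looks like this; a sketch assuming MySQL and the mysql-connector-python driver (host, credentials, and schema are made up):

    import mysql.connector

    conn = mysql.connector.connect(host="db.example.com", user="app",
                                   password="secret", database="shop")
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM products WHERE price < %s", (10,))
    print(cur.fetchall())     # the server's result set (possibly empty)

    cur.execute("INSERT INTO products (name, price) VALUES (%s, %s)", ("pen", 2))
    conn.commit()             # the server's "OK, that's done" acknowledgement
    conn.close()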
A database Client may, but does not have to be, on a separate physical computer as the database Server.
The terms "client" and "server" correspond to roles in a communication by two (or more) software components(like father and son in a family relationship).
Usually the software component that has the data and the logic for operating on that data is called the server, as it serves data and activity. A software component that connects to that server and communicates with it, and does not hold all the data and logic itself, is called the client, which is usually quite passive. Server and client are not bound to hardware: you can have an HTTP server on your working machine as well as a browser (an HTTP client). In real life you apply separation of concerns to hardware as well: you have big data stores with highly responsive hardware that you dedicate to the server software component, and lots of smaller working machines that have a client software component to connect to the servers.
This concept can be applied to most software systems, like databases (the server keeps the data, the client knows how to ask for data) and documents (HTTP servers have the documents, manage them, and can even contain logic components, like PHP scripts or applications, usually with browsers as clients). Server and client are not opposites. With an application server, like a SAP system, the server is usually also a client to other services. The application logic is usually separated from the database, so the application, being a server for the application clients, is (or has) a client to the database.
As the client/server view is a hierarchical division of software communication, you can also have components with equal rights. Some distributed architectures have equal components that communicate with each other, having the same abilities and logic, and possibly having all or part of the data.
In a client/server separation of software, both components can be on the same hardware, but they can also communicate via networks and run on different hardware. Usually the server has the hard-working part, so you can have a lot of lightweight clients that only send requests for the data and logic they currently need.
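The role division is easy to see in a toy TCP pair; a sketch in Python (host, port, and messages are arbitrary) - run one instance with "server" as the argument and another with none, on the same machine or on two:

    import socket, sys

    HOST, PORT = "127.0.0.1", 5000

    def server():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()               # sit around waiting for a question
            with conn:
                question = conn.recv(1024)
                conn.sendall(b"answer to: " + question)

    def client():
        with socket.create_connection((HOST, PORT)) as s:
            s.sendall(b"what time is it?")       # ask the question
            print(s.recv(1024).decode())

    server() if sys.argv[1:] == ["server"] else client()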
But all that is not a must. When a computer connects to another computer and copies all the logic (programs) and data from it in order to become another server, then during the copying of all that information the taking machine is the client and the giving machine is the server.
I'm not sure what you mean by "file handling systems". A file handling system is usually a software component that serves you data from a file system. Usually it is a local affair; the file system operates on the hard disks of one machine. But there are also distributed storage systems, like NAS (network-attached storage), where you also have client and server components connected through a network.
So to sum up, what the advantages of a client/server architecture are:
separation of concerns (this allows for specialisation)
independent scalability of the server and the clients
concentration of logic/data that works together (follows separation of concerns); this makes maintenance of the logic on the server much easier (imagine you had to update all browsers to make a change in your application)
Hardware Component of a Client/Server System
It mainly has three parts: the client, the network, and the database server.
Clients may be PCs, laptops, mobiles, or tablets.
The network is the cabling, communication lines, NICs, hubs, routers, LANs, and WANs.
The server is a computer that has enough processing speed, internal RAM, disk storage, etc.
Software Component of a Client/Server System
It has two parts: the client and the database server. Application software runs on the client side; it uses data stored on the server via SQL queries issued through data access APIs like JDBC and ADO.NET.
Architectural Component of a Client/Server System
It mainly uses two types: application servers and web servers. Business components are stored on the application server; web servers are used to host web applications and web services.

Cloud based web service for Web Applications

My web application uses PHP/MySQL on the server side to fetch and store data in a database. The DB size will increase with the user base and can become huge. The application has been built and run on a conventional server, i.e. no "cloud"-specific code has been written. (I have no experience with cloud systems; is running services on them any different from running on a normal server?)
My concerns:
1. If I buy space on Amazon Elastic Compute Cloud, can I directly port all my code to the new server, or do I have to use some APIs specific to that? Since it's pay as you go, it's highly suitable for such a requirement.
2. What are the other options for hosting a web service that would require large server space? How might apps like Whatsapp be doing the same?
Thanks.
1) The answer to the first question depends on the type of service you're buying. Cloud comes in many forms, from Infrastructure as a Service (which basically offers you hardware as a service on which you can run your software stack) to Software as a Service (e.g. Gmail, which lets you use applications (or APIs) hosted in the cloud).
The best alternative in your case, I think, is Platform as a Service (e.g. Heroku), which defines a set of technologies supported by the provider and how to use them.
In either case, how difficult it is depends on your app, the specification of the service, and the level of support offered, so you have to dig a little deeper (starting with a guide on how to deploy a similar app would be a good choice).
2) Startups and other medium-sized companies use cloud providers such as Amazon, Rackspace, etc., and when they reach a certain size they tend to build their own data centers (e.g. Zynga). There's a threshold beyond which it is better to manage your own infrastructure instead of buying services from others.

Web servers vs. application servers, open-source database security vs. enterprise database security

I am working on creating a spec for a startup to build a financial broker-check website. It involves storing information about financial advisers and users' payment details (so it obviously needs a lot of security). What kind of database is best suited for the application? Is MySQL or one of its open-source variants enough, or is it better to go with Oracle Enterprise, etc.? Also, any info about the usefulness of application servers over traditional web servers (cloud-based or normal) in this scenario, and about the preferred scripting language (PHP, Ruby, Python) for secure web applications, would be welcome.
Your choice of language, database, etc. has a relatively small impact on the security of your application. The developer's understanding of how to write secure code and the developer's understanding of the features provided by their tools is far more important. It is entirely possible to write a secure application on an open source LAMP stack. It is entirely possible to write a secure application on a completely closed source stack. It is also very easy to write insecure applications on any stack.
An enterprise database like Oracle will (depending on the edition, the options that are licensed, and the add-ons that are purchased) provide a host of security functions that may be useful. You can transparently encrypt the data at rest, you can encrypt the data when it flows over the network to the app server, you can prevent the DBA from viewing sensitive data, you can audit the actions of the DBA and other users, etc. But these sorts of things really only come into play when you've written a reasonably secure application to begin with. It does you little good to encrypt all the data if your application is vulnerable to SQL injection attacks and can be easily hacked to present all the decrypted data to the attacker, for example.
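To make the SQL injection point concrete, here is a sketch in Python (the driver, table, and column names are illustrative assumptions): the reliable fix is parameterized queries, regardless of which database you pick.

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="brokercheck")
    cur = conn.cursor()

    user_input = "x' OR '1'='1"   # hostile input from a web form

    # VULNERABLE: the input becomes part of the SQL text, so the attacker
    # rewrites the query and the database returns every adviser row:
    #   cur.execute("SELECT * FROM advisers WHERE name = '%s'" % user_input)

    # SAFE: the driver sends the value separately from the SQL text, so it
    # can never change the structure of the query.
    cur.execute("SELECT * FROM advisers WHERE name = %s", (user_input,))
    print(cur.fetchall())
    conn.close()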