I'm currently building a monitoring system with Zabbix, and I'm particularly trying to set up a Zabbix proxy.
As far as I know, if I set the Zabbix proxy to active, the Zabbix server does not need to initiate a connection to the proxy.
Now, what I wonder is if each Zabbix proxy must have its own static IP.
If the proxies are set to active, they do not need static IP addresses; all that is needed is that they can open a TCP connection to the server.
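For illustration, here is a minimal active-mode zabbix_proxy.conf sketch (the server address and proxy name are placeholders):

# zabbix_proxy.conf - active proxy (sketch, placeholder names)
ProxyMode=0                      # 0 = active: the proxy initiates all connections
Server=zabbix.example.com        # Zabbix server the proxy dials out to
Hostname=branch-proxy-01         # must match the proxy name registered on the server

Because the proxy dials out, it can sit behind NAT or on a dynamic IP, as long as it can reach the server.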
I have zabbix-server with Public IP on AWS (EC2 Amazon Linux 2).
I would like to use this server to monitor devices (VMs, printers, etc.) inside our company's private network. I have full access to the network/devices and to the AWS EC2 instance.
Should I install zabbix-proxy in the company network and then set up a connection between zabbix-server and an endpoint that has a static public IP with port forwarding? Or should I just use port forwarding?
A Zabbix proxy is the recommended method. It has several benefits over directly monitoring the endpoints:
Reduces the needed connectivity between the sites to a single TCP port (see the sketch after this list).
Reduces the amount of network traffic.
All traffic can be easily encrypted between both sites.
Proxy can operate in either active or passive mode:
Active - proxy connects to the Zabbix server (likely preferred in this case).
Passive - Zabbix server connects to the proxy.
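For example, with an active proxy the AWS side only needs a single inbound rule for the default Zabbix trapper port, 10051. A sketch with hypothetical identifiers (the security group ID and office egress IP are placeholders):

# allow only the office proxy to reach the Zabbix server's trapper port
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 10051 \
    --cidr 198.51.100.10/32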
I have some MySQL hosts in a private network and would like to use a reverse proxy server (e.g. nginx) so that a MySQL client can connect to a MySQL host through the reverse proxy server.
An example to better illustrate my question:
Suppose I have:
a MySQL server with IP yyy.1
a MySQL server with IP yyy.2
Both are on the network of a proxy server with IP XXX, and I associate the DNS names mysql-server1.com and mysql-server2.com with XXX.
My goal is to connect to MySQL server yyy.1 when I use the MySQL client to connect to XXX by calling mysql-server1.com on port 3306, and similarly to yyy.2 when trying mysql-server2.com on port 3306.
The problem with nginx is that I can't differentiate TCP requests by server name, so on the XXX server I would have to assign one port for each MySQL server; but that means I would also have to change the port in the MySQL client settings every time, and I don't want that!
Is there a proxy server that can accomplish this?
Could I use iptables to route requests for mysql-server1.com:3306 to localhost:[some port], and then use [some port] in the proxy settings to forward the requests to server yyy.1?
This is impossible.
In the MySQL Client/Server protocol, the client never identifies the hostname to which it is attempting to connect. Unlike in some other protocols, such as HTTP (with the Host header), the original name the client used to resolve an IP address from DNS is not preserved. TLS SNI is not available either, because TLS negotiation on a MySQL connection does not begin until the client reads the server capability flags to discover whether the server supports TLS, at which point the client asks to switch the connection to TLS... and this, of course, is after the connection is already established.
In the MySQL Client/Server protocol, the server always talks first.
Your only options are for the proxy machine to listen on multiple IP addresses, with a DNS hostname pointing to each IP, and to use the address the client connected to in order to decide which back-end server to use.
Or, each proxy instance can listen on a separate port.
The protocol design prevents name-based virtual hosting.
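If you go with the multiple-IP option, the nginx stream module can do the TCP forwarding. A minimal sketch, assuming the proxy machine holds two public addresses (203.0.113.1 and 203.0.113.2 are placeholders) and each DNS name points to one of them:

stream {
    server {
        listen 203.0.113.1:3306;   # address that mysql-server1.com resolves to
        proxy_pass yyy.1:3306;     # back-end MySQL server 1
    }
    server {
        listen 203.0.113.2:3306;   # address that mysql-server2.com resolves to
        proxy_pass yyy.2:3306;     # back-end MySQL server 2
    }
}

Both names stay on port 3306 for the client; the selection happens purely by destination IP.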
So for a project that I am working on at my office, I have a .NET application that will be storing and retrieving data to/from an AWS RDS MySQL server that I have set up. The problem I have run into is that port 3306 is not open on the work network.
I have reached out to my networking department to see what they can do about opening this port. They asked me if there was a way to set a static IP on this AWS RDS instance. They only want to open the port based on the server's IP address rather than open port 3306 completely, for security reasons they say.

After some research, I have seen that it is possible to set an Elastic IP (similar to a static IP?) on an AWS EC2 instance, but I am curious about setting a static IP on an AWS RDS instance. I did not see anywhere on the AWS dashboard to set a static IP for my RDS instance. The reason behind the static IP is so that when the IP associated with the endpoint DNS that they provide changes, they won't need to adjust the firewall settings to accommodate the change.
Is it possible to have the port open for only this specific DNS endpoint that AWS provides? If not, is it possible to set a static IP on the RDS instance?
What sort of security concerns are there if they were to completely open port 3306?
Thank you!
You don't need a fixed IP for an RDS instance. When you create an RDS instance, AWS assigns it an endpoint URL. This URL is fixed: even if the underlying IP changes, the URL will still route to the correct instance.
You can ask your IT team to create a firewall rule on port 3306 for the RDS instance URL, and it will work fine.
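As a quick check, you can resolve the endpoint yourself and see that it is a DNS name whose underlying address may change (the endpoint below is a hypothetical example; use the one shown in your RDS console):

# the name stays stable even when the IP behind it changes
dig +short mydb.abc123example.us-east-1.rds.amazonaws.com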
Regarding security, the idea is to block inbound connections to your site on port 3306. This stops anyone from connecting in to your internal instances, while you can still connect out to any host on the internet over that port. There is no need to block all outbound connections. But...
It is an information security best practice to apply the principle of least privilege. This means: only allow what is specifically needed. If they open the port for all hosts, someone may discover and exploit a new vulnerability in the future, because no one on your IT team will remember why the port was opened for all hosts. So they keep open only what is needed.
By default, when you set bind-address so that MySQL listens on an outside interface, the communication between the MySQL client and server is not secured, which means anyone who can perform a MitM attack can view every transaction made.
There are options to protect against this type of attack (SSH tunneling or enabling SSL in MySQL), but from what I understand, Amazon RDS doesn't implement any SSL security by default.
So I'm wondering: when you create an RDS instance, is it like installing MySQL on a server and opening port 3306, or am I missing something?
A few points. Firstly, AWS RDS for MySQL does support SSL. This is discussed here:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.SSLSupport
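For example, with a MySQL 5.7+ client you can require a verified TLS connection to the instance (the endpoint and CA bundle path are placeholders; download the current RDS CA bundle linked from the docs above):

mysql -h mydb.abc123example.us-east-1.rds.amazonaws.com -u admin -p \
      --ssl-ca=/path/to/rds-ca-bundle.pem --ssl-mode=VERIFY_CA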
Second, the usual way to arrange servers in an AWS VPC is to have "private" and "public" subnets. The private subnets route to other private hosts and perhaps to other hosts in the same VPC, but they have no Elastic IPs and no direct access to the Internet Gateway. It is usual to put databases on private subnets so that their ports are not exposed.
There is a nice diagram on this page showing this concept
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html
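Along the same lines, the RDS instance itself can be kept off the public internet. A sketch with a hypothetical instance identifier:

# keep the instance reachable only from inside the VPC
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --no-publicly-accessible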
Lastly, AWS RDS exists within the philosophy of the shared responsibility model.
This tries to make it clear what security the AWS services provide and what is supposed to be the responsibility of the customer.
If you're creating an instance, you also have to allow port 3306 to be open at your endpoints. This means you also have to configure your security settings to specify which IPs to allow for this connection. Regarding SSL security or SSH, as a good practice you should rely on SSH keys with a passphrase.
I am looking to deploy an nginx/DNS server on a VPS proxy that maps to the real back-end in a different geographical location. The back-end runs Apache, MySQL, Dovecot, and Postfix. It is a paid mail service: users are entered through Apache and PHP into MySQL, and when users set up IMAP, Dovecot/Postfix pulls them from MySQL and delivers mail or handles outbound SMTP.
I read that in the nginx.conf file I can declare the mail hostname on the proxy like so:
mail {
    server_name mail.example.com;
    ...
}
Is this mail.example.com the actual MX for the example.com mail exchanger listed in DNS? Here is where that came from:
"As you can see, we declared the name of this server at the top of the mail context.
This is because we want each of our mail services to be addressed as mail.example.
com. Even if the actual hostname of the machine on which NGINX runs is different,
and each mail server has its own hostname, we want this proxy to be a single point
of reference for our users. This hostname will in turn be used wherever NGINX
needs to present its own name, for example, in the initial SMTP server greeting."
So, from my understanding, the physical hostname of the proxy should be something other than mail.example.com. So in DNS on the proxy, can I define that as anyhost.example.com? The proxy also proxies back to my Apache on the back-end.
Finally, on the back-end, how do I set up my DNS for that? What hostname do I choose for the actual box running Apache, MySQL, Dovecot, and Postfix? It's all on one box. I understand that at the registrar I point to two nameservers; these should be the two proxies, so that running a dig would only pull up the proxies and the MX, which should be "known" to be on the proxy.
In your case, where all of the services are in one box including the proxy, you can make Apache, MySQL, and the other services listen only on localhost / 127.0.0.1. Then in nginx you define upstreams pointing at them, for example:

upstream apache_backend { server 127.0.0.1:80; }      # http context
upstream mysql_backend  { server 127.0.0.1:3306; }    # stream context (TCP)

so nginx serves the front-end requests and forwards them to the designated services.
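Since the question is about the mail side, here is a minimal sketch of the nginx mail proxy block (the auth_http endpoint is a placeholder; nginx's mail module asks that HTTP service which back-end to use for each login):

mail {
    server_name  mail.example.com;        # name nginx presents, e.g. in the SMTP greeting
    auth_http    127.0.0.1:8080/auth;     # placeholder auth service that returns the back-end

    server {
        listen   143;
        protocol imap;
    }
    server {
        listen   25;
        protocol smtp;
    }
}

The back-end Dovecot/Postfix box never needs to be exposed directly; clients only ever see mail.example.com on the proxy.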