HAProxy ACL to filter TCP requests based on client IP taken from PROXY protocol v2 headers

We have an AWS setup with an NLB so that we can attach it to an EIP and thus have a static address for customer whitelisting.
The NLB sends PROXY protocol v2 headers to HAProxy, which passes them on to the backend servers.
The app picks the client IP up from these headers and uses it.
The issue I am trying to figure out: since HAProxy sees only the internal IP of the NLB on the connection and gets the public client IP only through the PROXY protocol header,
how can I parse the client IP from the PROXY header and filter on it with a whitelisting ACL in HAProxy?
I have a listen configuration like this
listen backoffice
bind 10.2.3.4:443 accept-proxy
mode tcp
default-server inter 3s fall 3 rise 3 error-limit 1 on-error mark-down
option tcplog
acl network_allowed src 5.6.7.8 1.4.6.8 4.6.7.8
tcp-request connection reject if !network_allowed
server backoffice.prod.local 10.1.2.3:443 check-send-proxy send-proxy-v2-ssl-cn port 443
THIS ^^ breaks HAProxy for this backend - it works only if I comment out the tcp-request line and reload.
To me this looks like quite a useful use case, but so far no online research has turned up an HAProxy configuration that does just that.
I have less than two days to figure this out, so let's hope the community can lend a hand in short order.

So I ended up figuring it out for my case :)
The working config is this:
listen backoffice
bind 10.2.3.4:443 accept-proxy
mode tcp
default-server inter 3s fall 3 rise 3 error-limit 1 on-error mark-down
option tcplog
acl network_allowed src 5.6.7.8 1.4.6.8 4.6.7.8
tcp-request content reject if !network_allowed
server backoffice.prod.local 10.1.2.3:443 check port 443
I removed the part where PROXY headers are passed to the backend, as it did not need them.
I only needed the NLB to pass the PROXY header to HAProxy so that we can reliably filter on the source/client IP.
The reason the original config broke: per the HAProxy docs on accept-proxy, tcp-request connection rules are evaluated before the PROXY protocol header is parsed and therefore only see the real connection address (the NLB's internal IP), so every connection matched !network_allowed and was rejected. tcp-request content rules run after the header is parsed, so there src holds the real client IP.
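For the record, if the backend app had still needed the client IP, the content-level reject should combine fine with forwarding PROXY protocol v2 to the backend. A sketch (untested, reusing the addresses from the config above):
listen backoffice
bind 10.2.3.4:443 accept-proxy
mode tcp
option tcplog
acl network_allowed src 5.6.7.8 1.4.6.8 4.6.7.8
# content rules run after the PROXY header is parsed, so src is the client IP here
tcp-request content reject if !network_allowed
server backoffice.prod.local 10.1.2.3:443 send-proxy-v2 check check-send-proxy port 443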

Related

How to make ELB pass protocol to node.js process (Elastic Beanstalk)

I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with HTTP connections. But I still need to know if the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the proxy protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers carrying the client information; in the case of TCP, however, AWS ELB simply passes the bytes through from the client without any modifications. This causes the back-end server to lose the client connection information, as is happening in your case.
To enable the proxy protocol for your ELB you will have to do it via the API; there is currently no way to do it via the console.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this, I don't want to paste the same here as that information might change over time.
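For reference, the CLI steps from that guide boil down to something like this (the load balancer name and instance port are placeholders for your own values):
aws elb create-load-balancer-policy \
  --load-balancer-name my-loadbalancer \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name my-loadbalancer \
  --instance-port 443 \
  --policy-names EnableProxyProtocol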
EDIT:
As it turns out, Amazon implements only version 1 of the proxy protocol, which does not carry SSL information. It does, however, give the port number the client requested, so you can adopt a convention like "if the request came in on port 443, it was SSL". I don't like it, as it is indirect, requires hardcoding, and needs coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting version 2 of the proxy protocol, which does carry SSL info.
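To illustrate that convention: a version 1 proxy protocol header is a single text line prepended to the connection, and its last field is the destination port the client connected to (addresses below are examples):
PROXY TCP4 198.51.100.1 203.0.113.7 49152 443
Seeing 443 as the destination port here is what would be treated as "this connection was SSL".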

Google Cloud HTTP Load Balancer can't connect to my instance

I have created an HTTP load balancer to basically redirect from port 80 to port 8080. The server on my instance is running on port 8080.
I can connect to the server directly, but the LB is not able to connect to the instance: accessing the LB's IP directly fails and the health check always fails. The instance group the LB is using consists of just that single instance.
I read Google Compute Engine health checks failing
and the google-address-manager is running. However, when running ip route list table local there is no route for my LB. The user in the above question is using network load balancing and not HTTP load balancing (as I am), so I don't know if that is related?
Or perhaps it's related to a firewall? I have added my LB's IP address to a firewall rule that allows tcp:8080.
Does anybody have any idea how I can fix this? I am not experienced with Debian nor GCP.
Should I just try to run the route add command referenced in the above question? If so, how come the google-address-manager is not adding the route?
Thank you in advance!
You need to make sure that the port mapping on your instance group is set to the correct port, 8080 in your case.
First, edit your instance group and change the named port to 8080.
Then, navigate to your HTTP backend's settings and change the port name to the one you've configured in your instance group.
Finally, make sure that your firewall rules allow access on port 8080 from 0.0.0.0/0, or at least from the IP range of the HTTP load balancer (130.211.0.0/22).
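If you prefer the CLI, a firewall rule along these lines should do it (the rule name is a placeholder; the network is assumed to be default):
gcloud compute firewall-rules create allow-lb-8080 \
  --network default \
  --allow tcp:8080 \
  --source-ranges 130.211.0.0/22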
I had the same issue and fixed it by adding a firewall rule for the health checker (which is not the same IP as your LB!). See https://cloud.google.com/compute/docs/load-balancing/health-checks?hl=en_US#http_and_https_load_balancing for instructions.
In my case, I did not configure the HTTP health check correctly.
I used "/" as the path, but on my backend "/" redirects (HTTP 301) to a login page, which itself responds with HTTP 200.
The health check does not follow redirects, and every HTTP response code != 200 is considered unhealthy (from Debugging Health Checks in Load Balancing on Google Compute Engine).
So, I changed my path to "/login", this fixed my issue.
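For reference, the same path change can be made from the CLI with something like this (the health check name is a placeholder):
gcloud compute http-health-checks update my-http-check --request-path /login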

HTTPS load balancer in Google Container Engine

I'm trying to set up an HTTPS load balancer for GKE using the HTTPS L7 load balancer, but for some reason it is not working. Even the plain HTTP load balancer from the HTTP Load Balancing walkthrough fails. The forwarding rule's IP address is created and I'm able to ping and telnet to port 80. But when I make a request via curl it gives me an error.
<title>502 Server Error</title></head><body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.
<p>Please try again in 30 seconds.</h2><h2></h2></body></html>
All the steps went fine and I created a firewall rule (without any tags) for ${NODE_PORT}, but it didn't work.
Has anyone encountered this problem?
I had the same problem with my application: we did not have an endpoint returning success, so the health checks were always failing.
The HTTP/HTTPS load balancer will not send requests to the cluster nodes if the health checks are not passing, so my solution was to create an endpoint that always returns 200 OK; as soon as the health checks were passing, the LB started working.
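If nginx fronts the app on the nodes, such an always-200 endpoint can be a tiny location block (the /healthz path is an assumption; use whatever path the health check is configured with):
location = /healthz {
    # always answer the health checker with a 200
    return 200 'ok';
}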
I just walked through the example and (prior to opening up a firewall for $NODE_PORT) saw the same 502 error.
If you look in the cloud console at
https://console.developers.google.com/project/<project>/loadbalancing/http/backendServices/details/web-map-backend-service
you should see that the backend shows 0 out of ${num_nodes_in_cluster} as healthy.
For your firewall definition, make sure that you set the source filter to 130.211.0.0/22 to allow traffic from the load balancing service, and set the allowed protocols and ports to tcp:$NODE_PORT.
I use GKE, and I just walked through the example and it works fine, but when I route to my own service it does not work (my service is a REST API service).
I found that the biggest difference between my service and the example is that the example serves a root endpoint ("/"), and mine did not.
So I solved the problem this way: I added a root endpoint ("/") to my service that just returns success (an empty endpoint that returns nothing but a 200), re-created the ingress, waited several minutes, and then the ingress worked!
I think this problem is caused by the health checker: UNHEALTHY instances do not receive new connections.
Here is a link about health checks: https://cloud.google.com/compute/docs/load-balancing/health-checks
The issue resolved after a few minutes (like 5-10 minutes) in my case.
If using an ingress, there may be additional information in the events relating to the ingress. To view these:
kubectl describe ingress example
In my case, the load balancer was returning this error because there was no web server running on my instances and instance groups to handle the network requests.
I installed nginx on all the machines and then it started working.
From then on, I made a point of adding nginx to my startup script when creating the VM/instance.
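Something along these lines in the instance startup script would cover it (assumes a Debian-family image, as in these questions):
#! /bin/bash
# install and start nginx so the LB health check gets an answer
apt-get update
apt-get install -y nginx
systemctl enable --now nginx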
If you are using nginx behind your load balancer then it's important that the default_server returns 200 or some other 2xx. That means that if you, for example, have a rewrite rule that returns 301, the health check will fail.
The solution is to set default_server on your main server:
server {
# Rewrite calls to www
listen 443;
server_name example.com;
return 301 https://www.example.com$request_uri;
}
server {
listen 443 default_server;
server_name www.example.com;
...
Adding a firewall rule with source 130.211.0.0/22 (the load balancer range on GCP) for tcp:$NODE_PORT fixed this for me.
I created an endpoint for all requests that contain 'GoogleHC' in the user-agent.
So:
server{
server_name example.com www.example.com;
if ($http_user_agent ~* 'GoogleHC.*') {
return 200 'isaac newton';
}
}

Create a domain name pointing to an IP on a port other than 80

I would like to use a domain name to point to a web page on the local server's IP address. The problem is that the page is served on port 8088 rather than 80, because the latter is already used by another web page. The domain company told me they cannot do it because a domain can only point to an IP address served on port 80. So now I am at a deadlock. What alternatives do I have, and how can I make a domain point to IP:8088?
Thanks
The domain company that you talked to may have done a poor job of explaining how domains work. Domain names don't refer to specific ports. They just refer to IP addresses. The client can look up a hostname to get the IP address which the client should connect to, but the client has to figure out the port without the help of DNS. Port 80 is just the default port for HTTP service.
You can certainly run a web server on port 8088 if you like. The port number would have to appear in the URL, e.g. http://somehost.example.com:8088/some/page. Clients would parse this and know to connect to port 8088 instead of the default port 80.
If you don't want URLs to contain the port number, then requests are going to go to the default port 80, and you have no choice but to make the web server running on port 80 handle them. HTTP/1.1 requests include the hostname which the client wants to contact, and modern web server programs are normally capable of serving completely different sets of content based on the hostname in the request. There are a few ways to do what you need:
Just configure the web server for port 80 to handle both sites. This will depend on what web server software you're using. Apache for example calls these "virtual hosts", and here is a set of examples. This is a typical solution, and some people run hundreds of sites on the same server this way.
Run your two web servers as you planned. Set up the server for port 80 to be a reverse proxy for the second website (see the sketch after this list). The server would continue to serve content for the site it handles now. When it receives a request for the second site, it would relay the request to the server running on port 8088 and relay that server's response back to the client.
Move the existing server for port 80 to a different port. Run a pure reverse proxy server on port 80, relaying requests for both web sites to their respective web servers.
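For option 2, a minimal nginx sketch of the port-80 reverse proxy (the hostname is a placeholder and the second site is assumed to listen on 127.0.0.1:8088):
server {
    listen 80;
    server_name second-site.example.com;

    location / {
        # relay requests for the second site to the server on 8088
        proxy_pass http://127.0.0.1:8088;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}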
You might be better off taking further questions to https://webmasters.stackexchange.com/ or https://serverfault.com/.
You can use a proxy to reroute the given domain to IP:PORT. To accomplish this you could either spin up an nginx server and configure it as your reverse proxy, or use this project, which does exactly what you want with almost no config: https://github.com/cristianoliveira/ergo
If you run Apache on port 80, which is the most common case, then the easiest way to solve this issue is a VirtualHost that uses ProxyPass (this requires mod_proxy and mod_proxy_http to be enabled):
<VirtualHost *:80>
    ServerName sub.domain.com
    SSLProxyEngine On
    ProxyPass / https://ip-or-domain.com:8088/
    ProxyPassReverse / https://ip-or-domain.com:8088/
</VirtualHost>

Nginx reverse proxy DNS configuration

I am looking to deploy an nginx/DNS server on a VPS that acts as a proxy mapping to the real back-end in a different geographical location. The back-end runs Apache, MySQL, Dovecot, and Postfix; it is a pay-for mail server. Users are created through Apache via PHP into MySQL, and when users set up IMAP, Dovecot/Postfix pulls them from MySQL and delivers mail or handles outbound SMTP.
I read that in the nginx.conf file I can declare the mail hostname on the proxy like so:
mail {
server_name mail.example.com;
...
}
Is this mail.example.com the actual MX for the example.com mail exchanger listed in DNS? Here is where that came from:
"As you can see, we declared the name of this server at the top of the mail context.
This is because we want each of our mail services to be addressed as mail.example.
com. Even if the actual hostname of the machine on which NGINX runs is different,
and each mail server has its own hostname, we want this proxy to be a single point
of reference for our users. This hostname will in turn be used wherever NGINX
needs to present its own name, for example, in the initial SMTP server greeting."
So, from my understanding, the physical hostname of the proxy should be something other than mail.example.com. In DNS on the proxy, can I define that as anyhost.example.com? The proxy also proxies back to my Apache on the back-end.
Finally, on the back-end, how do I set up DNS? What hostname do I choose for the actual box running Apache, MySQL, Dovecot, and Postfix? It's all on one box. I understand that at the registrar I point two nameservers; these should be the two proxies, so that running a dig would only pull up the proxies and the MX, which would be "known" to be on the proxy.
In your case, where all of the services are in one box including the proxy, you can make Apache, MySQL, and the other services accessible only from localhost / 127.0.0.1. Then in nginx you define upstreams pointing at them, for example:
upstream apache { server 127.0.0.1:80; }
upstream mysql { server 127.0.0.1:3306; }
nginx then serves the frontend requests and forwards them to the designated services. (Note that proxying a raw TCP service like MySQL needs nginx's stream module rather than the http context.)
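For the mail side, the mail context from the question would be fleshed out roughly like this (the auth_http address is an assumption; nginx's mail proxy requires an auth_http service that authenticates users and tells nginx which back-end to route each connection to):
mail {
    server_name mail.example.com;
    # hypothetical auth endpoint; it returns the back-end host/port per user
    auth_http 127.0.0.1:9000/auth;

    # SMTP front end, proxied to Postfix on the back-end
    server {
        listen 25;
        protocol smtp;
    }

    # IMAP front end, proxied to Dovecot on the back-end
    server {
        listen 143;
        protocol imap;
    }
}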