Getting Origin IP From Load Balancer - google-compute-engine

Is there a way to get the origin IP of the user from HTTP load balancing with GCloud? We are currently using just Network Load Balancing and need to move to a cross-region balancer, but we need the user's IP for compliance and logging.
Does it pass in a header or something along those lines?
Thanks ~Z

The documentation (https://cloud.google.com/compute/docs/load-balancing/http/) says it's the first IP address of the X-Forwarded-For header.
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP>

If you are sure that you do not run any other proxy (that appends additional IPs to X-Forwarded-For) behind Google Cloud Load Balancing, you can take the second-to-last IP in X-Forwarded-For as the immediate client IP. Even if you do have some proxies, as long as you know the exact number of additional IPs they will append, you can take those into account as well.
From https://cloud.google.com/compute/docs/load-balancing/http/#components:
X-Forwarded-For: <unverified IP(s)>, <immediate client IP>, <global forwarding rule external IP>, <proxies running in GCP> (requests only)
Only the <immediate client IP> and <global forwarding rule external IP> entries are provided by the load balancer. All other entries in the list are passed along without verification.
IPs that come before the immediate client IP could be spoofed or could come from client-side proxies. Even if the client spoofs the X-Forwarded-For header, the load balancer still appends the actual IP that hit the load balancer.
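As an illustration, here is a minimal PHP sketch of the second-to-last approach described above, assuming the app sits directly behind the Google HTTP(S) load balancer with no extra proxies in between:
<?php
// Minimal sketch, assuming no additional proxies between the Google HTTP(S)
// load balancer and this backend. The header as delivered here looks like:
// <unverified IP(s)>, <immediate client IP>, <global forwarding rule external IP>
$entries  = array_map('trim', explode(',', $_SERVER['HTTP_X_FORWARDED_FOR'] ?? ''));
$clientIp = count($entries) >= 2
    ? $entries[count($entries) - 2]        // second-to-last = immediate client IP
    : ($_SERVER['REMOTE_ADDR'] ?? '');     // fallback if the header is missing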

Ok, so after digging through headers and other things, I found the following header that carries the origin IP of the user.
$_SERVER['HTTP_X_FORWARDED_FOR']
You will need to split it on ',' and take the first part of the string. This is the user IP that the Google Cloud HTTP Load Balancer passes along.
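Roughly, in PHP (keeping in mind the caveat from the previous answer that the first entry is client-supplied and can be spoofed):
<?php
// Sketch of the split-and-take-first approach described in this answer.
$parts  = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR'] ?? '');
$userIp = trim($parts[0]);   // first entry; spoofable, see the previous answer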

Based on the HTTP_X_FORWARDED_FOR header, here is a nice Nginx rule to split the IP chain:
set $realip $remote_addr;
if ($http_x_forwarded_for ~ "^(\d+\.\d+\.\d+\.\d+)") {
    set $realip $1;
}
fastcgi_param REMOTE_ADDR $realip;
Paste it after the include fastcgi_params; directive for it to take effect.
If you're using Cloudflare, you can get the original client IP from HTTP_CF_CONNECTING_IP.

I found this article
https://geko.cloud/forward-real-ip-to-a-nginx-behind-a-gcp-load-balancer/
You can whitelist/ignore IPs that are known to GCP, such as the static IP used when registering the load balancer:
set_real_ip_from 36.129.221.25/32;  # LB public IP address
set_real_ip_from 130.211.0.0/22;    # IP range used by GCP load balancers
set_real_ip_from 35.191.0.0/16;     # IP range used by GCP load balancers
real_ip_header X-Forwarded-For;
real_ip_recursive on;
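If you would rather do the same thing at the application level instead of in Nginx, here is a hedged PHP sketch that mimics real_ip_recursive: it walks X-Forwarded-For from right to left and skips entries inside the trusted ranges. The 36.129.221.25/32 entry is the example LB IP from the config above, so replace it with your own; this handles IPv4 only and is an illustration, not a drop-in implementation.
<?php
// Sketch only: application-level equivalent of real_ip_recursive (IPv4 only).
// Walk X-Forwarded-For right to left and skip trusted proxy/LB addresses.
function ipInCidr(string $ip, string $cidr): bool {
    [$subnet, $bits] = explode('/', $cidr);
    $mask = -1 << (32 - (int)$bits);
    return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
}

$trusted = [
    '36.129.221.25/32',  // example LB public IP from the config above
    '130.211.0.0/22',    // GCP load balancer range
    '35.191.0.0/16',     // GCP load balancer range
];

$entries  = array_map('trim', explode(',', $_SERVER['HTTP_X_FORWARDED_FOR'] ?? ''));
$clientIp = $_SERVER['REMOTE_ADDR'] ?? '';

for ($i = count($entries) - 1; $i >= 0; $i--) {
    $trustedEntry = false;
    foreach ($trusted as $cidr) {
        if ($entries[$i] !== '' && ipInCidr($entries[$i], $cidr)) {
            $trustedEntry = true;
            break;
        }
    }
    if (!$trustedEntry) {
        $clientIp = $entries[$i];   // first non-trusted entry from the right
        break;
    }
}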

Related

Ec2 public DNS and address net::ERR_INSECURE_RESPONSE in browser

I am doing code-based load balancing, i.e. on the first request to the main server, it returns an address to which the browser will open a persistent connection using a (wss) websocket. But, for compatibility with my mobile app, I'm returning the public DNS of the AWS instance, e.g. ec2-35-154-101-63.ap-south-1.compute.amazonaws.com, which works fine in the mobile app. The browser, however, is refusing the connection because the address of the websocket does not match the parent domain. Are there any options to fix this other than using a websocket address from the same domain?
Edit: I had no choice other than to return a subdomain address for the websocket connection.
You are connecting to a URL that does not match the name in the SSL certificate.
You have two choices, either map the domain name to the instance or (not recommended) issue a certificate that matches the URL that you are using (ec2-35-154-101-63.ap-south-1.compute.amazonaws.com)
The correct approach is to specify an A record (IP address) in your DNS server for the EC2 instance so that requests for your_domain_name resolve correctly, and then USE that URL and not the EC2 instance DNS name. You cannot use the AWS URL for https as the SSL certificate was not issued to that identity.

Access external client IP from behind Google Compute Engine network load balancer

I am running a Ruby on Rails app (using Passenger in Nginx mode) on Google Container Engine. These pods are sitting behind a GCE network load balancer. My question is how to access the external Client IP from inside the Rails app.
The Github issue here seems to present a solution, but I ran the suggested:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  kubectl annotate node $node \
    net.beta.kubernetes.io/proxy-mode=iptables;
  gcloud compute ssh --zone=us-central1-b $node \
    --command="sudo /etc/init.d/kube-proxy restart";
done
but I am still getting a REMOTE_ADDR header of 10.140.0.1.
Any ideas on how I could get access to the real client IP (for geolocation purposes)?
Edit: To be clear, I am aware of the ways of accessing the client IP from inside Rails; however, all of these solutions are getting me the internal Kubernetes IP. I believe the GCE network load balancer is not configured (or perhaps unable) to send the real client IP.
A Googler's answer to another version of my question verifies that what I am trying to do is not currently possible with the Google Container Engine Network Load Balancer.
EDIT (May 31, 2017): as of Kubernetes v1.5 and up this is possible on GKE with the beta annotation service.beta.kubernetes.io/external-traffic. This was answered on SO here. Please note when I added the annotation the health checks were not created on the existing nodes. Recreating the LB and restarting the nodes solved the issue.
It seems as though this is not a Rails problem at all, but one of GCE. You can try the first part of
request.env["HTTP_X_FORWARDED_FOR"]
Explanation
Getting Origin IP From Load Balancer advises that https://cloud.google.com/compute/docs/load-balancing/http/ has the text
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP> (requests only)
Can be a comma-separated list of IP addresses depending on the X-Forwarded-For entries appended by the intermediaries the client is traveling through. The first element in the <client IP(s)> section shows the origin address.
X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
Parameters for Stackdriver Trace.

Google Cloud HTTP Load Balancer can't connect to my instance

I have created an HTTP load balancer to basically redirect from port 80 to port 8080. The server on my instance is running on port 8080.
I can connect to the server directly, but the LB is not able to connect to the instance: both accessing the LB's IP directly and the health check always fail. The instance group the LB is using consists of just that single instance.
I read Google Compute Engine health checks failing
and the google-address-manager is running. However, when running ip route list table local there is no route for my LB. The user in the above question is using Network Load Balancing and not HTTP load balancing (as I am), so I don't know if that is related?
Or perhaps it's related to a firewall? I have added my LB's IP address to a firewall rule that allows tcp:8080.
Does anybody have any idea how I can fix this? I am not experienced with Debian or GCP.
Should I just try to run the route add command referenced in the above question? If so, why is the google-address-manager not adding the route?
Thank you in advance!
You need to make sure that the port mapping on your instance group is set to the correct port, 8080 in your case.
First, edit your instance group and change the port name and port number to 8080.
Then, navigate to your http backend's settings and change the default port to the port name you've configured in your instance group.
Finally, make sure that your firewall rules allow access on port 8080 from 0.0.0.0/0, or at least from the HTTP load balancer's address range (130.211.0.0/22).
I had the same issue and fixed it by adding a firewall rule for the health checker (which is not the same IP as your LB!). See https://cloud.google.com/compute/docs/load-balancing/health-checks?hl=en_US#http_and_https_load_balancing for instructions.
In my case, I did not configure the HTTP health check correctly.
I used "/" as the path, but on my backend "/" redirects (HTTP 301) to a login page, which itself responds with an HTTP 200.
The health check does not follow a redirect, every HTTP response code != 200 is assumed unhealthy (from Debugging Health Checks in Load Balancing on Google Compute Engine).
So, I changed my path to "/login", this fixed my issue.
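Another option, if you can't point the health check at a path that already returns 200, is a dedicated status endpoint that answers 200 directly. A minimal sketch, assuming a PHP backend; the healthz.php name is just an example:
<?php
// healthz.php - hypothetical dedicated health-check endpoint.
// The checker treats anything other than HTTP 200 as unhealthy and does not
// follow redirects, so this answers 200 directly with a tiny body.
http_response_code(200);
header('Content-Type: text/plain');
echo 'ok';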

Preventing HTTP access to the servers of Amazon's Elastic Beanstalk

We have a system running on Amazon's Beanstalk.
We would like to limit access to the server to HTTPS only.
When blocking HTTP in the environment settings, it prevents access through the Beanstalk DNS.
However, if someone knows the public IP (or name) of any of the servers, they can access them directly through HTTP. It seems that the LB forwards the requests to port 80, so we cannot change the security group to remove port 80.
Is there a simple way, to limit HTTP access to be only from the LB?
Thanks
You should be able to do this through EC2 Security Groups, which is an Elastic Beanstalk environment property.
By default this allows connections to port 80 from any IP address, but you could remove that rule or replace it with your own IP address (for testing purposes).
Failing that, you could reroute all HTTP traffic to HTTPS at the application level, or simply test the CGI property server_port_secure and refuse to answer.
Yes, you need http/80 to be open for the health check to work. The option for you is to redirect all other requests (except the health check URL) to https. This way, even though the port is open, you don't serve any data in an insecure way.
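As a rough sketch of that application-level redirect, assuming a PHP app behind the load balancer and using the X-Forwarded-Proto header the ELB sets (the /health path is a placeholder for whatever URL your health check hits):
<?php
// Redirect plain-HTTP requests to HTTPS, using the X-Forwarded-Proto header
// set by the load balancer, while leaving the health-check path untouched.
$proto = $_SERVER['HTTP_X_FORWARDED_PROTO'] ?? 'http';
$path  = $_SERVER['REQUEST_URI'] ?? '/';

if ($proto !== 'https' && $path !== '/health') {   // '/health' is a placeholder
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $path, true, 301);
    exit;
}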
You have at least two options:
1 - Set the Security Group policy to allow access on port 80 from the Load Balancer only. Important: do not use the Load Balancer's IP in the instances' security group; use the Load Balancer's security group ID instead.
2 - Remove the public IPs from the instances. You should be good if all your EC2 instances have private IPs and the ELB has a public IP.

dyndns equivalent for ports? (so that port changes don't require config file changes)

DynDNS et al. are great for not having to put IP addresses in config files... I put the DynDNS domain in the config, and if I ever want to change the server location I just update it in one place and the config stays the same. But what if I want to change the port number that's used? Is there an equivalent for ports, so that I can also get what port to connect to from some service, just like I get the IP from DynDNS? Or what's another solution (besides not changing the ports)?
DynDNS, and DNS in general, has the main purpose of saving you from having to remember a host by its IP address. The DynDNS part mostly exists to solve the issue of people who don't have static IP addresses and occasionally get new IP addresses when their DHCP leases expire.
The original intention wasn't really meant to account for someone purposely changing their IP address or port numbers. Usually a service runs on a well-known port that doesn't change, such as 80 for HTTP. Depending on the protocol, you could set up a well-known port and then have it redirect to a different port. As an example, some websites will redirect port 80 to 8080, but this is protocol dependent. This also won't work for a lot of other protocols, and you're usually stuck with the port you choose.
Using DynDNS, I access three different machines behind the same router by simply adding a colon and the port number, just as if I were adding it to a static IP address (i.e. myhome-computer.dyndns.biz:1234). Each port points to a different internal IP in the router. This works fine with my free host account. However, I am not aware of a port-reporting service equivalent to what the DynDNS host app does for IP addresses.