The traffic is accepted at the firewall, then forwarded to a private load balancer on Oracle Cloud, and then forwarded to internal web servers. The problem is that the client IP address seen at the web server is the load balancer's IP address. Is there a way to add X-Forwarded-For or X-Real-IP so that the actual client IP address is visible on the internal web servers?
The load balancer may be configured to use TCP rather than HTTP. A TCP listener operates below the HTTP layer, so the load balancer never adds the header. Try reconfiguring the load balancer's listener to use HTTP.
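If the back end happens to run nginx (an assumption, since the question only says "internal web servers"), a minimal sketch for restoring the client address from the header an HTTP listener adds could look like this; the 10.0.1.0/24 range is a placeholder for the load balancer's private subnet:

# Sketch only: 10.0.1.0/24 stands in for the OCI load balancer's private subnet.
set_real_ip_from 10.0.1.0/24;        # trust X-Forwarded-For only when the request comes from the LB
real_ip_header   X-Forwarded-For;    # take the client address from this header
real_ip_recursive on;                # skip over any additional trusted proxy hops

With that in place, $remote_addr and the access log show the original client instead of the load balancer.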
I am trying to set up a reverse proxy with Caddy. I also want to use subdomains to point to my different services, so I bought a domain. A domain can only point to an IP address, and my router's IP address is not static, so to solve that I registered a subdomain on DuckDNS that always points to my router's IP address. The subdomains of the domain I paid for have DNS set to point to the DuckDNS subdomain, and I have opened ports 80 and 443 on my router to forward to the server machine that is running Caddy. The Caddyfile simply points the domains I paid for to localhost services.
It works, but only on the LAN; from outside the network it does not work.
If your public IP address is not the same as what nslookup mydomain.duckdns.org returns, the problem is DNS. Check your dynamic DNS client's configuration file for inaccuracies, restart your router, and verify that the record updates as expected.
If the IP addresses match but you cannot get access from outside the network, it's a port forwarding issue. Check the port forwarding rules on your router and the open ports on your server.
sudo ufw status verbose and sudo ss -ltnp are helpful server commands.
If the IP addresses match but you cannot get access from inside the network, hairpin NAT is the issue. This is a router limitation. Get a more feature-complete router from your ISP, or set up a local DNS server to resolve this minor annoyance.
[Using your phone, enable WiFi for 'inside' testing; disable WiFi and use mobile data for 'outside' testing.]
I'd like to log the user's IP address in an OpenShift application. I'm using this access log pattern in my WildFly application server configuration:
<access-log pattern="%{i,X-Forwarded-For} | %A%t%h%l%u%r%s%b%T%I" directory="${jboss.home.dir}" prefix="access" suffix=".log" worker="default"/>
So it basically logs the X-Forwarded-For header.
It works just fine for HTTP connections, but it prints a single - character instead of the client's real IP address when a websocket connection is made.
I've found this bug ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1313395, but there seems to be a commit that fixes the problem.
Is there a way to get the user's real IP address in this situation?
Is there a way to get the origin IP of the user from HTTP load balancing with GCloud? We are currently using just Network Load Balancing and need to move to a cross-region balancer, but we need the user's IP for compliance and logging.
Does it pass in a header or something along those lines?
Thanks ~Z
The documentation (https://cloud.google.com/compute/docs/load-balancing/http/) says it's the first IP address of the X-Forwarded-For header.
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP>
If you are sure that you do not run any other proxy (which would append additional IPs to X-Forwarded-For) behind Google Cloud Load Balancing, you can take the second-to-last IP from X-Forwarded-For as the immediate client IP. Even if you do run some proxies, as long as you know the exact number of additional IPs they append, you can take those into account as well.
From https://cloud.google.com/compute/docs/load-balancing/http/#components:
X-Forwarded-For: <unverified IP(s)>, <immediate client IP>, <global forwarding rule external IP>, <proxies running in GCP> (requests only)
Only the <immediate client IP> and <global forwarding rule external IP> entries are provided by the load balancer. All other entries in
the list are passed along without verification.
IPs that come before the immediate client IP could be spoofed or could come from client-side proxies. Even if the client spoofs the X-Forwarded-For header, the load balancer still appends the actual IP that hits the load balancer.
OK, so after digging through headers and other things, I found the following header, which passes along the origin IP of the user.
$_SERVER['HTTP_X_FORWARDED_FOR']
You will need to split it on ',' and take the first part of the string. This is the user IP that is being passed along by the Google Cloud HTTP Balancer.
Based on the HTTP_X_FORWARDED_FOR header, here is a nice Nginx rule to split the IP chain:
set $realip $remote_addr;
if ($http_x_forwarded_for ~ "^(\d+\.\d+\.\d+\.\d+)") {
    set $realip $1;
}
fastcgi_param REMOTE_ADDR $realip;
Paste it after the include fastcgi_params; directive for it to take effect.
If you're using Cloudflare, you can get the original client IP from HTTP_CF_CONNECTING_IP.
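If the backend web server is nginx (an assumption), the same real_ip mechanism can be pointed at that header; the trusted range below is just one of Cloudflare's published ranges, and the full current list should be taken from Cloudflare's documentation:

# Sketch only: list all of Cloudflare's published IP ranges, not just this one.
set_real_ip_from 173.245.48.0/20;    # one of Cloudflare's published ranges
real_ip_header   CF-Connecting-IP;   # Cloudflare puts the original client IP here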
I found this article
https://geko.cloud/forward-real-ip-to-a-nginx-behind-a-gcp-load-balancer/
You can whitelist/ignore IPs that are known to belong to GCP, like the static IP assigned to the load balancer:
set_real_ip_from 36.129.221.25/32;  # LB public IP address
set_real_ip_from 130.211.0.0/22;    # IP range used by GCP load balancers
set_real_ip_from 35.191.0.0/16;     # IP range used by GCP load balancers
real_ip_header X-Forwarded-For;
real_ip_recursive on;
I am looking to deploy an nginx/DNS server on a VPS that acts as a proxy to the real back end in a different geographical location. The back end runs Apache, MySQL, Dovecot, and Postfix. It is a paid mail service: users are entered through Apache and PHP into MySQL, and when users set up IMAP, Dovecot/Postfix pulls their accounts from MySQL and delivers mail or handles outbound SMTP.
I read that in nginx.conf I can declare the mail hostname on the proxy like so:
mail {
server_name mail.example.com;
...
}
Is this mail.example.com the actual MX host listed in DNS for the example.com mail exchanger? Here is where that came from:
"As you can see, we declared the name of this server at the top of the mail context.
This is because we want each of our mail services to be addressed as mail.example.
com. Even if the actual hostname of the machine on which NGINX runs is different,
and each mail server has its own hostname, we want this proxy to be a single point
of reference for our users. This hostname will in turn be used wherever NGINX
needs to present its own name, for example, in the initial SMTP server greeting."
So, from my understanding, the physical hostname of the proxy should be something other than mail.example.com. In DNS, can I then define the proxy as anyhost.example.com? The proxy also proxies back to my Apache on the back end.
Finally, how do I set up DNS for the back end? What hostname do I choose for the actual box running Apache, MySQL, Dovecot, and Postfix? It's all on one box. I understand that at the registrar I point to two nameservers, and these should be the two proxies; that way, running dig would only pull up the proxies, and the MX would be "known" to be on the proxy.
In your case, where all of the services are on one box including the proxy, you can make Apache, MySQL, and the other services accessible only from localhost / 127.0.0.1. Then in nginx you define something like:
upstream backend  { server 127.0.0.1:80; }     # Apache, reachable only on localhost
upstream database { server 127.0.0.1:3306; }   # MySQL (raw TCP, so this belongs in a stream context)
This way, nginx serves the front-end requests and forwards them to the designated services.
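As a rough sketch of that layout (the hostnames, ports, and the auth_http URL below are assumptions, not taken from the question), the mail and http contexts in nginx.conf could look roughly like this; note that nginx's mail proxy also needs an auth_http service, which you provide, to tell it where to route each IMAP/SMTP session:

# Sketch only: addresses and the /auth endpoint are hypothetical.
mail {
    server_name mail.example.com;        # name nginx presents, e.g. in the SMTP greeting
    auth_http   127.0.0.1:8080/auth;     # your auth service returns the real back-end host/port

    server {
        listen   143;
        protocol imap;
    }
    server {
        listen   25;
        protocol smtp;
    }
}

http {
    upstream backend { server 127.0.0.1:80; }   # Apache listening only on localhost

    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://backend;
        }
    }
}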
We have a system running on Amazon's Beanstalk.
We would like to limit access to the server to HTTPS only.
When blocking HTTP in the environment settings, it prevents access through the Beanstalk DNS.
However, if someone knows the public IP (or name) of any of the servers, they can access it directly through HTTP. It seems that the LB forwards the requests to port 80, so we cannot simply change the security group and remove port 80.
Is there a simple way, to limit HTTP access to be only from the LB?
Thanks
You should be able to do this through EC2 Security Groups, which are an Elastic Beanstalk environment property.
By default this allows connections to port 80 from any IP address, but you could remove that rule or replace it with your own IP address (for testing purposes).
Failing that, you could reroute all HTTP traffic to HTTPS at the application level or simply test the CGI property *server_port_secure* and refuse to answer.
Yes, you need HTTP/80 to be open for the health check to work. The option for you is to redirect all other requests (except the health check URL) to HTTPS. This way, even though the port is open, you don't serve any data in an insecure way.
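For instance, if the instances run nginx in front of the application (an assumption, as are the /health path and the upstream port), the X-Forwarded-Proto header that the ELB sets can drive the redirect inside the server block:

# Sketch only: /health and 127.0.0.1:8080 are placeholders.
location = /health {
    proxy_pass http://127.0.0.1:8080;            # always answer the LB health check over plain HTTP
}
location / {
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;    # send plain-HTTP clients to HTTPS
    }
    proxy_pass http://127.0.0.1:8080;
}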
You have at least two options:
1 - Set a security group rule that allows access on port 80 from the load balancer only. IMPORTANT: do not use the load balancer's IP in the instances' security group; use the load balancer's security group ID as the source instead.
2 - Remove the public IPs from the instances. You should be good if all your EC2 instances have only private IPs and the ELB has the public IP.