I'm trying to set up an HTTPS load balancer for GKE using the HTTPS L7 load balancer, but for some reason it is not working. It doesn't even work with the plain HTTP load balancer from the HTTP Load Balancing walkthrough. The forwarding rule's IP address is created and I'm able to ping it and telnet to port 80, but when I make a request via curl it gives me an error:
<title>502 Server Error</title>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.</h2>
All the steps completed fine, and I also created a firewall rule without any tags for ${NODE_PORT}, but it still didn't work.
Has anyone encountered this problem?
I had the same problem with my application: we did not have an endpoint returning "Success", so the health checks were always failing.
It seems that the HTTP/HTTPS load balancer will not send requests to the cluster nodes if the health checks are not passing, so my solution was to create an endpoint that always returns 200 OK. As soon as the health checks started passing, the LB started working.
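To verify this before blaming the load balancer, you can hit the service's NodePort directly from a node and check the status code; a minimal sketch (the node IP and port below are placeholders, substitute your own values):

# Hypothetical node IP and NodePort; replace with your values
curl -s -o /dev/null -w "%{http_code}\n" http://10.128.0.2:30080/
# Expect "200"; a 301, 404 or connection error will keep the backend marked unhealthy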
I just walked through the example and (prior to opening up a firewall for $NODE_PORT) saw the same 502 error.
If you look in the cloud console at
https://console.developers.google.com/project/<project>/loadbalancing/http/backendServices/details/web-map-backend-service
you should see that the backend shows 0 out of ${num_nodes_in_cluster} as healthy.
For your firewall definition, make sure that you set the source filter to 130.211.0.0/22 to allow traffic from the load balancing service, and set the allowed protocols and ports to tcp:$NODE_PORT.
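For reference, a rule along these lines should do it (the rule name and network here are just placeholders):

gcloud compute firewall-rules create allow-lb-health-checks \
    --network default \
    --source-ranges 130.211.0.0/22 \
    --allow tcp:$NODE_PORT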
I use GKE, and I just walked through the example and it works fine, but when I route to my own service (a REST API service), it does not work.
I found that the biggest difference between my service and the example is that the example serves a root endpoint ("/"), but my service did not.
So I solved the problem this way: I added a root endpoint ("/") to my service that just returns success (an empty response with status 200), re-created the ingress, waited several minutes, and then the ingress worked!
I think this problem is caused by the health checker: UNHEALTHY instances do not receive new connections.
Here is a link about health checks: https://cloud.google.com/compute/docs/load-balancing/health-checks
The issue resolved after a few minutes (like 5-10 minutes) in my case.
If using an ingress, there may be additional information in the events relating to the ingress. To view these:
kubectl describe ingress example
In my case, the load balancer was returning this error because there was no web server running on the instances in my instance groups to handle the network requests.
I installed nginx on all the machines and then it started working.
From then on, I made a point of adding nginx to my startup script when creating a VM/instance.
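For example, a startup script can be passed when creating the instance; a rough sketch (the instance name, zone and image here are placeholders):

gcloud compute instances create web-1 \
    --zone us-east1-c \
    --image-family debian-11 \
    --image-project debian-cloud \
    --tags http-server \
    --metadata startup-script='#! /bin/bash
apt-get update
apt-get install -y nginx
systemctl enable --now nginx'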
If you are using nginx behind your load balancer, then it's important that the default_server returns 200 or some other 2xx status. That means that if you, for example, have a rewrite rule that returns 301, the health check will fail.
The solution is to set default_server on your main server:
server {
    # Redirect bare-domain calls to www; note this block is NOT the default_server
    listen 443;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    # The default_server answers the health checks, so it must return a 2xx
    listen 443 default_server;
    server_name www.example.com;
    ...
Adding a firewall rule for source 130.211.0.0/22 (the load balancer range on GCP) for tcp:$NODE_PORT fixed this for me.
I created an endpoint for all requests that contain 'GoogleHC' in the user-agent, so:
server {
    server_name example.com www.example.com;

    if ($http_user_agent ~* 'GoogleHC.*') {
        return 200 'isaac newton';
    }
}
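You can check the rule by faking the health checker's user agent; roughly (the domain is the placeholder from the snippet above):

curl -s -H 'User-Agent: GoogleHC/1.0' http://example.com/
# Should print the 200 body ('isaac newton' above), while normal
# browser traffic still hits the regular location blocks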
Related
I have set up a Google Cloud Compute instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed the vncserver and can access it on port 5901 from localhost as well as the internal IP.
I am trying to access it from the static external IP address, but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic to the vncserver port
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that the Google firewall or your VM is not blocking packets, you must make sure that the VNC service is configured to listen on all interfaces (not just localhost).
You can always use a utility like nmap from outside the Google project to reveal information on the port status.
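For example, from a machine outside the project, something like:

nmap -Pn -p 5901 <EXTERNAL-IP>
# "open"     -> reachable and something is listening on 5901
# "filtered" -> a firewall is dropping the packets
# "closed"   -> packets arrive but nothing is listening on that port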
Enable HTTP/HTTPS traffic from the firewall as needed; it will work.
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my Compute Engine instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local ssh config file as below, with the local forward port mentioned. In my case it's an example with YARN's internal IP and port, which I want to access in a browser.
Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    LocalForward 8089 <Internal-IP>:8088
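With that entry in place (assuming the host alias above), you open the tunnel and then browse locally:

ssh hadoop
# While the session is open, http://localhost:8089 in your browser is
# forwarded to <Internal-IP>:8088 over the SSH tunnel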
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, connect to the instance using SSH and verify that you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, you should be able to access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 (plus ICMP) are exposed. If your server is running on a different port, create a firewall rule for it (see the example after this list).
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your needs.
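For instance, a rule for a server on a non-default port might look like this (the rule name and port 8080 are just placeholders):

gcloud compute firewall-rules create allow-my-app \
    --allow tcp:8080 \
    --source-ranges 0.0.0.0/0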
For me the problem was that I set up the traffic for the firewall rule to be 'Egress' instead of 'Ingress'.
If you have already initiated 'https', just disable it and check again.
I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with http connections. But I still need to know if the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the proxy protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers with client information; in the case of TCP, however, AWS ELB simply passes the traffic through without any modifications, which causes the back-end server to lose the client connection information, as is happening in your case.
To enable the proxy protocol for your ELB, you have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this, I don't want to paste the same here as that information might change over time.
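Roughly, the steps in that guide look like this with the AWS CLI (the load balancer name, policy name and instance port below are placeholders):

# Create a ProxyProtocol policy on the classic ELB
aws elb create-load-balancer-policy \
    --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

# Attach the policy to the back-end port your Node.js processes listen on
aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-elb \
    --instance-port 3000 \
    --policy-names EnableProxyProtocol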
EDIT:
As it turns out, Amazon implements only Version 1 of the proxy protocol, which does not give away SSL information. It does, however, give the port number the client requested, so you can adopt a convention such as: if the request came in over port 443 then it was SSL. I don't like it, as it is indirect, requires hardcoding, and needs coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting Version 2 of the proxy protocol, which does carry SSL info.
I have created a HTTP load balancer to basically redirect from port 80 to port 8080. The server on my instance is running on port 8080.
I can connect to the server directly, but the LB is not able to connect to the instance: accessing the LB's IP directly fails, and the health check always fails too. The instance group the LB is using consists of just that single instance.
I read Google Compute Engine health checks failing
and the google-address-manager is running. However, when running ip route list table local, there is no route for my LB. The user in the above question is using network load balancing, not HTTP load balancing (as I am), so I don't know if that is related.
Or perhaps it's related to a firewall? I have added my LB's IP address to a firewall rule that allows tcp:8080.
Does anybody have any idea how I can fix this? I am not experienced with Debian nor GCP.
Should I just try to run the route add command referenced in the above question? If so, how come the google-address-manager is not adding the route?
Thank you in advance!
You need to make sure that the port mapping on your instance group is set to the correct port, 8080 in your case.
First, edit your instance group and change the port name and port to 8080.
Then, navigate to your HTTP backend's settings and change the default port to the port name you've configured in the instance group.
Finally, make sure that your firewall rules allow access on port 8080 from 0.0.0.0/0, or at least from the IP range of the HTTP load balancer (130.211.0.0/22).
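These steps can also be done with gcloud; a rough sketch (the group name, backend service name and zone are placeholders):

# Name the serving port on the instance group
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports http:8080 \
    --zone us-central1-b

# Point the backend service at that named port
gcloud compute backend-services update my-backend-service \
    --port-name http \
    --global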
I had the same issue and fixed it by adding a firewall rule for the health checker (which is not the same IP as your LB!). See https://cloud.google.com/compute/docs/load-balancing/health-checks?hl=en_US#http_and_https_load_balancing for instructions.
In my case, I did not configure the HTTP health check correctly.
I used "/" as path, but on my backend, "/" redirects to a login-page (HTTP 301), which responds with a HTTP 200.
The health check does not follow a redirect, every HTTP response code != 200 is assumed unhealthy (from Debugging Health Checks in Load Balancing on Google Compute Engine).
So, I changed my path to "/login", this fixed my issue.
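You can see exactly what the health check sees with a HEAD request; for example (the backend IP and paths below stand in for my setup, yours will differ):

curl -sI http://<backend-ip>/ | head -n 1
# HTTP/1.1 301 Moved Permanently   -> marked unhealthy
curl -sI http://<backend-ip>/login | head -n 1
# HTTP/1.1 200 OK                  -> healthy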
We are trying to test using HTTP Load Balancing instead of Network Load Balancing. When we try to go to http://beta.stubwire.com/ we get a 400 error saying we are speaking plain HTTP to an SSL-enabled server port, but if I go to https://beta.stubwire.com it works fine. I have included some screenshots of the load balancer below; is the error in here, or is it the way the server is configured?
Finally able to figure this out. Under the load balancer's "Backend services" you can click Edit and select the "Backend" you want it to contain. There we had specified "80, 443", since it mentioned we could list both, but we actually had separate backend services for the secure and non-secure pages. Specifying only the port that each service actually uses fixed the issue.
I am attempting to use purely https with my compute engine. I have a network load balancer created that forwards to a pool with my instance in it. However, the pool has constantly failing health checks because it won't let me configure a health check that uses https.
I'm using apache to redirect 80 to 443. Does anyone know how to either create an https health check or have the http health check follow the redirect?
Thanks for any help.
--edit--
I finally came across some documentation at http://googlecloudplatform.blogspot.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html.
Failure 5: Not answering directly with a 200 response code. The web server may be configured to redirect to a page that returns an HTTP 200 response code. The health check will not follow the redirect; it expects the health check page to return a 200 directly.
This basic capability has been supported at every other hosting provider we've been on. Why can't this be done? What am I missing?
I spent the whole day trying to configure a purely HTTPS-based load balancer in GCloud for a Kubernetes cluster with an ingress controller.
I finally got it working, so let me share my experience for people who struggle with the same configuration. If the health check fails for the instances, you will usually see the following when accessing your website's URL:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
1) Protocol: GCloud introduced new health checks which can be configured for TCP, SSL, HTTP, HTTPS, or HTTP/2 probing (see the sketch at the end of this answer). This helps with the original problem, since an HTTPS health check avoids the redirect from port 80 to port 443.
2) Path: The most common issue is that the "/" path of your application does not return a 200 OK and thus makes the health check fail. This can be prevented by adding a path argument to your health check, e.g. "/index".
3) Ingress HTTPS: This is relatively simple. Adding a secret or a pre-shared-cert to your ingress.yaml will automatically result in an HTTPS load balancer instead of HTTP. Further information can be found here.
Lastly, there is the guide from the docs for Setting up HTTP Load Balancing with Ingress.
However, even though the new HTTPS health checks seem to work, they are still in the beta phase and bugs are reported in the issue tracker. The documentation for the gcloud-ingress-controller can be found here.
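As a sketch of point 1), a standalone HTTPS health check can be created along these lines (the name, port and path below are placeholders; note that the GKE ingress controller normally manages its own health checks, so treat this purely as an illustration):

gcloud compute health-checks create https my-https-check \
    --port 443 \
    --request-path /healthz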