Google Load-Balancing CDN - google-compute-engine

I am using the Google Load-Balancer with the CDN option enabled.
When I set up the Backend Configuration for the load balancer, I created a backend with instances in US-Central, US-West and US-East.
Everything is working great, except that all traffic is being routed only to the US-West backend service.
Does the load-balancer option route traffic to the closest backend service?
I see that there is an advanced menu in the load balancer for creating forwarding rules, target proxies and more.
Is there something I need to do to make my load balancer route requests to the backend closest to the client?
For example, if a user is in Florida and the CDN does not have the file, should they be routed to the US-East VM instance?
If that is not possible, it seems like having only a US-Central server would be better than having US-Central, US-East and US-West. That way East Coast misses would not go to the West Coast to fetch the file; everything would pull from the central location.
Unless there is a way to route traffic from the load balancer to the closest VM instance, it seems as if the only solution would be to create separate load balancers with the CDN enabled and use DNS routing to point users to the closest CDN pool.
That setup would use three different CDN IP addresses, three Compute Engine IP addresses, and DNS latency or geolocation routing: if a user is in Florida, route them to the Google Load Balancer CDN on the East Coast.
I'm not sure that would be a good solution on top of the anycast IP routing; it seems like overkill.
Thank you for listening and any help or guidance would be appreciated.
"By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port."
Similar question: Google compute engine load balancing not routing properly. The difference is that in my case all traffic in a live environment is going to the same VM instance.
I am using the Google CDN frontend anycast IP address.
I think Elving is right and there may be a misconfiguration. Here is a screenshot of the VM instances in the Google Cloud console; it says the two instances aren't in use.
Here is another screenshot of the instance groups. I don't see a clear way to attach the instances to the instance groups.

The load balancer will automatically route traffic to the nearest instance group with capacity. You don't need to do anything other than configure your backend service to use multiple instance groups.
There's more information at https://cloud.google.com/compute/docs/load-balancing/http/.
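If it helps, here is a rough gcloud sketch of that setup (all names and zones below are placeholders, not taken from your project): one global backend service with Cloud CDN enabled and one instance group per region, which is what lets the HTTP(S) load balancer send each request to the closest group with capacity.

    # Create the backend service with CDN enabled (health check name is assumed).
    gcloud compute backend-services create web-backend \
        --protocol=HTTP --health-checks=web-health-check --enable-cdn --global

    # If an instance is not yet a member of a group, add it to an unmanaged
    # instance group in its zone first, for example:
    gcloud compute instance-groups unmanaged add-instances us-east-group \
        --instances=web-east-1 --zone=us-east1-b

    # Attach one instance group per region to the same backend service.
    gcloud compute backend-services add-backend web-backend --global \
        --instance-group=us-central-group --instance-group-zone=us-central1-a
    gcloud compute backend-services add-backend web-backend --global \
        --instance-group=us-east-group --instance-group-zone=us-east1-b
    gcloud compute backend-services add-backend web-backend --global \
        --instance-group=us-west-group --instance-group-zone=us-west1-a

With that in place there is nothing extra to configure for proximity routing; the anycast frontend IP takes care of it.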

Related

In Google Cloud Platform, is there any service that can replace a web server (Nginx or Apache) and a load balancer (HAProxy)?

Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. URL redirection points to different load balancers (HAProxy) as defined in the Nginx vhost file.
Is there any option available in GCP to provide this redirection without using Nginx or Apache? Also, what are the alternatives available in GCP for HAProxy?
From what I understand, you have a few services (and maybe some static content) served to the Internet via HAProxy, which is doing the load balancing.
Based on that, I assume that someone who goes to "yourservice.com/example1" is routed by the load balancer to service1, someone who goes to "yourservice.com/static1" is served static content by a different service, and so on.
GCP has exactly the service you're asking for: the HTTP(S) load balancer, which can do URL/content-based load balancing. You can also move your current services to Google Compute Engine (as virtual machines) or Google Kubernetes Engine, which will run your services as containers.
Also, both GCE and GKE can do autoscaling if that's what you need.
The load balancing provided by GCP can do all the content-based balancing that the HAProxy you're using now does.
If you're also doing some internal load balancing now, I believe it could be done with one load balancer as well, which would simplify your configuration (just some VMs or containers running your services and one load balancer).
You can read more in the GCP documentation about load balancing concepts, and specifically about setting up content-based load balancing.
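As a rough sketch (service and map names below are placeholders), the URL-map part that replaces the Nginx vhost/HAProxy path rules could look something like this:

    # Default backend for anything that doesn't match a path rule.
    gcloud compute url-maps create web-map --default-service=default-backend

    # Route /example1/* and /static1/* to different backend services,
    # similar to separate HAProxy backends or Nginx locations.
    gcloud compute url-maps add-path-matcher web-map \
        --path-matcher-name=paths \
        --default-service=default-backend \
        --path-rules="/example1/*=service1-backend,/static1/*=static-backend"

Each backend service can then point at a Compute Engine instance group or a GKE-backed network endpoint group.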

Which URL/IP to use, when accessing Kubernetes Nodes on Rancher?

I am trying to expose services to the world outside the rancher clusters.
Api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters. I am trying to use one cluster specifically; it spans three nodes: node1cluster1, node2cluster1 and node3cluster1.
I have added an ingress inside the Rancher cluster to forward requests for api1.mydomain.com to a specific workload.
In our DNS I added an entry for api1.mydomain.com, but it isn't working yet.
Which IP or URL should I enter in the DNS? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Should it be a single node of the cluster that has the ingress (node1cluster1)?
Neither of these options seems ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: Create a DNS entry with the IP address of Node1cluster1.
I am not sure how you installed the ingress controller, but by default it's deployed as a DaemonSet, so you can use any one of the cluster nodes' IP addresses, or all of them. (Don't expect DNS to load-balance across them, though.)
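If you want to double-check that, here is a quick sketch (the ingress-nginx namespace is the RKE/Rancher default and may differ on your cluster):

    # Confirm the nginx ingress controller really runs as a DaemonSet:
    kubectl -n ingress-nginx get daemonset

    # List the node IPs you could point api1.mydomain.com at
    # (or place behind an external load balancer):
    kubectl get nodes -o wide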
The other option is to have a load balancer in front with all the node IP addresses configured to actually distribute the traffic.
Another strategy I have seen is to dedicate a handful of nodes to running ingress, using taints/tolerations, and not schedule regular workloads on them.

GCE managed group (autoscaling) - Proxy/Load Balancer for both HTTP(S) and TCP requests

I have an autoscaling instance group, and I need to set up a proxy/load balancer that takes requests and sends them to the instance group.
I thought about using a load balancer, but I need to handle both HTTP(S) and TCP requests.
Is there some way (or some workaround) to solve this?
EDIT: The problem is that in the TCP LB settings I can set the backend service (the managed group I need to use) for only one port.
For your use case, no single load balancing configuration available on Google Cloud Platform will serve the purpose. On the other hand, since you are using a managed instance group (autoscaling), it cannot be used as the backend for two different load balancers.
As per my understanding, the closest you can get is to use network load balancing (TCP) and install SSL certificates at the instance level to handle HTTPS requests.
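As a sketch of that approach (names, region and zone are placeholders): a network (TCP) load balancer uses a target pool rather than a backend service, and you can create one forwarding rule per port against the same pool, with TLS terminated on the instances themselves.

    # Target pool for the instances.
    gcloud compute target-pools create web-pool --region=us-central1

    # Point the managed (autoscaling) instance group at the target pool.
    gcloud compute instance-groups managed set-target-pools web-mig \
        --target-pools=web-pool --zone=us-central1-a

    # One forwarding rule per port, both hitting the same pool.
    gcloud compute forwarding-rules create web-http \
        --region=us-central1 --ports=80 --target-pool=web-pool
    gcloud compute forwarding-rules create web-https \
        --region=us-central1 --ports=443 --target-pool=web-pool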

Monitoring unhealthy hosts on google cloud

I am using an external monitoring service (not Stackdriver), and I wish to monitor the number of unhealthy hosts on my load balancer.
It seems like the Google Cloud API doesn't expose this metric, so I implemented a custom script that gets the load balancer's instance groups, fetches the instances' data (DNS names) and performs the health check itself.
That's pretty cumbersome. Is there a simpler way to do it?
You can use the command 'gcloud compute backend-services get-health' to get the status of each instance in your backend service. The command reports whether each instance that is part of the backend service is currently HEALTHY or UNHEALTHY.
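For example (the backend service name is a placeholder), an external monitor could simply poll something like this and alert when the count is non-zero:

    # Count instances the load balancer currently reports as UNHEALTHY.
    gcloud compute backend-services get-health my-backend-service --global \
        | grep -c 'healthState: UNHEALTHY'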

Azure Traffic Manager Browser Caching Issue

In Azure's Traffic Manager, I am doing some testing with two failover URLs: two different endpoints are configured for the Traffic Manager profile (failover1.mysite.com, failover2.mysite.com). However, my local browser (Chrome, for example) seems to be caching the DNS record on its own and keeps connecting to what it thinks is still the destination, rather than letting Azure Traffic Manager re-route. Trying the request in a new browser or Incognito session results in the request reaching the correct site, but for existing sessions, failover updates are not being picked up and requests still hit the site we are trying to redirect traffic away from. Does anyone have any experience with this?
I had the same issue when dealing with Azure Traffic Manager and AWS CloudFront.
Each DNS record has an associated TTL value. Nothing is wrong with Azure Traffic Manager; it is the TTL value that allows the DNS client to cache the IP address.
How to check the TTL value of a DNS record:
If you are using Windows,
https://support.rackspace.com/how-to/nslookup-checking-dns-records-on-windows/
If you are using linux follow the detailed instructions here,
https://www.cyberciti.biz/faq/howto-use-dig-to-find-dns-time-to-live-ttl-values/
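For example, with dig (the hostname below is a placeholder), the number after the record name is the remaining TTL in seconds, i.e. how long clients may keep using the cached answer:

    dig +nocmd +noall +answer myprofile.trafficmanager.net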
Hope it helps.
From Microsoft's overview of their load balancing services:
Traffic Manager is a DNS-based traffic load balancer [...] it load balances only at the domain level. For that reason, it can't fail over as quickly as Front Door, because of common challenges around DNS caching and systems not honoring DNS TTLs.
With Front Door you can route requests to different backends based on rules and/or the health of the backends themselves, so it doesn't have the issue you describe.