Azure Traffic Manager Browser Caching Issue - google-chrome

In Azure Traffic Manager, I am doing some testing with two failover URLs. Two different endpoints are configured for the Traffic Manager profile (failover1.mysite.com, failover2.mysite.com). However, my local browser (Chrome, for example) seems to be caching the DNS record on its own and redirecting to what it thinks is still the destination, rather than letting Azure Traffic Manager re-route. Trying the request in a new browser or an Incognito session results in the request reaching the correct site, but existing sessions do not pick up the failover and still hit the site we are trying to redirect traffic away from. Does anyone have any experience with this?

I had the same issue while dealing with Azure Traffic Manager and AWS CloudFront.
Every DNS record carries a TTL value. There is nothing wrong with Azure Traffic Manager itself; it is the TTL value that lets the DNS client cache the resolved IP address.
How to check the TTL value of a DNS record:
If you are using Windows, see:
https://support.rackspace.com/how-to/nslookup-checking-dns-records-on-windows/
If you are using Linux, follow the detailed instructions here:
https://www.cyberciti.biz/faq/howto-use-dig-to-find-dns-time-to-live-ttl-values/
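If you would rather check it programmatically, here is a minimal sketch using the third-party dnspython package (an assumption on my part, not something the links above require; the hostname is a placeholder for your Traffic Manager domain):

```python
# Minimal TTL check with dnspython (pip install dnspython, version 2.x API).
# "failover1.mysite.com" is a placeholder; use your Traffic Manager domain.
import dns.resolver

answer = dns.resolver.resolve("failover1.mysite.com", "A")
print("Remaining TTL (seconds):", answer.rrset.ttl)
for record in answer:
    print("Resolves to:", record.address)
```

Running it twice in quick succession shows the TTL counting down in your resolver's cache, which is exactly the window during which the browser keeps hitting the old endpoint.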
Hope it helps.

From Microsoft's overview of their load balancing services:
Traffic Manager is a DNS-based traffic load balancer [...] it load balances only at the domain level. For that reason, it can't fail over as quickly as Front Door, because of common challenges around DNS caching and systems not honoring DNS TTLs.
With Front Door you can route requests to different backends based on rules and/or the health of the backends themselves, so it doesn't have the issue you describe.

Related

GCE managed group (autoscaling) - Proxy/Load Balancer for both HTTP(S) and TCP requests

I have an autoscaling instance group, and I need to set up a proxy/load balancer that takes requests and sends them to the instance group.
I thought about using a load balancer, but I need to handle both HTTP(S) and TCP requests.
Is there some way (or some workaround) to solve this?
EDIT: The problem is that, in the TCP load balancer settings, I can set the backend service (the managed group I need to use) for only one port.
For your use case, no single load balancing configuration available on Google Cloud Platform will serve the purpose. On the other hand, since you are using a managed instance group (autoscaling), it cannot be used as the backend for two different load balancers.
As I understand it, the closest you can get is to use Network Load Balancing (TCP) and install an SSL certificate to handle HTTPS requests at the instance level.
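To illustrate the "SSL at the instance level" part, here is a minimal sketch of a VM terminating TLS itself behind the TCP load balancer (Python standard library only; cert.pem and key.pem are placeholders for whatever certificate you install on each instance):

```python
# Minimal HTTPS server terminating TLS on the instance itself,
# so a plain TCP load balancer in front can stay protocol-agnostic.
# cert.pem / key.pem are placeholders for your installed certificate.
import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()  # binding to 443 typically requires elevated privileges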

json - Encryption SSL/TLS End to End

So, a little back story about the security and the project. We are developing a private application for a customer, and it needs to be secure. One way we are securing it is by not allowing outside connections: only internal connections can be made, or connections over VPN, in which case we hand the security off to the VPN provider. However, we must keep in mind the security concerns around local users. We have thought about handling this by relying on layer 2 network devices and LDAP security within the organization. But we now face the question of how, within the authorized user set (some very smart people), we keep things secure.
So the question is: if we have SSL layering the application and only allow users to access the web server via an SSL connection, will it secure all traffic?
Scenario:
User A logs on to the website running on IP address 10.x.x.180 (under SSL).
User B is sitting with Wireshark open and sniffing the network for any traffic to the IP 10.x.x.180.
User A makes a call to the website to view a web page. The web page requests a local JSON file on the server, the JSON is returned to the application, and it is then read and displayed to User A.
Q. Will User B be able to see this data in his sniffed packets, or will he simply see SSL-encrypted data?
He will only see the encrypted SSL traffic, which provides end-to-end encryption.
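As a concrete illustration of the scenario, a JSON call like the sketch below travels entirely inside the TLS tunnel, so a sniffer on the same segment sees only encrypted records (the address, path, and CA bundle file are placeholders; requests is a third-party package):

```python
# User A's client fetching the JSON page over HTTPS.
# 10.x.x.180, the path, and internal_ca.pem are placeholders.
import requests

response = requests.get(
    "https://10.x.x.180/data/page.json",
    verify="internal_ca.pem",  # validate the server's (internal) certificate
)
data = response.json()         # decrypted only inside User A's process
print(data)
```

The one caveat is handshake metadata (certificate details, SNI), which is visible on the wire even though the payload is not.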

Google Load-Balancing CDN

I am using the Google Load-Balancer with the CDN option enabled.
When I set up the backend configuration for the load balancer, I set up a backend with instances in US-Central, US-West, and US-East.
Everything is working great, except all traffic is being routed only to the US-West backend service.
Does the load-balancer option route traffic to the closest backend service?
I see that there is an advanced menu in the load balancer for creating forwarding rules, target proxies and more.
Is there something I need to do to make my load balancer route to the backend closest to the client?
If a user is in Florida and the CDN does not have the file, will they get routed to the US-East VM instance?
If that is not possible, it seems like having only a US-Central server would be better than having US-Central, US-East, and US-West. That way, East Coast misses are not going to the West Coast to get the file; everything will pull from the central location.
Unless there is a way to route traffic from the load balancer to the closest VM instance, it seems as if the only solution would be to create different load balancers with the CDN enabled and use DNS routing to point to the CDN pool that is closest.
That setup would use three different CDN IP addresses, three Compute Engine IP addresses, and DNS latency or location routing. If a user is in Florida, route them to the Google Load Balancer CDN on the East Coast.
I'm not sure that would be a good solution on top of the Anycast ip routing. It seems like overkill.
Thank you for listening and any help or guidance would be appreciated.
"By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port."
Similar question: Google compute engine load balancing not routing properly. Except that, in a live environment, all traffic is going to the same VM instance.
I am using the Google CDN frontend Anycast IP address.
I think Elving is right and there may be a misconfiguration. Here is a screenshot of the VM instances in Google Cloud; it says the two instances aren't in use.
Here is another picture of the instance groups. I don't see a clear way to attach the instances to the instance groups.
The load balancer will automatically route traffic to the nearest instance group with capacity. You don't need to do anything other than configure your backend service to use multiple instance groups.
There's more information at https://cloud.google.com/compute/docs/load-balancing/http/.
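If you want to double-check that all three regional instance groups really are attached as backends, here is a rough sketch with the google-api-python-client (assuming Application Default Credentials are configured; the project and backend service names are placeholders):

```python
# List the backends attached to the HTTP(S) load balancer's backend service.
# "my-project" and "my-backend-service" are placeholders.
from googleapiclient import discovery  # pip install google-api-python-client

compute = discovery.build("compute", "v1")
service = compute.backendServices().get(
    project="my-project", backendService="my-backend-service"
).execute()

for backend in service.get("backends", []):
    # Each entry should reference one of the us-central1/us-east1/us-west1 instance groups.
    print(backend["group"], backend.get("balancingMode"))
```

If only the US-West instance group shows up here, that would explain why all traffic lands on that backend.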

Google Cloud Network Load Balance Security concerns

I'm planning to create a web site that runs on several different machines in Google Compute Engine, and I'm seriously thinking about using Google's Network Load Balancing. But I have some questions regarding security and usability.
Can my machines have a private IP address with the HTTP port open? (We don't want some hacker trying to get into our servers.)
Will my HTTP responses carry the machine's own IP address or the Network Load Balancer's IP?
Does Google protect the open port on a Compute Engine machine against SYN and other packet flood attacks (like a router does)?
You could use the HTTP/S load balancing to do what you want. https://cloud.google.com/compute/docs/load-balancing/http/
See https://cloud.google.com/compute/docs/load-balancing/http/cross-region-example#optional_remove_external_ips_except_for_a_bastion_host for removing external IPs.
Responses will come from the load balanced IP, not your VMs' IPs.
Yes, for some types of malicious traffic, because the load balancing layer does full proxying: TCP and SSL termination both happen before traffic reaches your VMs.
If your machines have only private IPs (RFC 1918 space) and no external IPs, then configuring the NLB doesn't make them externally accessible directly on port 80 (if that's what you configure for your service).
Google does handle some level of attacks, but if you are worried about a full-fledged DDoS, implementing an additional layer on your end helps.
No. It is only possible to have port 80 (HTTP) open if the instance has a public IP address; however, it is possible to limit the exposed machine instances by using a bastion host.
No. Using Network Load Balancing will shield the IP address of your machine, but it is possible (in theory) to discover the machine's external IP address through random IP scans or some flaw in the application.
GCE machine instances have some level of protection, but they are susceptible to TCP or UDP floods, according to SecurityFocus.

How to automatically detect a server?

We have developed a client app and a server app. The client communicates with the server over HTTP and sends some data to be processed by the server.
Our structure allows the server to be installed anywhere: it can be on the same network as the client or in the cloud.
When the server is hosted in the cloud, it makes sense to ask the user for the server address (since it can change if the user wishes), but it does not make sense when the server is on the same network as the client. Yet we are currently asking users to configure the server IP/name in order to connect.
To avoid asking users for the address, I developed a discovery service based on UDP: the client broadcasts a message and the server answers with its address. It works in some cases, but not when the user has some kind of firewall, proxy, or even an antivirus in the way.
I have read a lot about discovery services, and the one I like most is Bonjour.
So, the question is: what is the best way to discover a server's IP when the server is on the same network as the client, without being blocked by firewalls, proxies, etc.?
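For reference, the UDP broadcast approach described above boils down to something like the sketch below (not the actual code; the port number and probe message are made up):

```python
# UDP broadcast discovery sketch: the client broadcasts a probe, the server
# answers, and the client learns the server's IP from the reply's source address.
# DISCOVERY_PORT and the probe message are placeholders.
import socket

DISCOVERY_PORT = 50000
PROBE = b"WHERE_IS_THE_SERVER"

def serve_discovery():
    """Server side: answer every valid probe so the client sees our address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == PROBE:
            sock.sendto(b"SERVER_HERE", addr)

def find_server(timeout=2.0):
    """Client side: broadcast a probe and return the first responder's IP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(PROBE, ("<broadcast>", DISCOVERY_PORT))
    try:
        _, addr = sock.recvfrom(1024)
        return addr[0]          # the server's IP address
    except socket.timeout:
        return None             # blocked by a firewall, or no server on this subnet
```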
You can keep your service purely local (on the intranet) and build on top of what you are using now by implementing hole punching. You can get past firewalls, but I'm really not sure about antivirus software policies.
Or you can establish a well-known HTTP-based discovery service on the internet:
A server comes alive and sends its (local) IP address to the discovery service (and keeps sending keep-alives).
On startup, the client queries that discovery service, which identifies the local subnet the client is in and returns the local IP address of the matching server.
That of course creates a single point of failure: if the discovery service kicks the bucket, your clients cannot find servers. You can remedy that by replicating the service and/or introducing fallback mechanisms (like the purely local discovery you already have), which you probably want to do anyway. The only problem you might have is subnet identification, if computers in local subnets don't share external IP addresses (then it depends on what a "local subnet" is for you).
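A rough sketch of that registration/lookup flow, assuming a hypothetical registry at https://discovery.example.com with made-up /register and /lookup endpoints (the registry matches clients to servers by the external IP it sees on each request; requests is a third-party package):

```python
# Internet-based discovery sketch: servers register their local IP with a
# central registry; clients behind the same external IP look it up.
# The registry URL and endpoints are hypothetical.
import socket
import time
import requests

REGISTRY_URL = "https://discovery.example.com"

def server_keepalive_loop(interval_seconds=30):
    """Server side: keep re-registering our local address with the registry."""
    local_ip = socket.gethostbyname(socket.gethostname())
    while True:
        requests.post(f"{REGISTRY_URL}/register",
                      json={"local_ip": local_ip}, timeout=5)
        time.sleep(interval_seconds)

def lookup_server():
    """Client side: the registry pairs us with servers seen behind our external IP."""
    reply = requests.get(f"{REGISTRY_URL}/lookup", timeout=5)
    reply.raise_for_status()
    return reply.json().get("local_ip")   # None if no server is registered
```

The keep-alive interval doubles as the registry's liveness signal: entries that stop refreshing can be expired, so clients never get pointed at a dead server.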