Backend utilization at 500% - Google Cloud Platform load balancer

Good morning everybody. I ran into an issue this morning with my company's web load balancer.
We have a Google load balancer linked to 2 instances, which are 2 pfSense boxes behind which we configure our web servers.
This morning, under Load balancer > Backend > Backend details, the backend load/utilization was 500%, and consequently none of my web servers were reachable by customers.
CPU load and memory were fine and I cannot see any other issue. Has such a thing ever happened to anyone?
Best regards,
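One thing worth checking (an assumption, since the backend service configuration isn't shown): in a GCP HTTP(S) load balancer, the utilization figure is measured against the capacity configured on the backend (balancing mode, maxRatePerInstance / maxUtilization, capacity scaler), not against raw CPU or memory. So 500% can simply mean traffic is five times the configured capacity even while the VMs look idle. A minimal sketch of the arithmetic, with made-up numbers:

    // Sketch of how a GCP backend's reported utilization can reach 500%
    // even when CPU/memory look fine. All numbers are hypothetical.
    public class BackendUtilization {
        public static void main(String[] args) {
            // Assumed backend service settings (RATE balancing mode):
            double maxRatePerInstance = 100.0; // configured requests/sec per instance
            int instances = 2;                 // the two pfSense backends
            double capacityScaler = 1.0;       // default scaler

            double configuredCapacity = maxRatePerInstance * instances * capacityScaler; // 200 rps
            double observedRate = 1000.0;      // actual incoming requests/sec

            double utilization = observedRate / configuredCapacity; // 5.0
            System.out.printf("Backend utilization: %.0f%%%n", utilization * 100); // 500%
        }
    }

If the configured capacity is much lower than real traffic, the utilization figure blows past 100% long before the instances themselves are stressed.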

Related

Can an application running on OpenShift be part of a load balancer setup with the same application running on legacy Linux/Tomcat?

I am maintaining a couple of Spring Boot web applications.
They are currently running as WAR files deployed on the same Tomcat instance on two Linux servers.
In front I have a load balancer to distribute the load through the URL myapps.mydomain.com.
Apart from the actual applications, both backend Tomcat instances expose /up/up.html to let the load balancer know the status of each (see the sketch after the list below).
myapps.mydomain.com:
ip-address-1:8080/up/up.html
ip-address-2:8080/up/up.html
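For reference, the status endpoint the load balancer polls can be as trivial as a controller that returns HTTP 200. A minimal sketch of such an endpoint inside a Spring Boot app (the class name is made up, and the real /up/up.html may just be a static file; the balancer only cares about the 200):

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical health endpoint equivalent to serving a static /up/up.html:
    // the load balancer only checks that it gets an HTTP 200 back.
    @RestController
    public class UpController {
        @GetMapping("/up/up.html")
        public ResponseEntity<String> up() {
            return ResponseEntity.ok("UP");
        }
    }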
Now I am in the process of migrating the applications to OpenShift, and I have all application endpoints, including /up/up.html, exposed by OpenShift as myapps.openshift.mydomain.com.
For a period I would like to run the OpenShift apps in parallel with the legacy servers through the legacy load balancer - basically:
myapps.mydomain.com:
ip-address-1:8080/up/up.html
ip-address-2:8080/up/up.html
myapps.openshift.mydomain.com:80/up/up.html
such that the load is distributed one third to each.
The guys managing the legacy load balancer claim that this cannot be done :-(
I don't know much about load balancers myself. I have been googling the subject and found articles about routing from "edge load balancers" to OpenShift, but I really don't know the right term for what I am trying to do.
I was hoping the load balancer could treat myapps.openshift.mydomain.com as just another black box - like the two legacy servers.
Can this be done?
And if so, what is the correct terminology for this concept - what is the proper name for what I am trying to do?
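Conceptually, what is being asked for is just adding the OpenShift route as a third pool member that the balancer treats as an opaque host:port. A rough sketch of that idea (hypothetical names, plain round-robin, no health checking - not how any particular balancer is implemented):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Conceptual sketch: each pool member is an opaque host:port, so an
    // OpenShift route can sit next to the legacy servers as a black box.
    public class RoundRobinPool {
        private final List<String> members = List.of(
                "ip-address-1:8080",
                "ip-address-2:8080",
                "myapps.openshift.mydomain.com:80"); // just another black box
        private final AtomicInteger next = new AtomicInteger();

        // Each call returns the next member, so load splits one third each.
        public String pick() {
            int i = Math.floorMod(next.getAndIncrement(), members.size());
            return members.get(i);
        }

        public static void main(String[] args) {
            RoundRobinPool pool = new RoundRobinPool();
            for (int n = 0; n < 6; n++) System.out.println(pool.pick());
        }
    }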

Azure App Service - Outbound IPs Changed?

I have a site running on an Azure App Service. It connects to a MySQL DB on Google CloudSQL.
All of a sudden I am getting an error when I hit a page on my site. The error is:
Configuration Error - Reading from the stream has failed.
I know this is related to MySQL and an attempt to read from the DB.
The DB itself is fine - minimal connections, no stress.
The site runs fine from my local VS connecting to said database.
This makes me think I have hit some kind of 'outbound' connection limit on Azure. Can anyone confirm?
The Azure site is up and running, but as soon as it tries to connect to the DB it falls over.
Thanks for any help you can give!
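To separate a network-level block from a database problem, a quick reachability probe run from the app environment can help. A minimal sketch using a plain TCP connect to the MySQL port (the host name is a placeholder):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Probes raw TCP reachability of the Cloud SQL endpoint from the app host.
    // "Reading from the stream has failed" during the handshake often means
    // the connection is dropped at the network layer rather than by MySQL.
    public class DbReachability {
        public static void main(String[] args) {
            String host = "your-cloudsql-ip.example.com"; // placeholder
            int port = 3306;
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 5000); // 5s timeout
                System.out.println("TCP connect OK: " + s.getRemoteSocketAddress());
            } catch (IOException e) {
                System.out.println("TCP connect failed: " + e);
            }
        }
    }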
Update - IP Changed??
It appears that the App Service outbound IP addresses changed at some point yesterday, so our external MySQL DB started blocking the connection attempts. Has anyone experienced this? Every single outbound IP changed, and nothing has changed in the setup of the app (no scaling, etc.).
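One way to catch this early is to periodically log the app's current outbound IP and compare it against the DB allowlist. A sketch using an external echo service (api.ipify.org is an assumption here; any what-is-my-IP endpoint works):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Logs the IP the outside world sees for this app's outbound traffic,
    // so a silent change in App Service outbound IPs shows up in the logs.
    public class OutboundIpCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("https://api.ipify.org")).GET().build();
            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println("Current outbound IP: " + resp.body());
        }
    }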

Google Load-Balancing CDN

I am using the Google Load-Balancer with the CDN option enabled.
When I set up the Backend Configuration for the load balancer, I set up a backend with instances in US-Central, US-West and US-East.
Everything is working great, except that all traffic is being routed only to the US-West backend service.
Does the load-balancer option route traffic to the closest backend service?
I see that there is an advanced menu in the load balancer for creating forwarding rules, target proxies and more.
Is there something I need to do to make my load balancer route clients to the closest backend?
If they are in Florida and the CDN does not have the file, should they get routed to the US-East VM instance?
If that is not possible, it seems like having only a US-Central server would be better than having US-Central, US-East and US-West. That way East Coast misses are not going to the West Coast to get the file; everything will pull from the central location.
Unless there is a way to route traffic from the load-balancer to the closest VM instance, it seems as if the only solution would be to create different load balancers with the CDN enabled and use DNS routing to point to the CDN pool that is closest.
That setup would use 3 different CDN IP addresses, 3 Compute Engine IP addresses, and DNS latency or location routing. If they are in Florida, route them to the Google Load Balancer CDN on the East Coast.
I'm not sure that would be a good solution on top of the Anycast ip routing. It seems like overkill.
Thank you for listening and any help or guidance would be appreciated.
"By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port."
Similar question: Google compute engine load balancing not routing properly - except that in my case all traffic in a live environment is going to the same VM instance.
I am using the Google CDN Frontend Anycast ip address.
I think Elving is right and there may be a misconfiguration. Here is a screenshot of the VM instances in the Google Cloud console. It says the two instances aren't in use.
Here is another picture of the instance groups. I don't see a clear way to attach the instances to the instance groups.
The load balancer will automatically route traffic to the nearest instance group with capacity. You don't need to do anything other than configure your backend service to use multiple instance groups.
There's more information at https://cloud.google.com/compute/docs/load-balancing/http/.

Wordpress setup latency on Azure

I have a Wordpress environment setup on Azure.
The front end is on a WebApp (size S2 - 2 cores & 3.5 GB RAM) while the DB is on 2 replicated Classic Virtual Machines (size F2 - 2 cores / 4 GB RAM).
We also tried connecting the web app to the VMs over a point-to-site VPN, which in a nutshell is a VPN from one Azure service (the WebApp) to another (the VMs), so ultimately the connection is still being made over the internet.
I'm looking for ways to improve network latency between Azure's WebApp and Virtual Machines.
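Before tuning anything, it helps to quantify the WebApp-to-VM round trip. A sketch that times a handful of TCP connects to the DB VM's MySQL port (host is a placeholder) to get a rough latency number, including any VPN tunnel overhead:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Rough WebApp -> VM latency probe: time repeated TCP connects to the
    // MySQL port. Over a point-to-site VPN this includes tunnel overhead.
    public class LatencyProbe {
        public static void main(String[] args) throws Exception {
            String host = "db-vm.example.com"; // placeholder for the VM address
            int port = 3306;
            for (int i = 0; i < 5; i++) {
                long start = System.nanoTime();
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 5000);
                }
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println("connect " + i + ": " + micros + " µs");
            }
        }
    }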
Firstly, if you're trying to "improve" the network latency, then you have an issue somewhere else. Please provide more details on your latency issue.
You should be pushing towards the ARM stuff now. If you want to improve performance, you can try using Azure Service Fabric.

OpenShift Tomcat killed after one day

I have deployed a Java web application on OpenShift using a Tomcat server. The application is running fine, but after 1-2 days the Tomcat server automatically gets killed, so I have to start it manually. Can anyone tell me what is going wrong here? I'm using a free account.
Thanks,
A.G.Ishara
Applications on the free account get idled automatically after 24 hours of no external HTTP requests. The application should then automatically restart itself once a request is made.
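If the idling is the cause and the app must stay warm, a common workaround (an assumption, not an official feature) is to ping the app periodically from somewhere that is always on, so the 24-hour no-traffic window never elapses. A sketch with a placeholder URL:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Periodically hits the app so the free-tier idler never sees 24 quiet
    // hours. Run this from an always-on host, not inside the idled app.
    public class KeepAlive {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://yourapp.example.com/")).GET().build(); // placeholder URL
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            ses.scheduleAtFixedRate(() -> {
                try {
                    HttpResponse<Void> resp = client.send(req, HttpResponse.BodyHandlers.discarding());
                    System.out.println("ping -> HTTP " + resp.statusCode());
                } catch (Exception e) {
                    System.out.println("ping failed: " + e);
                }
            }, 0, 6, TimeUnit.HOURS); // every 6 hours, well inside the 24h window
        }
    }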