I want to understand the concept and the traffic flow when using AGIC. I'm using Azure advanced networking (Azure CNI) in AKS. What I see is that Azure automatically creates an Azure Load Balancer once the cluster is created, so now I have an Application Gateway working together with AGIC. What's the role of the Azure Load Balancer in this case? And how does traffic then flow in and out of the cluster (ingress and egress)? Should the load balancer also have a public IP? Any explanation or resources would be very helpful.
Related
I am learning how to create an APIM instance using PowerShell, following the steps given here: https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-integrate-internal-vnet-appgateway
At one point they talk about creating an API Management virtual network object.
What exactly is an API Management virtual network object?
AFAIK,
To reach backend services within the network, the Azure APIM instance should be deployed into an Azure Virtual Network.
So you would create the Virtual Network, the subnets in that VNet, the NSGs, the NSG rules for the Application Gateway, and so on.
When you attach those details (the VNet and subnet data) as an object to the APIM instance/service, that object is what is referred to as the APIM virtual network object.
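Concretely, in the PowerShell flow from the linked doc, the virtual network object is what New-AzApiManagementVirtualNetwork returns; a rough sketch (the resource names and the subnet ID below are placeholders, not values from the tutorial):

```powershell
# Build the virtual network object from the subnet APIM should live in
$subnetId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<apim-subnet>"
$apimVnet = New-AzApiManagementVirtualNetwork -SubnetResourceId $subnetId

# Pass that object when creating the APIM instance in internal VNet mode
New-AzApiManagement -ResourceGroupName "<rg>" -Name "my-apim" -Location "East US" `
    -Organization "Contoso" -AdminEmail "admin@contoso.com" `
    -VirtualNetwork $apimVnet -VpnType "Internal" -Sku "Developer"
```

So the "object" is nothing more than a small wrapper holding the subnet reference that tells APIM where in your VNet to deploy.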
We are deploying our services in a containerized environment using AWS Fargate. A single task definition holds all of our container definitions, and the task deploys successfully.
We need URLs to the services running in those containers for further processing. Is there a way to get an API gateway in front of these containers?
You can get URLs for your Fargate services by creating an Application Load Balancer in front of them.
https://itnext.io/run-your-containers-on-aws-fargate-c2d4f6a47fda
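As a rough sketch with the AWS CLI (every name, ID, and port below is a hypothetical placeholder; it assumes awsvpc networking, which Fargate requires):

```bash
# Create a target group that registers tasks by IP (required for Fargate/awsvpc)
aws elbv2 create-target-group \
    --name my-service-tg \
    --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip

# Create the Application Load Balancer in your public subnets
aws elbv2 create-load-balancer \
    --name my-service-alb \
    --subnets subnet-aaaa1111 subnet-bbbb2222 \
    --security-groups sg-0123456789abcdef0

# Forward HTTP traffic on port 80 to the target group
aws elbv2 create-listener \
    --load-balancer-arn <alb-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# Point the ECS service at the target group so tasks register automatically
aws ecs create-service \
    --cluster my-cluster \
    --service-name my-service \
    --task-definition my-task:1 \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
    --load-balancers targetGroupArn=<target-group-arn>,containerName=my-container,containerPort=80
```

The ALB's DNS name (from aws elbv2 describe-load-balancers) then serves as the stable URL for the service; tasks register and deregister with the target group automatically as they start and stop.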
Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. URL redirection is achieved and pointed to different load balancers (HAProxy) as specified in the Nginx vhost file.
Is there any option in GCP that provides this redirection without using Nginx or Apache? Also, what are the alternative options in GCP for HAProxy?
From what I understand, you have a few services (and maybe some static content) served to the Internet via HAProxy, which is doing the load balancing.
Based on that, I assume that someone who goes to "yourservice.com/example1" is routed by the load balancer to service1, someone who requests "yourservice.com/static1" is served static content by a different service, and so on.
GCP has exactly the service you're asking for: its HTTP(S) load balancer can do URL/content-based load balancing. You can also move your current services to Google Compute Engine (as virtual machines) or Google Kubernetes Engine, which runs your services as containers.
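A minimal sketch of that path-based routing with gcloud (the map, matcher, and backend service names are hypothetical, and it assumes backend services for each of your services already exist):

```bash
# Create a URL map whose default backend handles anything unmatched
gcloud compute url-maps create my-url-map \
    --default-service=default-backend

# Route /example1/* and /static1/* to dedicated backend services
gcloud compute url-maps add-path-matcher my-url-map \
    --path-matcher-name=content-matcher \
    --default-service=default-backend \
    --path-rules="/example1/*=service1-backend,/static1/*=static-backend"
```

The URL map is then attached to a target HTTP proxy and a global forwarding rule, which together form the HTTP(S) load balancer.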
Also, both GCE and GKE can do autoscaling, if that's what you need.
The load balancing provided by GCP can do all the content-based balancing that the HAProxy you're using now does.
If you're using some internal load balancing at the moment, I believe that could also be handled by the same load balancer, which would simplify your configuration (just some VMs or containers running your services, plus one load balancer).
You can read more about load balancing concepts, and specifically about setting GCP up to do this, in the GCP documentation.
I am using an external monitoring service (not Stackdriver).
I wish to monitor the number of unhealthy hosts on my load balancer.
It seems like the Google Cloud API doesn't expose this metric, so I implemented a custom script that gets the load balancer's instance groups, fetches the instances' data (DNS names), and performs the health checks itself.
Pretty cumbersome. Is there a simpler way to do it?
You can use the command 'gcloud compute backend-services get-health' to get the status of each instance in your backend service. It reports every instance that is part of the backend service as HEALTHY or UNHEALTHY.
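For example (the backend service name is a placeholder; this assumes a global backend service and that jq is available for the counting step):

```bash
# List the health of every instance behind the backend service
gcloud compute backend-services get-health my-backend-service --global

# Count only the unhealthy instances, e.g. to feed an external monitor
gcloud compute backend-services get-health my-backend-service --global \
    --format=json \
  | jq '[.[].status.healthStatus[]? | select(.healthState != "HEALTHY")] | length'
```

For a regional backend service, swap --global for --region=<region>.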
I have followed the GKE tutorial for creating an HTTP Load Balancer using the beta Ingress type, and it works fine when using the nginx image. My question is about why Ingress is even necessary.
I can create a container engine cluster and then create an HTTP Load Balancer that uses the Kubernetes-created instance group as the service backend, and everything seems to work fine. Why would I go through all the hassle of using Ingress when using Kubernetes for only part of the process seems to work just fine?
While you can create an "unmanaged" HTTP Load Balancer yourself, what happens when you add new deployments (pods with services) and want traffic to be routed to them as well (perhaps using URL maps)?
What happens when one of your services goes down for some reason and, when recreated, is allocated a different node port?
The great thing about Ingress is that it manages the HTTP Load Balancer for you, keeping track of your Kubernetes resources and updating the HTTP Load Balancer accordingly.
The ingress object serves two main purposes:
It is simpler to use for repeatable deployments than configuring the HTTP balancer yourself, because you can write a short declarative yaml file describing what you want your balancing to look like (see the sketch after this list) rather than a script of 7 gcloud commands.
It is (at least somewhat) portable across cloud providers.
If you are running on GKE and don't care about the second one, you can weigh the ease of use of the ingress object and the declarative syntax vs. the additional customization that you get from configuring the load balancer manually.
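For comparison, here is a minimal sketch of that declarative file on GKE (the service name and port are hypothetical, and it assumes a Service named web of type NodePort already exists; note the apiVersion shown is the current networking.k8s.io/v1 rather than the beta one from the tutorial):

```bash
# Create the Ingress; on GKE the ingress controller then provisions an
# HTTP Load Balancer and keeps it in sync with this object.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
EOF
```

If a backend pod moves or a service's node port changes, the ingress controller updates the load balancer's backends for you, which is exactly the bookkeeping you would otherwise script by hand.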