I have followed the GKE tutorial for creating an HTTP Load Balancer using the beta Ingress type and it works fine when using the nginx image. My question is about why Ingress is even necessary.
I can create a container engine cluster and then create an HTTP Load Balancer that uses the Kubernetes-created instance group as the service backend, and everything seems to work fine. Why would I go through all of the hassle of using Ingress when using Kubernetes for only part of the process seems to work just fine?
While you can create an "unmanaged" HTTP Load Balancer by yourself, what happens when you add new deployments (pods with services) and want traffic to be routed to them as well (perhaps using URL maps)?
What happens when one of your services goes down for some reason and its replacement is assigned a different node port?
The great thing about Ingress is that it manages the HTTP Load Balancer for you: it keeps track of the Kubernetes resources and updates the HTTP Load Balancer accordingly.
The ingress object serves two main purposes:
It is simpler to use for repeatable deployments than configuring the HTTP balancer yourself, because you can write a short declarative YAML file describing what you want your load balancing to look like rather than a script of seven gcloud commands (see the sketch below).
It is (at least somewhat) portable across cloud providers.
If you are running on GKE and don't care about the second one, you can weigh the ease of use of the ingress object and the declarative syntax vs. the additional customization that you get from configuring the load balancer manually.
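For example, a minimal Ingress manifest for the tutorial's nginx deployment might look roughly like this (the Service name and port here are assumptions, not something given in the question):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: basic-ingress
    spec:
      backend:
        # "nginx" is an assumed Service name exposing the nginx pods on port 80
        serviceName: nginx
        servicePort: 80

Applying that one file with kubectl replaces the forwarding rule, target proxy, URL map, backend service, and health check you would otherwise create by hand with gcloud, and the controller keeps them in sync as the cluster changes.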
Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. The redirection is achieved by pointing to different load balancers (HAProxy), as defined in the Nginx vhost file.
Is there any option available in GCP to provide this redirection without using Nginx or Apache? Also, what are the alternatives available in GCP to HAProxy?
From what I understand, you have a few services (and maybe some static content) served to the Internet via HAProxy, which is doing the load balancing.
Based on that, I assume that someone going to "yourservice.com/example1" is routed by the load balancer to service1, while someone requesting "yourservice.com/static1" is served static content by a different service, and so on.
GCP has exactly the service you're asking for: HTTP(S) Load Balancing, which can do URL/content-based load balancing. You can also move your current services to Google Compute Engine (as virtual machines) or to Google Kubernetes Engine, which will run your services as containers.
Also, both GCE and GKE can do autoscaling if that's what you need.
The load balancing provided by GCP can do all the content-based balancing that the HAProxy you're using now does.
If you're also using some internal load balancing now, I believe that could be handled by the same load balancer, which would simplify your configuration (just some VMs or containers running your services, plus one load balancer).
You can read more in the GCP documentation about load balancing concepts, and specifically about setting up content-based load balancing.
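As a rough sketch, the path-based routing you describe in the Nginx vhost maps onto a GCP URL map roughly like this (the backend service names and paths are made up for illustration and assume the backend services already exist):

    # route /example1/* to service1 and /static1/* to a static backend,
    # everything else to the default service
    gcloud compute url-maps create my-url-map --default-service service1
    gcloud compute url-maps add-path-matcher my-url-map \
        --path-matcher-name my-paths \
        --default-service service1 \
        --path-rules "/example1/*=service1,/static1/*=static-backend"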
I am trying to expose services to the world outside the Rancher clusters.
The hostnames api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters, and I want to use one cluster specifically. It spans three nodes: node1cluster1, node2cluster1, and node3cluster1.
I have added an Ingress inside the Rancher cluster to forward requests for api1.mydomain.com to a specific workload.
In our DNS I created an entry for api1.mydomain.com, but it isn't working yet.
Which IP or URL should the DNS entry point to? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Or a single node of the cluster that has the Ingress (node1cluster1)?
Neither of these options seems ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: Create a DNS entry with the IP address of Node1cluster1.
I am not sure how you installed the ingress controller, but by default it's deployed as a DaemonSet, so you can use any one of the cluster nodes' IP addresses, or all of them. (Don't expect DNS to load balance, though.)
The other option is to have a load balancer in front with all the node IP addresses configured to actually distribute the traffic.
Another strategy I have seen is to dedicate a handful of nodes to running the ingress controller by using taints/tolerations, and not schedule regular workloads on them, as sketched below.
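A rough sketch of that approach, using the node name from the question and an illustrative taint key/label (nothing here is Rancher-specific):

    # keep ordinary workloads off the node and mark it for ingress duty
    kubectl taint nodes node1cluster1 dedicated=ingress:NoSchedule
    kubectl label nodes node1cluster1 ingress-node=true

and in the ingress controller's pod spec:

    tolerations:
    - key: dedicated
      operator: Equal
      value: ingress
      effect: NoSchedule
    nodeSelector:
      ingress-node: "true"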
I am maintaining a couple of Spring Boot web applications.
They are currently running as WAR files deployed on the same Tomcat instance on two Linux servers.
In front I have a load balancer to distribute the load through the URL myapps.mydomain.com.
Apart from the actual applications, both backend Tomcat instances expose /up/up.html to allow the load balancer to know the status of each.
myapps.mydomain.com:
ip-address-1:8080/up/up.html
ip-address-2:8080/up/up.html
Now I am in the process of migrating the applications to OpenShift, and I have all application endpoints including /up/up.html exposed by OpenShift as myapps.openshift.mydomain.com.
For a period I would like to run the OpenShift apps in parallel with the legacy servers through the legacy load balancer - basically:
myapps.mydomain.com:
ip-address-1:8080/up/up.html
ip-address-2:8080/up/up.html
myapps.openshift.mydomain.com:80/up/up.html
such that the load is distributed one third to each.
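In other words, something like the following backend definition (written here in HAProxy syntax purely as an illustration, since the actual load balancer product isn't named; the placeholder addresses are from above):

    backend myapps
        option httpchk GET /up/up.html
        server legacy1 ip-address-1:8080 check
        server legacy2 ip-address-2:8080 check
        # the OpenShift router routes by Host header, so forwarded requests
        # (and the health check) need Host: myapps.openshift.mydomain.com
        server openshift myapps.openshift.mydomain.com:80 check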
The guys managing the legacy load balancer claim that this cannot be done :-(
I myself don't know much about load balancers. I have been googling the subject and found articles about routing from "edge load balancers" to OpenShift, but I really don't know the right term for what I am trying to do.
I was hoping the load balancer could treat myapps.openshift.mydomain.com as just another black box, like the two legacy servers.
Can this be done?
And if so - what is the correct terminology for this concept - what is the proper name for what I am trying to do?
I have an autoscaling instance group, and I need to set up a proxy/load balancer that takes requests and sends them to the instance group.
I thought about using a load balancer, but I need to handle both HTTP(S) and TCP requests.
Is there some way (or some workaround) to solve this?
EDIT: The problem is that in the TCP load balancer settings I can set the backend service (the managed instance group I need to use) for only one port.
For your use case, no single load balancing configuration available on Google Cloud Platform will serve the purpose. On the other hand, since you are using a managed instance group (autoscaling), it cannot be used as the backend for two different load balancers.
As per my understanding, the closest you can get is to use Network Load Balancing (TCP) and install SSL certificates at the instance level to handle HTTPS requests.
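A minimal sketch of that approach with gcloud (the names, region, and zone are placeholders; it assumes a target-pool-based network load balancer, with TLS handled by the instances themselves):

    # create a target pool and point a TCP forwarding rule at it
    gcloud compute target-pools create my-pool --region us-central1
    gcloud compute forwarding-rules create my-tcp-443 \
        --region us-central1 --ports 443 --target-pool my-pool
    # repeat the forwarding rule for other TCP ports (e.g. 80) as needed

    # attach the autoscaled managed instance group to the pool
    gcloud compute instance-groups managed set-target-pools my-group \
        --target-pools my-pool --zone us-central1-b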
I have a server which contains the data to be served upon API requests from mobile clients. The data is more or less persistent and the update frequency is very low (say, once a week). But the table design is pretty heavy, which makes API requests slow to serve.
The Web Service is implemented with Yii + PostgreSQL.
Is using memcached a way to solve this problem? If yes, how do I handle the cached data becoming dirty?
Is there any alternative solution? Does PostgreSQL have any built-in mechanism like MEMORY in MySQL?
How about Redis?
You could use memcached, but requests would still hit your database server. In your case, you say the query results are more or less persistent, so it might make more sense to cache the JSON responses from your Web Service.
This can be done using a reverse proxy with a built-in cache. An example of how we do it with Jetty (Java) and NGINX might help you the most:
In our setup, we have a Jetty (Java) instance serving an API for our mobile clients. The API is listening on localhost:8080/api and returning JSON results fetched from some queries on a local MySQL database.
At this point, we could serve the API directly to our clients, but here comes the Reverse Proxy:
In front of the API sits an NGINX web server listening on 0.0.0.0:80 (all interfaces, port 80).
When a mobile client requests /api on port 80, the built-in reverse proxy first tries to serve the exact request URI from its cache. If that fails, it fetches the response from localhost:8080/api, puts it in its cache, and serves the newly cached value.
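A minimal nginx.conf sketch of that setup (the cache path, zone name, and sizes are illustrative; the directives live inside the http {} block):

    proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m
                     max_size=1g inactive=7d;

    server {
        listen 80;

        location /api {
            proxy_pass http://localhost:8080;
            proxy_cache api_cache;
            # cache successful responses for a week, everything else briefly
            proxy_cache_valid 200 7d;
            proxy_cache_valid any 1m;
            # handy for debugging: shows HIT/MISS per response
            add_header X-Cache-Status $upstream_cache_status;
        }
    }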
Benefits:
You can use other NGINX goodies: automatic GZIP compression of the cached JSON files
SSL endpoint termination at NGINX.
NGINX's worker model helps when you have many more connections, all requesting data from the cache.
You can consolidate your service endpoints
Think about cache-invalidation:
You have to think about cache invalidation. You can tell NGINX to hold on to its cache for, say, a week for all HTTP 200 responses from localhost:8080/api, and one minute for all other HTTP status codes. But if the time comes when you want to update the API in under a week, the cache is stale, so you either have to delete it somehow or turn the caching time down to an hour or a day (so that most people still hit the cache).
This is what we do: we chose to delete the cache when it becomes dirty. We have another job running on the server, listening for an Update-API event triggered via Puppet; that job takes care of clearing the NGINX cache for us.
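The clearing step itself can be as simple as the following (the path matches whatever you set in proxy_cache_path; open-source NGINX has no built-in purge command, so deleting the cache files is the usual approach):

    # wipe the cached entries; NGINX repopulates the cache on subsequent requests
    rm -rf /var/cache/nginx/api/*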
Another idea would be to add the cache-clearing function inside your Web Service itself. The reason we decided against this solution is that the Web Service would have to know it runs behind a reverse proxy, which breaks separation of concerns. But I would say it depends on what you are planning.
Another thing that would make your Web Service more correct would be to serve proper ETag and cache-expiry headers with each JSON response. Again, we did not do that, because we have one big update event instead of small ones for each file.
Side notes:
You do not have to use NGINX, but it is really easy to configure.
NGINX and Apache have SSL support
There is also the famous Varnish reverse proxy (https://www.varnish-cache.org), but to my knowledge it does not do SSL (yet?).
So, if you were to use Varnish in front of your Web Service + SSL, you would use a configuration like:
NGINX -> Varnish -> Web Service.
References:
- NGINX server: http://nginx.com
- Varnish Reverse Proxy: https://www.varnish-cache.org
- Puppet IT Automation: https://puppetlabs.com
- NGINX reverse proxy tutorials: http://www.cyberciti.biz/faq/howto-linux-unix-setup-nginx-ssl-proxy/ and http://www.cyberciti.biz/tips/using-nginx-as-reverse-proxy.html