I'm trying to set up an Elastic Beanstalk application using HTTP/2. To do this, I have created an ALB.
Target group:
The weird thing is that, even though I have set up the load balancer as shared in the Beanstalk configuration, an additional listener has been created:
This is the listener of the ALB:
That's the one being used by the environment, but I do not know how to change it back to the correct one. Any idea?
The instances never reach a healthy state. I'm starting my node application (using the fully managed solution) like this: .listen(PORT), where PORT is an environment variable set by AWS. It is usually 8080, in case that helps.
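For reference, a minimal sketch of what the startup code amounts to (names are illustrative; the real app does more):

    // server.js - minimal sketch; EB provides PORT (usually 8080)
    const http = require('http');
    const PORT = process.env.PORT || 8080;

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok'); // respond 200 so the target group health check can pass
    });

    server.listen(PORT, () => console.log(`listening on ${PORT}`));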
I need to access a Postgres database from my Java code, which runs in an OpenShift cluster, without manually initiating port forwarding through the oc port-forward command.
I have tried using the OpenShift Java client's connection factory class to get the connection by passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method to do so does depend a bit on your physical infrastructure, because we are by definition trying to integrate with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres speaks plain TCP rather than HTTP. But one of the other options in that chapter (Load Balancer, External IP, or NodePort) is probably your best choice, depending on your networking infrastructure and needs.
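As a rough illustration only (the names, namespace, and ports below are assumptions about your setup), a NodePort Service in front of Postgres could look something like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: postgres-external    # illustrative name
      namespace: my-project      # your project/namespace
    spec:
      type: NodePort
      selector:
        app: postgres            # must match the labels on your Postgres pod
      ports:
        - port: 5432             # service port
          targetPort: 5432       # port Postgres listens on in the container
          nodePort: 30432        # reachable from outside at <node-ip>:30432

Your Java code would then use an ordinary JDBC URL such as jdbc:postgresql://<node-ip>:30432/<dbname>, with no oc port-forward involved.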
I am trying to expose services to the world outside our Rancher clusters.
api1.mydomain.com, api2.mydomain.com, and so on should be accessible.
Inside Rancher we have several clusters; I am using one cluster specifically. It spans 3 nodes: node1cluster1, node2cluster1, and node3cluster1.
I have added an Ingress inside the Rancher cluster to forward requests for api1.mydomain.com to a specific workload.
In our DNS I created an entry for api1.mydomain.com, but it doesn't work yet.
Which IP or hostname should I point the DNS entry at? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Should it be a single node of the cluster that hosts the Ingress (node1cluster1)?
Neither of these options seems ideal. What is the correct way to do this?
I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)
Simple answer based on the inputs provided: create a DNS entry with the IP address of node1cluster1.
I am not sure how you installed the ingress controller, but by default it's deployed as a DaemonSet, so you can use either any one of the cluster nodes' IP addresses or all of them. (Don't expect DNS to load-balance, though.)
The other option is to put a load balancer in front, with all the node IP addresses configured as backends, to actually distribute the traffic.
Another strategy I have seen is to dedicate a handful of nodes to running the ingress controller, using taints/tolerations, and not schedule regular workloads on them.
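For illustration, and assuming a standard NGINX ingress controller and a Service called api1-svc in front of your workload (both names are made up), the Ingress behind api1.mydomain.com would look roughly like this (the apiVersion depends on the Kubernetes version your Rancher cluster runs):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api1-ingress            # illustrative name
    spec:
      rules:
        - host: api1.mydomain.com   # the hostname your DNS entry must resolve
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: api1-svc  # assumed Service for the workload
                    port:
                      number: 80

The DNS A record for api1.mydomain.com then points at node1cluster1's IP (or at all three node IPs, or at a load balancer in front of them), not at rancher.mydomain.com.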
There are many situations where we need to override the nginx conf in an AWS Beanstalk environment:
Set a maximum file attachment size
Force HTTP to HTTPS
Set different cache expiry for different static resources
Set up WebSocket support
gzip certain file types
etc.
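To make that list concrete, these are the kinds of directives involved (values are only examples, not what we actually run):

    # typically inside the http/server block of the proxy config
    client_max_body_size 20M;    # max file attachment size
    gzip on;
    gzip_types text/css application/javascript application/json;   # gzip file types

    location ~* \.(?:css|js|png|jpg)$ {
        expires 7d;              # cache expiry for static resources
    }

    # force HTTP to HTTPS, assuming the usual X-Forwarded-Proto header from the load balancer
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }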
AWS Support suggests using an nginx.conf that is a copy of the one on the Beanstalk instance (taken from /etc/nginx/nginx.conf). This is used as the base, and the new directives or blocks are then added to it.
Then use .ebextensions/nginx/nginx.conf with this content in the project.
However, the biggest problem with this is that if AWS changes the base nginx.conf, it can be hard to know when it has changed, and the whole cycle of copying the file and re-adding the overrides has to be repeated. Something like this
The other option that most web searches suggest is the use of container_commands and the creation of files in the appdeploy or configdeploy hooks.
In container_commands, folks have suggested modifying /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf.
Something like this:
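(The snippets I found are roughly along these lines; this is my paraphrase, not a tested config, and the directive being added is just an example.)

    # .ebextensions/nginx-overrides.config
    container_commands:
      01_nginx_override:
        command: |
          echo 'client_max_body_size 20M;' >> '/tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf'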
The problem with this approach is that it runs only when the app is deployed, not when a Beanstalk configuration change is made (like changing an environment variable).
My question is: what is the recommended way of overriding the nginx config?
To help keep your Elastic Beanstalk deployment manageable, please keep in mind a few things:
First: AWS will not make changes to any config in your Elastic Beanstalk production environment directly. Similarly, you should not make any changes in your production Elastic Beanstalk environment directly either.
AWS recommends you change your configuration in your development environment and redeploy via the console or via the Elastic Beanstalk Command Line Interface (EB CLI) when you want that change to go live. This applies whether or not the app is containerized, whether or not it is load balanced, and whether you use the default Elastic Beanstalk nginx config file or the override option.
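With the EB CLI, that redeploy step typically looks like this (the environment name here is made up):

    # from the project root, after committing your changes
    eb init               # one-time: associate the project with your EB application and region
    eb deploy my-env      # package a new application version and deploy it to the environment
    eb status my-env      # confirm the environment is healthy after the deploy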
Second: from the first link you provided, you either use the default Elastic Beanstalk nginx config OR the override config in .ebextensions; there is no mixing of the two. This should help lessen your confusion. When you make changes to either in your development environment, the change implies a new version of your app, and you need to deploy it to production for it to take effect.
Third: nginx can act as a proxy to your origin server, and the origin server might be dictating cache expiry for assets. There are ways to change your nginx config to override the origin's settings if needed. From the NGINX caching guide:
By default, NGINX respects the Cache-Control headers from origin servers. It does not cache responses with Cache-Control set to Private, No-Cache, or No-Store or with Set-Cookie in the response header. NGINX only caches GET and HEAD client requests.
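If you do need nginx to override what the origin sends, the relevant directives look roughly like this (a sketch only; the cache zone name is made up, and values should be adjusted to your needs):

    # inside the location block that proxies to your app
    proxy_ignore_headers Cache-Control Expires Set-Cookie;  # ignore the origin's cache headers
    proxy_cache my_cache;                                    # requires a proxy_cache_path zone named my_cache
    proxy_cache_valid 200 302 10m;                           # cache successful responses for 10 minutes
    expires 7d;                                              # what clients are told to cache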
I hope this helps clear things up. Keep these techniques in mind and go ahead and deploy. If it's wrong, delete it and try again.
Ask a more specific question regarding your application and specific config if you get stuck. The more details you provide in your question, the better we can help you.
I have two OpenShift routers running as pods in OSE.
However, I don't see any associated service in my OpenShift cluster that forwards traffic to them or load-balances them.
Should I expose my routers to the external world in a normal OSE environment?
Note that this is a running OpenShift (OSE) cluster, so I do not think it would be appropriate to recreate the routers with new service accounts, and even if I did want to do this, it isn't always guaranteed that I will have access inside OpenShift to do so.
If you are talking about the haproxy routers which are part of the OpenShift platform, and which handle routing of external HTTP/HTTPS requests through to the pods of an application that has been exposed using a route, then no; at the very least, you should not expose them as an OpenShift Route. Adding a Route for them would be circular, as the router is what implements Routes.
The incoming port of the haproxy routers does need to be exposed outside of the cluster, but this should have been handled as part of the setup you did when the OpenShift cluster was installed. Exactly what you may have needed to do to prepare for that depends on the target system into which OpenShift was installed.
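If you want to check how the routers are exposed on your cluster, something like the following usually tells you enough (the namespace and names are the common defaults for OSE 3.x and may differ on your install):

    oc get pods -n default -o wide | grep router   # where the router pods run
    oc get svc router -n default                   # the router's service, if one was created
    oc describe dc router -n default               # host ports / host network settings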
It may be better to step back and explain the problem you are having. If it is an installation issue, you may be better off asking on one of the lists at:
https://lists.openshift.redhat.com/openshiftmm/listinfo
as those are more frequented by people familiar with installing OpenShift.
I am using Tomcat 7, which is running on port 80.
Calling services directly via the instance IP works just fine, but calling them via the LB IP throws a 502 error.
Assuming you are using a managed instance group to maintain homogeneous instances, you need to establish a service endpoint which the load balancer can use to direct traffic. This might be the problem.
I have written the steps to set up a load balancer here. As a load balancer contains a lot of moving parts (target proxies, forwarding rules, backend services), it is difficult to debug without any config files. Posting your config here would help us debug it better.
What I did to make load balancing (LB) work is described below.
I created an nginx layer, which by default runs on port 80.
I connected it to the Tomcat 7 layer using nginx's default site file (see the sketch after these steps). Tomcat now runs on its default port, 8080.
So when the LB connects to my instance group, it connects over HTTP on port 80.
The health check is really important, and the LB's health check must pass. To make it pass, keep a file on the instance group's instances, e.g. "/foo/bar/index.html" at "/var/lib/tomcat7/webapps/foo/bar/index.html", so that the LB can request that file directly. Once the health check passes, the instances will no longer be reported as unhealthy.
Keep the same health check for the instance group; it checks the same path as mentioned above.
Ideally the health check should have passed without this file, but I have tried it several times and it does not pass; the only way to make it pass is to keep that file.
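For reference, the nginx default-site change amounts to something like this (a sketch; file paths and the health-check path will differ on your setup):

    # /etc/nginx/sites-enabled/default (sketch)
    server {
        listen 80;                              # the port the LB and its health check hit

        location / {
            proxy_pass http://127.0.0.1:8080;   # forward to Tomcat on its default port
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

The health-check file, e.g. /var/lib/tomcat7/webapps/foo/bar/index.html, is then served through this same proxy when the LB requests /foo/bar/index.html.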