There are many situations where we need to override the nginx config in an AWS Elastic Beanstalk environment:
- Set a maximum file attachment size
- Force HTTP to HTTPS
- Set different cache expiry for different static resources
- Set up WebSocket support
- gzip additional file types
- etc.
AWS Support suggests taking a copy of the nginx.conf generated by Beanstalk (by looking at /etc/nginx/nginx.conf on the instance), using it as the base, and then adding the new configs or blocks to it.
Then ship that content as .ebextensions/nginx/nginx.conf in the project.
However, the biggest problem with this is that if AWS changes the base nginx.conf, it can be hard to even know that it has changed, and then the steps of copying it and re-adding the overrides have to be repeated. Something like the workflow sketched below.
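Roughly, the workflow looks like this (the instance address and local file names are placeholders; only the /etc/nginx/nginx.conf and .ebextensions/nginx/nginx.conf paths come from the suggestion above):

    # Grab the platform-generated config from a running instance
    # (instance address is a placeholder).
    scp ec2-user@<instance-ip>:/etc/nginx/nginx.conf ./nginx.conf.base

    # Use it as the base for the override shipped with the app,
    # then hand-edit it to add the custom blocks.
    mkdir -p .ebextensions/nginx
    cp nginx.conf.base .ebextensions/nginx/nginx.conf

    # Later, to see whether AWS has changed the platform default,
    # re-fetch it and diff against the copy taken earlier.
    scp ec2-user@<instance-ip>:/etc/nginx/nginx.conf ./nginx.conf.current
    diff nginx.conf.base nginx.conf.current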
The other option that most web searches suggest is using container_commands and creating files in the appdeploy or configdeploy hook directories.
In container_commands, folks have suggested modifying /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf.
Something like the sketch below.
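For example, something along these lines (the file name and the client_max_body_size tweak are just illustrative; only the /tmp/deployment/... path comes from the suggestions above):

    # Create an .ebextensions config whose container_commands edit the staged
    # proxy conf before Beanstalk moves it into /etc/nginx/conf.d/.
    cat > .ebextensions/01_nginx_proxy.config <<'EOF'
    container_commands:
      01_tweak_staged_proxy_conf:
        command: |
          # Illustrative change only; where a directive may legally go
          # depends on how the platform structures this file.
          echo 'client_max_body_size 50M;' >> \
            "/tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf"
    EOF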
The problem with this approach is that it runs only when the app is deployed, not when a Beanstalk configuration change is applied (like changing an env var).
My question is: which is the recommended way of overriding the nginx config?
To help keep your Elastic Beanstalk deployment manageable, please keep in mind a few things:
First: AWS will not make changes to any config in your Elastic Beanstalk production environment directly. Similarly, you should not make any changes in your production Elastic Beanstalk environment directly either.
AWS recommends you change your configuration in your development environment and redeploy via the console or via the Elastic Beanstalk Command Line Interface (CLI) when you want that change to go live. This applies whether or not the app is containerized, whether or not it is load balanced, and whether you use the Elastic Beanstalk nginx config file or the override option.
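For example, the usual CLI flow is roughly (the environment name is a placeholder):

    # Commit the config change in the development copy of the app,
    # then push a new application version to the target environment.
    git add .ebextensions/
    git commit -m "Adjust nginx proxy configuration"
    eb deploy my-production-env   # environment name is a placeholder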
Second: from the first link you provided: you either use the Elastic Beanstalk default nginx config OR the override config in .ebextensions. There is no mix of the two. This should help lessen your confusion. When you make changes to either in your development environment, the change implies a new version of your app, and you need to deploy it to production for it to take effect.
Third: nginx can act as a proxy to your origin server, and the origin server might be dictating cache expiry for assets. There are ways to change your nginx config to override the origin's setting if needed. From the NGINX caching guide:
By default, NGINX respects the Cache-Control headers from origin servers. It does not cache responses with Cache-Control set to Private, No-Cache, or No-Store or with Set-Cookie in the response header. NGINX only caches GET and HEAD client requests.
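If you do need nginx to override what the origin dictates, a minimal sketch of the kind of directives involved (the fragment belongs inside the server { } block of the nginx config you ship; the upstream name, file extensions and 7-day expiry are all placeholders):

    # Illustrative fragment only: printed here for display; in practice it goes
    # inside the server { } block of the proxied config, and "my_backend" is a
    # placeholder upstream name.
    cat <<'EOF'
    location ~* \.(css|js|png|jpg|gif|svg)$ {
        proxy_pass           http://my_backend;
        proxy_ignore_headers Cache-Control Expires;  # stop honoring the origin's cache headers
        expires              7d;                     # let nginx dictate a 7-day client expiry
    }
    EOF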
I hope this helps clear things up. Keep these techniques in mind and go ahead and deploy. If it's wrong, delete it and try again.
Ask a more specific question regarding your application and specific config if you get stuck. The more details you provide in your question, the better we can help you.
I have an application running in OpenShift 4.6.
The pod is running; I can exec into it and check this, and I can port-forward to it and access it.
When trying to access the application, I get the error message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
- The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
- The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
- Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc.) have at least one pod running.
There could be multiple reasons for this. You don't really provide enough debugging details to get to the next steps. But I generally find it helps to work backwards through the request.
Can you access the pod via port-forward? You say you've already tested this, but I include it for completeness, and also to make sure you are verifying that you are serving the protocol you expect. If you have HTTPS passthrough on the route but you are serving HTTP from your pod, there will obviously be a problem.
Can you access the pod providing your service from outside the pod (but within the cluster)? e.g. create a debug pod and see if you can connect to it with curl or some other client. If this doesn't work, you may not be exposing the ports of your pod correctly. Check the pod definitions.
Can you access the service from outside the pod (but within the cluster)? e.g. from your debug pod, use the service directly. If this doesn't work, you may have the selector on your service wrong, or some other problem with your service. Check the service definition.
Can you access the route from inside the cluster? e.g. from your debug pod, try to use the full route URL. If this doesn't work, you've narrowed it down to the route definition. Again, HTTPS vs HTTP can sometimes be a mistake here such as having HTTPS passthrough when your service doesn't support HTTPS. Check the route definition.
Finally, try accessing the route externally, which it sounds like you have already tried. But if you've narrowed it down such that your route works internally, you've determined that the problem is somewhere in the external network. It doesn't sound like this is your problem, but it's something to keep in mind. A rough command-level walk-through of these checks is sketched below.
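A rough walk-through of those checks with oc and curl (all names, namespaces and ports are placeholders):

    # 1. Pod directly, via port-forward (protocol must match what the pod serves).
    oc port-forward pod/my-app-pod 8080:8080
    curl -v http://localhost:8080/

    # 2. From a throwaway debug pod inside the cluster, hit the pod IP, then the
    #    service, to separate pod problems from service problems.
    oc run debug --rm -it --image=registry.access.redhat.com/ubi8/ubi -- bash
    # (run the following from inside that debug shell)
    curl -v http://<pod-ip>:8080/
    curl -v http://my-service.my-namespace.svc.cluster.local:8080/

    # 3. Still inside the cluster, hit the route's public hostname.
    curl -kv https://my-app.apps.example.com/

    # 4. Finally, repeat the route check from outside the cluster.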
I'm trying to set up an Elastic Beanstalk application using HTTP/2. To do this, I have created an ALB.
Target group:
The weird thing is that even though I have set up the load balancer as shared in the Beanstalk configuration, an additional listener has been created:
This is the listener of the ALB:
That's the one being used by the environment, but I do not know how to change it back to the correct one. Any idea?
The instances never reach a healthy state. I'm starting my Node application (using the fully managed solution) like this: .listen(PORT), where PORT is an environment variable set by AWS. It is usually 8080, in case it helps.
I have just exposed my database on OpenShift and it gives me an 'https://....' URL.
Does anybody know how to connect with DBeaver using this URL that OpenShift gave me?
The error that DBeaver gives me is the following:
Malformed database URL, failed to parse the main URL sections.
Short answer: you can't with a Route.
A Route can only expose HTTP/HTTPS traffic.
If you want to expose TCP traffic (like for a database), do not create a Route; instead, change your Service type to "NodePort".
Check my previous answer for this kind of problem (exposing MQ in this case): How to connect to IBM MQ deployed to OpenShift?
OpenShift doc on NodePorts: https://docs.openshift.com/container-platform/4.7/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.html
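A minimal sketch, assuming a PostgreSQL database on port 5432 (all names and ports are placeholders for your setup):

    # Expose the database pods on a port of every cluster node (NodePort),
    # instead of through a Route.
    oc apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-db-nodeport          # placeholder name
    spec:
      type: NodePort
      selector:
        app: my-db                  # must match your database pod labels
      ports:
        - port: 5432                # assuming PostgreSQL; adjust for your database
          targetPort: 5432
          nodePort: 30432           # optional; omit to let OpenShift pick one
    EOF

    # Find the assigned node port, then point DBeaver at <any-node-ip>:<node-port>.
    oc get svc my-db-nodeport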
There's another way to do this.
If your Route is set to "passthrough", it will just look at the SNI header to determine where to route the traffic but won't unwrap it (and expect HTTP inside), which lets it pass other TLS traffic through to a pod.
I use this mechanism to run a ZNC bouncer (IRC traffic) behind SNI.
The downside is you need to provide your own TLS cert inside the pod instead of leveraging the general one available to *.apps.(cluster).com
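For completeness, a passthrough Route looks roughly like this (hostname, service name and port are placeholders; as noted above, the pod must terminate TLS itself):

    # TLS passthrough: the router only inspects SNI and forwards the raw TLS
    # stream to the pod, which must present its own certificate.
    oc apply -f - <<'EOF'
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: my-tls-service                      # placeholder name
    spec:
      host: my-tls-service.apps.example.com     # placeholder hostname
      to:
        kind: Service
        name: my-tls-service
      tls:
        termination: passthrough
      port:
        targetPort: 6697                        # placeholder port (e.g. TLS IRC for a ZNC bouncer)
    EOF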
As for the specific error, "Malformed database URL": I've not used this software, but from a quick web search it looks like you want to rewrite the https://(appname).(clustername).com into a jdbc:.../hostname... string, and then enable TLS in the settings.
I found this page that talks about setting it up, so it might be helpful if you've not already found it -- https://github.com/dbeaver/dbeaver/issues/9573
Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. URL redirection is achieved by pointing to different load balancers (HAProxy) as specified in the Nginx vhost file.
Is there any option available in GCP to provide this redirection without using Nginx or Apache? Also, what are the alternative options available in GCP for HAProxy?
From what I understand, you have a few services (and maybe some static content) served via HAProxy (which is doing the load balancing) to the Internet.
Based on that, I assume that someone going to "yourservice.com/example1" is redirected by the load balancer to service1, someone typing "yourservice.com/static1" is served static content by a different service, etc.
GCP has exactly the service you're asking for: it can do URL/content-based load balancing. You can also move your current services to Google Compute Engine (as virtual machines) or Google Kubernetes Engine, which will run your services as containers.
Also, GCE and GKE can do autoscaling if that's what you need.
The load balancing provided by GCP can do all the content-based balancing that the HAProxy you're using now does.
If you're using some internal load balancing now, I believe it could also be done with one load balancer, which could simplify your configuration (just some VMs or containers running your services and one load balancer).
You can read more about load balancing concepts and specifically about setting up GCP to do that.
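A rough gcloud sketch of the content-based routing piece, using the paths from the example above (the backend service names are placeholders and are assumed to already exist):

    # Route /example1/* and /static1/* to different backend services behind one
    # HTTP(S) load balancer.
    gcloud compute url-maps create yourservice-map \
        --default-service=service1-backend

    gcloud compute url-maps add-path-matcher yourservice-map \
        --path-matcher-name=content-routes \
        --default-service=service1-backend \
        --path-rules="/example1/*=service1-backend,/static1/*=static1-backend"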
I am running a multisite instance of Locomotive CMS on a scalable OpenShift cartridge.
The issue I am having is that HAProxy sends GET requests to the root of each Apache instance, which return an erroneous 404 because no host is specified.
Locomotive works fine, but needs a host on each request so it can serve the appropriate website.
How can I work around this problem?
You can try SSHing into your gear and modifying ~/haproxy/haproxy.cfg to health-check a different URL instead of /, to make sure that your application is up and running. Something like the sketch below.
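A hedged sketch of the kind of change meant here (the path and hostname are placeholders; the point is that the check sends a Host header so Locomotive can match the request to a site instead of returning 404):

    # On the gear, locate the current health check in haproxy.cfg.
    grep -n 'option httpchk' ~/haproxy/haproxy.cfg

    # Change that line so the check names a real path and sends a Host header,
    # e.g. (path and hostname are placeholders):
    #
    #     option httpchk GET /health HTTP/1.1\r\nHost:\ www.example.com
    #
    # then restart haproxy on the gear so the new check takes effect.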