Our server setup is the following:
a proxy/load balancer directs all requests to the machines behind it. The problem is that these machines do not know where they are. If the proxy gets a request for
www.bridge.de/m01
it redirects to machine01.
Machine01 only knows its local path
m01
For a password reset feature in the application, I considered several options.
We decided to pass the 'before proxy' URL into machine01's database, so machine01 'knows' its external context for those specific requests.
My question is: is there a better way to pass external URL context to machines behind a proxy? We are using JavaEE, JSP and MySQL for our application. The virtual machines run CentOS.
Thanks for any suggestions! :D
Your question is not fully clear.
I assume the issue is that your load balancer terminates the connection and forwards the request on to you.
Usually your balancer passes along the original URL of the request, since you may need it from time to time.
In this case you can check your HTTP headers. If the information is not provided, you have to reconfigure your balancer to add the needed details.
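If it helps, a quick way to see what actually arrives at machine01 is to dump one raw forwarded request. The port and the exact header names below are assumptions (X-Forwarded-* is just the common convention; your balancer may use different names):

```
# Temporarily point the balancer at a spare port on machine01 and listen once:
nc -l 8081
# Request www.bridge.de/m01 in a browser and inspect the dumped request.
# Most proxies/balancers add headers along these lines:
#   X-Forwarded-Host:  www.bridge.de
#   X-Forwarded-Proto: https
#   X-Forwarded-For:   <client address>
# With headers like these, machine01 can rebuild the external URL per request
# instead of storing it in the database.
```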
Check this: Strategies for dealing with URIs when building an application that sits behind a reverse proxy
I have an application running in OpenShift 4.6.
The pod is running; I can exec into it and check this, and I can port-forward to it and access it.
When trying to access the application, I get the error message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc.) have at least one pod running.
There could be multiple reasons for this. You don't really provide enough debugging details to get to the next steps. But I generally find it helps to work backwards through the request.
Can you access the pod via port-forward? You say you've already tested this, but I include it for completeness. But I also mention it to make sure that you are verifying that you are serving the protocol you expect. If you have HTTPS passthrough on the route, but you are serving HTTP from your pod, there will obviously be a problem.
Can you access the pod providing your service from outside the pod (but within the cluster)? e.g. create a debug pod and see if you can connect to it with curl or some other client. If this doesn't work, you may not be exposing the ports of your pod correctly. Check the pod definitions.
Can you access the service from outside the pod (but within the cluster)? e.g. from your debug pod, use the service directly. If this doesn't work, you may have the selector on your service wrong. Or some other problem with your service. Check the service definition.
Can you access the route from inside the cluster? e.g. from your debug pod, try to use the full route URL. If this doesn't work, you've narrowed it down to the route definition. Again, HTTPS vs HTTP can sometimes be a mistake here such as having HTTPS passthrough when your service doesn't support HTTPS. Check the route definition.
Finally, try accessing the route externally, which it sounds like you have already tried. But if you've narrowed it down such that your route works internally, you've determined that the problem is somewhere in the external network. It doesn't sound like this is your problem, but it's something to keep in mind.
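Put together as commands, the walk-back looks roughly like this (a sketch only; pod, service, and route names are placeholders):

```
# 1. Pod via port-forward:
oc port-forward pod/my-app-pod-abc123 8080:8080

# 2./3./4. From a throwaway debug pod inside the cluster (any image with curl):
oc run netdebug --rm -it --image=registry.access.redhat.com/ubi8/ubi -- bash
curl -v http://<pod-ip>:8080/        # the pod directly (oc get pod -o wide for the IP)
curl -v http://my-app-service:8080/  # the service by name
curl -vk https://my-app-myproject.apps.example.com/   # the route URL
```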
I have just exposed my database on OpenShift and it gives me an 'https://....' URL.
Does anybody know how to connect using DBeaver with this URL that OpenShift gave me?
The error that DBeaver gives me is the following:
Malformed database URL, failed to parse the main URL sections.
Short answer: you can't with a Route.
A Route can only expose HTTP/HTTPS traffic.
If you want to expose TCP traffic (like for a database), do not create a Route; instead change your Service type to NodePort.
Check my previous answer for this kind of problem (exposing MQ in this case): How to connect to IBM MQ deployed to OpenShift?
OpenShift doc on NodePorts: https://docs.openshift.com/container-platform/4.7/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.html
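As a rough sketch (the service name mydb is a placeholder), switching the existing Service to NodePort and finding the allocated port could look like this:

```
# Change the existing Service type to NodePort:
oc patch svc mydb -p '{"spec": {"type": "NodePort"}}'

# See which node port (30000-32767 by default) was allocated:
oc get svc mydb -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'

# Then point DBeaver at <any-node-ip>:<node-port> instead of the https:// URL.
```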
There's another way to do this.
If your Route is set to "passthrough", it will only look at the SNI headers to determine where to route the traffic, but won't unwrap the TLS (expecting HTTP inside), which lets it pass other traffic through to a pod.
I use this mechanism to run a ZNC bouncer (irc traffic) behind SNI.
The downside is you need to provide your own TLS cert inside the pod instead of leveraging the general one available to *.apps.(cluster).com
As for the specific error, "Malformed database URL": I've not used this software, but from a quick web search it looks like you want to rewrite the https://(appname).(clustername).com into a jdbc:.../hostname... string, and then enable TLS in the settings.
I found this page that talks about setting it up, so it might be helpful if you've not already found it -- https://github.com/dbeaver/dbeaver/issues/9573
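If you go the passthrough route, a quick sanity check (the hostname below is a placeholder for your route's external hostname) is to confirm that raw TLS really reaches the pod; note this approach only works if the database client itself opens the connection with TLS and sends SNI from the first byte:

```
# Should print the certificate served by the pod, not an OpenShift error page:
openssl s_client -connect mydb-myproject.apps.example.com:443 \
                 -servername mydb-myproject.apps.example.com </dev/null
```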
I'm just learning OSE 3. I'd like to deploy two Node.js web applications I have created. So I have created a Project with two Node.js deployments, which are now each running in their own Pod.
My question is: how are they supposed to communicate? Say, for example, one application needs to redirect to the other, or include components from the other application.
Should I hardcode the route of each application in a configuration file or so?
Thanks!
For internal communication between the two services, you can use the name of the service as the host name when making connections. This is possible because the names of the services are added to an internal DNS server, so a host name lookup on the name will yield the correct IP for the service at that time. When the service has multiple pods, an internal IP load balancer will automatically route the request to one of the pods.
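As a small illustration (the service names app-one and app-two, the namespace, and port 8080 are all made up), from inside any pod of the first application you could reach the second one like this:

```
# The plain service name resolves via the cluster DNS:
curl http://app-two:8080/some/path
# The fully qualified form works as well:
curl http://app-two.<project>.svc.cluster.local:8080/some/path
```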
For the question about redirects, that seems to suggest you have both services exposed publicly and want to have one service return an HTTP response that redirects the HTTP client to a URL which belongs to the other service. What the redirect URL needs to be is going to depend on how you are exposing the services. That is, whether each service is exposed under a different hostname, or you have used path-based routing in OpenShift to overlay one at a sub-URL of the other under the same host.
Either way, you probably want to use an environment variable, passed in via the deployment configuration, to tell the service triggering the redirect what URL prefix it needs to redirect to. You would set this up manually. This at least means you haven't hardwired it in your code.
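For example (the names and URL below are made up), the prefix could be injected into the deployment configuration like this and then read from the environment by the application at runtime:

```
# Tell app-one where app-two is publicly exposed:
oc set env dc/app-one OTHER_APP_URL=https://app-two.example.com
# app-one then builds its redirects as "$OTHER_APP_URL/some/path" by reading
# the OTHER_APP_URL environment variable.
```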
If you mean something else by redirect, you will need to explain better what you mean.
I am attempting to use purely HTTPS with my Compute Engine setup. I have a network load balancer created that forwards to a pool with my instance in it. However, the pool has constantly failing health checks because it won't let me configure a health check that uses HTTPS.
I'm using Apache to redirect 80 to 443. Does anyone know how to either create an HTTPS health check or have the HTTP health check follow the redirect?
Thanks for any help.
--edit--
I finally came across some documentation at http://googlecloudplatform.blogspot.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html.
Failure 5: Not answering directly with a 200 response code. The web server may be configured to redirect to a page that returns an HTTP 200 response code. The health check will not follow the redirect; it expects the health check page to return a 200 directly.
This basic capability has been supported at every other hosting provider we've been on. Why can't this be done? What am I missing?
I spent the whole day trying to configure a purely HTTPS-based load balancer in GCloud for a Kubernetes cluster with an ingress controller.
I finally got it working, so let me share my experience with people who struggle with the same configuration. If the health check fails for the instances, you will usually see the following when accessing your website's URL.
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
1) Protocol: GCloud introduced new health checks which can be configured for TCP, SSL, HTTP, HTTPS, or HTTP/2 probing. This helps with the original problem of avoiding the redirect from port 80 to port 443.
2) Path: The most common issue is that the "/" path of your application does not return a 200 OK, which makes the health check fail. This can be prevented by adding a path argument to your health check, e.g. "/index" (see the command sketch after this list).
3) Ingress HTTPS: This is relatively simple. Adding a secret or a pre-shared-cert to your ingress.yaml will automatically result in an HTTPS load balancer instead of HTTP. Further information can be found here.
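For points 1) and 2), creating such a health check with the newer gcloud health checks could look like this (name and path are examples only):

```
# HTTPS health check that probes port 443 at /index directly,
# so the 80 -> 443 redirect never comes into play:
gcloud compute health-checks create https my-https-check \
    --port 443 \
    --request-path /index
# (The legacy health checks attached to target pools are HTTP-only;
#  these newer checks are used with backend services.)
```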
Lastly, see the guide from the docs for Setting up HTTP Load Balancing with Ingress.
However, even though the new HTTPS Health checks seem to work, they are still in the beta phase and bugs are reported in the issue tracker. The documentation for the gcloud-ingress-controller can be found here.
I have a server which contains the data to be served upon API requests from mobile clients. The data is more or less persistent and the update frequency is very low (say once a week). But the table design is pretty heavy, which makes API requests slow to serve.
The Web Service is implemented with Yii + PostgreSQL.
Is using memcached a way to solve this problem? If yes, how do I handle the cached data becoming dirty?
Any alternative solutions? Does PostgreSQL have any built-in mechanism like MEMORY in MySQL?
How about Redis?
You could use memcached, but then every request would still hit your application server. In your case, you say the query results are more or less persistent, so it might make more sense to cache the JSON responses from your Web Service.
This could be done using a reverse proxy with a built-in cache. I guess an example of how we do it with Jetty (Java) and NGINX might help you the most:
In our setup, we have a Jetty (Java) instance serving an API for our mobile clients. The API is listening on localhost:8080/api and returning JSON results fetched from some queries on a local MySQL database.
At this point, we could serve the API directly to our clients, but here comes the Reverse Proxy:
In front of the API sits an NGINX web server listening on 0.0.0.0:80/ (everywhere, port 80).
When a mobile client connects to 0.0.0.0:80/api, the built-in reverse proxy cache tries to serve the response for that exact query string from its cache. If that fails, it fetches it from localhost:8080/api, puts it in its cache and serves the newly cached value.
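Roughly, the NGINX side of this looks like the sketch below (paths, zone name, and cache times are placeholders, not our exact configuration):

```
# Drop a cache config into conf.d (included inside the http{} block):
sudo tee /etc/nginx/conf.d/api-cache.conf >/dev/null <<'EOF'
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=7d;

server {
    listen 80;

    location /api {
        proxy_cache       api_cache;
        proxy_cache_key   $scheme$request_method$host$request_uri;
        proxy_cache_valid 200 7d;    # keep good answers for a week
        proxy_cache_valid any 1m;    # everything else only briefly
        proxy_pass        http://127.0.0.1:8080;
    }
}
EOF
sudo nginx -t && sudo nginx -s reload
```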
Benefits:
You can use other NGINX goodies: automatic GZIP compression of the cached JSON files
SSL endpoint termination at NGINX.
NGINX workers might benefit you when you have a lot more connections, all requesting data from the cache.
You can consolidate your service endpoints
Think about cache-invalidation:
You have to think about cache invalidation. You can tell NGINX to hold on to its cache, say, for a week for all HTTP 200 requests for localhost:8080/api, and 1 minute for all other HTTP status codes. But if the time comes when you want to update the API in under a week, the cache is stale, so you have to delete it somehow or turn the caching time down to an hour or a day (so that most people will still hit the cache).
This is what we do: we chose to delete the cache when it is dirty. We have another JOB running on the server, listening for an Update-API event triggered via Puppet. The JOB takes care of clearing the NGINX cache for us.
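Stripped of the Puppet plumbing, the clearing job essentially boils down to this (the cache path must match whatever your proxy_cache_path points at):

```
# Wipe the cached responses; NGINX re-fetches from localhost:8080/api
# on the next request and repopulates the cache:
rm -rf /var/cache/nginx/api/*
```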
Another idea would be to add the cache-clearing function inside your Web Service. The reason we decided against this solution is that the Web Service would have to know it runs behind a reverse proxy, which breaks separation of concerns. But I would say it depends on what you are planning.
Another thing that would make your Web Service more correct would be to serve proper ETag and cache-expiry headers with each JSON file. Again, we did not do that, because we have one big update event instead of small ones for each file.
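If you do go down that road, a quick way to see what your Web Service currently sends, and to test a conditional request, is something like this (the URL and ETag value are made up):

```
# Which caching headers does the API send today?
curl -sI http://localhost:8080/api/items | grep -iE 'etag|expires|cache-control'

# Conditional request: a "304 Not Modified" answer means the cached copy
# can keep being used without transferring the body again.
curl -sI -H 'If-None-Match: "some-etag-value"' http://localhost:8080/api/items
```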
Side notes:
You do not have to use NGINX, but it is really easy to configure.
NGINX and Apache have SSL support
There is also the famous Varnish reverse proxy (https://www.varnish-cache.org), but to my knowledge it does not do SSL (yet?).
So, if you were to use Varnish in front of your Web Service + SSL, you would use a configuration like:
NGINX -> Varnish -> Web Service.
References:
- NGINX server: http://nginx.com
- Varnish Reverse Proxy: https://www.varnish-cache.org
- Puppet IT Automation: https://puppetlabs.com
- NGINX reverse proxy tutorials: http://www.cyberciti.biz/faq/howto-linux-unix-setup-nginx-ssl-proxy/ and http://www.cyberciti.biz/tips/using-nginx-as-reverse-proxy.html