I have a Go app that I need to run as multiple instances under separate subdomains. I have a working Nomad/Consul setup and got the Go app running; it is accessible via a fixed IP address and a dedicated port. But I am stuck on how to make it work with the unique subdomains and working HTTPS.
So what I'm looking for is something like:
app1 runs on https://app1.example.com
app2 runs on https://app2.example.com
I tried to use Traefik (got it running as a job) and dnsmasq, but I haven't got the above to work.
Any help would be much appreciated.
Traefik supports integrating with Consul through its Consul Catalog provider. See https://learn.hashicorp.com/tutorials/nomad/load-balancing-traefik for an example of how to configure this when running Traefik on Nomad.
The example in that tutorial configures the tag traefik.http.routers.http.rule=Path(`/myapp`) on the service so that requests for /myapp are routed to the backend service instance. In your case, you'll need to modify this to match on the HTTP Host header instead, so that each subdomain can be routed to a different service. For example:
tags = [
  "traefik.enable=true",
  "traefik.http.routers.http.rule=Host(`app1.example.com`)",
]
See https://doc.traefik.io/traefik/routing/routers/#rule for a full list of supported rules.
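If you also need Traefik to terminate HTTPS for those subdomains, a minimal sketch of the service stanza in the Nomad job might look like the following. The router name app1, the port label http, the entrypoint websecure, and the certificate resolver letsencrypt are all assumptions, and the resolver has to be defined in Traefik's static configuration.

service {
  name = "app1"
  # Nomad port label for the app's HTTP port (assumed to exist in the network block)
  port = "http"

  # Route app1.example.com to this service and terminate TLS on Traefik.
  # Router, entrypoint, and certresolver names here are placeholders.
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.app1.rule=Host(`app1.example.com`)",
    "traefik.http.routers.app1.entrypoints=websecure",
    "traefik.http.routers.app1.tls=true",
    "traefik.http.routers.app1.tls.certresolver=letsencrypt",
  ]
}

A second job for app2 would use the same tags with Host(`app2.example.com`) and its own router name.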
Related
I have an application running in OpenShift 4.6.
The pod is running; I can exec into it and check this, and I can port-forward to it and access it.
When trying to access the application, I get the error message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc.) have at least one pod running.
There could be multiple reasons for this. You don't really provide enough debugging details to get to the next steps. But I generally find it helps to work backwards through the request.
Can you access the pod via port-forward? You say you've already tested this, but I include it for completeness, and also to make sure you are verifying that you are serving the protocol you expect. If you have HTTPS passthrough on the route but you are serving HTTP from your pod, there will obviously be a problem.
Can you access the pod providing your service from outside the pod (but within the cluster)? e.g. create a debug pod and see if you can connect to the pod directly with curl or some other client. If this doesn't work, you may not be exposing the ports of your pod correctly. Check the pod definitions.
Can you access the service from outside the pod (but within the cluster)? e.g. from your debug pod, use the service directly. If this doesn't work, you may have the selector on your service wrong, or some other problem with your service. Check the service definition.
Can you access the route from inside the cluster? e.g. from your debug pod, try to use the full route URL. If this doesn't work, you've narrowed it down to the route definition. Again, HTTPS vs HTTP can sometimes be a mistake here such as having HTTPS passthrough when your service doesn't support HTTPS. Check the route definition.
Finally, try accessing the route externally, which it sounds like you have already tried. But if you've narrowed things down such that your route works internally, you've determined that the problem is somewhere in the external network. It doesn't sound like this is your problem, but it's something to keep in mind.
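As a concrete sketch of those checks with oc and curl (pod, service, namespace, and route names below are placeholders, and the commands assume your app serves plain HTTP on 8080):

# 1. Straight at the pod, verifying the protocol you expect
oc port-forward pod/myapp-7f9c4 8080:8080
curl -v http://localhost:8080/

# 2. From a throwaway debug pod inside the cluster
oc run debug --rm -it --image=registry.access.redhat.com/ubi8/ubi -- bash
curl -v http://<pod-ip>:8080/                                  # the pod directly
curl -v http://myapp.my-namespace.svc.cluster.local:8080/      # the service
curl -vk https://myapp-route.apps.example.com/                 # the full route URL

# 3. The route from outside the cluster
curl -vk https://myapp-route.apps.example.com/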
I have a Spring Boot Admin server deployed in OpenShift with the help of the fabric8 Maven plugin.
I also have several applications deployed in OpenShift.
The Spring Boot Admin server (SBAS) uses Spring Cloud Kubernetes discovery to discover the services (applications) registered/running in the namespace/cluster, i.e. automatic client discovery.
SBAS discovers them as expected, which is fine, but some of the applications registered in SBAS are health-checked over HTTP and some over HTTPS.
I have no idea why SBAS uses HTTP for some apps and HTTPS for others to check health.
Since SBAS uses HTTPS and port 8443 for those checks, it shows the applications as offline, even though those applications are only exposed over HTTP on port 8080.
I have compared the applications' code and OpenShift configurations, but I don't see any difference, and I don't know how to fix this issue.
I am new to all of the above concepts; could someone help me?
I didn't find a solution for this issue, but I did find a workaround that helped me.
Since I am only using port 8080, I deleted the other ports such as 8443 and 8778 via the OpenShift YAML, as shown below. But if you have to expose more ports, this won't help.
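A rough sketch of that kind of change, with placeholder names and assuming the extra ports came from the fabric8-generated resources, might look like this in the Service (the matching containerPort entries in the deployment would be trimmed the same way):

apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder
spec:
  selector:
    app: my-app
  ports:
    # keep only the port the application actually serves
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
    # the generated 8443 and 8778 entries were deleted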
I have a web app that makes HTTP and WebSocket (ws) requests. I am trying to deploy it to OpenShift v3. Hence, I need my requests to be mapped to ports 80 and 90 in the pod. However:
As mentioned in a related thread, it is not possible for a route to expose multiple ports, so I cannot just map requests to different services based on the port.
I tried setting one route mapping any port to a service with multiple ports, but I get a warning:
Route has no target port, but service has multiple ports. The route will round robin traffic across all exposed ports on the service.
I cannot use different routes for HTTP and ws, because the session cookie obtained for HTTP would not be attached to WebSocket requests.
Solutions (?):
In the related thread an Ingress Controller is suggested, but it seems that it can only be set up by a cluster administrator.
I could use two routes and set a separate cookie for each route, but this does not seem right: why do I have to use 2 cookies for 2 domains when essentially there is a single domain with a single authentication?
Switch to token authentication?
So, what am I missing? What would be the optimal way to handle this?
If any websocket endpoints are under a unique sub URL path, you could add a second route which has a path definition for the sub URL path that the route applies to. You could then have requests under that sub URL path routed to the alternate port. You will need to have a definition for the alternate port on the service in addition to the primary port, or create a separate service for the alternate port. I would need to see your current service definition to be more specific.
It is odd that you would be using ports 80 and 90 on the pod, as that would imply you are running the container as root, which is not normal practice on OpenShift because of the security risks of running any container as root on a container hosting platform.
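A hedged sketch of that layout, with placeholder names, a /ws path for the WebSocket endpoints, and non-root ports 8080/8090 in place of 80/90:

# Service exposing both ports
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: ws
      port: 8090
      targetPort: 8090
---
# Primary route for ordinary HTTP traffic
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
  port:
    targetPort: http
---
# Second route on the same host, scoped to the WebSocket sub URL path
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-ws
spec:
  host: myapp.example.com
  path: /ws
  to:
    kind: Service
    name: myapp
  port:
    targetPort: ws

Because both routes share the same host, a session cookie set with the default / cookie path on the main route would also be sent on requests to /ws.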
I am running a Ruby on Rails app (using Passenger in Nginx mode) on Google Container Engine. These pods are sitting behind a GCE network load balancer. My question is how to access the external Client IP from inside the Rails app.
The Github issue here seems to present a solution, but I ran the suggested:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  kubectl annotate node $node \
    net.beta.kubernetes.io/proxy-mode=iptables;
  gcloud compute ssh --zone=us-central1-b $node \
    --command="sudo /etc/init.d/kube-proxy restart";
done
but I am still getting a REMOTE_ADDR header of 10.140.0.1.
Any ideas on how I could get access to the real client IP (for geolocation purposes)?
Edit: To be more clear, I am aware of the ways of accessing the client IP from inside Rails; however, all of these solutions are getting me the internal Kubernetes IP. I believe the GCE network load balancer is not configured (or perhaps unable) to send the real client IP.
A Googler's answer to another version of my question verifies that what I am trying to do is not currently possible with the Google Container Engine network load balancer.
EDIT (May 31, 2017): as of Kubernetes v1.5 and up, this is possible on GKE with the beta annotation service.beta.kubernetes.io/external-traffic. This was answered on SO here. Please note that when I added the annotation, the health checks were not created on the existing nodes; recreating the LB and restarting the nodes solved the issue.
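For reference, a sketch of that beta annotation on a LoadBalancer Service; the OnlyLocal value, names, and ports here are assumptions from the 1.5/1.6-era beta and should be checked against your cluster version:

apiVersion: v1
kind: Service
metadata:
  name: my-rails-app        # placeholder
  annotations:
    # only route external traffic to nodes with a local endpoint,
    # which preserves the original client IP
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: LoadBalancer
  selector:
    app: my-rails-app
  ports:
    - port: 80
      targetPort: 3000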
It seems as though this is not a Rails problem at all, but one of GCE. You can try the first part (the first comma-separated address) of
request.env["HTTP_X_FORWARDED_FOR"]
Explanation
Getting Origin IP From Load Balancer advises that https://cloud.google.com/compute/docs/load-balancing/http/ has the text:
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP> (requests only)
Can be a comma-separated list of IP addresses depending on the X-Forwarded-For entries appended by the intermediaries the client is traveling through. The first element in the <client IP(s)> section shows the origin address.
X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
Parameters for Stackdriver Trace.
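On the Rails side, a minimal sketch of pulling that first element out of the header (the helper name and fallback are my own, not part of the linked docs):

# e.g. in ApplicationController; a sketch, not production-hardened
def client_ip
  forwarded = request.env["HTTP_X_FORWARDED_FOR"].to_s
  first = forwarded.split(",").first
  # the first entry is the origin client; later entries are proxies / the LB
  first ? first.strip : request.remote_ip
end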
I am running a multisite instance of Locomotive CMS on a scalable OpenShift cartridge.
The issue I am having is that HAProxy sends GET requests to the root of each Apache instance, which return an erroneous 404 because no host is specified.
Locomotive itself works fine, but it needs a host on each request so it can serve the appropriate website.
How can I work around this problem?
You can try SSHing into your gear and modifying ~/haproxy/haproxy.cfg to check a different URL instead of /, to make sure that your application is up and running.
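For example, a hedged sketch of the kind of change in ~/haproxy/haproxy.cfg (the path and Host value are placeholders for one of your Locomotive sites):

# inside the backend/listen section that fronts the app:
# send the health check to a real site path and include a Host header
option httpchk GET /my-health-path HTTP/1.1\r\nHost:\ site1.example.com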