
How to configure SRX200 router?
I want to set up a LAN (trusted network) with my own address range (e.g. 10.x.x.x).
The router should forward requests based on port (port-based forwarding).
That is, if any machine sends a request on a given port, the router should forward it to a specific machine determined by that port number, translating the destination address to one specific IP (specified by us).

You can configure the Juniper router in two ways:
1. Through the CLI prompt
2. Through the web interface
To configure the router through the web interface, the following document provides complete information:
http://forums.juniper.net/jnet/attachments/jnet/Learning/47/
You can configure port-based redirection using destination NAT, and IP-based redirection using static NAT.
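On the CLI, a minimal destination NAT sketch could look like the set commands below. The zone name, internal address, and port are assumptions for illustration (and the exact syntax may vary slightly by Junos release); a security policy permitting the translated traffic is still required.

    set security nat destination pool srv-8080 address 10.0.0.10/32 port 8080
    set security nat destination rule-set from-untrust from zone untrust
    set security nat destination rule-set from-untrust rule r1 match destination-address 0.0.0.0/0
    set security nat destination rule-set from-untrust rule r1 match destination-port 8080
    set security nat destination rule-set from-untrust rule r1 then destination-nat pool srv-8080

With this in place, any request arriving on port 8080 is translated to the internal host 10.0.0.10; repeating the rule with other ports and pools gives the per-port forwarding described above.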

Related

How to map requests to multiple ports in a pod in Openshift v3?

I have a web app that serves both HTTP and WebSocket (ws) requests. I am trying to deploy it to OpenShift v3. Hence, I need requests to be mapped to ports 80 and 90 in the pod. However:
As mentioned in a related thread, it is not possible for a route to expose multiple ports, so I cannot just map requests to different services based on the port.
I tried setting one route mapping any port to a service with multiple ports, but I get a warning:
Route has no target port, but service has multiple ports. The route
will round robin traffic across all exposed ports on the service
I cannot use different routes for HTTP and ws, because the session cookie obtained for HTTP would not be attached to WebSocket requests.
Solutions (?):
In the related thread an Ingress Controller is suggested, but it seems that it can only be set up by a cluster administrator.
I could use two routes and set a separate cookie for each route, but this does not seem right: why do I have to use two cookies for two domains, when essentially there is a single domain with a single authentication?
Switch to token authentication?
So, what am I missing? What would be the optimal way to handle this?
If any WebSocket endpoints are under a unique sub-URL path, you could add a second route which has a path definition for that sub-URL path, and have requests under that path routed to the alternate port. You will need a definition for the alternate port on the service in addition to the primary port, or create a separate service for the alternate port. I would need to see your current service definition to be more specific.

It is odd that you would be using ports 80 and 90 on the pod, as that would imply you are running the container as root, which is not normal practice on OpenShift because of the security risks of running any container as root on a container hosting platform.
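A sketch of what that second-route approach could look like in YAML follows. The names, the /ws path, and the container ports (8080/8090 rather than 80/90, to avoid running as root) are all assumptions for illustration:

    # Service exposing both ports under distinct names (names and ports assumed)
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
      - name: http
        port: 80
        targetPort: 8080
      - name: ws
        port: 90
        targetPort: 8090
    ---
    # Main route: plain HTTP traffic goes to the "http" service port
    apiVersion: v1
    kind: Route
    metadata:
      name: myapp-http
    spec:
      host: myapp.example.com
      to:
        kind: Service
        name: myapp
      port:
        targetPort: http
    ---
    # Second route: everything under /ws goes to the "ws" service port
    apiVersion: v1
    kind: Route
    metadata:
      name: myapp-ws
    spec:
      host: myapp.example.com
      path: /ws
      to:
        kind: Service
        name: myapp
      port:
        targetPort: ws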

Service with multiple ports/protocols per route (e.g. HTTP and HTTPS) possible in OpenShift?

I am currently evaluating OpenShift for use in our company.
We have a web application in a container which exposes both port 80 (HTTP) and port 443 (HTTPS). Is it possible to run this container in OpenShift, using both ports over the SAME hostname? The OpenShift GUI lets me select only one port per service when I try to create a route, and either HTTP or HTTPS, not both. My use case is that my application should be reachable on http://my-app as well as on https://my-app (in my opinion a quite common use case).
It is not possible to have multiple routes with the same hostname and path; only the first such route will be admitted to the router.
Routes with paths will work, as mentioned by @Graham.
You can put all three of the routes below in a single project without problem:
example.com
example.com/hello
example.com/world
They can have different protocols, but adding a duplicate route with a different protocol will not work.
Additionally, if you have a second project (Project B), you won't be able to use the example.com host again. So, none of the below will be admitted to the router:
example.com
example.com/hello
example.com/world
example.com/path
This makes sense, as you don't want someone else to use your domain.
What usually happens when you have HTTPS exposed is that all HTTP traffic is redirected to HTTPS.
You can achieve this by creating an edge-terminated route (via the UI) and selecting Insecure Traffic: Redirect. There is also an option to set it to Allow.
More documentation and YAML examples, if you would like to create the route from the command line: OpenShift Origin: Secured routes.
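A minimal sketch of such an edge-terminated route in YAML (the hostname, service name, and target port are assumptions):

    apiVersion: v1
    kind: Route
    metadata:
      name: my-app
    spec:
      host: my-app.example.com
      to:
        kind: Service
        name: my-app
      port:
        targetPort: 80
      tls:
        termination: edge
        # Redirect plain HTTP to HTTPS; set to Allow to serve both
        insecureEdgeTerminationPolicy: Redirect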

Openshift 3 communication between deployments

I'm just learning OSE 3. I'd like to deploy two Node.js web applications I have created, so I have created a project with two Node.js deployments, each now running in its own pod.
My question is: how are they supposed to communicate? Say, for example, one application needs to redirect to the other, or include components from the other application.
Should I hardcode the route of each application in a configuration file or something similar?
Thanks!
For internal communication between the two services, you can use the name of the service as the host name when making connections. This is possible because the names of the services are added to an internal DNS server, so a host name lookup on the name will yield the correct IP for the service at that time. When the service has multiple pods, an internal IP load balancer will automatically route the request to one of the pods.
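For example, assuming the second application is exposed through a service named app-b listening on port 8080 (both the name and the port are assumptions), the first application can reach it from inside any pod in the project simply by service name:

    # The service name resolves via the cluster's internal DNS
    curl http://app-b:8080/api/status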
For the question about redirects, that seems to suggest you have both services exposed publicly and want one service to return an HTTP response that redirects the HTTP client to a URL served by the other service. What the redirect URL needs to be will depend on how you are exposing the services, that is, whether each service is exposed under a different hostname or you have used OpenShift's path-based routing to overlay one at a sub-URL of the other under the same host.
Either way, you probably want to use an environment variable, passed in via the deployment configuration, to tell the service triggering the redirect what URL prefix it needs to redirect to. You would set this up manually; at least that way it isn't hardwired in your code.
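One way to wire that up (the deployment config name, variable name, and URL are assumptions) is to set the variable with the oc client:

    # Inject the other application's public URL into app-a's environment
    oc set env dc/app-a OTHER_APP_URL=https://app-b.example.com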
If you mean something else by redirect, you will need to explain better what you mean.

Google Cloud HTTP Load Balancer can't connect to my instance

I have created an HTTP load balancer, basically to redirect from port 80 to port 8080. The server on my instance is running on port 8080.
I can connect to the server directly, but the LB is not able to connect to the instance: accessing the LB's IP directly fails, and the health check always fails as well. The instance group the LB uses consists of just that single instance.
I read Google Compute Engine health checks failing,
and google-address-manager is running. However, when running ip route list table local, there is no route for my LB. The user in the above question is using network load balancing, not HTTP load balancing (as I am), so I don't know whether that is related.
Or perhaps it's related to a firewall? I have added my LB's IP address to a firewall rule that allows tcp:8080.
Does anybody have any idea how I can fix this? I am not experienced with Debian or GCP.
Should I just try running the route add command referenced in the above question? If so, why is google-address-manager not adding the route?
Thank you in advance!
You need to make sure that the port mapping on your instance group is set to the correct port, 8080 in your case.
First, edit your instance group and change the port name and port number to 8080.
Then, navigate to your HTTP backend's settings and change the default port to the port name you've configured in the instance group.
Finally, make sure that your firewall rules allow access on port 8080, either from 0.0.0.0/0 or at least from the HTTP load balancer's address range (130.211.0.0/22).
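If you prefer the command line, a rough equivalent looks like the following; the instance group name, zone, and firewall rule name are assumptions, and the source ranges used by Google's health checkers may differ for your setup:

    # Map the named port "http" to 8080 on the instance group
    gcloud compute instance-groups set-named-ports my-group \
        --named-ports http:8080 --zone us-central1-a

    # Allow the load balancer and health checkers to reach port 8080
    gcloud compute firewall-rules create allow-lb-8080 \
        --allow tcp:8080 --source-ranges 130.211.0.0/22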
I had the same issue and fixed it by adding a firewall rule for the health checker (which is not the same IP as your LB!). See https://cloud.google.com/compute/docs/load-balancing/health-checks?hl=en_US#http_and_https_load_balancing for instructions.
In my case, I had not configured the HTTP health check correctly.
I used "/" as the path, but on my backend "/" redirects to a login page (HTTP 301), which in turn responds with HTTP 200.
The health check does not follow redirects, and every HTTP response code other than 200 is considered unhealthy (from Debugging Health Checks in Load Balancing on Google Compute Engine).
So I changed my path to "/login", which fixed my issue.
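If you want to change the request path from the command line, something like this should work for a legacy HTTP health check (the health check name is an assumption):

    # Point the health check at a path that returns 200 directly
    gcloud compute http-health-checks update my-health-check --request-path /login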

Hadoop cluster on Google Compute Engine: Accessing master node via REST

I have deployed a Hadoop cluster on Google Compute Engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the Hadoop cluster. The output of this algorithm is accessed via an HTTP REST API, so I need to access the output either with a web browser or via REST commands. However, I cannot resolve the address for the output of the master node, which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed HTTP traffic and allowed access to ports 80 and 8091 on the network, but I cannot resolve the address given. Note that this address is NOT the IP address of the master node instance.
I have followed along with examples for accessing IP addresses of compute instances. However, I cannot find examples of accessing a single node of a Hadoop cluster on GCE that uses this form: http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network that the instance itself is on; these domain names are not globally visible. So, if you were to SSH into your master node first and then curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091, you should successfully retrieve the contents, whereas trying to access it from your personal browser will simply fail to resolve that hostname into any IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying the external IP address; then plug that in as your URL: http://[external ip address]:8091.
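With the newer gcloud tooling (assuming it is available in your environment), the external IP can also be pulled out directly; the zone below is an assumption:

    # Print the master node's external (NAT) IP
    gcloud compute instances describe CLUSTER_NAME-m --zone us-central1-a \
        --format='get(networkInterfaces[0].accessConfigs[0].natIP)'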
If you turned up the cluster using bdutil, a more involved but nicer way to access your cluster is to run the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node acting as a SOCKS5 proxy; you can then configure your browser to use localhost:1080 as its proxy server (making sure to enable remote DNS resolution) and visit the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
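Without bdutil, roughly the same tunnel can be opened by hand; the zone and local port below are assumptions (-D sets up the dynamic SOCKS forward, -N skips starting a remote shell):

    # Open a SOCKS5 proxy on localhost:1080 through the master node
    gcloud compute ssh CLUSTER_NAME-m --zone us-central1-a -- -D 1080 -N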