How to find the external IP? - openshift

I have a Python application which has been deployed to OpenShift.
My application uses an external REST service. In order to use this service, the developers of the REST service have to whitelist my IP address, because a firewall blocks unauthorized IP addresses.
How can I find the external IP of my application in OpenShift? I tried a few oc commands, but I am not sure whether I need the IP of the pod or of the service.

Out of the box, traffic from internal cluster components will appear to external infrastructure to be coming from whichever OpenShift compute host the pods are currently scheduled on.
Information on internal cluster networking and how traffic traverses from a process running inside a pod to the external network can be found at SDN: Packet Flow.
In your case, you could have the external application whitelist all of the IP addresses of the compute hosts that are expected to run your application pods.
Alternately, you could set up an egress IP. This causes all traffic originating from a specific OpenShift project to appear as if it originates from a single IP address, which you could then have your external application whitelist.
Documentation for configuring egress IPs can be found in the official documentation under Enabling Static IPs for External Project Traffic.
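On OpenShift 3.x with the OpenShift SDN plugins, the assignment is a two-step patch; a minimal sketch, assuming a project named myproject, a node named node1.example.com, and the placeholder address 192.0.2.10 (none of these come from the question):

# Tell the project to send its egress traffic from 192.0.2.10
oc patch netnamespace myproject --type=merge -p '{"egressIPs": ["192.0.2.10"]}'

# Host that egress IP on a specific node; the address must be routable to that node
oc patch hostsubnet node1.example.com --type=merge -p '{"egressIPs": ["192.0.2.10"]}'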

What you are searching for is the external IP of the Service. A Service acts as a load balancer for your pods, but by default it only has a cluster-wide IP address. If you need a URL to access it from the outside, you can create a Route. For your purpose, where you need an actual external IP address, you can assign the Service an external IP manually. Information on how to do this can be found in the official OpenShift docs.
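As a minimal sketch (the Service name myservice and the address 192.0.2.20 are placeholders, not values from the question), the manual assignment can be done with a patch:

oc patch svc myservice -p '{"spec": {"externalIPs": ["192.0.2.20"]}}'

The address must be one the cluster administrator has made routable to the cluster nodes; oc get svc myservice will then show it in the EXTERNAL-IP column.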

Related

Google Cloud - Adding additional Internal IP to VM

I'm trying to build a webserver in Google Cloud Platform that hosts multiple websites (GBP, IE, FR, DK etc.)
Generally, we assign a range of IPs to the server statically, set the bindings in IIS, then loadbalance using a virtual IP.
It seems near enough impossible to assign another internal IP in GCP. Lots of guides about additional external IPs, but we don't want a public facing webserver like this.
Anybody have any idea on how to add additional internal IPs to a VM / Instance?
Also, I have tried changing the internal address assigned to the instance to static in the network adapter settings; the next thing I knew, I couldn't access my VM for love nor money and had to delete and re-create it. If I go into advanced settings to add additional static IPs, we're apparently set to DHCP, so I can't add additional IPs.
Thanks all.
Answer that I received from the GCE discussion group, in Google Groups:
"You can add additional internal IP addresses to a VM instance. This is possible by enabling IP forwarding for the VM, creating a static network route, adding appropriate firewall rules, and setting additional internal IP addresses to network adapter of Windows. These steps are described in this article for Linux machines (https://cloud.google.com/compute/docs/networking#set_a_static_target_ip_address). The same steps are valid for Windows VMs. You will need to keep the initial internal IP address, subnet mask, gateway address and DNS settings of the adapter and manually enter them in properties of IPv4 of the network adapter. The below is a screenshot of my configuration on a VM instance (Windows 2008 R2) that perfectly works."
Update:
Now you can create instances with multiple network interfaces on Google Compute Engine and assign IPs. For more information, refer to this public documentation link. However, it currently has the following limitations:

- Alias IP ranges are not supported on any network interface of a VM that has multiple network interfaces enabled.
- You cannot modify or delete the network interfaces after the VM has been created.
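A hedged sketch of creating a multi-NIC instance (the networks, subnets, and instance name are hypothetical):

gcloud compute instances create multi-nic-vm \
    --zone=us-central1-a \
    --network-interface network=net-a,subnet=subnet-a \
    --network-interface network=net-b,subnet=subnet-b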

Cannot access Google Cloud Compute Instance External IP

I have set up an Google Cloud Compute Instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed the vncserver and can access it on port 5901 from localhost as well as the internal IP.
I am trying to access it from the static, external IP address but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:

- The external IP address can be reached
- It is enabled and working for SSH
- It is enabled and working for HTTP
- It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that the Google firewall or your VM are not blocking packets, you must make sure that the VNC service is configured to listen on an externally reachable interface (for example 0.0.0.0), not only on localhost.
You can always use a utility like nmap from outside the Google project to reveal information on the port status.
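For example, a quick check from a machine outside the project (EXTERNAL_IP is a placeholder):

nmap -Pn -p 5901 EXTERNAL_IP

A "filtered" result usually points to a firewall dropping packets, while "closed" suggests nothing is listening on that interface.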
Enable HTTP/HTTPS traffic from the firewall as per your need; it will work.
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute Instance External IP. My wider issue is still not solved, but I will post a new, more specific question about it.
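For reference, the "small HTTP service" test can be reproduced with a one-liner, assuming Python 3 is installed on the VM (binding port 80 requires root):

sudo python3 -m http.server 80 --bind 0.0.0.0

If http://EXTERNAL_IP then responds in a browser, the network path works and the remaining problem is specific to the vncserver service.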
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my Compute Engine instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local ssh config file as below, with the mentioned local forward port. In my case it's an example of YARN's IP, which I want to access in a browser.
Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    LocalForward 8089 <Internal-IP>:8088
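With that entry in ~/.ssh/config, a hypothetical session looks like:

ssh hadoop
# while the session is open, browse http://localhost:8089
# to reach the web UI on <Internal-IP>:8088 through the tunnel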
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, you connect to the instance using SSH and verify you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, it means that you can access your external URL:
http://<IP-EXTERNAL-VM>
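As a concrete sketch, on a Debian-based image (apt and Apache are assumptions; the answer does not prescribe a particular OS or web server):

sudo apt-get update
sudo apt-get install -y apache2

# repeat the local check; it should now return Apache's default page
wget -O - http://localhost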
Usually there are two main things to check; a sketch follows this list.
1. Port
By default, only ports 80 and 443 (plus ICMP) are allowed. If your server is running on a different port, create a firewall rule for it.
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your need.
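A hypothetical rule for a non-default port (the rule name, port, and source range are placeholders):

gcloud compute firewall-rules create allow-vnc \
    --direction=INGRESS \
    --allow=tcp:5901 \
    --source-ranges=0.0.0.0/0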
For me the problem was that I set up the traffic for the firewall rule to be 'Egress' instead of 'Ingress'.
If anyone has already initiated 'https', just disable it and check again.

Static IP address for outgoing (i.e. out of Azure) traffic from a service hosted in Service Fabric?

I need services hosted in Azure Service Fabric to communicate with our on-premises resources. I would like all outgoing traffic from Azure to have a static IP address so that the on-premises network team can create an incoming firewall rule based on this static IP address.
Since the service(s) hosted in Service Fabric can move from node to node (without the user's intervention), and the auto-scale feature can add new nodes or tear down existing ones, I am guessing I have to use something with a configuration step. So I would like to know the following:
1) What needs to be modified in the fabric cluster so that all outgoing traffic has static ip address?
2) How do I modify it at configuration step so that I don't worry about node to node switch as well as auto-scale feature?
All nodes get the same outbound IP address: the same public IP that is assigned to your cluster's load balancer in Azure. You can specify the public IP address by using an ARM template to deploy your cluster; you specify the name of a public IP resource (that public IP is assigned to your subscription and is then referenced in the ARM template for the cluster).
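A hedged sketch of reserving such an address with the Azure CLI (the resource group and name are placeholders; your cluster's ARM template would then reference sf-public-ip by name):

az network public-ip create \
    --resource-group my-rg \
    --name sf-public-ip \
    --allocation-method Static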

VPN Config Google Cloud

I need to know if the following scenario is possible using Google Cloud:
I need to have an IPSec VPN with a partner. The thing is that on their side they will allow only one of my hosts to access their network; on their side they configure an ACL as follows: network-object host X.X.X.4.
So it is a must that in the phase 2 negotiation, Google Cloud sends as the local address the IP they allow, X.X.X.4, and not the network X.X.X.0/something; otherwise phase 2 will fail.
Is it possible to configure the VPN with this requirement?
Regards,
Will.
You could try creating a /30 network in your project, hosting the VM that you would like to interact with the partner in it, and setting up the VPN tunnel from there.
If you have another network where other VMs/apps exist, set up a cross-VPN between the VPN tunnels in your project; they are just in different networks within the same project.
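With classic (policy-based) Cloud VPN there is also a more direct way to express the single-host restriction than the /30 network above: the traffic-selector flags. A hypothetical sketch, with the gateway, secret, and peer values as placeholders:

gcloud compute vpn-tunnels create partner-tunnel \
    --region=us-east1 \
    --target-vpn-gateway=my-gateway \
    --peer-address=PARTNER_PUBLIC_IP \
    --shared-secret=SECRET \
    --ike-version=2 \
    --local-traffic-selector=X.X.X.4/32 \
    --remote-traffic-selector=PARTNER_RANGE

The local traffic selector is what is proposed during phase 2, so pinning it to X.X.X.4/32 matches the partner's single-host ACL.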

Hadoop cluster on Google Compute Engine: Accessing master node via REST

I have deployed a Hadoop cluster on Google Compute Engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the Hadoop cluster. The output of this algorithm is accessed via an HTTP REST API. Thus I need to access the output either with a web browser or via REST commands. However, I cannot resolve the address for the output of the master node, which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed HTTP traffic and allowed access to ports 80 and 8091 on the network. But I cannot resolve the address given. Note that this HTTP address is NOT the IP address of the master node instance.
I have followed along with examples for accessing IP addresses of compute instances. However, I cannot find examples of accessing a single node of a Hadoop cluster on GCE that follows this form: http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network of that same instance itself; these domain names are not globally visible. So, if you were to SSH into your master node first, and then try to curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 then you should successfully retrieve the contents, whereas trying to access from your personal browser will just fail to resolve that hostname into any IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying external IP address; then plug that in as your URL: http://[external ip address]:8091.
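For example (EXTERNAL_IP stands for whatever address the previous command reports, and port 8091 must already be open):

gcutil getinstance CLUSTER_NAME-m    # look for the external IP entry in the output
curl http://EXTERNAL_IP:8091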
If you turned up the cluster using bdutil, a more involved but nicer way to access your cluster is to run the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node as a SOCKS5 proxy; you can then configure your browser to use localhost:1080 as its proxy server, make sure remote DNS resolution is enabled, and visit the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
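A sketch of that flow; the Chrome invocation is an assumption for illustration, not part of bdutil itself:

./bdutil socksproxy    # keeps an SSH SOCKS5 tunnel open on localhost:1080

# in another terminal: browse through the tunnel, resolving DNS remotely
# so that the *.internal hostname works
google-chrome --proxy-server="socks5://localhost:1080" \
    --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE localhost"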