Hi, I have a REST URL configured in my application and I have deployed the application in OpenShift with 2 pods running. Every time I hit the URL, only one pod receives the call and the other doesn't. Does any configuration need to be added in OpenShift? Please let me know your suggestions.
The object you are looking for is called a Service (documentation):
A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them. Backing pods can be added to or removed from a service arbitrarily while the service remains consistently available, enabling anything that depends on the service to refer to it at a consistent address. The default service clusterIP addresses are from the OpenShift Container Platform internal network and they are used to permit pods to access each other.
So in OpenShift, you typically have the following setup with a Route, a Service and multiple Pods:
                                   +-----+
                              +--->+ Pod |
                              |    +-----+
    +-------+    +---------+  |
--->+ Route +--->+ Service +--+
    +-------+    +---------+  |
                              |    +-----+
                              +--->+ Pod |
                                   +-----+
If you already have a Service defined, you'll need to check its spec.selector to see which Pods are actually selected by the Service, i.e. where the Service forwards requests to.
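For example, a quick check along these lines, where my-svc and app=my-app are placeholders for your actual Service name and label:

oc get svc my-svc -o yaml           # inspect spec.selector
oc get endpoints my-svc             # should list the IPs of both Pods
oc get pods -l app=my-app -o wide   # the Pods that match the selector

If the endpoints list shows only one Pod IP, the second Pod either doesn't match the selector or isn't passing its readiness check.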
Also, Routes support sticky sessions, so sticky sessions may be enabled for your Route; you may want to check that as well.
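As a sketch, assuming the default HAProxy router and a Route named my-route (a placeholder), you could inspect the Route and, if needed, use the documented annotations to disable cookie-based sticky sessions and force round-robin balancing:

oc get route my-route -o yaml
# disable the session cookie and set the balancing strategy
oc annotate route my-route haproxy.router.openshift.io/disable_cookies=true --overwrite
oc annotate route my-route haproxy.router.openshift.io/balance=roundrobin --overwrite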
I have two pods in OpenShift 4. On the terminal of one pod I make a ping to the other pod, but the error message "Name or service not found" is returned.
What must be done for the two pods to recognize each other?
Ping does not take the https:// prefix.
ping / ICMP traffic within clusters is often not forwarded and may not work as expected, depending on the cluster network. So you should test with other tools such as curl to reach other Pods. When accessing Pods directly, you typically also do not use https://.
Also, typically you should not access other Pods directly but instead via a Service.
With the above being said, in your case the name is possibly wrong. You should be able to talk to other Pods using the following hostname:
<pod-ip-address>.<my-namespace>.pod.cluster-domain.example
So for example:
172-17-0-3.my-namespace.pod.cluster.local
When accessing a Service you would typically use the following hostnames:
my-svc (in the same namespace)
my-svc.my-namespace.svc.cluster-domain.example
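For example, assuming one of your Pods is named my-pod-1, the target listens on port 8080, the image contains curl, and your cluster domain is the usual default cluster.local (all assumptions), you could test resolution and connectivity like this:

# via the Service (preferred)
oc rsh my-pod-1 curl -s http://my-svc:8080/
oc rsh my-pod-1 curl -s http://my-svc.my-namespace.svc.cluster.local:8080/
# directly to the other Pod's IP-based DNS name
oc rsh my-pod-1 curl -s http://172-17-0-3.my-namespace.pod.cluster.local:8080/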
I'm currently trying to build my services on Kubernetes using Istio and have trouble whitelisting all host IPs that are allowed to connect to the MySQL database through the mysql.user table.
I always get the following error after a new deployment:
Host 'X.X.X.X' is not allowed to connect to this MySQL server
Every time I deploy my service, a new pod IP pops up and I have to replace the old user entry with the new host IP. I would really like to avoid using '%' for the host.
Is there any way I could just register the node IP instead, to keep it persistent?
Both Kubernetes and Istio provide network-level protections, so setting the allowed hosts to '%' (all) is generally safe.
A Kubernetes network policy is probably the best cluster-level match for what you're looking for. You'd set the database itself to accept connections from all addresses, but then would set a network policy to refuse connections except from pods that have a specific set of labels. Since you control this by label, any new pods that have the appropriate set of labels will be automatically granted access without manual changes.
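A minimal sketch of such a policy, assuming the database Pods carry the label app: mysql and the callers carry the label role: mysql-client (both labels are placeholders):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-allow-clients
spec:
  podSelector:
    matchLabels:
      app: mysql              # the database Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: mysql-client  # only Pods with this label may connect
    ports:
    - protocol: TCP
      port: 3306
EOF

Note that NetworkPolicy objects are only enforced if the cluster's network plugin supports them.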
Depending on your needs, the default protection given by a ClusterIP service may be enough for you. If a service is ClusterIP but not any other type, it is unreachable from outside the cluster; there is no network path to make it accessible. This is often enough to prevent casual network snoopers from finding your database.
Istio's authorization system is a little bit more powerful and robust at a network level. It can limit calls by the Kubernetes service account of the caller, and uses TLS certificates rather than just IP addresses to identify the caller. However, it doesn't come enabled by default, and in my limited experience with it it's very easy to accidentally configure it to do things like block Kubernetes health checks or Prometheus metric probes. If you're satisfied with IP-level security this might be more power than you need.
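If you do end up using Istio's authorization, a sketch of an AuthorizationPolicy that only admits calls from one service account could look like this (namespace, labels and service account name are placeholders, and it assumes mTLS is enabled between the sidecars):

kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: mysql-allow-app
  namespace: default
spec:
  selector:
    matchLabels:
      app: mysql              # apply to the MySQL workload
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/my-app"]  # the caller's service account
EOF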
I have a Python application which has been deployed to openshift.
I am using an external REST service in my application. In order to use this service, the developers of the REST service have to whitelist my IP because a Firewall blocks unauthorized IP addresses.
How can I find the external IP of my application? How can I find it in OpenShift? I tried a few oc commands, but I am not sure whether I need the IP of the pod or of the service.
Out of the box, traffic from internal cluster components will appear to external infrastructure as if it is coming from whichever OpenShift compute host their pods are currently scheduled on.
Information on internal cluster networking and how traffic traverses from a process running inside a pod to the external network can be found at SDN: Packet Flow.
In your case you could have the external application whitelist all of the IP addresses of the compute hosts that are expected to run your application pods.
Alternatively, you could set up an EgressIP. This will cause all traffic originating from a specific OpenShift project to appear as if it is originating from a single IP address. You could then have your external application whitelist the EgressIP address.
Documentation for configuring EgressIP can be found in the official documentation under Enabling Static IPs for External Project Traffic
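With the OpenShift SDN plugin this comes down to two patches, roughly as follows (project name, node name and IP address are placeholders; check the linked documentation for your exact version):

# assign the egress IP to the project
oc patch netnamespace my-project --type=merge -p '{"egressIPs": ["192.168.1.100"]}'
# allow a node to host that egress IP
oc patch hostsubnet node1.example.com --type=merge -p '{"egressIPs": ["192.168.1.100"]}'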
What you are searching for is the external IP of the Service. A Service acts as a load balancer for your pods but by default it only has a cluster-wide IP address. If you need a URL to access it from the outside, you can create a Route. For your purpose where you need an actual external IP address, you can assign the Service an external IP manually. Information on how to do this can be found in the official OpenShift Docs.
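As a sketch, assigning an external IP manually is a one-line patch, where the Service name and the address are placeholders and the address must be one that is actually routed to your cluster:

oc patch svc my-service -p '{"spec":{"externalIPs":["192.168.121.10"]}}'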
I am running a Ruby on Rails app (using Passenger in Nginx mode) on Google Container Engine. These pods are sitting behind a GCE network load balancer. My question is how to access the external Client IP from inside the Rails app.
The GitHub issue here seems to present a solution, but I ran the suggested commands:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  kubectl annotate node $node \
    net.beta.kubernetes.io/proxy-mode=iptables;
  gcloud compute ssh --zone=us-central1-b $node \
    --command="sudo /etc/init.d/kube-proxy restart";
done
but I am still getting a REMOTE_ADDR header of 10.140.0.1.
Any ideas on how I could get access to the real client IP (for geolocation purposes)?
Edit: To be more clear, I am aware of the ways of accessing the client IP from inside Rails, however all of these solutions are getting me the internal Kubernetes IP, I believe the GCE network load balancer is not configured (or perhaps unable) to send the real client IP.
A Googler's answer to another version of my question confirms that what I am trying to do is not currently possible with the Google Container Engine network load balancer.
EDIT (May 31, 2017): as of Kubernetes v1.5 and up this is possible on GKE with the beta annotation service.beta.kubernetes.io/external-traffic. This was answered on SO here. Please note when I added the annotation the health checks were not created on the existing nodes. Recreating the LB and restarting the nodes solved the issue.
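For reference, the beta annotation mentioned above is set on the Service itself, something like the following (the Service name is a placeholder):

kubectl annotate service my-svc service.beta.kubernetes.io/external-traffic=OnlyLocal

With OnlyLocal, traffic is only sent to nodes that actually run a backing Pod, which is what allows the client source IP to be preserved; in later Kubernetes versions the same behaviour is controlled by spec.externalTrafficPolicy: Local.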
It seems as though this is not a Rails problem at all, but one of GCE. You can try the first part of
request.env["HTTP_X_FORWARDED_FOR"]
Explanation
Getting Origin IP From Load Balancer advises that https://cloud.google.com/compute/docs/load-balancing/http/ has the text
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP> (requests only)
Can be a comma-separated list of IP addresses, depending on the X-Forwarded-For entries appended by the intermediaries the client is traveling through. The first element in the <client IP(s)> section shows the origin address.
X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
Parameters for Stackdriver Trace.
I have multiple Kubernetes clusters that have Google-powered load balancers (ingress LBs).
So right now, to access my k8s cluster service(s) I just have to ping the public IP given by $ kubectl get service, cool.
My problem is that sometimes I need to tear down/create clusters and reconfigure services, those services might also need SSL certificates very soon, and my clusters'/services' builds need to be easily reproducible too (for cloud devs!).
The question is simple: can I instead of having an ingress load balancer IP have an ingress load balancer hostname?
Something like ${LOAD_BALANCER_NAME}.${GOOGLE_PROJECT_NAME}.appspot.com would be uber awesome.
Kubernetes integration with Google Cloud DNS is a feature request for which there is no immediate timeline (it will happen, I cannot comment on when). You can however create DNS records with the static IP of a load balancer.
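As a sketch of that approach, where all names, zones and addresses are placeholders: reserve a static regional IP, point your Service of type LoadBalancer at it, and create a DNS record for it.

# reserve a static regional IP and read it back
gcloud compute addresses create my-lb-ip --region us-central1
gcloud compute addresses describe my-lb-ip --region us-central1 --format='value(address)'

# tell the Service to use that address (assumes it printed 203.0.113.10)
kubectl patch svc my-svc -p '{"spec":{"loadBalancerIP":"203.0.113.10"}}'

# create an A record for it in a Cloud DNS managed zone
gcloud dns record-sets transaction start --zone my-zone
gcloud dns record-sets transaction add 203.0.113.10 --name my-svc.example.com. \
  --ttl 300 --type A --zone my-zone
gcloud dns record-sets transaction execute --zone my-zone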
If I've understood your problem correctly, you're using an L4 loadbalancer (service.Type=LoadBalancer) and you want to be able to delete the service/nodes etc and continue using the same IP (because you have DNS records for it). In other words, you want a loadbalancer not tied to the service lifecycle. This is possible through an L7 loadbalancer [1] & [2], or by recreating the service with an existing IP [3].
Note that [1] divorces the loadbalancer from service lifetime, but if you take down your entire cluster you will lose the loadbalancer. [2] is tied to the Ingress resource, so if you delete your cluster and recreate it, start the loadbalancer controller pod, and recreate the same Ingress resource, it will use the existing loadbalancer. Also note that both [1] and [2] depend on a "beta" resource that will be released with kubernetes 1.1, I'd appreciate your feedback if you deploy them :)
[1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[2] https://github.com/kubernetes/contrib/pull/132
[3] https://github.com/kubernetes/kubernetes/issues/10323