Access labels during runtime in kubernetes - google-compute-engine

Is there a way for my application to access the labels assigned to the pod / service during runtime?
Either via client API or via ENV / passed variables to the docker container?

The Downward API is designed to automatically expose information about the pod's configuration to the pod via environment variables. As of Kubernetes 1.0 it only exposes the pod's name and namespace. Adding labels to the Downward API is being discussed in #560 but isn't currently implemented.
In the meantime, your application can query the Kubernetes apiserver and introspect its own configuration to determine which labels have been set.
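For reference, here is a minimal sketch of what the Downward API exposes today, i.e. only the pod's name and namespace as environment variables (the pod name, container name, and image below are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: downward-example            # placeholder pod name
spec:
  containers:
  - name: app                       # placeholder container name
    image: busybox                  # placeholder image
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: MY_POD_NAME             # populated from the pod's metadata.name
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE        # populated from the pod's metadata.namespace
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace

Labels are not available via fieldRef here, so they still have to be fetched from the apiserver as described above.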

Related

How to get the hostname and IP address of the instances deployed by Deployment Manager for a particular deployment session?

How can I get the hostname and IP address of the instances deployed by Deployment Manager for a particular deployment session?
I have seen that it can be done via gcloud, but I am looking for an alternative, such as saving the information to files through Jinja.
I would also like to know if we can save it via Jinja templates.
I also need to know if there are any post-deployment scripts available for the gcloud Deployment Manager.
For example, I have deployed 4 CentOS instances, and now I need to create a config file using the above four instances and then go about starting services on all four.
I doubt this can be done through a startup script.
You can simulate the creation of a VM instance by reserving the desired IP, specifying the hostname, and providing the startup script that starts the services on your machines. Then check the REST file at the bottom of the page to see the actual labels used for that and where they should be used. Remember that for static IP assignment you must reserve one or more addresses first; for internal addresses check this, for external addresses check this.
You can create an instance template based on this documentation [1] and deploy your VM(s) with gcloud [2]. The hostname and IP address can be specified in the instance template itself:
gcloud deployment-manager deployments create [DEPLOYMENT_NAME] --config [CONFIG.YAML]
I am not familiar with Jinja but based on Google doc [3], you can use it to create templates used by Deployment Manager.
You can also add a metadata resource in the template to use a startup-script [4]. Keep in mind that the startup-script can simply download and execute a Python or Bash script if it becomes too complex.
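As an illustration only (the resource name, zone, machine type, and image below are placeholders), a Deployment Manager config that creates an instance and attaches a startup-script via metadata could look roughly like this:

resources:
- name: centos-instance-1                     # placeholder instance name
  type: compute.v1.instance
  properties:
    zone: us-central1-a                       # placeholder zone
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/centos-cloud/global/images/family/centos-7
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
    metadata:
      items:
      - key: startup-script
        value: |
          #!/bin/bash
          # placeholder: start or configure your services here
          systemctl start httpd

The same resource block can be generated from a Jinja template and imported from the CONFIG.YAML passed to gcloud.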

Openshift Project hide Elasticsearch Route

I am new to OpenShift so apologies in advance if this question is not very clear.
I have a project starting in OpenShift and will use the provided Elasticsearch Docker image as a data store.
Elasticsearch is bound only to localhost by default when installed, and if I were running the app on a server I would keep this configuration so as not to expose the Elasticsearch interface, since connectivity is only required by the application and there is no need to expose it outside the project.
If I make a route for Elasticsearch without changing its default config, it is accessible to other pods in the project but also outside the project, like the main application. Is it possible to make a route that is internal to the project only, so that the Elasticsearch interface is not accessible outside the project or by other means? Or is there a way to have a common localhost address between pods/applications?
I tried to group the services, but it is still not available.
Any support to put me in the right direction is really appreciated.

How to access services in K8s from the internal non-K8s network?

Question: How can I provide reliable access from (non-K8s) services running in a GCE network to other services running inside Kubernetes?
Background: We are running a hosted K8s setup in the Google Cloud Platform. Most services are 12factor apps and run just fine within K8s. Some backing stores (databases) are run outside of K8s. Accessing them is easy by using headless services with manually defined endpoints to fixed internal IPs. Those services usually do not need to "talk back" to the services in K8s.
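For context, a headless service with manually defined endpoints pointing at a fixed internal IP looks roughly like this (the name, port, and IP are placeholders):

kind: Service
apiVersion: v1
metadata:
  name: external-db               # placeholder service name
spec:
  clusterIP: None                 # headless: DNS resolves directly to the endpoint IPs
  ports:
  - port: 5432                    # placeholder database port
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-db               # must match the Service name
subsets:
- addresses:
  - ip: 10.240.0.42               # placeholder fixed internal IP outside the cluster
  ports:
  - port: 5432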
But some services running in the internal GCE network (but outside of K8s) need to access services running within K8s. We can expose the K8s services using spec.type: NodePort and talk to this port on any of the K8s nodes' IPs. But how can we automatically find the right NodePort and a valid worker node IP? Or maybe there is even a better way to solve this issue.
This setup is probably not a typical use-case for a K8s deployment, but we'd like to go this way until PetSets and Persistent Storage in K8s have matured enough.
As we are talking about internal services, I'd like to avoid using an external load balancer in this case.
You can make cluster service IPs meaningful outside of the cluster (but inside the private network) either by creating a "bastion route" or by running kube-proxy on the machine you are connecting from (see this answer).
I think you could also point your resolv.conf at the cluster's DNS service to be able to resolve service DNS names. This could get tricky if you have multiple clusters though.
One possible way is to use an Ingress Controller. Ingress Controllers are designed to provide access from outside a Kubernetes cluster to services running inside it. An Ingress Controller runs as a pod within the cluster and routes requests from outside the cluster to the correct services inside it, based on the configured rules. This provides a secure and reliable way for non-Kubernetes services running in a GCE network to access services running in Kubernetes.
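As a rough sketch only (the hostname, service name, and port are placeholders, and the current Ingress API version is used), an Ingress rule routing outside traffic to an in-cluster service might look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-access               # placeholder name
spec:
  rules:
  - host: api.internal.example.com    # placeholder hostname reachable from the GCE network
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service          # placeholder in-cluster Service
            port:
              number: 80              # placeholder Service port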

Bluemix using Node-RED bind to existing ClearDB MySQL service

I am using Node-RED on IBM Bluemix. I am trying to connect to MySQL hosted by ClearDB, but I cannot find a suitable node in the database category.
How can I bind to existing ClearDB service that I already have bound to another app?
You can take a look at this MySQL node for Node-RED in the flow and node library; it is an extension. The steps to add additional node types to the editor are explained in the Node-RED documentation in general, but they do not directly apply to Bluemix. For your Bluemix environment you would need to access and modify your Node-RED application's environment. See this post on how to deploy your customized Node-RED environment to Bluemix.

Kubernetes on GCE

I am trying to set up Kubernetes on GCE. Let's say there are 20 minions in the Kubernetes cluster, and two services are deployed with type LoadBalancer and 2 replicas each, so K8s will basically put 2 pods on two different minions per service. My question is: would the rest of the minions, which are not running any pods, also get the iptables rules in the KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST chains for these two services? At least that is my observation, but I would like to confirm whether this is just how Kubernetes works on GCE or how K8s behaves irrespective of where it is deployed. Is the reason that any service should be reachable from any minion, no matter whether the minion is part of that service or not? Let me know if there is a better community for this question.
Would the rest of the minions which are not running any pods also get the iptables rules in the KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST chains for these two services?
Yes. The load balancer can forward external traffic to any node in the cluster, so every node needs to be able to receive traffic for a service and forward it to the appropriate pod.
I would like to confirm if this is just how Kubernetes works on GCE or if this is how K8s behaves irrespective of where it is deployed.
The iptables rules for services within the cluster are the same regardless of where Kubernetes is deployed. Making a service externally accessible differs slightly depending on where you deploy your cluster (e.g. on AWS you'd create the service with type NodePort instead of type LoadBalancer), so the iptables rules for externalized services can vary a bit.
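For reference, the only difference in the service manifest is the type field; a minimal sketch (the service name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder service name
spec:
  type: LoadBalancer        # on platforms without a cloud load balancer, use NodePort instead
  selector:
    app: my-app             # placeholder label selector matching the 2 pod replicas
  ports:
  - port: 80                # port exposed by the service
    targetPort: 8080        # placeholder container port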