The documentation says that only pods that are managed by a Replication Controller will be restarted after a Kubernetes cluster update on Google Container Engine.
What about the pods that are managed by a Deployment?
In this case the documentation's language is overly specific. Any pods that are managed by a controller (ReplicationController, ReplicaSet, DaemonSet, Deployment, etc.) will be restarted. The warning is aimed at folks who have created Pods without a corresponding controller: because nodes are replaced with new nodes (rather than upgraded in place), Pods without a controller ensuring that they keep running will simply disappear.
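To see which pods the warning actually applies to, one quick check is each pod's ownerReferences: controller-managed pods carry one, bare pods don't. Below is a minimal sketch using the official kubernetes Python client (a working kubeconfig is assumed); it lists the pods that have no controller and would therefore not come back after a node replacement.

    # Minimal sketch: requires the official `kubernetes` Python client and a
    # working kubeconfig for the cluster.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces().items:
        owners = pod.metadata.owner_references or []
        # Controller-managed pods carry an ownerReference with controller=True
        # (ReplicaSet, DaemonSet, StatefulSet, ...); bare pods have none.
        if not any(o.controller for o in owners):
            print(f"unmanaged pod: {pod.metadata.namespace}/{pod.metadata.name}")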
Related
I'm trying to create a k8s cluster in Oracle OCI with VCN-native pod networking. I have separate subnets for my pods and nodes (I followed example 4 here), and when OCI tries to attach the secondary VNIC to the instance, it fails and the status never gets past "Attaching". However, when I use the same subnet for both pods and nodes it attaches successfully. Does anyone know what's going on?
I have downloaded CodeReady Containers on Windows to install my OpenShift cluster. I need to deploy 3scale on it using the operator from OperatorHub, but the OperatorHub page is empty.
Digging deeper, I found that a few pods on the cluster are not running and show the state "ImagePullBackOff".
I deleted the pods in order to get them restarted, but the error won't go away. I checked the event logs and the screenshots are attached.
Pods Terminal logs
This is an error that I keep getting when I start my cluster. Sometimes it comes up, sometimes the cluster starts normally, but maybe this has something to do with it.
Quay.io Error
This is my first time making a deployment on an OpenShift cluster and setting up my cluster environment. So far I have not been able to resolve the issue even after deleting and restarting the cluster.
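To narrow down which images are failing and why, one option is to list the pods stuck in ImagePullBackOff and dump their events. The sketch below uses the kubernetes Python client against the CRC cluster's kubeconfig (the same information is in the screenshots, this just collects it in one place); names and namespaces are whatever the cluster reports.

    # Minimal sketch: `kubernetes` Python client, kubeconfig pointing at the
    # CRC cluster. Finds pods stuck pulling images and prints their events,
    # which usually name the underlying registry (e.g. quay.io) failure.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces().items:
        for cs in (pod.status.container_statuses or []):
            waiting = cs.state.waiting
            if waiting and waiting.reason in ("ImagePullBackOff", "ErrImagePull"):
                print(f"{pod.metadata.namespace}/{pod.metadata.name}: {waiting.message}")
                events = v1.list_namespaced_event(
                    pod.metadata.namespace,
                    field_selector=f"involvedObject.name={pod.metadata.name}",
                )
                for ev in events.items:
                    print(f"  {ev.reason}: {ev.message}")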
I am trying to deploy a MySQL Docker image to Kubernetes. I have managed most of the tasks, and the Docker image is up and running in Docker, but one final thing is missing from the Kubernetes deployment.
MySQL has a configuration value, 'MYSQL_ROOT_HOST', stating which host a user can log on from. Configuring that for Docker is no problem, since Docker networking uses '172.17.0.1' for bridging.
The problem with Kubernetes is that this must be the IP of the Pod connecting to the MySQL Pod, and every time a Pod starts this IP changes.
I tried to put in the label of the Pod connecting to the MySQL Pod, but it still looks at the IP of the Pod instead of the DNS name.
Do you have an idea how I can overcome this problem? I can't even figure out how this should work if I set autoscaling for the Pod that is trying to connect to MySQL; the replicas will all have different IPs.
Thanks for any answers.
As @RyanDowson and @siloko mentioned, you should use a Service, Ingress, or Helm chart for these purposes.
You can find additional information on the Service, Ingress, and Helm Charts pages.
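As a rough illustration of the Service approach, the sketch below (kubernetes Python client; the mysql name and app: mysql label are placeholders, not taken from the question) puts the MySQL pod behind a Service so clients connect to a stable DNS name such as mysql.default.svc.cluster.local instead of a changing pod IP. On the MySQL side, MYSQL_ROOT_HOST is then typically left at '%' rather than pinned to a client IP.

    # Minimal sketch: `kubernetes` Python client; the `mysql` Service name and
    # the `app: mysql` selector are placeholders for whatever labels the
    # MySQL pod actually carries.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "mysql"},
        "spec": {
            "selector": {"app": "mysql"},   # must match the MySQL pod's labels
            "ports": [{"port": 3306, "targetPort": 3306}],
        },
    }
    v1.create_namespaced_service(namespace="default", body=service)

Clients then connect to mysql:3306, which keeps working no matter how often the backing pod's IP changes or how many replicas the client side scales to.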
It seems the OpenShift console has a problem showing a pod created by a DeploymentConfig after the pod has been relabeled.
My Use Case:
I have two LDAP servers, one as a master service and another as a slave service (much like MySQL master/slave). These two services will serve only as internal services, just for replication between them (since the pod IPs are not accessible to each other).
I plan to expose another service (e.g. ldap-outer-service) for outside usage, which will include the pods of both the master and slave services.
Deploy Steps:
After the master service is ready, I deploy the slave service, which replicates from the master service. Only after the replication is successfully set up and the data is initialized from the master service should the pod in the slave service be added to the outer service.
Issue:
After the slave service is ready, I label the pod in the slave service to match the outer service's selector. Then "oc describe service ldap-outer-service" shows that the endpoint is there, but the OpenShift console just doesn't show the pod.
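For reference, labeling the slave pod and checking the resulting endpoints can also be done through the API, independent of what the console shows. A minimal sketch with the kubernetes Python client follows; the pod name and label key/value are placeholders, assuming ldap-outer-service selects on them.

    # Minimal sketch: `kubernetes` Python client; "ldap-slave-0" and the
    # role=ldap-outer label are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Add the label the outer service selects on, once replication is ready.
    v1.patch_namespaced_pod(
        name="ldap-slave-0",
        namespace="default",
        body={"metadata": {"labels": {"role": "ldap-outer"}}},
    )

    # Check the endpoints directly, independent of the console view.
    endpoints = v1.read_namespaced_endpoints("ldap-outer-service", "default")
    print(endpoints.subsets)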
I'm a newbie at Kubernetes and I'm having trouble understanding how I can run persistent pods (Cassandra or MySQL ones) on Ubuntu servers.
Correct me if I'm wrong: Kubernetes can scale pods up or down when it sees that we need more CPU, but we are not talking about static code here, we are talking about data that lives on other nodes. So what will the pod do when it receives a request from the load balancer? Also, Kubernetes has the power to destroy nodes when it sees that traffic has gone down, so how can we avoid losing data and disturbing the environment?
You should use volumes to map a directory in the container to persistent disks on the host or to other storage.
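A minimal sketch of that, using the kubernetes Python client (names, sizes, and the assumption of a default StorageClass are all placeholders): claim persistent storage with a PersistentVolumeClaim and mount it into the database pod so the data survives pod rescheduling. For real database workloads a StatefulSet with volumeClaimTemplates is the more usual controller; this just shows the volume wiring.

    # Minimal sketch: `kubernetes` Python client; assumes a default
    # StorageClass so the PVC gets bound automatically. Names, sizes and the
    # password value are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "mysql-data"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "10Gi"}},
        },
    }
    v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "mysql", "labels": {"app": "mysql"}},
        "spec": {
            "containers": [{
                "name": "mysql",
                "image": "mysql:8.0",
                "env": [{"name": "MYSQL_ROOT_PASSWORD", "value": "changeme"}],
                "volumeMounts": [{"name": "data", "mountPath": "/var/lib/mysql"}],
            }],
            "volumes": [{
                "name": "data",
                "persistentVolumeClaim": {"claimName": "mysql-data"},
            }],
        },
    }
    v1.create_namespaced_pod(namespace="default", body=pod)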