It seems the OpenShift console has a problem showing a pod created by a DeploymentConfig after the pod has been relabeled.
My Use Case:
I have two LDAP servers, one behind a master service and the other behind a slave service (much like a MySQL master/slave setup). These two services are internal only and exist purely for the replication between the two servers (since the pod IPs are not reachable from each other directly).
I also plan to expose another service (e.g. ldap-outer-service) for external use, which will include the pods from both the master and the slave service.
Deploy Steps:
After the master service is ready, I deploy the slave service, which replicates from the master service. Only after the replication is successfully set up and the data has been initialized from the master should the pod in the slave service be added to the outer service.
Issue:
After the slave service is ready, I label the pod in the slave service so that it matches the selector of the outer service. "oc describe service ldap-outer-service" then shows the endpoint, but the OpenShift console just doesn't show the pod.
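For reference, a minimal sketch of the labeling and verification steps described above. The label key/value (ldap-group=outer) and the pod name are hypothetical placeholders, assuming ldap-outer-service selects pods by such a label:

```bash
# Add the outer service's selector label to the slave pod once replication is in sync.
oc label pod <slave-pod-name> ldap-group=outer

# The pod should now show up as an endpoint of the outer service.
oc describe service ldap-outer-service
oc get endpoints ldap-outer-service
```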
I have an OpenShift cluster on IBM Cloud. I want to connect to the worker nodes using SSH via PuTTY, but the documentation says,
SSH by password is unavailable on the worker nodes.
Is there a way to connect to those?
If you use OpenShift v4 on IBM Cloud, you can access your worker nodes with oc debug node/<target node name> instead of SSH. The oc debug node command launches a temporary pod that gives you a terminal session on the target node, so you can inspect the node and run Linux commands much as you would in an SSH session. Try it.
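A minimal sketch of that workflow (the node name is a placeholder):

```bash
# Pick a node and start a debug session on it.
oc get nodes
oc debug node/<target-node-name>

# Inside the debug pod, switch into the node's root filesystem so that
# commands run against the host itself, then work as in an SSH session.
chroot /host
systemctl status kubelet
```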
SSH access to worker nodes in OpenShift is disabled for security reasons. The documentation suggests using DaemonSets for actions that need to be performed on worker nodes.
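As a rough sketch of that approach (names and image are placeholders, and the privileged security context assumes the service account is allowed the privileged SCC), a DaemonSet runs one pod per node, so whatever the pod does is performed on every worker node:

```bash
oc apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-task
spec:
  selector:
    matchLabels:
      app: node-task
  template:
    metadata:
      labels:
        app: node-task
    spec:
      hostPID: true
      containers:
      - name: task
        image: registry.access.redhat.com/ubi8/ubi-minimal
        securityContext:
          privileged: true
        # Replace with the node-level task you actually need to run.
        command: ["sh", "-c", "sleep infinity"]
EOF
```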
I have been working on a project involving MySQL master-slave replication. I want to set up master-slave replication between AWS and GCP, where AWS RDS is the master and the slave (replica) sits on the GCP side. However, I want to create this replica on the GCP side without publicly exposing the master instance on AWS; in other words, the replication should happen over a private network.
I have found solutions where a proxy is created for the master instance and the replica is then created on the GCP side using the Cloud SQL migration services, but this is not what I want. I don't want to assign a proxy to the master instance.
The replica creation process should be within a private network.
What should I do next? Help.
Also, please do let me know if the question is still unclear.
Create a Transit Gateway in the AWS VPC and connect it to the GCP private network (for example with a Site-to-Site VPN terminating on a GCP Cloud VPN gateway).
https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
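For the AWS side, a rough sketch with the AWS CLI (all IDs and the GCP VPN IP are placeholders; the GCP side would terminate the tunnel with Cloud VPN):

```bash
# Create the Transit Gateway and attach the VPC that hosts the RDS master.
aws ec2 create-transit-gateway --description "rds-to-gcp-replication"
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0

# Point a Site-to-Site VPN from the Transit Gateway at the GCP VPN gateway.
aws ec2 create-customer-gateway --type ipsec.1 \
    --public-ip <gcp-cloud-vpn-ip> --bgp-asn 65000
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --transit-gateway-id tgw-0123456789abcdef0
```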
If a private network on the master (AWS) side is a must, then this won't be possible. The documentation about using Cloud SQL as an external replica is clear on the requirements for the source:
Ensure that the source database server meets these configuration requirements:
An externally accessible IPv4 address and TCP port.
I have 3 masters in my OpenShift cluster. After changing the identity provider in master-config.yaml and restarting master-api and master-controllers on one of them, the change doesn't take effect unless I copy the configuration to all masters in the cluster. I'm wondering why?
I think it is a consequence of the master HA architecture: each master keeps its own copy of the configuration, so the copies have to be kept in sync whenever they change. For example, the controller service follows a single elected leader at a time, and API requests can be served by any master, so a change applied to only one master is not seen by the others. As I remember, this architecture changed somewhat as of v3.10: the node configuration is managed as a ConfigMap shared within a node group, so nodes no longer require a restart for changes to take effect, but the master services still have to be restarted on each master whenever their configuration changes.
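As a sketch of the manual sync, assuming an OCP 3.x layout and hypothetical master host names, run from the master where the file was edited:

```bash
for master in master2 master3; do
  scp /etc/origin/master/master-config.yaml \
      ${master}:/etc/origin/master/master-config.yaml
  # Pre-3.10 systemd services:
  ssh ${master} 'systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers'
  # On 3.10+ the masters run as static pods instead:
  # ssh ${master} 'master-restart api && master-restart controllers'
done

# Don't forget to restart the services on the local master too.
```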
The documentation says that only pods that are managed by a Replication Controller will be restarted after a Kubernetes cluster update on Google Container Engine.
What about the pods that are managed by a Deployment?
In this case the documentation's wording is narrower than it needs to be. Any pods that are managed by a controller (ReplicationController, ReplicaSet, DaemonSet, Deployment, etc.) will be restarted. The warning is for folks who have created Pods without a corresponding controller. Because nodes are replaced with new nodes (rather than upgraded in place), Pods without a controller ensuring that they remain running will just disappear.
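As an illustration (current apps/v1 API, placeholder names): a pod owned by a Deployment like the one below is recreated on a new node when its old node is replaced during the upgrade, whereas a bare Pod created directly (kind: Pod) simply disappears along with the old node:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
EOF
```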
I would like to create a Kubernetes cluster to deploy MySQL databases, like a MySQL farm. These databases should be accessible from the internet.
All databases on the same node will be listening on port 3306; could kube-proxy or the DNS add-on redirect each request to a specific container?
I would like to create URLs like myDB1.example.com:3306 and myDB2.example.com:3306 that each go to a specific container.
I'm deploying this environment in AWS.
Is it possible to create this cluster?
Yes. The starting point would be a (customized) MySQL Docker image with EBS-backed volumes, run under a Replication Controller to handle failover. On top of that you would have a Service that provides a stable, routable interface to the outside world. Optionally, put an AWS Elastic Load Balancer in front of it.
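A minimal sketch of the Service part (names and labels are placeholders): each database gets its own LoadBalancer Service, which on AWS provisions an ELB, and myDB1.example.com would simply be a DNS CNAME pointing at that ELB:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mydb1
spec:
  type: LoadBalancer
  selector:
    app: mysql
    db: mydb1
  ports:
  - port: 3306
    targetPort: 3306
EOF
# Repeat per database (mydb2, ...), with each Service selecting its own pods.
```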