So it is possible to use kubectl taint and its counterpart, tolerations, to control which Kubernetes pods can and cannot be scheduled onto specific nodes. However, I cannot currently find a way to configure Google Cloud so that a taint setting persists across node creation. Is it possible?
I'm guessing not, yet. Taints and tolerations are in alpha, and alpha features are only supported on temporary GKE alpha clusters. Even in alpha, I'm not sure to what degree taints and tolerations actually work; there are a lot of changes being made at the moment. This feature should move to beta and be usable in 1.6.
Please see:
https://github.com/kubernetes/features/issues/108
https://github.com/kubernetes/kubernetes/issues/25320
GKE now supports node taints, which are persisted, so you no longer need to run the kubectl taint command yourself. See https://cloud.google.com/container-engine/docs/node-taints for more information.
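For example, the taint can be set when creating a node pool, so every node in the pool (including nodes recreated by autoscaling or upgrades) carries it. A sketch with placeholder cluster, pool and taint names, assuming a reasonably recent gcloud SDK:

```
gcloud container node-pools create my-tainted-pool \
    --cluster=my-cluster \
    --node-taints=dedicated=special-workload:NoSchedule
```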
So I want to deploy a master-slave MySQL cluster in k8s. I found two approaches that seem popular:
The first one is to use StatefulSets directly, following the official Kubernetes tutorial: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
The second one is to use an operator, e.g. https://github.com/oracle/mysql-operator
Which way is most commonly used?
Also, with StatefulSets, if my MySQL master dies, will k8s automatically promote a slave to be the new master?
Lastly, when my backend application performs an operation (CRUD) against the MySQL cluster, how does k8s know which pod to route it to, i.e. that writes can only be sent to the master while reads can go to any replica?
Users can deploy and maintain a set of highly available MySQL services on k8s using StatefulSets directly, but the process is relatively complex: it requires familiarity with the various k8s resource objects, knowledge of many MySQL operational details, and the maintenance of a set of complex management scripts. Kubernetes Operators are designed to lower this barrier to deploying complex applications on k8s.
An Operator hides the orchestration details of a complex application and greatly reduces the effort needed to run it on k8s. If you need to deploy other complex applications too, I recommend using an Operator.
Speaking about master election while using a StatefulSet: promoting a potential slave to master is not an automatic process; you have to configure replication manually, for example using Xtrabackup. Here is more information: setting_up_replication.
Take a look: cloning-existing-data, starting-replication, mysql-statefulset-operator.
Useful tools: vitess, for better MySQL networking management, and percona-xtradb-cluster, which provides superior performance, scalability and instrumentation.
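As for the routing question: Kubernetes itself has no idea which MySQL pod is the master. The usual StatefulSet pattern (also used in the official replicated-MySQL tutorial) is to create two Services and let the application pick the right endpoint: writes go to the master's stable DNS name (mysql-0.mysql), while reads go through an ordinary Service that selects all pods. A minimal sketch, assuming the StatefulSet is called mysql and its pods are labeled app: mysql:

```yaml
# Headless Service: gives every pod a stable DNS name such as
# mysql-0.mysql; the application sends writes to the master directly.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
---
# Ordinary Service: load-balances read queries across all MySQL pods.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
```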
I am deploying an active-active all-in-one setup on 2 separate servers with WSO2 API Manager 2.6.0 and WSO2 Analytics 2.6.0. I am configuring my servers following this link. About parts 4 and 5, which cover the rsync mechanism, I have some questions:
1. How can I figure out whether my server is using rsync or sync?
2. What will happen in the future if I don't use rsync now and also skip the configuration in parts 4 and 5?
1. How can I figure out whether my server is using rsync or sync?
It is not really clear what you are asking for. rsync is just a command to synchronize files between folders.
What is rsync used for here: when deploying an API, the gateway creates or updates a few Synapse sequences or APIs in the filesystem (repository/deployment/server), and these file updates need to be synchronized to all gateway nodes.
I personally don't advise using rsync. The whole issue is that you need to invoke the rsync command regularly to synchronize the files created by the master node. That creates a certain delay in service availability and, most importantly, if something goes wrong and you want to use another node as the master, you need to switch the rsync direction, which is not a really automated process.
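For illustration, the manual setup usually boils down to a cron job on the master gateway; the hostname and product path below are hypothetical:

```
# crontab entry on the master gateway (hypothetical host and path):
# push deployed artifacts to the second gateway node every minute
* * * * * rsync -avz --delete /opt/wso2am/repository/deployment/server/ gateway2:/opt/wso2am/repository/deployment/server/
```

The one-minute schedule is exactly the availability delay mentioned above, and after a failover you would have to recreate this crontab on another node by hand.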
We usually keep it simple by using a shared filesystem (NFS, Gluster, ...), and then we have a fully active-active setup. (OK, setting up HA NFS or GlusterFS is not particularly simple, but that's usually the job of the infra guys.)
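With the shared-filesystem approach, each gateway node simply mounts the same directory, e.g. via an /etc/fstab entry like this (server name and paths are hypothetical):

```
# /etc/fstab on every gateway node: mount the shared deployment directory
nfs-server:/exports/wso2-deployment  /opt/wso2am/repository/deployment/server  nfs  defaults  0  0
```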
2. What will happen in the future if I don't use rsync now and also skip the configuration in parts 4 and 5?
If the filesystems between the gateways are not synced or shared, then an API you deploy from the Publisher reaches a single gateway node, but the other gateway nodes won't create the Synapse sequences and API artifacts. As a result, those nodes won't pass client requests to the backend.
My project is currently hosted by an independent cloud provider.
I am using 2 Virtual Machines, with Linux:
one hosts a Go application
one hosts a MySql database
I would now like to move to the Google Cloud Platform.
Do you think it makes sense to move to Google Container Engine (GKE), rather than to Google Compute Engine (which would give me the same virtual machine model (IaaS) I am using with the current provider)?
I have never used Kubernetes or Docker. How easy would it be to make the migration? Am I going to complicate my life needlessly?
How difficult is the configuration for my simple setup?
I have never used Kubernetes and Docker.
Moving to a platform that you have no experience with doesn't sound like a great idea. Instead, why not start by doing some tutorials about Docker and then Kubernetes?
After that, you might try Minikube (https://kubernetes.io/docs/getting-started-guides/minikube/) locally and start writing manifests for your components (which sound like maybe a DaemonSet or a single Pod with a PersistentVolume for MySQL, and a Deployment for the Go application).
Once you have the pieces working locally, then it would probably make more sense to think about migrating. You would have a much better understanding of what you are getting into and if it is something you would want to undertake.
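To give an idea of what such a manifest looks like, the Go application could be run with a Deployment as small as this sketch (the image name and port are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
      - name: go-app
        image: registry.example.com/go-app:1.0  # placeholder image
        ports:
        - containerPort: 8080                   # placeholder port
```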
What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
In addition to the additional API entities mentioned by @SteveS, OpenShift also has advanced security concepts.
This can be very helpful when running in an enterprise context with specific security requirements.
As much as this can be a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is that, by default, containers run as root in Kubernetes but under an arbitrary user with a high ID (e.g. 1000090000) in OpenShift. This means that many containers from Docker Hub do not work as expected. For some popular applications, the Red Hat Container Catalog supplies images built with this feature/limitation in mind; however, that catalog contains only a subset of popular containers.
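The usual way to make your own image work under OpenShift's arbitrary UID, following the OpenShift image guidelines, is to give the root group (GID 0, which the arbitrary user always belongs to) the same permissions as the owner on every path the application writes to. A minimal Dockerfile sketch with a placeholder /app directory and binary:

```
FROM alpine:3.18
WORKDIR /app
COPY server /app/server
# The arbitrary OpenShift UID runs with GID 0, so give the root group
# the same permissions the owner has on everything the app touches.
RUN chgrp -R 0 /app && chmod -R g=u /app
USER 1001
CMD ["/app/server"]
```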
To get an idea of the system, I strongly suggest starting out with Kubernetes. Minikube is an excellent way to quickly set up a local, one-node Kubernetes cluster to play with. Once you are familiar with the basic concepts, you will better understand the implications of the OpenShift features and design decisions.
OpenShift includes a distribution of Kubernetes, so if you don't need any of OpenShift's added features, such as the web console, Builds, advanced deployment models and much, much more, you can simply choose to ignore them.
Here's a summary of items available on the OpenShift website.
Kubernetes comes with Ingress rules, but OpenShift comes with Routes.
Kubernetes has an IngressController, but OpenShift has a Router (HAProxy).
Switching namespaces in the CLI is very easy in OpenShift, but in Kubernetes you need to create a context and switch between contexts (see the example below).
The OpenShift UI is more interactive and informative than the Kubernetes one.
To build a Docker image inside the cluster, OpenShift has BuildConfig, but Kubernetes has nothing comparable; you have to build the image and push it to a registry yourself.
OpenShift has Pipelines, so you don't need Jenkins to deploy an app, but Kubernetes does not.
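For instance, the namespace switching mentioned above compares like this on the command line (the dev namespace is just an example):

```
# OpenShift: one built-in command
oc project dev

# Kubernetes: update the namespace of the current kubeconfig context
kubectl config set-context --current --namespace=dev
```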
The easiest way to differentiate between them is to understand that while vanilla Kubernetes is a community project, OpenShift is focused on being an enterprise-ready product. Resources like ImageStreams, BuildConfigs, Builds, DeploymentConfigs and Routes, along with functionality like S2I (Source-to-Image) and the Router, make it easier for developers and admins alike to use OCP for development, deployment and lifecycle management. You can refer to https://cloud.redhat.com/learn/topics/kubernetes/ for more information on the key differences between them.
OCP also makes your life much easier with straightforward actions through the oc CLI and a fine-grained web console.
You can try OCP and get first-hand experience of these features at https://developers.redhat.com/developer-sandbox, where you can quickly get access to a sandboxed environment in a shared cluster.
I'm building my staging environment using docker-compose, for an application that was previously run in Google Cloud using Kubernetes.
My application was configured using environment variables provided inside the Kubernetes containers, and now, after switching to docker-compose, I have a different naming convention for the linked services.
I can think of a few solutions to my problem:
Change my application to support alternative configurations, so it would support both docker-compose and Kubernetes.
Create aliases in docker-compose or Kubernetes so that the configuration would always be available in a single format in both environments, and I would not need to touch my application's configuration.
Maybe some other way that I don't see.
I want to go with the second solution, but I don't know exactly how to configure it. Any ideas?
You could use the environment section in docker-compose to define variables like PARAM1=${PARAM2}. That way, docker-compose will expose the same variables that Kubernetes provides.
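A minimal sketch of that environment section, assuming the application expects the Kubernetes-style name MYSQL_SERVICE_HOST (Kubernetes injects SVCNAME_SERVICE_HOST for each Service) while the compose service is simply called db; image names are placeholders:

```yaml
version: "3"
services:
  app:
    image: my-go-app:latest              # placeholder image
    environment:
      # expose the compose value under the name Kubernetes would inject;
      # ${DB_HOST:-db} falls back to the service name "db" if DB_HOST is unset
      - MYSQL_SERVICE_HOST=${DB_HOST:-db}
      - MYSQL_SERVICE_PORT=3306
    depends_on:
      - db
  db:
    image: mysql:5.7
```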