Does the Oracle Cloud Kubernetes CSI implementation depend on the FlexVolume drivers?
In other words, in order to use the OCI CSI drivers (i.e. csi-oci-node and csi-oci-controller), do I need to deploy oci-block-volume-provisioner and oci-flexvolume-driver?
Ref: https://github.com/oracle/oci-cloud-controller-manager#setup-and-installation
It is not required to deploy oci-block-volume-provisioner and oci-flexvolume-driver to use the OCI CSI driver.
Please refer to https://github.com/oracle/oci-cloud-controller-manager/blob/master/container-storage-interface.md for the steps to set up the OCI CSI driver.
You can also check out Container Engine for Kubernetes (OKE), OCI's fully managed, scalable, and highly available Kubernetes service, which comes with the OCI CSI driver and the associated StorageClass pre-installed: https://www.oracle.com/cloud-native/container-engine-kubernetes/ and https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengoverview.htm.
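For example, once the CSI driver is deployed, dynamic provisioning works with a plain PVC and no flexvolume components at all. A minimal sketch; oci-bv is the CSI StorageClass name OKE ships with, so verify what your cluster has with kubectl get storageclass:

```sh
# Create a PVC backed by an OCI block volume through the CSI driver alone.
# 50Gi is the OCI block volume minimum size.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv
  resources:
    requests:
      storage: 50Gi
EOF
```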
Deploying oci-block-volume-provisioner and oci-flexvolume-driver in order to use OCI CSI is not mandatory, but it is recommended.
OCI Flexvolume Driver: It enables mounting of OCI block storage volumes to Kubernetes Pods via the Flexvolume plugin interface.
OCI Volume Provisioner: The OCI Volume Provisioner enables dynamic provisioning of storage resources when running Kubernetes on Oracle Cloud Infrastructure. It uses the OCI Flexvolume Driver to bind storage resources to Kubernetes nodes. The volume provisioner supports OCI Block Volumes.
Related
We can now create Dataproc clusters using Compute Engine or GKE. What are the major advantages of creating a cluster on GKE vs. Compute Engine? We have hit the "insufficient resources in zone" error multiple times while creating clusters on Compute Engine. Will using GKE for the cluster solve this issue, and what is the cost difference between them?
To solve the "insufficient resources in zone" error, you may refer to this GCP documentation.
To answer your question about the difference between a Dataproc cluster on GKE vs. GCE:
On GKE, you can create a Dataproc cluster directly and do your deployments.
You may also check the advantages of GKE in the documentation, or see the GKE features below:
Run your apps on a fully managed Kubernetes cluster with GKE Autopilot.
Start quickly with single-click clusters and scale up to 15,000 nodes.
Leverage a high-availability control plane, including multi-zonal and regional clusters.
Eliminate operational overhead with industry-first four-way autoscaling.
Secure by default, including vulnerability scanning of container images and data encryption.
While on GCE, Dataproc provisions ordinary Compute Engine VMs in your chosen zone, which is why the "insufficient resources in zone" error can occur there; and if you wanted a Kubernetes-based setup on GCE, you would have to install and manage Kubernetes yourself before deploying.
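For comparison, here is roughly what creating each flavor looks like with the gcloud CLI. The cluster, region, and pool names are placeholders, and the Dataproc-on-GKE flags may vary by gcloud version, so check gcloud dataproc clusters gke create --help:

```sh
# Dataproc on Compute Engine: Dataproc provisions the VMs itself.
gcloud dataproc clusters create my-dp-cluster \
    --region=us-central1 --zone=us-central1-a

# Dataproc on GKE: the cluster runs as pods inside an existing GKE cluster.
gcloud dataproc clusters gke create my-dp-gke-cluster \
    --region=us-central1 \
    --gke-cluster=my-gke-cluster \
    --spark-engine-version=latest \
    --pools='name=dp-default,roles=default'
```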
I have the latest version of OpenShift installed on-prem. I am trying to find a way to dynamically provision PVCs; what is the best solution available for the on-prem version of OpenShift? I have been looking into MinIO and Longhorn but could not integrate either with my on-prem setup yet.
Can anyone provide some insight here?
Thanks,
Tintu
If you have an NFS server somewhere, it is very easy to set up an NFS dynamic provisioner: check the NFS client provisioner project here.
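A minimal sketch using the kubernetes-sigs nfs-subdir-external-provisioner Helm chart; the NFS server address and export path below are placeholders for your own:

```sh
# Install the NFS subdir external provisioner from its official chart repo.
helm repo add nfs-subdir-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.0.0.5 \
    --set nfs.path=/exports/k8s

# PVCs that reference the "nfs-client" StorageClass it creates
# will now be provisioned dynamically on the NFS export.
```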
Another approach is to install and use the CSI driver corresponding to your storage devices.
Kubernetes CSI drivers are listed here (there is also a beta version of the NFS CSI driver...).
And by the way, MinIO provides object storage with an AWS S3-compatible interface; there is no such thing as "dynamic object storage" allocation in OCP/K8s...
Has anyone installed OpenShift 4 on VirtualBox VMs? How can the BMC requirement be bypassed (a BMC is required in install-config.yaml)?
https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html#configuring-the-install-config-file_ipi-install-configuration-files
The prerequisites for an OCP 4.6 IPI (Installer Provisioned Infrastructure) install require BMC access to each node.
With this setup a UPI (User Provisioned Infrastructure) deployment would be a better fit; see the install-config.yaml sketch below. You would need to set up the VMs and DNS entries before starting the deployment, as described in https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal.html#installation-requirements-user-infra_installing-bare-metal.
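A minimal UPI install-config.yaml sketch: with platform set to none, no BMC entries are needed at all. The baseDomain, cluster name, pull secret, and SSH key here are placeholders:

```sh
# Write a bare-metal UPI install-config.yaml; platform "none" means the
# installer expects you to provision the machines (e.g. VirtualBox VMs).
cat <<'EOF' > install-config.yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp4
compute:
- name: worker
  replicas: 0          # UPI workers are added after the control plane is up
controlPlane:
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}             # no BMC / bare-metal provisioning config required
pullSecret: '<your pull secret>'
sshKey: '<your public SSH key>'
EOF
```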
I have an OpenShift cluster on IBM Cloud. I want to connect to the worker nodes over SSH via PuTTY, but the documentation says:
SSH by password is unavailable on the worker nodes.
Is there a way to connect to those?
If you use OpenShift v4 on IBM Cloud, you can access your worker nodes using oc debug node/<target node name> instead of SSH. The oc debug node command launches a temporary pod for a terminal session on the target node. Through that pod you can inspect the node and run Linux commands much like in a usual SSH session. Try it.
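A typical session looks like this (the node name is a placeholder; get real names from oc get nodes):

```sh
# Start a temporary debug pod on the node:
oc debug node/10.240.0.4

# Inside the debug pod, switch into the host's root filesystem:
chroot /host

# Ordinary host commands now work, e.g. list running containers:
crictl ps

exit   # leave the chroot
exit   # terminate the debug pod
```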
SSH access to worker nodes in OpenShift is disabled for security reasons. The documentation suggests using DaemonSets for actions that have to be performed on every worker node; a minimal sketch follows.
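This is a hypothetical example of that pattern, not an official manifest; the image and command are placeholders, and tasks that need host access would additionally require hostPID/privileged settings plus an appropriate SCC in OpenShift:

```sh
# Run one pod per node that executes the same task everywhere.
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-task
spec:
  selector:
    matchLabels:
      app: node-task
  template:
    metadata:
      labels:
        app: node-task
    spec:
      containers:
      - name: task
        image: registry.access.redhat.com/ubi8/ubi-minimal
        command: ["/bin/sh", "-c", "echo running on this node; sleep infinity"]
EOF
```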
I tried installing a MongoDB shard on Kubernetes with Helm, but I found that those Helm charts do not really produce a qualified MongoDB shard. The charts correctly create Pods with names like mongos-1, mongod-server-1, and mongod-shard-1, which looks like a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not created on the corresponding Pods. They just create a normal mongod instance on each Pod, and there is no connection between them. Do I need to add scripts to execute commands similar to rs.addShard(config)? I encountered the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it inappropriate to install a MySQL/MongoDB cluster on Kubernetes in general scenarios? Should the database be installed independently, or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance for testing and development, and a replica set for production-like deployments.
Also, to make things easier, you can use the MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters, using our management, monitoring and backup platforms, Ops Manager and Cloud Manager. By installing this integration, you will be able to deploy MongoDB instances with a single simple command.
This guide references the official MongoDB documentation with all the necessary details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So that is basically all you need to know on this topic.
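For a flavor of what the Operator gives you, a sharded cluster becomes a single custom resource. This is a sketch based on the Operator's sharded-cluster example; the names, version, and the Ops Manager project/credentials references are placeholders, so check the docs above for your Operator version:

```sh
# One resource describes the whole sharded cluster; the Operator creates
# and wires up the mongos, config server, and shard members itself.
cat <<'EOF' | kubectl apply -f -
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-sharded-cluster
spec:
  type: ShardedCluster
  version: "4.4.4"
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  persistent: true
  opsManager:
    configMapRef:
      name: my-project        # ConfigMap pointing at your Ops/Cloud Manager project
  credentials: my-credentials # Secret with Ops/Cloud Manager API credentials
EOF
```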
Please let me know if that helped.