Openshift4 installation on Virtualbox - openshift

Who has installed Openshift4 on Virtualbox VMs? How to bypass BMC limitations (BMC is required in install-config.yaml)?
https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html#configuring-the-install-config-file_ipi-install-configuration-files

An OCP 4.6 IPI (Installer Provisioned Infrastructure) install requires BMC access to each node as a prerequisite.
With this setup, a UPI (User Provisioned Infrastructure) deployment would be a better fit. You would need to set up the VMs and DNS entries before starting the deployment, as described in https://docs.openshift.com/container-platform/4.6/installing/installing_bare_metal/installing-bare-metal.html#installation-requirements-user-infra_installing-bare-metal.
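For a bare-metal UPI install the `install-config.yaml` uses `platform: none`, so no BMC details are needed at all. A minimal sketch might look like this (base domain, cluster name, pull secret, and SSH key are placeholders you would fill in):

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder: your DNS base domain
metadata:
  name: mycluster                # placeholder: your cluster name
compute:
- name: worker
  replicas: 0                    # in UPI, workers join after bootstrap
controlPlane:
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                       # no BMC / provisioning integration required
pullSecret: '<your pull secret>'
sshKey: '<your public ssh key>'
```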

Related

OpenShift OnPrem Dynamic PVC provisioning

I have the latest version of OpenShift installed on-prem. I am trying to find a way to dynamically provision PVCs. What is the best solution available for the on-prem version of OpenShift? I have been looking into MinIO and Longhorn, but I could not integrate them with my on-prem setup yet.
Can anyone provide some insight here?
Thanks,
Tintu
If you have an NFS server somewhere, it is very easy to set up an NFS dynamic provisioner: check the NFS client provisioner project here
Another approach is to install and use the CSI driver corresponding to your storage devices.
Kubernetes CSI drivers are here (there is also a beta version of the NFS CSI driver...)
And BTW, MinIO provides "object storage" with an AWS S3-compatible interface; there is no such thing as "dynamic object storage" allocation in OCP/K8s...
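Once an NFS dynamic provisioner is installed (for example via the nfs-subdir-external-provisioner Helm chart, pointed at your NFS server and export path), claiming storage is just an ordinary PVC. A sketch, assuming the chart created its default `nfs-client` StorageClass:

```yaml
# PVC served dynamically from the NFS provisioner.
# Assumes a StorageClass named "nfs-client" exists (the chart's default).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany      # NFS supports RWX, unlike most block storage
  resources:
    requests:
      storage: 1Gi
```

The provisioner watches for PVCs referencing its StorageClass and creates a subdirectory on the NFS export for each one.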

Is mysql/mongodb cluster suitable for installation on kubernetes?

I have tested installing a MongoDB shard on Kubernetes with Helm, but I found that those Helm charts do not really produce a qualified MongoDB shard. The charts correctly create Pods with names like mongos-1, mongod-server-1, and mongod-shard-1, which looks like a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not created on the corresponding Pods. They just create a plain mongod instance on each Pod, with no connection between them. Do I need to add scripts that execute commands similar to rs.addShard(config)? I encountered the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it inappropriate to install a MySQL/MongoDB cluster on Kubernetes in general scenarios? Should the database be installed independently, or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance if you want to test and develop, and a replica set for production-like deployments.
Also to make things easier you can use MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters,
using our management, monitoring and backup platforms, Ops Manager and
Cloud Manager. By installing this integration, you will be able to
deploy MongoDB instances with a single simple command.
This guide has references to the official MongoDB documentation with more necessary details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So basically all you need to know in this topic.
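With the operator, the sharding topology is declared in a single custom resource and the operator handles `rs.addShard`-style wiring for you. A sketch of a sharded-cluster resource (the project ConfigMap and credentials Secret names are placeholders for your Ops Manager / Cloud Manager setup):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-sharded-cluster
spec:
  type: ShardedCluster
  version: "4.4.0"
  shardCount: 2              # number of shards
  mongodsPerShardCount: 3    # replica set members per shard
  mongosCount: 2             # query routers
  configServerCount: 3       # config server replica set members
  opsManager:
    configMapRef:
      name: my-project       # placeholder: ConfigMap with project details
  credentials: my-credentials  # placeholder: Secret with the API key
  persistent: true
```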
Please let me know if that helped.

Is there any tool in GCP to patch the Compute Instance?

I have some SUSE, Red Hat and CentOS VMs in Google Cloud. Now I want to patch these servers. Is there any GCP built-in tool, or do I need to use a third-party tool?
@Jannatul, you've asked about a "GCP in-built tool or third party tool" in your question.
The answer to the first part of the question, regarding a "GCP in-built tool", is "No". The OS deployment images in GCE are kept updated, but after deployment it's up to you how to keep VM instances patched. At this time Google does not provide any cloud service for that purpose, since such a tool is out of scope of the IaaS that GCE actually is.
As for the second part ("third party tool"), the approach to Linux patching is not GCP-specific; it should be similar to what you use in a private datacenter. Since you use commercial Linux distributions, including Red Hat and SUSE, those vendors' solutions could work for your needs: for example SUSE Manager or Red Hat Satellite (both originate from Spacewalk and support various Linux clients), as well as the open-source Spacewalk project itself.
GCP now has a built-in VM patching service, part of the VM Manager suite: https://cloud.google.com/compute/docs/os-patch-management
Users can get patch compliance reports and perform manual or automatically scheduled updates of Ubuntu, Debian, RHEL, SLES, and Windows VMs.
The service is free for the first 100 VMs.
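Patch jobs can be triggered from the console or the CLI. A sketch using `gcloud` (the display name is arbitrary, and the commands assume the OS Config agent is enabled on the target instances):

```shell
# Run an on-demand patch job against all instances in the project.
gcloud compute os-config patch-jobs execute \
    --instance-filter-all \
    --display-name="monthly-patching" \
    --reboot-config=default

# List recent patch jobs to check their status.
gcloud compute os-config patch-jobs list
```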

Connect to CUPS web interface of Ubuntu instance on Google Compute Engine

I have installed CUPS on an Ubuntu Xenial VM instance on Google Compute Engine. I have changed the settings as described here and here and restarted the CUPS server.
I'm trying to access the web interface through http(s)://my-external-ip:631/admin, but it is not reachable. Pinging the external IP works.
How can I troubleshoot this, and what extra configuration might be needed in CUPS or in the VM instance?
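Two common culprits here are CUPS only listening on localhost and the GCE firewall blocking port 631. A sketch of both fixes (the firewall rule name and source range are illustrative, and you should restrict the range rather than exposing the admin page to the internet):

```shell
# Allow inbound TCP 631 through the GCE firewall for this project's network.
gcloud compute firewall-rules create allow-cups \
    --allow=tcp:631 \
    --source-ranges=203.0.113.0/24   # illustrative: use your own IP range
```

```
# /etc/cups/cupsd.conf (fragment) — listen on all interfaces instead of
# the default "Listen localhost:631", then restart CUPS:
Port 631

<Location /admin>
  Order allow,deny
  Allow all        # consider restricting this to your own network
</Location>
```

After editing `cupsd.conf`, restart with `sudo systemctl restart cups` and verify with `sudo ss -tlnp | grep 631` that it is bound to 0.0.0.0 rather than 127.0.0.1.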

Installing Cloudera Manager on Google Compute Engine

I'm trying to install Cloudera Manager on a Google Compute Engine Ubuntu 12.04 instance. Everything works fine until the installation step.
The error occurs when it tries to detect the Cloudera Manager Server. It seems to be an error with the hostname... the reported error is the following:
could not contact scm server at 236.193.155.104.bc.googleusercontent.com:7182, giving up
waiting for rollback request
Please, someone help me with this! I've been working on it for too long, and I feel it should not be complicated to resolve...
Many thanks in advance!
Add a firewall exception for your instances to allow TCP connections on port 7182 through to them. The following command allows all TCP connections:
gcloud compute firewall-rules create cloudera-manager --allow tcp
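Opening all TCP ports works but is broader than needed. A narrower rule scoped to just the agent heartbeat port might look like this (the target tag and source range are illustrative; tag your cluster instances accordingly):

```shell
# Allow only the Cloudera Manager agent port, and only between
# instances on the internal network.
gcloud compute firewall-rules create cloudera-manager-agent \
    --allow=tcp:7182 \
    --target-tags=cloudera \
    --source-ranges=10.240.0.0/16   # illustrative internal range
```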
Reference: Hadoop on GCE http://github.com/scalding-io/hadoop-on-gce/blob/master/build-cluster
To help with deploying Cloudera Manager, I wrote these scripts which may be of use to you, as they simplify the installation and deployment of Cloudera Manager on Google Compute Engine.
Also, you can now use Cloudera Director to deploy and provision Hadoop clusters on Google Cloud Platform. Cloudera Director is a single server binary which can deploy and manage other VMs to install Cloudera Manager as well as the rest of the Cloudera Hadoop distribution automatically with an easy-to-use GUI.