How to upload huge files to Ceph/GlusterFS through OpenShift/k8s PVC

I want to run a machine learning pod in OpenShift, but I need to upload some data, such as a training set, to the pod, and preferably to the PV for persistence. Are there any APIs that would help with this?

Attach the PV to the pod. Then you can use kubectl cp.
For example
kubectl cp /tmp/foo_dir <some-pod>:/your_pv/bar_dir
/your_pv should be specified in the Pod's spec.volumeMounts so that the path is backed by your PVC.
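For reference, a minimal sketch of that wiring, assuming a PVC already bound to your Ceph/GlusterFS storage; the pod, container, and claim names below are placeholders, not anything from your cluster:
apiVersion: v1
kind: Pod
metadata:
  name: ml-pod                    # hypothetical pod name
spec:
  containers:
  - name: trainer
    image: python:3.11            # placeholder: any image that will consume the data
    volumeMounts:
    - name: training-data
      mountPath: /your_pv         # target path for kubectl cp
  volumes:
  - name: training-data
    persistentVolumeClaim:
      claimName: my-pvc           # hypothetical PVC name
With that in place, kubectl cp /tmp/foo_dir ml-pod:/your_pv/bar_dir lands the data on the persistent volume itself.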

Related

Monitoring the progress of specified pods with oc cli

I want to know if there is a way to monitor the progress of a particular pod instead of seeing all pods.
For example, I scale consoleapp:
oc scale dc consoleapp --replicas=3
After that command I want to watch the progress of only consoleapp and ensure the pods are active.
I thought I'd be able to run this command:
oc get pods consoleapp --watch
but it does not work. Is there a way for me to monitor the progress of this, very similar to oc rollout status deploymentconfig/consoleapp --watch, but without rolling out a new deployment?
When you run oc get pods consoleapp, you're asking for a pod named consoleapp, which of course doesn't exist. If you want to watch all the pods managed by the DeploymentConfig, you can use the --selector (-l) option to select pods that match the selector in your DeploymentConfig.
If you have:
spec:
  selector:
    name: consoleapp
Then you would run:
oc get pod -l name=consoleapp -w
If your selector has multiple labels, combine them with commas:
oc get pod -l app=consoleapp,component=frontend -w
NB: DeploymentConfigs are considered deprecated and can generally be replaced by standard Kubernetes Deployments:
Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.
(from "Understanding Deployment and DeploymentConfig objects")

OpenShift single node PersistentVolume with hostPath requires privileged pods, how to set as default?

I am fairly new to OpenShift and have been using CRC (Code Ready Containers) for a little while, and now decided to install the single server OpenShift on bare metal using the Assisted-Installer method from https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift and https://console.redhat.com/openshift/assisted-installer/clusters/. This has worked well and I have a functional single-server.
As a single server in a test environment (without NFS available), I need/want to create PersistentVolumes with hostPath (local storage); these work flawlessly in CRC. However, on the full install, I run into an issue when mounting PVCs to pods, because the pods are not running privileged. I edited the deployment config and added the lines below (within the containers hash):
- resources: {}
  ...
  securityContext:
    privileged: true
However, I still had errors, as the restricted SCC has 'allowPrivilegedContainer: false'. I have done a horrible hack of changing this to true, so adding the lines above to the deployment YAML works. However, there must be an easier way, as none of these hacks seem necessary in CRC. I checked, and CRC pods run restricted, the restricted SCC has privileged set to false, and the PersistentVolume also uses hostPath. I also do not have to edit the deployment YAML as above in CRC; it just works (tm).
Guidance here shows that the containers must run privileged, however the containers in CRC are running restricted and the SCC still has 'allowPrivilegedContainer: false'.
https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-hostpath.html
An example app creation as below (from the Red Hat DO280 course) works without any massaging of privileges or deployment config in CRC, but on a real OpenShift server it requires the massaging above. As my server is purely for testing, I would like to make this easier without the hack job and deployment changes above.
oc new-app --name mysql --docker-image registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
oc create secret generic mysql --from-literal password=r3dh4t123
oc set env deployment mysql --prefix MYSQL_ROOT_ --from secret/mysql
oc set volumes deployment/mysql --name mysql-storage --add --type pvc --claim-size 2Gi --claim-mode rwo --mount-path /var/lib/mysql/data
oc get pods -l deployment=mysql
oc get pvc
Any help appreciated.
EDIT: I have overcome this now by enabling nfs-server and adding entries to /etc/exports. However, I'm still interested to understand how CRC manages the above issue when using hostPath.
The short answer to this is: don't use hostPath.
You are using hostPath to make use of arbitrary disk space available on the underlying host. hostPath can also be used to read/write any directory path on the underlying host's filesystem, which, as you can imagine, should be used with great care.
Have a look at local volumes as an alternative: https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-local.html
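That documentation covers the Local Storage Operator, but on a single-node test box you can also statically define a local PersistentVolume yourself. A rough sketch, in which the path, size, storage class, and node name are all assumptions you would replace:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-sc            # PVCs must request this storageClassName
  local:
    path: /mnt/local-storage/disk1      # directory or mounted disk prepared on the node
  nodeAffinity:                         # required for local volumes: pins the PV to its node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-single-node              # hypothetical node name (see: oc get nodes)
Unlike hostPath, local PVs carry node affinity, so the scheduler keeps the consuming pod on the node that actually holds the data.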

How to enable mutual SSL verification mode in Redhat-SSO image for OpenShift

I am using the template sso72-x509-postgresql-persistent, which is based on Redhat-SSO and Keycloak, to create an application in OpenShift.
I am going to enable its mutual SSL mode, so that a user only has to provide a certificate instead of a username and password in the request. The documentation (https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.2/html-single/server_administration_guide/index#x509) told me to edit the standalone.xml file to add configuration sections. That worked fine.
But the template image sso72-x509-postgresql-persistent has a problem with this procedure: after it is deployed on OpenShift, any changes to files within the container are lost when the container restarts.
Is there any way to enable mutual SSL mode through some other means, such as the command line or an API, instead of editing a configuration file, short of building my own Docker image?
OK, I'm including this anyway. I wasn't able to get this working due to permissions issues (the mounted files didn't keep the same permissions as before, so the container continued to fail). But a lot of work went into this answer, so hopefully it points you in the right direction!
You can add a Persistent Volume (PV) to ensure your configuration changes survive a restart. You could attach one to your deployment via:
DON'T DO THIS
oc set volume deploymentconfig sso --add -t pvc --name=sso-config --mount-path=/opt/eap/standalone/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
This will bring up your RH-SSO image with a blank configuration directory, causing the pod to get stuck in Back-off restarting failed container. What you should do instead is:
Back up the existing configuration files:
oc rsync <rhsso_pod_name>:/opt/eap/standalone/configuration ~/
Create a temporary busybox deployment that can act as an intermediary for uploading the configuration files. Wait for the deployment to complete:
oc run busybox --image=busybox --wait --command -- /bin/sh -c "while true; do sleep 10; done"
Mount a new PV to the busybox deployment. Wait for the deployment to complete:
oc set volume deploymentconfig busybox --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
Edit your configuration files now
Upload the configuration files to your new PV via the busybox pod
oc rsync ~/configuration/ <busybox_pod_name>:/configuration/
Destroy the busybox deployment
oc delete all -l run=busybox --force --grace-period=0
Finally, attach the already created and ready-to-go persistent configuration to the RH-SSO deployment:
oc set volume deploymentconfig sso --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/opt/eap/standalone/configuration
Once your new deployment is...still failing because of permission issues :/

How to restart pod in OpenShift?

I updated a file (for debug output) in a running pod, but the change isn't getting recognized. I was going to restart the pod to make it take effect, but I only see oc stop and not oc start or oc restart. How would I force a refresh of the files in the pod?
I am thinking maybe it is a Ruby thing (like opcache in PHP), but I figured a restart of the pod would handle it. I just can't figure out how to restart a pod.
You need to make your changes in the deployment config, not in the pod, because OpenShift treats pods as largely immutable; changes cannot be made to a pod definition while it is running. https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/pods_and_services.html#pods
If you make changes in the deployment config and save them, the pod will restart and your changes will take effect:
oc edit dc "deploy-config-example"
If you change something in volumes or config maps, you need to delete the pod to restart it:
oc delete pod "name-of-your-pod"
And the pod will restart. Or, better still, trigger a new deployment by running:
oc rollout latest "deploy-config-example"
Using oc rollout is better because it will re-deploy all pods if you have a scaled application, and you don't need to identify each pod and delete it.
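If you go the rollout route, you can also block until the new pods are ready; for example, reusing the dc name from above:
oc rollout latest dc/deploy-config-example           # trigger a new deployment
oc rollout status dc/deploy-config-example --watch   # wait until the new pods are up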
You can scale deployments down (to zero) and then up again:
oc get deployments -n <your project> -o wide
oc get pods -n <your project> -o wide
oc scale --replicas=0 deployment/<your deployment> -n <your project>
oc scale --replicas=1 deployment/<your deployment> -n <your project>
watch oc get pods -n <your project> # wait until your deployment is up again
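On newer clusters where the workload is a standard Kubernetes Deployment rather than a DeploymentConfig, there is also a single command (not mentioned in the answers above, and assuming a reasonably recent oc/kubectl) that restarts all pods without touching the replica count:
oc rollout restart deployment/<your deployment> -n <your project>
oc rollout status deployment/<your deployment> -n <your project>   # wait for the restart to finish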
If you want to do it using the GUI:
Log in to OCP.
Click Workloads -> Deployment Configs.
Find the deployment config for the pod you want to restart.
On the right side, click on the 3 dots.
Click Start rollout.
If you delete your pod, or scale it to 0 and back to 1, you might lose some clients, because you are basically stopping and restarting your application. With a rollout, however, your existing pod waits for the new pod to become ready and only then deletes itself. So rollout is safer than deleting or scaling 0/1.
Thanks Noam Manos for your solution.
I've used "Application Console" in Openshift. I've navigated to Applications - Deployment - #3 (check for your active deployment) to see my pod with up and down arrows. Currently, I've 1 pod running. So, I've clicked on down arrow to scale down to 0 pod. Then, I clicked on up arrow to scale up to 1 pod.
Follow the steps below:
Log in to OpenShift.
Click on the Monitor tab.
Select the component whose pod you want to restart.
Click the Actions drop-down (top right corner).
Delete the existing pod.
A new pod is automatically generated.
You can also go to the DeploymentConfig and choose "Start rollout" from the Actions menu.
And if nothing helps, there is also Workloads -> ReplicationControllers; they control the replica counts. If you delete such a controller, another one is created in its place, and it creates your new pod.

How do you create a deployment configuration in OpenShift? Is it automatic for new-app based on a docker image?

I'm creating a new-app based on an image stream that corresponds to a docker image in a private OpenShift docker registry. The command is:
oc new-app mynamespace/my-image:latest -n=my-project
Question 1: Does this command automatically create a deployment configuration (dc) that can be referenced as dc/my-image? Is this deployment configuration associated with my-project?
Question 2: What is the oc command to create a deployment configuration? The OpenShift developer guide has a section titled Creating a Deployment Configuration, but surprisingly it does not say how to create a DC or give any examples. It just shows a JSON structure and says DCs can be managed with the oc command.
Yes, your command will create its objects in the specified project. You can check which objects were created using the oc get command, e.g. to check what DCs you have, you'd run oc get dc or oc get deploymentconfigs.
Other useful commands are oc describe (similar to get, but with more information) and oc status -v (which gives broader information about the project, including warnings and errors).
You create a DC, or any other resource type, using the oc create command, e.g. copy the example DC from the URL you linked to into a file and then run oc create -f mydc.yaml. Both YAML and JSON are supported.
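If you do want to write one by hand, a minimal sketch of a DC might look like the following; the name, labels, and port are placeholders, and on older 3.x clusters the apiVersion may simply be v1:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app                               # hypothetical name
spec:
  replicas: 1
  selector:
    app: my-app                              # DC selectors are a plain label map
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: mynamespace/my-image:latest   # pullable image reference (or wire an ImageChange trigger to the image stream)
        ports:
        - containerPort: 8080                # assumed port
  triggers:
  - type: ConfigChange                       # redeploy whenever this DC changes
Save it as mydc.yaml and create it with oc create -f mydc.yaml -n my-project.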
As you can see, some commands can create DCs by themselves without you providing YAML or JSON. You can later modify existing resources with oc edit, e.g. oc edit service/my-app. There is also the oc patch command, which is suitable for scripting.
You can see an existing resource's YAML with oc get dc/mydc -o yaml, and the same works for any other resource. Keep in mind that you must currently be using the desired project, or use the -n option as you are doing in your example.
It's not that hard once you understand some basics and learn to use the oc describe and oc logs commands to debug issues with your images/pods, e.g. oc describe pod/my-app-1-asdfg, oc logs my-app-1-asdfg, oc logs -f dc/my-app.
HTH