How to display the content of the volume? - openshift

How can I display the contents of an OpenShift volume (the files it contains, the total space used, etc.)?
The only information I've managed to find in the docs is to oc rsh into a running pod and use ls, which of course is not a viable solution if no pod using the volume is running and none can be started because of some issue with the volume...

For the moment there is no "volume file explorer" or similar interface in OpenShift.
Currently you always need to attach the volume to a running pod and list the files from within it.
If you're using GlusterFS (and are a cluster/storage admin), all volumes are also mounted inside the storage pods, so you can get a complete overview from within the storage pods.
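For example, with container-native GlusterFS a quick look could be taken along these lines (only a sketch: the glusterfs project name and the brick path under /var/lib/heketi are assumptions that vary per setup):
oc get pods -n glusterfs -o name
# the bricks backing the volumes are typically mounted inside the storage pod
oc rsh -n glusterfs <one-of-the-glusterfs-pods> ls /var/lib/heketi/mounts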

I don't know whether these approaches fit your case, but here are the options I'm aware of.
As far as I remember, if the pod can be created from a Docker image, you can run it without starting the application, like this:
oc run tmp-pod --image=your-docker-registry.default.svc/yourapplication -- tail -f /dev/null
If you are using a PersistentVolume (PV/PVC pair) for your volume, you can display its contents after temporarily mounting the PV into a temporary pod, as follows:
oc run tmp-pod --image=registry.access.redhat.com/rhel7 -- tail -f /dev/null
oc set volume dc/tmp-pod --add -t pvc --name=new-registry --claim-name=new-registry --mount-path=/mountpath
With the configuration above you can browse the volume contents through tmp-pod, and you can simply remove the temporary pod after checking.
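For illustration, a minimal sketch of listing the contents and cleaning up (assuming the oc run above created a DeploymentConfig named tmp-pod; oc rsh can also resolve a dc/ name to one of its pods):
oc rsh dc/tmp-pod ls -la /mountpath
# detach the volume and remove the temporary deployment when done
oc set volume dc/tmp-pod --remove --name=new-registry
oc delete dc tmp-pod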
I hope it helps.

The solution proposed by @Daein Park to display the PersistentVolume (PV/PVC pair) content was not working for me. The command oc run tmp-pod does not create a dc (DeploymentConfig), and it seems impossible to attach a volume to a plain pod that way.
My solution was to use the following command:
oc run tmp-pod --image=dummy --restart=Never --overrides='{"spec":{"containers":[{"command":["tail","-f","/dev/null"],"image":"registry.access.redhat.com/rhel7","name":"tmp-pod","volumeMounts":[{"mountPath":"/mountpath","name":"volume"}]}],"volumes":[{"name":"volume","persistentVolumeClaim":{"claimName":"pv-clain"}}]}}'
NOTE: --image=dummy is only provided to make the oc run command happy; the image field is overridden by the JSON anyway.
Finally, to list the content of the mounted volume:
oc rsh tmp-pod ls /mountpath
As the JSON content is not easy to read on the command line, here is what is provided to the --overrides parameter:
{
  "spec": {
    "containers": [{
      "command": ["tail", "-f", "/dev/null"],
      "image": "registry.access.redhat.com/rhel7",
      "name": "tmp-pod",
      "volumeMounts": [{
        "mountPath": "/mountpath",
        "name": "volume"
      }]
    }],
    "volumes": [{
      "name": "volume",
      "persistentVolumeClaim": {
        "claimName": "pv-clain"
      }
    }]
  }
}
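When you are done, you can also check the space used and then remove the temporary pod (a sketch; du is assumed to be available in the rhel7 image):
oc rsh tmp-pod du -sh /mountpath
oc delete pod tmp-pod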

Related

How to properly run a container with containerd's ctr using --uidmap/gidmap and --net-host option

I'm running a container with ctr and, in addition to using user namespaces to map the user within the container (root) to another user on the host, I want to make the host networking available to the container. For this, I'm using the --net-host option. This is based on a very simple test container:
$ cat Dockerfile
FROM alpine
ENTRYPOINT ["/bin/sh"]
I try it with
sudo ctr run -rm --uidmap "0:1000:999" --gidmap "0:1000:999" --net-host docker.io/library/test:latest test
which gives me the following error
ctr: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/run/containerd/io.containerd.runtime.v2.task/default/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\"": unknown
Everything works fine if I either
remove the --net-host flag or
remove the --uidmap/--gidmap arguments
I tried to add the user with the host uid=1000 to the netdev group, but I still get the same error.
Do I maybe need to use networking namespaces?
EDIT:
Meanwhile I found out that it's an issue within runc. If I use user namespaces by adding the following to the config.json
"linux": {
"uidMappings": [
{
"containerID": 0,
"hostID": 1000,
"size": 999
}
],
"gidMappings": [
{
"containerID": 0,
"hostID": 1000,
"size": 999
}
],
and additionally do not use a network namespace, which means leaving out the entry
{
  "type": "network"
},
within the "namespaces" section, I got the following error from runc:
$ sudo runc run test
WARN[0000] exit status 1
ERRO[0000] container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/vagrant/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\""
container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/vagrant/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\""
I finally found the answer in this runc issue. It's basically a restriction in the kernel: a user that does not own the network namespace does not have the CAP_SYS_ADMIN capability in it, and without that capability sysfs cannot be mounted. Since the host user that the container's root user is mapped to did not create the host network namespace, it does not have CAP_SYS_ADMIN there.
From the discussion in the runc issue, I see the following options for now:
Remove the mounting of sysfs.
Within the config.json that runc uses, remove the following section from "mounts":
{
  "destination": "/sys",
  "type": "sysfs",
  "source": "sysfs",
  "options": [
    "nosuid",
    "noexec",
    "nodev",
    "ro"
  ]
},
In my case, I also couldn't mount /etc/resolv.conf. After removing these two mounts, the container ran fine and had host network access (see the sketch after this list). This does not work with ctr, though.
Set up a bridge from the host network namespace to the network namespace of the container (see here and slirp4netns).
Use Docker or Podman if possible; they seem to use slirp4netns for this purpose. There is an old moby issue that might also be interesting.
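A minimal sketch of the first option with plain runc, assuming jq is installed and a rootfs has already been unpacked into the bundle directory:
# generate a default spec, then drop the sysfs mount (the same del(...) pattern
# applies to a /etc/resolv.conf bind mount if your config.json contains one)
runc spec
jq 'del(.mounts[] | select(.type == "sysfs"))' config.json > config.json.tmp && mv config.json.tmp config.json
# add the uidMappings/gidMappings shown above, remove the "network" namespace entry, then:
sudo runc run test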

Configuring Apache Drill for Cassandra

I am trying to configure Cassandra with Drill. I used the approach given at this link: https://drill.apache.org/docs/starting-the-web-ui/.
I used the following code for New Storage Plugin:
{
  "type": "cassandra",
  "hosts": [
    "127.0.0.1"
  ],
  "port": 9042,
  "username": "<username>",
  "password": "<password>",
  "enabled": false
}
I have attached the Screenshot here.
But I'm getting the following error:
Please retry: Error (invalid JSON mapping)
How can I resolve this?
All the code:
Git: https://github.com/yssharma/drill/tree/cassandra-storage
Patch: https://gist.github.com/yssharma/2581ae8a97c559b2677f
1. Get Drill: Let's get the Drill source
$ git clone https://github.com/apache/drill.git
2. Get the Cassandra Storage patch: download the patch file from:
https://reviews.apache.org/r/29816/diff/raw/
3. Apply the patch on top of Drill
$ cd drill
$ git apply --check ~/Downloads/DRILL-92-CassandraStorage.patch
$ git apply ~/Downloads/DRILL-92-CassandraStorage.patch
4. Build Drill with Cassandra Storage & export distribution to /opt/drill
$ mvn clean install -DskipTests
$ mkdir /opt/drill
$ tar xvzf distribution/target/*.tar.gz --strip=1 -C /opt/drill
5. Start Sqlline.
That's it, we have finished the Drill build and installation, and it's time to start using Drill.
$ cd /opt/drill
$ bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
Hit 'show schemas' to view existing schemas.
6. Drill Web interface
You should be able to see the Drill web interface on localhost:8047, or whatever your host/port is.
Use this as your config:
{
  "type": "cassandra",
  "config": {
    "cassandra.hosts": [
      "127.0.0.1",
      "127.0.0.2"
    ],
    "cassandra.port": 9042
  },
  "enabled": true
}
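Once the plugin is saved and enabled, you should be able to query Cassandra from sqlline. A hedged example, assuming the plugin is registered under the name cassandra and that a keyspace/table such as test_keyspace.users exists:
$ /opt/drill/bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
> SHOW SCHEMAS;
> SELECT * FROM cassandra.test_keyspace.users LIMIT 10;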
Also, if this doesn't work, know that they are working on a plugin for it now: https://github.com/apache/drill/pull/1960
I'll give an update here as well. We're doing some serious refactoring of how Drill works with storage plugins. Specifically, we're working to incorporate the Calcite adapter for Cassandra. The reason for this is that the hard part of storage plugins isn't the connection, it's the optimizations. Calcite already does query planning for Drill and has already implemented a bunch of these adapters, which means that the work of figuring out all the optimizations (AKA pushdowns) is largely done.
In the case of Cassandra/Scylla, this is particularly important because some filters should be pushed down to Cassandra, and some should absolutely not be. The adapters also include aggregate pushdowns, something which no Drill plugins currently do. Again, the point of this is that once we commit this, the connector should work VERY well with Cassandra/Scylla. We have one for Elasticsearch that is very near completion, and once that's done the Cassandra plugin is next. If you have any suggestions, comments or other feedback, please post on the pull request linked above.
** UPDATE 11 April 2021: Cassandra/Scylla Plugin Now Merged in Drill 1.19.0-SNAPSHOT **

oc-command to forward local-ports to remote debug ports based on service-name instead of pod-name

To minimize the setup time for attaching a debug session to a remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use two steps before attaching the debugger to the JVM socket with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8 -> this needs to be changed after every deploy
8000
3000
3001
background info:
this is the relevant part of the service's YAML under spec > containers > env:
- name: JAVA_TOOL_OPTIONS
  value: >-
    -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
    -Dcom.sun.management.jmxremote=true
    -Dcom.sun.management.jmxremote.port=3000
    -Dcom.sun.management.jmxremote.rmi.port=3001
    -Djava.rmi.server.hostname=127.0.0.1
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re)deploy, I am trying to find an oc command which can be used to port-forward without having to provide the pod name (e.g. based on the service name).
Or a completely different solution that allows me to hit one button to set up a debug session (preferably in IntelliJ).
> Screenshot IntelliJ settings
----------------------------- edit after tips -------------------------------
For now I made a small batch script which does the trick:
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other intelliJent solutions
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath={.items[?(@.status.phase=='Running')].metadata.name} > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch it programmatically and consistently so you don't need to update it in place every time.
There are a number of ways you can do this, via jsonpath, go templates, bash, etc. An example would be the following, replacing your app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
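For instance, the lookup and the port-forward can be combined into a single command (a sketch; it simply takes the first matching pod, so add the Running filter from the batch script above if you need it):
oc port-forward $(oc get pod -l app=replace-me -o jsonpath='{.items[0].metadata.name}') 8000 3000 3001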

Openshift: How to alert/publish a message if deployment/build fails

In our deployment process it is crucial that we are informed when a deployment fails. The deployment is rolling, but a notification through Slack would be nice anyway. Would this be possible through lifecycle hooks, or what other possibilities exist?
The deployment status is usually logged as OpenShift event logs.
Do you use the OpenShift logging component (the EFK stack)? Then additionally consider installing EventRouter; it collects OpenShift event logs as the eventrouter pod's logs.
You can pick up the deployment event messages from the logs and trigger an alert through a custom script, your monitoring system's log-tailing feature, and so on.
Refer to Specifying Logging Ansible Variables
for the Ansible variable details.
openshift_logging_install_eventrouter
openshift_logging_eventrouter_nodeselector
openshift_logging_eventrouter_namespace
...
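For example, a sketch of enabling EventRouter through the logging playbook (the inventory values and the playbook path are assumptions and depend on your openshift-ansible version):
# in the [OSEv3:vars] section of your inventory:
#   openshift_logging_install_eventrouter=true
#   openshift_logging_eventrouter_namespace=openshift-logging
ansible-playbook -i hosts playbooks/openshift-logging/config.yml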
You can pass customParams to the deployment process and do a curl if openshift-deploy fails.
"strategy": {
"type": "Rolling",
"timeoutSeconds": 180,
"customParams": {
"command": [
"/bin/sh",
"-c",
"set -e && if ! openshift-deploy; then curl -i -X POST -d '{\"text\": \"Deployment of ${application} failed!\"}' ${webhook} && exit 1; else echo \"Deployment complete\"; fi"
]
}

Using environment properties with files in elastic beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so requires me to manually set the owner, group, file permissions, etc. It's also much more of a hassle when dealing with larger configuration files than the files: configuration option...
Anyone got any tips on this?
How about something like this. I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, S3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp
commands:
  # commands executes after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container-commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the "Resources" section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least in 2018, and at least since 2016. You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON, or as YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(source /opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance together with the code when you run $ git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat with sudo. Also, I put the file into .ebextensions just as an example; it can be anywhere in your project.
Hope it helps.