How to detach a Singularity container gracefully? - containers

Depending on the type of container you use, C-p C-q or C-a d works most of the time, but neither seems to do the trick for detaching the container gracefully.

You need to run the container as an instance so that the detached process gets its own parent PID.
i.e.
$ singularity instance start image.sif name_instance
This executes the script in the %startscript section of the image and runs it in a new namespace as PID 1.
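From there, a typical instance lifecycle looks like this (the instance name is just an example):
$ singularity instance list                    # show running instances
$ singularity shell instance://name_instance   # attach a shell; exiting it leaves the instance running
$ singularity instance stop name_instance      # stop the instance cleanly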

Related

Can an OpenShift CronJob benefit from ImageStream?

I have my CronJob working fine without the use of an image stream.
The job runs every 15 minutes and always pulls a tagged image, e.g. my-cron:stable.
Since the image is always pulled and the schedule tells the cluster when to run my job, what do I gain from knowing that there's an updated version of my image?
If the image changes and there's a running instance of my job, I want the job to complete using the current version of the image.
In the next scheduled run the updated image is pulled (imagePullPolicy: Always). So it seems I don't gain much from tracking changes to an image stream for cron jobs.
An ImageStream triggers only BuildConfigs and DeploymentConfigs, as per https://docs.openshift.com/container-platform/4.7/openshift_images/image-streams-manage.html .
Upstream Kubernetes doesn't have the concept of an ImageStream, so there is no triggering for 'vanilla' resource types. CronJob is used in both OpenShift and Kubernetes (apiVersion: batch/v1beta1), and AFAIK the only way to reference an image stream from one is to use the full path to the internal registry, which is not that convenient. Your cronjob won't be restarted or stopped when the image stream is updated, because from the Kubernetes standpoint the image is pulled only when the cronjob is triggered; after that it just waits for the job to complete.
As I see it, you are not gaining much from using image streams here, because one of their main selling points, the ability to use triggers, is not usable for cronjobs. The only reason to use one in a CronJob is if you are pushing directly to the internal registry for some reason, but that's a bad practice too.
See the following links for reference:
https://access.redhat.com/solutions/4815671
How to specify OpenShift image when creating a Job
Quoting the Red Hat solution here:
Resolution
When using an image stream inside the project to run a cronjob,
specify the full path of the image:
[...]
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: docker-registry.default.svc:5000/my-app-namespace/cronjob-image:latest
            name: cronjob-image
[...]
Note that you can also put the ':latest' (or a specific tag) after the
image.
In this example, the cronjob will use the imagestream cronjob-image
from project my-app-namespace:
$ oc get is -n my-app-namespace
[...]
imagestream.image.openshift.io/cronjob-image   docker-registry.default.svc:5000/my-app-namespace/cronjob-image   latest   27 minutes ago
Root Cause
The image was specified without its full path to the internal docker
registry. If the full path is not used (i.e. putting only
cronjob-image), OpenShift won't be able to find it. [...]
By using an ImageStream reference, you can avoid having to include the container image registry hostname and port, and the project name, in your Cron Job definition.
The docker repository reference looks like this:
image-registry.openshift-image-registry.svc:5000/my-project/my-is:latest
The value of the equivalent annotation placed on a Cron Job looks like this:
[
  {
    "from": {
      "kind": "ImageStreamTag",
      "name": "my-is:latest"
    },
    "fieldPath": "spec.jobTemplate.spec.template.spec.containers[?(#.name==\"my-container\")].image"
  }
]
On the one hand, this is longer. On the other hand, it includes less redundant information.
So, compared to other types of kubernetes resources, Image Streams don't add a great deal of functionality to Cron Jobs. But you might benefit from not having to hardcode the project name if for instance you kept the Cron Job YAML in Git and wanted to apply it to several different projects.
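In that case the same manifest can be applied unchanged to several projects, for example (the project names and file name are just placeholders):
$ oc apply -f my-cronjob.yaml -n dev-project
$ oc apply -f my-cronjob.yaml -n test-project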
Kubernetes-native resources which contain a pod can be updated automatically in response to an image stream tag update by adding the image.openshift.io/triggers annotation.
This annotation can be placed on CronJobs, Deployments, StatefulSets, DaemonSets, Jobs, ReplicationControllers, etc.
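For illustration, a CronJob carrying the annotation could look roughly like the following. This is only a sketch: the names, schedule and restartPolicy are placeholders, and the annotation value is the same trigger list quoted earlier in this answer.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
  annotations:
    # trigger list (same JSON as shown above)
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"my-is:latest"},"fieldPath":"spec.jobTemplate.spec.template.spec.containers[?(#.name==\"my-container\")].image"}]'
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-container
            image: image-registry.openshift-image-registry.svc:5000/my-project/my-is:latest
          restartPolicy: OnFailure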
The easiest way to do this is with the oc command.
$ oc set triggers cronjob/my-cronjob
NAME                  TYPE     VALUE                          AUTO
cronjobs/my-cronjob   config                                  true
$ oc set triggers cronjob/my-cronjob --from-image=my-is:latest -c my-container
cronjob.batch/my-cronjob updated
$ oc set triggers cronjob/my-cronjob
NAME                  TYPE     VALUE                          AUTO
cronjobs/my-cronjob   config                                  true
cronjobs/my-cronjob   image    my-is:latest (my-container)    true
The effect of the oc set triggers command was to add the annotation to the CronJob, which we can examine with:
$ oc get cronjob/my-cronjob -o json | jq '.metadata.annotations["image.openshift.io/triggers"]' -r | jq
[
  {
    "from": {
      "kind": "ImageStreamTag",
      "name": "my-is:latest"
    },
    "fieldPath": "spec.jobTemplate.spec.template.spec.containers[?(#.name==\"my-container\")].image"
  }
]
This is documented in Images - Triggering updates on image stream changes - but the syntax in the documentation appears to be wrong, so use oc set triggers if you find that the annotation you write by hand doesn't work.

Rancher - Is it possible to spin-up / recreate an entire namespace available in one environment on a new environment

Rancher: v2.2.4
In the Rancher GUI, I see that one of our environments (Dev) contains a namespace 'n1'. This namespace has a few entries (containers/settings etc.) under different sections (i.e. Workloads, LoadBalancers, ConfigMaps, Volumes etc.).
I want to create the same namespace on a new environment where Rancher is running; let's call this environment (Test). After pulling all the required docker images (sudo docker image pull <server:port>/<imagename:imageversion>), do I need to download the YAMLs of all these entries under each section and import them into the target environment, possibly changing volume IDs, container image entries (i.e. name: <server:port>/<imagename:imageversion> locations, if any) and controller-uid values to match the target (Test) environment? My understanding is that if I create a new workload or add anything under a respective section, the labels/annotations will get a fresh controller-uid value, so I'm wondering whether I should leave the controller-uid entry blank before importing the YAML (not sure if it'll barf).
Is there a simple way to spin up/create an entire namespace 'n1' on the Test environment (i.e. a replica of Dev's namespace n1 in Test), auto-generating the necessary storage bits (volume classes/volumes and persistent volumes - all of these have some volume ID/name/uid associated with them), deployment bits (uid/controller-uids) etc.?
What's an efficient way to do this so that I don't have to manually download YAMLs (from Dev) and import them into Test component by component (i.e. Volume YAMLs, Volume Class YAML, Workload/Deployment YAMLs etc. - one by one)?
You can use the following to grab all resources from a namespace and apply them in a new namespace.
#!/bin/bash
SourceNamespace="SourceNS"
TargetNamespace="TargetNS"
TempDir="./tmp"

# Make sure the dump directory exists
mkdir -p "$TempDir"

echo "Grabbing all resources in $SourceNamespace"
# Export every namespaced resource type that supports 'list' as YAML
for APIResource in $(kubectl api-resources --verbs=list --namespaced -o name)
do
  kubectl -n "$SourceNamespace" get "$APIResource" -o yaml > "$TempDir/$APIResource.yaml"
done

echo "Deploying all resources in $TargetNamespace"
for yaml in "$TempDir"/*.yaml
do
  kubectl apply -n "$TargetNamespace" -f "$yaml"
done

Run container without assigning a public IP using devops

I want to run a docker container (using Bluemix DevOps Services) without assigning a public IP. I'm wondering how to do that... it's always assigning a public IP.
Thx
The current default deploy script (you can see the git in the script box) for a single container is https://github.com/Osthanes/deployscripts/blob/master/deploycontainer.sh
Looking at that, the port field is optional, but if not set it defaults to 80, like you're seeing. The simplest solution would be to point it at an unused port and ignore it, or you could fork the script and modify the git clone step to use your fork instead.
To not assign a public IP, one way is to switch from the default 'red_black' deployment strategy to 'simple'. A side effect is that 'simple' does not clean up the previous deploy, so if you still want that behavior, add an additional instance of the job on the same stage with the strategy set to 'clean', and that will remove the old instances. As before, if you choose to fork the scripts, you can change that behavior in yours to whatever you like.
The public IP when you create a container on the IBM container service is optional.
You only need to bind the IP when you want to use it from the Internet.
What tool in DevOps are you using? Maybe it is missing an option.
Ralph

Change lxc container directory

Can I change the directory where lxc containers are initialized and kept? Right now they are created under /var/cache/lxc, and I would like to have them in another directory, on another partition where I have more space. Changing the mount point of the partition is not an option as it's already used for something else.
Yes, you can. The /var/cache/lxc prefix is hardcoded into every template under /usr/share/lxc/templates/, and you can change the path there.
In case you're building LXC from source, the path is actually #LOCALSTATEDIR#/cache/lxc/, where #LOCALSTATEDIR# is by default --prefix= + /var, or whatever --localstatedir you pass to ./configure.
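For example, a build from source could relocate it like this (the paths are purely illustrative):
$ ./configure --prefix=/usr --localstatedir=/data/lxc-state
$ make && sudo make install
# the template cache then ends up under /data/lxc-state/cache/lxc/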
As for /var/lib/lxc, the default path to containers, a specific container, and the path to a container's dir-type datastore can be configured at multiple levels:
lxc.lxcpath in /etc/lxc/lxc.conf; consult man lxc.system.conf for details.
lxc-* tools accept a -P flag to specify an alternate container path.
lxc-create's -B dir backing store has an optional --dir ROOTFS flag.
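For example, assuming /srv/lxc as the alternate location (the path and container name are placeholders):
# system-wide default, in /etc/lxc/lxc.conf
lxc.lxcpath = /srv/lxc
# or per invocation
$ lxc-create -P /srv/lxc -t ubuntu -n mycontainer
$ lxc-start -P /srv/lxc -n mycontainer
# or only relocate the dir backing store's rootfs
$ lxc-create -B dir --dir /srv/lxc/mycontainer/rootfs -t ubuntu -n mycontainer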
Also, I highly recommend Stéphane Graber's series of blog posts, and the one on containers storage specifically.
The least painful option is probably just to mount -o bind a directory on the partition with free space onto /var/lib/lxc or /var/lib/lxd, whichever applies in your case. This works from /etc/fstab too.
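A minimal sketch of the bind-mount approach (the source directory is an example):
$ mkdir -p /data/lxc
$ mount -o bind /data/lxc /var/lib/lxc
# or persistently, via /etc/fstab:
/data/lxc  /var/lib/lxc  none  bind  0  0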
For the Debian template (and some others) you can use an environment variable, for example:
LXC_CACHE_PATH=/usr/share/lxc/cache

How should I install custom packages in an lxc container?

I would like to start a container with the basic ubuntu template - but I'd like it to automatically install a couple of extra packages - or ideally run a bash script.
It seems like I should be using hooks, and when I create a container pass in a configuration file which sets a particular hook as my bash script. But I can't help but think there must be an easier way?
Recent versions of the lxc-ubuntu template support a --packages option which lets you install extra packages in there.
Otherwise, you can indeed use a start hook to run stuff inside the container.
If using the ubuntu-cloud template, you could also pass it a cloud-init config file which can do that kind of stuff for you.
Or if you just want to always apply the same kind of configuration, simply create an ubuntu container, start it, customize it to your liking, and from that point on just use lxc-clone instead of lxc-create to create new containers based on the one you customized.
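Putting the two simplest routes side by side (the package list and container names are just examples; on newer LXC releases lxc-copy replaces lxc-clone):
# extra packages passed straight to the ubuntu template
$ lxc-create -t ubuntu -n base -- --packages htop,git
# or: customize one container and clone it from then on
$ lxc-create -t ubuntu -n base
$ lxc-start -n base
# ...install and configure whatever you need inside 'base' (e.g. via lxc-attach)...
$ lxc-clone -o base -n web01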