Rancher - Is it possible to spin up / recreate an entire namespace from one environment on a new environment?

Rancher: v2.2.4
In the Rancher GUI, I see that one of our environments (Dev) contains a namespace 'n1'. Under its various sections (i.e. Workloads, Load Balancers, ConfigMaps, Volumes etc.) this namespace has a few entries (containers/settings etc.).
I want to create the same namespace on a new environment where Rancher is running; let's call this environment Test. After pulling all the required docker images (sudo docker image pull <server:port>/<imagename:imageversion>), do I need to download the YAMLs of all these entries under each section and import them into the target environment, possibly changing the volume IDs, the container image entries (i.e. name: <server:port>/<imagename:imageversion>) and the controller-uid values to match the target (Test) environment? My understanding is that if I create a new workload or add anything under a given section, the labels/annotations will get a freshly generated controller-uid value, so I'm wondering whether I should leave the controller-uid entry blank before importing the YAML (not sure if it'll barf).
Is there a simple way to spin up/create the entire namespace 'n1' on the Test environment (i.e. a replica in Test of namespace n1 from Dev) that auto-generates the necessary storage bits (volume classes/volumes and persistent volumes, all of which have some volume ID/name/uid associated with them), deployment bits (uid/controller-uids), etc.?
What's an efficient way to do this so that I don't have to manually download the YAMLs from Dev and import them into Test component by component (i.e. Volume YAMLs, Volume Class YAMLs, Workload/Deployment YAMLs etc., one by one)?

You can use the following to grab all resources from a namespace and apply them in a new namespace.
#!/bin/bash
SourceNamespace="SourceNS"
TargetNamespace="TargetNS"
TempDir="./tmp"

mkdir -p "$TempDir"

echo "Grabbing all resources in $SourceNamespace"
# Iterate over every namespaced resource type the cluster can list
for APIResource in $(kubectl api-resources --verbs=list --namespaced -o name)
do
  kubectl -n "$SourceNamespace" get "$APIResource" -o yaml > "$TempDir/$APIResource.yaml"
done

echo "Deploying all resources in $TargetNamespace"
for yaml in "$TempDir"/*.yaml
do
  kubectl apply -n "$TargetNamespace" -f "$yaml"
done
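Note that the manifests exported this way still carry fields tied to the source objects (metadata.namespace, uid, resourceVersion, status, and so on), and kubectl apply will refuse objects whose metadata.namespace doesn't match the target namespace. A possible variant that strips a few of those fields before applying is sketched below; it assumes jq is installed, and the del() list is illustrative rather than exhaustive.
#!/bin/bash
# Sketch: export as JSON and drop source-specific fields before re-applying.
# Assumes jq is installed; adjust the del() list for your cluster.
SourceNamespace="SourceNS"
TargetNamespace="TargetNS"
TempDir="./tmp"
mkdir -p "$TempDir"

for APIResource in $(kubectl api-resources --verbs=list --namespaced -o name)
do
  kubectl -n "$SourceNamespace" get "$APIResource" -o json \
    | jq 'del(.items[].metadata.namespace,
              .items[].metadata.resourceVersion,
              .items[].metadata.uid,
              .items[].metadata.creationTimestamp,
              .items[].status)' \
    > "$TempDir/$APIResource.json"
  kubectl apply -n "$TargetNamespace" -f "$TempDir/$APIResource.json"
done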

Related

Can an OpenShift CronJob benefit from ImageStream?

I have my CronJob working fine without the use of an image stream.
The job runs every 15 minutes and always pulls a tagged image, e.g. my-cron:stable.
Since the image is always pulled and the schedule tells the cluster when to run my job, what do I gain from knowing that there's an updated version of my image?
If the image changes and there's a running instance of my job, I want the job to complete using the current version of the image.
In the next scheduled run the updated image is pulled anyway (AlwaysPull), so it seems I don't gain much by tracking changes to an image stream for cron jobs.
An ImageStream only triggers BuildConfigs and DeploymentConfigs, as per https://docs.openshift.com/container-platform/4.7/openshift_images/image-streams-manage.html .
Upstream Kubernetes doesn't have the concept of an ImageStream, so there is no triggering for 'vanilla' resource types. CronJob exists in both OpenShift and Kubernetes (apiVersion: batch/v1beta1), and AFAIK the only way to reference an imagestream from one is to use the full path to the internal registry, which is not that convenient. Your cronjob also won't be restarted or stopped when the imagestream is updated, because from the Kubernetes standpoint the image is pulled only when the cronjob is triggered; after that it simply waits for the job to complete.
As I see it, you are not gaining much from using imagestreams, because one of their main selling points, the ability to use triggers, is not usable with cronjobs. The only reason to use one in a CronJob is if you are pushing directly to the internal registry for some reason, but that's a bad practice too.
See following links for reference:
https://access.redhat.com/solutions/4815671
How to specify OpenShift image when creating a Job
Quoting redhat solution here:
Resolution
When using an image stream inside the project to run a cronjob,
specify the full path of the image:
[...]
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: docker-registry.default.svc:5000/my-app-namespace/cronjob-image:latest
            name: cronjob-image
[...]
Note that you can also put the ':latest' (or a specific tag) after the
image.
In this example, the cronjob will use the imagestream cronjob-image
from project my-app-namespace:
$ oc get is -n my-app-namespace
[...]
imagestream.image.openshift.io/cronjob-image   docker-registry.default.svc:5000/my-app-namespace/cronjob-image   latest   27 minutes ago
Root Cause
The image was specified without its full path to the internal docker
registry. If the full path is not used (i.e. putting only
cronjob-image), OpenShift won't be able to find it. [...]
By using an ImageStream reference, you can avoid having to include the container image registry hostname and port, and the project name, in your Cron Job definition.
The docker repository reference looks like this:
image-registry.openshift-image-registry.svc:5000/my-project/my-is:latest
The value of the equivalent annotation placed on a Cron Job looks like this:
[
  {
    "from": {
      "kind": "ImageStreamTag",
      "name": "my-is:latest"
    },
    "fieldPath": "spec.jobTemplate.spec.template.spec.containers[?(#.name==\"my-container\")].image"
  }
]
On the one hand, this is longer. On the other hand, it includes less redundant information.
So, compared to other types of kubernetes resources, Image Streams don't add a great deal of functionality to Cron Jobs. But you might benefit from not having to hardcode the project name if for instance you kept the Cron Job YAML in Git and wanted to apply it to several different projects.
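As a rough sketch of that reuse (the manifest file and project names here are made up), the same file could then be applied unchanged to several projects:
# CronJob manifest with the trigger annotation and no hardcoded registry/project,
# applied to two different projects:
oc apply -f my-cronjob.yaml -n project-a
oc apply -f my-cronjob.yaml -n project-b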
Kubernetes-native resources which contain a pod can be updated automatically in response to an image stream tag update by adding the image.openshift.io/triggers annotation.
This annotation can be placed on CronJobs, Deployments, StatefulSets, DaemonSets, Jobs, ReplicationControllers, etc.
The easiest way to do this is with the oc command.
$ oc set triggers cronjob/my-cronjob
NAME                  TYPE     VALUE                         AUTO
cronjobs/my-cronjob   config                                 true
$ oc set triggers cronjob/my-cronjob --from-image=my-is:latest -c my-container
cronjob.batch/my-cronjob updated
$ oc set triggers cronjob/my-cronjob
NAME                  TYPE     VALUE                         AUTO
cronjobs/my-cronjob   config                                 true
cronjobs/my-cronjob   image    my-is:latest (my-container)   true
The effect of the oc set triggers command was to add the annotation to the CronJob, which we can examine with:
$ oc get cronjob/my-cronjob -o json | jq '.metadata.annotations["image.openshift.io/triggers"]' -r | jq
[
  {
    "from": {
      "kind": "ImageStreamTag",
      "name": "my-is:latest"
    },
    "fieldPath": "spec.jobTemplate.spec.template.spec.containers[?(#.name==\"my-container\")].image"
  }
]
This is documented in Images - Triggering updates on image stream changes - but the syntax in the documentation appears to be wrong, so use oc set triggers if you find that the annotation you write by hand doesn't work.
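One way to check that the trigger actually fires (reusing the names from the example above; the source tag here is hypothetical) is to retag the imagestream and then look at the image field the annotation is supposed to rewrite:
# Point my-is:latest at some other image (source tag is just an example)...
oc tag my-other-image:1.2.3 my-is:latest
# ...then confirm the CronJob's container image was rewritten by the trigger.
oc get cronjob/my-cronjob \
  -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].image}'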

How to load user-specific configuration for CMake project

I would like to use a configuration file that sets several cached variables. The purpose is to reuse it for every project built on a machine, or to select different library versions for testing or special purposes.
I can achieve it with a CMake file like this one:
set(path_to_lib_one path/to/lib/one)
set(option1 dont_want_to_bother_setting_this_option)
set(option2 that_option_have_to_be_set_again)
And call include(myConfigfile).
But I would like to know whether there is a cache-like way of doing this, and what the best practices are for managing user/setup-specific configurations.
Use the initial-cache option offered by CMake. You store your options in the right format (set(<var> <value> CACHE <type> <docstring>)) and call
cmake -C <cacheFile> <pathToSourceDir>
Self-contained example
The CMakeLists.txt looks like
cmake_minimum_required(VERSION 3.2)
project(blabla)
message("${path_to_lib_one} / ${option1} / ${option2}")
and you want to pre-set the three variables. The cacheFile.txt looks like
set(path_to_lib_one path/to/lib/one CACHE FILEPATH "some path")
set(option1 "dont_want_to_bother_setting_this_option" CACHE STRING "some option 1")
set(option2 42 CACHE STRING "and an integer")
and your CMake call (from a build directory below the source directory) is
cmake -C cacheFile.txt ..
The output is
loading initial cache file ../cacheFile.txt
[..]
path/to/lib/one / dont_want_to_bother_setting_this_option / 42
Documentation:
https://cmake.org/cmake/help/latest/manual/cmake.1.html#options
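For the question's use case of reusing a setup across projects or switching library versions, one possible pattern (file and directory names are purely illustrative) is to keep one initial-cache file per setup and pass the matching one when configuring each build directory:
# One initial-cache file per setup, chosen at configure time.
mkdir -p build-release
(cd build-release && cmake -C ~/cmake-configs/libs-release.cmake ..)

mkdir -p build-testing
(cd build-testing && cmake -C ~/cmake-configs/libs-testing.cmake ..)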
Load external cache files
Additionally, CMake offers a way to read in a cache file that was created by another project: the load_cache command. You can use it either to just read variables from the external cache, or to copy them into the cache of the current project.
Documentation: https://cmake.org/cmake/help/latest/command/load_cache.html

Bitbake append file to reconfigure kernel

I'm trying to reconfigure some .config variables to generate a modified kernel with wifi support enabled. The native layer/recipe for the kernel is located at:
meta-layer/recipes-kernel/linux/linux-yocto_3.19.bb
First I reconfigure the native kernel to add wifi support (for example, adding CONFIG_WLAN=y):
$ bitbake linux-yocto -c menuconfig
After that, I generate a "fragment.cfg" file:
$ bitbake linux-yocto -c diffconfig
I have created this directory into my custom-layer:
custom-layer/recipes-kernel/linux/linux-yocto/
I have copied the "fragment.cfg" file into this directory:
$ cp fragment.cfg custom-layer/recipes-kernel/linux/linux-yocto/
I have created an append file to customize the native kernel recipe:
custom-layer/recipes-kernel/linux/linux-yocto_3.19.bbappend
This is the content of this append file:
FILESEXTRAPATHS_prepend:="${THISDIR}/${PN}:"
SRC_URI += "file://fragment.cfg"
After that I execute the kernel compilation:
$ bitbake linux-yocto -c compile -f
After this command, "fragment.cfg" file can be found into this working directory:
tmp/work/platform/linux-yocto/3.19-r0
However, none of the expected variables is active in the .config file (for example, CONFIG_WLAN is not set).
How can I debug this issue? What am I doing wrong?
When adding this configuration, you want to use an append in your statement, such as (note the leading space inside the quotes, since _append does not insert one):
SRC_URI_append = " file://fragment.cfg"
After analyzing different links and solutions proposed on various resources, I finally found the link https://community.freescale.com/thread/376369 pointing to a nasty but working patch, which consists of adding this function at the end of the append file:
do_configure_append() {
    cat ${WORKDIR}/*.cfg >> ${B}/.config
}
It works, but I expected Yocto to manage all this stuff. It would be nice to know what is wrong with the proposed solution. Thank you in advance!
If your recipe is based on kernel.bbclass then fragments will not work. You need to inherit kernel-yocto.bbclass
You can also use the merge_config.sh script, which is present in the kernel sources. I did something like this:
do_configure_append () {
    ${S}/scripts/kconfig/merge_config.sh -m -O ${WORKDIR}/build ${WORKDIR}/build/.config ${WORKDIR}/*.cfg
}
Well, unfortunately, this is not a real answer, as I haven't been digging deep enough.
This was working all right for me on a Daisy-based build; however, when updating the build system to Jethro or Krogoth, I get the same issue as you.
Issue:
When adding a fragment like
custom-layer/recipes-kernel/linux/linux-yocto/cdc-ether.cfg
The configure step of the linux-yocto build won't find it. However, if you move it to:
custom-layer/recipes-kernel/linux/linux-yocto/${MACHINE}/cdc-ether.cfg
it'll work as expected. And it's a slightly less hackish way of getting it to work.
If anyone comes by, this is working on jethro and sumo:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append = " \
file://fragment.cfg \
"
FILESEXTRAPATHS documentation says:
Extends the search path the OpenEmbedded build system uses when looking for files and patches as it processes recipes and append files. The directories BitBake uses when it processes recipes are defined by the FILESPATH variable, and can be extended using FILESEXTRAPATHS.
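If the fragment still doesn't end up in the final configuration, one way to debug it is sketched below (these tasks exist for kernel-yocto-based recipes, per the Yocto kernel development manual; the .config path is looked up via bitbake -e rather than guessed):
# Regenerate the kernel .config from the defconfig plus all fragments
bitbake linux-yocto -c kernel_configme -f
# Report fragment options that did not make it into the final .config
bitbake linux-yocto -c kernel_configcheck -f
# Locate the build directory and check the option directly
B=$(bitbake -e linux-yocto | sed -n 's/^B="\(.*\)"$/\1/p')
grep CONFIG_WLAN "$B/.config"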

Torque qsub: changing the output/error file destination doesn't work

If I do: qsub myscript.sh
then it creates myscript.sh.e12 and myscript.sh.o12 files in the script's directory.
But if I do: qsub -o /tmp/my.out myscript.sh
then there is nothing in /tmp, and in the script's directory there is only the myscript.sh.e12 file.
The output file is lost during the move, and I don't know why.
I also tried using #PBS -o in the PBS file, but got the same result.
Thanks for your help.
Torque 2.5.7
RHEL 6.2
short answer: don't write output to /tmp/, write to some space you own, preferably with a unique path.
long answer: /tmp/ is ambiguous. Remember: the whole point of using a distributed resource manager is to run a job over multiple, or at least multiply assignable, compute resources. But each such device will almost certainly have its own /tmp/, and
you have no way of knowing to which one your job was written
you may have no rights on the arbitrary_device:/tmp/ to which your job was written
So don't write output to /tmp/.
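A minimal sketch of that advice (the directory name is just an example; pick any shared filesystem you own):
# Write stdout/stderr to a shared directory you own instead of node-local /tmp
mkdir -p "$HOME/pbs-logs"
qsub -o "$HOME/pbs-logs/myscript.out" -e "$HOME/pbs-logs/myscript.err" myscript.sh
# Use per-job names (or separate subdirectories) if you submit the script repeatedly.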

Change lxc container directory

Can I change the directory where lxc containers are initialized and kept? Now they are created under /var/cache/lxc, and I would like to have them in another directory, on another partition where I have more space. Changing the mount point of the partition is not an option as it's already used for something else.
Yes, you can. The /var/cache/lxc prefix is hardcoded into every template under /usr/share/lxc/templates/; you can change the path there.
In case you're building LXC from source, the path is actually #LOCALSTATEDIR#/cache/lxc/, where #LOCALSTATEDIR# is by default --prefix + /var, or whatever --localstatedir you pass to ./configure.
As for /var/lib/lxc, the default path to containers, a specific container, and the path to a container's dir-type datastore can be configured at multiple levels:
lxc.lxcpath in /etc/lxc/lxc.conf; consult man lxc.system.conf for details.
The lxc-* tools accept a -P flag to specify an alternate container path.
lxc-create with the -B dir backing store has an optional --dir ROOTFS flag.
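For illustration, the three levels above could look like this (paths and the container name are placeholders):
# 1. System-wide default container path
echo 'lxc.lxcpath = /srv/lxc' >> /etc/lxc/lxc.conf
# 2. Per-invocation override
lxc-ls -P /srv/lxc
# 3. Per-container rootfs location with the dir backing store
lxc-create -P /srv/lxc -n mycontainer -t download -B dir --dir /srv/lxc-rootfs/mycontainer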
Also, I highly recommend the series of blog posts by Stéphane Graber, and the one on container storage specifically.
The least painful option would probably be to just mount -o bind a directory from the partition with free space onto /var/lib/lxc or /var/lib/lxd, whichever applies in your case. This works from /etc/fstab too.
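A sketch of that bind-mount route (the source directory is an example):
# One-off bind mount of a roomier partition over the default container path
mkdir -p /srv/big-disk/lxc
mount -o bind /srv/big-disk/lxc /var/lib/lxc
# ...or make it persistent via /etc/fstab
echo '/srv/big-disk/lxc /var/lib/lxc none bind 0 0' >> /etc/fstab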
For the debian template (and some others) you can use an environment variable, for example:
LXC_CACHE_PATH=/usr/share/lxc/cache