Change lxc container directory - containers

Can I change the directory where LXC containers are initialized and kept? Right now they are created under /var/cache/lxc, and I would like to have them in another directory, on another partition where I have more space. Changing the mount point of the partition is not an option, as it is already used for something else.

Yes, you can. The /var/cache/lxc prefix is hardcoded into every template under /usr/share/lxc/templates/; you can change the path there.
If you're building LXC from source, the path is actually #LOCALSTATEDIR#/cache/lxc/, where #LOCALSTATEDIR# defaults to --prefix= plus /var, or to whatever --localstatedir you pass to ./configure.
As for /var/lib/lxc, the default path to containers: the container path, a specific container, and the path to a container's dir-type backing store can be configured at several levels (see the sketch after this list):
lxc.lxcpath in /etc/lxc/lxc.conf; consult man lxc.system.conf for details.
The lxc-* tools accept a -P flag to specify an alternate container path.
lxc-create with the dir backing store (-B dir) has an optional --dir ROOTFS flag.
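A minimal sketch of those three levels, assuming /srv/lxc is the roomier location (the path is just a placeholder):
# 1. System-wide default container path, set in /etc/lxc/lxc.conf (see man lxc.system.conf):
lxc.lxcpath = /srv/lxc
# 2. Per-invocation, via the -P flag accepted by the lxc-* tools:
lxc-create -P /srv/lxc -n mycontainer -t ubuntu
lxc-start -P /srv/lxc -n mycontainer
# 3. Per-container rootfs location with the dir backing store:
lxc-create -n mycontainer -t ubuntu -B dir --dir /srv/lxc/mycontainer/rootfs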
Also, I highly recommend Stéphane Graber's series of LXC blog posts, and the one on container storage specifically.

The least painful option would probably be to just mount -o bind a directory on the partition with free space onto /var/lib/lxc (or /var/lib/lxd, whichever applies in your case). This works from /etc/fstab too.
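For instance, assuming the roomier partition is mounted at /mnt/big (a placeholder path), something along these lines should do:
mkdir -p /mnt/big/lxc
mount -o bind /mnt/big/lxc /var/lib/lxc
# or persistently, via an /etc/fstab entry:
# /mnt/big/lxc  /var/lib/lxc  none  bind  0  0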

For the debian template (and some others) you can use an environment variable, for example:
LXC_CACHE_PATH=/usr/share/lxc/cache
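A hypothetical invocation that points the template cache at another partition might look like:
LXC_CACHE_PATH=/srv/lxc/cache lxc-create -n mydeb -t debian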

Rancher - Is it possible to spin-up / recreate an entire namespace available in one environment on a new environment

Rancher: v2.2.4
In the Rancher GUI, one of our environments (Dev) contains a namespace 'n1'. Under the various sections (i.e. Workloads, Load Balancers, ConfigMaps, Volumes, etc.) this namespace has a few entries (containers/settings, etc.).
I want to create the same namespace in a new environment where Rancher is running; let's call it Test. After pulling all the required docker images (sudo docker image pull <server:port>/<imagename:imageversion>), do I need to download the YAMLs of all the entries under each section and import them into the target environment, possibly changing volume IDs, container image entries (i.e. name: <server:port>/<imagename:imageversion>), and controller-uid values to match the target (Test) environment? My understanding is that if I create a new workload or add anything under a given section, the labels/annotations will generate a fresh controller-uid value, so I'm wondering whether I should leave the controller-uid entry blank before importing the YAML (not sure if it will barf).
Is there a simple way to spin up/create an entire namespace 'n1' in the Test environment (i.e. a replica of namespace n1 from Dev), with the necessary storage bits (volume classes/volumes and persistent volumes, all of which have a volume ID/name/uid associated with them) and deployment bits (uid/controller-uids) auto-generated?
What's an efficient way to do this so that I don't have to manually download YAMLs from Dev and import them into Test component by component (i.e. volume YAMLs, volume class YAMLs, workload/deployment YAMLs, etc., one by one)?
You can use the following to grab all resources from a namespace and apply them in a new namespace.
#!/bin/bash
SourceNamespace="SourceNS"
TargetNamespace="TargetNS"
TempDir="./tmp"

# make sure the dump directory exists before redirecting output into it
mkdir -p "$TempDir"

echo "Grabbing all resources in $SourceNamespace"
for APIResource in $(kubectl api-resources --verbs=list --namespaced -o name); do
  kubectl -n "$SourceNamespace" get "$APIResource" -o yaml > "$TempDir/$APIResource.yaml"
done

echo "Deploying all resources in $TargetNamespace"
for yaml in "$TempDir"/*.yaml; do
  kubectl apply -n "$TargetNamespace" -f "$yaml"
done

How to pass directives to snappy_ec2 created clusters

We have a need to set some directives in the snappy config files for the various components (servers, locators, etc).
The snappy_ec2 scripts do a good job of creating all of the configs and keeping them in sync across the cluster, but I need to find a serviceable method for adding directives to the auto-generated files.
What is the preferred method when using this script?
Example: Add the following to the 'servers' file:
-gemfirexd.disable-getall-local-index=true
Or perhaps I should add these strings to an environment file such as
snappy-env.sh
TIA
-doug
Have you tried adding the directives directly to the servers (or locators or leads) file and placing that file under (SNAPPY_DIR)/ec2/deploy/home/ec2-user/snappydata/? The script reads the conf files under this directory when launching the cluster.
You'll need to specify it for each server you want to launch, with the server name as shown below. See the 'Specifying properties' section in the README, if you have not already done so. For example:
{{SERVER_0}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
{{SERVER_1}} -heap-size=4096m -locators={{LOCATOR_0}}:9999,{{LOCATOR_1}}:9888 -J-Dgemfirexd.disable-getall-local-index=true
If you want it applied to all the servers, simply put it in snappy-env.sh as you mentioned (as SERVER_STARTUP_OPTIONS) and place that file under the directory mentioned above.
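For instance, a hypothetical snappy-env.sh entry applying the property to every server might look like:
SERVER_STARTUP_OPTIONS="-J-Dgemfirexd.disable-getall-local-index=true"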
We could have read the conf files directly from (SNAPPY_DIR)/conf/ instead of making users copy them to the above location, but we may release the ec2 scripts as a separate package in the future, so that users do not have to download the entire distribution.

Bitbake append file to reconfigure kernel

I'm trying to reconfigure some .config variables to generate a modified kernel with wifi support enabled. The kernel recipe in the native layer is located at:
meta-layer/recipes-kernel/linux/linux-yocto_3.19.bb
First I reconfigure the native kernel to add wifi support (for example, adding CONFIG_WLAN=y):
$ bitbake linux-yocto -c menuconfig
After that, I generate a "fragment.cfg" file:
$ bitbake linux-yocto -c diffconfig
I have created this directory into my custom-layer:
custom-layer/recipes-kernel/linux/linux-yocto/
I have copied the "fragment.cfg" file into this directory:
$ cp fragment.cfg custom-layer/recipes-kernel/linux/linux-yocto/
I have created an append file to customize the native kernel recipe:
custom-layer/recipes-kernel/linux/linux-yocto_3.19.bbappend
This is the content of this append file:
FILESEXTRAPATHS_prepend:="${THISDIR}/${PN}:"
SRC_URI += "file://fragment.cfg"
After that I execute the kernel compilation:
$ bitbake linux-yocto -c compile -f
After this command, the "fragment.cfg" file can be found in this working directory:
tmp/work/platform/linux-yocto/3.19-r0
However, none of the expected variables is active in the .config file (for example, CONFIG_WLAN is not set).
How can I debug this issue? What am I doing wrong?
When adding this configuration you want to use an append in your statement, such as:
SRC_URI_append = " file://fragment.cfg"
After analyzing different links and solutions proposed on various resources, I finally found https://community.freescale.com/thread/376369, which points to a nasty but working patch consisting of adding this function at the end of the append file:
do_configure_append() {
    cat ${WORKDIR}/*.cfg >> ${B}/.config
}
It works, but I expected Yocto to manage all this. It would be nice to know what is wrong with the solution proposed in the question. Thank you in advance!
If your recipe is based on kernel.bbclass then fragments will not work; you need to inherit kernel-yocto.bbclass instead.
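As a minimal sketch, the relevant line in the kernel recipe would be:
# in the kernel recipe, so that configuration fragments are picked up and merged:
inherit kernel-yocto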
You can also use the merge_config.sh script which is present in the kernel sources. I did something like this:
do_configure_append () {
    ${S}/scripts/kconfig/merge_config.sh -m -O ${WORKDIR}/build ${WORKDIR}/build/.config ${WORKDIR}/*.cfg
}
Well, unfortunately, this is not a real answer, as I haven't been digging deep enough.
This was working alright for me on a Daisy-based build; however, when updating the build system to Jethro or Krogoth, I get the same issue as you.
Issue:
When adding a fragment like
custom-layer/recipes-kernel/linux/linux-yocto/cdc-ether.cfg
The configure step of the linux-yocto build won't find it. However, if you move it to:
custom-layer/recipes-kernel/linux/linux-yocto/${MACHINE}/cdc-ether.cfg
it'll work as expected, and it's a slightly less hackish way of getting it to work.
If anyone comes by, this is working on jethro and sumo:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append = " \
file://fragment.cfg \
"
FILESEXTRAPATHS documentation says:
Extends the search path the OpenEmbedded build system uses when looking for files and patches as it processes recipes and append files. The directories BitBake uses when it processes recipes are defined by the FILESPATH variable, and can be extended using FILESEXTRAPATHS.

How should I install custom packages in an lxc container?

I would like to start a container with the basic ubuntu template, but I'd like it to automatically install a couple of extra packages, or ideally run a bash script.
It seems like I should be using hooks: when I create a container, I could pass in a configuration file that sets a particular hook to my bash script. But I can't help thinking there must be an easier way?
Recent versions of the lxc-ubuntu template support a --packages option which lets you install extra packages in there.
Otherwise, you can indeed use a start hook to run stuff inside the container.
If using the ubuntu-cloud template, you could also pass it a cloud-init config file which can do that kind of stuff for you.
Or, if you just want to always do the same kind of configuration, simply create an ubuntu container, start it, customize it to your liking, and from that point on just use lxc-clone instead of lxc-create to create new containers based on the one you customized.
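A couple of hypothetical invocations illustrating the first and last options (container and package names are placeholders):
# extra packages at creation time, passed through to the ubuntu template:
lxc-create -n web -t ubuntu -- --packages nginx,curl
# clone a container you customized by hand instead of creating a fresh one:
lxc-clone -o ubuntu-base -n web2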

Make: Redo some targets if configuration changes

I want to reexecute some targets when the configuration changes.
Consider this example:
I have a configuration variable (that is either read from environment variables or a config.local file):
CONF:=...
Based on this variable CONF, I assemble a header file conf.hpp like this:
conf.hpp:
	buildConfHeader $(CONF)
Now, of course, I want to rebuild this header if the configuration variable changes, because otherwise the header would not reflect the new configuration. But how can I track this with make? The configuration variable is not tied to a file, as it may be read from environment variables.
Is there any way to achieve this?
I have figured it out. Hopefully this will help anyone having the same problem:
I build a file name from the configuration itself, so if we have
CONF:=a b c d e
then I create a configuration identifier by replacing the spaces with underscores, i.e.,
null:=
space:= $(null) #
CONFID:= $(subst $(space),_,$(strip $(CONF))).conf
which will result in CONFID=a_b_c_d_e.conf
Now, I use this $(CONFID) as a dependency for the conf.hpp target. In addition, I add a rule for $(CONFID) that deletes old .conf files and creates a new one:
$(CONFID):
	rm -f *.conf    # remove old .conf files; -f so there is no error when none are found
	touch $(CONFID) # create a new marker file with the right name

conf.hpp: $(CONFID)
	buildConfHeader $(CONF)
Now everything works fine. The file named $(CONFID) tracks the configuration used to build the current conf.hpp. If the configuration changes, then $(CONFID) will point to a non-existent .conf file. Thus, the rule for $(CONFID) will be executed, the old .conf file will be deleted, and a new one will be created. The header will then be rebuilt. Exactly what I want :)
There is no way for make to know what to rebuild if the configuration changed via a macro or environment variable.
You can, however, use a target that simply updates the timestamp of conf.hpp, which will force it to always be rebuilt:
conf.hpp: confupdate
	buildConfHeader $(CONF)

confupdate:
	@touch conf.hpp
However, as I said, conf.hpp will always be built, meaning any targets that depend upon it will need to be rebuilt as well. A much friendlier solution is to generate the makefile itself. CMake or the GNU Autotools are good for this, though you sacrifice a lot of control over the makefile. You could also use a build script that creates the makefile, but I'd advise against that, since there are tools that will let you build one much more easily.