How to add configuration (e.g. environment variables) for a Jupyter notebook server in Kubeflow

Currently I'm using kubeflow v0.7 and when I create a new Jupyter notebook server, there is a configuration section, but I can't select or add any configurations there. Is there any place that I can add configurations for a new notebook server? Or do I have to include anything I need in the image (which is not ideal for my case)?
Thanks.

OK, I found something in the official docs:
Specify one or more additional configurations as a list of PodDefault labels. To make use of this option, you must create a PodDefault manifest. In the PodDefault manifest, you can specify configurations including volumes, secrets, and environment variables. Kubeflow matches the labels in the configurations field against the properties specified in the PodDefault manifest. Kubeflow then injects these configurations into all the notebook Pods on this notebook server.
Also, here is the PodDefault doc. It looks like Kubeflow is using PodDefault to inject those configurations (e.g. environment variables).
Edit: I tried PodDefault and it works perfectly. For people who want more detailed docs about PodDefault, you can check Kubernetes' PodPreset, which is basically the same concept.
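For reference, here is a minimal sketch of such a PodDefault manifest, based on the examples in the Kubeflow repo -- the names add-my-env, MY_ENV and my-namespace are placeholders, and the exact set of supported spec fields may vary by Kubeflow version:
apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: add-my-env            # placeholder name; also used as the matching label
  namespace: my-namespace     # must be the namespace the notebook server runs in
spec:
  selector:
    matchLabels:
      add-my-env: "true"      # notebook pods carrying this label get the injection
  desc: "Inject MY_ENV into notebook pods"
  env:
  - name: MY_ENV              # illustrative environment variable
    value: "some-value"
Once applied in the notebook's namespace, the desc text should show up as a selectable configuration when creating a new notebook server.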

Related

OpenShift 4.6 Node and Master Config Files

Where are the OpenShift master and node host files in v4.6?
Previously, in v3, they were hosted at the paths below:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
In OCP v4 the kubelet configuration is managed dynamically, so instead of reading a configuration file on the node hosts as in OCP v3, you can check the current kubelet configuration using the following procedures.
Further information is here: Generating a file that contains the current configuration.
You can check it using the referenced procedure (generating the configuration file) or via the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a KubeletConfig resource (see Managing nodes - Modifying Nodes); a sketch follows below. Note that only certain settings can be changed this way; others cannot be changed at all.
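As a rough sketch (the label and maxPods value are illustrative; check the docs for which settings your version supports), a KubeletConfig that raises maxPods for a labeled machine config pool looks like this:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # the target MachineConfigPool must carry this label
  kubeletConfig:
    maxPods: 500                     # example kubelet setting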
For the master config, it depends on what you actually want to change: you might apply a MachineConfig targeting the master machineConfigPool (see the sketch below), or, for example, edit API server settings via oc edit apiserver cluster.
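For changes that must land on the node filesystem itself, a MachineConfig is the mechanism. Here is a minimal, illustrative sketch that writes a file to master nodes (the file path and contents are assumptions; the Ignition version must match what your cluster's machine-config operator expects, 3.1.0 for OCP 4.6):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-example-file
  labels:
    machineconfiguration.openshift.io/role: master   # target the master pool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - path: /etc/example.conf               # illustrative file
        mode: 420                             # decimal for 0644
        overwrite: true
        contents:
          source: data:,example%20setting%0A  # URL-encoded file contents
The machine-config operator rolls this out and reboots the affected nodes one by one.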

How do I correctly start using .readthedocs.yml

I have a basic ReadTheDocs repository. As per the advice of the build page, I sought to use a .readthedocs.yml to configure it:
Configure your documentation builds! Adding a .readthedocs.yml file to your project is the recommended way to configure your documentation builds. You can declare dependencies, set up submodules, and many other great features.
I added a basic .readthedocs.yml:
version: 2
sphinx:
  builder: dirhtml
  fail_on_warning: true
and got a build failure:
Problem in your project's configuration. Invalid "sphinx.builder": .readthedocs.yml: Your project is configured as "Sphinx Html" in your admin dashboard, but your "sphinx.builder" key does not match.
This was surprising as it seemed contrary to the guidance in the admin dashboard at https://readthedocs.org/dashboard/PROJECTNAME/advanced/ which led me to assume that I could set whatever I liked in the admin dashboard, but it would be overridden by my .readthedocs.yml (which is the behaviour I expected and wanted):
These settings can be configured using a configuration file. That's the recommended way to set up your project. Settings in the configuration file override the settings listed here.
I updated the setting in the admin dashboard to match the .readthedocs.yml and then got a build error:
Sphinx error:
master file /home/docs/checkouts/readthedocs.org/user_builds/PROJECT_NAME/checkouts/latest/source/contents.rst not found
which looks like https://github.com/readthedocs/readthedocs.org/issues/2569 (RTD not finding Sphinx configuration) - but it's not clear why that's happening because prior to adding .readthedocs.yml, the project built just fine.
I'm struggling to model what's actually going on here:
The config file isn't acting as an "overlay" / "override" onto the web settings - as per the first error, some forms of disagreement are a build failure
It's almost like if the config file exists, the web config is ignored - this would explain the contents.rst issue arising, but this isn't consistent with the first error
Adding a python.install entry to .readthedocs.yml eventually got the site building, but it's still not clear to me if I'm generally doing the right thing, and/or how successful future config changes will be.
The reason you're getting the error is that the Sphinx version you're using locally doesn't match the version Read the Docs is using at the time you initiated the build process.
You can use a requirements.txt file to pin the same version of Sphinx you use locally. I had the same issue and solved it by simply pinning my version, Sphinx==3.1.2.
I also added a .readthedocs.yml file in my project directory (where docs/ resides), pointing to where conf.py lives, because:
I was using an extension, sphinxcontrib.napoleon, which the Read the Docs build process failed to recognize.
I wanted the Read the Docs build process to use a specific version of Sphinx.
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/source/conf.py
# Build documentation with MkDocs
#mkdocs:
#  configuration: mkdocs.yml
# Optionally build your docs in additional formats such as PDF
formats:
  - pdf
# Optionally set the version of Python and requirements required to build your docs
python:
  version: 3.7
  install:
    - requirements: docs/requirements.txt
and added all the dependencies needed to generate the documentation in docs/requirements.txt:
Babel==2.8.0
imagesize==1.2.0
readme-renderer==26.0
Sphinx==3.1.2
sphinx-argparse==0.2.5
sphinx-rtd-theme==0.5.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==1.0.3
sphinxcontrib-images==0.9.2
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-napoleon==0.7
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.4

Have `oc` follow a cluster depending on directory

I use the oc tool for several different clusters.
Since I usually keep local YAML files for any OpenShift objects I view or modify, either ad hoc or due to some config management scheme for the individual cluster, I have a separate directory on my machine for each cluster (which, in turn, is of course versioned in git). Let's call them ~/clusters/a/, ~/clusters/b/, etc.
Now, when I cd around on my local machine, the oc command uses the global ~/.kube/config to find the cluster I last logged in to. In other words, oc does not care at all about which directory I am in.
Is there a way to have oc store a "local" configuration (i.e. in ~/clusters/a/.kube_config or something like that), so that when I enter the ~/clusters/a/ directory, I am automatically working with that cluster without having to explicitly switch clusters with oc login?
You could set the KUBECONFIG environment variable to specify a different configuration file for each cluster. You would need to set the environment variable to the respective file in each separate terminal session.
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
To expand on Graham's answer, KUBECONFIG can specify a list of config files, which will be merged if more than one exists. The first file to set a particular value wins, as described in the merging rules.
So you can add a local config with just the current-context, e.g. ~/clusters/a/.kube_config could be
current-context: projecta/192-168-99-100:8443/developer
and ~/clusters/b/.kube_config:
current-context: projectb/192-168-99-101:8443/developer
Obviously you need to adjust this for your particular cluster, using the format
current-context: <namespace>/<cluster>/<user>
Then set KUBECONFIG with a relative path and the global config
export KUBECONFIG=./.kube_config:~/.kube/config
Note that if ./.kube_config does not exist it will be ignored.
The current-context will then be overridden by the one defined in the local .kube_config, if one exists.
I tested this locally with two minishift clusters and it seemed to work OK. I have not tested the behaviour when writing to the config (e.g. via oc login), though.
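For context, the global ~/.kube/config that oc login maintains contains entries roughly like the following sketch (the server address and names mirror the minishift-style examples above and are assumptions; the token is a placeholder):
apiVersion: v1
kind: Config
clusters:
- name: 192-168-99-100:8443
  cluster:
    server: https://192.168.99.100:8443
contexts:
- name: projecta/192-168-99-100:8443/developer
  context:
    cluster: 192-168-99-100:8443
    namespace: projecta
    user: developer/192-168-99-100:8443
users:
- name: developer/192-168-99-100:8443
  user:
    token: REDACTED   # placeholder; oc login fills this in
The local .kube_config files then only need the current-context line, because the cluster, context and user definitions are merged in from this global file.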

OpenShift repo not included in path

I started a Django 1.7 OpenShift instance. When I have Python print all of the paths in sys.path, I do not see OPENSHIFT_REPO_DIR (/var/lib/openshift/xxxxx/app-root/runtime/repo).
When I use https://github.com/jfmatth/openshift-django17 to create a project I do see OPENSHIFT_REPO_DIR in the path.
Looking through the example app above I don't see anywhere that this is specifically added to the path. What am I missing?
To clarify:
I have to add the following to my wsgi.py:
import os
import sys
ON_PASS = 'OPENSHIFT_REPO_DIR' in os.environ
if ON_PASS:
    x = os.path.abspath(os.path.join(os.environ['OPENSHIFT_REPO_DIR'], 'mysite'))
    sys.path.insert(1, x)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
OPENSHIFT_REPO_DIR is not in my path as I would expect. When I used the example git above, I did not have to add anything to the path.
A little while back I had issues with some of the pre-configured OpenShift environment variables not appearing until I restarted my application.
For what it's worth, I started up a brand new Django gear, printed the environment variables to the application log, and verified that I do see OPENSHIFT_REPO_DIR (and all other env vars) properly.
This issue appears to be caused by trying to use the standard file structure that Django produces when you use startproject. OpenShift appears to need a flatter file structure. As soon as I moved wsgi.py up to be a sibling of mysite, the issue was resolved.

PhpStorm: how to use project root variable or relative path in PhpUnit configuration?

I would like to set up PHPUnit in PhpStorm. I open Edit Configurations... and want to enter this parameter in the configuration file field.
I am using phpunit.xml as the configuration file and want to use a relative path like:
phpunit.xml
or use a project root variable like
$PROJECT_ROOT/phpunit.xml
But neither option works for me.
Based on your screenshot (the place where you want to use it): use the full path -- in project settings such a path is stored relative to the project root anyway (unless you specify some file which is outside of the project, of course) and the full path is then reconstructed when needed (e.g. when shown to you, or when used as a parameter during test execution).
I don't think you'll be able to achieve what you want via the project's Run/Debug configurations. What might help you is the Default configuration file setting in your default project settings, which can be used to define the PHPUnit configuration file to use by default, so you don't need to specify it via the Use alternative configuration file option in your Run/Debug configuration.
To set this, open your Default Settings window, then navigate to Languages & Frameworks -> PHP -> PHPUnit. In the Test Runner section, tick the Default configuration file checkbox and specify the location where you keep your configuration file. If this file will always be in the same path relative to your project root, you can use the $PROJECT_DIR$ variable to refer to the project root. So if your PHPUnit configuration file is always in the root of your project, you might set this to something like $PROJECT_DIR$/phpunit.xml. When you create a new project, its Default configuration file setting will point at that path relative to your project root, and you won't need to use the Use alternative configuration file option in your Run/Debug configuration.
If you're opening the same project in different locations on the same machine, this should work for new projects without any problem; if you want to share this configuration across machines, you might need to try PhpStorm's Exporting and Importing Settings functionality.
I'm not sure if this directly solves your problem, and it's a few months late anyway, but maybe this will be useful for someone else who stumbles across this question... The above instructions were correct for my 8.0.3 installation on Linux.