I have a single manifest to deploy a set of services. Some of these services use the same environment variables. Is there a way I can group common configuration across apps in the manifest?
Maybe you can try this:
Create a file called env-manifest.yml with the content:
---
env:
  name: value
and use the following in your manifest files for deployment:
inherit: env-manifest.yml
Isn't it a good idea to set SPRING_PROFILES_ACTIVE via the env: block for all services, just create an application-system.properties file, and use Spring profiles (or something similar if the apps aren't Spring based)?
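A minimal sketch of that idea in a Cloud Foundry manifest (the application name is a placeholder, and the profile name "system" is assumed to match the application-system.properties file mentioned above):
---
applications:
- name: my-service
  env:
    SPRING_PROFILES_ACTIVE: system   # Spring then loads application-system.properties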
Create a base manifest file with the common env variables for all services.
Create a service-specific manifest file, e.g. manifest-<service>.yml.
Inherit the base yml file in the service-specific manifest file.
Start the application-specific file by inheriting the base yml file:
---
inherit: manifest.yml
<application specific env properties>
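Put together, a minimal sketch (file names, the service name and the variables are placeholders):
manifest.yml (base, shared by all services):
---
env:
  LOG_LEVEL: info
  REGION: eu-west-1
manifest-billing.yml (service-specific):
---
inherit: manifest.yml
applications:
- name: billing
  env:
    BILLING_DB_HOST: billing-db.internal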
Currently I'm using Kubeflow v0.7, and when I create a new Jupyter notebook server there is a configuration section, but I can't select or add any configurations there. Is there any place where I can add configurations for a new notebook server? Or do I have to include everything I need in the image (which is not ideal for my case)?
Thanks.
OK, I found something in the official docs:
Specify one or more additional configurations as a list of PodDefault labels. To make use of this option, you must create a PodDefault manifest. In the PodDefault manifest, you can specify configurations including volumes, secrets, and environment variables. Kubeflow matches the labels in the configurations field against the properties specified in the PodDefault manifest. Kubeflow then injects these configurations into all the notebook Pods on this notebook server.
Also, here is the PodDefault doc. It looks like Kubeflow is using PodDefault to inject those configurations (e.g. environment variables).
Edited: I tried PodDefault and it works perfectly. For people who want more detailed docs about PodDefault, you can check PodPreset which is basically the same.
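For reference, a minimal hedged PodDefault sketch (the names, namespace and injected variable are placeholders, and the apiVersion may differ between Kubeflow versions):
apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: add-proxy-env
  namespace: my-profile-namespace
spec:
  selector:
    matchLabels:
      add-proxy-env: "true"     # the label Kubeflow matches against the configurations field
  desc: "Inject proxy environment variables"
  env:
  - name: HTTP_PROXY
    value: http://proxy.example.com:3128
Any notebook pod created with the matching configuration selected gets these values injected.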
My web application will be deployed to WebLogic application servers on Windows and Linux/Unix in different environments. The log file location, appenders and log levels will vary between the different deployments, and we would like to be able to change the logging configuration at runtime (by exchanging the config file), so I cannot embed a log4j2.xml (or any other config file) into my deployment. And since I'm running on application servers I cannot control, I have no chance to add environment variables pointing to another configuration location.
Currently, my log4j2.xml resides in the classpath of my application and is packaged into my war file. Is there any way to tell Log4j2 to use a configuration file relative to the application root (like Log4j 1's configureAndWatch(fileLocation) method)?
I found lots of examples of how to configure Log4j2, but everything I found about the config file location points to the application's classpath.
I finally found a solution for my problem. I added a file named
log4j2.component.properties
to my project (in src/main/resources). This file contains a property pointing to the location of my log4j2 configuration file:
log4j.configurationFile=./path/on/my/application/server/someLog4j2ConfigFile.xml
This causes Log4j2 to read that file and configure itself from its content.
The profile.properties file is not found in the source code in the repository?
Is it possible to use an environment variable in OpenShift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in an OpenShift environment?
Environment variables are a first-class concept in OpenShift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, as they won't change when you move the image through environments, but it may be necessary to configure your build or to set things that won't change (e.g. set the port number Node.js uses to match the official Node.js image with "PORT=8080").
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod that is launched by that deployment. This is a fairly common way of setting up application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change the common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place; see the sketch below.
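A hedged sketch combining the last two options (object names are placeholders, and JAVA_OPTS_APPEND is assumed to be honoured by your Keycloak image as a hook for extra JVM flags):
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: keycloak
spec:
  template:
    spec:
      containers:
      - name: keycloak
        image: my-keycloak-image            # placeholder
        env:
        - name: JAVA_OPTS_APPEND            # assumption: the image appends this to the JVM options
          value: "-Dkeycloak.profile.feature.scripts=enabled"
        envFrom:
        - configMapRef:
            name: shared-settings           # shared variables managed in one place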
It's common to see devs use a .env file that is listed in .gitignore, so it is not in git. In the past I have written scripts to load that into a Secret within OpenShift and then used envFrom to set that Secret on the deployment, with an .env.staging and an .env.live that we git-secret encrypt into git.
The problem with .env files is that they tend to get messy and collect unused junk after a while. So we broke the file up into one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings and a ConfigMap for shared settings, as sketched below.
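A hedged sketch of how such a split could look on the deployment side (all object names are placeholders):
containers:
- name: my-app
  envFrom:
  - secretRef:
      name: db-creds                # database credentials
  - secretRef:
      name: payments-api-creds      # one Secret per external API
  - configMapRef:
      name: my-app-settings         # app-specific settings
  - configMapRef:
      name: shared-settings         # settings shared between apps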
These days we use Helmfile to load all our config from git, driven by git webhooks. All the config is YAML in a git repo (with the secret YAML encrypted). If you merge a change to the config repo, a webhook handler decrypts the config and runs Helmfile to update the settings in OpenShift. I am in the process of open-sourcing everything, including (optionally) using a chatbot to manage releases, over on GitHub.
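A minimal helmfile.yaml sketch of such a layout (release and path names are placeholders; the encrypted values assume the helm-secrets plugin):
releases:
- name: my-app
  namespace: my-project
  chart: ./charts/my-app
  values:
  - ./config/my-app/values.yaml     # plain settings in git
  secrets:
  - ./config/my-app/secrets.yaml    # encrypted in git, decrypted at deploy time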
I should also say that OpenShift automatically creates many environment variables to help you configure your apps. In each project, a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
OpenShift also sets up internal DNS entries for your services. This means that if app A uses app B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that OpenShift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by OpenShift). Our apps can find a redis service running in the same project using that technique.
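For illustration, a hedged sketch of wiring app A to a redis Service over the internal DNS name (the service and project names are placeholders):
env:
- name: REDIS_URL
  value: "redis://redis.my-project.svc.cluster.local:6379"   # <service>.<project>.svc.cluster.local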
I want to set specific env variables depending on the namespace.
The goal is to have one config yaml file for different namespaces and set different env variables/config maps for dev, qa and prod depending on the namespace which the config file is applied to.
AFAIK, Kubernetes doesn't come with this capability out of the box.
There are two ways to get around this:
Deploy a standard ConfigMap that contains everything any deployment needs, and make your app recognise the namespace and use the appropriate variables (see the sketch after this list).
Deploy a sidecar app that generates a namespace-specific ConfigMap from a template. This sidecar will need access to kube-apiserver to deploy new ConfigMap manifests automatically.
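For the first option, a hedged sketch of exposing the namespace to the app via the Downward API so it can pick its own settings (the container name is a placeholder):
containers:
- name: my-app
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace   # resolves to dev, qa or prod at runtime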
I am trying to adopt Log4j2 in my project. Since my Java application is packaged in a JAR file, I don't want the log4j2.xml configuration packaged inside the JAR file. I am trying to learn how the configuration file works from http://logging.apache.org/log4j/2.x/manual/configuration.html.
But there seems to be no clear instruction regarding altering the configuration file path of Log4j2.
After googling this topic I found something like "Referencing log4j config file within executable JAR", but that solution is not available any more according to http://logging.apache.org/log4j/2.x/manual/migration.html (if I understand it correctly).
So I am wondering if someone has any idea about this issue.
Thanks
You can set a system property to specify the configuration path.
Set
-Dlog4j.configurationFile="D:\learning\blog\20130115\config\LogConfig.xml"
in the VM arguments, and replace
D:\learning\blog\20130115\config\LogConfig.xml
with your configuration path.
Put the log4j2.xml file in the resources directory of your project so that Log4j2 will locate it on the classpath automatically.
Loading the log4j2.xml file from a customized location:
You can use the system property / VM argument -Dlog4j.configurationFile=file:/path/to/file/log4j2.xml.
This will work for any web application.
For some legacy applications, you can create a class that loads log4j2.xml / log4j2.properties from a custom location on the machine, e.g. D:/property/log4j2.xml.
Using either of these approaches, during application startup the log4j2.xml file from the src/resources folder will be overridden by the log4j2.xml file from the custom location.
Try using -Dlogging.config=Path_to_your_file (logging.config is the Spring Boot property for pointing at an external logging configuration file).