Typesafe Config Environment Variables - configuration

Attempting to use ${HOSTNAME} in a config file does not work! According to the documentation, config files should resolve environment variables:
substitutions fall back to environment variables if they don't resolve in the config itself, so ${HOME} would work as you expect. Also, most configs have system properties merged in so you could use ${user.home}.
Is there a way to get hostname into the config file?
Reproduction
Add host.name=${HOSTNAME} to an application.conf file, then try to access it from anywhere. For example, try adding
Logger.info(s"Hostname is ${current.configuration.getString("host.name").getOrElse("NOT-FOUND")}")
to the Global.scala.
Environment
This was run in a RHEL6 environment where echo $HOSTNAME prints precise32, so the variable does exist in the shell; it is not something the program sets itself.

The solution seems to be passing in the hostname via a system property, e.g. -Dhost.name=$HOSTNAME or -Dhost.name=$(hostname). I'd imagine on Windows it would be something else, but this works for *NIX environments.
Unless anyone can come up with something cleaner this will be the accepted answer.
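A related variant worth noting: Typesafe Config supports optional substitutions with the ${?VAR} syntax, which override an earlier value only when the variable actually resolves. A sketch for application.conf, assuming the JVM actually sees HOSTNAME (for instance because it was passed as -DHOSTNAME=... as above):

```
# Default used when nothing named HOSTNAME resolves
host.name = "unknown-host"
# Optional substitution: replaces the default only if HOSTNAME resolves
# (from an env variable or a system property)
host.name = ${?HOSTNAME}
```

With this shape, getString("host.name") never throws a missing-setting error; it just falls back to the default.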

This probably isn't working because $HOSTNAME doesn't seem to actually be an environment variable:
jamesw@T430s:~$ echo $HOSTNAME
T430s
jamesw@T430s:~$ export|grep HOSTNAME
jamesw@T430s:~$
So it must be some other special bash thing.

You should check whether calling System.getenv("HOSTNAME") returns a non-null value. If not, then HOSTNAME is not an environment variable as far as the Java runtime is concerned, and that is what matters for mapping it to a config property in Typesafe Config. I tried this with HOSTNAME: even though I could echo it in bash, it was not available in Java as an env substitution. I changed it to USER and everything worked as expected.
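If that check comes back null, one fallback sketch (not from the original thread, and the "unknown" default is my own placeholder) is to resolve the hostname in code via InetAddress instead of relying on the shell variable:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostnameCheck {
    // Prefer the env variable if the JVM actually sees it; otherwise ask the OS.
    static String resolveHostname() {
        String env = System.getenv("HOSTNAME"); // null if never exported to the JVM
        if (env != null) {
            return env;
        }
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(resolveHostname());
    }
}
```

The resolved value could then be handed to the config via System.setProperty("host.name", ...) before the config is loaded.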

Related

How do I pass an environment variable into spark-submit

I am using apache-spark 1.2.0 and would like my user's Linux environment variable $MY_KEY to be made available to my Java job when executed using master=local
In Java land this could be passed in using a -D parameter, but I cannot get this recognized when my driver is launched using spark-submit
I have tried adding this to conf/spark-defaults.conf, but spark will not resolve the environment variable $MY_KEY when it executes my Java job (I see this in my logs)
spark.driver.extraJavaOptions -Dkeyfile="${MY_KEY}"
I have tried adding the same as an argument when calling spark-submit, but this doesn't work either.
The same problem with adding it to conf/spark-env.sh
The only way I have got this to work is by editing the bin/spark-submit script directly which defeats the purpose of having it read from the existing environment variable and will get overwritten when I upgrade spark.
So it looks to me like spark-submit ignores the current user's environment variables and only allows a restricted subset of variables to be defined in its conf files. Does anyone know how I can resolve this?
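On the driver side, the -D mechanism mentioned above can be paired with a direct env lookup as a fallback. A hedged sketch (the keyfile property and MY_KEY names are taken from the question; whether the env var is visible depends on where the driver actually runs):

```java
public class KeyfileConfig {
    // Prefer the -Dkeyfile=... system property (e.g. set via
    // spark.driver.extraJavaOptions), then fall back to reading
    // MY_KEY directly from the driver process's environment.
    static String keyfile() {
        String prop = System.getProperty("keyfile");
        if (prop != null) {
            return prop;
        }
        return System.getenv("MY_KEY"); // may be null if the var was not exported
    }

    public static void main(String[] args) {
        System.out.println("keyfile = " + keyfile());
    }
}
```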

How to get production configuration variables when executing in another environment

In Laravel, configuration variables can be accessed like this:
Config::get('xxx')
By default it returns the configuration values for the current environment. How do you get configuration data for other environments?
A call to Config::get() already returns information for your current environment: if you are in dev it returns the dev values, and in production the production values.
If you need configuration information for another environment, you can get it with:
return Config::get('environment/config.name');
Example:
return Config::get('testing/cache.driver');
If you need to get things from your production environment while being in any other one, I'm afraid you'll have to create a 'production' folder inside your config folder and put those things there:
app/config/production/database.php
Add to that particular file only what you need to read outside from your environment:
'default' => 'postgresql',
And then you'll have access to it:
return Config::get('production/database.default');
You should be able to just do Config::get('xxx');
Unless you are overriding it in your testing/local/develop environments it should always be available. If you are, just remove it from the other envs.
I cannot see why you would define a config variable in another environment but then still need the production config.

What is the difference between environment variable types

I'm trying to use OpenShift.
I'm confused about three ways of writing it:
${env.OPENSHIFT_MYSQL_DB_HOST}
${OPENSHIFT_MYSQL_DB_HOST}
and
$OPENSHIFT_MYSQL_DB_HOST
Could you show me what the difference between them is?
${env.OPENSHIFT_MYSQL_DB_HOST}
is only applicable in standalone.xml for JBoss applications. env. references environment variables, and ${} without env references system properties. So
${OPENSHIFT_MYSQL_DB_HOST}
in standalone.xml is referencing a system property. In a bash script though it would be referencing the environment variable because OpenShift sources all env variables for cartridge scripts. Likewise
$OPENSHIFT_MYSQL_DB_HOST
is just another way to reference a variable in bash. In bash, $var and ${var} are interchangeable except when variable demarcation is an issue: $varblah is not the same as ${var}blah, for example.
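The system-property vs environment-variable distinction can be seen directly from Java. A minimal sketch (the 127.0.0.1 value is just a stand-in; outside an actual OpenShift gear the env lookup will likely be null):

```java
public class PropsVsEnv {
    public static void main(String[] args) {
        // System properties live inside this one JVM only
        // (set with -Dname=value on the command line, or programmatically).
        System.setProperty("OPENSHIFT_MYSQL_DB_HOST", "127.0.0.1");
        System.out.println("property: " + System.getProperty("OPENSHIFT_MYSQL_DB_HOST"));

        // Environment variables are inherited from the parent process;
        // setting the system property above does NOT affect this lookup.
        System.out.println("env var:  " + System.getenv("OPENSHIFT_MYSQL_DB_HOST"));
    }
}
```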

JBoss - Moving the modules directory around

Wondering if it's possible to move the modules directory in a JBoss 7 install to a non-default location.
Does anyone know of a config param to specify where to pick it up?
Kinda like a conf-dir, bin-dir type of thing.
Thanks,
Aaron.
Yes, it's actually possible. As the documentation states, from within the standard launch scripts users are able to manipulate the module path by setting the $JBOSS_MODULEPATH environment variable. (If not set, $JBOSS_MODULEPATH is set to $JBOSS_HOME/modules). The module path is provided to the running process via the -mp command line argument that is set in the standard scripts.

What order of reading configuration values?

For the Python program I am writing, I would like to offer three ways of configuring it: environment variables, configuration files, and command line arguments.
Logically, I think command line arguments should always have the highest priority. I am unsure whether environment variables should take precedence over configuration files. And does it matter whether a configuration file is system-wide, user-specific, or given as an argument on the command line?
(Note that my platform is Unix/Linux)
The standard that I know is: first look for a command line parameter; if not found, an environment variable; then a local config file; then a global config file.
So when a package is installed somewhere, it will have a default config file. This can be changed with a local config file, which can be overridden with an environment variable, and a command line parameter has the highest precedence.
If a config file is declared on the command line, its contents take precedence over environment variables and any other config files, but command line parameters still take precedence over it.
But remember that the search path still exists. When the package looks for a value, it searches in order:
Command line.
Config file whose name is declared on the command line.
Environment vars.
Local config file (if it exists).
Global config file (if it exists).
I think many command line compilers and the Boost program options library work in a similar fashion.
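The search order above amounts to a first-match lookup through layers ordered by precedence. A minimal sketch (layer names and keys are illustrative, not from any particular library):

```java
import java.util.List;
import java.util.Map;

public class LayeredConfig {
    private final List<Map<String, String>> layers;

    // Layers are ordered highest precedence first, e.g.
    // CLI args, env vars, local config file, global config file.
    LayeredConfig(List<Map<String, String>> layers) {
        this.layers = layers;
    }

    // Return the first value found, searching top layer down.
    String get(String key, String fallback) {
        for (Map<String, String> layer : layers) {
            String v = layer.get(key);
            if (v != null) {
                return v;
            }
        }
        return fallback;
    }

    public static void main(String[] args) {
        Map<String, String> cli = Map.of("verbose", "true");
        Map<String, String> env = Map.of("verbose", "false", "workers", "4");
        Map<String, String> localCfg = Map.of("workers", "2", "timeout", "30");

        LayeredConfig cfg = new LayeredConfig(List.of(cli, env, localCfg));
        System.out.println(cfg.get("verbose", "false")); // true  (CLI wins)
        System.out.println(cfg.get("workers", "1"));     // 4     (env beats config file)
        System.out.println(cfg.get("timeout", "60"));    // 30    (only in config file)
    }
}
```

A config file named on the command line would simply be inserted as a layer just below the CLI arguments, matching the order described above.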
AWS CLI is in line with the accepted answer:
Precedence of options:
If you specify an option by using one of the environment variables described in this topic, it overrides any value loaded from a profile in the configuration file.
If you specify an option by using a parameter on the CLI command line, it overrides any value from either the corresponding environment variable or a profile in the configuration file.