I would like to change the default Kubernetes log format to "json" for system logs, either at cluster creation or, preferably, at runtime.
The documentation designates the --logging-format=json flag for this purpose. However, I have been unable to identify which command-line call I would use to specify this flag.
I have tried the kube-apiserver and kind command lines with no luck so far.
I am currently using kind, but any Kubernetes-related reference would be fine.
Questions:
Does this need to be set at cluster creation?
If runtime is OK, which command-line utility would I use to change a cluster's configuration at runtime?
Do I need to apply a configuration with kubectl and a YAML file?
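For the create-time case, here is a minimal, untested sketch that combines kind's kubeadmConfigPatches with kubeadm's apiServer.extraArgs (both fields come from their respective docs; combining them for this particular flag is an assumption, and the file name is a placeholder):

cat > kind-json-logging.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        logging-format: "json"
EOF
kind create cluster --config kind-json-logging.yaml

For the runtime case, note that the control-plane components run as static pods, so changing a flag after creation would mean editing the manifest (e.g. /etc/kubernetes/manifests/kube-apiserver.yaml inside the node), which restarts the component.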
As far as I can see, starting with Micronaut 3.8 there is an additional option to reference Logback's XML config file from within application.yml file(s), as also described in the current Micronaut documentation.
This sometimes results in weird logging behaviour!
I've described my observations, details, course of investigation, and a reference to a stripped-down Micronaut test project in another StackOverflow thread.
To sum it up:
- The option to reference a custom Logback XML configuration file within the application.yml file does NOT affect log messages issued before the final setup of Micronaut's ApplicationContext, as is often the case with frameworks like OR mappers (even if they get set up in response to configs laid down in the application.yml file itself), and
- it interferes with the option to reference the Logback XML config file via a JVM CLI switch (like -Dlogback.configurationFile=logback-json-format.xml). It seems to me that both config options (JVM CLI switch and application.yml) have to be in sync to get the expected behaviour.
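To make the timing difference concrete, here is a sketch of the two config points (the application.yml property name is an assumption based on the Micronaut 3.8 docs mentioned above; the jar path is a placeholder):

# application.yml entry, applied by Micronaut's LogbackLoggingSystem only once
# the ApplicationContext is being set up:
#
#   logger:
#     config: logback-json-format.xml
#
# JVM CLI switch, read by Logback itself at its very first initialization,
# i.e. before the ApplicationContext exists:
java -Dlogback.configurationFile=logback-json-format.xml -jar build/libs/app-all.jar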
So I propose to
- make the reference to Logback XML configurations in application.yml files count from "the beginning" (if at all possible), and
- establish a priority between both config options, like "the JVM CLI switch supersedes the config in application.yml", and/or
- make io.micronaut.logging.impl.LogbackLoggingSystem aware of the mentioned JVM CLI switch.
Otherwise I'm not sure the option to configure a custom Logback XML file within application.yml is all that valuable in light of all the "troubles".
I am trying to configure WSO2 by modifying its configuration file named "carbon.xml", but no matter what change I make to "carbon.xml" (even adding a single white space or modifying a comment), the WSO2 server resets carbon.xml to its original "out of the box" state.
I tried to protect carbon.xml by dropping its write permissions, but in that case the WSO2 server refuses to start: it aborts execution and displays an error complaining that it was not able to "write new configuration"!
Does anyone know how to solve this?
I found the answer. In WSO2 version 5.9 there is a new centralized configuration file named "deployment.toml". Configuration must be done in this file, and WSO2 then propagates the changes to the respective configuration files, such as carbon.xml or catalina-server.xml.
If you delete "deployment.toml", WSO2 will fall back to the previous behavior.
With the new 4.5.0 carbon-kernel release, all WSO2 products, such as APIM 3.0.0 and IS 5.9.0, introduced a new config model. Under the new model there is a centralized configuration file (deployment.toml) where users add their configurations; those configurations are then written into the respective .xml files.
This new config model was introduced in order to simplify configuration (previously there were a lot of configuration files) and to improve the user experience. Please refer to the documentation below for further information on this new config model.
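As a sketch of the workflow (the key names follow the new-configuration-model docs linked below; the hostname value and product home are placeholders):

# Edit <PRODUCT_HOME>/repository/conf/deployment.toml instead of carbon.xml:
#
#   [server]
#   hostname = "my-host.example.com"   # propagated into carbon.xml at startup
#
# then restart so the values are pushed into carbon.xml, catalina-server.xml, etc.:
./bin/wso2server.sh restart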
Related documents:
https://wso2.com/blogs/thesource/2019/10/simplifying-configuration-with-WSO2-identity-server
https://is.docs.wso2.com/en/next/references/new-configuration-model/
If you have a deployment.toml file, changes made directly to the .xml files will be overridden during server startup. Deleting the deployment.toml file will make the server use the old config model, but that is not a recommended approach.
I am beginning to work with OrientDB and I have the following questions.
Is it mandatory to set both environment variables? I was hoping to work with Studio without them, just setting the XML with my own environment variables.
Is there any way to use custom variables programmatically in my Java program?
Regards.
You don't need to set environment variables to work with OrientDB unless you're planning to run it from outside of its /bin directory, for example as a service.
OrientDB Docs | Windows Service
OrientDB Docs | Unix Service
I finally understood what was missing.
At startup, the OServerPluginManager takes the ORIENTDB_HOME that was set, appends "plugins", and uses that directory to register the plugins.
But between setting the plugin directory from the environment variable and registering the plugins, the value is overridden by a check of the server property "plugin.directory".
So adding that property at the server level, pointing to the directory where the plugins are, fixes the problem.
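A sketch of that fix (the path value is a placeholder; the <entry> element follows OrientDB's server-config property syntax):

# In $ORIENTDB_HOME/config/orientdb-server-config.xml, inside the <properties>
# section, declare the plugin directory explicitly:
#
#   <entry name="plugin.directory" value="/opt/orientdb/plugins"/>
#
# then restart the server so OServerPluginManager picks it up:
$ORIENTDB_HOME/bin/server.sh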
I am using apache-spark 1.2.0 and would like my user's Linux environment variable $MY_KEY to be made available to my Java job when executed using master=local.
In Java land this could be passed in using a -D parameter, but I cannot get it recognized when my driver is launched using spark-submit.
I have tried adding this to conf/spark-defaults.conf, but Spark will not resolve the environment variable $MY_KEY when it executes my Java job (I can see this in my logs):
spark.driver.extraJavaOptions -Dkeyfile="${MY_KEY}"
I have tried passing the same thing as an argument when calling spark-submit, but this doesn't work either.
The same problem occurs when adding it to conf/spark-env.sh.
The only way I have got this to work is by editing the bin/spark-submit script directly, which defeats the purpose of reading it from the existing environment variable and will get overwritten when I upgrade Spark.
So it looks to me like spark-submit ignores the current user's environment variables and only allows a restricted subset of variables to be defined in its conf files. Does anyone know how I can resolve this?
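One approach worth trying (a sketch; the class and jar names are placeholders): conf files are not processed by a shell, so "${MY_KEY}" stays a literal string there, but on the spark-submit command line the launching shell expands the variable before Spark ever sees it. --driver-java-options then applies the resulting -D flag to the driver JVM:

spark-submit \
  --master local \
  --driver-java-options "-Dkeyfile=$MY_KEY" \
  --class com.example.MyJob \
  my-job.jar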
All right, all you ActiveMQ gurus out there...
Currently ActiveMQ requires a configuration file before it runs. It appears from its debug output message:
$ ./activemq start -h
INFO: Using default configuration (you can configure options in one of these file: /etc/default/activemq /home/user_name/.activemqrc)
that you can only put it in one of those two locations. Does anybody know if this is the case? Is there some command-line parameter to specify its location?
Thanks!
-roger-
Yes, it is possible. Here are three variants.
If classpath is setup properly:
activemq start xbean:myconfig.xml
activemq start xbean:file:./conf/broker1.xml
Not using the classpath:
activemq start xbean:file:C:/ActiveMQ/conf/broker2.xml
Reference:
http://activemq.apache.org/activemq-command-line-tools-reference.html
I have not been able to find the answer to this, and I struggled with it myself for a while, but I've found a bit of a workaround. When you use bin/activemq create, you can create a runnable instance that has its own bin, conf, and data directories. You then have more control over that runnable instance, and the .activemqrc becomes less important.
See this for details on the create option: http://activemq.apache.org/unix-shell-script.html
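A sketch of that workaround (the instance path is a placeholder; the per-instance launch script named after the instance directory follows the unix-shell-script docs above):

bin/activemq create /opt/activemq/instances/broker1   # creates its own bin/, conf/, data/
/opt/activemq/instances/broker1/bin/broker1 start     # runs with that instance's conf/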
Try this:
bin/activemq start xbean:/home/user/activemq.xml
Note that if the XML file includes other files, like jetty.xml, then those need to be in that directory as well.
If you are using a recent 5.6 SNAPSHOT, you can set the environment variable ACTIVEMQ_CONF to point to the location where you keep the config files.
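For example (a sketch; the path is a placeholder):

export ACTIVEMQ_CONF=/opt/activemq/broker1/conf
bin/activemq start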
In the /bin/activemq script, under "# CONFIGURATION # For using instances", you can add or remove any file destinations you'd like.
Be very thorough, since it stops at the first occurrence of a file and ignores the others; read more here:
Unix configuration
Happy coding!