I am new to the OpenShift and fluentd world.
My project is deployed on OpenShift, and right now my project's console logs are routed to Graylog with the help of fluentd (it looks like a default configuration). But I also have a bunch of other log files sitting under a different folder structure, and I want those files routed to Graylog as well. How do I tell fluentd to look for files sitting under a different path in a pod? Eventually I either need to add another file path or somehow route all my log files to /var/log/containers. How can this be achieved?
My current configuration
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S
  tag raw.kubernetes.*
  format json
  keep_time_key true
  read_from_head true
  exclude_path []
  read_lines_limit 500
</source>
Multiple paths can be specified as a comma-separated list, for example:
path /path/to/a/*,/path/to/b/c.log
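Applied to the config above, that could look like the following sketch (the /var/log/app path is an assumption, standing in for wherever your extra files actually live):

<source>
  @type tail
  # tail both the default container logs and the extra directory
  path /var/log/containers/*.log,/var/log/app/*.log
  pos_file /var/log/es-containers.log.pos
  tag raw.kubernetes.*
  format json
  read_from_head true
</source>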
However, if you want to fetch logs from different pods, create a shared volume (emptyDir) for the containers inside the pod and then fetch the logs from that shared directory, as sketched below.
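A minimal sketch of that pattern, assuming a fluentd sidecar and an app that writes its extra files to /var/log/app (image names and paths are assumptions, adjust to your setup):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar        # hypothetical name
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                    # shared scratch volume, lives as long as the pod
  containers:
  - name: app
    image: my-app:latest            # your application image (assumption)
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # the app writes its extra log files here
  - name: fluentd
    image: fluent/fluentd:v1.16-1   # or whatever fluentd image you run
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # the sidecar tails the same directory
      readOnly: true

The sidecar can then tail /var/log/app/*.log with a source block like the one above and forward it to Graylog.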
I have built a Python module to access internal data files that can be accessed on multiple systems, as we have mirrors of our data release. I use a config.py file to help identify all the paths. Many of the scripts access this path info, but I don't see a reason why Read the Docs needs to build it. How can I get it to ignore these paths?
There are many other modules that do other things with the data, and I have found Read the Docs to be a nice reference for new users. Unfortunately, my Read the Docs builds have been failing for ages as a result of trying to find some of the local files.
https://readthedocs.org/projects/hetdex-api/builds/18207723/
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/hetdex-api/checkouts/latest/docs/hdr3/survey/amp_flag.fits'
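For what it's worth, one common workaround (a sketch, not from the post itself): Read the Docs sets the READTHEDOCS environment variable in its build environment, so config.py can skip resolving the local data paths there. All names below are illustrative:

import os

# True only inside a Read the Docs build
ON_RTD = os.environ.get("READTHEDOCS") == "True"

if ON_RTD:
    # Docs build: the data release is not available, so don't touch the filesystem.
    HDR3_DIR = None
else:
    HDR3_DIR = "/path/to/hdr3"  # placeholder for the real mirror path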
Where are the OpenShift Master and Node Host Files in v4.6?
Previously, in v3, these were hosted at:
Master host file at /etc/origin/master/master-config.yaml
Node host file at /etc/origin/node/node-config.yaml
Because the kubelet configuration has been managed dynamically as of OCP v4, you can check your current kubelet configuration using the following procedures instead of reading a configuration file on the node hosts as in OCP v3.
Further information is here: Generating a file that contains the current configuration.
You can check it using the above reference procedure (generate the configuration file) or via the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying nodes. Note that only certain settings can be changed; others cannot be changed at all.
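For example, a KubeletConfig custom resource along these lines (the label and the maxPods value are just illustrations; the target MachineConfigPool must carry the matching label):

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods              # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled     # the worker MCP must be labelled accordingly
  kubeletConfig:
    maxPods: 500                  # example of a tunable kubelet setting

You would label the pool with something like oc label machineconfigpool worker custom-kubelet=enabled before applying it.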
For the master config, it depends on what you want to do, as you will potentially change the setting via a MachineConfigPool or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.
I use the oc tool for several different clusters.
Since I usually keep local YAML files for any OpenShift objects I view/modify, either ad hoc or due to some config management scheme of the individual cluster, I have a separate directory on my machine for each cluster (which, in turn, is of course versioned in git). Let's call them ~/clusters/a/, ~/clusters/b/, etc.
Now, when I cd around on my local machine, the oc command uses the global ~/.kube/config to find the cluster I last logged in to. In other words, oc does not care at all about which directory I am in.
Is there a way to have oc store a "local" configuration (i.e. in ~/clusters/a/.kube_config or something like that), so that when I enter the ~/clusters/a/ directory, I am automatically working with that cluster without having to explicitly switch clusters with oc login?
You could set the KUBECONFIG environment variable to point to a different configuration file for each cluster. You would need to set the environment variable in each separate terminal session.
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
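For example, in one terminal (the path follows the directory layout from the question):

export KUBECONFIG=~/clusters/a/.kube_config
oc whoami --show-server   # this session now talks to cluster a only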
To expand on Graham's answer, KUBECONFIG can specify a list of config files, which will be merged if more than one exists. The first to set a particular value wins, as described in the merging rules.
So you can add a local config with just the current-context, e.g. ~/clusters/a/.kube_config could be
current-context: projecta/192-168-99-100:8443/developer
and ~/clusters/b/.kube_config:
current-context: projectb/192-168-99-101:8443/developer
Obviously you need to adjust this for your particular cluster, using the format
current-context: <namespace>/<cluster>/<user>
Then set KUBECONFIG to the relative path plus the global config:
export KUBECONFIG=./.kube_config:~/.kube/config
Note that if ./.kube_config does not exist it will be ignored.
The current-context will then be overridden by the one defined in the local .kube_config, if one exists.
I tested this locally with two Minishift clusters and it seemed to work OK. I have not tested the behaviour when writing config, though.
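For reference, with that KUBECONFIG exported, the effect looks something like this (context names follow the examples above):

cd ~/clusters/a
oc config current-context   # -> projecta/192-168-99-100:8443/developer
cd ~/clusters/b
oc config current-context   # -> projectb/192-168-99-101:8443/developer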
I need to configure an Apereo CAS server in a few days.
First I built cas.war 4.2.2 according to https://github.com/apereo/cas-overlay-template, and then I deployed it on Tomcat 8.0.36. After starting Tomcat, I can log in with the sample user (casuser / Mellon), but I can't find the cas.log file in the tomcat/logs folder, or anywhere else via find / -name cas.log.
I have copied log4j2.xml to /etc/cas/ as per the reference. Besides, I can't find any errors in tomcat/logs.
Did someone solve this problem, or does anyone have a clue?
By the way, the log4j2 xml is available at https://github.com/apereo/cas-overlay-template/blob/master/etc/log4j2.xml.
Your log4j file describes where the log should be found. You'll find the location inside a file appender.
Your logging configuration doesn't specify a path, so the files are going to end up in whatever the current directory is when you start Tomcat, or wherever Tomcat sets the working directory to. That probably isn't what you want.
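To make the location explicit, you can give the file appender an absolute path. A minimal sketch (the /var/log/cas path and layout pattern are assumptions; make sure the Tomcat user can write there):

<Configuration>
  <Appenders>
    <!-- absolute fileName so cas.log lands in a known place -->
    <RollingFile name="casFile" fileName="/var/log/cas/cas.log"
                 filePattern="/var/log/cas/cas-%d{yyyy-MM-dd}.log">
      <PatternLayout pattern="%d %p [%c] - %m%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="casFile"/>
    </Root>
  </Loggers>
</Configuration>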
I am using IBM Integration Bus v.9
I am trying to read configuration from a file, as in this tutorial.
Based on the documentation, I've already set up my environment variable in Windows like this:
MQSI_FILENODES_ROOT_DIRECTORY to C:\MQSIFileInput
In the FileRead node properties, I set the input directory to "config" (without quotes), because the file is located in the C:\MQSIFileInput\config directory.
When I run it, I get the error "The directory config is not a valid directory name". What am I missing here?
Do I need to set up another configuration to read the file properly?
Thank you.
The MQSI_FILENODES_ROOT_DIRECTORY variable needs to be visible to the execution group process at startup, so the first thing to check is how you set the env var and whether you restarted the broker.
Due to the way that processes are forked on Windows, the procedure for setting env vars is usually something like:
Stop the broker
Close the Broker Command Prompt
Modify mqsiprofile.cmd to include the variable (see the sketch after this list)
Open a new Broker Command Prompt
Verify the env var is set, e.g. echo %MQSI_FILENODES_ROOT_DIRECTORY%
Start the broker
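The mqsiprofile.cmd change from step 3 is a single line, for example (the path matches the question; adjust it to your install):

rem in mqsiprofile.cmd
set MQSI_FILENODES_ROOT_DIRECTORY=C:\MQSIFileInput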
The directory also needs to be readable by the broker's process ID (and writable if you will be deleting the file or moving it to a backout directory, etc.).