Karaf: how to exclude a specific bundle from the log - JSON

I would like to exclude my bundle from the root Karaf log. The JSON sent by this bundle is too large, and the log is no longer readable.
I suppose I should change osgi:* in the line:
rootLogger=INFO,out,osgi:*
What value should I put there?
Edit: the problem is more complicated than I thought.
The JSON is injected into the logs by org.apache.cxf.cxf-rt-features-logging, which is also used by other bundles. I would like to remove only the JSON sent and received by my bundle.
How can I do this?

If you want to exclude a specific bundle from logging, just turn off logging for the bundle within the pax logging config.
log4j.logger.mybundle = OFF
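For example, in Karaf's etc/org.ops4j.pax.logging.cfg (a minimal sketch; com.example.mybundle is a placeholder for your bundle's root package, and the property style depends on whether your Karaf uses the log4j1- or log4j2-flavoured pax-logging config):

    # log4j1-style config (matches the rootLogger line in the question)
    log4j.logger.com.example.mybundle = OFF

    # log4j2-style config (Karaf 4.x)
    log4j2.logger.mybundle.name = com.example.mybundle
    log4j2.logger.mybundle.level = OFF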
If you want to fine tune CXF message logging please check http://cxf.apache.org/docs/message-logging.html.
Some things to note:
The logger name is ..; by default, Karaf cuts it down to just the type.
A lot of the details are in the MDC values
You need to change your pax logging config to make these visible.
You can use the logger name to fine-tune which services you want to log this way. For example, set the log level to WARN for noisy services to keep them out of the log, or route some services to another file (see the sketch below).
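A minimal sketch of both ideas for etc/org.ops4j.pax.logging.cfg, assuming the log4j2-style config found in newer Karaf versions; org.apache.cxf.services.MyNoisyService is a placeholder service logger name, and %X (which prints the whole MDC map) is used because the exact MDC key names depend on the CXF version:

    # make MDC values visible by adding %X to the output pattern
    log4j2.pattern = %d{ISO8601} | %-5p | %t | %c | %X | %m%n

    # quiet a noisy service by raising its log level
    log4j2.logger.noisyservice.name = org.apache.cxf.services.MyNoisyService
    log4j2.logger.noisyservice.level = WARN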

Related

Interfering options to reference Logback config file, starting with Micronaut 3.8

As far as I can see, starting with Micronaut 3.8 another option has been added to reference Logback's XML config file from within the application.yml file(s), as also described in the current Micronaut documentation.
This sometimes results in weird logging behaviour!
I've described my observations, details, course of investigation, and a reference to a stripped-down Micronaut test project in another StackOverflow thread.
To sum it up, the option to reference a custom Logback XML configuration file within the application.yml file:
does NOT affect log messages issued before the final setup of Micronaut's ApplicationContext, as is often the case with frameworks like OR mappers (even if they get set up in response to configs laid down in the application.yml file itself), and
interferes with the option to reference the Logback XML config file via a JVM CLI switch (like -Dlogback.configurationFile=logback-json-format.xml). To me it seems that both config options (JVM CLI switch and application.yml) have to be in sync to get the expected behaviour (a sketch of the two competing options follows).
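For illustration, a minimal sketch of the two competing configuration points; logger.config is the property name as I read the Micronaut documentation (not verified against every 3.8.x release), and the file and jar names are placeholders:

    # application.yml
    logger:
      config: logback-json-format.xml

    # versus the JVM CLI switch from above
    java -Dlogback.configurationFile=logback-json-format.xml -jar app.jar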
So I propose to:
make the reference to Logback XML configurations in application.yml files count from "the beginning" (if at all possible), and
establish a priority between both config options, like "the JVM CLI switch supersedes the config in application.yml", and/or
make io.micronaut.logging.impl.LogbackLoggingSystem aware of the mentioned JVM CLI switch.
Otherwise I'm not sure whether the option to configure a custom Logback XML file within application.yml is that valuable in light of all the "troubles".

Change Kubernetes log format to json at create or runtime?

I would like to change the default Kubernetes log format to "json" for system logs at "create" or preferably at "runtime".
The documentation designates the --logging-format=json flag for this purpose. However, I have been unable to identify what command-line call I would use to specify this flag.
I have tried kube-apiserver and the kind command line with no luck so far.
I am currently using kind, but any reference related to Kubernetes would be fine.
Questions:
Does this need to be set at "create" time for the cluster?
If runtime is OK, what command-line utility would I use to change a cluster configuration at runtime?
Do I need to apply a configuration with "kubectl" and a YAML file?
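For what it's worth, one way to pass component flags at cluster create time with kind is via kubeadmConfigPatches in the cluster config file; a minimal, untested sketch (field names per kind's v1alpha4 config; --logging-format support varies by Kubernetes version and component):

    # kind-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
          extraArgs:
            logging-format: "json"

    # create the cluster with it
    kind create cluster --config kind-config.yaml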

CICS Bundle and JSON webservice

I am doing a POC for a client account, where I am trying to set up a request-response model of a JSON web service in which CICS acts as the client. I have created 2 separate bundles for placing the request and response jsbind files. The problem is that only one of my bundles is active at a time (either request or response), and every time I have to discard one bundle and install the other. Is there a way I can install multiple bundles simultaneously in a CICS region? Or can a bundle be discarded and another bundle installed dynamically by the application program itself?
You can absolutely install multiple CICS bundles simultaneously in a CICS region.
The first thing to check is the CICS region's job log for messages explaining why the second bundle failed to install (or failed to enable). The messages will likely start with DFHRL.
If you have installed each of the bundles successfully (albeit independently), then it could be something as simple as a naming clash. Make sure each bundle has a unique name.
This Redbooks publication (especially chapter 11) should be useful:
Implementing IBM CICS JSON Web Services for Mobile Applications
Also, make sure the bundle-id is unique. The bundle-id is generated from the bundle directory name, and can be found inside the META-INF/cics.xml file.
The CICS region job log will mention "The CICS resource lifecycle manager has failed to create the BUNDLE resource ", but it does not give a reason why creation failed.
There is, however, a line stating "BUNDLE resource is being created with BUNDLEID and version .". You could check whether the bundle-ids are the same for both bundles (a small sketch of how to inspect installed bundles follows).
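If it helps, installed bundles and their enable status can also be inspected from a CICS terminal with CEMT; a minimal sketch, where REQBND is a placeholder bundle name:

    CEMT INQUIRE BUNDLE(*)

lists all installed BUNDLE resources and their status, and

    CEMT SET BUNDLE(REQBND) ENABLED

attempts to enable the request bundle once the naming clash is resolved.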

Is it possible to start ActiveMQ with a configuration file that's not in one of the default locations?

All right, all you ActiveMQ gurus out there...
Currently ActiveMQ requires a configuration file before it runs. Judging from its output message:
$ ./activemq start -h
INFO: Using default configuration (you can configure options in one of these file: /etc/default/activemq /home/user_name/.activemqrc)
it seems you can only put it in one of those two locations. Does anybody know if this is the case? Is there some command-line parameter to specify its location?
Thanks!
-roger-
Yes, it is possible. Here are 3 possible answers.
If the classpath is set up properly:
activemq start xbean:myconfig.xml
activemq start xbean:file:./conf/broker1.xml
Not using the classpath:
activemq start xbean:file:C:/ActiveMQ/conf/broker2.xml
reference:
http://activemq.apache.org/activemq-command-line-tools-reference.html
I have not been able to find the answer to this, and I struggled with it myself for a while, but I've found a bit of a workaround. When you use bin/activemq create, you can create a runnable instance that has its own bin, conf, and data directories. You then have more control over that runnable instance, and the .activemqrc becomes less important.
See this for detail on the create option : http://activemq.apache.org/unix-shell-script.html
Try this:
bin/activemq start xbean:/home/user/activemq.xml
Note that if the XML file includes other files like jetty.xml, those need to be in that directory as well.
If you are using a recent 5.6 SNAPSHOT, you can set the env var ACTIVEMQ_CONF to point to the location where you have the config files (a sketch follows).
In the /bin/activemq script, under # CONFIGURATION # (For using instances), you can add or remove any file destinations you'd like.
Be careful, though, since it stops at the first occurrence of a file and ignores the others; read more here:
Unix configuration
Happy coding!
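For example, a minimal sketch of the ACTIVEMQ_CONF approach (the path is a placeholder, and this assumes a build whose startup script honours that variable):

    export ACTIVEMQ_CONF=/opt/custom/activemq/conf
    bin/activemq start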

Parsed Console Output Error in Hudson

I'm running Hudson continuous integration for DbUnit.
When I run the job, the console output displays SUCCESS, so why does the Parsed Console Output keep returning this error:
ERROR:Failed to parse console log :
log-parser plugin ERROR: Cannot parse log: Can't read parsing rules file:
I already installed the log-parser plugin and restarted Hudson.
I installed the plugin from a remote PC.
Any help and suggestions are appreciated. Thanks!
1) Place the parser rules file in the JENKINS_HOME location.
2) Configure the log parser console output in the Global Configuration settings and name it.
3) Add this option in the Post-build Actions and select the name (a sample rules file is sketched below).
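For reference, a minimal sketch of what a parsing rules file for the log-parser plugin can look like; the regexes are placeholders, and the exact rule keywords supported may vary by plugin version:

    # mark matching console lines as errors, warnings, or info
    error /ERROR|FATAL/
    warning /WARN/
    info /INFO/
    # explicitly mark known-harmless lines so they are not flagged
    ok /Expected errors: 0/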
OK, silly me...
I forgot to configure the global configuration in Hudson that links to the parser rules file.
Problem solved.
I'm posting this in case anyone else has a specific case of this problem. The issue started when upgrading from 1.509.2 to 1.554.3... I had the parsing rules file in the win\system folder, which was a known issue when running Jenkins as a service. Well, I guess they fixed that by this version. I moved the parsing rules back into the Jenkins home folder and it worked fine again.