How to configure logging in Jetty via a config file?

How do I get jetty to turn down the level of logging from the default of INFO?
I'm actually trying to run the default Apache Solr installation, which ships with jetty, but dumps a lot of information to the console, and I'd only like to see warnings.
I don't want to go hack up the code; I'd just like to be able to drop a config file somewhere, but I've been googling for a while and all I find are obsolete methods or programmatic methods.
Thanks!
edit: -D options would be great, too!

Short answer: java -DDEBUG -jar start.jar
Long answer: (taken from http://docs.codehaus.org/display/JETTY/Debugging)
"Jetty has it's own builtin logging facade that can log to stderr or slf4j (which in turn can log to commons logging, log4j, nlog4j and java logging). Jetty logging looks for a slf4j jar on the classpath. If found, slf4j is used to control logging otherwise stderr is used. The org.mortbay.log.Log class is used to coordinate logging and the following system parameters may be used to control logging:"
org.mortbay.log.class: Specify an implementation of org.mortbay.log.Logger to use
DEBUG: If set, debug logs will be produced, else only INFO and WARN logs will be generated
VERBOSE: If set, verbose logging is produced, including ignored exceptions
IGNORED: If set (jetty 6.1.10 and later), ignored exceptions are logged (independent of DEBUG and VERBOSE settings)
I understand "system parameters" in the text quoted above to mean Java system properties.
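Since these are ordinary Java system properties, they can be passed with -D on the start command; for example, combining the two flags quoted above for the most verbose console output:
java -DDEBUG -DVERBOSE -jar start.jar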

If you run jetty 6 as a daemon, the logging config file is:
/usr/share/jetty/resources/log4j.properties
(Where /usr/share/jetty is your $jetty.home.) To turn down the default log level in that log4j.properties file, change the rootLogger entry:
log4j.rootLogger=WARN, stdout
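For reference, a minimal log4j.properties along those lines could look like the following (the stdout appender name matches the rootLogger entry above; the layout pattern is just an example):
log4j.rootLogger=WARN, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p [%c] %m%n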

Find the logging.properties file under your JAVA_HOME directory; this is the default java.util.logging configuration.
Change the default global logging level from
.level= INFO
to
.level= WARNING
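If console output is still too chatty after that, note that java.util.logging handlers have their own levels in the same file; a typical entry to lower as well (shown here for illustration) is:
java.util.logging.ConsoleHandler.level = WARNING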

Related

Getting logs/more information during start-build command execution

A Jenkins pipeline is building Docker images; the OpenShift plugin(s) are used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference and thought --build-loglevel [0-5] might help here, but when I used it I got a warning that logging isn't supported because the BuildConfig uses the 'Binary' source type (seriously???):
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and information while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
So far it seems to work: I get the logs from my OpenShift build in my Jenkins pipeline. Next I'll try to fetch the logs only if the build does not reach Complete, to reduce the overall log output.
(for future searchers like me ^^)

apache drill on cluster start error

I installed Apache Drill on a cluster with 3 nodes.
When I use the following command to start it, it does not actually run:
bin/drillbit.sh start
I don't know how to solve it and would like your help.
ZooKeeper is running without problems.
When I check the log, it shows the following:
Exception in thread "main" org.apache.drill.exec.exception.DrillbitStartupException: Failure while initializing values in Drillbit.
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:287)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:271)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:267)
Caused by: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path
I checked java.library.path; it is the following:
/home/hadoop/bigdata/hadoop-2.7.2/lib/native/::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
So I added the following setting:
declare -x DRILL_JAVA_LIB_PATH="/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib"
However, it does not work, and the same problem occurs as before.
The declare -x DRILL_JAVA_LIB_PATH snippet you provided will not point Drill to the PAM library. Please follow all the instructions in the Drill docs here: https://drill.apache.org/docs/using-jpam-as-the-pam-authenticator/
Note: you will have to perform those steps on all 3 nodes of your cluster.
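For reference, the kind of change those docs call for is pointing the Drillbit JVM at the directory containing the native JPAM library via java.library.path in conf/drill-env.sh on every node; a sketch along these lines, assuming the JPAM library was unpacked to /opt/pam/ (adjust the path to your installation):
export DRILLBIT_JAVA_OPTS="$DRILLBIT_JAVA_OPTS -Djava.library.path=/opt/pam/"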

HikariCP Pool makes Logback's insertFromJNDI configuration stop working

I have two Spring MVC applications that share a commons.jar library. This library includes the Logback logging library (Logback 1.2.3 and SLF4J 1.7.25) and the logback.xml file.
Both wars include this line in their web.xml file:
<env-entry>
<env-entry-name>applicationName</env-entry-name>
<env-entry-type>java.lang.String</env-entry-type>
<env-entry-value>nameOfApplicationA|nameOfApplicationB</env-entry-value>
</env-entry>
Each application generates its own log file including hostname, for example: HOST1-nameOfApplicationA.log. Logback configuration is as follows:
<insertFromJNDI env-entry-name="java:comp/env/applicationName" as="APP_NAME" />
<appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/${HOSTNAME}-${APP_NAME}.log</file>
...
</appender>
Everything was working OK (Spring MVC 4.3.7.RELEASE, Hibernate 4, C3P0 latest), but we decided to upgrade to Hibernate 5.2.10 and change to HikariCP 2.6.1. After that, logback was no longer able to resolve java:comp/env/applicationName:
ERROR in ch.qos.logback.classic.joran.action.InsertFromJNDIAction - [java:comp/env/applicationName] has null or empty value
Resulting in both applications using the same file name HOST1-APP_NAME_IS_UNDEFINED.log.
Since we changed Hibernate and HikariCP at the same time, we went back to C3P0 to isolate the root cause, and can confirm that the new version of Hibernate is not involved. The change was developed in its own branch, so no other change seems to be a factor (in any case, when returning to C3P0 it works).
I've been doing some tracing in Hikari's and Logback's code but I'm not able to see anything. I'm stuck and have no idea what to look at.
Plan B is to put a separate logback.xml in each war, but I would like to avoid that and understand the problem, as it may affect other parts of the application.
Both wars are deployed together in an Apache Tomcat/8.0.38 server (I also tried 8.5.12). It also happens if only one of the wars is deployed alone.
Although I found no solution, #brettw identified the problem (see https://github.com/brettwooldridge/HikariCP/issues/873), and a workaround was found.
It seems that because HikariCP depends on slf4j, and HikariCP is also being initialized and registered into JNDI, Logback ends up initializing before the <env-entry> entries have been registered.
The test made was to initialize the Hikari datasource with the "org.apache.naming.factory.BeanFactory" factory instead of "com.zaxxer.hikari.HikariJNDIFactory". This way it works correctly.
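For illustration, a HikariCP JNDI resource declared through Tomcat's generic BeanFactory (instead of HikariJNDIFactory) looks roughly like this in context.xml; the resource name, driver, URL and credentials below are placeholders, not values from the original setup:
<Resource name="jdbc/myDS" auth="Container"
          factory="org.apache.naming.factory.BeanFactory"
          type="com.zaxxer.hikari.HikariDataSource"
          driverClassName="org.postgresql.Driver"
          jdbcUrl="jdbc:postgresql://localhost:5432/mydb"
          username="user" password="secret" />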

How to check if log4j2 has been configured or not

We're all familiar with this message when you don't provide a configuration for log4j2:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
How can I check if log4j2 is not yet configured so that I can initialize with a default configuration if needed?
I ended up creating my own ConfigurationFactory and registering it with
-Dlog4j.configurationFactory
That way your ConfigurationFactory will know whether it's been invoked yet or not.
On the first call from your app to log4j2 code, log4j will configure itself before the method returns. If no config file is found, log4j will auto-configure with a default configuration which logs only errors to the console. So from your application's point of view there is never a time that log4j is not configured.
One idea is to check if a log4j2.xml file is on the classpath (using Class.getResource), and if it isn't, call System.setProperty("log4j.configurationFile", pathToYourConfig). Note that this must be done before the first call to the log4j2 API.
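A minimal Java sketch of that idea (the class name, resource name and fallback path below are placeholders, not part of any official API):
public final class Log4j2Bootstrap {
    public static void ensureConfigured() {
        // Is there a log4j2.xml at the root of the classpath?
        if (Log4j2Bootstrap.class.getResource("/log4j2.xml") == null) {
            // No config found: point log4j2 at a fallback file
            // before the first Logger is requested.
            System.setProperty("log4j.configurationFile", "/etc/myapp/log4j2-default.xml");
        }
    }
}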
An alternative:
Once you have a LoggerContext you can call
context.setConfigLocation(configLocation) where configLocation is a URI.
That will force a reconfiguration.
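A short sketch of that alternative, assuming log4j-core is the active implementation (the configuration file location below is just an example):
import java.net.URI;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;

public final class Log4j2Reconfigure {
    public static void reconfigure() {
        // The cast is safe when log4j-core provides the logging implementation.
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        // Pointing the context at an explicit configuration URI forces a reconfiguration.
        context.setConfigLocation(URI.create("file:///etc/myapp/log4j2.xml"));
    }
}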

Play TypeSafe Activator fails to start - IllegalArgumentException "Failed to download new template catalog properties"

I moved from Play 2.2.x to the latest Activator last night. I downloaded the minimal 1.2.10, extracted it into Program Files (x86)\Typesafe... and put the directory into the system path variable. I cloned my repository, and when I executed activator run it downloaded the required modules and my app was up and running. All great so far. run works!
Then I tried to create a new app, and activator fails, with the following trace:
Checking for a newer version of Activator (current version 1.2.10)...
... our current version 1.2.10 looks like the latest.
Found previous process id: 9632
FOUND REPO = activator-local # file:////C:/Program%20Files%20(x86)/Typesafe/activator-1.2.10-minimal/repository
Play server process ID is 9760
[info] play - Application started (Prod)
[info] play - Listening for HTTP on /127.0.0.1:8888
[info] a.e.s.Slf4jLogger - Slf4jLogger started
[WARN] [10/30/2014 10:47:13.972] [default-akka.actor.default-dispatcher-2] [ActorSystem(default)] Failed to download new template catalog properties: java.lang.IllegalArgumentException: requirement failed: Source file 'C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp' is a directory.
[ERROR] [10/30/2014 10:47:13.972] [default-akka.actor.default-dispatcher-2] [akka://default/user/template-cache] Could not find a template catalog. (activator.templates.repository.RepositoryException: We don't have C:\Users\admin\.activator\1.2.10\templates\cache.properties with an index hash in it, even though we should have downloaded one
activator.templates.repository.RepositoryException: We don't have C:\Users\admin\.activator\1.2.10\templates\cache.properties with an index hash in it, even though we should have downloaded one
at activator.cache.TemplateCacheActor.preStart(TemplateCacheActor.scala:184)
at akka.actor.Actor$class.aroundPreStart(Actor.scala:470)
at activator.cache.TemplateCacheActor.aroundPreStart(TemplateCacheActor.scala:25)
at akka.actor.ActorCell.create(ActorCell.scala:580)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
I've taken a look at several similar issues on SO and elsewhere. I've deleted the .activator directory and retried; I've tried this process from behind a proxy and without one, as well as offline (surely offline should work!), but it consistently gives the above error. activator ui gives the same error. I'm stuck and any suggestions would be appreciated. (Edit: I tried with the full Activator download, rather than minimal, and get the same error.)
Look for reasons it might be impossible to create or access 'C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp' ... maybe a permissions issue?
The failed check is for "is a directory" but that also fails if it just doesn't exist or can't be accessed.
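A quick way to check both possibilities from a command prompt (standard Windows commands, using the path reported in the error above):
dir /a "C:\Users\admin\.activator\1.2.10\templates"
icacls "C:\Users\admin\.activator\1.2.10\templates"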