logback rolling file is misplaced

I have an application that is deployed on OpenShift. I have the configuration below in my logback.xml:
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<prudent>true</prudent>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${logPath}myLogFile.%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%d{ISO8601} %p %t %c{1}.%M - %m%n</pattern>
</encoder>
I can see on the server that ${logPath} is properly substituted, and the logback configuration has no problems setting properties. However, it doesn't write the log file to the specified place. I also tried specifying a <file> property and setting prudent to "false", still no luck. I checked the permissions on the file system and there are no problems creating and writing a file at the specified location. I'm using logback 1.1.2.
Note: I checked this question and its answer: Setup for a Logback RollingFileAppender with prudent flag and a file location.
Does anybody have an idea what could be wrong here? I hope there isn't a bug in TimeBasedRollingPolicy.
Update: clarifying variable substitution (per the logback documentation: http://logback.qos.ch/manual/configuration.html#variableSubstitution ).
The ${logPath} variable is substituted at build time (in Jenkins), and the generated war file contains the correct path to the log file. This war file is then deployed to Tomcat.
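For reference, if logPath were instead resolved at runtime, it could be supplied as a JVM system property (-DlogPath=/some/dir/) and picked up by logback's own substitution; a minimal sketch, where the default value is just an example:
<configuration>
  <!-- resolves from -DlogPath=... at startup; the fallback shown here is only an example -->
  <property name="logPath" value="${logPath:-/var/log/myapp/}" />
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    ...
  </appender>
</configuration>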

wildfly10 and slf4j and logback do not work

I followed this configuration for WildFly 10:
https://developer.jboss.org/thread/237094
So, to be able to use slf4j and logback in my application, I disabled the logging subsystem:
<jboss-deployment-structure>
  <deployment>
    <!-- exclude-subsystem prevents a subsystem's deployment unit processors running on a deployment -->
    <!-- which gives basically the same effect as removing the subsystem, but it only affects a single deployment -->
    <exclude-subsystems>
      <subsystem name="logging" />
    </exclude-subsystems>
  </deployment>
</jboss-deployment-structure>
With this configuration my application correctly uses my logback configuration. The problem is that the server no longer uses its own logging, so general server information is no longer written to server.log but to my application's appender instead.
This sounds very strange to me. I tried a lot of other configurations, such as directly excluding the modules (i.e. org.slf4j and org.slf4j.impl) in my jboss-deployment-structure.xml file, but with no effect.
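For reference, the module-exclusion variant mentioned above looks roughly like this (a sketch; the module names are the ones from the question, and whether an org.slf4j.impl module exists depends on the WildFly version):
<jboss-deployment-structure>
  <deployment>
    <exclusions>
      <module name="org.slf4j" />
      <module name="org.slf4j.impl" />
    </exclusions>
  </deployment>
</jboss-deployment-structure>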

Adding Splunk logback appender prevents application termination

I have the following logback configuration and I am using it in a very simple Java application that does nothing except log one line. When I uncomment the Splunk appender-ref line, the application never exits, even though it has finished.
Is there a way to terminate all the logging threads so that the main application exits?
logback.xml
<appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>${splunkUrl}</url>
<token>${splunkToken}</token>
<source>${projectName}</source>
<host>${COMPUTERNAME}</host>
<sourcetype>batch_application_log:json</sourcetype>
<disableCertificateValidation>true</disableCertificateValidation>
<!--<messageFormat>json</messageFormat>-->
<!--<retries_on_error>1</retries_on_error>-->
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>"%msg"</pattern>
</layout>
</appender>
<root level="INFO">
<!--<appender-ref ref="SPLUNK"/>--> if I uncomment this line application never exits
</root>
Java code
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    public static void main(String[] args) {
        final Logger logger = LoggerFactory.getLogger(Main.class);
        logger.info("******");
    }
}
You could add a Logback shutdown hook; this will close all appenders and stop any active threads related to Logback.
For example:
<configuration debug="true">
  <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook">
    <!--
      the default value is 0 millis; a non-default
      value is included here just to show how it can be supplied
    -->
    <delay>10</delay>
  </shutdownHook>
  ...
</configuration>
With the shutdown hook in place and debug="true" Logback will emit its own log events like so ...
08:57:19,410 |-INFO in ch.qos.logback.core.hook.DelayingShutdownHook#2bafec4c - Sleeping for 10 milliseconds
08:57:19,421 |-INFO in ch.qos.logback.core.hook.DelayingShutdownHook#2bafec4c - Logback context being closed via shutdown hook
Note: there is no requirement to use debug="true"; I have only included that to show you how to verify that the shutdown hook has been executed.
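As an alternative to the shutdown hook (not part of the original answer), the logger context can also be stopped explicitly at the end of main; a minimal sketch:
import ch.qos.logback.classic.LoggerContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    public static void main(String[] args) {
        final Logger logger = LoggerFactory.getLogger(Main.class);
        logger.info("******");

        // Stop the Logback context so appender worker threads
        // (such as the Splunk HTTP appender's) are shut down and the JVM can exit.
        ((LoggerContext) LoggerFactory.getILoggerFactory()).stop();
    }
}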

HikariCP Pool makes Logback's insertFromJNDI configuration stop working

I have two Spring MVC applications that share a commons.jar library. This library includes the logback logging library (logback 1.2.3 and slf4j 1.7.25) and the logback.xml file.
Both wars include this line in their web.xml file:
<env-entry>
<env-entry-name>applicationName</env-entry-name>
<env-entry-type>java.lang.String</env-entry-type>
<env-entry-value>nameOfApplicationA|nameOfApplicationB</env-entry-value>
</env-entry>
Each application generates its own log file including hostname, for example: HOST1-nameOfApplicationA.log. Logback configuration is as follows:
<insertFromJNDI env-entry-name="java:comp/env/applicationName" as="APP_NAME" />
<appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_PATH}/${HOSTNAME}-${APP_NAME}.log</file>
...
</appender>
Everything was working OK (Spring MVC 4.3.7.RELEASE, Hibernate 4, the latest C3P0), but we decided to upgrade to Hibernate 5.2.10 and switch to HikariCP 2.6.1. After that, logback was no longer able to resolve java:comp/env/applicationName:
ERROR in ch.qos.logback.classic.joran.action.InsertFromJNDIAction - [java:comp/env/applicationName] has null or empty value
Resulting in both applications using the same file name HOST1-APP_NAME_IS_UNDEFINED.log.
As we changed Hibernate and HikariCP at the same time, we went back to C3P0 to check the root cause, and can confirm that the new version of Hibernate has nothing to do with it. The change was developed in its own branch, so no other change appears to be involved (in any case, reverting to C3P0 makes it work again).
I've been doing some tracing in Hikari's and Logback's code but I'm not able to see anything. I'm stuck and have no idea where to look.
Plan B is to embed a separate logback.xml in each war, but I would like to avoid that and understand the problem, as it may affect other parts of the application.
Both wars are deployed together on an Apache Tomcat 8.0.38 server; I also tried 8.5.12. It also happens if only one of the wars is deployed on its own.
Although I found no solution, #brettw identified the problem (see https://github.com/brettwooldridge/HikariCP/issues/873) and provided a workaround.
It seems that because HikariCP depends on slf4j, and HikariCP is itself initialized and registered in JNDI, Logback ends up initializing before the <env-entry> entries have been registered.
The workaround was to initialize the Hikari datasource with the "org.apache.naming.factory.BeanFactory" factory instead of "com.zaxxer.hikari.HikariJNDIFactory". This way it works correctly.
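For illustration, the working variant declares the datasource in Tomcat's context.xml with the generic bean factory, so Tomcat instantiates a plain HikariDataSource bean rather than going through Hikari's JNDI factory (a sketch; the JNDI name, driver, URL and credentials below are hypothetical):
<Resource name="jdbc/myDataSource"
          auth="Container"
          type="com.zaxxer.hikari.HikariDataSource"
          factory="org.apache.naming.factory.BeanFactory"
          driverClassName="com.mysql.jdbc.Driver"
          jdbcUrl="jdbc:mysql://localhost:3306/mydb"
          username="user"
          password="secret" />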

Viewing lilith log files using the log viewer

I use logback version 0.9.29 and Lilith v0.9.43 in my application to generate a .lilith file using a FileAppender. I can see that the appender is created and that the app-log.lilith file is also created successfully.
The encoder configuration I use for the Lilith FileAppender is:
<encoder class="de.huxhorn.lilith.logback.encoder.ClassicLilithEncoder">
<IncludeCallerData>true</IncludeCallerData>
</encoder>
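For context, that encoder normally sits inside a plain FileAppender; a minimal sketch of the full appender (the append flag is an assumption, the file name is the one mentioned above):
<appender name="LILITH_FILE" class="ch.qos.logback.core.FileAppender">
  <file>app-log.lilith</file>
  <append>true</append>
  <encoder class="de.huxhorn.lilith.logback.encoder.ClassicLilithEncoder">
    <IncludeCallerData>true</IncludeCallerData>
  </encoder>
</appender>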
When I run the Lilith log viewer and try to open the generated Lilith file, I'm prompted to index the selected log file, and then I get an error telling me the Lilith file is invalid.
I can view the contents of the Lilith log file; it appears to be just a regular text file.
Any ideas on what might be wrong, and why the log viewer thinks the file is invalid?
Does the ClassicLilithEncoder just create a text file, or is that an indication that the file was not encoded correctly, which is why the log viewer considers it invalid?

NHibernate will insert but not update after move to host with shared server running mysql

I have a site running MVC and NHibernate (not Fluent) using the standard session-per-request pattern in an HTTP module. It runs fine locally (also with MySQL), but after a move to a hosting provider no update statements are being issued.
I can insert but not update, and no exceptions are raised. I have the 'show_sql' option switched on, which locally shows the update statements being issued, but on the server no update statements are logged.
I don't think NHProf is an option for me, as I can only run ASP.NET apps on my shared server. Are there any other methods of diagnosing NHibernate issues like this?
Has anyone had a similar issue?
Cheers,
A
The issue was that I had moved from my local dev environment with IIS5 to a shared server with IIS7. IIS7 has a different syntax for registering HTTP modules, so my NHibernate session module was not firing, which caused the behaviour originally described.
To fix this problem I added the modules section from web.config's system.web to system.webServer. You can also add validateIntegratedModeConfiguration="false" to the <validation> element in the system.webServer section, which allows the module to be registered under both sections so the same config works for IIS5 and IIS7.
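For illustration, the resulting web.config looks roughly like this (the module and type names are hypothetical placeholders for the NHibernate session module):
<system.web>
  <httpModules>
    <add name="NHibernateSessionModule" type="MyApp.NHibernateSessionModule, MyApp" />
  </httpModules>
</system.web>
<system.webServer>
  <validation validateIntegratedModeConfiguration="false" />
  <modules>
    <add name="NHibernateSessionModule" type="MyApp.NHibernateSessionModule, MyApp" />
  </modules>
</system.webServer>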
NHProf is an option for you!
You can have it log to a file, then pick that file up later. This is the log4net config you need:
<log4net>
  <appender name="NHProfAppender"
            type="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfOfflineAppender, HibernatingRhinos.Profiler.Appender">
    <file value="nhprof_output.nhprof" />
  </appender>
  <logger name="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfAppender.Setup">
    <appender-ref ref="NHProfAppender" />
  </logger>
</log4net>
Alternatively, if you don't have an NHProf license, you can log the NHibernate stuff to a file in order to see what's happening.
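For example, NHibernate writes the generated SQL under the NHibernate.SQL logger, so a plain log4net file appender wired to that logger will capture it (a sketch; the appender name and file name are assumptions):
<log4net>
  <appender name="NHibernateSqlFile" type="log4net.Appender.FileAppender">
    <file value="nhibernate_sql.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %message%newline" />
    </layout>
  </appender>
  <logger name="NHibernate.SQL" additivity="false">
    <level value="DEBUG" />
    <appender-ref ref="NHibernateSqlFile" />
  </logger>
</log4net>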