Same Appender log into 2 different files with Log4J2 - configuration

I would like to define a single Appender in my log4j2.xml configuration file and, using the magic of Log4j2's property substitution, be able to somehow log into 2 different files.
I imagine the Appender would look something like:
<RollingFile name="Rolling-${filename}" fileName="${filename}"
             filePattern="${filename}.%i.log.gz">
    <PatternLayout>
        <pattern>%d %p %c{1.} [%t] %m%n</pattern>
    </PatternLayout>
    <SizeBasedTriggeringPolicy size="500" />
</RollingFile>
Is there a way for a Logger to use this appender and to pass the filename property?
Or is there a way to pass it when we fetch the Logger with LogManager.getLogger?
Note that those loggers may or may not be in the same thread; it has to support both cases, so I don't think it's possible to use the ThreadContext or system properties.

The closest thing I can think of is RoutingAppender. RoutingAppender allows the log file to be dynamically selected based on values in some lookup. A popular built-in lookup is the ThreadContext map (see the example on the FAQ page), but you can create a custom lookup. Example code:
ThreadContext.put("ROUTINGKEY", "foo");
logger.debug("This message gets sent to route foo");
// Do some work, including logging by various loggers.
// All logging done in this thread is sent to foo.
// Other threads can also log to foo at the same time by setting ROUTINGKEY=foo.
logger.debug("... and we are done");
ThreadContext.remove("ROUTINGKEY"); // this thread no longer logs to foo
Example config that creates log files on the fly:
<Routing name="Routing">
    <Routes pattern="$${ctx:ROUTINGKEY}">

        <!-- This route is chosen if ThreadContext has a value for ROUTINGKEY.
             The value dynamically determines the name of the log file. -->
        <Route>
            <RollingFile name="Rolling-${ctx:ROUTINGKEY}"
                         fileName="logs/other-${ctx:ROUTINGKEY}.log"
                         filePattern="./logs/${date:yyyy-MM}/${ctx:ROUTINGKEY}-other-%d{yyyy-MM-dd}-%i.log.gz">
                <PatternLayout>
                    <pattern>%d{ISO8601} [%t] %p %c{3} - %m%n</pattern>
                </PatternLayout>
                <Policies>
                    <TimeBasedTriggeringPolicy interval="6" modulate="true" />
                    <SizeBasedTriggeringPolicy size="10 MB" />
                </Policies>
            </RollingFile>
        </Route>

        <!-- This route is chosen if ThreadContext has no value for key ROUTINGKEY. -->
        <Route key="$${ctx:ROUTINGKEY}">
            <RollingFile name="Rolling-default" fileName="logs/default.log"
                         filePattern="./logs/${date:yyyy-MM}/default-%d{yyyy-MM-dd}-%i.log.gz">
                <PatternLayout>
                    <pattern>%d{ISO8601} [%t] %p %c{3} - %m%n</pattern>
                </PatternLayout>
                <Policies>
                    <TimeBasedTriggeringPolicy interval="6" modulate="true" />
                    <SizeBasedTriggeringPolicy size="10 MB" />
                </Policies>
            </RollingFile>
        </Route>
    </Routes>
</Routing>
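For completeness, the Routing appender still has to be referenced from a logger configuration; a minimal sketch (routing everything through it at debug level is just an example):
<Loggers>
    <Root level="debug">
        <AppenderRef ref="Routing"/>
    </Root>
</Loggers>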
An alternative is to configure multiple loggers, each pointing to a separate appender (with additivity="false"). This allows your application to control the destination file by choosing a logger by its name. However, in that case you would need to configure separate appenders, so this does not fulfill your requirement; I mention it for completeness (a small sketch follows).
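A minimal sketch of that alternative, assuming two appenders named Rolling-A and Rolling-B are already defined (all names below are illustrative, not from the question):
<Loggers>
    <!-- Each logger name is bound to its own appender; additivity="false"
         keeps the events from also reaching the root logger's appender. -->
    <Logger name="com.company.moduleA" level="debug" additivity="false">
        <AppenderRef ref="Rolling-A"/>
    </Logger>
    <Logger name="com.company.moduleB" level="debug" additivity="false">
        <AppenderRef ref="Rolling-B"/>
    </Logger>
    <Root level="info">
        <AppenderRef ref="Rolling-A"/>
    </Root>
</Loggers>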

I am using the logger name to pass arguments to the appender.
It's hacky but it works:
LogManager.getLogger("com.company.test.Test.logto.xyz.log")
A custom StrLookup is necessary to extract the filename from the logger name.
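A minimal sketch of such a lookup, assuming the ".logto." marker convention used in the logger name above (the plugin name, package and class name are made up for illustration):
package com.company.logging;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.lookup.StrLookup;

// Hypothetical lookup: turns "com.company.test.Test.logto.xyz.log" into "xyz.log".
@Plugin(name = "loggerfile", category = StrLookup.CATEGORY)
public class LoggerFileLookup implements StrLookup {

    @Override
    public String lookup(String key) {
        // Without a log event there is no logger name to parse.
        return null;
    }

    @Override
    public String lookup(LogEvent event, String key) {
        if (event == null || event.getLoggerName() == null) {
            return null;
        }
        String loggerName = event.getLoggerName();
        int idx = loggerName.indexOf(".logto.");
        // Everything after the ".logto." marker is treated as the file name.
        return idx >= 0 ? loggerName.substring(idx + ".logto.".length()) : null;
    }
}
A RoutingAppender's Routes pattern could then refer to it as $${loggerfile:whatever}, so it is evaluated per log event.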

Related

RollingFileAppender not working when used inside SiftingAppender

I have the following logback setup
<appender name="TEST-SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator class="..."/>
    <sift>
        <appender name="ROLL-${fileName}" class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>../log/${fileName}.log</file>
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
                <fileNamePattern>../log/${fileName}%i.log</fileNamePattern>
                <minIndex>1</minIndex>
                <maxIndex>10</maxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
                <maxFileSize>5MB</maxFileSize>
            </triggeringPolicy>
            <encoder>
                <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>
    </sift>
</appender>
The discriminator class returns a value by parsing the loggerName. The key is defined as "fileName".
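For reference, a sketch of what such a discriminator might look like (the class name and the parsing rule are assumptions, not the original code):
package com.example.logging;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;

public class LoggerNameDiscriminator extends AbstractDiscriminator<ILoggingEvent> {

    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        // Assumed rule: use the last segment of the logger name as the file name.
        String loggerName = event.getLoggerName();
        int lastDot = loggerName.lastIndexOf('.');
        return lastDot >= 0 ? loggerName.substring(lastDot + 1) : loggerName;
    }

    @Override
    public String getKey() {
        return "fileName"; // matches ${fileName} in the sifted appender above
    }
}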
The logs roll over fine when I test just the RollingFileAppender (after replacing the ${fileName} variable references with a static value), but when it is nested under the SiftingAppender the logs do not roll over. I tested the SiftingAppender with a plain FileAppender and it is able to create the right file name based on the discriminator.
I also tested the same configuration by using the discriminator as
<discriminator>
    <key>fileName</key>
    <defaultValue>appname</defaultValue>
</discriminator>
and removing the class attribute. This creates appname.log but it does not roll over.
Setting debug="true" did not write any additional information to the log file.
Am I missing something here? How do I implement RollingFileAppender inside a SiftingAppender?
I figured out the issue with my setup. My logback.xml has two RollingFileAppenders (one nested in the sifter and one outside). Appender A was writing to application.log, and Appender B in some circumstances was writing to application.log as well (i.e. ${fileName} evaluated to application). So if I remove Appender A or rename Appender A's fileName, the logs roll over as configured. This probably means Appender A or B could not close and rename the file because the other appender still had a lock on it?
To test this I used an AsyncAppender as follows:
<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
</appender>
where "FILE" is the name of Appender A. Using this configuration works, but I see some strange behavior: the files do not roll over at the exact size specified, and in some cases the files are renamed with index 10 and get deleted automatically. Since this behavior is not very reliable, for now I got rid of Appender A.

Duplicate values from csv are inserted to DB using Apache Camel

I have a large set of CSV files (each containing millions of records).
So I use seda to get multi-threading. I split the input into chunks of 50000 lines, process each chunk and get a list of entity objects, which I then split and persist to the DB using JPA. Initially I was getting an out-of-memory error on the heap, but after moving to a machine with more memory the heap issue was solved.
But right now the issue is that duplicate records are being inserted into the DB: if there are 1,000,000 records in the CSV, around 2,000,000 records end up in the DB.
There is no primary key for the records in the CSV files, so I have used Hibernate to generate one.
Below is my code (camel-context.xml):
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="file:C:\Users\PPP\Desktop\input?noop=true" />
        <to uri="seda:StageIt" />
    </route>
    <route>
        <from uri="seda:StageIt?concurrentConsumers=1" />
        <split streaming="true">
            <tokenize token="\n" group="50000" />
            <to uri="seda:WriteToFile" />
        </split>
    </route>
    <route>
        <from uri="seda:WriteToFile?concurrentConsumers=8" />
        <setHeader headerName="CamelFileName">
            <simple>${exchangeId}</simple>
        </setHeader>
        <unmarshal ref="bindyDataformat">
            <bindy type="Csv" classType="target.bindy.RealEstate" />
        </unmarshal>
        <split>
            <simple>body</simple>
            <to uri="jpa:target.bindy.RealEstate" />
        </split>
    </route>
</camelContext>
Please Help.
Could you accidentally be starting 2 Camel contexts, so the routes run twice? How do you start the route?
I think the problem may be with "?noop=true", since this option doesn't move the file that is being processed. As a result, Camel will consume the file again and again. Have you tried removing this option so Camel moves the file to a .camel subdirectory? Camel by default doesn't process files that are in a "hidden" directory (one whose name starts with a dot). You can also add "?moveFailed=.failed" as a precaution, so files will always be moved to a directory, even if they fail. Let me know if this helps.
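For illustration, the consumer endpoint without noop and with the suggested moveFailed option might look like this (same path as in the question; processed files then go to the default .camel subdirectory):
<from uri="file:C:\Users\PPP\Desktop\input?moveFailed=.failed" />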
R.
To eliminate the duplicates in the DB, you could create the primary key from a hash of a record's contents instead of having Hibernate generate a random one; a sketch follows.
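A rough sketch of that idea (the class name, helper method and the choice of fields are illustrative, not part of the original code):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class RecordIds {

    // Derives a deterministic id from a record's field values so that re-importing
    // the same CSV line produces the same primary key instead of a new random one.
    public static String contentHash(String... fields) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            for (String field : fields) {
                digest.update(field == null ? new byte[0] : field.getBytes(StandardCharsets.UTF_8));
                digest.update((byte) 0); // separator so ("ab","c") differs from ("a","bc")
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
You could then set this hash as the entity's id before it reaches the jpa endpoint; duplicates would then collide on the primary key instead of silently multiplying.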
Alternatively, add noop: true to the endpoint configuration (e.g. in your YAML file): it flags a file that Camel has processed so it will not be processed again. You can also specify a destination location so that a copy is moved there as soon as the file is processed, and then remove the processed files from the source folder with a manual delete step; a scheduler is a good way to achieve that.

Number of lines read with Spring Batch ItemReader

I am using Spring Batch to write a CSV file to the database. This works just fine.
I am using a FlatFileItemReader and a custom ItemWriter. I am using no processor.
The import takes quite some time, and on the UI you don't see any progress. I implemented a progress bar and have some global properties where I can store some information (like the number of lines to read or the current import index).
My question is: how can I get the number of lines in the CSV?
Here's my xml:
<batch:job id="importPersonsJob" job-repository="jobRepository">
    <batch:step id="importPersonStep">
        <batch:tasklet transaction-manager="transactionManager">
            <batch:chunk reader="personItemReader"
                         writer="personItemWriter"
                         commit-interval="5"
                         skip-limit="10">
                <batch:skippable-exception-classes>
                    <batch:include class="java.lang.Throwable"/>
                </batch:skippable-exception-classes>
            </batch:chunk>
            <batch:listeners>
                <batch:listener ref="skipListener"/>
                <batch:listener ref="chunkListener"/>
            </batch:listeners>
        </batch:tasklet>
    </batch:step>
    <batch:listeners>
        <batch:listener ref="authenticationJobListener"/>
        <batch:listener ref="afterJobListener"/>
    </batch:listeners>
</batch:job>
I already tried to use the ItemReadListener interface, but that didn't work either.
If you need to know how many lines were read, it's available in Spring Batch itself:
take a look at StepExecution.
The method getReadCount() should give you the number you are looking for.
You need to add a step execution listener to your step in your XML configuration. To do that (copy/pasted from the Spring documentation):
<step id="step1">
    <tasklet>
        <chunk reader="reader" writer="writer" commit-interval="10"/>
        <listeners>
            <listener ref="chunkListener"/>
        </listeners>
    </tasklet>
</step>
where "chunkListener" is a bean of yours with a method annotated with @AfterStep, which tells Spring Batch to call it after your step; a sketch follows.
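A minimal sketch of such a listener bean (the class name is an assumption, and how you expose the count to your progress bar is up to you):
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;

public class ReadCountListener {

    // Called by Spring Batch after the step finishes; getReadCount() returns the
    // number of items (lines) the reader has read.
    @AfterStep
    public ExitStatus afterStep(StepExecution stepExecution) {
        long linesRead = stepExecution.getReadCount();
        // e.g. store linesRead in the global properties used by the progress bar
        System.out.println("Lines read: " + linesRead);
        return stepExecution.getExitStatus();
    }
}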
You should take a look at the Spring reference documentation for step configuration.
Hope that helps,

Logback - Multiple syslog appenders

I'm logging to syslog with a syslog appender as shown below:
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>localhost</syslogHost>
    <facility>LOCAL6</facility>
    <suffixPattern>app: %logger{20} %msg</suffixPattern>
</appender>
But I've got a new requirement where I want to send some logs to the "LOCAL5" facility instead of LOCAL6. I've read the logback configuration documentation http://logback.qos.ch/manual/configuration.html but I'm still not sure how to do this.
You can use two syslog appenders, one for LOCAL6 and the other for LOCAL5:
<appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>localhost</syslogHost>
    <facility>LOCAL6</facility>
    <suffixPattern>app: %logger{20} %msg</suffixPattern>
</appender>

<appender name="SYSLOG1" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>localhost</syslogHost>
    <facility>LOCAL5</facility>
    <suffixPattern>app: %logger{20} %msg</suffixPattern>
</appender>
The advantage is that you can customize (some of) the logs using filters, patterns, logger names, etc. Also, SyslogAppender extends UnsynchronizedAppenderBase.
The reason you cannot use a single appender is that the method facilityStringToint(String facilityStr) in SyslogAppender.java accepts a String and compares it against the standard facility names using equalsIgnoreCase.
Example
if ("KERN".equalsIgnoreCase(facilityStr)) {
    return SyslogConstants.LOG_KERN;
}
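For illustration, specific loggers can then be pointed at the LOCAL5 appender while everything else keeps going to LOCAL6 (the logger name below is just an example):
<logger name="com.example.audit" level="INFO" additivity="false">
    <appender-ref ref="SYSLOG1" />
</logger>
<root level="INFO">
    <appender-ref ref="SYSLOG" />
</root>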

How to define an Aggregator in Camel after using a ProducerTemplate?

I have a route that processes CSV files and inserts them as records into the database. Since it's a huge CSV file and the Camel CSV splitter ran out of memory, we had to write our own splitter. I wrote the splitter using a ProducerTemplate.
The route to process the csv looks a bit like this:
<route id="processCsvRoute">
    <from ref="inbox" />
    <to uri="bean:csvBean?method=process"/>
</route>
In the csvBean we do the splitting and finally execute the following Java code for every CSV line (a CSV line results in a product object):
producer.sendBodyAndHeader("direct:csvAggregator", product, "ID", csv.getFilename());
Now the csvAggregator route picks up these messages:
<route id="csvAggregator">
    <from uri="direct:csvAggregator" />
    <aggregate strategyRef="exchangeAggregatorStrategy"
               completionSize="10000"
               completionInterval="10000"
               parallelProcessing="true">
        <correlationExpression>
            <header>ID</header>
        </correlationExpression>
        <to uri="bean:batchInsertBean"/>
    </aggregate>
</route>
Is there a way to define the aggregator in the processCsvRoute? My solution is working, but it doesn't feel right that I have to create a separate route for it.
Thanks for your help.
You can just enable streaming mode on the splitter; then it reads the CSV file line by line and you won't run out of memory.
The documentation has more details: http://camel.apache.org/splitter
<split streaming="true" ...>
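For illustration, a streaming split could then live directly inside processCsvRoute, along the lines of this sketch (the per-line bean name is an assumption):
<route id="processCsvRoute">
    <from ref="inbox" />
    <!-- streaming="true" reads the file line by line instead of loading it all into memory -->
    <split streaming="true">
        <tokenize token="\n" />
        <to uri="bean:csvLineBean?method=process" />
    </split>
</route>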