Logback: Rollover is not deleting expired Logs

I have a RollingFileAppender which is archiving the logs to files like this:
/data/work/logs/printPDF/logs/service/printPDF_2020-01/printPDF_2020-01-23_0.log.gz
Unfortunately, it is not deleting old archives (like the one named above).
Does anybody know what I'm doing wrong?
Below is an excerpt from the logback console output at startup:
- setting totalSizeCap to 20 GB
- Archive files will be limited to [100 MB] each.
- Will use gz compression
- Will use the pattern /data/work/logs/printPDF/logs/service/printPDF_%d{yyyy-MM, aux}/printPDF_%d{yyyy-MM-dd}_%i.log for the active file
- The date pattern is 'yyyy-MM-dd' from file name pattern '/data/work/logs/printPDF/logs/service/printPDF_%d{yyyy-MM, aux}/printPDF_%d{yyyy-MM-dd}_%i.log.gz'.
- Roll-over at midnight.
- Setting initial period to Tue Nov 10 05:31:17 MET 2020
- Cleaning on start up
- Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
- first clean up after appender initialization
- Multiple periods, i.e. 32 periods, seem to have elapsed. This is expected at application start.
- Active log file name: /data/work/logs/printPDF/printPDF.log
- File property is set to [/data/work/logs/printPDF/printPDF.log]
As you can see from the console output, it has recognised a daily date pattern and expects to roll over at midnight.
It does roll over at midnight, but expired logs are not being deleted. Here are a few of the filenames it failed to delete:
/data/work/logs/printPDF/logs/service/printPDF_2020-01/printPDF_2020-01-23_0.log.gz
/data/work/logs/printPDF/logs/service/printPDF_2020-01/printPDF_2020-01-23_1.log.gz
/data/work/logs/printPDF/logs/service/printPDF_2020-02/printPDF_2020-02-28_0.log.gz
And here's the one it just created at midnight:
/data/work/logs/printPDF/logs/service/printPDF_2020-11/printPDF_2020-11-10_0.log.gz
Here's my logback.xml, which I expect to retain 90 days of archives:
<configuration debug="true">
  <property name="SERVICE" value="printPDF" />
  <property name="LOGDIR" value="/data/work/logs/${SERVICE}" />
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="RollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOGDIR}/${SERVICE}.log</file>
    <append>true</append>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <FileNamePattern>${LOGDIR}/logs/service/${SERVICE}_%d{yyyy-MM, aux}/${SERVICE}_%d{yyyy-MM-dd}_%i.log.gz</FileNamePattern>
      <maxHistory>90</maxHistory>
      <maxFileSize>100MB</maxFileSize>
      <totalSizeCap>20GB</totalSizeCap>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
    </encoder>
  </appender>
  <logger name="org.apache" level="ERROR"/>
  <logger name="ch.qos.logback" level="INFO"/>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="RollingFile"/>
  </root>
</configuration>

You are using monthly rollover ("%d{yyyy-MM}"), so maxHistory=3 will keep three months of logs.
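For reference, maxHistory is counted in units of the rollover period set by the primary (non-aux) %d token. A minimal, untested sketch (reusing the paths from the question) where a monthly token drives the rollover, so maxHistory=3 keeps three months:
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
  <!-- the primary %d is monthly, so rollover and maxHistory are per month -->
  <fileNamePattern>${LOGDIR}/logs/service/${SERVICE}_%d{yyyy-MM}.log.gz</fileNamePattern>
  <maxHistory>3</maxHistory>
  <totalSizeCap>20GB</totalSizeCap>
</rollingPolicy>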

Related

Logback: how to store log file in a folder named with the current date

The following logback.xml creates a log file, but I want to create a new folder every day, named with the current date, and store the new log file there:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property name="DEV_HOME" value="/home/gaurav/flinklogs" />
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${DEV_HOME}/logFile.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>logFile.%d{yyyy-MM-dd}.log</fileNamePattern>
      <!-- keep 30 days' worth of history capped at 3GB total size -->
      <maxHistory>30</maxHistory>
      <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="ERROR">
    <appender-ref ref="FILE" />
  </root>
</configuration>
I also tried the following fileNamePattern, but it's not working:
<fileNamePattern>${DEV_HOME}/%d{yyyy/MM, aux}/logFile.%d{yyyy-MM-dd}.log</fileNamePattern>
It is still creating the log file at /home/gaurav/flinklogs/logFile.log.
If both <file> and <fileNamePattern> are specified, the current log file is located as specified in <file> and archived log files are located as specified in <fileNamePattern> - see the documentation.
You need to remove <file>${DEV_HOME}/logFile.log</file> and change <fileNamePattern>logFile.%d{yyyy-MM-dd}.log</fileNamePattern> to <fileNamePattern>${DEV_HOME}/%d{yyyy/MM, aux}/logFile.%d{yyyy-MM-dd}.log</fileNamePattern>, and it should work the way you want it to.
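Putting that together, a minimal, untested sketch of the resulting appender (values taken from the question and the corrected pattern above):
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <!-- no <file> element, so the active log file also follows fileNamePattern -->
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- the aux %d names the yyyy/MM folders; the primary %d drives daily rollover -->
    <fileNamePattern>${DEV_HOME}/%d{yyyy/MM, aux}/logFile.%d{yyyy-MM-dd}.log</fileNamePattern>
    <maxHistory>30</maxHistory>
    <totalSizeCap>3GB</totalSizeCap>
  </rollingPolicy>
  <encoder>
    <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
  </encoder>
</appender>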
I made sample code using Konrad Botor's answer.
Refer to the code below:
<appender name="dailyAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${payloadLoggingFilePath}-%d{yyyy-MM-dd}.log</fileNamePattern>
</rollingPolicy>
<encoder>
<pattern>%m%n</pattern>
</encoder>

Logback: SizeAndTimeBasedRollingPolicy deletes all archived files when totalSizeCap reached

I am using SizeAndTimeBasedRollingPolicy in logback. For small values of maxFileSize and totalSizeCap, logback deletes only the older archived files when the totalSizeCap limit is reached. But for large values of totalSizeCap (~5GB), it deletes all archived files.
I would like it to delete only the older archived files when the totalSizeCap limit is reached. I am using logback version 1.2.3.
Here is the logback configuration I am using:
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${tivo.logpath}/${tivo.logfilename}.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!-- Rollover everyday. If file exceeds 1GB within a day, then file is archived with index starting from 0 -->
<fileNamePattern>${tivo.logpath}/${tivo.logfilename}-%d{yyyyMMdd}-%i.log.gz</fileNamePattern>
<!-- Each file should be at most 1GB -->
<maxFileSize>1GB</maxFileSize>
<!-- Keep maximum 30 days worth of archive files, deleting older ones -->
<maxHistory>30</maxHistory>
<!-- Total size of all archived files is at most 5GB -->
<totalSizeCap>5GB</totalSizeCap>
</rollingPolicy>
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="com.tivo.logging.logback.layout.JsonLayout">
<env>${envId}</env>
<datacenter>${dcId}</datacenter>
<serverId>${serverId}</serverId>
<build>${info.properties.buildChange}</build>
<service>${tivo.appname}</service>
</layout>
</encoder>
</appender>
It looks like this is a known issue with logback versions earlier than 1.3.0:
Logback: SizeAndTimeBasedRollingPolicy applies totalSizeCap to each day in maxHistory
https://jira.qos.ch/browse/LOGBACK-1361
So we might have to update to that version.
Another interesting bug fixed in logback 1.3.0:
https://jira.qos.ch/browse/LOGBACK-1162
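If you go that route, a minimal sketch of the dependency bump, assuming Maven coordinates (note that logback 1.3.x targets SLF4J 2.x, so check your slf4j-api version too):
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <!-- a 1.3.x release containing the fix referenced above -->
  <version>1.3.0</version>
</dependency>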

Loggly logback appender is really slow

I'm using the Loggly logback appender as detailed in their setup guide:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="LOGGLY" class="ch.qos.logback.ext.loggly.LogglyAppender">
    <endpointUrl>https://logs-01.loggly.com/inputs/MY_TOKEN/tag/logback</endpointUrl>
    <pattern>%d{"ISO8601", UTC} %p %t %c %M - %m%n</pattern>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGGLY" />
  </root>
</configuration>
Everything is working as expected (logs are appearing in Loggly), but it is incredibly slow, about 1 second per log message. It has brought my application all but to a halt. Is there a performance tweak I'm missing?
I found the GitHub page for the LogglyAppender and used the LogglyBatchAppender instead of the one recommended by the Loggly docs. This seems to have solved the problem of long blocking writes for each log message:
<appender name="LOGGLY" class="ch.qos.logback.ext.loggly.LogglyBatchAppender">
<endpointUrl>https://logs-01.loggly.com/bulk/MY_TOKEN/tag/admin</endpointUrl>
<pattern>%d{"ISO8601", UTC} %p %t %c %M - %m%n</pattern>
<flushIntervalInSeconds>2</flushIntervalInSeconds>
</appender>
The syslog appender is also pretty fast: https://www.loggly.com/docs/java-logback-syslog/
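Another option, not mentioned in this thread: wrap the slow appender in logback's stock ch.qos.logback.core.AsyncAppender so the HTTP call happens off the application threads. A minimal sketch:
<appender name="ASYNC_LOGGLY" class="ch.qos.logback.core.AsyncAppender">
  <!-- events are queued and shipped by a background worker thread -->
  <appender-ref ref="LOGGLY" />
  <queueSize>512</queueSize>
  <!-- drop events instead of blocking the caller when the queue fills up -->
  <neverBlock>true</neverBlock>
</appender>
Then reference ASYNC_LOGGLY instead of LOGGLY from <root>.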

How to skip specific Execution Plan Steps?

In the TaskBlockService there is a POST call to mark one or more steps as skipped. There is no good example of how the POSTed XML (a list of strings) should specify the paths of the steps to skip.
I tried the following content for the POSTed data:
curl -X POST -H "Content-Type: application/xml" -d @remove-steps.xml https://xldeploy.company.com/deployit/tasks/v2/5e917094-d054-4cc7-940e-89d851ca225a/skip
File remove-steps.xml content - sample 1:
<list>
  <string>0_1_1</string>
</list>
File remove-steps.xml content - sample 2:
<list>
  <string>0-1-1</string>
</list>
The first format you list is right, but you have to make sure you're using a step path and not just the path to a block.
Let's say you get the blocks from your deployment plan with this call:
curl -uadmin:password http://localhost:4516/deployit/tasks/v2/28830810-5104-4ab9-9826-22f66dee265d
This will produce the result:
<task id="28830810-5104-4ab9-9826-22f66dee265d" failures="0" state="PENDING" owner="admin">
  <description>Initial deployment of Environments/local/TestApp001</description>
  <activeBlocks/>
  <metadata>
    <environment>local</environment>
    <taskType>INITIAL</taskType>
    <environment_id>Environments/local</environment_id>
    <application>TestApp001</application>
    <version>1.0</version>
  </metadata>
  <block id="0" state="PENDING" description="" root="true">
    <block id="0_1" state="PENDING" description="Deploy" phase="true">
      <block id="0_1_1" state="PENDING" description="Deploy TestApp001 1.0 on environment local"/>
    </block>
  </block>
  <dependencies/>
</task>
If you want to see the steps in block 0_1_1, you can use this REST call to get them:
curl -uadmin:password http://localhost:4516/deployit/tasks/v2/28830810-5104-4ab9-9826-22f66dee265d/block/0_1_1/step
<block id="0_1_1" state="PENDING" description="Deploy TestApp001 1.0 on environment local" current="0">
<step failures="0" state="PENDING" description="Execute Command"/>
<step failures="0" state="PENDING" description="Copy File001.txt to Infrastructure/localhost"/>
The steps are numbered within the block starting from 1. So if you want to skip the step "Copy File001.txt to Infrastructure/localhost", the step path is 0_1_1_2. Your XML will look like:
<list>
  <string>0_1_1_2</string>
</list>
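Putting it together, a sketch of the final call (task id, host and credentials reused from the examples above):
curl -uadmin:password -X POST -H "Content-Type: application/xml" \
  -d '<list><string>0_1_1_2</string></list>' \
  http://localhost:4516/deployit/tasks/v2/28830810-5104-4ab9-9826-22f66dee265d/skip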

length of a log line with logback

I looked on the web but did not find an answer to my question.
I use logback 1.1.3 with slf4j 1.7.10, and in my log file the lines are broken with a newline, like this:
date level thread Caller+1 at "class name, line name" \n - msg
I would like to have the same output but without the \n. Does anyone have an idea?
Here is my appender:
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
<file>${com.sun.aas.instanceRoot}/logs/log.log</file>
<encoder>
<pattern>%date %level [%thread] %caller{1..2} - %msg%n
</pattern>
</encoder>
</appender>
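For what it's worth: %caller terminates each caller frame with a line separator, which is where the extra \n comes from. One possible workaround (an untested sketch, not a confirmed fix from this thread) is to strip the line breaks with logback's %replace conversion word:
<encoder>
  <!-- %replace(p){regex, replacement}: strip the line breaks emitted by %caller -->
  <pattern>%date %level [%thread] %replace(%caller{1..2}){'[\r\n]+', ' '} - %msg%n</pattern>
</encoder>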