Making logback filenames in TimeBasedRollingPolicy use leading 0s for %i - logback

Is there any way to make logback create TimeBasedRollingPolicy file names that use leading 0s for %i? Here's an excerpt from the logback.xml file:
...
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/foo_%d{yyyyMMdd}-%i.log</fileNamePattern>
...
</rollingPolicy>
...
This creates files named like this foo_20130501-0.log, foo_20130501-1.log, ... foo_20130501-9.log, foo_20130501-10.log, ...
Instead, I would like the files named like this: foo_20130501-000.log, foo_20130501-001.log, ... foo_20130501-009.log, foo_20130501-010.log, ...

No, as of 2013-05-01, this is not possible with logback version 1.0.12 or earlier. Please create a jira issue requesting this feature.

Related

WARNING of duplicate C++ declaration/object description (for namespace) when using the "doxygenfile" directive for different source files

I want to use doxygen and sphinx to generate documentation for my source files. In the rst file, I use "doxygenfile" to pull in each source file, like:
.. doxygenfile:: Headerfile1.hpp
:project: MyProject
.. doxygenfile:: Headerfile2.hpp
:project: MyProject
As the classes in the two header files are defined in the same namespace, they both contain the same namespace declaration:
namespace Namespace_xxx
{
...
definitions ...
...
}
When building, a warning is reported like:
WARNING: Duplicate C++ declaration, also defined at XXX :17.
Declaration is '.. cpp:type:: Namespace_xxx'.
The same thing happens for Python files: when I import submodules from the same module in different .py files and pull them into the rst the same way, I get a warning like:
WARNING: duplicate object description of <module_name>
Why does sphinx treat these repeated namespace/module references as duplicate declarations? How can I fix this problem?
I tried the :noindex: option, but the build reported that "noindex" is not a valid option for "doxygenfile".

Logback: SizeAndTimeBasedRollingPolicy deletes all archived files when totalSizeCap reached

I am using SizeAndTimeBasedRollingPolicy in logback. For small values of maxFileSize and totalSizeCap, logback deletes only the older archived files when the totalSizeCap limit is reached. But for large values of totalSizeCap (~5GB), it deletes all archived files.
I would like only the older archived files to be deleted when the totalSizeCap limit is reached. I am using logback version 1.2.3.
Here is the logback configuration I am using:
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${tivo.logpath}/${tivo.logfilename}.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<!-- Rollover everyday. If file exceeds 1GB within a day, then file is archived with index starting from 0 -->
<fileNamePattern>${tivo.logpath}/${tivo.logfilename}-%d{yyyyMMdd}-%i.log.gz</fileNamePattern>
<!-- Each file should be at most 1GB -->
<maxFileSize>1GB</maxFileSize>
<!-- Keep maximum 30 days worth of archive files, deleting older ones -->
<maxHistory>30</maxHistory>
<!-- Total size of all archived files is at most 5GB -->
<totalSizeCap>5GB</totalSizeCap>
</rollingPolicy>
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="com.tivo.logging.logback.layout.JsonLayout">
<env>${envId}</env>
<datacenter>${dcId}</datacenter>
<serverId>${serverId}</serverId>
<build>${info.properties.buildChange}</build>
<service>${tivo.appname}</service>
</layout>
</encoder>
</appender>
Looks like this is a known issue with logback versions < 1.3.0:
Logback: SizeAndTimeBasedRollingPolicy applies totalSizeCap to each day in maxHistory
https://jira.qos.ch/browse/LOGBACK-1361
So we might have to update to that version.
Another interesting bug fixed in logback 1.3.0 is:
https://jira.qos.ch/browse/LOGBACK-1162
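For reference, upgrading usually just means bumping the logback-classic dependency. A minimal Maven sketch (the exact 1.3.x patch release is up to you; note that the 1.3 line targets SLF4J 2.x and Java 8+):
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <!-- 1.3.x contains the LOGBACK-1361 fix; pick the latest patch release -->
    <version>1.3.0</version>
</dependency>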

Cannot connect to MySQL with Weka

I am trying to connect a database to Weka 3.6.13 in Linux Elementary OS.
First, I had a problem with the JDBC connection, which I solved following this answer by changing the /usr/bin/weka file.
Now, when I load the database, this error appears:
Unknown data type: INT. Add entry in weka/experiment/DatabaseUtils.props.
However, I am only trying to use the Explorer, and this file doesn't even exist in my installation.
I installed via sudo apt install weka.
What should I do?
Look inside the directory where your weka.jar file resides, and check if there exists a file called DatabaseUtils.props.
The Weka wiki says:
Weka only looks for the DatabaseUtils.props file. If you take one of
the example files listed above, you need to rename it first.
My file is different. I think the actual name does not really matter; it's the filename extension that matters.
In my version of this file there is a section that looks like this:
... (snip) ...
# mysql-conversion / type-mappings
CHAR=0
TEXT=0
VARCHAR=0
STRING=0
LONGVARCHAR=9
BINARY=0
VARBINARY=0
LONGVARBINARY=9
BIT=1
BOOL=1
NUMERIC=2
DECIMAL=2
FLOAT=2
DOUBLE=2
TINYINT=3
SMALLINT=4
#SHORT=4
SHORT=5
INTEGER=5
INT=5
BIGINT=6
LONG=6
REAL=7
DATE=8
TIME=10
TIMESTAMP=11
#mappings for table creation
CREATE_STRING=TEXT
CREATE_INT=INT
CREATE_DOUBLE=DOUBLE
CREATE_DATE=DATETIME
DateFormat=yyyy-MM-dd HH:mm:ss
#database flags
checkUpperCaseNames=false
checkLowerCaseNames=false
checkForTable=true
setAutoCommit=true
createIndex=false
# All the reserved keywords for this database
Keywords=\
AND,\
ASC,\
BY,\
DESC,\
FROM,\
GROUP,\
INSERT,\
ORDER,\
SELECT,\
UPDATE,\
WHERE
# The character to append to attribute names to avoid exceptions due to
# clashes between keywords and attribute names
KeywordsMaskChar=_
#flags for loading and saving instances using DatabaseLoader/Saver
nominalToStringLimit=50
idColumn=auto_generated_id
If you do a Google search for this file, someone else has posted theirs on GitHub. The Weka wiki or SVN/Git repo might also list an official version somewhere (I cannot find it right now), or you can open your weka.jar file as a zip archive and extract the .props file (/src/main/java/weka/experiment/DatabaseUtils.props.mysql).
In any case, MySQL exists in many different versions, and I think you can even switch the engine inside MySQL, so I cannot guarantee that either of the two .props files mentioned here will really work for you. You should experiment a bit.
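For example, pulling the bundled MySQL template straight out of the jar could look like this (a sketch; the jar location and whether the .mysql variant is packaged in your build may differ, in which case take it from the source path quoted above):
# find where the Debian/Ubuntu package put weka.jar (path may vary)
dpkg -L weka | grep 'weka.jar$'
# extract the MySQL template and rename it as the wiki suggests
unzip -p /usr/share/java/weka.jar weka/experiment/DatabaseUtils.props.mysql > ~/DatabaseUtils.props
Weka should pick up a DatabaseUtils.props placed in your home directory (or the current working directory) in preference to the one inside the jar.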

CVS -- Need command line to change status of file from binary to allow keyword substitution

I am coming into an existing project after several years of use. I have been attempting to add the nice keywords $Header$ and $Id$ so that I can identify the file versions in use.
I have come across several text files where these keywords did not expand at all. Investigation has determined that CVS thinks these files are BINARY and will not expand the keywords.
Is there any way, from a Linux command line, to permanently change the status of these files in the repository so that keyword expansion happens? I'd appreciate any pointers; several attempts I have made have not succeeded.
cvs admin -kkv filename
will restore the file to the default text mode so keywords are expanded.
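For example (hypothetical file name; the follow-up cvs update -A clears any sticky -kb option still recorded for your working copy, so keywords get expanded on the next checkout/update):
cvs admin -kkv myfile.txt   # repository: switch back to default keyword-value expansion
cvs update -A myfile.txt    # working copy: drop the sticky -kb option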
If you type
cvs log -h filename
(to show just the header and not the entire history), a binary file will show
keyword substitution: b
which indicates that keyword substitution is never done, while a text file will show
keyword substitution: kv
The CVSROOT/cvswrappers file can be used to specify the default keyword-substitution options for new files you add, based on their names.
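Entries in cvswrappers look something like this (illustrative patterns only; anything not matching a -kb pattern gets the normal text handling, including keyword expansion):
# treat these as binary: no keyword substitution, no conversion
*.png -k 'b'
*.jar -k 'b'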

How to disable printing the Logback pattern in log files

I noticed that Logback prints the pattern that it uses to format the log entries at the top of every log file.
For instance, I see this at the top of each log file:
#logback.classic pattern: %d [%t] %-5p %c - %m%n
How can I disable this?
Setting outputPatternAsPresentationHeader to false works. Note that the issue is also addressed in logback version 1.0.3, where outputPatternAsPresentationHeader is set to false by default.
Okay, I found that this was answered on the logback-user mailing list.
I added this:
<outputPatternAsPresentationHeader>false</outputPatternAsPresentationHeader>
to the <encoder> element, below the <pattern> element, and my problem was solved!
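For context, the resulting encoder section looks roughly like this (the pattern shown is just the one from the question; use your own):
<encoder>
    <pattern>%d [%t] %-5p %c - %m%n</pattern>
    <!-- suppress the "#logback.classic pattern: ..." header line in each log file -->
    <outputPatternAsPresentationHeader>false</outputPatternAsPresentationHeader>
</encoder>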