log4j2 archive every week, rotate every startup? - configuration

I am trying to figure out how to simply rename the active log file on every startup, and archive all the rotated files once a week.
I am forced to specify the "filePattern" at the RollingFile appender declaration, instead of at the policy. Does this make sense?

I ended up writing my own implementation of a DeleteAction and attaching it to a DefaultRolloverStrategy, so that it zips everything before deleting. You can find the source code at:
https://github.com/lqbweb/log4j2-ZipDelete
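For reference, the standard log4j2 pieces for this kind of setup look roughly like the sketch below (file names, paths, patterns and the retention age are placeholders). The filePattern does indeed live on the RollingFile appender itself, OnStartupTriggeringPolicy forces a rollover on every startup, and a cleanup action is attached to the DefaultRolloverStrategy; the custom zip-then-delete action from the repository above would plug in where the built-in Delete action sits here:

    <RollingFile name="Main"
                 fileName="logs/app.log"
                 filePattern="logs/archive/app-%d{yyyy-MM-dd}-%i.log">
      <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
      <Policies>
        <!-- forces the active file to be rolled over on every JVM startup -->
        <OnStartupTriggeringPolicy/>
      </Policies>
      <DefaultRolloverStrategy max="20">
        <!-- built-in cleanup: delete rolled files older than 7 days;
             a custom zip-then-delete action would be attached here instead -->
        <Delete basePath="logs/archive" maxDepth="1">
          <IfFileName glob="app-*.log"/>
          <IfLastModified age="7d"/>
        </Delete>
      </DefaultRolloverStrategy>
    </RollingFile>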

Release Pipeline to Deployment Group on prem

I think what I am looking to do is fairly simple - I just can't wrap my head around it.
I've got a repo in AzDo. This repo contains configuration files for firewalls. This is how we manage changes in these configurations.
I've got a simple build pipeline that copies the relevant files and creates an artifact.
I have a release pipeline that gets the files onto the on-prem machine in my Deployment Group. The files show up in c:\azagent\r1\_work\<artifact folder>.
As part of this pipeline I am looking to copy the files from c:\azagent\r1\_work\<artifact folder> to e:\shares\<artifact name>. This is the part that I cannot figure out how to make work.
What strategy could I use to put this together? I've looked into the documentation but it seems like this is somewhat of an edge case (not deploying an app or web site, etc). Ideally, I'd love to do this in a multi-stage YAML pipeline - but from what I've read, it appears as if these do not yet support Deployment Groups. So a classic pipeline is fine for now.
You can add a Copy Files task (click the plus sign (+) on the agent job and search for "Copy Files") in your release pipeline to copy the files to a different place on the local machine.
Then you can specify the source folder (e.g. $(System.DefaultWorkingDirectory)), the contents to copy, and the target folder (e.g. e:\shares\). For example, all contents in $(System.DefaultWorkingDirectory) (i.e. C:\agent\_work\r1\a) would be copied to the folder D:\Test\New folder.
Please check the predefined variables documentation for more information about how they map to local folders.
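If you do move to YAML later, the same task can be expressed roughly as below; the folder names are placeholders, and the classic-editor Copy Files task takes the same inputs:

    steps:
    - task: CopyFiles@2
      displayName: Copy firewall configs to the share
      inputs:
        SourceFolder: '$(System.DefaultWorkingDirectory)/<artifact folder>'
        Contents: '**'
        TargetFolder: 'e:\shares\<artifact name>'
        OverWrite: true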

logback not working in Flink

I have a single-node Flink instance which has the required jars for logback in the lib folder (logback-classic.jar, logback-core.jar, log4j-over-slf4j.jar). I have removed the jars for log4j from the lib folder (log4j-1.2.17.jar, slf4j-log4j12-1.7.7.jar). 'logback.xml' is also correctly updated in the 'conf' folder. I have also included 'logback.xml' in the classpath, although this does not seem to be considered while the job is run; Flink refers only to the logback.xml inside the conf folder. I have updated pom.xml as per Flink's documentation in order to exclude log4j.
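(For reference, the pom.xml change described in Flink's documentation is a set of exclusions roughly like the sketch below; the exact Flink module, Scala suffix and versions are assumptions and depend on the project.)

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-java_2.11</artifactId>
      <version>${flink.version}</version>
      <exclusions>
        <!-- keep log4j off the classpath so logback can bind to slf4j -->
        <exclusion>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.2.3</version>
    </dependency>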
I have some log entries set inside a few map and flatMap functions and some log entries outside those functions (e.g. "program execution started").
When I run the job, Flink writes only those logs that are coded outside the transformations. Those that are coded inside the transformations (map, flatMap etc.) are not getting written to the log file. Flink also displays a strange behavior regarding this: whenever I update the logback jars inside the lib folder (due to version changes), during the next job run all logs (even those inside map and flatMap) are written correctly to the log file, but they don't get written in any of the runs after that. This means my 'logback.xml' and its settings are correct; I just don't understand why the same settings don't work when the same job is run again.
Update
This issue was reported to the Flink team, and they have filed it as a bug in JIRA: https://issues.apache.org/jira/browse/FLINK-7990

Files not under caret on new computer

I opened my project on another computer, and the files where I'd been using a File Watcher are shown expanded; previously they used to be nested, the way home.scss is now after I've run the watcher once on that file.
Is there a way to automatically make all the files be nested?
When adding new files and folders with git, it would be quite troublesome to go into each and every file just to make them become nested.
For example, I have some minified JavaScript files that used to be nested but are now expanded for some reason.
Hope you understand. Thank you.
Edit: Nested***
Is there a way to automatically make all the files go under a caret like that?
Unfortunately not. Such nesting information (to "go under a caret", as you are saying) is taken from the "Output path to refresh" field of the corresponding File Watcher.
You have to run the File Watcher for such files at least once in order to see them nested like you have it on your other computer.
Here is how you can run File Watchers manually without the need to modify those files (so no extra history will appear in your git, or whatever VCS you may be using there):
https://stackoverflow.com/a/20012655/783119
P.S.
In PhpStorm 2016.3 (the next version, which will be released in 1.5-2 months or so) such nesting will be done automatically for the most common combinations, so there will be no need to have File Watchers provide that info.
If you wish, you can try an EAP build right now (EAP means Early Access Program, which is, simply speaking, a sort of alpha/beta build; therefore some bugs in new functionality might be present and performance may not be optimal).

What does it mean to "include the corresponding worker script (app-indexeddb-mirror-worker.js) among your deployable files"?

In the documentation for app-indexeddb-mirror at https://elements.polymer-project.org/elements/app-storage?active=app-indexeddb-mirror there is a section I've copied below. I think I'm running into an error because the indicated file isn't loading, but I'm not sure how to fix the issue. Do I add a reference in staticFileGlobs in sw-precache-config.js or somewhere else?
In order to ensure that operations on IndexedDB block the main browser thread as little as possible, app-indexeddb-mirror relies on a WebWorker to operate on its corresponding IndexedDB database. If you are vulcanizing or otherwise combining your source files before your app is deployed, make sure that you include the corresponding worker script (app-indexeddb-mirror-worker.js) among your deployable files. You can configure the path to the worker script with the worker-url attribute.
The error I'm getting:
GET https://example.com/src/common-worker-scope.js?https://example.com/bower_components/app-storage/app-indexeddb-mirror/app-indexeddb-mirror-worker.js net::ERR_INTERNET_DISCONNECTED
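For illustration, configuring the worker-url attribute mentioned in the quoted documentation might look roughly like this; the key/data bindings and the path below are assumptions, and if you precache with sw-precache the same path would also need to be covered by staticFileGlobs so the worker script is available offline:

    <!-- hypothetical usage: point worker-url at wherever the worker script
         is actually served from after the build -->
    <app-indexeddb-mirror
        key="user-data"
        data="{{liveData}}"
        persisted-data="{{persistedData}}"
        worker-url="/bower_components/app-storage/app-indexeddb-mirror/app-indexeddb-mirror-worker.js">
    </app-indexeddb-mirror>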

How to stop mercurial from syncing an EXISTING project file

So the problem is that all developers need different settings for their local testing, but the settings file is part of the project (unlike the nbproject folder, for example, which we all ignore). I know about .hgignore, but that filter only applies to files that are not tracked as part of the project.
If I hg forget the file, that removes it from the "global" repository, where we keep a "holder" version of the settings file.
Right now we just don't commit that file, but every now and then somebody forgets and pushes his own settings, which are then synced back to the other developers, and it's a constant pain. We just want to "automatically" not push that file. Is there a solution to this? Are we doing something wrong?
You could add a precommit hook that gives an error every time someone tries to commit this particular file.
To handle the case of developers who forget to set up such a hook, you can also add a server-side hook that will reject their push.
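A minimal sketch of both hooks in hgrc form follows; the settings-file path, the hook names and the Unix-shell commands are placeholders to adapt:

    # .hg/hgrc on each developer machine: abort the commit if the settings file is part of it
    [hooks]
    pretxncommit.blocksettings = ! hg status --change $HG_NODE -n | grep -qx "config/local.settings"

    # .hg/hgrc of the central repository: reject any push that touches the file
    [hooks]
    pretxnchangegroup.blocksettings = ! hg log -r $HG_NODE:tip --template "{files}\n" | grep -q "config/local.settings"

A non-zero exit status from a pretxncommit or pretxnchangegroup hook rolls the transaction back, which is what refuses the commit or the push.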