I'm using Logback's RollingFileAppender on a Linux server. If I delete the log file while the process is running, Logback (1.0.13) does not seem to recreate it, and log messages are lost.
I found another related question, where the answer is that the recovery mechanism is OS specific.
Has anyone found a way to have Logback recreate the file or call a StatusListener if it detects that the file has been deleted?
I solved it using a custom appender that extends RollingFileAppender and overrides writeOut(): check whether the file still exists, and call openFile(getFile()) to recreate it if it does not.
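A minimal sketch of that workaround, assuming Logback 1.0.x where FileAppender exposes openFile(String) and OutputStreamAppender exposes writeOut(E); the class name here is made up:

import java.io.File;
import java.io.IOException;
import ch.qos.logback.core.rolling.RollingFileAppender;

public class RecreatingRollingFileAppender<E> extends RollingFileAppender<E> {

    @Override
    protected void writeOut(E event) throws IOException {
        // If the log file was deleted externally, reopen it before writing
        // so that this event is not lost.
        File currentFile = new File(getFile());
        if (!currentFile.exists()) {
            openFile(getFile());
        }
        super.writeOut(event);
    }
}

You would then reference this class in logback.xml in place of the stock RollingFileAppender.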
I am trying to configure WSO2 by modifying its configuration file named "carbon.xml", but no matter what change I make to "carbon.xml", even adding a single white space or modifying a comment, the WSO2 server resets carbon.xml to its original "out of the box" state.
I tried to protect carbon.xml by dropping its write permissions, but in that case the WSO2 server refuses to start: it aborts execution and displays an error complaining that it was not able to "write new configuration".
Does anyone know how to solve this?
I found the answer. In WSO2 version 5.9 there is a new centralized configuration file named "deployment.toml". Configuration must be done in this file, and WSO2 then propagates the changes to the respective configuration files, such as carbon.xml or catalina-server.xml.
If you delete "deployment.toml", WSO2 will fall back to the previous behavior.
With the new 4.5.0 carbon-kernel release, WSO2 products based on it, such as APIM 3.0.0 and IS 5.9.0, introduced a new config model. According to this model, there is a centralized configuration file (deployment.toml) where users add their configurations, and those configurations are then applied to the respective .xml files.
This new config model was introduced to simplify configuration (previously there were a lot of configuration files) and to improve the user experience. Please refer to the documents below for further information on this new config model.
Related documents:
https://wso2.com/blogs/thesource/2019/10/simplifying-configuration-with-WSO2-identity-server
https://is.docs.wso2.com/en/next/references/new-configuration-model/
If you have a deployment.toml file, changes made directly to the xml files will be overridden during server startup. Deleting the deployment.toml file makes the server fall back to the old config model, but that is not a recommended approach.
The JSON IoT Agent is creating a very big log file with the messages sent to the Orion Context Broker. Is it possible to configure log management rules for this Node.js process, such as maximum size, rotation, compression, and log level? How can this be done?
Many thanks in advance for your support
Best Regards
I don't know the exact cause of the problem, but you could consider the following hints:
Use ERROR or WARNING (*) as the logLevel field in config.js. The INFO and DEBUG levels are very verbose.
You can use logrotate to rotate logs. logrotate is a general-purpose tool with plenty of documentation around, so it should be easy to master (with some time to learn, of course ;). A sketch of such a rule follows after this list, and the following configuration files in the IOTA-JSON repo may also help:
logrotate configuration
crontab configuration to invoke logrotate
(*) I don't remember if the right config token is WARNING or WARN (or both!), you would need to test, sorry...
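A minimal logrotate sketch along those lines; the log path, size threshold, and retention count are assumptions to adapt to your deployment:

# Hypothetical location of the JSON IoT Agent log file(s)
/var/log/iotagent-json/*.log {
    size 100M
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate avoids having to restart the agent, at the cost of possibly losing a few lines written while the file is truncated.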
I want to log exceptions to a log file using Play 1.
Can anyone tell me how to configure it?
If this is a duplicate question, please point it out.
TIA
You could create a custom log4j.properties in the conf/ directory.
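A sketch of such a file, assuming the standard log4j 1.x appenders that Play 1 ships with; the file path, size limits, and pattern are illustrative, not mandated:

# Send everything at ERROR and above to a rolling file
log4j.rootLogger=ERROR, Rolling

# Keep Play's own logger a bit more verbose if desired
log4j.logger.play=INFO

log4j.appender.Rolling=org.apache.log4j.RollingFileAppender
log4j.appender.Rolling.File=logs/application.log
log4j.appender.Rolling.MaxFileSize=1MB
log4j.appender.Rolling.MaxBackupIndex=100
log4j.appender.Rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.Rolling.layout.ConversionPattern=%d{ABSOLUTE} %-5p ~ %m%n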
Please see logging for production for details.
All right, all you ActiveMQ gurus out there...
Currently ActiveMQ requires a configuration file before it runs. It appears from its debug output message:
$ ./activemq start -h
INFO: Using default configuration (you can configure options in one of these file: /etc/default/activemq /home/user_name/.activemqrc)
that you can only put it in one of those two locations. Does anybody know if this is the case? Is there some command-line parameter to specify its location?
Thanks!
-roger-
Yes, it is possible. Here are three possible approaches.
If the classpath is set up properly:
activemq start xbean:myconfig.xml
activemq start xbean:file:./conf/broker1.xml
Not using the classpath:
activemq start xbean:file:C:/ActiveMQ/conf/broker2.xml
Reference:
http://activemq.apache.org/activemq-command-line-tools-reference.html
I have not been able to find the answer to this, and I struggled with it myself for a while, but I've found a bit of a workaround. When you use bin/activemq create, you can create a runnable instance that has its own bin, conf, and data directories. You then have more control over that runnable instance, and the .activemqrc becomes less important.
See this for details on the create option: http://activemq.apache.org/unix-shell-script.html
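A sketch of that approach; the instance path and name are made up, and the name of the start script generated inside the instance may differ by version:

# Create a runnable instance with its own bin/, conf/ and data/ directories
bin/activemq create /opt/activemq/instances/broker1
# Start it via the script generated inside the instance directory
/opt/activemq/instances/broker1/bin/broker1 start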
Try this:
bin/activemq start xbean:/home/user/activemq.xml
Note that if the xml file includes other files, like jetty.xml, then they need to be in that directory as well.
If you are using a recent 5.6 SNAPSHOT, you can set the environment variable ACTIVEMQ_CONF to point to the location where you keep the config files.
This is done in the bin/activemq script, under the "# CONFIGURATION # For using instances" section, where you can add or remove any file destinations you'd like.
Be thorough, though, since the script stops at the first file it finds and ignores the others; read more here:
Unix configuration
Happy coding!
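A minimal sketch of the environment-variable route; the paths here are assumptions for illustration:

# Point ACTIVEMQ_CONF at the directory that holds activemq.xml, jetty.xml, etc.
export ACTIVEMQ_CONF=/home/user/activemq-conf
bin/activemq start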
Trying to configure Jenkins CI. Currently I'm just running it from the .war (the eventual intention is to run it as a service). Jenkins is aware of the CVS executable (i.e. it will read the version [Concurrent Versions System (CVSNT) 2.0.62.1817 (client/server)]).
The .cvspass is not specified, because .cvspass files apparently do not play nice with CVSNT (which prefers to keep passwords in the registry). I've specified the password in the job config by using the :pserver:user:pass@server:/dir pattern for CVSROOT, which I found suggested in some places. Regardless of whether I run with that or with :pserver:user@server:/dir as the CVSROOT, I get the blinking red ball, with Jenkins stuck at a nearly full progress bar for two and a half minutes. It then fails. The console output yells with something like:
FATAL: hudson.scm.ChangeLogSet.iterator()Ljava/util/Iterator;
java.lang.AbstractMethodError: hudson.scm.ChangeLogSet.iterator()Ljava/util/Iterator;
at hudson.model.AbstractBuild.getCulprits(AbstractBuild.java:282)
at hudson.model.AbstractBuild.getCulprits(AbstractBuild.java:279)
at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:596)
at hudson.model.Run.run(Run.java:1400)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:175)
Both CVSROOTs I'm using cause no trouble in TortoiseSVN. I've found some mention of difficulties logging into SVN from Jenkins when it runs as a service, and related user/system issues, but considering I'm running it from the .war I don't think that's the issue.
EDIT:
Interestingly, if I use an invalid user or password, the console log recognizes it:
cvs [checkout aborted]: authorization failed: server rejected access to /dir for user FOO
FATAL: CVS failed. exit code=1
Finished: FAILURE
which indicates that Hudson is talking to the CVS server and authenticating, but something else goes wrong.
/EDIT
Cheers
The answer to the question was found thanks to rpetti on #jenkins on freenode. The problem was that I had switched between Hudson and Jenkins, and some incompatible configuration files were mucking things up. Deleting and recreating the home directory solved the problem.
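For reference, a sketch of that reset when running from the .war; ~/.jenkins as the default home directory is an assumption (set JENKINS_HOME if yours lives elsewhere), and backing it up is safer than deleting it outright:

# Move the existing home directory aside instead of deleting it
mv ~/.jenkins ~/.jenkins.bak
# Restart from the .war; Jenkins creates a fresh ~/.jenkins on startup
java -jar jenkins.war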
CVSNT 2.0.62.1817 is very very old and has several known security issues. Please upgrade to the latest 2.8.01.