Apache Flink: logback configuration ignored

I have a Flink job in which I am using logback as my logging framework. Apart from the file and console appenders, I am also using the logstash-logback-appender to send my logs to a Logstash instance.
If I run the Flink job from Eclipse, the logs are sent to the specified Logstash server.
I can also see the logs being sent to Logstash if I package the application as a jar and run it outside Eclipse.
However, if I run the Flink application as a job (by uploading that same jar) from the Flink dashboard, the logs are not sent to Logstash.
My Flink setup runs on Windows per the instructions here: Running flink on windows. I start Flink using start-cluster.bat.
I think the logback configuration is being ignored. I have placed the logback configuration at src/main/resources in my application. How can I get the Flink setup to recognize the logback configuration?
I have tried the steps mentioned in Best practices. Are those steps for replacing log4j with logback meant for the jobmanager and taskmanager logs, or for the application logs?

Try adding the logstash-logback-encoder lib to Flink's lib/ folder along with the logback jars.
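For reference, the kind of logback.xml the cluster needs to see looks roughly like the sketch below. The destination host/port and appender name are assumptions to adapt, and the file has to sit where Flink's own runtime reads its logging configuration (e.g. the conf/ directory of the Flink installation), not just in your jar's src/main/resources:

```xml
<configuration>
  <!-- assumed Logstash TCP input; replace host/port with your own -->
  <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash-host:5044</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>

  <root level="INFO">
    <appender-ref ref="logstash"/>
  </root>
</configuration>
```

Since the appender classes are then loaded by Flink itself rather than by your job jar, logstash-logback-encoder (and its Jackson dependencies) must be in lib/ next to the logback jars.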

Related

How can I display log output from junit to teamcity log

I am writing tests using JUnit to run in a TeamCity pipeline. For logging I am using org.apache.log4j.Logger.info(), but I cannot find the log output in the TeamCity build log. It works without any problem in the console.
Do I have to use something else for TeamCity?

Spring XD: Using log4j with logback

I have a collection of XD job modules that all use log4j logging. I have recently upgraded to Spring XD 1.3.1 and my modules are no longer logging.
I have tried adding my packages to the xd-singlenode-logback.groovy configuration file. This has no effect.
I have created a dummy module using slf4j, which logs correctly.
I have tried to find any information on log4j and logback compatibility, but haven't found a definitive answer.
Do I have to switch log4j out for slf4j, or is there something I am missing?
Do you have log4j on the classpath?
xd/lib contains log4j-over-slf4j-1.7.12.jar which should convert your log4j calls to slf4j which in turn calls logback.
You should not have the real log4j on the classpath for this to work properly, though.
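A sketch of what that means for a Maven build, assuming your module uses Maven (the bridge artifact coordinates are the standard ones; example.group/some-dependency is a placeholder for whatever pulls log4j in transitively):

```xml
<dependencies>
  <!-- routes legacy org.apache.log4j.* calls onto slf4j, which logback then handles -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>1.7.12</version>
  </dependency>

  <!-- keep the real log4j off the classpath wherever it arrives transitively -->
  <dependency>
    <groupId>example.group</groupId>
    <artifactId>some-dependency</artifactId>
    <version>1.0</version>
    <exclusions>
      <exclusion>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```

With both the bridge and the real log4j present, the two provide competing copies of the org.apache.log4j package, which is why the exclusion matters.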

Problems uploading the logstash 1.1.15 monolithic jar as a Cloud Foundry app

When I run vmc update logstash, I get errors:
Uploading Application:
Checking for available resources: OK
Processing resources: OK
Packing application: OK
Uploading (55M): OK
HTTP exception: RestClient::RequestTimeout:Request Timeout
Has anyone successfully pushed logstash to Cloud Foundry (not Micro)?
Logstash is a standalone app (jar) which requires JRuby. Cloud Foundry supports JRuby within a war, but not yet as a standalone runtime environment.

How can I add websphere admin jars to Hudson's class path?

I am trying to use Hudson's Deploy Websphere plug-in to deploy my artifacts to a remote WebSphere server.
From the plug-in documentation, I need to do this:
The following WAS JAR files need to be placed into the Hudson class path or dropped into the %project.basedir%/WEB-INF/lib/ directory. These JAR files can be copied from the %WAS_HOME%/runtimes/ directory of your WAS server installation.
com.ibm.ws.admin.client_6.1.0
com.ibm.ws.webservices.thinclient_6.1.0
I have installed Hudson as a Windows service. How can I add these jars to Hudson's class path?
According to Hudson's documentation:
Changing the configuration of services
The JVM launch parameters of these Windows services are controlled by XML files, hudson.xml and hudson-slave.xml respectively. These files can be found in $HUDSON_HOME and in the slave root directory respectively, after you've installed them as Windows services.
The file format should be self-explanatory. Tweak the arguments, for example, to give the JVM more memory.
Stdout and stderr from the service processes go to log files in the same directory.
So, it appears you can manipulate the service's JVM classpath using the hudson.xml file.
HTH
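For illustration, a hudson.xml along these lines is what you would edit. One caveat worth knowing: java -jar ignores -cp, so simply appending a -cp flag to the default arguments will not work; you would either launch the war via an explicit classpath and its main class (assumed to be Main here; check the war's manifest), or take the plug-in's other suggestion and drop the jars into the WEB-INF/lib directory. Paths and memory settings below are assumptions:

```xml
<service>
  <id>hudson</id>
  <name>Hudson</name>
  <description>Runs Hudson as a Windows service.</description>
  <executable>java</executable>
  <!-- explicit classpath instead of "-jar hudson.war", so extra jars can be added -->
  <arguments>-Xrs -Xmx256m -cp "%BASE%\hudson.war;%BASE%\lib\com.ibm.ws.admin.client_6.1.0.jar;%BASE%\lib\com.ibm.ws.webservices.thinclient_6.1.0.jar" Main --httpPort=8080</arguments>
</service>
```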

Hudson + JUnit + embedded GlassFish, how to provide domain configuration?

I'm using NetBeans and GlassFish 3.0.1 to create an EJB3 application. I have written a few Unit Tests, which get run via JUnit and make use of the embedded GlassFish. Whenever I run these tests on my development machine (so from within NetBeans), it's all good.
Now I would like to let Hudson run those tests. At the moment it is failing with a lookup failure on a resource (in this case the datasource for a JPA persistence unit):
[junit] SEVERE: Exception while invoking class org.glassfish.persistence.jpa.JPADeployer prepare method
[junit] java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'mvs_devel' in SerialContext
After searching around and trying to learn about this, I believe it is related to the embedded GlassFish not having been configured with resources. In other words it's missing a domain.xml file. Right?
Two questions:
Why does it work with NetBeans on my dev box? What magic does NetBeans do in the background?
How should I provide the file? Where does the embedded GlassFish on the Hudson-box expect it?
Hudson is using the same Ant build-scripts (created by NetBeans).
I've read this post about instanceRoot and the EmbeddedFileSystemBuilder, but I don't understand enough of that. Is this needed for every TestCase (Emb. GF gets started/stopped for each bean-under-test)? Is this part of EJBContainer.createEJBContainer()? Again, why is it not necessary to do this when running tests on NetBeans?
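For context, the resource the NamingException complains about would normally be declared in the domain configuration roughly as below. Only the JNDI name mvs_devel comes from the error message; the pool name, datasource class, and properties are placeholders to replace with your actual database settings:

```xml
<resources>
  <!-- connection pool backing the datasource (driver and properties are placeholders) -->
  <jdbc-connection-pool name="mvs_devel-pool"
      res-type="javax.sql.DataSource"
      datasource-classname="org.apache.derby.jdbc.ClientDataSource">
    <property name="serverName" value="localhost"/>
    <property name="databaseName" value="mvs_devel"/>
  </jdbc-connection-pool>
  <!-- the JNDI name the persistence unit looks up -->
  <jdbc-resource jndi-name="mvs_devel" pool-name="mvs_devel-pool"/>
</resources>
```

Depending on the GlassFish version, a matching resource-ref under the server element may also be needed for the resource to be visible.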
Update
Following Peter's advice, I can confirm: when running ant on a freshly checked-out copy of the code, with the same properties Hudson is configured with, the tests get executed!
Ten to one it is a classpath issue, as IDEs tend to swap paths in and out depending on whether you run normally or run unit tests.
Try running the tests on the command line from a freshly checked-out version from your SCM. Chances are you'll get the same error. Debugging on your local machine is a lot easier than on a remote machine.
When it builds reliably on the command line (in a separate directory), then it is time to move to Hudson.