Attach custom Logback appenders after configuration reload - logback

We have been migrating from Log4j 1 to Logback, and we have a few custom appenders like the one below:
public class CustomAppender extends AppenderBase<ILoggingEvent> implements Runnable {}
These are attached to ch.qos.logback.classic.Logger via the addAppender method when the application starts, roughly as in the sketch below. All of these programmatically added appenders are removed whenever the Logback configuration file (logback.xml) changes. How can we re-attach those appenders to the Logger without restarting the application?
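For reference, the attachment at startup looks roughly like this (a simplified sketch; the installer class, appender name, and target logger are illustrative):

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public final class CustomAppenderInstaller {

    // Simplified sketch of how we attach CustomAppender at startup.
    public static void install() {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        Logger rootLogger = context.getLogger(Logger.ROOT_LOGGER_NAME);

        CustomAppender appender = new CustomAppender();
        appender.setContext(context);
        appender.setName("CUSTOM");
        appender.start();

        rootLogger.addAppender(appender);
    }
}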
Thanks

Related

What can cause flyway to not auto-detect migrations at startup, but then be able to do so at runtime?

Question:
What can cause flyway to not auto-detect migrations from the default path, and prevent resolution of migrations from a custom location, during startup only?
Given the following:
io.micronaut.flyway:micronaut-flyway uses flyway 6.4.4
Flyway, when run at application startup by micronaut, is unable to auto-detect migrations
Flyway, when run during bean initialization (i.e. in the constructor of the controller bean), is able to auto-detect migrations
Flyway is able to pick up and apply the migrations at startup during integration testing. This gives me confidence that it is configured correctly; I can break it in expected ways by messing with the config / file location.
The migration file is certainly on the classpath at runtime in prod, at the expected location, as evidenced by runtime logs.
Context
I want to set up Flyway migrations for my Kotlin/Micronaut Google Cloud Function. As described in the docs, I have my migrations under src/main/resources/db/migration, named like V1__create_xyz_table.sql.
I verified that the migration is on the classpath at runtime, by logging it in the function body:
val fileContent = FunctionController::class.java.getResource("/db/migration/V1__create_xyz_table.sql").readText()
println(fileContent) // "create table xyz(id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY)"
This works, and logs the contents of the file to stdout as expected.
My integration tests run fine. Migrations are automatically detected and applied to the mysql-testcontainer instance. Data is written to and read from the dockerized DB.
However, when I start the application locally, or deploy it, the application warns me:
No migrations found. Are your locations set up correctly?
Unsurprisingly, triggering the function results in errors like "Table xyz does not exist".
Besides the actual db-credentials, my test and production setup share the following config:
# application.yml
datasources:
  mysql:
    url: <url>
    username: <user>
    password: <pw>
flyway:
  datasources:
    mysql:
      enabled: true
Other things I have tried:
Using a Java-based migration (same result)
Using the custom locations config (same result); roughly the snippet shown below
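For completeness, the locations override I tried looked roughly like this (a sketch; it assumes micronaut-flyway passes Flyway's standard locations setting through, and the path shown is just the default one):

flyway:
  datasources:
    mysql:
      enabled: true
      locations: classpath:db/migration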
What "works":
When I autowire the datasource into the function controller and apply the migrations inside the constructor, it works: Successfully validated 1 migration.
init {
    Flyway.configure().dataSource(mysqlDS).load().migrate()
}
This confirms that all the necessary files are present and discoverable by Flyway. Why would this not work during application startup?
I attached a debugger and found that different ClassLoaders are used to discover the resources:
During startup: AppClassLoader
During function execution: FunctionClassLoader
I was having the same issue. Debugging, I realized that the method that triggers the Flyway migration wasn't running. This method lives inside a Micronaut BeanCreatedEventListener which listens for the creation of DataSource-type beans.
What left me scratching my head was that the DataSource bean was created successfully, which I confirmed at runtime by fetching it from the application context. So why wasn't the event listener triggering?
This is because the bean was being created before the event listener was even initialized. Why was this happening? Because I had another custom event listener in my app that injected a Jdbi bean, and the Jdbi bean in turn injected the DataSource bean. This means my custom event listener was (indirectly) injecting the DataSource bean, so it was impossible for the Flyway listener to be initialized before the bean was created.
I suggest setting a breakpoint in that listener method to check whether it's being triggered. If it's not, it's possible the cause of your issue is similar to mine.
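For reference, here is a minimal Java sketch of the kind of listener involved (the class name is illustrative, it assumes a Micronaut 2.x-era javax.inject.Singleton, and micronaut-flyway ships its own equivalent internally, so this only shows the shape):

import javax.inject.Singleton;
import javax.sql.DataSource;

import io.micronaut.context.event.BeanCreatedEvent;
import io.micronaut.context.event.BeanCreatedEventListener;
import org.flywaydb.core.Flyway;

@Singleton
public class FlywayOnDataSourceCreated implements BeanCreatedEventListener<DataSource> {

    @Override
    public DataSource onCreated(BeanCreatedEvent<DataSource> event) {
        DataSource dataSource = event.getBean();
        // Run the migrations as soon as the DataSource bean has been created.
        Flyway.configure().dataSource(dataSource).load().migrate();
        return dataSource;
    }
}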

Apache Commons Logging with Logback

I have a Java web application that was using Log4j 1.x, which I migrated to slf4j using logback. I have a logback.xml file that includes my appenders, which are used by slf4j log statements such as the following:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
private final static Logger logger = LoggerFactory.getLogger(MyClass.class);
logger.info("slf4j using logback message");
However, I have some commons-logging in the code as well, declared as follows:
import org.apache.commons.logging.LogFactory;
import org.apache.commons.logging.Log;
private static Log log = LogFactory.getLog(MyClass.class);
log.info("commons logging message");
To my surprise, these statements also use the appenders declared in logback.xml (based on matching formatting in the logs). Why would this be? Does Apache Commons Logging look at logback.xml?
At first I thought this had to do with the fact that jcl-over-slf4j is brought in as a transitive dependency in my pom.xml. But even after I exclude it, commons-logging still appears to be using logback.xml. Is this expected? If so, is it possible to declare an appender in logback.xml that specifically formats statements coming through commons-logging?
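One way to check which JCL bridge actually remains on the classpath after the exclusion (assuming a Maven build, as the pom.xml suggests) is a dependency tree query, for example:

mvn dependency:tree -Dincludes=org.slf4j:jcl-over-slf4j,commons-logging:commons-logging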

Can I avoid redefining JNDI datasource when using JUnit?

My Java EE 7 app, which uses Spring, runs on Tomcat 7. It accesses a database by using a JNDI datasource, defined by this line in context.xml:
<Resource auth="Container" driverClassName="org.postgresql.Driver" maxActive="100" maxIdle="30" maxWait="10000" name="jdbc/leadmanager" password="xxxxxxxx" type="javax.sql.DataSource" url="jdbc:postgresql://localhost:5432/leadmanager" username="postgres"/>
I created some JUnit tests. When I tried to run them (in Eclipse, by right-clicking the test class and selecting Run As | JUnit Test), an exception occurred:
javax.persistence.PersistenceException: [PersistenceUnit: leadmanager] Unable to build EntityManagerFactory
...
Caused by: org.hibernate.service.jndi.JndiException: Error parsing JNDI name [java:/comp/env/jdbc/leadmanager]
...
Caused by: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
...
Thanks to this helpful post -- https://blogs.oracle.com/randystuph/entry/injecting_jndi_datasources_for_junit -- I found a solution. I added this to my test class:
@BeforeClass
public static void setUpClass() throws Exception {
    // create initial context
    System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.naming.java.javaURLContextFactory");
    System.setProperty(Context.URL_PKG_PREFIXES, "org.apache.naming");
    InitialContext ic = new InitialContext();

    ic.createSubcontext("java:");
    ic.createSubcontext("java:/comp");
    ic.createSubcontext("java:/comp/env");
    ic.createSubcontext("java:/comp/env/jdbc");

    PGPoolingDataSource ds = new PGPoolingDataSource();
    ds.setServerName("localhost:5432/leadmanager");
    ds.setUser("postgres");
    ds.setPassword("xxxxxxxx");

    ic.bind("java:/comp/env/jdbc/leadmanager", ds);
}
But that's hideous! I'm forced to define my datasource twice, once in context.xml and again in my test class. And I'm forced to store my database password in Java code that's going to be checked in to source control.
I've already consulted this post, as well: Setting up JNDI Datasource in jUnit
Is there a better way?
The reason is that your test is not running inside Tomcat, but in a separate JVM instance.
Try building your unit test with Arquillian: this tool packages your test as a simple web application, which is executed within your Tomcat. The result is that everything that is accessible in Tomcat will be accessible in your tests, including resources.
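A rough sketch of what such a test might look like (a sketch only: the package, archive name, and assertion are illustrative, and the Tomcat container adapter configuration and the packaging of context.xml are omitted):

import javax.naming.InitialContext;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class LeadManagerDataSourceIT {

    @Deployment
    public static WebArchive createDeployment() {
        // Arquillian deploys this archive into the container, so container-managed
        // JNDI resources (e.g. jdbc/leadmanager from context.xml) are available.
        return ShrinkWrap.create(WebArchive.class, "leadmanager-test.war")
                .addPackages(true, "com.example.leadmanager"); // illustrative package
    }

    @Test
    public void dataSourceIsBoundInJndi() throws Exception {
        Object ds = new InitialContext().lookup("java:comp/env/jdbc/leadmanager");
        Assert.assertNotNull(ds);
    }
}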
You can also use TomcatJNDI. It reads Tomcat's configuration files and creates a JNDI environment as in your web application. It is as easy as this code example:
TomcatJNDI tomcatJNDI = new TomcatJNDI();
tomcatJNDI.processContextXml(contextXmlFile);
tomcatJNDI.start();
Now you can access all the resources you have declared in Tomcat's configuration files.
Find out more about it here.

How do I get log4J to work - I'm getting "package org.apache.log4j does not exist"

I know this may be a newbie question, but I'm having issues with setting up Log4j:
I want to run a log4j demo, and here's my code:
import org.apache.log4j.Logger;
import org.apache.log4j.BasicConfigurator;

public class HelloLOG4j {

    private static final Logger logger = Logger.getLogger(HelloLOG4j.class);

    public static void main(String[] argv) {
        BasicConfigurator.configure();
        logger.debug("Hello world.");
        logger.info("What a beautiful day.");
    }
}
I set my Classpath:
C:\Users\Adel\Downloads\apache-log4j-1.2.17\log4j-1.2.17.jar
in both the System and User variables.
But when I run my program I still get
errors found:
File: C:\Users\Adel\Desktop\various_topics\JavaProjects\HelloLOG4j.java [line: 2]
Error: package org.apache.log4j does not exist
I know that I set the classpath right; if I run from the command line:
C:\Program Files\Java\jdk1.6.0_20>print %LOG4J_HOME%
C:\Users\Adel\Downloads\apache-log4j-1.2.17\log4j-1.2.17.jar is currently being printed
You need to add LOG4J_HOME to the classpath, as the JVM needs the path to the log4j classes.
If on Windows, you can use
set classpath=%classpath%;%LOG4J_HOME%
On Linux/Ubuntu (much better than Windows for development & servers):
export CLASSPATH=$CLASSPATH:$LOG4J_HOME
Then run your app after adding any other paths to the classpath, like
set classpath=%classpath%;c:\users\adel\....
You do not need to add LOG4J_HOME again, as %classpath% expands to the current classpath.
LOG4J_HOME is not known to Java. It is just used by log4j in case of auto config/default config.
On a side note, try using the new Log4j 2!
Can you show how you are trying to compile the code?
Also, try adding the log4j JAR to a 'lib' directory and compiling with the classpath referencing that JAR, for example:
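A rough sketch on Windows, using the JAR path from the question (quote the classpath if it contains spaces):

javac -cp .;C:\Users\Adel\Downloads\apache-log4j-1.2.17\log4j-1.2.17.jar HelloLOG4j.java
java -cp .;C:\Users\Adel\Downloads\apache-log4j-1.2.17\log4j-1.2.17.jar HelloLOG4j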
Just a reminder: don't capitalize the log4j package name, unlike Logger:
import org.apache.Log4j.Logger; //typo
import org.apache.log4j.Logger; //correct
The /usr/share/java/log4j-1.2-api-2.8.2.jar path can be located by issuing the dpkg -L liblog4j2-java command (Debian-based), then do:
$ sudo javac -cp .:xxx.jar:/usr/share/java/log4j-1.2-api-2.8.2.jar xxx.java

Unable to setup Hadoop in pseudo mode

I have set up Hadoop on my computer in pseudo-distributed mode.
I followed the directions in Appendix A of the 'Hadoop: The Definitive Guide' book to set up Hadoop in pseudo-distributed mode.
However, from the output of the following program, it is safe to infer that my Hadoop is running in standalone mode (i.e. local mode).
public static void main(String[] args) {
    Configuration conf = new Configuration();
    System.out.println(conf);
    System.out.println(conf.get("fs.default.name"));
}
Output:
Configuration: core-default.xml, core-site.xml
file:///
The output is file:/// instead of hdfs://localhost. However, the properties in core-site.xml are properly set:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>
Also, when I submit a test job from Eclipse, it doesn't show up in the JobTracker web UI. I read somewhere that this is because Hadoop is running in local mode.
Please let me know what's wrong in my configuration and how I can enable pseudo-distributed mode. Why am I not able to override the fs.default.name property from the default XML with the value I specified in core-site.xml?
How are you launching the program? If you're not using the bin/hadoop script then the configuration files in conf/*.xml will not be on the classpath, and hence any values in them will be ignored.
You should also use the ToolRunner launcher:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJobDriver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new MyJobDriver(), args);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf());
        Configuration conf = job.getConfiguration();
        System.out.println(conf);
        System.out.println(conf.get("fs.default.name"));
        return 0;
    }
}
Some other points to note from this code:
Remember to create your Job with the Configuration provided by getConf() - this allows you to use the Generic Options Parser to parse out some common command-line switches (-files, -jt, -fs, -Dkey=value etc.)
If you need the Configuration to set some custom parameters, get the job's copy using job.getConfiguration(), as Job makes a deep copy when you construct it, and any changes to the original will not be applied when your job runs
Then ensure your job is run using the bin/hadoop script:
#> bin/hadoop jar MyApp.jar a.b.c.MyAppDriver
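For example (a sketch using the hdfs://localhost value from core-site.xml), the generic -fs switch mentioned above can override the filesystem from the command line:

#> bin/hadoop jar MyApp.jar a.b.c.MyAppDriver -fs hdfs://localhost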
If you're launching from Eclipse, ensure the $HADOOP_HOME/conf folder is on the classpath; that will ensure the XML conf files are picked up when the Configuration object is created by the ToolRunner.