Output Spark application ID in the logs with Log4j (JSON)

I have a custom Log4j file for the Spark application. I would like to output the Spark app ID along with other attributes like message and date, so the JSON string structure would look like this:
{"name":,"time":,"date":,"level":,"thread":,"message":,"app_id":}
Currently, the structure looks like this:
{"name":,"time":,"date":,"level":,"thread":,"message":}
How can I define such a layout for the Spark driver logs?
My log4j file looks like this:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'>
<appender name="Json" class="org.apache.log4j.ConsoleAppender">
<layout class="org.apache.hadoop.log.Log4Json">
<param name="ConversionLayout" value=""/>
</layout>
</appender>
<root>
<level value="INFO"/>
<appender-ref ref="Json"/>
</root>
</log4j:configuration>

I doubt that org.apache.hadoop.log.Log4Json can be adjusted for this purpose. According to its Javadoc and source code, it would be rather cumbersome.
Although it looks like you are using Log4j 1.x, its API is quite flexible and we can easily define our own layout by extending org.apache.log4j.Layout.
We'll need a case class that will be transformed into JSON according to the target structure:
case class LoggedMessage(name: String,
                         appId: String,
                         thread: String,
                         time: Long,
                         level: String,
                         message: String)
The Layout can then be extended as follows. To access the value of "app_id", we use Log4j's Mapped Diagnostic Context (MDC):
import org.apache.log4j.Layout
import org.apache.log4j.spi.LoggingEvent
import org.json4s.DefaultFormats
import org.json4s.native.Serialization.write

class JsonLoggingLayout extends Layout {

  // required by the API
  override def ignoresThrowable(): Boolean = false

  // required by the API
  override def activateOptions(): Unit = { /* nothing */ }

  override def format(event: LoggingEvent): String = {
    // we are using json4s for JSON serialization
    implicit val formats = DefaultFormats

    // retrieve app_id from Mapped Diagnostic Context
    val appId = event.getMDC("app_id") match {
      case null => "[no_app]" // logged messages outside our app
      case defined: AnyRef => defined.toString
    }

    val message = LoggedMessage("TODO",
                                appId,
                                Thread.currentThread().getName,
                                event.getTimeStamp,
                                event.getLevel.toString,
                                event.getMessage.toString)
    write(message) + "\n"
  }
}
Finally, when the Spark session is created, we put the app_id value into MDC:
import org.apache.log4j.{Logger, MDC}
import org.apache.spark.sql.SparkSession

// create the Spark session and a logger (builder options omitted here)
val session = SparkSession.builder().getOrCreate()
val logger = Logger.getLogger(getClass)

MDC.put("app_id", session.sparkContext.applicationId)
logger.info("-------- this is info --------")
logger.warn("-------- THIS IS A WARNING --------")
logger.error("-------- !!! ERROR !!! --------")
This produces the following logs:
{"name":"TODO","appId":"local-1550247707920","thread":"main","time":1550247708149,"level":"INFO","message":"-------- this is info --------"}
{"name":"TODO","appId":"local-1550247707920","thread":"main","time":1550247708150,"level":"WARN","message":"-------- THIS IS A WARNING --------"}
{"name":"TODO","appId":"local-1550247707920","thread":"main","time":1550247708150,"level":"ERROR","message":"-------- !!! ERROR !!! --------"}
And, of course, do not forget to reference the implementation in the log4j config XML:
<appender name="Json" class="org.apache.log4j.ConsoleAppender">
<layout class="stackoverflow.q54706582.JsonLoggingLayout" />
</appender>

Related

How to configure different levels for different appenders under the same logger in logback

We have 2 appenders that write logs in different formats to 2 different files.
But we want to enable these logs based on some configuration.
So if the user wants to enable both formats, then both logs will be printed. But if the user wants to disable one, that log should not be created.
Below is my logger configuration:
<logger name="package.name" additivity="false" level="DEBUG">
  <appender-ref ref="STDOUT" />
  <appender-ref ref="json_logs"/>
  <appender-ref ref="text_logs"/>
</logger>
Now I want to set separate levels for these appender-refs, and the values of these levels should either come from some property file that the user edits, or the user could simply update the logback.xml file.
I am not able to find a way to set separate levels for these appenders.
And since I have to write logs from the same classes, I can't create 2 separate loggers either.
Also, if the user does not want to see txt logs, then the corresponding log.txt file should not be created.
What you are after is the ThresholdFilter, which can be fine-tuned for each appender:
http://logback.qos.ch/manual/filters.html#thresholdFilter
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<!-- deny all events with a level below INFO, that is TRACE and DEBUG -->
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
...
</appender>
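Applied to your configuration, that means keeping the single DEBUG logger and giving each appender its own ThresholdFilter; a minimal sketch (the appender classes, file names and thresholds here are just placeholders):
<appender name="json_logs" class="ch.qos.logback.core.FileAppender">
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>INFO</level>
  </filter>
  <file>logs/app.json</file>
  ...
</appender>
<appender name="text_logs" class="ch.qos.logback.core.FileAppender">
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>DEBUG</level>
  </filter>
  <file>logs/app.txt</file>
  ...
</appender>
<logger name="package.name" additivity="false" level="DEBUG">
  <appender-ref ref="json_logs"/>
  <appender-ref ref="text_logs"/>
</logger>
If the levels need to come from a property file, logback's variable substitution (a <property file="..."/> element plus ${...} placeholders inside <level>) can supply them.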
As for not writing the file at all based on some flag... you could write your own appender. Something like this ultra-simplistic example that could be refined in a number of ways but should hopefully get you started:
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import org.apache.commons.io.output.NullOutputStream; // assumption: any no-op OutputStream works here
import ch.qos.logback.core.OutputStreamAppender;

public class ConditionalAppender<E> extends OutputStreamAppender<E> {

    private boolean masterSwitch;
    private String file;

    @Override
    public void start() {
        try {
            // either write to the configured file or swallow everything
            setOutputStream(masterSwitch ? new FileOutputStream(file) : new NullOutputStream());
            super.start();
        } catch (FileNotFoundException e) {
            addError("Could not open log file " + file, e);
        }
    }

    public void setMasterSwitch(final boolean enabled) {
        this.masterSwitch = enabled;
    }

    public void setFile(final String file) {
        this.file = file;
    }
}
<appender name="CONDITIONAL" class="mypackage.ConditionalAppender">
<!-- switch this appender on or off -->
<masterSwitch>true</masterSwitch>
<!-- set the output file -->
<file>/var/log/app/app.log</file>
<!-- deny all events with a level below INFO, that is TRACE and DEBUG -->
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
...
</appender>
Hope this helps.

Fuse IDE: how to define a database table endpoint

I have heard a lot of integration success stories when it comes to Apache Camel with Fuse. Hence, I am just starting to explore the Fuse IDE, with a simple task in mind that I would like to achieve:
Read a fixed-length file
Parse the fixed-length file
Persist it to a MySQL database table
I am only able to get as far as:
Read the fixed-length file (with the endpoint "file:src/data/Japan?noop=true")
Define a marshal with Bindy and define a POJO model with the @FixedLengthRecord annotation
Then I am stuck... How do I persist the POJO into a MySQL database table? I can see some JDBC, iBatis and JPA endpoints, but how do I accomplish that in Fuse IDE?
My POJO package:
package com.mbww.model;

import org.apache.camel.dataformat.bindy.annotation.DataField;
import org.apache.camel.dataformat.bindy.annotation.FixedLengthRecord;

@FixedLengthRecord(length=91)
public class Japan {

    @DataField(pos=1, length=10)
    private String TNR;

    @DataField(pos=11, length=10)
    private String ATR;

    @DataField(pos=21, length=70)
    private String STR;
}
Well you can use all of the following components to actually read and write from the database:
JDBC
IBATIS
MyBATIS
SPRING-JDBC
SQL
Custom Processor
I am going to show you how to use a custom processor to insert the rows into a table. The main reason for this is that you will get to work with the messages and the exchange, and this will give you more of an insight into Camel. All of the other components can be used by following the documentation on the Camel site.
So let's review what you have. You are reading the file and converting the body to a Bindy object. For each line in your text file, Camel will send a Bindy object of class com.mbww.model.Japan to the next endpoint. This next endpoint needs to talk to the database. There is one problem I can spot immediately: you are using marshal where you should be using unmarshal.
The documentation clearly states: If you receive a message from one of the Camel Components such as File, HTTP or JMS you often want to unmarshal the payload into some bean so that you can process it using some Bean Integration or perform Predicate evaluation and so forth. To do this use the unmarshal word in the DSL in Java or the Xml Configuration.
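In the XML DSL that unmarshal step might look roughly like this (a sketch only; the exact Bindy data format attributes vary between Camel versions):
<unmarshal>
  <bindy type="Fixed" packages="com.mbww.model"/>
</unmarshal>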
Your Bindy class looks good, but it is missing getters and setters. Modify the class to look like this:
package com.mbww.model;

import org.apache.camel.dataformat.bindy.annotation.DataField;
import org.apache.camel.dataformat.bindy.annotation.FixedLengthRecord;

@FixedLengthRecord(length=91)
public class Japan {

    @DataField(pos=1, length=10)
    private String TNR;

    @DataField(pos=11, length=10)
    private String ATR;

    @DataField(pos=21, length=70)
    private String STR;

    public String getTNR() {
        return TNR;
    }

    public void setTNR(String tNR) {
        TNR = tNR;
    }

    public String getATR() {
        return ATR;
    }

    public void setATR(String aTR) {
        ATR = aTR;
    }

    public String getSTR() {
        return STR;
    }

    public void setSTR(String sTR) {
        STR = sTR;
    }
}
First you need a connection to your database from your route. The first step is to add the MySQL driver JAR to your Maven dependencies: open your pom.xml file and add the following dependency to it.
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <!-- use this version of the driver or a later version of the driver -->
  <version>5.1.25</version>
</dependency>
Now we need to declare a custom processor to use in the route; it will use this driver and insert the received body into a table.
So let's create a new class in Fuse IDE called PersistToDatabase, code below:
package com.mbww.JapanData;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

import org.apache.camel.Body;
import org.apache.camel.Exchange;
import org.apache.camel.Handler;
import org.apache.camel.Headers;

import com.mbww.model.Japan;

public class PersistToDatabase {

    @Handler
    public void PersistRecord(@Body Japan msgBody, @Headers Map hdr, Exchange exch) throws Exception {
        try {
            Class.forName("com.mysql.jdbc.Driver");
        } catch (ClassNotFoundException e) {
            System.out.println("Where is your MySQL JDBC Driver?");
            e.printStackTrace();
            return;
        }
        System.out.println("MySQL JDBC Driver Registered!");

        Connection connection = null;
        try {
            connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/databasename", "root", "password");
        } catch (SQLException e) {
            System.out.println("Connection Failed! Check output console");
            e.printStackTrace();
            return;
        }

        if (connection != null) {
            System.out.println("You made it, take control of your database now!");
        } else {
            System.out.println("Failed to make connection!");
        }

        try {
            PreparedStatement stmt = connection.prepareStatement("INSERT INTO JapanDate(TNR,ATR,STR) VALUES(?,?,?)");
            stmt.setString(1, msgBody.getTNR());
            stmt.setString(2, msgBody.getATR());
            stmt.setString(3, msgBody.getSTR());
            int rows = stmt.executeUpdate();
            System.out.println("Number of rows inserted: " + Integer.toString(rows));
        } catch (Exception e) {
            System.out.println("Error in executing sql statement: " + e.getMessage());
            throw new Exception(e.getMessage());
        }
    }
}
This class is a POJO, nothing fancy except the @Handler annotation on PersistRecord. This annotation tells Camel that the PersistRecord method will handle the message exchange. You will also notice that the method PersistRecord has a parameter of type Japan. As mentioned earlier, when you call the conversion bean in your Camel route, it translates each line into a Japan object and passes it along the route.
The rest of the code is just how to handle the JDBC connection and call an insert statement.
We are almost done; just one last thing to do. We need to declare this class in our Camel route XML. This file will typically be called camel-route.xml or blueprint.xml, depending on your archetype. Open the source tab and add the following line before the <camelContext> tag: <bean id="JapanPersist" class="com.mbww.JapanData.PersistToDatabase"/>
This declares a new bean called JapanPersist, based on the class we just added, in the Camel route XML. You can now reference this bean inside your Camel route.
Thus the final route xml file should look something like this:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:camel="http://camel.apache.org/schema/blueprint"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">
<bean id="JapanPersist" class="com.mbww.JapanData.PersistToDatabase"/>
<camelContext trace="false" id="blueprintContext" xmlns="http://camel.apache.org/schema/blueprint">
<route id="JapanDataFromFileToDB">
<from uri="file:src/data/japan"/>
<unmarshal ref="Japan"/>
<bean ref="JapanPersist"/>
</route>
</camelContext>
</blueprint>
Once you understand this technique, you can start scaling the solution by using a splitter, connection pooling and threading to do massive amounts of concurrent inserts, etc.
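For instance, splitting the file line by line and inserting in parallel might look roughly like this (a sketch only, untested; it assumes the "Japan" data format can unmarshal a single line):
<route id="JapanDataFromFileToDBParallel">
  <from uri="file:src/data/japan"/>
  <split streaming="true" parallelProcessing="true">
    <tokenize token="\n"/>
    <unmarshal ref="Japan"/>
    <bean ref="JapanPersist"/>
  </split>
</route>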
Using the technique above, you learned how to inject your own beans into a Camel route, which gives you the ability to work with the messages directly in code.
I have not tested the code so there will probably be a bug or two but the idea should be clear.

How to log exceptions with network targets in NLog

I am using the NLog logging framework and am trying to get exception and stack trace information showing up in a UDP logger application, such as Sentinel or Log2Console, but I can only get the log message part displayed. Outputting to a file works well, as most examples do just that, so the problem revolves around using network targets with NLog.
Bonus if a custom format can be applied to inner exceptions and the stack trace, but this is not required. Exception.ToString() would go a long way.
Note on the example code: with Log2Console I found an article on how to send the exception as a separate log entry. Although this worked, I was not happy with the solution.
Example exception logging code:
Logger Log = LogManager.GetCurrentClassLogger();

try
{
    throw new InvalidOperationException("My ex", new FileNotFoundException("My inner ex1", new AccessViolationException("Innermost ex")));
}
catch (Exception e)
{
    Log.ErrorException("TEST", e);
}
Example NLog.config:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets async="true">
    <!-- Send by UDP to Sentinel with NLogViewer protocol -->
    <target name="network" xsi:type="NLogViewer" address="udp://192.168.1.3:9999" layout="${message}${onexception:inner=${newline}${exception:format=tostring}}" />
    <!-- Send message by UDP to Log2Console with Chainsaw protocol -->
    <target name="network2" xsi:type="Chainsaw" address="udp://192.168.1.3:9998" appinfo="Grocelist"/>
    <!-- Send exception/stacktrace by UDP to Log2Console with generic network protocol -->
    <target name="network2ex" xsi:type="Network" address="udp4://192.168.1.3:9998" layout="${exception:format=ToString}" />
    <target name="logfile" xsi:type="File" layout="${longdate}|${level:uppercase=true}|${logger}|${message}|${exception:format=tostring}"
            createDirs="true"
            fileName="${basedir}/logs/${shortdate}.log" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="logfile" />
    <logger name="*" minlevel="Debug" writeTo="network" />
    <logger name="*" minlevel="Debug" writeTo="network2" />
    <logger name="*" minlevel="Warn" writeTo="network2ex" />
  </rules>
</nlog>
Some links:
http://nlog-project.org
http://nlog-project.org/wiki/Targets
http://nlog-project.org/wiki/Exception_layout_renderer
http://nlog-project.org/2011/04/20/exception-logging-enhancements.html
http://nlog-project.org/wiki/How_to_properly_log_exceptions%3F
How to tell NLog to log exceptions?
https://stackoverflow.com/a/9684111/134761
http://nlog-forum.1685105.n2.nabble.com/How-to-send-stacktrace-of-exceptions-to-Chainsaw-or-Log2Console-td5465045.html
Edit:
After searching some more this seems to be a limitation on NLog's end. A recent patch is apparently out there: log4jxmlevent does not render Exception
Edit2:
I rebuilt NLog with the patch, but it did not seem to help in the Sentinel or Log2Console apps. I might have to try log4net to make sure those apps really do support what I am trying to achieve.
Edit3:
I currently use string.Format() to join and format message and exception text myself. This works well, but is not what I'm looking for here.
You can also extend NLog to include exceptions for network logging.
Create an extended layout:
[Layout("Log4JXmlEventLayoutEx")]
public class Log4JXmlEventLayoutEx : Log4JXmlEventLayout
{
protected override string GetFormattedMessage(LogEventInfo logEvent)
{
string msg = logEvent.Message + " ${exception:format=Message,Type,ToString,StackTrace}";
msg = SimpleLayout.Evaluate(msg, logEvent);
LogEventInfo updatedInfo;
if (msg == logEvent.Message) {
updatedInfo = logEvent;
} else {
updatedInfo = new LogEventInfo(
logEvent.Level, logEvent.LoggerName,
logEvent.FormatProvider, msg,
logEvent.Parameters, logEvent.Exception);
}
return base.GetFormattedMessage(updatedInfo);
}
}
Create a target that uses that layout
[Target("NLogViewerEx")]
public class NLogViewerTargetEx : NLogViewerTarget
{
private readonly Log4JXmlEventLayoutEx layout = new Log4JXmlEventLayoutEx();
public override Layout Layout { get { return layout; } set {} }
}
Update NLog.config:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="Assembly.Name.That.Contains.Extended.Target"/>
  </extensions>
  <targets>
    <target name="logViewer"
            xsi:type="NLogViewerEx"
            address="udp://localhost:7071" />
  </targets>
  ...
</nlog>
A few years later and this is pretty trivial; try adding
includeSourceInfo="true"
to your target, so that it looks like:
<target name="viewer"
        xsi:type="NLogViewer"
        includeSourceInfo="true"
        address="udp://127.0.0.1:9999" />
This gives you source file, line, class and method info.
I had this problem, and just updated my NLog nuget package to 2.0.1.2
Now I have exceptions coming through to Log2Console just fine.
Have you tried the latest developer snapshot of Chainsaw? It will display stack traces and supports log4net/UDP appenders, and according to NLog you can use it as well:
http://nlog-project.org/wiki/Chainsaw_target
Try the latest developer snapshot; it has a ton of features: http://people.apache.org/~sdeboy
Just download and build the latest (NLog-Build-2.0.0.2007-0-g72f6495) sources from GitHub: https://github.com/jkowalski/NLog/tree/
This issue is fixed there by the NLog developer.
In your NLog.config, modify the target like the following:
<target name="file" xsi:type="File" fileName="log.txt" layout="${longdate}:${message} ${exception:format=message,stacktrace:separator=*}" />
The part that you are looking for is
${exception:format=message,stacktrace:separator=*}
For more information on this look here.

Spring 3.0.5 - Adding #ModelAttribute to handler method signature results in JsonMappingException

I'm not sure whether this is a misconfiguration on my part, a misunderstanding of what can be accomplished via #ModelAttribute and automatic JSON content conversion, or a bug in either Spring or Jackson. If it turns out to be the latter, of course, I'll file an issue with the appropriate folks.
I've encountered a problem with adding a #ModelAttribute to a controller's handler method. The intent of the method is to expose a bean that's been populated from a form or previous submission, but I can reproduce the issue without actually submitting data into the bean.
I'm using the Spring mvc-showcase sample. It's currently using Spring 3.1, but I first encountered, and am able to reproduce, this issue on my 3.0.5 setup. The mvc-showcase sample uses a pretty standard servlet-context.xml:
servlet-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/mvc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:beans="http://www.springframework.org/schema/beans"
xsi:schemaLocation="
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.1.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">
<!-- DispatcherServlet Context: defines this servlet's request-processing infrastructure -->
<!-- Enables the Spring MVC #Controller programming model -->
<annotation-driven conversion-service="conversionService">
<argument-resolvers>
<beans:bean class="org.springframework.samples.mvc.data.custom.CustomArgumentResolver"/>
</argument-resolvers>
</annotation-driven>
<!-- Handles HTTP GET requests for /resources/** by efficiently serving up static resources in the ${webappRoot}/resources/ directory -->
<resources mapping="/resources/**" location="/resources/" />
<!-- Resolves views selected for rendering by #Controllers to .jsp resources in the /WEB-INF/views directory -->
<beans:bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<beans:property name="prefix" value="/WEB-INF/views/" />
<beans:property name="suffix" value=".jsp" />
</beans:bean>
<!-- Imports user-defined #Controller beans that process client requests -->
<beans:import resource="controllers.xml" />
<!-- Only needed because we install custom converters to support the examples in the org.springframewok.samples.mvc.convert package -->
<beans:bean id="conversionService" class="org.springframework.samples.mvc.convert.CustomConversionServiceFactoryBean" />
<!-- Only needed because we require fileupload in the org.springframework.samples.mvc.fileupload package -->
<beans:bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver" />
</beans:beans>
The controllers.xml referenced in the file simply sets up the relevant component-scan and view-controller for the root path. The relevant snippet is below.
controllers.xml
<!-- Maps '/' requests to the 'home' view -->
<mvc:view-controller path="/" view-name="home"/>
<context:component-scan base-package="org.springframework.samples.mvc" />
The test bean which I am attempting to deliver is a dead-simple POJO.
TestBean.java
package org.springframework.samples.mvc.test;

public class TestBean {

    private String testField = "test@example.com";

    public String getTestField() {
        return testField;
    }

    public void setTestField(String testField) {
        this.testField = testField;
    }
}
And finally, the controller, which is also simple.
TestController.java
package org.springframework.samples.mvc.test;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("test/*")
public class TestController {

    @ModelAttribute("testBean")
    public TestBean getTestBean() {
        return new TestBean();
    }

    @RequestMapping(value = "beanOnly", method = RequestMethod.POST)
    public @ResponseBody TestBean testBean(@ModelAttribute("testBean") TestBean bean) {
        return bean;
    }

    @RequestMapping(value = "withoutModel", method = RequestMethod.POST)
    public @ResponseBody Model testWithoutModel(Model model) {
        model.addAttribute("result", "success");
        return model;
    }

    @RequestMapping(value = "withModel", method = RequestMethod.POST)
    public @ResponseBody Model testWithModel(Model model, @ModelAttribute("testBean") TestBean bean) {
        bean.setTestField("This is the new value of testField");
        model.addAttribute("result", "success");
        return model;
    }
}
If I call the controller via the mapped path /mvc-showcase/test/beanOnly, I get a JSON representation of the bean, as expected. Calling the withoutModel handler delivers a JSON representation of the Spring Model object associated with the call. It includes the implicit @ModelAttribute from the initial declaration in the return value, but the bean is unavailable to the method. If I wish to process the results of a form submission, for example, and return a JSON response message, then I need that attribute.
The last method adds the @ModelAttribute, and this is where the trouble comes up. Calling /mvc-showcase/test/withModel causes an exception.
In my 3.0.5 installation, I get a JsonMappingException caused by a lack of serializer for FormattingConversionService. In the 3.1.0 sample, the exception is caused by lack of serializer for DefaultConversionService. I'll include the 3.1 exception here; it seems to have the same root cause, even if the path is a bit different.
3.1 org.codehaus.jackson.map.JsonMappingException
org.codehaus.jackson.map.JsonMappingException: No serializer found for class org.springframework.format.support.DefaultFormattingConversionService and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: org.springframework.validation.support.BindingAwareModelMap["org.springframework.validation.BindingResult.testBean"]->org.springframework.validation.BeanPropertyBindingResult["propertyAccessor"]->org.springframework.beans.BeanWrapperImpl["conversionService"])
at org.codehaus.jackson.map.ser.StdSerializerProvider$1.failForEmpty(StdSerializerProvider.java:89)
at org.codehaus.jackson.map.ser.StdSerializerProvider$1.serialize(StdSerializerProvider.java:62)
at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:272)
at org.codehaus.jackson.map.ser.BeanSerializer.serializeFields(BeanSerializer.java:175)
at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:147)
at org.codehaus.jackson.map.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:272)
at org.codehaus.jackson.map.ser.BeanSerializer.serializeFields(BeanSerializer.java:175)
at org.codehaus.jackson.map.ser.BeanSerializer.serialize(BeanSerializer.java:147)
at org.codehaus.jackson.map.ser.MapSerializer.serializeFields(MapSerializer.java:207)
at org.codehaus.jackson.map.ser.MapSerializer.serialize(MapSerializer.java:140)
at org.codehaus.jackson.map.ser.MapSerializer.serialize(MapSerializer.java:22)
at org.codehaus.jackson.map.ser.StdSerializerProvider._serializeValue(StdSerializerProvider.java:315)
at org.codehaus.jackson.map.ser.StdSerializerProvider.serializeValue(StdSerializerProvider.java:242)
at org.codehaus.jackson.map.ObjectMapper.writeValue(ObjectMapper.java:1030)
at org.springframework.http.converter.json.MappingJacksonHttpMessageConverter.writeInternal(MappingJacksonHttpMessageConverter.java:153)
at org.springframework.http.converter.AbstractHttpMessageConverter.write(AbstractHttpMessageConverter.java:181)
at org.springframework.web.servlet.mvc.method.annotation.support.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:121)
at org.springframework.web.servlet.mvc.method.annotation.support.AbstractMessageConverterMethodProcessor.writeWithMessageConverters(AbstractMessageConverterMethodProcessor.java:101)
at org.springframework.web.servlet.mvc.method.annotation.support.RequestResponseBodyMethodProcessor.handleReturnValue(RequestResponseBodyMethodProcessor.java:81)
at org.springframework.web.method.support.HandlerMethodReturnValueHandlerComposite.handleReturnValue(HandlerMethodReturnValueHandlerComposite.java:64)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:114)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMethodAdapter.invokeHandlerMethod(RequestMappingHandlerMethodAdapter.java:505)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMethodAdapter.handleInternal(RequestMappingHandlerMethodAdapter.java:468)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:80)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:790)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:560)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at
...
So, is there some configuration I am missing that should allow the Jackson converter to properly handle a response derived from a handler with @ModelAttribute in the method signature? If not, any thoughts as to whether this is more likely a Spring bug or a Jackson bug? I'm leaning toward Spring, at this point.
It looks like a Spring config problem: when serializing to JSON, the DefaultFormattingConversionService is empty, and Jackson (by default) will throw an exception if a bean is empty; see FAIL_ON_EMPTY_BEANS in the Features documentation. But I am not clear why the bean is empty.
It should work if you set FAIL_ON_EMPTY_BEANS to false, but that still doesn't really explain why it is happening in the first place.
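For reference, a minimal sketch of disabling that feature on the Jackson 1.x side; how you then register the customized converter (on the AnnotationMethodHandlerAdapter in 3.0.5, or via <mvc:message-converters> in 3.1) is up to your setup:
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig;
import org.springframework.http.converter.json.MappingJacksonHttpMessageConverter;

// Build a converter whose ObjectMapper tolerates "empty" beans such as the
// conversion service dragged in via the BindingResult model attribute.
ObjectMapper mapper = new ObjectMapper();
mapper.configure(SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS, false);

MappingJacksonHttpMessageConverter converter = new MappingJacksonHttpMessageConverter();
converter.setObjectMapper(mapper);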
DefaultFormattingConversionService is new to 3.1 - it extends the FormattingConversionService which explains the different exceptions between 3.0.5 and 3.1.
I do not think it is a Jackson problem, although a new version of Jackson (1.8.0) was released only 3 days ago so you could try that also.
I will try to reproduce this locally.

How do I log the class file without the path in log4net

I want to be able to log the class file and line number in my log file so I am using the following config...
<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
<!--etc-->
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level (%file:%line) %logger => %message%newline" />
</layout>
</appender>
However the %file attribute is making my log file entries waaaay too long to read comfortably....
2009-08-07 16:41:55,271 [7] INFO (O:\mystream\aevpallsrv\DotNet\com.mycompany.au\myapp\Myappp\Controller.cs:75) MyApp.Controller => Controller.EnqueueWorkerThreads() - START
Is there a way to show just the class file ('Controller.cs') instead of the full path to the file also???
Michael
Although you're asking about the file name, I believe you can use %type to get the fully qualified type name (a.b.className). If you just want the class name, use %type{1}.
Note that any pattern that generates caller location information (such as %file and %type) has a performance cost associated with it.
As an aside, you can bypass the performance hit by naming the Logger using the Type name.
namespace MyNamespace
{
    public class Foo
    {
        private static ILog log = LogManager.GetLogger(typeof(Foo));
    }
}
Your conversion pattern would look like:
"%date [%thread] %-5level %logger %message"
Here your logger would be "MyNamespace.Foo". Likewise, if you only want the class name, use "%logger{1}", which will resolve to "Foo".
One of the best advantages to this approach is that you can leverage log4net's hierarchical repository system to adjust logging levels per type:
<!-- all classes in MyNamespace are warn -->
<logger name="MyNamespace">
  <level value="WARN" />
</logger>

<!-- only Foo is in debug -->
<logger name="MyNamespace.Foo">
  <level value="DEBUG" />
</logger>
Out of the box, PatternLayout supports only the %file token. What you can do is subclass PatternLayout and add your own pattern, say %filename, and have that token output only the name of the file.
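A rough sketch of that idea (untested; FileNameConverter and FileNamePatternLayout are made-up names, and caller location information still has to be available on the logging event):
public class FileNameConverter : log4net.Layout.Pattern.PatternLayoutConverter
{
    protected override void Convert(System.IO.TextWriter writer, log4net.Core.LoggingEvent loggingEvent)
    {
        // strip the directory part and keep only e.g. "Controller.cs"
        writer.Write(System.IO.Path.GetFileName(loggingEvent.LocationInformation.FileName));
    }
}

public class FileNamePatternLayout : log4net.Layout.PatternLayout
{
    public FileNamePatternLayout()
    {
        // register the converter under the %filename token
        AddConverter("filename", typeof(FileNameConverter));
    }
}
You would then reference FileNamePatternLayout instead of PatternLayout in the appender's <layout> element and use %filename in the conversionPattern.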