Camel Bindy marshal to file creates multiple header rows - CSV

I have the following camel route:
from(inputDirectory)
    .unmarshal(jaxb)
    .process(jaxb2CSVDataProcessor)
    .split(body()) // because there is a list of CSVRecords
    .marshal(bindyCsvDataFormat)
    .to(outputDirectory); // appending to existing file using "?autoCreate=true&fileExist=Append"
For my CSV model class I am using the annotation:
@CsvRecord(separator = ",", generateHeaderColumns = true)
...
and for the properties:
@DataField(pos = 0)
...
My problem is that the headers are appended every time a new CSV record is appended.
Is there a clean way to control this? Am I missing anything here?

I made a workaround which is working quite nicely: creating the header by querying the column names from the @DataField annotations. This happens once, the first time the file is written. I wrote down the whole solution here:
How to generate a Flat file with header and footer using Camel Bindy

I ended up adding a processor that checks whether the CSV file exists, just before the "to" clause. In there I manipulate the byte array and remove the headers.
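For illustration, a minimal sketch of such a processor. It assumes the marshalled body is CSV text whose first line is the Bindy-generated header; "output/data.csv" is a hypothetical path standing in for the route's output file:
// Sketch only: drop the first line (the header) when the target file already exists
Processor stripHeaderIfAppending = exchange -> {
    if (new java.io.File("output/data.csv").exists()) {
        String csv = exchange.getIn().getBody(String.class);
        int firstNewline = csv.indexOf('\n');
        if (firstNewline >= 0) {
            exchange.getIn().setBody(csv.substring(firstNewline + 1));
        }
    }
};
This would be plugged in via .process(stripHeaderIfAppending) between .marshal(bindyCsvDataFormat) and .to(outputDirectory).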

Hope this helps anyone else. I needed to do something similar where, after my first split message, I wanted to suppress the header output. Here is a complete class (FieldUtils is part of the Apache Commons Lang library):
package com.routes;

import java.io.OutputStream;

import org.apache.camel.Exchange;
import org.apache.camel.dataformat.bindy.BindyAbstractFactory;
import org.apache.camel.dataformat.bindy.BindyCsvFactory;
import org.apache.camel.dataformat.bindy.BindyFactory;
import org.apache.camel.dataformat.bindy.FormatFactory;
import org.apache.camel.dataformat.bindy.csv.BindyCsvDataFormat;
import org.apache.commons.lang3.reflect.FieldUtils;

public class StreamingBindyCsvDataFormat extends BindyCsvDataFormat {

    public StreamingBindyCsvDataFormat(Class<?> type) {
        super(type);
    }

    @Override
    public void marshal(Exchange exchange, Object body, OutputStream outputStream) throws Exception {
        final StreamingBindyModelFactory factory = (StreamingBindyModelFactory) super.getFactory();
        final int splitIndex = exchange.getProperty(Exchange.SPLIT_INDEX, -1, int.class);
        final boolean splitComplete = exchange.getProperty(Exchange.SPLIT_COMPLETE, false, boolean.class);
        super.marshal(exchange, body, outputStream);
        if (splitIndex == 0) {
            factory.setGenerateHeaderColumnNames(false); // turn off header generation after the first exchange
        } else if (splitComplete) {
            factory.setGenerateHeaderColumnNames(true); // turn header generation back on when the split completes
        }
    }

    @Override
    protected BindyAbstractFactory createModelFactory(FormatFactory formatFactory) throws Exception {
        BindyCsvFactory bindyCsvFactory = new StreamingBindyModelFactory(getClassType());
        bindyCsvFactory.setFormatFactory(formatFactory);
        return bindyCsvFactory;
    }

    public class StreamingBindyModelFactory extends BindyCsvFactory implements BindyFactory {

        public StreamingBindyModelFactory(Class<?> type) throws Exception {
            super(type);
        }

        public void setGenerateHeaderColumnNames(boolean generateHeaderColumnNames) throws IllegalAccessException {
            // the flag is private in BindyCsvFactory, so write it reflectively
            FieldUtils.writeField(this, "generateHeaderColumnNames", generateHeaderColumnNames, true);
        }
    }
}
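Wiring the class into the original route could then look like this (a sketch; MyCsvRecord stands in for the @CsvRecord-annotated model class, and the endpoints are those from the question):
BindyCsvDataFormat bindy = new StreamingBindyCsvDataFormat(MyCsvRecord.class);

from(inputDirectory)
    .unmarshal(jaxb)
    .process(jaxb2CSVDataProcessor)
    .split(body())      // one exchange per CSV record
    .marshal(bindy)     // header generated only for split index 0
    .to(outputDirectory); // "?autoCreate=true&fileExist=Append" as before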

Related

Camel Bindy Streaming Payload and Writing to File

I have a route which is supposed to read a huge XML file and then write a CSV file with a header. Each XML record needs to be transformed first, so I map it to a Java POJO and then marshal it again to write it into a CSV file.
I can't load all of the records into memory, as the file contains more than 200k records.
Issue: I am only seeing the last record in the CSV file. Not sure why the data is not being appended to the existing file.
Any idea how to make it work? The header is required in the CSV. I don't see any other option to transform the stream and write the headers along with it to CSV without unmarshalling to a POJO first. I tried BeanIO as well, which requires me to add a header record, and I'm not sure how that can be injected into a stream.
from("{{xml.files.route}}")
    .split(body().tokenizeXML("EMPLOYEE", null))
    .streaming()
    .unmarshal().jacksonXml(Employee.class)
    .marshal(bindyDataFormat)
    .to("file://C:/Files/Test/emp/csv/?fileName=test.csv")
    .end();
If I try to append to the existing file, the CSV file gets headers added on each iteration of records.
.to("file://C:/Files/Test/emp/csv/?fileName=test.csv&fileExist=append")
Your problem here is related to camel-bindy, not the file component. It kinda expects you to marshal collections rather than individual objects, so if you marshal each object individually and have @CsvRecord(generateHeaderColumns = true) on your Employee class, then you'll get headers every time you marshal an individual Employee object.
You could set generateHeaderColumns to false and write the header string to the file manually first. One way to obtain the headers for a Bindy-annotated class is to get the fields annotated with @DataField using org.apache.commons.lang3.reflect.FieldUtils from Apache Commons and construct the header string based on pos, columnName and the field name.
I usually prefer camel-stream over the file component when I need to stream something to a file, but using the file component with appends probably works just as well.
Example:
package com.example;

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

import org.apache.camel.RoutesBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.bindy.annotation.DataField;
import org.apache.camel.dataformat.bindy.csv.BindyCsvDataFormat;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.apache.commons.lang3.reflect.FieldUtils;
import org.junit.Test;

public class ExampleTest extends CamelTestSupport {

    @Test
    public void testStreamEmployeesToCsvFile() {
        List<Employee> body = new ArrayList<>();
        body.add(new Employee("John", "Doe", 1965));
        body.add(new Employee("Mary", "Sue", 1987));
        body.add(new Employee("Gary", "Sue", 1991));
        template.sendBody("direct:streamEmployeesToCSV", body);
    }

    @Override
    protected RoutesBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                BindyCsvDataFormat csvDataFormat = new BindyCsvDataFormat(Employee.class);
                System.out.println(getCSVHeadersForClass(Employee.class, ","));
                from("direct:streamEmployeesToCSV")
                    .setProperty("Employees", body())
                    // a bit hacky due to camel writing first entry and headers
                    // on the same line for some reason (camel 2.25.2)
                    .setBody().constant("")
                    .to("file:target/testoutput?fileName=test.csv&fileExist=Override")
                    .setBody().constant(getCSVHeadersForClass(Employee.class, ","))
                    .to("stream:file?fileName=./target/testoutput/test.csv")
                    .split(exchangeProperty("Employees"))
                        .marshal(csvDataFormat)
                        .to("stream:file?fileName=./target/testoutput/test.csv")
                    .end()
                    .log("Done");
            }

            private String getCSVHeadersForClass(Class<?> clazz, String separator) {
                Field[] fieldsArray = FieldUtils.getFieldsWithAnnotation(clazz, DataField.class);
                List<Field> fields = new ArrayList<>(Arrays.asList(fieldsArray));
                fields.sort(new Comparator<Field>() {
                    @Override
                    public int compare(Field lhsField, Field rhsField) {
                        DataField lhs = lhsField.getAnnotation(DataField.class);
                        DataField rhs = rhsField.getAnnotation(DataField.class);
                        return Integer.compare(lhs.pos(), rhs.pos());
                    }
                });
                String[] fieldHeaders = new String[fields.size()];
                for (int i = 0; i < fields.size(); i++) {
                    DataField dataField = fields.get(i).getAnnotation(DataField.class);
                    if (dataField.columnName().equals("")) {
                        fieldHeaders[i] = fields.get(i).getName();
                    } else {
                        fieldHeaders[i] = dataField.columnName();
                    }
                }
                String csvHeaders = "";
                for (int i = 0; i < fieldHeaders.length; i++) {
                    csvHeaders += fieldHeaders[i];
                    csvHeaders += i < fieldHeaders.length - 1 ? separator : "";
                }
                return csvHeaders;
            }
        };
    }
}
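FieldUtils comes from the commons-lang3 artifact: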
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>${apache-commons.version}</version>
</dependency>

Non-blocking parsing of a JSON String to a JsonNode

I'm exploring reactive programming with Spring WebFlux and therefore I'm trying to make my code completely non-blocking, to get all the benefits of a reactive application.
Currently my code for the method that parses a JSON string into a JsonNode to get specific values (in this case the elementId) looks like this:
public Mono<String> readElementIdFromJsonString(String jsonString) {
    final JsonNode jsonNode;
    try {
        jsonNode = MAPPER.readTree(jsonString);
    } catch (IOException e) {
        return Mono.error(e);
    }
    final String elementId = jsonNode.get("elementId").asText();
    return Mono.just(elementId);
}
However, IntelliJ notifies me that I'm using an inappropriate blocking method call with this code:
MAPPER.readTree(jsonString);
How can I implement this code in a non-blocking way? I have seen that since Jackson 2.9+ it is possible to parse a JSON string in a non-blocking, asynchronous way, but I don't know how to use that API and I couldn't find an example of how to do it correctly.
I am not sure why it says this is a blocking call, since Jackson is non-blocking as far as I know. Anyway, one way to resolve the issue without pulling in any other library is to use schedulers, like this:
public Mono<String> readElementIdFromJsonString(String input) {
    // fromCallable (rather than just) defers the parse until subscription and
    // turns a thrown IOException into an error signal; boundedElastic keeps
    // the potentially blocking call off the event loop
    return Mono.fromCallable(() -> MAPPER.readTree(input))
        .map(it -> it.get("elementId").asText())
        .subscribeOn(Schedulers.boundedElastic());
}
Something along that line.
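As for the non-blocking API the question mentions (available since Jackson 2.9): it is token-based, so there is no one-call readTree equivalent; you feed bytes in and pull tokens out. A rough sketch of extracting elementId that way, assuming the whole payload is already in memory (where feeding it in chunks buys you little):
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.core.async.ByteArrayFeeder;

import java.nio.charset.StandardCharsets;

public class NonBlockingElementIdReader {

    private static final JsonFactory FACTORY = new JsonFactory();

    public static String readElementId(String json) throws Exception {
        JsonParser parser = FACTORY.createNonBlockingByteArrayParser();
        ByteArrayFeeder feeder = (ByteArrayFeeder) parser.getNonBlockingInputFeeder();
        byte[] bytes = json.getBytes(StandardCharsets.UTF_8);
        feeder.feedInput(bytes, 0, bytes.length); // feed everything in one chunk
        feeder.endOfInput();

        String field = null;
        JsonToken token;
        // NOT_AVAILABLE would mean "feed me more input"; with endOfInput()
        // already signalled we only see it if the document was truncated
        while ((token = parser.nextToken()) != null && token != JsonToken.NOT_AVAILABLE) {
            if (token == JsonToken.FIELD_NAME) {
                field = parser.getCurrentName();
            } else if (token == JsonToken.VALUE_STRING && "elementId".equals(field)) {
                return parser.getText(); // note: this naive scan also matches nested occurrences
            }
        }
        return null; // elementId not present
    }
}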
An alternative is to build on Spring's reactive Jackson decoder:
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.springframework.core.ResolvableType;
import org.springframework.core.env.Environment;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DefaultDataBuffer;
import org.springframework.core.io.buffer.DefaultDataBufferFactory;
import org.springframework.http.codec.json.AbstractJackson2Decoder;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

@FunctionalInterface
public interface MessageParser<T> {
    Mono<T> parse(String message);
}

public class JsonNodeParser extends AbstractJackson2Decoder implements MessageParser<JsonNode> {

    private static final MimeType MIME_TYPE = MimeTypeUtils.APPLICATION_JSON;
    private static final ObjectMapper OBJECT_MAPPER = allocateDefaultObjectMapper(); // author's own factory method, not shown

    private final DefaultDataBufferFactory factory;
    private final ResolvableType resolvableType;

    public JsonNodeParser(final Environment env) {
        super(OBJECT_MAPPER, MIME_TYPE);
        this.factory = new DefaultDataBufferFactory();
        this.resolvableType = ResolvableType.forClass(JsonNode.class);
        this.setMaxInMemorySize(100000); // ~100 KB
        canDecodeJsonNode();
    }

    @Override
    public Mono<JsonNode> parse(final String message) {
        final byte[] bytes = message.getBytes(StandardCharsets.UTF_8);
        return decode(bytes);
    }

    private Mono<JsonNode> decode(final byte[] bytes) {
        final DefaultDataBuffer defaultDataBuffer = this.factory.wrap(bytes);
        return this.decodeToMono(Mono.just(defaultDataBuffer), this.resolvableType, MIME_TYPE, Map.of())
            .ofType(JsonNode.class)
            .subscribeOn(Schedulers.boundedElastic())
            .doFinally((t) -> DataBufferUtils.release(defaultDataBuffer));
    }

    private void canDecodeJsonNode() {
        if (!canDecode(this.resolvableType, MIME_TYPE)) {
            throw new IllegalStateException(String.format("JsonNodeParser doesn't support the given target " +
                    "element type [%s] and the MIME type [%s]", this.resolvableType, MIME_TYPE));
        }
    }
}
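A quick usage sketch (the Environment parameter is unused by the class above, so null is passed here purely to satisfy the constructor; the payload is made up):
MessageParser<JsonNode> parser = new JsonNodeParser(null);
parser.parse("{\"elementId\":\"abc-123\"}")
      .map(node -> node.get("elementId").asText())
      .subscribe(System.out::println); // prints: abc-123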

Is it possible to pass a java.util.Stream to Gson?

I'm currently working on a project where I need to fetch a large amount of data from the database and parse it into a specific JSON format. I have already built my custom serializers, and everything works properly when I pass a List to Gson. But as I was already working with Streams from my JPA layer, I thought I could pass the Stream down to the Gson parser so that it could transform it directly into my JSON data. Instead I'm getting an empty JSON object rather than a correctly populated one.
So, could anyone point me to a way to make Gson work with Java 8 Streams, or confirm that this isn't currently possible? I could not find anything on Google, so I came to Stack Overflow.
You could use JsonWriter to stream your data to an output stream:
public void writeJsonStream(OutputStream out, Stream<DataObject> data) throws IOException {
    try (JsonWriter writer = new JsonWriter(new OutputStreamWriter(out, "UTF-8"))) {
        writer.setIndent(" ");
        writer.beginArray();
        data.forEach(d -> {
            try {
                writer.beginObject();
                writer.name("yourField").value(d.getYourField());
                // ... one name()/value() pair per field
                writer.endObject();
            } catch (IOException e) {
                throw new UncheckedIOException(e); // JsonWriter calls are checked, the lambda is not
            }
        });
        writer.endArray();
    }
}
Note that you're in charge of controlling the JSON structure.
That is, if your DataObject contains a nested object, you have to write beginObject()/endObject() for it yourself, as sketched below. The same goes for nested arrays.
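For example, assuming a hypothetical nested address object on DataObject, the nesting would look like this:
writer.beginObject();
writer.name("yourField").value(d.getYourField());
writer.name("address");   // nested object: open it explicitly...
writer.beginObject();
writer.name("city").value(d.getAddress().getCity());
writer.endObject();       // ...and close it before closing the parent
writer.endObject();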
It is not as trivial as one would expect, but it can be done in a generic way.
When you look into the Javadoc of TypeAdapterFactory, it shows a very simplistic way of writing a TypeAdapterFactory for a custom type. Alas, that approach does not work as expected because of problems with element type detection. The proper way to do this can be found in the Gson-internal CollectionTypeAdapterFactory. It is quite complex, but taking what's necessary one can come up with something like this:
import java.io.IOException;
import java.lang.reflect.Type;
import java.util.stream.Stream;

import com.google.gson.Gson;
import com.google.gson.TypeAdapter;
import com.google.gson.TypeAdapterFactory;
import com.google.gson.reflect.TypeToken;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonWriter;

final class StreamTypeAdapterFactory implements TypeAdapterFactory {

    @SuppressWarnings("unchecked")
    @Override
    public <T> TypeAdapter<T> create(Gson gson, TypeToken<T> typeToken) {
        Type type = typeToken.getType();
        Class<? super T> rawType = typeToken.getRawType();
        if (!Stream.class.isAssignableFrom(rawType)) {
            return null;
        }
        Type elementType = ExtraGsonTypes.getStreamElementType(type, rawType);
        TypeAdapter<?> elementAdapter = gson.getAdapter(TypeToken.get(elementType));
        return (TypeAdapter<T>) new StreamTypeAdapter<>(elementAdapter);
    }

    private static class StreamTypeAdapter<E> extends TypeAdapter<Stream<E>> {

        private final TypeAdapter<E> elementAdapter;

        StreamTypeAdapter(TypeAdapter<E> elementAdapter) {
            this.elementAdapter = elementAdapter;
        }

        @Override
        public void write(JsonWriter out, Stream<E> value) throws IOException {
            out.beginArray();
            for (E element : iterable(value)) {
                elementAdapter.write(out, element);
            }
            out.endArray();
        }

        @Override
        public Stream<E> read(JsonReader in) throws IOException {
            Stream.Builder<E> builder = Stream.builder();
            in.beginArray();
            while (in.hasNext()) {
                builder.add(elementAdapter.read(in));
            }
            in.endArray();
            return builder.build();
        }
    }

    private static <T> Iterable<T> iterable(Stream<T> stream) {
        return stream::iterator;
    }
}
ExtraGsonTypes is a special class I used to circumvent package-private access to the $Gson$Types.getSupertype method. It's a hack that works as long as you're not using JDK 9 modules: you simply place this class in the same package as $Gson$Types:
package com.google.gson.internal;

import java.lang.reflect.*;
import java.util.stream.Stream;

public final class ExtraGsonTypes {

    public static Type getStreamElementType(Type context, Class<?> contextRawType) {
        return getContainerElementType(context, contextRawType, Stream.class);
    }

    private static Type getContainerElementType(Type context, Class<?> contextRawType, Class<?> containerSupertype) {
        Type containerType = $Gson$Types.getSupertype(context, contextRawType, containerSupertype);
        if (containerType instanceof WildcardType) {
            containerType = ((WildcardType) containerType).getUpperBounds()[0];
        }
        if (containerType instanceof ParameterizedType) {
            return ((ParameterizedType) containerType).getActualTypeArguments()[0];
        }
        return Object.class;
    }
}
(I filed an issue about that in GitHub)
You use it in the following way:
Gson gson = new GsonBuilder()
        .registerTypeAdapterFactory(new StreamTypeAdapterFactory())
        .create();
System.out.println(gson.toJson(Stream.of(1, 2, 3)));

Apache Camel CSV with Header

I have written a simple test app that reads records from a DB and puts the result into a CSV file. So far it works fine, but the column names, i.e. the headers, are not put into the CSV file, although according to the docs they should be. I have also tried it with and without streaming and split, but the situation is the same.
In the Camel unit tests, in line 182, the headers are put there explicitly: https://github.com/apache/camel/blob/master/components/camel-csv/src/test/java/org/apache/camel/dataformat/csv/CsvDataFormatTest.java
How could this very simple problem be solved without the need to iterate over the headers myself? I also experimented with different settings, but it's all the same: the delimiters I set are applied, for example, but the headers are not. Thanks for the responses, also in advance.
I used Camel 2.16.1 like this:
final CsvDataFormat csvDataFormat = new CsvDataFormat();
csvDataFormat.setHeaderDisabled(false);
[...]
from("direct:TEST").routeId("TEST")
    .setBody(constant("SELECT * FROM MYTABLE"))
    .to("jdbc:myDataSource?readSize=100") // max 100 records
    // .split(simple("${body}")) // split the list
    // .streaming() // not to keep all messages in memory
    .marshal(csvDataFormat)
    .to("file:extract?fileName=TEST.csv");
[...]
EDIT 1
I have also tried to add the headers from the exchange. They are available there under the name "CamelJdbcColumnNames" in a HashSet. I added them to the csvDataFormat like this:
final CsvDataFormat csvDataFormat = new CsvDataFormat();
csvDataFormat.setHeaderDisabled(false);
[...]
from("direct:TEST").routeId("TEST")
    .setBody(constant("SELECT * FROM MYTABLE"))
    .to("jdbc:myDataSource?readSize=100") // max 100 records
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            headerNames = (HashSet) exchange.getIn().getHeader("CamelJdbcColumnNames");
            System.out.println("#### Process headernames = " + new ArrayList<String>(headerNames).toString());
            csvDataFormat.setHeader(new ArrayList<String>(headerNames));
        }
    })
    .marshal(csvDataFormat) //.tracing()
    .to("file:extract?fileName=TEST.csv");
The println() prints the column names, but the generated CSV file does not contain them.
EDIT 2
I added the header names to the body as proposed in comment 1 like this:
.process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        Set<String> headerNames = (HashSet) exchange.getIn().getHeader("CamelJdbcColumnNames");
        Map<String, String> nameMap = new LinkedHashMap<String, String>();
        for (String name : headerNames) {
            nameMap.put(name, name);
        }
        List<Map> listWithHeaders = new ArrayList<Map>();
        listWithHeaders.add(nameMap);
        List<Map> records = exchange.getIn().getBody(List.class);
        listWithHeaders.addAll(records);
        exchange.getIn().setBody(listWithHeaders, List.class);
        System.out.println("#### Process headernames = " + new ArrayList<String>(headerNames).toString());
        csvDataFormat.setHeader(new ArrayList<String>(headerNames));
    }
})
The proposal solved the problem, thank you for that, but it means that CsvDataFormat is not really usable on its own. The exchange body after the JDBC query contains an ArrayList of HashMaps, each holding one record of the table; the key of each HashMap entry is the column name and the value is the value. So setting the header-output flag on CsvDataFormat should be more than enough to get the headers generated. Do you know a simpler solution, or did I miss something in the configuration?
You take the data from a database with JDBC, so you need to add the headers yourself first to the message body, so that they form the first row. The result set from the JDBC component is just the data, not including headers.
I have done it by overriding BindyCsvDataFormat and BindyCsvFactory:
public class BindySplittedCsvDataFormat extends BindyCsvDataFormat {

    private boolean marshallingFirstLot = false;

    public BindySplittedCsvDataFormat() {
        super();
    }

    public BindySplittedCsvDataFormat(Class<?> type) {
        super(type);
    }

    @Override
    public void marshal(Exchange exchange, Object body, OutputStream outputStream) throws Exception {
        marshallingFirstLot = Integer.valueOf(0).equals(exchange.getProperty("CamelSplitIndex"));
        super.marshal(exchange, body, outputStream);
    }

    @Override
    protected BindyAbstractFactory createModelFactory(FormatFactory formatFactory) throws Exception {
        BindySplittedCsvFactory bindyCsvFactory = new BindySplittedCsvFactory(getClassType(), this);
        bindyCsvFactory.setFormatFactory(formatFactory);
        return bindyCsvFactory;
    }

    protected boolean isMarshallingFirstLot() {
        return marshallingFirstLot;
    }
}

public class BindySplittedCsvFactory extends BindyCsvFactory {

    private BindySplittedCsvDataFormat bindySplittedCsvDataFormat;

    public BindySplittedCsvFactory(Class<?> type, BindySplittedCsvDataFormat bindySplittedCsvDataFormat) throws Exception {
        super(type);
        this.bindySplittedCsvDataFormat = bindySplittedCsvDataFormat;
    }

    @Override
    public boolean getGenerateHeaderColumnNames() {
        return super.getGenerateHeaderColumnNames() && bindySplittedCsvDataFormat.isMarshallingFirstLot();
    }
}
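With those two classes in place, a split-and-append route might look like this (a sketch; the endpoint URIs and MyRecord are illustrative):
// MyRecord stands in for a @CsvRecord-annotated Bindy model class
BindyCsvDataFormat bindy = new BindySplittedCsvDataFormat(MyRecord.class);

from("direct:records")    // placeholder input endpoint
    .split(body())        // the splitter sets the CamelSplitIndex property
    .marshal(bindy)       // header emitted only for split index 0
    .to("file:output?fileName=out.csv&fileExist=Append")
    .end();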
My solution with Spring XML (though I'd still like a built-in option for also extracting the header on top):
<multicast stopOnException="true">
    <pipeline>
        <log message="saving table ${headers.tablename} header to ${headers.CamelFileName}..."/>
        <setBody>
            <groovy>request.headers.get('CamelJdbcColumnNames').join(";") + "\n"</groovy>
        </setBody>
        <to uri="file:output"/>
    </pipeline>
    <pipeline>
        <log message="saving table ${headers.tablename} rows to ${headers.CamelFileName}..."/>
        <marshal>
            <csv delimiter=";" headerDisabled="false" useMaps="true"/>
        </marshal>
        <to uri="file:output?fileExist=Append"/>
    </pipeline>
</multicast>
http://www.redaelli.org/matteo-blog/2019/05/24/exporting-database-tables-to-csv-files-with-apache-camel/

JSON Unmarshalling of xs:string

Problem:
We are facing strange problems when unmarshalling JSON objects that include the following content: {"#type":"xs:string"}. Unmarshalling such an object results in a NullPointerException. See the stack trace below:
java.lang.NullPointerException
at com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.startElement(SAX2DOM.java:204)
at com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.closeStartTag(ToXMLSAXHandler.java:208)
at com.sun.org.apache.xml.internal.serializer.ToXMLSAXHandler.characters(ToXMLSAXHandler.java:528)
at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerHandlerImpl.characters(TransformerHandlerImpl.java:172)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.DomLoader.text(DomLoader.java:128)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.text(UnmarshallingContext.java:499)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.InterningXmlVisitor.text(InterningXmlVisitor.java:78)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.processText(StAXStreamConnector.java:324)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleEndElement(StAXStreamConnector.java:202)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:171)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
at com.sun.jersey.json.impl.BaseJSONUnmarshaller.unmarshalJAXBElementFromJSON(BaseJSONUnmarshaller.java:108)
at com.sun.jersey.json.impl.BaseJSONUnmarshaller.unmarshalFromJSON(BaseJSONUnmarshaller.java:97)
at JerseyNPETest.testNPEUnmarshal(JerseyNPETest.java:20)
The problem occurs while getting the response from the external service, which glassfish converts implicitly (a simple REST call).
We investigated the problem and found that it is actually related to the JSON unmarshaller.
Testcase:
Marshalling -
To verify our finding, we created a class which contains a member of type Object named propertyA. Then we set the value of propertyA to "some value" and marshalled it using the default marshaller, which results in the JSON string {"#type":"xs:string","$":"some value"}.
Unmarshalling - Afterwards we used the default unmarshaller. The attempt to unmarshal this JSON string resulted in the mentioned exception.
See the test case below:
import com.sun.jersey.api.json.JSONConfiguration;
import com.sun.jersey.json.impl.BaseJSONUnmarshaller;
import org.junit.Test;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;

public class JerseyNPETest {

    private static final String ERROR = "{\"additionalObject\":{\"#type\":\"xs:string\",\"$\":\"some value\"}}";

    @Test
    public void testNPEUnmarshal() throws JAXBException {
        final JAXBContext context = JAXBContext.newInstance(AnObject.class);
        final JSONConfiguration jsonConfig = JSONConfiguration.DEFAULT;
        final BaseJSONUnmarshaller unmarshaller = new BaseJSONUnmarshaller(context, jsonConfig);
        final StringReader reader = new StringReader(ERROR);
        final AnObject result = unmarshaller.unmarshalFromJSON(reader, AnObject.class);
    }

    @XmlRootElement
    public static class AnObject {

        private Object additionalObject;

        public Object getAdditionalObject() {
            return additionalObject;
        }

        public void setAdditionalObject(final Object additionalObject) {
            this.additionalObject = additionalObject;
        }
    }
}
Question:
How could this be solved in general, e.g. by some configuration of glassfish, to avoid this issue in the first place?
Currently we are working with glassfish 3.1.2.2. Any help is much appreciated!