Camel - CSV Headers setting not working

I have CSV files without headers. Since I'm using 'useMaps', I want to specify the headers dynamically. If I set the headers statically and then use them in the route, it works fine, as in Approach 1 below:
@Component
public class BulkActionRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        CsvDataFormat csv = new CsvDataFormat(",");
        csv.setUseMaps(true);
        ArrayList<String> list = new ArrayList<String>();
        list.add("DeviceName");
        list.add("Brand");
        list.add("status");
        list.add("type");
        list.add("features_c");
        list.add("battery_c");
        list.add("colors");
        csv.setHeader(list);

        from("direct:bulkImport")
            .convertBodyTo(String.class)
            .unmarshal(csv)
            .split(body()).streaming()
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    GenericObjectModel model = null;
                    HashMap<String, String> csvRecord = (HashMap<String, String>) exchange.getIn().getBody();
                }
            });
    }
}
However, if the list is passed via a Camel header, as in Approach 2 below, it does not work:
@Component
public class BulkActionRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        CsvDataFormat csv = new CsvDataFormat(",");
        csv.setUseMaps(true);

        from("direct:bulkImport")
            .convertBodyTo(String.class)
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    ArrayList<String> fileHeaders =
                        (ArrayList<String>) exchange.getIn().getHeader(Constants.FILE_HEADER_LIST);
                    if (fileHeaders != null && fileHeaders.size() > 0) {
                        csv.setHeader(fileHeaders);
                    }
                }
            })
            .unmarshal(csv)
            .split(body()).streaming()
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    GenericObjectModel model = null;
                    HashMap<String, String> csvRecord = (HashMap<String, String>) exchange.getIn().getBody();
                }
            });
    }
}
What could be missing in Approach 2?

The big difference between approach 1 and approach 2 is the scope.
In approach 1 you fully configure the CSV data format. This all happens when the Camel Context is created, since the data format is shared within the Camel Context. When messages are processed, the same configuration applies to all of them.
In approach 2 you only configure the basics globally. The header configuration lives inside the route and can therefore change for every single message: every message would overwrite the header configuration of the context-global data format instance.
Without being sure about this, I suspect it is not possible to change a context-global DataFormat from inside the routes.
What would you expect (just as an example) when messages are processed in parallel? They would overwrite each other's header configuration.
As an alternative, you could use a POJO where you do the dynamic unmarshalling yourself from Java code, as sketched below.
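For illustration, here is a minimal sketch of that alternative, assuming Apache Commons CSV (org.apache.commons.csv) is on the classpath and reusing Constants.FILE_HEADER_LIST from the question. The header list is read per message, so no shared data format instance is mutated:

.process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // the header list for this message only, taken from the exchange
        List<String> fileHeaders = exchange.getIn().getHeader(Constants.FILE_HEADER_LIST, List.class);
        CSVFormat format = CSVFormat.DEFAULT.withHeader(fileHeaders.toArray(new String[0]));
        List<Map<String, String>> rows = new ArrayList<>();
        try (CSVParser parser = format.parse(new StringReader(exchange.getIn().getBody(String.class)))) {
            for (CSVRecord record : parser) {
                rows.add(record.toMap()); // column name -> value
            }
        }
        exchange.getIn().setBody(rows);
    }
})
.split(body()).streaming()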

How to test KeyedBroadcastProcessFunction in Flink?

I am new to Flink and I am trying to write JUnit test cases to test a KeyedBroadcastProcessFunction. Below is my code. I currently call the getDataStreamOutput method in a TestUtils class, passing the input data and the pattern rules; once the input data is evaluated against the list of pattern rules and the input data satisfies a condition, I get the signal, call the sink function, and return the output data as a String from getDataStreamOutput:
@Test
public void testCompareInputAndOutputDataForInputSignal() throws Exception {
    Assertions.assertEquals(sampleInputSignal,
        TestUtils.getDataStreamOutput(
            inputSignal,
            patternRules));
}
public static String getDataStreamOutput(JSONObject input, Map<String, String> patternRules) throws Exception {
    env.setParallelism(1);
    DataStream<JSONObject> inputSignal = env.fromElements(input);
    DataStream<Map<String, String>> rawPatternStream =
        env.fromElements(patternRules);
    // Generate a key/value pair per pattern, where the key is the pattern name
    // and the value is the pattern condition
    DataStream<Tuple2<String, Map<String, String>>> patternRuleStream =
        rawPatternStream.flatMap(new FlatMapFunction<Map<String, String>,
            Tuple2<String, Map<String, String>>>() {
            @Override
            public void flatMap(Map<String, String> patternRules,
                    Collector<Tuple2<String, Map<String, String>>> out) throws Exception {
                for (Map.Entry<String, String> stringEntry : patternRules.entrySet()) {
                    JSONObject jsonObject = new JSONObject(stringEntry.getValue());
                    Map<String, String> map = new HashMap<>();
                    for (String key : jsonObject.keySet()) {
                        String value = jsonObject.get(key).toString();
                        map.put(key, value);
                    }
                    out.collect(new Tuple2<>(stringEntry.getKey(), map));
                }
            }
        });
    BroadcastStream<Tuple2<String, Map<String, String>>> patternRuleBroadcast =
        patternRuleStream.broadcast(patternRuleDescriptor);
    DataStream<Tuple2<String, JSONObject>> validSignal = inputSignal.map(new MapFunction<JSONObject,
            Tuple2<String, JSONObject>>() {
        @Override
        public Tuple2<String, JSONObject> map(JSONObject inputSignal) throws Exception {
            String source =
                inputSignal.getSource();
            return new Tuple2<>(source, inputSignal);
        }
    }).keyBy(0).connect(patternRuleBroadcast).process(new MyKeyedBroadCastProcessFunction());
    validSignal.map(new MapFunction<Tuple2<String, JSONObject>,
            JSONObject>() {
        @Override
        public JSONObject map(Tuple2<String, JSONObject> inputSignal) throws Exception {
            return inputSignal.f1;
        }
    }).addSink(new getDataStreamOutput());
    env.execute("TestFlink");
    return (getDataStreamOutput.dataStreamOutput);
}
#SuppressWarnings("serial")
public static final class getDataStreamOutput implements SinkFunction<JSONObject> {
public static String dataStreamOutput;
public void invoke(JSONObject inputSignal) throws Exception {
dataStreamOutput = inputSignal.toString();
}
}
I need to test different inputs with the same broadcast rules, but each time I call this function it does the whole processing from the beginning: take the input signal, broadcast the data, and so on. Is there a way I can broadcast once and keep sending inputs to the method? I explored whether I could use a CoFlatMapFunction, something like the code below, to combine the data streams and keep sending the input rules while the method is running, but then one of the data streams has to keep getting data from a Kafka topic, and loading the Kafka utils and server would again overburden the method:
DataStream<JSONObject> inputSignalFromKafka = env.addSource(inputSignalKafka);
DataStream<org.json.JSONObject> inputSignalFromMethod = env.fromElements(inputSignal);
DataStream<JSONObject> inputSignal = inputSignalFromMethod.connect(inputSignalFromKafka)
    .flatMap(new SignalCoFlatMapper());
public static class SignalCoFlatMapper
        implements CoFlatMapFunction<JSONObject, JSONObject, JSONObject> {

    @Override
    public void flatMap1(JSONObject inputValue, Collector<JSONObject> out) throws Exception {
        out.collect(inputValue);
    }

    @Override
    public void flatMap2(JSONObject kafkaValue, Collector<JSONObject> out) throws Exception {
        out.collect(kafkaValue);
    }
}
I found a link on Stack Overflow, How to unit test BroadcastProcessFunction in flink when processElement depends on broadcasted data, but it confused me a lot.
Is there any way I can broadcast only once, in a @Before method of the test cases, and keep sending different kinds of data to my broadcast function?
You can use a KeyedTwoInputStreamOperatorTestHarness to achieve this. For example, let's assume you have the following KeyedBroadcastProcessFunction, where you define some business logic for both DataStream channels:
public class SimpleKeyedBroadcastProcessFunction extends KeyedBroadcastProcessFunction<String, String, String, String> {

    @Override
    public void processElement(String inputEntry,
            ReadOnlyContext readOnlyContext, Collector<String> collector) throws Exception {
        // business logic for how you want to process your data stream records
    }

    @Override
    public void processBroadcastElement(String broadcastInput, Context
            context, Collector<String> collector) throws Exception {
        // process input from your broadcast channel
    }
}
Let's now assume your process function is stateful and makes modifications to the Flink internal state. In that case, you have to create a TestHarness inside your test class to ensure you are able to keep track of the state during testing.
I would then create some unit tests using the following approach:
public class SimpleKeyedBroadcastProcessFunctionTest {

    private SimpleKeyedBroadcastProcessFunction processFunction;
    private KeyedTwoInputStreamOperatorTestHarness<String, String, String, String> testHarness;

    @Before
    public void setup() throws Exception {
        processFunction = new SimpleKeyedBroadcastProcessFunction();
        testHarness = new KeyedTwoInputStreamOperatorTestHarness<>(
            new CoBroadcastWithKeyedOperator<>(processFunction, ImmutableList.of(BROADCAST_MAP_STATE_DESCRIPTOR)),
            (KeySelector<String, String>) string -> string,
            (KeySelector<String, String>) string -> string,
            TypeInformation.of(String.class));
        testHarness.setup();
        testHarness.open();
    }

    @After
    public void cleanup() throws Exception {
        testHarness.close();
    }

    @Test
    public void testProcessRegularInput() throws Exception {
        // processElement1 sends elements into the regular stream;
        // the second param is the event time of the record
        testHarness.processElement1(new StreamRecord<>("Hello", 0));
        // access records collected during processElement
        List<StreamRecord<? extends String>> records = testHarness.extractOutputStreamRecords();
        assertEquals("Hello", records.get(0).getValue());
    }

    @Test
    public void testProcessBroadcastInput() throws Exception {
        // processElement2 sends elements into the broadcast stream;
        // the second param is the event time of the record
        testHarness.processElement2(new StreamRecord<>("Hello from Broadcast", 0));
        // access records collected during processElement
        List<StreamRecord<? extends String>> records = testHarness.extractOutputStreamRecords();
        assertEquals("Hello from Broadcast", records.get(0).getValue());
    }
}
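To get the "broadcast once, then send many inputs" flow asked about above, a possible variation (a sketch built on the harness shown here, assuming your processElement emits one record per input) is to feed the broadcast element a single time in setup() and then only send regular elements in each test:

@Before
public void setup() throws Exception {
    // ... build and open the testHarness exactly as above ...
    // feed the broadcast rules once, so every test sees them
    testHarness.processElement2(new StreamRecord<>("Hello from Broadcast", 0));
}

@Test
public void testFirstInput() throws Exception {
    testHarness.processElement1(new StreamRecord<>("input-1", 1));
    // whatever your processElement emits for "input-1" can now be asserted
    List<StreamRecord<? extends String>> records = testHarness.extractOutputStreamRecords();
    assertEquals(1, records.size());
}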

JAX-RS Exception Mapper not working in Grizzly container

We are working on a Jersey web application as a team. As the project got bigger and bigger, we decided to switch from Tomcat to Grizzly to allow deploying parts of the project on different port numbers. What I've found out now is that our custom exception handling no longer works; instead, I always get the Grizzly HTML error page.
Example exception:
public class DataNotFoundException extends RuntimeException {

    private static final long serialVersionUID = -1622261264080480479L;

    public DataNotFoundException(String message) {
        super(message);
        System.out.println("exception constructor called"); // this prints
    }
}
Mapper:
@Provider
public class DataNotFoundExceptionMapper implements ExceptionMapper<DataNotFoundException> {

    public DataNotFoundExceptionMapper() {
        System.out.println("mapper constructor called"); // doesn't print
    }

    @Override
    public Response toResponse(DataNotFoundException ex) {
        System.out.println("toResponse called"); // doesn't print
        // ErrorMessage is a simple POJO with 2 String fields and 1 int field
        ErrorMessage errorMessage = new ErrorMessage(ex.getMessage(), 404, "No documentation yet.");
        return Response.status(Status.NOT_FOUND)
            .entity(errorMessage)
            .build();
    }
}
I'm not sure where the source of the problem is; I can provide more information/code if needed. What's the problem, and what can I try?
EDIT:
Main.class:
public class Main {

    /**
     * Main method.
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        ...
        List<ServerInfo> serverList = new ArrayList<ServerInfo>();
        serverList.add(new ServerInfo(
            "api", 8450,
            new ResourceConfig().registerClasses(
                the.package.was.here.ApiResource.class)
        ));
        for (ServerInfo server : serverList) {
            server.start();
        }
        System.out.println("Press enter to exit...");
        System.in.read();
        for (ServerInfo server : serverList) {
            server.stop();
        }
    }
}
EDIT2:
Based on this question I've tried the ServerProperties.RESPONSE_SET_STATUS_OVER_SEND_ERROR, "true" property, which only helped a little: I still get the Grizzly HTML page when the exception happens, but now I see my exception (+ stack trace) in the body of the page.
You're only registering one resource class for the entire application:
new ResourceConfig().registerClasses(
    eu.arrowhead.core.api.ApiResource.class
)
The mapper needs to be registered as well:
new ResourceConfig().registerClasses(
    eu.arrowhead.core.api.ApiResource.class,
    YourMapper.class
)
You can also use package scanning, which will pick up and automatically register all classes that are annotated with @Path or @Provider:
new ResourceConfig().packages("the.packages.to.scan")
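Applied to the Main class from the question, that would look like this (a sketch; DataNotFoundExceptionMapper is the mapper shown above):

serverList.add(new ServerInfo(
    "api", 8450,
    new ResourceConfig().registerClasses(
        the.package.was.here.ApiResource.class,
        DataNotFoundExceptionMapper.class)
));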

Apache Camel CSV with Header

I have written a simple test app that reads records from a DB and puts the result into a CSV file. So far it works fine, except that the column names, i.e. the headers, are not put into the CSV file. According to the doc they should be put there. I have also tried it with and without streaming and split, but the situation is the same.
In the Camel unit tests, in line 182, the headers are put there explicitly: https://github.com/apache/camel/blob/master/components/camel-csv/src/test/java/org/apache/camel/dataformat/csv/CsvDataFormatTest.java
How could this very simple problem be solved without the need to iterate over the headers? I have also experimented with different settings, but the result is always the same: the delimiter I set, for example, is honored, but the headers are not. Thanks in advance for the responses.
I used Camel 2.16.1 like this:
final CsvDataFormat csvDataFormat = new CsvDataFormat();
csvDataFormat.setHeaderDisabled(false);
[...]
from("direct:TEST").routeId("TEST")
    .setBody(constant("SELECT * FROM MYTABLE"))
    .to("jdbc:myDataSource?readSize=100") // max 100 records
    // .split(simple("${body}")) // split the list
    // .streaming() // not to keep all messages in memory
    .marshal(csvDataFormat)
    .to("file:extract?fileName=TEST.csv");
[...]
EDIT 1
I have also tried to add the headers from the exchange.in. They are available there under the name "CamelJdbcColumnNames" as a HashSet. I added them to the csvDataFormat like this:
final CsvDataFormat csvDataFormat = new CsvDataFormat();
csvDataFormat.setHeaderDisabled(false);
[...]
from("direct:TEST").routeId("TEST")
    .setBody(constant("SELECT * FROM MYTABLE"))
    .to("jdbc:myDataSource?readSize=100") // max 100 records
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            headerNames = (HashSet) exchange.getIn().getHeader("CamelJdbcColumnNames");
            System.out.println("#### Process headernames = " + new ArrayList<String>(headerNames).toString());
            csvDataFormat.setHeader(new ArrayList<String>(headerNames));
        }
    })
    .marshal(csvDataFormat) // .tracing()
    .to("file:extract?fileName=TEST.csv");
The println() prints the column names, but the generated CSV file does not contain them.
EDIT 2
I added the header names to the body as proposed in comment 1 like this:
.process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        Set<String> headerNames = (HashSet) exchange.getIn().getHeader("CamelJdbcColumnNames");
        Map<String, String> nameMap = new LinkedHashMap<String, String>();
        for (String name : headerNames) {
            nameMap.put(name, name);
        }
        List<Map> listWithHeaders = new ArrayList<Map>();
        listWithHeaders.add(nameMap);
        List<Map> records = exchange.getIn().getBody(List.class);
        listWithHeaders.addAll(records);
        exchange.getIn().setBody(listWithHeaders, List.class);
        System.out.println("#### Process headernames = " + new ArrayList<String>(headerNames).toString());
        csvDataFormat.setHeader(new ArrayList<String>(headerNames));
    }
})
The proposal solved the problem, and thank you for that, but it means that CsvDataFormat is not really usable. After the JDBC query the exchange body contains an ArrayList of HashMaps, where each map holds one record of the table: the key of each entry is the column name and the value is the value. So setting the config value for the header output in CsvDataFormat should be more than enough to get the headers generated. Do you know a simpler solution, or did I miss something in the configuration?
You take the data from a database with JDBC, so you need to add the headers yourself first to the message body so that they form the first row. The result set from the JDBC endpoint is just the data, not including headers.
I have done it by overriding BindyCsvDataFormat and BindyCsvFactory:
public class BindySplittedCsvDataFormat extends BindyCsvDataFormat {

    private boolean marshallingFirstLot = false;

    public BindySplittedCsvDataFormat() {
        super();
    }

    public BindySplittedCsvDataFormat(Class<?> type) {
        super(type);
    }

    @Override
    public void marshal(Exchange exchange, Object body, OutputStream outputStream) throws Exception {
        marshallingFirstLot = Integer.valueOf(0).equals(exchange.getProperty("CamelSplitIndex"));
        super.marshal(exchange, body, outputStream);
    }

    @Override
    protected BindyAbstractFactory createModelFactory(FormatFactory formatFactory) throws Exception {
        BindySplittedCsvFactory bindyCsvFactory = new BindySplittedCsvFactory(getClassType(), this);
        bindyCsvFactory.setFormatFactory(formatFactory);
        return bindyCsvFactory;
    }

    protected boolean isMarshallingFirstLot() {
        return marshallingFirstLot;
    }
}

public class BindySplittedCsvFactory extends BindyCsvFactory {

    private BindySplittedCsvDataFormat bindySplittedCsvDataFormat;

    public BindySplittedCsvFactory(Class<?> type, BindySplittedCsvDataFormat bindySplittedCsvDataFormat) throws Exception {
        super(type);
        this.bindySplittedCsvDataFormat = bindySplittedCsvDataFormat;
    }

    @Override
    public boolean getGenerateHeaderColumnNames() {
        return super.getGenerateHeaderColumnNames() && bindySplittedCsvDataFormat.isMarshallingFirstLot();
    }
}
My solution with Spring XML (though I'd still like to have an option for also extracting the header on top):
<multicast stopOnException="true">
    <pipeline>
        <log message="saving table ${headers.tablename} header to ${headers.CamelFileName}..."/>
        <setBody>
            <groovy>request.headers.get('CamelJdbcColumnNames').join(";") + "\n"</groovy>
        </setBody>
        <to uri="file:output"/>
    </pipeline>
    <pipeline>
        <log message="saving table ${headers.tablename} rows to ${headers.CamelFileName}..."/>
        <marshal>
            <csv delimiter=";" headerDisabled="false" useMaps="true"/>
        </marshal>
        <to uri="file:output?fileExist=Append"/>
    </pipeline>
</multicast>
http://www.redaelli.org/matteo-blog/2019/05/24/exporting-database-tables-to-csv-files-with-apache-camel/

Add new line at the end of Jersey generated JSON

I have a Jersey (1.x) based REST service. It uses Jackson 2.4.4 to generate JSON responses. I need to add a newline character at the end of the response (cURL users complain that there's no newline in responses). I am using the Jersey pretty-print feature (SerializationFeature.INDENT_OUTPUT).
current: {\n "prop" : "value"\n}
wanted: {\n "prop" : "value"\n}\n
I tried using a custom serializer, but I need to add \n only at the end of the root object, and a serializer is defined per data type, which means that if an instance of such a class is nested in a response, I'd get \n in the middle of my JSON.
I thought of subclassing com.fasterxml.jackson.core.JsonGenerator and overriding close(), where I'd add writeRaw('\n'), but that feels very hacky.
Another idea would be to add a Servlet filter which would rewrite the response from the Jersey filter, adding the \n and incrementing the content length by 1. That seems not only hacky but also inefficient.
I could also give up on Jersey taking care of serializing the content and do ObjectMapper.writeValue() + "\n", but this is quite intrusive to my code (I'd need to change many places).
What is the clean solution for that problem?
I have found these threads about the same problem, but none of them provides a solution:
http://markmail.org/message/nj4aqheqobmt4o5c
http://jackson-users.ning.com/forum/topics/add-newline-after-object-serialization-in-jersey
Update
Finally I went for @arachnid's solution with NewlineAddingPrettyPrinter (and also bumped the Jackson version to 2.6.2). Sadly, it does not work out of the box with Jackson as the JAX-RS JSON provider: a PrettyPrinter changed on the ObjectMapper does not get propagated to the JsonGenerator (see here why). To make it work, I had to add a response filter which sets an ObjectWriterModifier (now I can easily toggle between pretty-printed and minimal output, based on an input param):
@Provider
public class PrettyPrintFilter extends BaseResponseFilter {

    public ContainerResponse filter(ContainerRequest request, ContainerResponse response) {
        ObjectWriterInjector.set(new PrettyPrintToggler(true));
        return response;
    }

    final class PrettyPrintToggler extends ObjectWriterModifier {

        private static final PrettyPrinter NO_PRETTY_PRINT = new MinimalPrettyPrinter();

        private final boolean usePrettyPrint;

        public PrettyPrintToggler(boolean usePrettyPrint) {
            this.usePrettyPrint = usePrettyPrint;
        }

        @Override
        public ObjectWriter modify(EndpointConfigBase<?> endpoint, MultivaluedMap<String, Object> responseHeaders,
                Object valueToWrite, ObjectWriter w, JsonGenerator g) throws IOException {
            if (usePrettyPrint) g.setPrettyPrinter(new NewlineAddingPrettyPrinter());
            else g.setPrettyPrinter(NO_PRETTY_PRINT);
            return w;
        }
    }
}
Actually, wrapping up (not subclassing) JsonGenerator isn't too bad:
public static final class NewlineAddingJsonFactory extends JsonFactory {

    @Override
    protected JsonGenerator _createGenerator(Writer out, IOContext ctxt) throws IOException {
        return new NewlineAddingJsonGenerator(super._createGenerator(out, ctxt));
    }

    @Override
    protected JsonGenerator _createUTF8Generator(OutputStream out, IOContext ctxt) throws IOException {
        return new NewlineAddingJsonGenerator(super._createUTF8Generator(out, ctxt));
    }
}

public static final class NewlineAddingJsonGenerator extends JsonGenerator {

    private final JsonGenerator underlying;
    private int depth = 0;

    public NewlineAddingJsonGenerator(JsonGenerator underlying) {
        this.underlying = underlying;
    }

    @Override
    public void writeStartObject() throws IOException {
        underlying.writeStartObject();
        ++depth;
    }

    @Override
    public void writeEndObject() throws IOException {
        underlying.writeEndObject();
        if (--depth == 0) {
            underlying.writeRaw('\n');
        }
    }

    // ... and delegate all the other methods of JsonGenerator (CGLIB can hide this if you put in some time)
}

@Test
public void append_newline_after_end_of_json() throws Exception {
    ObjectWriter writer = new ObjectMapper(new NewlineAddingJsonFactory()).writer();
    assertThat(writer.writeValueAsString(ImmutableMap.of()), equalTo("{}\n"));
    assertThat(writer.writeValueAsString(ImmutableMap.of("foo", "bar")), equalTo("{\"foo\":\"bar\"}\n"));
}
A servlet filter isn't necessarily too bad either, although the ServletOutputStream interface has recently become more involved to intercept properly.
I found doing this via PrettyPrinter problematic on earlier Jackson versions (such as your 2.4.4), in part because of the need to go through an ObjectWriter to configure it properly; this was only fixed in Jackson 2.6. For completeness, here is a working 2.5 solution:
@Test
public void append_newline_after_end_of_json() throws Exception {
    // Jackson 2.6:
    // ObjectMapper mapper = new ObjectMapper()
    //     .setDefaultPrettyPrinter(new NewlineAddingPrettyPrinter())
    //     .enable(SerializationFeature.INDENT_OUTPUT);
    // ObjectWriter writer = mapper.writer();
    ObjectMapper mapper = new ObjectMapper();
    ObjectWriter writer = mapper.writer().with(new NewlineAddingPrettyPrinter());
    assertThat(writer.writeValueAsString(ImmutableMap.of()), equalTo("{}\n"));
    assertThat(writer.writeValueAsString(ImmutableMap.of("foo", "bar")),
        equalTo("{\"foo\":\"bar\"}\n"));
}

public static final class NewlineAddingPrettyPrinter
        extends MinimalPrettyPrinter
        implements Instantiatable<PrettyPrinter> {

    private int depth = 0;

    @Override
    public void writeStartObject(JsonGenerator jg) throws IOException, JsonGenerationException {
        super.writeStartObject(jg);
        ++depth;
    }

    @Override
    public void writeEndObject(JsonGenerator jg, int nrOfEntries) throws IOException, JsonGenerationException {
        super.writeEndObject(jg, nrOfEntries);
        if (--depth == 0) {
            jg.writeRaw('\n');
        }
    }

    @Override
    public PrettyPrinter createInstance() {
        return new NewlineAddingPrettyPrinter();
    }
}
Not yet tested but the following should work:
public class MyObjectMapper extends ObjectMapper {

    public MyObjectMapper() {
        _defaultPrettyPrinter = new com.fasterxml.jackson.core.util.MinimalPrettyPrinter("\n");
    }

    // AND/OR
    @Override
    protected PrettyPrinter _defaultPrettyPrinter() {
        return new com.fasterxml.jackson.core.util.MinimalPrettyPrinter("\n");
    }
}
public class JerseyConfiguration extends ResourceConfig {
    ...
    public JerseyConfiguration() {
        MyObjectMapper mapper = new MyObjectMapper();
        mapper.enable(SerializationFeature.INDENT_OUTPUT); // enables pretty printing
        // create a JsonProvider to provide the custom ObjectMapper
        JacksonJaxbJsonProvider provider = new JacksonJaxbJsonProvider();
        provider.setMapper(mapper);
        register(provider); // register so that Jersey uses it
    }
}
I don't know if this is the "cleanest" solution, but it feels less hacky than the others.
It should produce something like
{\n "root" : "1"\n}\n{\n "root2" : "2"\n}
But it seems this does not work if there is only one root element, since the string passed to MinimalPrettyPrinter is used as the root value separator, which is only written between root-level values, not after the last one.
The idea is from https://gist.github.com/deverton/7743979

Simple camel cxfrs consumer that consumes json and creates a map

I am struggling with a simple task. I want to create a cxfrs consumer that simply consumes JSON.
The JSON should be converted to a simple map (key -> value). I created a simple test:
@Test
public final void test() throws Exception {
    MockEndpoint mockOut = context.getEndpoint(MOCK_OUT, MockEndpoint.class);
    mockOut.expectedMessageCount(1);
    context.addRoutes(createRouteBuilder());
    context.start();
    context.createProducerTemplate().sendBody(DIRECT_A, "{ \"ussdCode\":\"101#\",\"msisdn\":\"491234567\"}");
    mockOut.assertIsSatisfied();
}
private RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from(DIRECT_A).to("cxfrs://http://localhost:8085/ussd");
            from("cxfrs://http://localhost:8085/ussd")
                .unmarshal().json(JsonLibrary.Jackson)
                .process(to).to(MOCK_OUT);
        }
    };
}
The problem is that on context.start() I get a ServiceConstructionException: No resource classes found. I also tried to create the consumer this way (setting the binding style):
private Endpoint fromCxfRsEndpoint() {
    CxfRsEndpoint cxfRsEndpoint = context.getEndpoint("cxfrs://http://localhost:8085/ussd", CxfRsEndpoint.class);
    cxfRsEndpoint.setBindingStyle(BindingStyle.SimpleConsumer);
    return cxfRsEndpoint;
}
This didn't help either. So how do I create a simple REST/JSON consumer and unmarshal the payload to a simple map?
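For reference, a hedged sketch of the usual cause (an assumption, not a confirmed answer from this thread): "No resource classes found" typically means the cxfrs consumer endpoint has no JAX-RS resource class describing it. One way to supply one is the resourceClasses endpoint option, combined with bindingStyle=SimpleConsumer; the class and package names below are hypothetical:

// Hypothetical resource class used only to describe the endpoint to CXF;
// with Camel as the consumer, the method body itself is never executed.
@Path("/")
public class UssdResource {
    @POST
    @Consumes("application/json")
    public void handle(String body) {
    }
}

from("cxfrs://http://localhost:8085/ussd"
        + "?resourceClasses=com.example.UssdResource"
        + "&bindingStyle=SimpleConsumer")
    .unmarshal().json(JsonLibrary.Jackson)
    .to(MOCK_OUT);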