I've implemented a Mule flow that reads CSV files and inserts the records into Salesforce using a batch job.
To handle errors, I created a batch step that only accepts failed records.
To verify that it works, I modified the original values so that the insert fails.
The Salesforce response message is JSON containing a field called statusCode with the following value: INVALID_TYPE_ON_FIELD_IN_RECORD.
However, Mule does not recognize this as an error and does not fail, so the record never enters the failed-records step.
How can I change this? Should I fix it on the Salesforce side, or add the statusCode cases in Error Mapping?
In Mule 4 you can use raise-error to force an error. You just need to define the expression that should trigger it:
#[sizeOf((payload.errors default [])) > 0]
or
#[payload.errors[0].statusCode=='INVALID_TYPE_ON_FIELD_IN_RECORD']
etc.
Example using choice router:
<choice doc:name="successful?">
    <when expression="#[sizeOf((payload.errors default [])) > 0]">
        <raise-error type="APP:INVALID_TYPE_ON_FIELD_IN_RECORD" />
    </when>
</choice>
An alternative to controlling the flow with errors is to set the acceptExpression on the batch step to the same expression:
<batch:step name="step1" acceptExpression="#[sizeOf((payload.errors default [])) > 0]">
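For context, a minimal sketch of how that step could sit inside the batch job (the job, step, and logger names here are hypothetical placeholders):
<batch:job jobName="csvToSalesforceBatch">
    <batch:process-records>
        <batch:step name="insertStep">
            <!-- Salesforce create/upsert goes here -->
        </batch:step>
        <!-- only records whose payload carries Salesforce errors enter this step -->
        <batch:step name="failedRecordsStep" acceptExpression="#[sizeOf((payload.errors default [])) > 0]">
            <logger level="ERROR" message="#[payload.errors]" doc:name="Log failed record"/>
        </batch:step>
    </batch:process-records>
</batch:job>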
I'm using Camel in a REST context and I have to manipulate JSON received from a request. It's something like:
{
    "field1": "abc",
    "field2": "def"
}
All I have to do is extract field1 and field2 and put them into two properties, so I tried something like this:
<setProperty propertyName="Field1">
<jsonpath>$.field1</jsonpath>
</setProperty>
<setProperty propertyName="Field2">
<jsonpath>$.field2</jsonpath>
</setProperty>
but I get this error:
org.apache.camel.ExpressionEvaluationException:
com.jayway.jsonpath.PathNotFoundException: Expected to find an object with property ['field2'] in path $ but found 'java.lang.String'. This is not a json object according to the JsonProvider: 'com.jayway.jsonpath.spi.json.JsonSmartJsonProvider'.
After some tests I found out that my body was empty after the first use of jsonpath.
The same process applied to XML using xpath doesn't give any error, and I'm wondering if it's possible to do the same with jsonpath instead of creating a mapper object in Java. Thank you in advance.
If the processed Camel message is of type InputStream, this stream can obviously be read only once.
To solve this, either:
enable Camel stream caching (http://camel.apache.org/stream-caching.html), or
insert a step in your route (before the jsonpath queries) to convert the message body to a String so that it can be read multiple times, e.g. <convertBodyTo type="java.lang.String" charset="ISO-8859-1"/>, as in the sketch below.
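Putting the pieces together, a minimal route sketch based on the snippets above (the direct:start endpoint is just a placeholder):
<route>
    <from uri="direct:start"/>
    <!-- read the InputStream into a String once so the body can be read repeatedly -->
    <convertBodyTo type="java.lang.String"/>
    <setProperty propertyName="Field1">
        <jsonpath>$.field1</jsonpath>
    </setProperty>
    <setProperty propertyName="Field2">
        <jsonpath>$.field2</jsonpath>
    </setProperty>
</route>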
I'm working with Dropwizard 1.3.2, which does logging using SLF4J over Logback. I am writing logs for ingestion into ElasticSearch, so I thought I'd use JSON logging and make some Kibana dashboards. But I really want more than one JSON item per log message - if I am recording a status update with ten fields, I would ideally like to log the object and have the JSON fields show up as top level entries in the JSON log. I did get MDC working but that is very clumsy and doesn't flatten objects.
That's turned out to be difficult! I have it logging in JSON, but I can't nicely log multiple JSON fields. How can I do that?
Things I've done:
My Dropwizard configuration has this appender:
appenders:
  - type: console
    target: stdout
    layout:
      type: json
      timestampFormat: "ISO_INSTANT"
      prettyPrint: false
      appendLineSeparator: true
      additionalFields:
        keyOne: "value one"
        keyTwo: "value two"
      flattenMdc: true
The additional fields show up, but those values seem to be fixed in the configuration file and don't change. There is a "customFieldNames" option but no documentation on how to use it, and no matter what I put in there I get a "no String-argument constructor/factory method to deserialize from String value" error. (The docs have an example value of "#timestamp" but no explanation, and even that generates the error. They also have examples like "(requestTime:request_time, userAgent:user_agent)", but again it's undocumented, and I can't make anything similar work; everything I've tried generates the error above.)
I did get MDC to work, but it seems silly to plug in each item into MDC and then clear it.
And I can serialize an object and log it as nested JSON, but that also seems weird.
All the answers I've seen on this are old - does anyone have any advice on how to do this nicely inside Dropwizard?
You can use Logback explicitly in Dropwizard through a custom logger factory, set it up with logstash-logback-encoder, and configure it to write out to a JSON appender.
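As a minimal sketch of that wiring (this assumes a plain logback.xml and the stock LogstashEncoder; the composite encoder shown afterwards is the more flexible variant):
<configuration>
    <!-- console appender emitting one JSON object per log event -->
    <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="JSON"/>
    </root>
</configuration>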
A fuller composite JSON encoder may look like this:
<included>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <pattern>
                <pattern>
                    {
                      "id": "%uniqueId",
                      "relative_ns": "#asLong{%nanoTime}",
                      "tse_ms": "#asLong{%tse}",
                      "start_ms": "#asLong{%startTime}",
                      "cpu": "%cpu",
                      "mem": "%mem",
                      "load": "%loadavg"
                    }
                </pattern>
            </pattern>
            <timestamp>
                <!-- UTC is the best server consistent timezone -->
                <timeZone>${encoders.json.timeZone}</timeZone>
                <pattern>${encoders.json.timestampPattern}</pattern>
            </timestamp>
            <version/>
            <message/>
            <loggerName/>
            <threadName/>
            <logLevel/>
            <logLevelValue/><!-- numeric value is useful for filtering >= -->
            <stackHash/>
            <mdc/>
            <logstashMarkers/>
            <arguments/>
            <provider class="com.tersesystems.logback.exceptionmapping.json.ExceptionArgumentsProvider">
                <fieldName>exception</fieldName>
            </provider>
            <stackTrace>
                <!--
                    https://github.com/logstash/logstash-logback-encoder#customizing-stack-traces
                -->
                <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                    <rootCauseFirst>${encoders.json.shortenedThrowableConverter.rootCauseFirst}</rootCauseFirst>
                    <inlineHash>${encoders.json.shortenedThrowableConverter.inlineHash}</inlineHash>
                </throwableConverter>
            </stackTrace>
        </providers>
    </encoder>
</included>
The full file is on GitHub, and it produces output like this:
{"id":"FfwJtsNHYSw6O0Qbm7EAAA","relative_ns":20921024,"tse_ms":1584163814965,"start_ms":null,"#timestamp":"2020-03-14T05:30:14.965Z","#version":"1","message":"Creating Pool for datasource 'logging'","logger_name":"play.api.db.HikariCPConnectionPool","thread_name":"play-dev-mode-akka.actor.default-dispatcher-7","level":"INFO","level_value":20000}
I'm using Mulesoft ESB 3.7 with MySQL. If I run a query that returns no resultset, I notice the payload has a value of size=0. How do I evaluate that in a choice router? Is it #[flowVars.size == 0] or #[payload == null]?
Thanks
@sam, use a debugger and check whether the resultset is a collection or not. If it is a list, use #[payload.size() == 0]; if not, you'll see whether or not the payload is null.
The database component always returns a List object, so there is no need for a collection check.
You can check directly with #[payload.size() == 0].
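For example, a minimal Mule 3 sketch of that check in a choice router (the logger processors are placeholders):
<choice doc:name="Resultset empty?">
    <when expression="#[payload.size() == 0]">
        <logger level="INFO" message="Query returned no rows" doc:name="No rows"/>
    </when>
    <otherwise>
        <logger level="INFO" message="#[payload.size()] rows returned" doc:name="Has rows"/>
    </otherwise>
</choice>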
For RESTful exception handling, use the validation component's is-not-empty. If the payload is empty, a NotFoundException will be thrown, which is easy to handle in the exception strategy:
<validation:is-not-empty value="#[payload]" exceptionClass="org.mule.module.apikit.exception.NotFoundException" />
I created a saved search of "items" in NetSuite.
<netsuite:search config-ref="NetSuite__Login_Authentication" searchRecord="ITEM_ADVANCED" bodyFieldsOnly="false" returnSearchColumns="true" doc:name="NetSuite"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
When 'returnSearchColumns' is set to "true", I receive the exception below. If this attribute is set to "false", there is no exception, but the response is missing the selected columns.
java.lang.IllegalArgumentException: No enum constant org.mule.module.netsuite.RecordTypeEnum.ITEM
Also, I received a 'ConsumerIterator' object as the response from NetSuite and used the "Object to JSON" transformer right after the NetSuite connector. The response received is an array of item objects.
1) Is there a way to convert this payload into XML format? Neither Object to XML nor JSON to XML gives the entire XML.
2) How can I avoid the above-mentioned IllegalArgumentException?
1) object-to-xml should convert all fields to XML, or you could try something like DataWeave. What exactly is missing?
2) There is no type called 'ITEM'. You have to use one of the values in this list: http://mulesoft.github.io/netsuite-connector/6.0.1/java/org/mule/module/netsuite/RecordTypeEnum.html, such as 'INVENTORY_ITEM'.
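For example, reusing the search element from the question with a valid enum value (this assumes your items are inventory items; pick whichever RecordTypeEnum value actually matches your data):
<netsuite:search config-ref="NetSuite__Login_Authentication" searchRecord="INVENTORY_ITEM"
    bodyFieldsOnly="false" returnSearchColumns="true" doc:name="NetSuite"/>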
MarkLogic REST Client API's default search endpoint results in a server error when using a query options node that contains more than one extract-path, even though the request is successful when either extract-path is used individually within extract-document-data:
{"errorResponse":{"statusCode":500, "status":"Internal Server Error", "messageCode":"RESTAPI-INTERNALERROR", "message":"RESTAPI-INTERNALERROR: (err:FOER0000) Internal error: JSON build, unbalanced pairs: "}}
The offending paths:
<extract-path xmlns:tei="http://www.tei-c.org/ns/1.0" xmlns:FO="http://founders.archives.gov/">/tei:text/FO:metadata/FO:ProjectCode</extract-path>
<extract-path xmlns:tei="http://www.tei-c.org/ns/1.0" xmlns:FO="http://founders.archives.gov/">/tei:text/FO:metadata/FO:ShortProjectTitle</extract-path>
This only occurs when the format is JSON; XML format behaves as expected. The error can be reproduced across disparate datasets.
The entire options node:
<options xmlns="http://marklogic.com/appservices/search">
    <search-option>unfiltered</search-option>
    <quality-weight>0</quality-weight>
    <page-length>10</page-length>
    <extract-document-data selected="include">
        <extract-path xmlns:tei="http://www.tei-c.org/ns/1.0" xmlns:FO="http://founders.archives.gov/">/tei:text/FO:metadata/FO:ProjectCode</extract-path>
        <extract-path xmlns:tei="http://www.tei-c.org/ns/1.0" xmlns:FO="http://founders.archives.gov/">/tei:text/FO:metadata/FO:ShortProjectTitle</extract-path>
    </extract-document-data>
</options>
I would simply extract the parent element FO:metadata; however, that returns a string, indicating a dependency on a parsing library (does it not?) which I would rather avoid if possible.
Any suggested workarounds are appreciated. Thanks.
There is a known bug with the inline response that should be fixed in MarkLogic 8.0-3.
In the interim, it should work to get the extracted fragments either as XML or as a multipart/mixed response (which, if the source documents are XML, would also be XML).