Mule Studio, Transform byte array into MySQL - mysql

I've got a Magento connection up and running and want to get all customers.
My subflow looks like this:
<sub-flow name="listCustomers" doc:name="listCustomers">
<magento:list-customers config-ref="MagentoConnecter" doc:name="Magento"/>
<byte-array-to-object-transformer doc:name="Byte Array to Object"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
</sub-flow>
which results in a string. But I'd like to insert the variables/customer data into a MySQL database.
Do I need to use a foreach component?
And how can I address the variables then?
Thanks,
Chris

Foreach seems like a good way to achieve that.
The steps are the following:
1. Transform the JSON representation into a list of maps using the JSON Transformer (the returnClass will be java.util.Map)
2. Introduce the foreach scope
3. Within this scope, insert a jdbc outbound endpoint that performs the insert query
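Putting those steps together, a sketch of the sub-flow might look like this (the extra transformer, the foreach scope, and the queryKey name are illustrative, not from your project):

```xml
<sub-flow name="listCustomers" doc:name="listCustomers">
    <magento:list-customers config-ref="MagentoConnecter" doc:name="Magento"/>
    <byte-array-to-object-transformer doc:name="Byte Array to Object"/>
    <json:object-to-json-transformer doc:name="Object to JSON"/>
    <!-- Turn the JSON string into a java.util.List of java.util.Map entries -->
    <json:json-to-object-transformer returnClass="java.util.List" doc:name="JSON to List of Maps"/>
    <foreach doc:name="For Each Customer">
        <!-- #[payload] is a single customer map per iteration -->
        <jdbc:outbound-endpoint queryKey="insertCustomer" exchange-pattern="one-way" doc:name="MySQL Insert"/>
    </foreach>
</sub-flow>
```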

Related

Camel - json body is consumed after have used jsonpath

I'm using Camel in a REST context and I have to manipulate JSON received from a request. It's something like:
{
"field1": "abc",
"field2": "def"
}
All I have to do is extract field1 and field2 and put them in two properties, so I tried something like this:
<setProperty propertyName="Field1">
<jsonpath>$.field1</jsonpath>
</setProperty>
<setProperty propertyName="Field2">
<jsonpath>$.field2</jsonpath>
</setProperty>
but I get this error:
org.apache.camel.ExpressionEvaluationException:
com.jayway.jsonpath.PathNotFoundException: Expected to find an object with property ['field2'] in path $ but found 'java.lang.String'. This is not a json object according to the JsonProvider: 'com.jayway.jsonpath.spi.json.JsonSmartJsonProvider'.
and after some tests I found out my body was empty after the first use of jsonpath.
The same process applied to an XML using xpath doesn't give any error, and I'm wondering whether it's possible to do the same with jsonpath instead of creating a mapper object in Java. Thank you in advance.
If the processed Camel message is of type InputStream, this stream can obviously be read only once.
To solve this, either:
enable Camel stream caching (http://camel.apache.org/stream-caching.html)
or insert a step before the jsonpath queries in your route to convert the message body to a string, so that it can be read multiple times, e.g.
<convertBodyTo type="java.lang.String" charset="ISO-8859-1"/>
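For example, the conversion step slots into the route just before the jsonpath queries (the endpoint URI here is illustrative):

```xml
<route>
    <from uri="direct:handleJson"/>
    <!-- Read the InputStream once and replace the body with a re-readable String -->
    <convertBodyTo type="java.lang.String" charset="ISO-8859-1"/>
    <setProperty propertyName="Field1">
        <jsonpath>$.field1</jsonpath>
    </setProperty>
    <setProperty propertyName="Field2">
        <jsonpath>$.field2</jsonpath>
    </setProperty>
</route>
```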

How to include multiple JSON fields when using JSON logging with SLF4J?

I'm working with Dropwizard 1.3.2, which does logging using SLF4J over Logback. I am writing logs for ingestion into ElasticSearch, so I thought I'd use JSON logging and make some Kibana dashboards. But I really want more than one JSON item per log message - if I am recording a status update with ten fields, I would ideally like to log the object and have the JSON fields show up as top level entries in the JSON log. I did get MDC working but that is very clumsy and doesn't flatten objects.
That's turned out to be difficult! How can I do that? I have it logging in JSON, but I can't nicely log multiple JSON fields!
Things I've done:
My Dropwizard configuration has this appender:
appenders:
  - type: console
    target: stdout
    layout:
      type: json
      timestampFormat: "ISO_INSTANT"
      prettyPrint: false
      appendLineSeparator: true
      additionalFields:
        keyOne: "value one"
        keyTwo: "value two"
      flattenMdc: true
The additional fields show up, but those values are fixed in the configuration file and don't change. There is a "customFieldNames" option but no documentation on how to use it, and no matter what I put in there I get a "no String-argument constructor/factory method to deserialize from String value" error. The docs have an example value of "#timestamp" but no explanation, and even that generates the error. They also have examples like "(requestTime:request_time, userAgent:user_agent)" but again, undocumented, and I can't make anything similar work; everything I've tried generates the error above.
I did get MDC to work, but it seems silly to plug in each item into MDC and then clear it.
And I can deserialize an object and log it as nested JSON, but that also seems weird.
All the answers I've seen on this are old - does anyone have any advice on how to do this nicely inside Dropwizard?
You can use logback explicitly in Dropwizard using a custom logger factory, and then set it up with logstash-logback-encoder, and configure it to write out to a JSON appender.
The JSON encoder may look like this:
<included>
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<pattern>
<pattern>
{
"id": "%uniqueId",
"relative_ns": "#asLong{%nanoTime}",
"tse_ms": "#asLong{%tse}",
"start_ms": "#asLong{%startTime}",
"cpu": "%cpu",
"mem": "%mem",
"load": "%loadavg"
}
</pattern>
</pattern>
<timestamp>
<!-- UTC is the best server consistent timezone -->
<timeZone>${encoders.json.timeZone}</timeZone>
<pattern>${encoders.json.timestampPattern}</pattern>
</timestamp>
<version/>
<message/>
<loggerName/>
<threadName/>
<logLevel/>
<logLevelValue/><!-- numeric value is useful for filtering >= -->
<stackHash/>
<mdc/>
<logstashMarkers/>
<arguments/>
<provider class="com.tersesystems.logback.exceptionmapping.json.ExceptionArgumentsProvider">
<fieldName>exception</fieldName>
</provider>
<stackTrace>
<!--
https://github.com/logstash/logstash-logback-encoder#customizing-stack-traces
-->
<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
<rootCauseFirst>${encoders.json.shortenedThrowableConverter.rootCauseFirst}</rootCauseFirst>
<inlineHash>${encoders.json.shortenedThrowableConverter.inlineHash}</inlineHash>
</throwableConverter>
</stackTrace>
</providers>
</encoder>
</included>
and produces output like this:
{"id":"FfwJtsNHYSw6O0Qbm7EAAA","relative_ns":20921024,"tse_ms":1584163814965,"start_ms":null,"#timestamp":"2020-03-14T05:30:14.965Z","#version":"1","message":"Creating Pool for datasource 'logging'","logger_name":"play.api.db.HikariCPConnectionPool","thread_name":"play-dev-mode-akka.actor.default-dispatcher-7","level":"INFO","level_value":20000}
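With the `<arguments/>` provider enabled in that encoder, multiple structured fields can be passed per log statement from application code using logstash-logback-encoder's StructuredArguments; each key/value then shows up as a top-level entry in the JSON output. A sketch, with illustrative field names (this assumes logstash-logback-encoder is on the classpath):

```java
import static net.logstash.logback.argument.StructuredArguments.kv;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StatusLogger {
    private static final Logger log = LoggerFactory.getLogger(StatusLogger.class);

    void logStatusUpdate() {
        // Each kv(...) is rendered by the <arguments/> provider as a
        // top-level JSON field, e.g. "status":"ACTIVE","retries":3
        log.info("status update", kv("status", "ACTIVE"), kv("retries", 3));
    }
}
```

This avoids the put-then-clear dance with MDC for per-statement fields.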

AWS Lambda output format - JSON

I'm trying to format the output from a Lambda function into JSON. The Lambda function queries my Amazon Aurora RDS instance and returns an array of rows in the following format:
[[name,age,town,postcode]]
which gives an example output of:
[["James", 23, "Maidenhead", "sl72qw"]]
I understand that mapping templates are designed to translate one format to another, but I don't understand how I can take the output above and map it to a JSON format using these mapping templates.
I have checked the documentation and it only covers converting one JSON to another.
Without seeing the code you're using, it's difficult to give a definitively correct answer, but I suspect what you're after is returning the data from Python as a dictionary and then converting that to JSON.
It looks like this thread contains the relevant details on how to do that.
More specifically, using the DictCursor
cursor = connection.cursor(pymysql.cursors.DictCursor)
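The effect of a dict cursor is that each row comes back keyed by column name, so serializing it yields JSON objects instead of positional arrays. Connecting to Aurora isn't reproducible here, so this sketch uses the stdlib sqlite3 module's Row factory as a stand-in for pymysql's DictCursor; the table and data mirror the example above:

```python
import json
import sqlite3

# Stand-in for the Aurora/pymysql connection; sqlite3.Row gives
# dict-style rows, analogous to pymysql.cursors.DictCursor.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become mapping-like, not plain tuples

conn.execute("CREATE TABLE people (name TEXT, age INTEGER, town TEXT, postcode TEXT)")
conn.execute("INSERT INTO people VALUES ('James', 23, 'Maidenhead', 'sl72qw')")

rows = conn.execute("SELECT name, age, town, postcode FROM people").fetchall()

# Each row is keyed by column name, so json.dumps produces objects,
# not positional arrays like [["James", 23, ...]].
payload = json.dumps([dict(row) for row in rows])
print(payload)  # [{"name": "James", "age": 23, "town": "Maidenhead", "postcode": "sl72qw"}]
```

Returning that list of dicts from the Lambda handler gives the API a self-describing JSON body with no mapping template needed.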

ColdFusion CFHTTP working with data returned from API

I am just starting to work with the Rotten Tomatoes API to retrieve movie information, and I need some help understanding how to work with the data that is returned. This is my first time working with an API such as this, so please forgive me if this sounds basic.
Using cfhttp I can successfully connect to the API and return search data, but I don't really know what format I am getting back. I thought it was JSON, but using isJSON to check it returns false. I would like to be able to call individual fields within the returned data to populate a query result set that I can output to the user.
The code I am using to make the call is simple:
<cfhttp url="#apiURL#movies.json?apikey=#apiKey#&q=#movieName#" method="get" result="httpResp" timeout="120">
<cfhttpparam type="header" name="Content-Type" value="application/json" />
</cfhttp>
<cfdump var="#httpResp#" />
And the data that is being returned:
I don't expect anyone to give me a complete walk-through of how to build my app, but if someone could give me some pointers as to the proper way to convert the data into a query result, or something else I can use, I would appreciate it.
Edit: Didn't realize the image would be so difficult to read, so here's a cut and paste of the data being returned.
{"total":2,"movies":[{"id":"11029","title":"Krull","year":1983,"mpaa_rating":"PG","runtime":120,"release_dates":{"theater":"1983-07-29","dvd":"2001-04-03"},"ratings":{"critics_rating":"Rotten","critics_score":33,"audience_rating":"Spilled","audience_score":49},"synopsis":"","posters":{"thumbnail":"http://content6.flixster.com/movie/25/86/258696_mob.jpg","profile":"http://content6.flixster.com/movie/25/86/258696_pro.jpg","detailed":"http://content6.flixster.com/movie/25/86/258696_det.jpg","original":"http://content6.flixster.com/movie/25/86/258696_ori.jpg"},"abridged_cast":[{"name":"Ken Marshall","id":"162668719","characters":["Prince Colwyn"]},{"name":"Lysette Anthony","id":"162668720","characters":["Lyssa"]},{"name":"Freddie Jones","id":"162664678","characters":["Ynyr"]},{"name":"Francesca Annis","id":"162688297","characters":["Widow of the Web"]},{"name":"Alun Armstrong","id":"770670461","characters":["Torquil"]}],"links":{"self":"http://api.rottentomatoes.com/api/public/v1.0/movies/11029.json","alternate":"http://www.rottentomatoes.com/m/krull/","cast":"http://api.rottentomatoes.com/api/public/v1.0/movies/11029/cast.json","clips":"http://api.rottentomatoes.com/api/public/v1.0/movies/11029/clips.json","reviews":"http://api.rottentomatoes.com/api/public/v1.0/movies/11029/reviews.json","similar":"http://api.rottentomatoes.com/api/public/v1.0/movies/11029/similar.json"}},{"id":"770670060","title":"Bekenntnisse des Hochstaplers Felix Krull (Confessions of Felix 
Krull)","year":1957,"mpaa_rating":"Unrated","runtime":107,"release_dates":{"theater":"1958-03-04"},"ratings":{"critics_score":-1,"audience_rating":"Spilled","audience_score":33},"synopsis":"","posters":{"thumbnail":"http://content7.flixster.com/movie/10/84/16/10841649_mob.jpg","profile":"http://content7.flixster.com/movie/10/84/16/10841649_pro.jpg","detailed":"http://content7.flixster.com/movie/10/84/16/10841649_det.jpg","original":"http://content7.flixster.com/movie/10/84/16/10841649_ori.jpg"},"abridged_cast":[{"name":"Horst Buchholz","id":"162718595","characters":["Felix Krull"]},{"name":"Liselotte Pulver","id":"326392065","characters":["Zaza"]},{"name":"Ingrid Andree","id":"770670669","characters":["Zouzou"]},{"name":"Susi Nicoletti","id":"770670670","characters":["Madame Houpfle"]},{"name":"Paul Dahlke","id":"573372814","characters":["Professor Kuckuck"]}],"alternate_ids":{"imdb":"0050179"},"links":{"self":"http://api.rottentomatoes.com/api/public/v1.0/movies/770670060.json","alternate":"http://www.rottentomatoes.com/m/bekenntnisse-des-hochstaplers-felix-krull-confessions-of-felix-krull/","cast":"http://api.rottentomatoes.com/api/public/v1.0/movies/770670060/cast.json","clips":"http://api.rottentomatoes.com/api/public/v1.0/movies/770670060/clips.json","reviews":"http://api.rottentomatoes.com/api/public/v1.0/movies/770670060/reviews.json","similar":"http://api.rottentomatoes.com/api/public/v1.0/movies/770670060/similar.json"}}],"links":{"self":"http://api.rottentomatoes.com/api/public/v1.0/movies.json?q=Krull&page_limit=30&page=1"},"link_template":"http://api.rottentomatoes.com/api/public/v1.0/movies.json?q={search-term}&page_limit={results-per-page}&page={page-number}"}
Edit: Thanks, Dan. That was the nudge I needed. After I understood how to get at the JSON data, I was able to find the following explanation of how to turn it into a useful query: Work with remote API JSON data in CF.
The data needs to be deserialized:
<cfset tomatoData=DeserializeJSON(httpResp.filecontent)>
<cfdump var="#tomatoData#">
It looks like the top level contains nothing but structs. So you should be able to do:
<cfdump var="#tomatoData.total#"> <!--- A single item --->
<cfdump var="#tomatoData.movies#"> <!--- An array --->
The filecontent looks like JSON. You can refer to it using
#httpResp.filecontent#.
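From there, a sketch of looping over the deserialized structs (field names taken from the pasted JSON; the display format is illustrative):

```cfm
<cfset tomatoData = DeserializeJSON(httpResp.filecontent)>
<cfoutput>
    Total results: #tomatoData.total#<br>
    <!--- tomatoData.movies is an array of structs, one per movie --->
    <cfloop array="#tomatoData.movies#" index="movie">
        #movie.title# (#movie.year#) - critics score: #movie.ratings.critics_score#<br>
    </cfloop>
</cfoutput>
```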

How to access JSON data in Mule ESB

I want to access JSON data generated from the sync flow in an async flow.
I am getting the JSON data from the sync flow correctly, and I want to fetch a certain attribute value from it. My JSON data is as follows:
{"data" : [{"in_timestamp":"2012-12-04","message":"hello","out_timestamp":null,"from_user":"user2","ID":43,"to_user":"user1"}]}
I want to access the to_user attribute from this JSON.
I have tried using #[json:to_user] but it simply prints it as a string and doesn't return any value.
Please help. Thanks in advance.
The right expression based on your sample JSON is:
#[json:data[0]/to_user]
JsonPath expressions are deprecated now, and you will not find much documentation on them.
So, currently you need to use one of the following, depending on the shape of the JSON data, to extract values from it:
<json:json-to-object-transformer returnClass="java.lang.Object" doc:name="JSON to Object"/>
or <json:json-to-object-transformer returnClass="java.util.HashMap" doc:name="JSON to Object"/>
or <json:json-to-object-transformer returnClass="java.util.List" doc:name="JSON to Object"/>
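For example, with the HashMap return class, the to_user value from the sample JSON can then be read with a MEL expression (the flow name and logger are illustrative):

```xml
<flow name="readToUser">
    <json:json-to-object-transformer returnClass="java.util.HashMap" doc:name="JSON to Object"/>
    <!-- payload.data is a list; take the first entry's to_user value -->
    <logger message="#[payload.data[0].to_user]" level="INFO" doc:name="Log to_user"/>
</flow>
```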