Cannot coerce Array error when using the map operator with XML output in Mule 4?

I am getting the following error while using the map operator:
org.mule.runtime.core.internal.message.ErrorBuilder$ErrorImplementation
{
description="Cannot coerce Array (org.mule.weave.v2.model.values.ArrayValue$IteratorArrayValue#22af825a) to String
Trace:
at main (Unknown), while writing Xml
Payload:
%dw 2.0
output application/xml
ns cc someUrl
---
(vars.products.*product map {
    cc#productDetails: {
        cc#productCategory: $.productCategory,
        cc#productName: $.productName,
        cc#productImageData: $.productImageData
    }
})
Products:
[
    product: {productCategory: "A", productName: "name", productImageData: "base64 string"},
    product: {productCategory: "B", productName: "name2", productImageData: "base64 string"},
    product: {productCategory: "C", productName: "name3", productImageData: "base64 string"}
]

XML has no direct representation of arrays. I resolved the error by using reduce() to concatenate the objects of the array into a single object. I also added a root element, which XML requires.
For simplicity, I defined the products as a variable inside the script:
%dw 2.0
output application/xml
ns cc someUrl
var products=[
product:{productCategory: "A", productName:"name", productImageData:"base64 string"},
product:{productCategory: "B", productName:"name2", productImageData:"base64 string"},
product:{productCategory: "C", productName:"name3", productImageData:"base64 string"}
]
---
result: (products.*product map {
    cc#productDetails: {
        cc#productCategory: $.productCategory,
        cc#productName: $.productName,
        cc#productImageData: $.productImageData
    }
}) reduce ((item, accumulator = {}) -> item ++ accumulator)
Output:
<?xml version='1.0' encoding='UTF-8'?>
<result>
<cc:productDetails xmlns:cc="someUrl">
<cc:productCategory>C</cc:productCategory>
<cc:productName>name3</cc:productName>
<cc:productImageData>base64 string</cc:productImageData>
</cc:productDetails>
<cc:productDetails xmlns:cc="someUrl">
<cc:productCategory>B</cc:productCategory>
<cc:productName>name2</cc:productName>
<cc:productImageData>base64 string</cc:productImageData>
</cc:productDetails>
<cc:productDetails xmlns:cc="someUrl">
<cc:productCategory>A</cc:productCategory>
<cc:productName>name</cc:productName>
<cc:productImageData>base64 string</cc:productImageData>
</cc:productDetails>
</result>
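Note that reduce with item ++ accumulator reverses the element order, as the output above shows (C before A). A sketch of an alternative, assuming the same products variable: wrapping the mapped array in a dynamic-elements object ({( ... )}) flattens it into a single object while keeping the original order.

```
%dw 2.0
output application/xml
ns cc someUrl
var products = [
    product: {productCategory: "A", productName: "name", productImageData: "base64 string"},
    product: {productCategory: "B", productName: "name2", productImageData: "base64 string"}
]
---
result: {(products.*product map {
    cc#productDetails: {
        cc#productCategory: $.productCategory,
        cc#productName: $.productName,
        cc#productImageData: $.productImageData
    }
})}
```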


DataWeave 2.0 how to build dynamically populated accumulator for reduce()

I'm trying to convert an array of strings into an object for which each member uses the string for a key, and initializes the value to 0. (Classic accumulator for Word Count, right?)
Here's the style of the input data:
%dw 2.0
output application/dw
var hosts = [
"t.me",
"thewholeshebang.com",
"thegothicparty.com",
"windowdressing.com",
"thegothicparty.com"
]
To get the accumulator, I need a structure in this style:
var histogram_acc = {
"t.me" : 1,
"thewholeshebang.com" : 1,
"thegothicparty.com" : 2,
"windowdressing.com" : 1
}
My thought was that this is a slam-dunk case for reduce(), right?
So to get the de-duplicated list of hosts, we can use this phrase:
hosts distinctBy $
Happy so far. But now for me, it turns wicked.
I thought this might be the gold:
hosts distinctBy $ reduce (ep,acc={}) -> acc ++ {ep: 0}
But the problem is that this didn't work out so well. The first argument to the lambda for reduce() represents the iterating element, in this case the endpoint or address. The lambda appends the new object to the accumulator.
Well, that's how I hoped it would happen, but I got this instead:
{
ep: 0,
ep: 0,
ep: 0,
ep: 0
}
I kind of need it to do better than that.
As you said, reduce is a good fit for this problem. Alternatively, you can use the "dynamic elements" feature of objects to flatten an array of objects into a single object:
%dw 2.0
output application/dw
var hosts = [
"t.me",
"thewholeshebang.com",
"thegothicparty.com",
"windowdressing.com",
"thegothicparty.com"
]
---
{(
hosts
distinctBy $
map (ep) -> {"$(ep)": 0}
)}
See https://docs.mulesoft.com/mule-runtime/4.3/dataweave-types#dynamic_elements
Scenario 1:
The trick for this scenario, I think, is that you need to enclose the distinctBy ... map expression in {}.
Example:
Input:
%dw 2.0
var hosts = [
"t.me",
"thewholeshebang.com",
"thegothicparty.com",
"windowdressing.com",
"thegothicparty.com"
]
output application/json
---
{ // This open bracket will do the trick.
(hosts distinctBy $ map {($):0})
} // See Scenario 2 if you remove or comment this pair bracket
Output:
{
"t.me": 0,
"thewholeshebang.com": 0,
"thegothicparty.com": 0,
"windowdressing.com": 0
}
Scenario 2: If you remove the {} around the distinctBy ... map expression, the output will be an array.
Example:
Input:
%dw 2.0
var hosts = [
"t.me",
"thewholeshebang.com",
"thegothicparty.com",
"windowdressing.com",
"thegothicparty.com"
]
output application/json
---
//{ // This is now commented
(hosts distinctBy $ map {($):0})
//} // This is now commented
Output:
[
{
"t.me": 0
},
{
"thewholeshebang.com": 0
},
{
"thegothicparty.com": 0
},
{
"windowdressing.com": 0
}
]
Scenario 3: If you want to count the total occurrences of each item, you can use groupBy and sizeOf.
Example:
Input:
%dw 2.0
var hosts = [
"t.me",
"thewholeshebang.com",
"thegothicparty.com",
"windowdressing.com",
"thegothicparty.com"
]
output application/json
---
hosts groupBy $ mapObject (value,key) -> {
(key): sizeOf(value)
}
Output:
{
"t.me": 1,
"thewholeshebang.com": 1,
"thegothicparty.com": 2,
"windowdressing.com": 1
}
Hilariously (but perhaps only to me), I discovered the answer while I was writing my question. Hoping that someone will pose this same question, here is what I found.
In order to use the lambda argument in my example (ep) as the key in the constructed object, I must quote and interpolate it:
"$(ep)"
Once I did that, it was a quick passage to:
hosts distinctBy $ reduce (ep, acc={}) -> acc ++ {"$(ep)": 0}
...and then of course this:
{
"t.me": 0,
"thewholeshebang.com": 0,
"thegothicparty.com": 0,
"windowdressing.com": 0
}
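A sketch of an equivalent approach, assuming the same hosts variable: DataWeave also accepts a parenthesized expression as a dynamic key, which avoids string interpolation entirely.

```
%dw 2.0
output application/json
var hosts = ["t.me", "thewholeshebang.com", "thegothicparty.com"]
---
hosts distinctBy $ reduce ((ep, acc = {}) -> acc ++ {(ep): 0})
```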

Merge multiple JSON arrays without changing the sequence in Dataweave 1.0

I have a couple of JSON arrays that I need to combine before sending them to a public API as the input payload, and I would like the records to stay in the sequence in which I specify the arrays in the DataWeave script. Each array is a large dataset with many attributes per record, and I notice the data is jumbled up across all arrays after concatenation. How can I fix this in DataWeave 1.0?
%dw 1.0
%output application/json
---
payload[0] ++ payload[1] ++ payload[2] ++ payload[3] ++ payload[4]
It is not clear exactly what is not working for you or what the payloads in your example are, but simply concatenating the arrays works:
%dw 1.0
%output application/json
%var array1 = [1,2,3,4]
%var array2 = [5,6,7,8]
---
array1 ++ array2
Output
[
1,
2,
3,
4,
5,
6,
7,
8
]
The only way you're going to consistently ensure the order is to add a clause that explicitly orders the data. I'm not sure you have an ordering key, based on the information you provided, but it would look something like:
%dw 1.0
%output application/json
%var data = [[1,2,3],[4,5,6]]
---
data reduce ((item, acc=[]) -> acc ++ (item orderBy $))
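If the payloads arrive as an array of arrays and you simply want them joined in the order given (rather than sorted), DataWeave 1.0's flatten concatenates the inner arrays while preserving each one's internal sequence. A minimal sketch:

```
%dw 1.0
%output application/json
%var data = [[1,2,3],[4,5,6]]
---
flatten data
```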

cannot parse json to xml using xmltodict.unparse when converting json list

Trying to do this using xmltodict.unparse:
I have this structure in json:
"organisations": [
{"organisation": "org1"},
{"organisation": "org2"},
{"organisation": "org3"}
]
But it comes out like this in XML:
<organisations><organisation>org1</organisation></organisations>
<organisations><organisation>org2</organisation></organisations>
<organisations><organisation>org2</organisation></organisations>
I wanted like this:
<organisations>
<organisation>org1</organisation>
<organisation>org2</organisation>
<organisation>org2</organisation>
</organisations>
I'm using xmltodict.unparse:
def dict_to_xml(d, pretty_print=False, indent=DEFAULT_INDENT, document_root="root"):
    if len(d.keys()) != 1:
        d = {
            document_root: d
        }
    res = xmltodict.unparse(d, indent=indent, short_empty_elements=True)
    if pretty_print:
        res = pretty_print_xml(res).strip()
    return res
Anyone know what to do without hacking xmltodict??
thanks
I don't know much about XML, but I got curious about this question and noticed this in the xmltodict README:
Lists that are specified under a key in a dictionary use the key as a tag for each item.
https://github.com/martinblech/xmltodict#roundtripping
My approach was to reverse engineer the result you're after:
expected = '''
<organisations>
<organisation>org1</organisation>
<organisation>org2</organisation>
<organisation>org2</organisation>
</organisations>
'''
print(json.dumps(xmltodict.parse(expected), indent=4))
output:
{
"organisations": {
"organisation": [
"org1",
"org2",
"org2"
]
}
}
And "round tripping" that, gives the result you're after:
reverse = {
"organisations": {
"organisation": [
"org1",
"org2",
"org2"
]
}
}
print(xmltodict.unparse(reverse, pretty=True))
output:
<?xml version="1.0" encoding="utf-8"?>
<organisations>
<organisation>org1</organisation>
<organisation>org2</organisation>
<organisation>org2</organisation>
</organisations>
HTH!
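The original list-of-dicts shape can be converted to that grouped shape programmatically before calling unparse. A minimal sketch using only the standard library (the helper name group_list_for_xml is mine, not from xmltodict):

```python
from collections import defaultdict

def group_list_for_xml(items):
    """Merge a list of single-key dicts into one dict whose values are lists,
    so xmltodict.unparse emits repeated child tags under one parent."""
    grouped = defaultdict(list)
    for item in items:
        for key, value in item.items():
            grouped[key].append(value)
    return dict(grouped)

organisations = [
    {"organisation": "org1"},
    {"organisation": "org2"},
    {"organisation": "org3"},
]
grouped = {"organisations": group_list_for_xml(organisations)}
# Passing `grouped` to xmltodict.unparse(grouped, pretty=True) should then
# yield one <organisations> parent with three <organisation> children.
```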

Multiple JSON payload to CSV file

I have a task to generate a CSV file from two JSON payloads. Below is sample data, provided for understanding purposes.
- Payload-1
[
{
"id": "Run",
"errorMessage": "Cannot Run"
},
{
"id": "Walk",
"errorMessage": "Cannot Walk"
}
]
- Payload-2 (**Source Input**) in flowVars
[
{
"Action1": "Run",
"Action2": ""
},
{
"Action1": "",
"Action2": "Walk"
},
{
"Action1": "Sleep",
"Action2": ""
}
]
Now, I have to generate a CSV file that adds one extra column, ErrorMessage, to the source input: where the id in payload 1 matches a source input field, that errorMessage should be assigned to the matching row, and the result written out as a CSV file.
I tried the below DataWeave:
%dw 1.0
%output application/csv header=true
---
flowVars.InputData map (val,index)->{
Action1: val.Action1,
Action2: val.Action2,
(
payload filter ($.id == val.Action1 or $.id == val.Action2) map (val2,index) -> {
ErrorMessage: val2.errorMessage replace /([\n,\/])/ with ""
}
)
}
But here I'm facing an issue: I'm able to generate the file with the data as expected, but the ErrorMessage header is missing/not appearing in the file with my real data (in production). Kindly assist me.
I'm expecting the below CSV output:
Action1,Action2,ErrorMessage
Run,,Cannot Run
,Walk,Cannot Walk
Sleep,,
Hello, the best way to solve this kind of problem is using groupBy. The idea is that you group one of the two parts by the join key, then iterate the other part and do a lookup. This way you avoid O(n^2) and reduce it to O(n):
%dw 1.0
%var payloadById = payload groupBy $.id
%output application/csv
---
flowVars.InputData map ((value, index) ->
using(locatedError = payloadById[value.Action2][0] default payloadById[value.Action1][0]) (
(value ++ {ErrorMessage: locatedError.errorMessage replace /([\n,\/])/ with ""}) when locatedError != null otherwise value
)
)
filter $ != null
Assuming "Payload-1" is payload and "Payload-2" is flowVars.actions, I would first create a key-value lookup from the payload, then use it to populate flowVars.actions:
%dw 1.0
%output application/csv header=true
// Creates lookup, e.g.:
// {"Run": "Cannot run", "Walk": "Cannot walk"}
%var errorMsgLookup = payload reduce ((obj, lookup={}) ->
lookup ++ {(obj.id): obj.errorMessage})
---
flowVars.actions map ((action) -> action ++ {ErrorMessage: errorMsgLookup[action.Action1]})
Note: I'm also assuming the payload's id values are unique across the array.
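A sketch extending that lookup to cover both Action1 and Action2 (field names taken from the question; not verified against a runtime):

```
%dw 1.0
%output application/csv header=true
%var errorMsgLookup = payload reduce ((obj, lookup = {}) ->
    lookup ++ {(obj.id): obj.errorMessage})
---
flowVars.InputData map ((action) -> action ++ {
    ErrorMessage: errorMsgLookup[action.Action1]
        default errorMsgLookup[action.Action2]
        default ""
})
```

Because every row gets an ErrorMessage key (empty string when there is no match), the CSV header should always include the ErrorMessage column, which addresses the missing-header symptom.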

json key iteration in DW mule

I have a requirement where I need to iterate over dynamic JSON keys.
This is my input:
[
{
"eventType":"ORDER_SHIPPED",
"entityId":"d0594c02-fb0e-47e1-a61e-1139dc185657",
"userName":"educator#school.edu",
"dateTime":"2010-11-11T07:00:00Z",
"status":"SHIPPED",
"additionalData":{
"quoteId":"d0594c02-fb0e-47e1-a61e-1139dc185657",
"clientReferenceId":"Srites004",
"modifiedDt":"2010-11-11T07:00:00Z",
"packageId":"AIM_PACKAGE",
"sbsOrderId":"TEST-TS-201809-79486",
"orderReferenceId":"b0123c02-fb0e-47e1-a61e-1139dc185987",
"shipDate_1":"2010-11-11T07:00:00Z",
"shipDate_2":"2010-11-12T07:00:00Z",
"shipDate_3":"2010-11-13T07:00:00Z",
"shipMethod_1":"UPS Ground",
"shipMethod_3":"UPS Ground3",
"shipMethod_2":"UPS Ground2",
"trackingNumber_3":"333",
"trackingNumber_1":"2222",
"trackingNumber_2":"221"
}
}
]
I need output like following
{
"trackingInfo":[
{
"shipDate":"2010-11-11T07:00:00Z",
"shipMethod":"UPS Ground",
"trackingNbr":"2222"
},
{
"shipDate":"2010-11-12T07:00:00Z",
"shipMethod":"UPS Ground2",
"trackingNbr":"221"
},
{
"shipDate":"2010-11-13T07:00:00Z",
"shipMethod":"UPS Ground3",
"trackingNbr":"333"
}
]
}
The shipDate, shipMethod, and trackingNumber fields can number n.
How do I iterate using the JSON keys?
First map the array, and then use pluck to get a list of keys.
Then, as long as there are always the same number of shipDate, shipMethod, etc. fields, filter the list of keys so you only iterate once per field combination.
Then construct each output object by dynamically looking up the key: 'shipDate_' concatenated with the index (incremented by 1, because your example starts at 1 and DataWeave arrays start at 0):
%dw 2.0
output application/json
---
payload map ((item, index) ->
    item.additionalData pluck ($$) filter ($ contains 'shipDate') map ((item2, index2) ->
        using (incIndex = ((index2 + 1) as String)) {
            "shipDate": item.additionalData[('shipDate_' ++ incIndex)],
            "shipMethod": item.additionalData[('shipMethod_' ++ incIndex)],
            "trackingNbr": item.additionalData[('trackingNumber_' ++ incIndex)]
        }
    )
)
In DW 1.0 syntax:
%dw 1.0
%output application/json
---
payload map ((item, index) -> item.additionalData pluck ($$) filter ($ contains 'shipDate') map ((item2, index2) ->
    using (incIndex = ((index2 + 1) as :string))
    {
        "shipDate": item.additionalData[('shipDate_' ++ incIndex)],
        "shipMethod": item.additionalData[('shipMethod_' ++ incIndex)],
        "trackingNbr": item.additionalData[('trackingNumber_' ++ incIndex)]
    }))
It's mostly the same, except:
output => %output
String => :string
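One more difference worth noting: using is deprecated in current DataWeave 2.0 in favor of a do block with a var declaration. A sketch of the same lookup in that style, assuming the same payload shape:

```
%dw 2.0
output application/json
---
payload map ((item) ->
    item.additionalData pluck ($$) filter ($ contains 'shipDate') map ((key, i) ->
        do {
            var n = (i + 1) as String
            ---
            {
                shipDate: item.additionalData[('shipDate_' ++ n)],
                shipMethod: item.additionalData[('shipMethod_' ++ n)],
                trackingNbr: item.additionalData[('trackingNumber_' ++ n)]
            }
        }
    )
)
```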