I have a CSV like this:
data1,data2,data3;dataa;datab;datac;datax,datay,dataz
data1,data2,data3;dataa;datab;datac;datax,datay,dataz
data1,data2,data3;dataa;datab;datac;datax,datay,dataz
I use a Splitter to process the records line by line, and then I use splitBy "," in DataWeave to convert each record to a map. But how can I do another level of splitting on ";"? splitBy doesn't allow multiple delimiters, and neither does the CSV type in DataWeave.
Ultimately, I want a JSON like this:
{
  "1":"data1",
  "2":"data2",
  "3":{
    "a":"dataa",
    "b":"datab",
    "c":"datac"
  },
  "x":"datax",
  "y":"datay",
  "z":"dataz"
}
Any thoughts ?
Try the following DataWeave code:
%dw 1.0
%output application/json
---
payload map {
  "1": $[0],
  "2": $[1],
  "3": using (detail = $[2] splitBy ";") {
    a: detail[1],
    b: detail[2],
    c: detail[3]
  },
  x: $[3],
  y: $[4],
  z: $[5]
}
Notes:
I modified the input data to separate datac and datax: replace the ; between them with a , e.g. ...;datab;datac,datax,...
I used a File connector to read the CSV file and processed it directly in a DataWeave transformer (no Splitter needed).
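If you prefer to keep the original input unchanged (datac and datax still separated by ;), a variation of the same idea should also work. This is only a minimal sketch, assuming (as in the code above) that the CSV is read directly and each row's fields are accessible by index:
%dw 1.0
%output application/json
---
payload map using (detail = $[2] splitBy ";") {
  "1": $[0],
  "2": $[1],
  "3": {
    a: detail[1],
    b: detail[2],
    c: detail[3]
  },
  // with the unmodified data, datax is the last piece of the ";"-separated field
  x: detail[4],
  y: $[3],
  z: $[4]
}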
I want to point out that your JSON example has a bad structure: the fourth element is an object that has no key, only a value. First of all, you should validate your final JSON. Once you post a valid JSON, I'll try to help with converting your CSV data to it.
This is my sample JSON
{
  "id":"743",
  "groupName":"group1",
  "transation":{
    "101":"success",
    "102":"rejected",
    "301":"processing"
  }
}
Expected Result:
"101"
"102"
"301"
Can anyone please help me print the above result using XQuery?
I can achieve this through JavaScript, but I need to write it in XQuery.
It depends on how you are reading the JSON document, whether as a document in the database or by parsing a JSON string. The example below uses xdmp:unquote() to parse a string, but you could instead read the document from the database with fn:doc() or through cts:search().
Then, you could just XPath to the transation fields and return those node names with the name() function:
let $jsonData := xdmp:unquote('
{
  "id":"743",
  "groupName":"group1",
  "transation":{
    "101":"success",
    "102":"rejected",
    "301":"processing"
  }
}')
return
  $jsonData/transation/*/name()
I need to parse the response of an API that is like this:
"[{\"Customers\":[{\"Id\":1607715391563}],\"Partners\":[],\"ModDate\":\"\\/Date(1608031919597)\\/\",\"CreatedByUserId\":null},{\"Message\":null,\"Code\":\"200\",\"NextPage\":1}]"
I would like to have it like this:
[
{
"Customers":[
{
"Id":1607715391563
}
],
"Partners":[
],
"ModDate":"/Date(1608031919597)/",
"CreatedByUserId":null
},
{
"Message":null,
"Code":"200",
"NextPage":1
}
]
I already tried to remove the quotes using payload[1 to -2] and to parse the JSON using read(payload[1 to -2], 'application/json'). I also tried to follow some tips from this link, but neither worked.
EDIT:
The point here is that I want to access, for example, the Customers.Id value in another connector, and I can't.
how about this?
%dw 2.0
output application/json
var inpString = "[{\"Customers\":[{\"Id\":1607715391563}],\"Partners\":[],\"ModDate\":\"\\/Date(1608031919597)\\/\",\"CreatedByUserId\":null},{\"Message\":null,\"Code\":\"200\",\"NextPage\":1}]"
---
read(inpString,"application/json")
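Note that in this example the backslashes are consumed by the DataWeave string literal itself, so inpString already holds clean JSON text. If the escaped text arrives as the actual payload (outer quotes and backslashes included), that text is itself a valid JSON string literal, so one option is to parse it twice. A minimal sketch, assuming payload is the raw String shown in the question:
%dw 2.0
output application/json
---
// the first read() turns the outer JSON string literal into the inner JSON text,
// the second read() parses that text into the array structure
read(read(payload, "application/json"), "application/json")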
You can try the following DataWeave expression:
%dw 2.0
output application/json
---
read((payload replace /^"|"$/ with '') replace /\\"/ with '"', "application/json")
The first replace removes the leading and trailing double quotes, and the second one replaces backslash-escaped double quotes with plain double quotes.
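Regarding the edit about accessing Customers.Id afterwards: once the string has been parsed, the result is a regular array of objects and can be navigated with the usual selectors. A sketch, assuming the sample payload from the question:
%dw 2.0
output application/json
// hypothetical variable holding the parsed result of the expression above
var parsed = read((payload replace /^"|"$/ with '') replace /\\"/ with '"', "application/json")
---
parsed[0].Customers[0].Id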
I am using Anypoint Studio 6.1 and Mule 3.8.1 and am mapping JSON to JSON in DataWeave. In the JSON mapping I have an optional field called "Channels" which contains a list of strings. When the field is not there I get a warning in DataWeave. How can I write the DataWeave code to ignore it when it's null?
DataWeave code:
%dw 1.0
%output application/json skipNullOn="everywhere"
---
payload map ((payload01 , indexOfPayload01) -> {
  Channels: payload01.Channels map ((channel , indexOfAccessChannel) -> channel)
})
I have tried to use "when" and also the "?" selector modifier but cannot get the syntax right.
Thanks
You were right to use when and the ? operator. You just need to use parentheses to make sure they apply to the right things. Note that I am using $ as a shorthand for the payload01 parameter in your example.
%dw 1.0
%output application/json
---
payload map {
  (Channels: $.Channels map (lower $)) when $.Channels?
}
If you didn't need to use map on the Channels array within each item, you could just allow the null to pass through:
payload map {
  Channels: $.Channels
}
This would yield the following for input objects that don't contain a Channels field:
{
  Channels: null
}
Adding the parentheses allows us to use when to determine whether the whole key/value pair (aka tuple) should be output:
payload map {
  (Channels: $.Channels) when $.Channels?
}
Yielding:
{
}
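Alternatively, since your original script already uses skipNullOn="everywhere" in the output directive, you could let the value fall back to null and let the writer drop the key. A minimal sketch of that variant:
%dw 1.0
%output application/json skipNullOn="everywhere"
---
payload map {
  // when Channels is absent the value is null, and skipNullOn removes the key entirely
  Channels: ($.Channels map (lower $)) when $.Channels? otherwise null
}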
This is probably simple, but everything I find in a search is only vaguely related to my question.
I have this:
{
  "user":"C03999",
  "caseNumber":"011-1234567",
  "documents":[
    {
      "file":[
        {"name":"indem1","document":"xSFSF%%GSF","mimeType":"PDF"},
        {"name":"indem2","document":"xSFSF%%GSF","mimeType":"PDF"}
      ]
    }
  ],
  "mortgagee":"22995",
  "billTo":"Originator",
  "agreementDate":"2016-11-25",
  "term":360,
  "expirationDate":"2017-11-25",
  "startDate":"Agreement",
  "lenderEndorsed":true,
  "lenderExecutedAgreement":false,
  "indemnificationAgreementTransferable":false,
  "qadLrsFileNumber":"2016-99999-9999",
  "docketNumber":"2016-9999-99"
}
I would like to get this string out of it:
indem1,indem2
I created a global function:
<configuration doc:name="Configuration">
<expression-language autoResolveVariables="true">
<global-functions>
def convertToString(data){
return data.toString();
}
</global-functions>
</expression-language>
</configuration>
My transform looks like this:
%dw 1.0
%output application/csv
---
payload.documents.file map {
  "" : convertToString($.name)
}
My output looks like:
[indem1\, indem2]
What do I need to do to get my desired string (indem1,indem2)?
Your question title says "JSON array to string", but the DataWeave code above has application/csv in the output.
Are you trying to convert to CSV? If that is the expectation, the comma is a reserved character in the CSV format, which is why the , is getting escaped with a backslash ([indem1\, indem2]).
If your intention is to convert to Java, here is code that returns a String value:
%dw 1.0
%output application/java
---
payload.documents[0].file.*name joinBy ','
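If documents could ever contain more than one entry, a small variation that collects the names across all of them before joining might look like this (just a sketch based on the same structure):
%dw 1.0
%output application/java
---
// flatten merges the per-document file arrays into a single list before selecting name
(flatten payload.documents.file).name joinBy ','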
In general, if you have an object, and you need PART of that object to be written in a given format / MIME-type, use the write() function in the dw::Core module. If you need to read some content in an object that is JSON/JAVA/CSV and the rest is some other format, use the read() function in the same module.
For example, suppose you have payload as:
{
  field1: "foo",
  field2: { nested: "value" }
}
Further, suppose that you want the object to be JAVA, but the "field2" value to be seen as JSON. You can use this DW:
%dw 2.0
output application/java
---
{
  field1: payload.field1,
  field2: write(payload.field2, 'application/json')
}
The result is:
{
  field1: "foo",
  field2: '{ "nested": "value" }'
}
Notice that "field2" now holds a JSON representation of the nested object rather than a Java object. One way to tell is that the keys inside the JSON content are quoted, while in a Java object they are not.
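The opposite direction works the same way with read(): if a field arrives as a JSON string and you want it back as an object, a sketch (assuming field2 holds the JSON text from the previous example) could be:
%dw 2.0
output application/java
---
{
  field1: payload.field1,
  // hypothetical: field2 arrives as the string '{ "nested": "value" }'
  field2: read(payload.field2, "application/json")
}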
I have a Mule application that needs to produce some CSV output which looks like the following:
[CSV Payload 1]
Data|Data|Data
[CSV Payload 2]
Data|Data|Data|Data|Data|Data|Data|Data|Data
[CSV Payload 3]
Data|Data|Data|Data
[CSV Payload 4]
Data|Data|Data|Data|Data|Data
As you can see, I have a combination of 4 CSV payloads, each with different structures. The first two of these payloads are single line and hard coded. The third is derived from an input file and the fourth is an extract from a database.
My question is: is DataWeave suitable for achieving this or should an alternative method (such as scatter gather) be explored? I've tried to implement this in DataWeave with no luck as I'm struggling to get past the limitation of having to define an output structure.
Please note: the order of the final output needs to be Payload 1 then 2 then 3 then 4. This order cannot be mixed up.
It is not necessary to define an output structure in DataWeave as long as the structure of the data being mapped is compatible with the MIME type defined in the output directive.
You can use a Message Enricher to obtain Payload 4 as an application/java object and assign it to a flow variable, called for example additionalData.
Then you can use a DataWeave transformation like this, assuming the input payload is derived from the input file (i.e. Payload 3):
%dw 1.0
%output text/csv separator="|", header=false
---
{p1-fld1: "Data", p1-fld2: "Data", p1-fld3: "Data"} +
(
{p2-fld1: "Data", p2-fld2: "Data", p2-fld3: "Data",
p2-fld4: "Data", p2-fld5: "Data", p2-fld6: "Data",
p2-fld7: "Data", p2-fld8: "Data", p2-fld9: "Data"
} +
(payload + flowVars.additionalData )
)
This should produce the target format you need (Payload 1 and Payload 2 are hardcoded objects in the transformation).