My JSON file looks like:
{
"1489402": {
"Category": "Program error",
"CorrectionInstructionsObjectList": [{
"ObjectName": "/SCMB/CL_IM_ORG_CHECK IF_EX_HRBAS00_RELAT~MAINTAIN_RELATION",
"ObjectType": "METH",
"ProgramID": "LIMU"
}, {
"ObjectName": "/SCMB/MP556100_F01",
"ObjectType": "REPS",
"ProgramID": "LIMU"
}, {
"ObjectName": "/SCMB/GET_ORG_STRUCTURE",
"ObjectType": "FUNC",
"ProgramID": "LIMU"
}],
"CurrentStatus": "Released for Customer",
"PrimarySAPComponent": "tm-md-org",
"ReleasedOn": "16.07.2010"
}
}
I want to create a corresponding ABAP structure in my report so that I can consume this JSON file and map it into the structure. I want to use /ui2/cl_json=>deserialize, but I am not able to figure out what the receiving ABAP type should be.
/ui2/cl_json=>deserialize( EXPORTING json = lv_json_content
pretty_name = /ui2/cl_json=>pretty_mode-camel_case
CHANGING data = lt_data ).
In other words, what should the structure of lt_data be?
I can't answer your question directly because I don't know /ui2/cl_json well, but I propose another solution.
As a rule of thumb, I wouldn't recommend using /ui2/cl_json, because as far as I know it's not officially supported by SAP (it's just the initiative of one SAP employee); I'd rather use XSLT or the SAP Simple Transformation (ST) language, the latter being preferred. Here I go with XSLT, because ST cannot handle the dynamic property name "1489402" in the JSON file.
Create an XSLT transformation
The ABAP program calls the transformation
Note that when the transformation source is JSON, SAP converts it into SAP JSON-XML format (tags like <object>, <array>, <str>).
The XSLT transformation must return XML in SAP asXML format if the transformation result is an ABAP variable (i.e. RESULT root = variable, not RESULT XML variable).
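For orientation, the JSON-XML rendering of the question's file looks roughly like this (abbreviated); the XSLT below selects into exactly this structure:
<object>
  <object name="1489402">
    <str name="Category">Program error</str>
    <array name="CorrectionInstructionsObjectList">
      <object>
        <str name="ObjectName">/SCMB/CL_IM_ORG_CHECK IF_EX_HRBAS00_RELAT~MAINTAIN_RELATION</str>
        <str name="ObjectType">METH</str>
        <str name="ProgramID">LIMU</str>
      </object>
      ...
    </array>
    <str name="CurrentStatus">Released for Customer</str>
    <str name="PrimarySAPComponent">tm-md-org</str>
    <str name="ReleasedOn">16.07.2010</str>
  </object>
</object>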
XSLT transformation Z_OBJECTS:
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:sap="http://www.sap.com/sapxsl" version="1.0">
  <xsl:strip-space elements="*"/>
  <xsl:template match="/object/object">
    <asx:abap xmlns:asx="http://www.sap.com/abapxml" version="1.0">
      <asx:values>
        <ROOT>
          <ITEM_NAME>
            <xsl:value-of select="@name"/>
          </ITEM_NAME>
          <CATEGORY>
            <xsl:value-of select="str[@name='Category']"/>
          </CATEGORY>
          <OBJECT_LIST>
            <xsl:for-each select="array/object">
              <item>
                <OBJECT_NAME>
                  <xsl:value-of select="str[@name='ObjectName']"/>
                </OBJECT_NAME>
                <OBJECT_TYPE>
                  <xsl:value-of select="str[@name='ObjectType']"/>
                </OBJECT_TYPE>
                <PROGRAM_ID>
                  <xsl:value-of select="str[@name='ProgramID']"/>
                </PROGRAM_ID>
              </item>
            </xsl:for-each>
          </OBJECT_LIST>
          <CURRENT_STATUS>
            <xsl:value-of select="str[@name='CurrentStatus']"/>
          </CURRENT_STATUS>
          <PRIMARY_SAP_COMPONENT>
            <xsl:value-of select="str[@name='PrimarySAPComponent']"/>
          </PRIMARY_SAP_COMPONENT>
          <RELEASED_ON>
            <xsl:value-of select="str[@name='ReleasedOn']"/>
          </RELEASED_ON>
        </ROOT>
      </asx:values>
    </asx:abap>
  </xsl:template>
</xsl:transform>
ABAP program:
TYPES: BEGIN OF ty_object,
         object_name TYPE string,
         object_type TYPE string,
         program_id  TYPE string,
       END OF ty_object,
       ty_object_list TYPE STANDARD TABLE OF ty_object WITH EMPTY KEY,
       BEGIN OF ty_item,
         item_name             TYPE string, " will contain "1489402"
         category              TYPE string,
         object_list           TYPE ty_object_list,
         current_status        TYPE string,
         primary_sap_component TYPE string,
         released_on           TYPE string,
       END OF ty_item.

DATA(json) = `{ "1489402": {`
    && `   "Category": "Program error",`
    && `   "CorrectionInstructionsObjectList": [{`
    && `     "ObjectName": "/SCMB/CL_IM_ORG_CHECK IF_EX_HRBAS00_RELAT~MAINTAIN_RELATION",`
    && `     "ObjectType": "METH",`
    && `     "ProgramID": "LIMU"`
    && `   }, {`
    && `     "ObjectName": "/SCMB/MP556100_F01",`
    && `     "ObjectType": "REPS",`
    && `     "ProgramID": "LIMU"`
    && `   }, {`
    && `     "ObjectName": "/SCMB/GET_ORG_STRUCTURE",`
    && `     "ObjectType": "FUNC",`
    && `     "ProgramID": "LIMU"`
    && `   }],`
    && `   "CurrentStatus": "Released for Customer",`
    && `   "PrimarySAPComponent": "tm-md-org",`
    && `   "ReleasedOn": "16.07.2010"}}`.

DATA(item) = VALUE ty_item( ).

CALL TRANSFORMATION z_objects SOURCE XML json RESULT root = item.
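After the call, item is filled. A quick sanity check against the sample data (a sketch, derived from the XSLT mapping above):
ASSERT item-item_name = '1489402'.
ASSERT item-category = 'Program error'.
ASSERT lines( item-object_list ) = 3.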
NB: to write the XSLT transformation, you need to know the JSON-XML representation of a given JSON document. You can use the ID transformation to obtain it. Example:
DATA(json) = `{"a":[1,"s"]}`.
DATA(json_xml) = ``.
CALL TRANSFORMATION id SOURCE XML json RESULT XML json_xml OPTIONS xml_header = 'no'.
ASSERT json_xml+1 = `<object><array name="a"><num>1</num><str>s</str></array></object>`.
You may try this; it should work. Pay attention to the two additional flags for deserializing, which control the processing of associative arrays, and to name_mappings, which allows convenient renaming.
TYPES:
  BEGIN OF ts_cio,
    object_name TYPE string,
    object_type TYPE string,
    program_id  TYPE string,
  END OF ts_cio,
  BEGIN OF ts_error,
    category              TYPE string,
    ci_list               TYPE STANDARD TABLE OF ts_cio WITH DEFAULT KEY,
    current_status        TYPE string,
    primary_sap_component TYPE string,
    released_on           TYPE d,
  END OF ts_error,
  BEGIN OF ts_dump,
    id    TYPE i,
    error TYPE ts_error,
  END OF ts_dump,
  tt_dump TYPE SORTED TABLE OF ts_dump WITH UNIQUE KEY id.

DATA: lt_data TYPE tt_dump.

/ui2/cl_json=>deserialize( EXPORTING json             = lv_json
                                     pretty_name      = /ui2/cl_json=>pretty_mode-camel_case
                                     assoc_arrays     = abap_true
                                     assoc_arrays_opt = abap_true
                                     name_mappings    = VALUE #(
                                       ( abap = `CI_LIST` json = `CorrectionInstructionsObjectList` )
                                       ( abap = `PROGRAM_ID` json = `ProgramID` )
                                       ( abap = `PRIMARY_SAP_COMPONENT` json = `PrimarySAPComponent` ) )
                           CHANGING  data = lt_data ).
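With assoc_arrays = abap_true, the dynamic property name ("1489402") becomes the table key, i.e. it lands in the id field of ts_dump. A minimal sketch of reading the result (assuming the deserialization succeeded):
LOOP AT lt_data ASSIGNING FIELD-SYMBOL(<ls_dump>).
  DATA(lv_count) = lines( <ls_dump>-error-ci_list ).
  WRITE: / <ls_dump>-id,             " 1489402
           <ls_dump>-error-category, " Program error
           lv_count.                 " 3
ENDLOOP.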
Related
I am getting the following error while using the map operator:
org.mule.runtime.core.internal.message.ErrorBuilder$ErrorImplementation
{
description="Cannot coerce Array (org.mule.weave.v2.model.values.ArrayValue$IteratorArrayValue#22af825a) to String
Trace:
at main (Unknown), while writing Xml
Payload:
%dw 2.0
output application/xml
ns cc someUrl
---
(vars.products.*product map {
    cc#productDetails: {
        cc#productCategory: $.productCategory,
        cc#productName: $.productName,
        cc#productImageData: $.productImageData
    }
})
Products:
[
product:{productCategory= "A", productName="name", productImageData=base64 string},
product:{productCategory= "B", productName="name2", productImageData=base64 string},
product:{productCategory= "C", productName="name3", productImageData=base64 string}
]
There are no arrays in XML, so I resolved that by using reduce() to concatenate the objects of the array into a single object. I also added a root element, which XML requires.
For simplicity, I just added products as a variable inside the script:
%dw 2.0
output application/xml
ns cc someUrl
var products=[
product:{productCategory: "A", productName:"name", productImageData:"base64 string"},
product:{productCategory: "B", productName:"name2", productImageData:"base64 string"},
product:{productCategory: "C", productName:"name3", productImageData:"base64 string"}
]
---
result: ( products.*product map {
    cc#productDetails: {
        cc#productCategory: $.productCategory,
        cc#productName: $.productName,
        cc#productImageData: $.productImageData
    }
} ) reduce ((item, accumulator={}) -> item ++ accumulator )
Output:
<?xml version='1.0' encoding='UTF-8'?>
<result>
<cc:productDetails xmlns:cc="someUrl">
<cc:productCategory>C</cc:productCategory>
<cc:productName>name3</cc:productName>
<cc:productImageData>base64 string</cc:productImageData>
</cc:productDetails>
<cc:productDetails xmlns:cc="someUrl">
<cc:productCategory>B</cc:productCategory>
<cc:productName>name2</cc:productName>
<cc:productImageData>base64 string</cc:productImageData>
</cc:productDetails>
<cc:productDetails xmlns:cc="someUrl">
<cc:productCategory>A</cc:productCategory>
<cc:productName>name</cc:productName>
<cc:productImageData>base64 string</cc:productImageData>
</cc:productDetails>
</result>
I'm facing an issue while fetching keys and values from the data using regular expressions when the JSON contains \ and " characters.
{
"KeyOne":"Value One",
"KeyTwo": "Value \\ two",
"KeyThree": "Value \" Three",
"KeyFour": "ValueFour\\"
}
This is sample data; from it I want to read the keys and values. How can I achieve this with regular expressions?
Note: I'm deserializing this JSON data on the server side (SAP ABAP).
On releases earlier than 7.2 (from memory) you can use the class /ui2/cl_json.
On 7.3 or later, use the kernel sXML reader/writer, which supports JSON.
It is orders of magnitude faster than /ui2/cl_json.
You can use the identity-transformation approach where the source structure is known and you can create that structure in ABAP, or it already has an ABAP equivalent defined. Otherwise, just traverse the JSON document.
The example string was easily parsed.
EDIT: added sample code.
REPORT zjsondemo.

CLASS lcl DEFINITION CREATE PUBLIC.
  PUBLIC SECTION.
    METHODS json_stru_known.
    METHODS json_stru_traverse.
ENDCLASS.

CLASS lcl IMPLEMENTATION.
  METHOD json_stru_known.
    DATA l_src_json TYPE string.
    DATA l_mara     TYPE mara.

    WRITE: / 'DEMO 1 Known structure, identity transformation'.
    l_src_json = `{"MARA":{"MATNR":"012345678", "MATKL": "DUMMY" }}`.
    WRITE: / 'Convert to MARA -> ', l_src_json.
    CALL TRANSFORMATION id SOURCE XML l_src_json
                           RESULT mara = l_mara.
    WRITE: / 'MARA - MATNR', l_mara-matnr,
           / '       MATKL', l_mara-matkl.

    TYPES:
      BEGIN OF lty_foo_bar,
        keyone   TYPE string,
        keytwo   TYPE string,
        keythree TYPE string,
        keyfour  TYPE string,
      END OF lty_foo_bar.

    DATA:
      lv_json_string TYPE string,
      ls_data        TYPE lty_foo_bar.

    " In this example we use upper-case attribute names, because we map
    " to an SAP target structure which has upper-case names.
    " If you need lower-case variables, you cannot map straight to an
    " SAP type; you need the traverse technique instead. See example 2.
    lv_json_string = |\{| &&
                     |"KEYONE":"Value One",| &&
                     |"KEYTWO": "Value \\\\ two", | &&
                     |"KEYTHREE": "Value \\" Three", | &&
                     |"KEYFOUR": "ValueFour\\\\" | &&
                     |\}|.
    lv_json_string = `{"JUNK":` && lv_json_string && `}`.
    CALL TRANSFORMATION id SOURCE XML lv_json_string
                           RESULT junk = ls_data.
    WRITE: / ls_data-keyone, ls_data-keytwo, ls_data-keythree, ls_data-keyfour.
  ENDMETHOD.

  METHOD json_stru_traverse.
    DATA l_src_json TYPE string.
    DATA lo_node    TYPE REF TO if_sxml_node.
    DATA: lif_element       TYPE REF TO if_sxml_open_element,
          lif_element_close TYPE REF TO if_sxml_close_element,
          lif_value_node    TYPE REF TO if_sxml_value,
          l_val             TYPE string,
          l_attr            TYPE if_sxml_attribute=>attributes,
          l_att_val         TYPE string.
    FIELD-SYMBOLS: <attr> LIKE LINE OF l_attr.

    WRITE: / 'DEMO 2 Traverse any JSON document'.
    l_src_json = `{"MATNR":"012345678", "MATKL": "DUMMY", "SOMENODE": "With this value" }`.
    WRITE: / 'Parse as JSON with 3 nodes -> ', l_src_json.
    DATA(reader) = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( l_src_json ) ).

    lo_node = reader->read_next_node( ). " opening {
    IF lo_node IS INITIAL.
      EXIT.
    ENDIF.

    DO 3 TIMES.
      lif_element ?= reader->read_next_node( ).
      l_attr = lif_element->get_attributes( ).
      LOOP AT l_attr ASSIGNING <attr>.
        l_att_val = <attr>->get_value( ).
        WRITE: / 'Attribute:', l_att_val.
      ENDLOOP.
      lif_value_node ?= reader->read_next_node( ).
      l_val = lif_value_node->get_value( ).
      WRITE: '=>', l_val.
      lif_element_close ?= reader->read_next_node( ).
    ENDDO.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA lo_lcl TYPE REF TO lcl.
  CREATE OBJECT lo_lcl.
  lo_lcl->json_stru_known( ).
  lo_lcl->json_stru_traverse( ).
The SAP system is supplied with many example programs: search for demo*json. See also the SAP documentation on JSON parsing.
As @mrzasa and @joanis said in their comments: Do not use RegEx to parse JSON!
For small objects or when performance is not a concern, you can use /ui2/cl_json:
TYPES:
  BEGIN OF lty_foo_bar,
    keyone   TYPE string,
    keytwo   TYPE string,
    keythree TYPE string,
    keyfour  TYPE string,
  END OF lty_foo_bar.

DATA:
  lv_json_string TYPE string,
  ls_data        TYPE lty_foo_bar.

lv_json_string = |\{| &&
                 |"KeyOne":"Value One",| &&
                 |"KeyTwo": "Value \\\\ two", | &&
                 |"KeyThree": "Value \\" Three", | &&
                 |"KeyFour": "ValueFour\\\\" | &&
                 |\}|.

/ui2/cl_json=>deserialize(
  EXPORTING
    json = lv_json_string
  CHANGING
    data = ls_data ).
ls_data-KeyOne contains 'Value One' and so on.
For larger objects and/or better performance, check the sXML approach from @phil soady's answer below. The correct handling of upper- and lower-case letters still causes headaches in ABAP anyway.
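As a small illustration of that case issue (a sketch; lty_demo is a made-up type), /ui2/cl_json's pretty_name modes map ABAP underscore names to camelCase JSON names on serialization:
TYPES: BEGIN OF lty_demo,
         key_one TYPE string,
       END OF lty_demo.

DATA(ls_demo) = VALUE lty_demo( key_one = 'Value One' ).
DATA(lv_out)  = /ui2/cl_json=>serialize(
                  data        = ls_demo
                  pretty_name = /ui2/cl_json=>pretty_mode-camel_case ).
" lv_out now contains {"keyOne":"Value One"}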
So I have the following dictionaries that I get by parsing a text file:
keys = ["scientific name", "common names", "colors"]
values = ["somename1", ["name11", "name12"], ["color11", "color12"]]
keys = ["scientific name", "common names", "colors"]
values = ["somename2", ["name21", "name22"], ["color21", "color22"]]
and so on. I am dumping each key-value pairing as a dictionary to a JSON file, using a for loop that goes through the pairs one by one:
# for loop starts
d = dict(zip(keys, values))
with open("file.json", 'a') as j:
    json.dump(d, j)
If I open the saved json file I see the contents as
{"scientific name": "somename1", "common names": ["name11", "name12"], "colors": ["color11", "color12"]}{"scientific name": "somename2", "common names": ["name21", "name22"], "colors": ["color21", "color22"]}
Is this the right way to do it?
The purpose is to query the common name or colors for a given scientific name. So then I do
with open("file.json", "r") as j:
    data = json.load(j)
I get the error json.decoder.JSONDecodeError: Extra data.
I think this is because I am not dumping the dictionaries to JSON correctly in the for loop. I have to insert some square brackets programmatically; just doing json.dump(d, j) won't suffice.
JSON may only have one root element. This root element can be [], {} or most other datatypes.
In your file, however, you get multiple root elements:
{...}{...}
This isn't valid JSON, and the error Extra data refers to the second {}, where valid JSON would end instead.
You can write multiple dicts to a JSON string, but you need to wrap them in an array:
[{...},{...}]
Now on to how I would fix your code. First, I rewrote what you posted, because your code was rather pseudo-code and didn't run directly.
import json

inputs = [(["scientific name", "common names", "colors"],
           ["somename1", ["name11", "name12"], ["color11", "color12"]]),
          (["scientific name", "common names", "colors"],
           ["somename2", ["name21", "name22"], ["color21", "color22"]])]

for keys, values in inputs:
    d = dict(zip(keys, values))
    with open("file.json", 'a') as j:
        json.dump(d, j)

with open("file.json", 'r') as j:
    print(json.load(j))
As you correctly realized, this code fails with:
json.decoder.JSONDecodeError: Extra data: line 1 column 105 (char 104)
The way I would write it is:
import json

inputs = [(["scientific name", "common names", "colors"],
           ["somename1", ["name11", "name12"], ["color11", "color12"]]),
          (["scientific name", "common names", "colors"],
           ["somename2", ["name21", "name22"], ["color21", "color22"]])]

jsonData = list()

for keys, values in inputs:
    d = dict(zip(keys, values))
    jsonData.append(d)

with open("file.json", 'w') as j:
    json.dump(jsonData, j)

with open("file.json", 'r') as j:
    print(json.load(j))
Also, for Python's json library it is important that you write the entire JSON file in one go, i.e. open with 'w' instead of 'a'.
I have a task to generate a CSV file from two JSON payloads. Below is some sample data for illustration.
- Payload-1
[
{
"id": "Run",
"errorMessage": "Cannot Run"
},
{
"id": "Walk",
"errorMessage": "Cannot Walk"
}
]
- Payload-2 (Source Input) in flowVars
[
{
"Action1": "Run",
"Action2": ""
},
{
"Action1": "",
"Action2": "Walk"
},
{
"Action1": "Sleep",
"Action2": ""
}
]
Now, I have to generate a CSV file that adds one extra ErrorMessage column to the Source Input, on the condition that where the id in Payload-1 matches a Source Input field, that errorMessage is assigned to the matching row, and the result is written out as a CSV file.
I tried the below DataWeave:
%dw 1.0
%output application/csv header=true
---
flowVars.InputData map (val, index) -> {
    Action1: val.Action1,
    Action2: val.Action2,
    (
        payload filter ($.id == val.Action1 or $.id == val.Action2) map (val2, index) -> {
            ErrorMessage: val2.errorMessage replace /([\n,\/])/ with ""
        }
    )
}
But I'm facing an issue here: I'm able to generate the file with the data as expected, but the ErrorMessage header is missing (not appearing) in the file with my real data in production. Kindly assist me.
I expect the below CSV output:
Action1,Action2,ErrorMessage
Run,,Cannot Run
,Walk,Cannot Walk
Sleep,,
Hello, the best way to solve this kind of problem is using groupBy. The idea is to group one of the two parts by the join key, then iterate over the other part and do a lookup. This way you avoid O(n^2) and reduce it to O(n).
%dw 1.0
%var payloadById = payload groupBy $.id
%output application/csv
---
flowVars.InputData map ((value, index) ->
    using (locatedError = payloadById[value.Action2][0] default payloadById[value.Action1][0]) (
        (value ++ {ErrorMessage: locatedError.errorMessage replace /([\n,\/])/ with ""}) when locatedError != null otherwise value
    )
) filter $ != null
Assuming "Payload-1" is payload, and "Payload-2" is flowVars.actions, I would first create a key-value lookup with the payload. Then I would use that to populate flowVars.actions:
%dw 1.0
%output application/csv header=true
// Creates lookup, e.g.:
// {"Run": "Cannot run", "Walk": "Cannot walk"}
%var errorMsgLookup = payload reduce ((obj, lookup={}) ->
lookup ++ {(obj.id): obj.errorMessage})
---
flowVars.actions map ((action) -> action ++ {ErrorMessage: errorMsgLookup[action.Action1] default errorMsgLookup[action.Action2]})
Note: I'm also assuming the payload's id field is unique across the array.
I have an output:
MysqlResult = {selected,["id","first_name","last_name"],
[{1,"Matt","Williamson"},
{2,"Matt","Williamson2"}]}
How can I make it look like:
XML = "
<result id='1'>
<first_name>Matt</first_name>
<last_name>Williamson</last_name>
</result>
<result id='2'>
<first_name>Matt</first_name>
<last_name>Williamson2</last_name>
</result>"
I am looking for a smart way of placing it into an IQ (ejabberd):
IQ#iq{type = result, sub_el =
    [{xmlelement, "result",
        [{"xmlns", ?NS_NAMES}],
        [{xmlelement, "userinfo", [],
            [{xmlcdata, "???"}]}]}]}
First extract the results element from the tuple:
{selected, _Columns, Results} = MysqlResult.
Then convert it to ejabberd's internal XML format with a list comprehension:
XML = [{xmlelement, "result", [{"id", integer_to_list(Id)}],
[{xmlelement, "first_name", [], [{xmlcdata, FirstName}]},
{xmlelement, "last_name", [], [{xmlcdata, LastName}]}]}
|| {Id, FirstName, LastName} <- Results].
And insert it into your IQ record:
IQ#iq{type = result, sub_el =
[{xmlelement, "result",
[{"xmlns", ?NS_NAMES}],
[{xmlelement, "userinfo", [],
XML}]}]}
(assuming that you want the <result/> elements as children of the <userinfo/> element)
Use xmerl to create XML in Erlang:
1> MysqlResult = {selected,["id","first_name","last_name"],
1> [{1,"Matt","Williamson"},
1> {2,"Matt","Williamson2"}]}.
{selected,["id","first_name","last_name"],
[{1,"Matt","Williamson"},{2,"Matt","Williamson2"}]}
2> {selected, _Columns, Results} = MysqlResult.
{selected,["id","first_name","last_name"],
[{1,"Matt","Williamson"},{2,"Matt","Williamson2"}]}
3> Content = [{result, [{id, Id}], [{first_name, [First]}, {last_name, [Last]}]} || {Id, First, Last} <- Results].
[{result,[{id,1}],
[{first_name,["Matt"]},{last_name,["Williamson"]}]},
{result,[{id,2}],
[{first_name,["Matt"]},{last_name,["Williamson2"]}]}]
4> xmerl:export_simple(Content, xmerl_xml).
["<?xml version=\"1.0\"?>",
[[["<","result",[[" ","id","=\"","1","\""]],">"],
[[["<","first_name",">"],["Matt"],["</","first_name",">"]],
[["<","last_name",">"],
["Williamson"],
["</","last_name",">"]]],
["</","result",">"]],
[["<","result",[[" ","id","=\"","2","\""]],">"],
[[["<","first_name",">"],["Matt"],["</","first_name",">"]],
[["<","last_name",">"],
["Williamson2"],
["</","last_name",">"]]],
["</","result",">"]]]]
5> io:format("~s", [v(-1)]).
<?xml version="1.0"?><result id="1"><first_name>Matt</first_name><last_name>Williamson</last_name></result><result id="2"><first_name>Matt</first_name><last_name>Williamson2</last_name></result>ok
Try the --xml and --execute options of the mysql command-line client.
The xmerl solution is absolutely fine, and probably the way to go if this is a one-off type thing.
However, if you are writing an XMPP client, even a simple one, consider using exmpp - https://github.com/processone/exmpp . You can use some of the same tactics to extract data and generate XML, but in general the helper functions (most likely within the exmpp_iq and exmpp_stanza modules) will be very handy.
exmpp isn't going anywhere either: the alpha of ejabberd 3 is using it internally (finally).