Parsing JSON using deserialize in an ABAP BTP environment

I fetched data from a URL, and the data is stored in a variable in JSON format. Now I need to parse this JSON, but I am unable to parse the data.
Below is the code:
CLASS zcode_82 DEFINITION
  PUBLIC
  FINAL
  CREATE PUBLIC.

  PUBLIC SECTION.
    INTERFACES if_oo_adt_classrun.

    DATA: lv_response TYPE string,
          lv_body     TYPE string,
          lv_path     TYPE string,
          lv_json     TYPE /ui2/cl_json=>json,
          r_json      TYPE string.

    TYPES: BEGIN OF ty_data,
             field        TYPE string,
             id           TYPE string,
             customer     TYPE string,
             customer_id  TYPE string,
             address      TYPE string,
             date_created TYPE string,
             time_created TYPE string,
           END OF ty_data.

    DATA: lv_data TYPE STANDARD TABLE OF ty_data WITH DEFAULT KEY,
          lr_data TYPE REF TO data,
          ls_data TYPE ty_data.

  PROTECTED SECTION.
  PRIVATE SECTION.
ENDCLASS.
CLASS zcode_82 IMPLEMENTATION.

  METHOD if_oo_adt_classrun~main.
    DATA: lv_body_1 TYPE string,
          ls_value  TYPE RANGE OF ty_data.

    TRY.
        " create an HTTP destination by URL; API endpoint for the API sandbox
        DATA(lo_http_destination) =
          cl_http_destination_provider=>create_by_url( 'enter the url' ).

        " create an HTTP client by destination
        DATA(lo_web_http_client) = cl_web_http_client_manager=>create_by_http_destination( lo_http_destination ).

        " add headers with the API key for the API sandbox
        DATA(lo_web_http_request) = lo_web_http_client->get_http_request( ).
        lo_web_http_request->set_header_fields( VALUE #(
          ( name = 'Authorization' value = 'Bearer key' )
          ( name = 'Content-Type'  value = 'application/json' ) ) ).

        " set the request method and execute the request
        DATA(lo_web_http_response) = lo_web_http_client->execute( if_web_http_client=>get ).
        lv_response = lo_web_http_response->get_text( ).
      CATCH cx_http_dest_provider_error cx_web_http_client_error cx_web_message_error.
        " error handling
    ENDTRY.

    " out->write( |response: { lv_response }| ).

    CLEAR lv_data[].
    /ui2/cl_json=>deserialize(
      EXPORTING
        json        = lv_response
        pretty_name = /ui2/cl_json=>pretty_mode-user
      CHANGING
        data        = lv_data ).

    out->write( data = lv_data ).
  ENDMETHOD.

ENDCLASS.
The data downloaded from the URL is the input:
response: {"records":[{"id":"rec5Qk24OQpKDyykq","createdTime":"2022-08-03T10:14:43.000Z","fields":{"customer_id":"0000010001","address":"Chennai","created_time":"06:00:14","customer":"IDADMIN","date_created":"16.04.2004"}},{"id":"rec7bSe8Zb18z6b5a","createdTime":"2022-08-08T13:07:16.000Z","fields":{"customer_id":"0000010007","address":"Kakinada","created_time":"04:01:18","customer":"Ramya","date_created":"15.04.2000"}},{"id":"recD9Hh4YLgNXOhUE","createdTime":"2022-08-08T11:48:06.000Z","fields":{"customer_id":"0000010002","address":"Bangalore","created_time":"04:03:35","customer":"MAASSBERG","date_created":"20.04.2004"}},{"id":"recK7Tfw4PFAedDiB","createdTime":"2022-08-03T10:14:43.000Z","fields":{"customer_id":"0000010005","address":"Chennai","created_time":"06:00:49","customer":"IDADMIN","date_created":"21.04.2004"}},{"id":"recKOq0DhEtAma7BV","createdTime":"2022-08-03T10:14:43.000Z","fields":{"customer_id":"0000010006","address":"Hyderabad","created_time":"18:42:28","customer":"GLAESS","date_created":"21.04.2004"}},{"id":"recS8pg10dFBGj8o7","createdTime":"2022-08-03T10:14:43.000Z","fields":{"customer_id":"0000010003","address":"Gurugram","created_time":"04:10:02","customer":"MAASSBERG","date_created":"20.04.2004"}},{"id":"recf4QbOmKMrBeLQZ","createdTime":"2022-08-03T10:14:43.000Z","fields":{"customer_id":"0000010004","address":"Bangalore","created_time":"06:00:12","customer":"IDADMIN","date_created":"21.04.2004"}},{"id":"recs7oHEqfkN87`enter code here`tWm","createdTime":"2022-08-03T10:14:43.000Z","fields":{"customer_id":"0000010000","address":"Hyderabad","created_time":"04:01:18","customer":"MAASSBERG","date_created":"15.04.2004"}}]}
The output of the code is an empty table:
Table
FIELD ID CUSTOMER CUSTOMER_ID ADDRESS DATE_CREATED TIME_CREATED

Your structure type simply does not match the JSON structure. You can't skip the outer records array and expect the deserializer to know that you want to deserialize what is within it. The same applies to mapping the fields object to a string: it is an object (an ABAP structure), not a string.
The JSON-to-ABAP type mapping would look like this:
{
  "records": [
    {
      "id": "rec5Qk24OQpKDyykq",
      "createdTime": "2022-08-03T10:14:43.000Z",
      "fields": {
        "customer_id": "0000010001",
        "address": "Chennai",
        "created_time": "06:00:14",
        "customer": "IDADMIN",
        "date_created": "16.04.2004"
      }
    },
    ...
TYPES: BEGIN OF ty_field,
         customer_id  TYPE string,
         address      TYPE string,
         created_time TYPE string,
         customer     TYPE string,
         date_created TYPE string,
       END OF ty_field.
TYPES: BEGIN OF ty_record,
         id          TYPE string,
         createdtime TYPE string,
         fields      TYPE ty_field,
       END OF ty_record.
TYPES tt_record TYPE STANDARD TABLE OF ty_record WITH EMPTY KEY.
TYPES: BEGIN OF ty_response,
         records TYPE tt_record,
       END OF ty_response.
Then you can use a variable like DATA ls_response TYPE ty_response as your data parameter.
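A minimal sketch of the corrected call, assuming it runs inside the if_oo_adt_classrun~main method from the question (so lv_response and out are available); with these component names, no pretty_name mode should be needed, since /ui2/cl_json matches names case-insensitively when deserializing:
DATA ls_response TYPE ty_response.

/ui2/cl_json=>deserialize(
  EXPORTING
    json = lv_response
  CHANGING
    data = ls_response ).

" the records are now available as a regular internal table
LOOP AT ls_response-records ASSIGNING FIELD-SYMBOL(<ls_record>).
  out->write( |{ <ls_record>-fields-customer_id } { <ls_record>-fields-address }| ).
ENDLOOP.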
Side note: If you can, consider using Simple Transformations with JSON data. There you can fine-tune namings (mix snake_case and camelCase), control which fields to serialize or deserialize and which are required or optional, and it's orders of magnitude faster (especially with larger JSON files).


/ui2/cl_json=>deserialize doesn't fill structure

I have two types of JSON: result and error.
I'm trying to deserialize both of them into my internal structure, but there's a problem: only result works correctly; for error the structure is always blank.
Can anybody help me solve this or point out my mistake?
Here are my JSONs:
{
  "result": [
    {
      "to": "to_somebody",
      "id": "some_id",
      "code": "some_code"
    }
  ]
}
{
  "error": {
    "name": "some_name",
    "date": [],
    "id": "11",
    "descr": "Unknown error"
  },
  "result": null
}
Here is my ABAP code (there's a screen to enter the JSON):
DATA go_textedit TYPE REF TO cl_gui_textedit.

PARAMETERS: json TYPE string.

AT SELECTION-SCREEN OUTPUT.
  IF go_textedit IS NOT BOUND.
    CREATE OBJECT go_textedit
      EXPORTING
        parent = cl_gui_container=>screen0.
    go_textedit->set_textstream( json ).
  ENDIF.

AT SELECTION-SCREEN.
  go_textedit->get_textstream( IMPORTING text = json ).
  cl_gui_cfw=>flush( ).

TYPES: BEGIN OF stt_result,
         to   TYPE string,
         id   TYPE string,
         code TYPE string,
       END OF stt_result.
TYPES: BEGIN OF stt_error,
         name   TYPE string,
         date   TYPE string,
         id     TYPE string,
         descr  TYPE string,
         result TYPE string,
       END OF stt_error.

DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error  TYPE STANDARD TABLE OF stt_error,
      END OF ls_response_result.

/ui2/cl_json=>deserialize( EXPORTING json        = lv_cdata
                                     pretty_name = /ui2/cl_json=>pretty_mode-camel_case
                           CHANGING  data        = ls_response_result ).
Revise your type declarations. There is a discrepancy in one place: error arrives as a JSON object (an ABAP structure), but you declared it as a JSON array (an ABAP internal table).
This is the complete code I used:
CLASS the_json_parser DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF result_structure,
        to   TYPE string,
        id   TYPE string,
        code TYPE string,
      END OF result_structure.
    TYPES result_table TYPE STANDARD TABLE OF result_structure WITH EMPTY KEY.
    TYPES date_table TYPE STANDARD TABLE OF string WITH EMPTY KEY.
    TYPES:
      BEGIN OF error_structure,
        name   TYPE string,
        date   TYPE date_table,
        id     TYPE string,
        descr  TYPE string,
        result TYPE string,
      END OF error_structure.
    TYPES:
      BEGIN OF complete_result_structure,
        result TYPE result_table,
        error  TYPE error_structure,
      END OF complete_result_structure.

    CLASS-METHODS parse
      IMPORTING
        json          TYPE string
      RETURNING
        VALUE(result) TYPE complete_result_structure.
ENDCLASS.

CLASS the_json_parser IMPLEMENTATION.
  METHOD parse.
    /ui2/cl_json=>deserialize(
      EXPORTING
        json        = json
        pretty_name = /ui2/cl_json=>pretty_mode-camel_case
      CHANGING
        data        = result ).
  ENDMETHOD.
ENDCLASS.
Verified with the test class:
CLASS unit_tests DEFINITION FOR TESTING RISK LEVEL HARMLESS DURATION SHORT.
  PUBLIC SECTION.
    METHODS parses_result FOR TESTING.
    METHODS parses_error FOR TESTING.
ENDCLASS.

CLASS unit_tests IMPLEMENTATION.
  METHOD parses_result.
    DATA(json) = `{` &&
                 `"result": [` &&
                 `{` &&
                 `"to": "to_somebody",` &&
                 `"id": "some_id",` &&
                 `"code": "some_code"` &&
                 `}` &&
                 `]` &&
                 `}`.

    DATA(result) = the_json_parser=>parse( json ).

    cl_abap_unit_assert=>assert_not_initial( result ).
    cl_abap_unit_assert=>assert_not_initial( result-result ).
    cl_abap_unit_assert=>assert_initial( result-error ).
  ENDMETHOD.

  METHOD parses_error.
    DATA(json) = `{` &&
                 `"error": {` &&
                 `"name": "some_name",` &&
                 `"date": [],` &&
                 `"id": "11",` &&
                 `"descr": "Unknown error"` &&
                 `},` &&
                 `"result": null` &&
                 `}`.

    DATA(result) = the_json_parser=>parse( json ).

    cl_abap_unit_assert=>assert_not_initial( result ).
    cl_abap_unit_assert=>assert_initial( result-result ).
    cl_abap_unit_assert=>assert_not_initial( result-error ).
  ENDMETHOD.
ENDCLASS.
A JSON object {...} can be mapped only by an ABAP structure.
A JSON array [...] can be mapped only by an ABAP internal table.
In your code, the error JSON is a JSON object, but the ABAP variable is an internal table.
So you should correct the ABAP variable by removing STANDARD TABLE OF so that it becomes a structure:
DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error  TYPE stt_error, " <=== "STANDARD TABLE OF" removed
      END OF ls_response_result.
Thanks to all for your advice.
I solved my problem.
The case was:
I get two types of JSON from the provider: result or error.
I can't get both of them at the same time.
I need to deserialize them into my internal structure.
{
  "result": [
    {
      "to": "some_value",
      "id": "some_id",
      "code": "some_code"
    }
  ]
}
and
{
  "error": {
    "name": "some_name",
    "date": [some_date],
    "id": "some_id",
    "descr": "some_description"
  },
  "result": null
}
Here's the ABAP code I needed, which works correctly.
P.S.: My mistake was that I treated error as an internal table instead of a structure.
TYPES: BEGIN OF stt_result,
         to   TYPE string,
         id   TYPE string,
         code TYPE string,
       END OF stt_result.
TYPES lt_data TYPE STANDARD TABLE OF string WITH EMPTY KEY.
TYPES: BEGIN OF stt_error,
         name   TYPE string,
         date   TYPE lt_data,
         id     TYPE string,
         descr  TYPE string,
         result TYPE stt_result,
       END OF stt_error.

DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error  TYPE stt_error,
      END OF ls_response_result.

/ui2/cl_json=>deserialize( EXPORTING json        = lv_cdata
                                     pretty_name = /ui2/cl_json=>pretty_mode-camel_case
                           CHANGING  data        = ls_response_result ).

How can I print unmarshalled data in Golang?

I am reading JSON from RabbitMQ in Golang and unmarshalling it into a slice of structs.
My struct looks like this:
type Documents struct {
    user_id    string
    partner_id []string
    last_login int
}
I am unmarshalling the incoming JSON into the struct slice above, but for debugging purposes I want to see the mapped data. How can I print the mapped data slice (body in my case)?
var body []Documents
json.Unmarshal(d.Body, &body)
log.Printf("Received a message: %s", body)
Do I need to use a different verb instead of %s?
There is a problem with your struct definition: you need to use exported identifiers, like this:
type Documents struct {
    UserID    string   `json:"user_id"`
    PartnerID []string `json:"partner_id"`
    LastLogin int      `json:"last_login"`
}
For your question, refer to the fmt package's format printing verbs.
To print the values of body:
log.Printf("Received a message: %v", body)
To print the values along with type and field names:
log.Printf("Received a message: %#v", body)

What kind of data structure is needed to parse JSON to itab?

I want to parse a JSON string into an ABAP internal table, for example this one:
{
  "apiVersion": "1.0",
  "data": {
    "location": "Dresden",
    "temperature": "7",
    "skytext": "Light rain",
    "humidity": "96",
    "wind": "7.31 km/h",
    "date": "02-14-2017",
    "day": "Tuesday"
  }
}
I want to use the method cl_fdt_json=>json_to_data and put the keys and values into a table like this:
TYPES: BEGIN OF map,
         key   TYPE string,
         value TYPE string,
       END OF map.
DATA json_data TYPE STANDARD TABLE OF map.
But, unfortunately, it does not work like that. Does anyone have experience with this kind of problem? I don't have access to all the documentation, because this is my sample task in a hiring process at SAP, and it is the last part of the "puzzle" ;) It is hard for me to find the solution.
Thank you!!!
EDIT: Following vwegert's answer, I tried the code below. (This is a little different from what I originally wanted to do, but it would also be OK.)
DATA cl_oops TYPE REF TO cx_dynamic_check.
DATA(text) = result.

TYPES: BEGIN OF ty_structure,
         skytext     TYPE string,
         location    TYPE string,
         temperature TYPE string,
         humidity    TYPE string,
         wind        TYPE string,
         date        TYPE string,
         day         TYPE string,
       END OF ty_structure.

DATA wa_structure TYPE ty_structure.

TRY.
    CALL TRANSFORMATION id
         SOURCE XML text
         RESULT data = wa_structure.
    MESSAGE wa_structure-skytext TYPE 'I'.
  CATCH cx_transformation_error INTO cl_oops.
    WRITE cl_oops->get_longtext( ).
ENDTRY.
But it still doesn't work: when I check the value of wa_structure-skytext, it is unfortunately empty. I cannot find the mistake. Does anyone have an idea?
Rather than using the FDT class (which might not be available on all systems), you might want to take a look at the well-documented capabilities of the ABAP runtime system itself. This example program might be the way to go for you. You would essentially provide a Simple Transformation that maps the JSON-XML structure to your data structure, instantiate an sXML JSON reader, and then pass that as the source to CALL TRANSFORMATION.
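As a minimal sketch of that wiring, reusing text and wa_structure from the EDIT above (the transformation name z_json_to_weather is hypothetical; you would create it as a Simple Transformation that maps the JSON-XML element names to your target components):
" convert the JSON string to UTF-8 and create an sXML reader;
" the string reader recognizes JSON input automatically
DATA(lv_xjson)  = cl_abap_codepage=>convert_to( text ).
DATA(lo_reader) = cl_sxml_string_reader=>create( lv_xjson ).

" CALL TRANSFORMATION accepts the sXML reader as its XML source
CALL TRANSFORMATION z_json_to_weather
     SOURCE XML lo_reader
     RESULT data = wa_structure.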
Besides vwegert's recommendation to use the SAP-documented JSON transformations, you could check the open-source alternatives. This one looks promising.
{"apiVersion":"1.0", "data":{ "location":"Dresden", "temperature":"7", "skytext":"Light rain", "humidity":"96", "wind":"7.31 km/h", "date":"02-14-2017", "day":"Tuesday" } }
The corresponding types in ABAP would be:
" the nested data object
TYPES: BEGIN OF ty_data,
         location    TYPE string,
         temperature TYPE string,
         skytext     TYPE string,
         humidity    TYPE string,
         wind        TYPE string,
         date        TYPE string,
         day         TYPE string,
       END OF ty_data.

" the whole JSON structure
TYPES: BEGIN OF ty_json,
         apiversion TYPE string,
         data       TYPE ty_data,
       END OF ty_json.

DATA ls_data TYPE ty_json.
Note that data is a JSON object here, so it maps to an ABAP structure, not an internal table. Now you have to find a proper JSON deserializer that handles nested structures. Many deserializers expect a table input; in that case you have to wrap your JSON string in '[' ... ']' and define lt_data TYPE STANDARD TABLE OF ty_json.
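If /ui2/cl_json is available on your system, it handles the nested object directly; a minimal sketch, assuming the payload is in a string variable json_string:
DATA ls_data TYPE ty_json.

/ui2/cl_json=>deserialize(
  EXPORTING
    json = json_string
  CHANGING
    data = ls_data ).

" ls_data-data-skytext now contains 'Light rain'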
You can do it like this via the SAP JSON-XML reader:
CLASS lcl_json DEFINITION.
  PUBLIC SECTION.
    TYPES: BEGIN OF map,
             key   TYPE string,
             value TYPE string,
           END OF map,
           tt_map TYPE STANDARD TABLE OF map WITH DEFAULT KEY.

    CLASS-METHODS: parse IMPORTING iv_json       TYPE string
                         RETURNING VALUE(rv_map) TYPE tt_map.
ENDCLASS.

CLASS lcl_json IMPLEMENTATION.
  METHOD parse.
    " the sXML string reader detects JSON input and exposes it as JSON-XML
    DATA(o_reader) = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( iv_json ) ).
    TRY.
        DATA(o_node) = o_reader->read_next_node( ).
        WHILE o_node IS BOUND.
          CASE o_node->type.
            WHEN if_sxml_node=>co_nt_element_open.
              " object members carry their JSON key in the "name" attribute
              DATA(op) = CAST if_sxml_open_element( o_node ).
              LOOP AT op->get_attributes( ) ASSIGNING FIELD-SYMBOL(<a>).
                APPEND VALUE #( key = <a>->get_value( ) ) TO rv_map ASSIGNING FIELD-SYMBOL(<json>).
              ENDLOOP.
            WHEN if_sxml_node=>co_nt_value.
              " the value node that follows holds the member's value
              DATA(val) = CAST if_sxml_value_node( o_node ).
              <json>-value = val->get_value( ).
            WHEN OTHERS.
          ENDCASE.
          o_node = o_reader->read_next_node( ).
        ENDWHILE.
      CATCH cx_root INTO DATA(e_txt).
        RAISE EXCEPTION TYPE cx_sxml_parse_error
          EXPORTING
            error_text = e_txt->get_text( ).
    ENDTRY.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA(json_string) = ` {"apiVersion":"1.0", ` &&
                      ` "data":{ "location":"Dresden", "temperature":"7",` &&
                      ` "skytext":"Light rain", "humidity":"96", "wind":"7.31 km/h", "date":"02-14-2017", "day":"Tuesday" } } `.

  TRY.
      DATA(it_map) = lcl_json=>parse( json_string ).
    CATCH cx_root INTO DATA(e_txt).
      " do handling
  ENDTRY.
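For illustration, the parsed key/value pairs could then be listed with classic WRITE output (a minimal sketch):
LOOP AT it_map ASSIGNING FIELD-SYMBOL(<ls_map>).
  WRITE: / <ls_map>-key, <ls_map>-value.
ENDLOOP.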

F# deserialize a JSON string into correct record type

The scenario: receiving a JSON string from the network and deserializing it into the correct corresponding record type.
The JSON string can be either:
(1) {"a":"some text"} or
(2) {"b":1}
Values might differ, but the fields will correspond to either Type1 or Type2:
type Type1 = {a:string}
type Type2 = {b:int}
When receiving an unknown string, I'm trying to get an instance of the correct record type:
// Contents of a string might be like (1) or like (2), but we don't know which one
let someJsonString = "..."
let obj = JsonConvert.DeserializeObject(someJsonString)
The last line returns an object of type Object.
Using pattern matching on it doesn't determine the type:
match obj with
| :? Type1 as t1 -> printfn "Type1: a = %A" t1.a
| :? Type2 as t2 -> printfn "Type2: b = %A" t2.b
| _ -> printfn "None of above"
Here the "None of above" is printed.
When I deserialize indicating a specific type:
JsonConvert.DeserializeObject<Type1>(someJsonString)
Pattern matching is working and printing:
Type1: a = <the value of a>
However, that doesn't work in my case, because I can't know in advance what contents the unknown JSON string is going to have.
Is there any way to deserialize a JSON string into correct record type based on the string contents?
Note: If necessary, when the string is serialized on the sending side, the name of the type can be sent as part of the string. But then, how do I get an instance of the Type type from the name of a type, like "Type1" or "Type2"?
The fully qualified name of the type will be different on different machines, so I'm not sure if it's possible. I.e. one machine will have Type1 specified as:
"FSI_0059+Test1, FSI-ASSEMBLY, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null"
And another one as:
"FSI_0047+Test1, FSI-ASSEMBLY, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null"
You cannot do this generically without additional type information. The type would need to be specified when serializing, so that it can be read back when deserializing.
Newtonsoft.Json has a TypeNameHandling option that you can set when serializing, so that the resulting JSON deserializes into the correct type.
Here's a complete example:
let instance = { a = "some text" }
let settings = new JsonSerializerSettings(TypeNameHandling = TypeNameHandling.All)
let json = JsonConvert.SerializeObject(instance, settings)
let retrieved = JsonConvert.DeserializeObject(json, settings)

How to specify only particular fields using read.schema in JSON: Spark Scala

I am trying to programmatically enforce a schema on a textFile that looks like JSON. I tried with jsonFile, but the issue is that, to create a DataFrame from a list of JSON files, Spark has to do one pass through the data to infer the schema. So it needs to parse all the data, which is taking a long time (4 hours, since my data is zipped and TBs in size). So I want to try reading it as a textFile and enforce a schema to pick out only the fields of interest, to query later on the resulting DataFrame. But I am not sure how to map it to the input. Can someone give me a reference on how to map a schema onto JSON-like input?
input :
This is the full schema :
records: org.apache.spark.sql.DataFrame = [country: string, countryFeatures: string, customerId: string, homeCountry: string, homeCountryFeatures: string, places: array<struct<freeTrial:boolean,placeId:string,placeRating:bigint>>, siteName: string, siteId: string, siteTypeId: string, Timestamp: bigint, Timezone: string, countryId: string, pageId: string, homeId: string, pageType: string, model: string, requestId: string, sessionId: string, inputs: array<struct<inputName:string,inputType:string,inputId:string,offerType:string,originalRating:bigint,processed:boolean,rating:bigint,score:double,methodId:string>>]
But I am only interested in a few fields, like:
res45: Array[String] = Array({"requestId":"bnjinmm","siteName":"bueller","pageType":"ad","model":"prepare","inputs":[{"methodId":"436136582","inputType":"US","processed":true,"rating":0,"originalRating":1},{"methodId":"23232322","inputType":"UK","processed":falase,"rating":0,"originalRating":1}]
val records = sc.textFile("s3://testData/sample.json.gz")

val schema = StructType(Array(
  StructField("requestId", StringType, true),
  StructField("siteName", StringType, true),
  StructField("model", StringType, true),
  StructField("pageType", StringType, true),
  StructField("inputs", ArrayType(
    StructType(
      StructField("inputType", StringType, true),
      StructField("originalRating", LongType, true),
      StructField("processed", BooleanType, true),
      StructField("rating", LongType, true),
      StructField("methodId", StringType, true)
    ), true), true)))

val rowRDD = ??
val inputRDD = sqlContext.applySchema(rowRDD, schema)
inputRDD.registerTempTable("input")
sql("select * from input").foreach(println)
Is there any way to map this? Or do I need to use a JSON parser or something? I want to use textFile only because of the constraints.
I tried with:
val records = sqlContext.read.schema(schema).json("s3://testData/test2.gz")
But I keep getting the error:
<console>:37: error: overloaded method value apply with alternatives:
(fields: Array[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType <and>
(fields: java.util.List[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType <and>
(fields: Seq[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType
cannot be applied to (org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField)
StructField("inputs",ArrayType(StructType(StructField("inputType",StringType,true), StructField("originalRating",LongType,true), StructField("processed",BooleanType,true), StructField("rating",LongType,true), StructField("score",DoubleType,true), StructField("methodId",StringType,true)),true),true)))
^
It can be loaded with the following code with a predefined schema; Spark doesn't need to go through the whole file in the ZIP. The code in the question has an ambiguity: the inner StructType is missing the Array(...) wrapper around its StructFields, which is exactly what the overload error above complains about.
import org.apache.spark.sql.types._

val input = StructType(Array(
  StructField("inputType", StringType, true),
  StructField("originalRating", LongType, true),
  StructField("processed", BooleanType, true),
  StructField("rating", LongType, true),
  StructField("score", DoubleType, true),
  StructField("methodId", StringType, true)
))

val schema = StructType(Array(
  StructField("requestId", StringType, true),
  StructField("siteName", StringType, true),
  StructField("model", StringType, true),
  StructField("inputs", ArrayType(input, true), true)
))

val records = sqlContext.read.schema(schema).json("s3://testData/test2.gz")
Not all the fields need to be provided, though it's good to provide all of them if possible.
Spark tries its best to parse all rows; if some row is not valid, it adds a _corrupt_record column which contains the whole row.
The same applies when it's a plain JSON file.