How to get a value from JSON via a variable in TypeScript?

Suppose I have to read some data from some JSON files (i18n). Every JSON file may look like:
{
"foo": "1",
"bar": "2",
...
}
I don't know how many fields this JSON has (it can be expanded), but its fields look like:
{
[prop: string]: string
}
Besides, all the JSON files share the same fields.
When I try to read a value from this JSON via:
// a can be expanded, I'm not sure how many fields it has
let a = {
name: "dd",
addr: "ee",
}
//I'm confident a has a field "name"
let b = "name";
console.log(a[b]);
The error message is:
Element implicitly has an 'any' type because expression of type 'string' can't be used to index type
How can I fix it?

The error you're encountering arises because the keys of a are not just any string (in fact, a key can only be "name" or "addr"), whereas b can be a string of any arbitrary value. If you are sure that b holds a key found in the object a, you can hint this to TypeScript:
let b: keyof typeof a = "name";
Attempting to assign any arbitrary string value to b will lead to an error:
// This will cause an error
let b: keyof typeof a = "foobar";
See proof-of-concept on TypeScript Playground.
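A minimal self-contained sketch of the fix, using the same object as in the question:

```typescript
// `a` is inferred as { name: string; addr: string }.
const a = {
    name: "dd",
    addr: "ee",
};

// "keyof typeof a" is the union "name" | "addr", so only real keys compile.
let b: keyof typeof a = "name";
console.log(a[b]); // indexing now type-checks, prints "dd"

// let c: keyof typeof a = "foobar"; // error: '"foobar"' is not assignable
```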

Related

/ui2/cl_json=>deserialize doesn't fill structure

I have two types of JSON: result and error.
I'm trying to deserialize these two JSON to my internal structure, but there's a problem.
Only for result the function works correctly, for error the structure is always blank.
Can anybody help me to solve my problem or indicate my mistake?
Here are my JSON:
{
  "result": [
    {
      "to": "to_somebody",
      "id": "some_id",
      "code": "some_code"
    }
  ]
}
{
  "error": {
    "name": "some_name",
    "date": [],
    "id": "11",
    "descr": "Unknown error"
  },
  "result": null
}
Here is my ABAP code (there's a screen to enter the JSON):
DATA go_textedit TYPE REF TO cl_gui_textedit.
PARAMETERS: json TYPE string.

AT SELECTION-SCREEN OUTPUT.
  IF go_textedit IS NOT BOUND.
    CREATE OBJECT go_textedit
      EXPORTING
        parent = cl_gui_container=>screen0.
    go_textedit->set_textstream( json ).
  ENDIF.

AT SELECTION-SCREEN.
  go_textedit->get_textstream( IMPORTING text = json ).
  cl_gui_cfw=>flush( ).
TYPES: BEGIN OF stt_result,
         to TYPE string,
         id TYPE string,
         code TYPE string,
       END OF stt_result.

TYPES: BEGIN OF stt_error,
         name TYPE string,
         date TYPE string,
         id TYPE string,
         descr TYPE string,
         result TYPE string,
       END OF stt_error.

DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error TYPE STANDARD TABLE OF stt_error,
      END OF ls_response_result.

/ui2/cl_json=>deserialize( EXPORTING json        = lv_cdata
                                     pretty_name = /ui2/cl_json=>pretty_mode-camel_case
                           CHANGING  data        = ls_response_result ).
Revise your type declarations. There is a discrepancy in one place, where you declare an ABAP table (which expects a JSON array) while the JSON actually contains an object (which needs an ABAP structure).
This is the complete code I used:
CLASS the_json_parser DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF result_structure,
        to TYPE string,
        id TYPE string,
        code TYPE string,
      END OF result_structure.
    TYPES result_table TYPE
      STANDARD TABLE OF result_structure
      WITH EMPTY KEY.
    TYPES date_table TYPE
      STANDARD TABLE OF string
      WITH EMPTY KEY.
    TYPES:
      BEGIN OF error_structure,
        name TYPE string,
        date TYPE date_table,
        id TYPE string,
        descr TYPE string,
        result TYPE string,
      END OF error_structure.
    TYPES:
      BEGIN OF complete_result_structure,
        result TYPE result_table,
        error TYPE error_structure,
      END OF complete_result_structure.
    CLASS-METHODS parse
      IMPORTING
        json TYPE string
      RETURNING
        VALUE(result) TYPE complete_result_structure.
ENDCLASS.

CLASS the_json_parser IMPLEMENTATION.
  METHOD parse.
    /ui2/cl_json=>deserialize(
      EXPORTING
        json        = json
        pretty_name = /ui2/cl_json=>pretty_mode-camel_case
      CHANGING
        data        = result ).
  ENDMETHOD.
ENDCLASS.
Verified with the test class:
CLASS unit_tests DEFINITION FOR TESTING RISK LEVEL HARMLESS DURATION SHORT.
  PUBLIC SECTION.
    METHODS parses_result FOR TESTING.
    METHODS parses_error FOR TESTING.
ENDCLASS.

CLASS unit_tests IMPLEMENTATION.
  METHOD parses_result.
    DATA(json) = `{` &&
                 `"result": [` &&
                 `{` &&
                 `"to": "to_somebody",` &&
                 `"id": "some_id",` &&
                 `"code": "some_code"` &&
                 `}` &&
                 `]` &&
                 `}`.
    DATA(result) = the_json_parser=>parse( json ).
    cl_abap_unit_assert=>assert_not_initial( result ).
    cl_abap_unit_assert=>assert_not_initial( result-result ).
    cl_abap_unit_assert=>assert_initial( result-error ).
  ENDMETHOD.

  METHOD parses_error.
    DATA(json) = `{` &&
                 `"error": {` &&
                 `"name": "some_name",` &&
                 `"date": [],` &&
                 `"id": "11",` &&
                 `"descr": "Unknown error"` &&
                 `},` &&
                 `"result": null` &&
                 `}`.
    DATA(result) = the_json_parser=>parse( json ).
    cl_abap_unit_assert=>assert_not_initial( result ).
    cl_abap_unit_assert=>assert_initial( result-result ).
    cl_abap_unit_assert=>assert_not_initial( result-error ).
  ENDMETHOD.
ENDCLASS.
A JSON object {...} can be mapped only by an ABAP structure.
A JSON array [...] can be mapped only by an ABAP internal table.
In your code, the error JSON is a JSON object, but the ABAP variable is an internal table.
So you should correct the ABAP variable by removing STANDARD TABLE OF so that it becomes a structure:
DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error TYPE stt_error, " <=== "STANDARD TABLE OF" removed
      END OF ls_response_result.
Thanks to all for your advice.
I solved my problem.
The case was:
I get two types of JSON from the provider: result or error.
I can't get both of them at the same time.
I need to deserialize them into my internal structure.
{
  "result": [
    {
      "to": "some_value",
      "id": "some_id",
      "code": "some_code"
    }
  ]
}
and
{
  "error": {
    "name": "some_name",
    "date": [some_date],
    "id": "some_id",
    "descr": "some_description"
  },
  "result": null
}
Here's the ABAP code I needed, which works correctly.
P.S.: My mistake was that I treated error as an internal table instead of a structure.
TYPES: BEGIN OF stt_result,
         to TYPE string,
         id TYPE string,
         code TYPE string,
       END OF stt_result.

TYPES lt_data TYPE STANDARD TABLE OF string WITH EMPTY KEY.

TYPES: BEGIN OF stt_error,
         name TYPE string,
         date TYPE lt_data,
         id TYPE string,
         descr TYPE string,
         result TYPE stt_result,
       END OF stt_error.

DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error TYPE stt_error,
      END OF ls_response_result.

/ui2/cl_json=>deserialize( EXPORTING json        = lv_cdata
                                     pretty_name = /ui2/cl_json=>pretty_mode-camel_case
                           CHANGING  data        = ls_response_result ).

Golang Error Types are empty when encoded to JSON

I'm trying to encode some JSON for a REST API; everything is working fine except for some errors. For example, with this struct:
type TemplateResponse struct {
    Message  string
    Error    error
    Template Template
}
Encoded with this data:
res := TemplateResponse{"Template not found.", fmt.Errorf("There is no template on this host with the name " + vars["name"]), Template{}}
json.NewEncoder(w).Encode(res)
Returns:
{
"Message": "Template not found.",
"Error": {},
"Template": {
"Name": "",
"Disabled": false,
"Path": "",
"Version": ""
}
}
I'm getting this seemingly randomly across my application, where 'error' types are being returned as empty. Any ideas?
Thanks!
Because error is just an interface. It may hold a value of any concrete type that implements it.
In your example you used fmt.Errorf() to create an error value. That calls errors.New(), which returns a pointer to a value of the unexported errors.errorString struct. Its definition is:
type errorString struct {
s string
}
This struct value will be marshaled, but since it has no exported fields (only exported fields are marshaled), it will be an empty JSON object: {}.
The "fix" is: don't marshal values of "general" interface types, relying on the dynamic values being meaningfully marshalable into JSON. Instead, add a field that stores the error string (the result of calling Error()), and omit the Error error field from marshaling, e.g.:
type TemplateResponse struct {
    Message  string
    Error    error `json:"-"`
    ErrorMsg string
    Template Template
}
Of course then you also need to set / fill the ErrorMsg field before marshaling.
Or if you don't need to store the error value in the struct, remove that field completely:
type TemplateResponse struct {
    Message  string
    ErrorMsg string
    Template Template
}
If you still want to keep the Error error field (and not the ErrorMsg field), then you need to implement a custom marshaling logic by implementing the json.Marshaler interface where you can "convert" the error value to a meaningful string for example (or into another value that can be marshaled properly).

What kind of data structure is needed to parse JSON to itab?

I want to parse a JSON string into an ABAP internal table, for example this one:
{
  "apiVersion": "1.0",
  "data": {
    "location": "Dresden",
    "temperature": "7",
    "skytext": "Light rain",
    "humidity": "96",
    "wind": "7.31 km/h",
    "date": "02-14-2017",
    "day": "Tuesday"
  }
}
I want to use the method cl_fdt_json=>json_to_data and put the keys and values into a table like this:
types: begin of map,
         key type string,
         value type string,
       end of map.
data json_data type standard table of map.
But, unfortunately, it does not work like that. Does anyone have experience with this kind of problem? I don't have access to all the documentation because this is my sample task for a hiring process at SAP, and this is the last part of the "puzzle" ;) It is hard for me to find the solution.
Thank you!!!
EDIT: According to vwegert's answer I tried the following. (This is a little bit different from what I originally wanted to do, but it would also be OK.)
DATA cl_oops TYPE REF TO cx_dynamic_check.
DATA(text) = result.

TYPES: BEGIN OF ty_structure,
         skytext TYPE string,
         location TYPE string,
         temperature TYPE string,
         humidity TYPE string,
         wind TYPE string,
         date TYPE string,
         day TYPE string,
       END OF ty_structure.

DATA wa_structure TYPE ty_structure.

TRY.
    CALL TRANSFORMATION id
      SOURCE XML text
      RESULT data = wa_structure.
    MESSAGE wa_structure-skytext TYPE 'I'.
  CATCH cx_transformation_error INTO cl_oops.
    WRITE cl_oops->get_longtext( ).
ENDTRY.
But it still doesn't work. When I check the value of wa_structure-skytext, it is unfortunately empty. I cannot find the mistake. Does anyone have an idea?
Rather than use the FDT class (which might not be available on all systems), you might want to take a look at the well-documented capabilities of the ABAP runtime system itself. This example program might be the way to go for you. You would essentially provide a Simple Transformation that would map the JSON XML structure to your data structure, instantiate a sXML JSON reader and then pass that as source to CALL TRANSFORMATION.
Besides @vwegert's recommendation to use the SAP-documented JSON transformations, you could check the open-source alternatives. This one looks promising.
{"apiVersion":"1.0", "data":{ "location":"Dresden", "temperature":"7", "skytext":"Light rain", "humidity":"96", "wind":"7.31 km/h", "date":"02-14-2017", "day":"Tuesday" } }
The corresponding structure in ABAP would be:
" The nested data table
Types: Begin of ty_data,
         location TYPE string,
         temperature TYPE string,
         skytext TYPE string,
         " etc.
       End of ty_data,
       ty_t_data TYPE STANDARD TABLE OF ty_data WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 0.

" The whole JSON structure
Types: Begin of ty_json,
         apiversion TYPE string,
         data TYPE ty_t_data,
       End of ty_json.

DATA: ls_data TYPE ty_json.
Now you have to find a proper JSON deserializer that handles nested tables.
Most deserializers expect a table as input, so you have to wrap your JSON string in '['...']' and define lt_data TYPE STANDARD TABLE OF ty_json.
You can do it like this via the SAP JSON-XML reader:
CLASS lcl_json DEFINITION.
  PUBLIC SECTION.
    TYPES: BEGIN OF map,
             key TYPE string,
             value TYPE string,
           END OF map,
           tt_map TYPE STANDARD TABLE OF map WITH DEFAULT KEY.
    CLASS-METHODS: parse IMPORTING iv_json TYPE string
                         RETURNING VALUE(rv_map) TYPE tt_map.
ENDCLASS.

CLASS lcl_json IMPLEMENTATION.
  METHOD parse.
    DATA(o_reader) = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( iv_json ) ).
    TRY.
        DATA(o_node) = o_reader->read_next_node( ).
        WHILE o_node IS BOUND.
          CASE o_node->type.
            WHEN if_sxml_node=>co_nt_element_open.
              DATA(op) = CAST if_sxml_open_element( o_node ).
              LOOP AT op->get_attributes( ) ASSIGNING FIELD-SYMBOL(<a>).
                APPEND VALUE #( key = <a>->get_value( ) ) TO rv_map ASSIGNING FIELD-SYMBOL(<json>).
              ENDLOOP.
            WHEN if_sxml_node=>co_nt_value.
              DATA(val) = CAST if_sxml_value_node( o_node ).
              <json>-value = val->get_value( ).
            WHEN OTHERS.
          ENDCASE.
          o_node = o_reader->read_next_node( ).
        ENDWHILE.
      CATCH cx_root INTO DATA(e_txt).
        RAISE EXCEPTION TYPE cx_sxml_parse_error EXPORTING error_text = e_txt->get_text( ).
    ENDTRY.
  ENDMETHOD.
ENDCLASS.
START-OF-SELECTION.
  DATA(json_string) = ` {"apiVersion":"1.0", ` &&
                      ` "data":{ "location":"Dresden", "temperature":"7",` &&
                      ` "skytext":"Light rain", "humidity":"96", "wind":"7.31 km/h", "date":"02-14-2017", "day":"Tuesday" } } `.
  TRY.
      DATA(it_map) = lcl_json=>parse( json_string ).
    CATCH cx_root INTO DATA(e_txt).
      " do handling
  ENDTRY.

error: the type of this value must be known in this context (Rust) / serde_json Value

I am using serde_json to deserialise a JSON document. I have a function that, given a string (the JSON document), returns an Option wrapping a serde_json Value (an enum that represents the JSON type).
This value is passed around to other functions as required.
However, I realised that passing around a Value is not quite what I want, because that way the key is not available.
To illustrate my point, if I have a json document that looks like this:
{
"root" : {
"regex" : null,
"prefixes" : [ "a_", "b_" ]
}
}
"root" is a JSON object, "regex" is JSON null, and "prefixes" is a JSON array.
Now, the serde_json type Value is an enum with variants representing the JSON types, e.g. Object, Null, and Array for the examples given above.
The serde_json crate uses std::collections::BTreeMap to represent nodes in the JSON document, where the String type represents the JSON keys (in the above, these would be "root", "regex" and "prefixes"). So passing around just references to Values is only partly helpful; I should be passing around a BTreeMap instead, so that I can access the key too.
So this is the following function that I am trying to re-write:
fn get_json_content(content_s: &str) -> Option<Value> {
    // instead of returning a value, we need to return a BTreeMap, so we can get the
    // key and the value.
    println!("===>>> json_content obtained: {}", content_s);
    match serde_json::from_str(content_s) { // -> Result<Value>
        Ok(some_value) => Some(some_value),
        Err(_) => None
    }
}
So I started to rewrite the function but came up against the "the type of this value must be known in this context" error:
fn get_json_content_as_btreemap<'a>(content_s: &str) -> Option<&'a BTreeMap<String, Value>> {
    match serde_json::from_str(content_s) { // -> Result<Value>
        Ok(some) => {
            // I expect the type of key_value_pair to be BTreeMap<String, Value>
            // (but I may be wrong!)
            let key_value_pair = some.as_object().unwrap(); // Error here
        },
        Err(_) => None
    }
}
I found other questions on stackoverflow like this one:
the type of this value must be known in this context
and using this as a helper, I tried to insert the type as follows:
let key_value_pair = some.as_object::<BTreeMap<_, _>>().unwrap();
which doesn't fix the issue. I also tried other similar variations to no avail. So how do I fix this, please?
EDIT:
I have another function in this app as follows:
fn get_root_value<'a>(json_documemt: &'a Value) -> Result<&'a Value, JsonErrorCode> {
    if json_documemt.is_object() {
        for (k, v) in json_documemt.as_object().unwrap().iter() {
            if k == "root" {
                println!("found root: {}", k);
                return Ok(v)
            }
        }
        return Err(JsonErrorCode::Custom("Failed to find root node".to_string()))
    }
    Err(JsonErrorCode::Custom("Not an object".to_string()))
}
... and this works fine. Here you can see that I can call as_object() and then obtain the key and value as a tuple pair. I don't understand why as_object is working in one case but not the other. I would like to pull out the BTreeMap and pass this around as a borrowed item.
You can change the return type of your initial function and serde_json will deserialize to the appropriate object if it can:
fn get_json_content(content_s: &str) -> Option<BTreeMap<String, Value>> {
    // instead of returning a value, we need to return a BTreeMap, so we can get the
    // key and the value.
    println!("===>>> json_content obtained: {}", content_s);
    match serde_json::from_str(content_s) { // -> Result<Value>
        Ok(some_value) => Some(some_value),
        Err(_) => None
    }
    // Note: this match statement can be rewritten as
    // serde_json::from_str(content_s).ok()
}
Your second example won't work because you are instantiating the Value object inside the function, and then trying to return a reference to the object you just instantiated. This won't work because the object will go out of scope at the end of the function and the reference will then be invalid.

golang - decode dynamic 'flat' http-json response

I am trying to parse a dynamic HTTP JSON response into a classic Go struct. I am working with OrientDB and the problem is the following:
there are always static elements for each object
each object has custom attributes
For example, a pseudo-struct-response could be:
type orientdb_reply struct {
    // the "static" elements that we always have
    element_type string `json:"#type"`
    rid string `json:"#rid"`
    version int `json:"#version"`
    class string `json:"#class"`
    // then we have custom attributes for each object
    // for example, object 'server' can have
    hostname string `json:"hostname"`
    ip string `json:"ip"`
    // object 'display' can have
    model string `json:"model"`
    serial_number string `json:"serial_number"`
    resolution string `json:"resolution"`
    // and so on
}
The difficulty in this case is that the response is "flat"; if it contained sub-elements the solution would be trivial: simply add other structs as children.
But in this case I would like to avoid building a mega-huge struct that contains each possible attribute, and I would like to keep the structures separate per object type without repeating the constant elements in each object struct.
Is this possible?
A possible solution that I now see could be:
type constant_fields struct {
    // the "constant" elements that we always have
    element_type string `json:"#type"`
    rid string `json:"#rid"`
    version int `json:"#version"`
    class string `json:"#class"`
}
type server_fields struct {
    constants constant_fields
    // for example, object 'server' can have
    hostname string `json:"hostname"`
    ip string `json:"ip"`
}
type display_fields struct {
    constants constant_fields
    // object 'display' can have
    model string `json:"model"`
    serial_number string `json:"serial_number"`
    resolution string `json:"resolution"`
}
But this means that I would have to parse each request twice (once for the constant fields and once for the attribute fields). And I don't know if the parser accepts a "strange" struct (constant_fields) that isn't really present in the JSON.
Real example:
{
  "result": [
    {
      "name": "cen110t",
      "guest_state": "running",
      "#type": "d",
      "guest_mac_addr": "XX:XX:XX:XX:XX:XX",
      "#version": 1,
      "hostname": "",
      "vm_uid": "vm-29994",
      "guest_ip": "10.200.1.92",
      "vm_type": "VirtualMachine",
      "#rid": "#13:103",
      "guest_family": "linuxGuest",
      "storage_type": "",
      "guest_fullname": "CentOS 4/5/6/7 (64-bit)",
      "#class": "VM"
    }
  ]
}
I would like to avoid building a mega-huge struct that contains each possible attribute
Use json.RawMessage as the attribute type in your struct definition; that attribute will then be kept raw and not parsed.
I would like to keep the structures separate per object type without repeating the static elements in each object struct
Of course you can nest a struct in a struct, just like a JSON object in a JSON object. Putting the common (static) attributes in one JSON object and nesting it is a good way.
Regarding the question update:
Yes, it will work. The JSON parser will handle the nested struct for you; there is no need to parse twice. But your JSON string needs to match the Go struct's structure. For example, this JSON:
{
  "k1": "v1",
  "k2": {
    "k21": "v21"
  }
}
will match this Go struct:
type nest_struct struct {
    K21 string `json:"k21"`
}
type top struct {
    K1 string      `json:"k1"`
    K2 nest_struct `json:"k2"`
}