I'm seeing some weird behavior with Kafka when sending a message whose fields contain numbers as strings:
{
    "test1": "1",
    "test2": "9000000000000000",
    "test3": "9999999999999999",
    "test4": "10000000000000000"
}
When I produce this message to my topic, Kafka appears to convert the values to integers or not, depending on their magnitude.
Here is the result after producing it to my topic:
{
    "test1": "1",
    "test2": "9000000000000000",
    "test3": 9999999999999999,
    "test4": 10000000000000000
}
test3 and test4 are converted to integer values instead of being kept as strings.
I've tested this with a C# client and also with the Confluent web app.
Is there any way to avoid this automatic conversion?
Kafka isn't doing this. Your serialization framework is. For example, if you have a JSON model that defines those fields as integers rather than strings, it may parse and type-convert the values for you.
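To illustrate, here is a minimal C# sketch using Json.NET (the model classes are hypothetical, not taken from your producer) showing that the declared field type, not Kafka, decides whether the value round-trips as a string or as a number:

using System;
using Newtonsoft.Json;

// Field typed as a number: Json.NET silently coerces "9999999999999999".
class TypedMessage { public long Test3 { get; set; } }

// Field typed as a string: the value round-trips unchanged.
class StringMessage { public string Test3 { get; set; } }

class Demo
{
    static void Main()
    {
        var json = @"{ ""Test3"": ""9999999999999999"" }";

        var typed = JsonConvert.DeserializeObject<TypedMessage>(json);
        Console.WriteLine(JsonConvert.SerializeObject(typed));
        // {"Test3":9999999999999999}  <- quotes are gone

        var str = JsonConvert.DeserializeObject<StringMessage>(json);
        Console.WriteLine(JsonConvert.SerializeObject(str));
        // {"Test3":"9999999999999999"}
    }
}

If your producer or the web app applies a schema or converter that types those fields numerically, you will see exactly the conversion above; keeping the fields declared as strings avoids it.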
I have two types of JSON: result and error.
I'm trying to deserialize these two JSONs into my internal structure, but there's a problem.
The function works correctly only for result; for error, the structure is always blank.
Can anybody help me solve my problem or point out my mistake?
Here are my JSONs:
{
    "result": [
        {
            "to": "to_somebody",
            "id": "some_id",
            "code": "some_code"
        }
    ]
}
{
    "error": {
        "name": "some_name",
        "date": [],
        "id": "11",
        "descr": "Unknown error"
    },
    "result": null
}
Here is my ABAP code (there's a screen to enter the JSON):
DATA go_textedit TYPE REF TO cl_gui_textedit.
PARAMETERS: json TYPE string.

AT SELECTION-SCREEN OUTPUT.
  IF go_textedit IS NOT BOUND.
    CREATE OBJECT go_textedit
      EXPORTING
        parent = cl_gui_container=>screen0.
    go_textedit->set_textstream( json ).
  ENDIF.

AT SELECTION-SCREEN.
  go_textedit->get_textstream( IMPORTING text = json ).
  cl_gui_cfw=>flush( ).
TYPES: BEGIN OF stt_result,
         to   TYPE string,
         id   TYPE string,
         code TYPE string,
       END OF stt_result.

TYPES: BEGIN OF stt_error,
         name   TYPE string,
         date   TYPE string,
         id     TYPE string,
         descr  TYPE string,
         result TYPE string,
       END OF stt_error.

DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error  TYPE STANDARD TABLE OF stt_error,
      END OF ls_response_result.

/ui2/cl_json=>deserialize( EXPORTING json        = lv_cdata
                                     pretty_name = /ui2/cl_json=>pretty_mode-camel_case
                           CHANGING  data        = ls_response_result ).
Revise your type declarations. There is a discrepancy in one place where you declare an ABAP internal table (which maps to a JSON array) but the JSON actually contains an object (which needs an ABAP structure).
This is the complete code I used:
CLASS the_json_parser DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF result_structure,
        to   TYPE string,
        id   TYPE string,
        code TYPE string,
      END OF result_structure.

    TYPES result_table TYPE
      STANDARD TABLE OF result_structure
      WITH EMPTY KEY.

    TYPES date_table TYPE
      STANDARD TABLE OF string
      WITH EMPTY KEY.

    TYPES:
      BEGIN OF error_structure,
        name   TYPE string,
        date   TYPE date_table,
        id     TYPE string,
        descr  TYPE string,
        result TYPE string,
      END OF error_structure.

    TYPES:
      BEGIN OF complete_result_structure,
        result TYPE result_table,
        error  TYPE error_structure,
      END OF complete_result_structure.

    CLASS-METHODS parse
      IMPORTING
        json          TYPE string
      RETURNING
        VALUE(result) TYPE complete_result_structure.
ENDCLASS.

CLASS the_json_parser IMPLEMENTATION.
  METHOD parse.
    /ui2/cl_json=>deserialize(
      EXPORTING
        json        = json
        pretty_name = /ui2/cl_json=>pretty_mode-camel_case
      CHANGING
        data        = result ).
  ENDMETHOD.
ENDCLASS.
Verified with the test class:
CLASS unit_tests DEFINITION FOR TESTING RISK LEVEL HARMLESS DURATION SHORT.
  PUBLIC SECTION.
    METHODS parses_result FOR TESTING.
    METHODS parses_error FOR TESTING.
ENDCLASS.

CLASS unit_tests IMPLEMENTATION.
  METHOD parses_result.
    DATA(json) = `{` &&
                 `"result": [` &&
                 `{` &&
                 `"to": "to_somebody",` &&
                 `"id": "some_id",` &&
                 `"code": "some_code"` &&
                 `}` &&
                 `]` &&
                 `}`.

    DATA(result) = the_json_parser=>parse( json ).

    cl_abap_unit_assert=>assert_not_initial( result ).
    cl_abap_unit_assert=>assert_not_initial( result-result ).
    cl_abap_unit_assert=>assert_initial( result-error ).
  ENDMETHOD.

  METHOD parses_error.
    DATA(json) = `{` &&
                 `"error": {` &&
                 `"name": "some_name",` &&
                 `"date": [],` &&
                 `"id": "11",` &&
                 `"descr": "Unknown error"` &&
                 `},` &&
                 `"result": null` &&
                 `}`.

    DATA(result) = the_json_parser=>parse( json ).

    cl_abap_unit_assert=>assert_not_initial( result ).
    cl_abap_unit_assert=>assert_initial( result-result ).
    cl_abap_unit_assert=>assert_not_initial( result-error ).
  ENDMETHOD.
ENDCLASS.
A JSON object {...} can be mapped only by an ABAP structure.
A JSON array [...] can be mapped only by an ABAP internal table.
In your code, the error JSON is a JSON object, but the ABAP variable is an internal table.
So you should correct the ABAP variable by removing STANDARD TABLE OF so that it becomes a structure:
DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error  TYPE stt_error, " <=== "STANDARD TABLE OF" removed
      END OF ls_response_result.
Thanks to all for your advice.
I solved my problem.
The case was:
I get two types of JSON from the provider: result or error.
I can't get both of them at the same time.
I need to deserialize them into my internal structure.
{
    "result": [
        {
            "to": "some_value",
            "id": "some_id",
            "code": "some_code"
        }
    ]
}
and
{
    "error": {
        "name": "some_name",
        "date": [some_date],
        "id": "some_id",
        "descr": "some_description"
    },
    "result": null
}
Here's the ABAP code I needed, which works correctly.
P.S.: My mistake was that I treated error as an internal table instead of a structure.
TYPES: BEGIN OF stt_result,
         to   TYPE string,
         id   TYPE string,
         code TYPE string,
       END OF stt_result.

TYPES lt_data TYPE STANDARD TABLE OF string WITH EMPTY KEY.

TYPES: BEGIN OF stt_error,
         name   TYPE string,
         date   TYPE lt_data,
         id     TYPE string,
         descr  TYPE string,
         result TYPE stt_result,
       END OF stt_error.

DATA: BEGIN OF ls_response_result,
        result TYPE STANDARD TABLE OF stt_result,
        error  TYPE stt_error,
      END OF ls_response_result.

/ui2/cl_json=>deserialize( EXPORTING json        = lv_cdata
                                     pretty_name = /ui2/cl_json=>pretty_mode-camel_case
                           CHANGING  data        = ls_response_result ).
I have looked online, and all I've found is people handling CLOBs and building JSON-formatted text into CLOBs.
I'm building up a JSON object and then storing it in a CLOB, which is then returned by a function. This works when the JSON object is small, but I'm generating 122k characters.
So here goes.
This is my JSON object:
jsonschemeresult json := json();
which at the end of my PL/SQL contains something like this (funds can repeat, hence the 122k):
{
    "report_level": {
        "report_date": "15/03/2019"
    },
    "scheme_level": {
        "name": "A name",
        "id": "123123123",
        "funds": [{
            "fund_name": "Fund 1",
            "fund_value": 123123.12
        }, {
            "fund_name": "Fund 2",
            "fund_value": 987987.98
        }]
    }
}
I can get console output using this, which is how I know it's 122k in length:
jsonschemeresult.print;
And if the JSON is small, I can return the CLOB using this:
v_final_clob := jsonschemeresult.to_char;
RETURN v_final_clob;
I believe it's the to_char that is the restriction.
I've looked online and on here; others use a loop and iterate through the CLOB, whereas I need to loop through the JSON object, or something similar.
Kindly review and give feedback.
I found a solution: create a temporary LOB and use the JSON_AC procedure object_to_clob to convert the JSON object to a CLOB:
--use temporary LOB to hold the JSON
dbms_lob.createtemporary(lob_loc => v_out_json_clob, cache => TRUE);
--using the temporary LOB convert the JSON to a CLOB
utluser.json_ac.object_to_clob(p_self => jsonschemeresult, buf => v_out_json_clob);
RETURN v_out_json_clob;
One drawback is that you lose the 'pretty' JSON formatting in the returned CLOB, but syntactically it works.
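Put together as a whole function, the shape is roughly the following; this is a minimal sketch assuming the PL/JSON library (the json type and json_ac package used above) is installed, and the function name and build-up step are illustrative:

CREATE OR REPLACE FUNCTION get_scheme_report RETURN CLOB IS
  jsonschemeresult json := json();
  v_out_json_clob  CLOB;
BEGIN
  -- ... build up jsonschemeresult here ...

  -- Use a temporary LOB to hold the JSON; CACHE speeds up the writes.
  dbms_lob.createtemporary(lob_loc => v_out_json_clob, cache => TRUE);

  -- Serialize straight into the CLOB instead of going through to_char,
  -- which avoids the VARCHAR2 length restriction hit at 122k characters.
  json_ac.object_to_clob(p_self => jsonschemeresult, buf => v_out_json_clob);

  RETURN v_out_json_clob;
END get_scheme_report;
/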
I have an AppSync GraphQL API that runs a query against DynamoDB and returns a JSON string. In my response mapping template I use the built-in $util.parseJson() function as listed here, but I'm still returned a JSON string in the query window and when requesting the data in my React app.
In my schema file, I have an ordinary ID field and an address field of type AWSJSON.
type Venue {
    id: ID!
    address: AWSJSON
}
When running a mutation, I usually run the address object through a quick JSON.stringify(addressObj), which formats the object as a string with the quotes escaped (\"\") so it can be inserted into DynamoDB.
Request Mapping template
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    }
}
Response Mapping template
#set($result = $ctx.result)
## address - parse back to JSON
#set($result.address = $util.parseJson($ctx.result.address))
## Return the result
$util.toJson($result)
The idea of creating a new variable and then assigning it the parsed value was taken from How return JSON object from DynamoDB with appsync?. So, as seen above, I am passing the value through what seems to be the correct method to turn it from stringified JSON into an object, but it doesn't appear to work.
The current response:
{
    "data": {
        "getVenue": {
            "id": "31538150",
            "address": "{\"lng\":-1.54511300000001,\"postcode\":\"LS1 5DL\",\"short\":\"New Station St., LS1\",\"lat\":53.795231,\"full\":\"16 New Station St, Leeds LS1 5DL, UK\"}"
        }
    }
}
Whereas the response that I want is...
{
    "data": {
        "getVenue": {
            "id": "31538150",
            "address": { "lng": -1.54511300000001, "postcode": "LS1 5DL", "short": "New Station St., LS1", "lat": 53.795231, "full": "16 New Station St, Leeds LS1 5DL, UK" }
        }
    }
}
Any help is greatly appreciated!
Old question, but I thought I would add some notes...
AWSJSON is a string value that is parsed into DynamoDB as JSON and stringified again on fetching.
So AppSync expects a string (stringified JSON) for inputs and will return a string that can be parsed with JSON.parse.
However, the data is stored in parsed form in DynamoDB. So if you query DynamoDB directly rather than through AppSync, you can query it as if it were an object. The same goes for inputting data directly into DynamoDB.
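On the client, that means the AWSJSON field has to be parsed after fetching. A minimal TypeScript sketch (the field names come from the question; the response shape is the standard GraphQL envelope):

// The address field arrives as a string, exactly as in the response above.
type VenueResponse = {
  data: { getVenue: { id: string; address: string } };
};

function readVenue(response: VenueResponse) {
  const { id, address } = response.data.getVenue;
  // JSON.parse turns the escaped string back into a usable object.
  const addressObj = JSON.parse(address) as {
    lng: number; lat: number; postcode: string; short: string; full: string;
  };
  console.log(id, addressObj.postcode); // "31538150", "LS1 5DL"
}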
The only way to get real JSON in the AppSync result is to define each and every field in GraphQL. Sometimes this can be achieved by restructuring the data. For example, instead of storing:
{
    bob: { age: 34 },
    igor: { age: 124 }
}
which would need to be stringified into an AWSJSON field, it can be restructured like this:
[{
    name: 'bob',
    age: 34
}, {
    name: 'igor',
    age: 124
}]
Which can be defined in GraphQL as something like this:
type User {
    age: Int
    name: String
}
Now one can extract a specific value such as age from GraphQL without any JSON parse/stringify involved.
I'm new to Go and working hard to follow its style, and I'm not sure how to proceed.
I want to push a JSON object to a Geckoboard leaderboard, which I believe requires the following format based on the API doc and the one for leaderboards specifically:
{
    "api_key": "222f66ab58130a8ece8ccd7be57f12e2",
    "data": {
        "item": [
            { "label": "Bob", "value": 4, "previous_value": 6 },
            { "label": "Alice", "value": 3, "previous_value": 4 }
        ]
    }
}
My instinct is to build a struct for the API call itself and another called Contestant, which will be nested under item. But in order to use json.Marshal(contestant1), the naming of my fields would not meet Go's naming conventions:
// Contestant structure to nest into the API call
type Contestant struct {
    label         string
    value         int8
    previous_rank int8
}
This feels incorrect. How should I define my Contestant objects so that I can marshal them into JSON without breaking convention?
To output a proper JSON object from a structure, you have to export the fields of that structure. To do so, just capitalize the first letter of each field.
Then you can add struct tags to tell the encoder how to name your JSON fields:
type Contestant struct {
    Label        string `json:"label"`
    Value        int8   `json:"value"`
    PreviousRank int8   `json:"previous_rank"`
}
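A short usage sketch; the Payload and Data wrapper types are illustrative (only Contestant comes from the answer above), shaped to match the Geckoboard payload in the question:

package main

import (
    "encoding/json"
    "fmt"
)

type Contestant struct {
    Label        string `json:"label"`
    Value        int8   `json:"value"`
    PreviousRank int8   `json:"previous_rank"`
}

// Wrapper types matching the nesting of the API payload.
type Data struct {
    Item []Contestant `json:"item"`
}

type Payload struct {
    APIKey string `json:"api_key"`
    Data   Data   `json:"data"`
}

func main() {
    p := Payload{
        APIKey: "222f66ab58130a8ece8ccd7be57f12e2",
        Data: Data{Item: []Contestant{
            {Label: "Bob", Value: 4, PreviousRank: 6},
            {Label: "Alice", Value: 3, PreviousRank: 4},
        }},
    }

    out, err := json.Marshal(p)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
    // {"api_key":"...","data":{"item":[{"label":"Bob","value":4,"previous_rank":6},
    //  {"label":"Alice","value":3,"previous_rank":4}]}}
}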
I have a big JSON object with a list of "tickets". The schema looks like below:
{
    "Artist": "Artist1",
    "Tickets": [
        {
            "Id": 1,
            "Attr2Array": [
                {
                    "Att41": 1,
                    "Att42": "A",
                    "Att43": null
                },
                {
                    "Att41": 1,
                    "Att42": "A",
                    "Att43": null
                }
            ],
            ... (more properties)
            "Price": "20",
            "Description": "I m a ticket"
        },
        {
            "Id": 4,
            "Attr2Array": [
                {
                    "Att41": 1,
                    "Att42": "A",
                    "Att43": null
                },
                {
                    "Att41": 1,
                    "Att42": "A",
                    "Att43": null
                }
            ],
            ... (more properties)
            "Price": "30",
            "Description": "I m a ticket"
        }
    ]
}
Each item in the list has around 25-30 properties (some simple types, others complex arrays of nested objects).
I have to read the object from an API endpoint and extract only "Id" and "Description", but the items need to be sorted by "Price", which is, say, an int.
In what order should I proceed with this data manipulation?
Should I deserialize the JSON into another object with just the two properties I need, and THEN sort ascending on "Price"?
Please note that after I have the sorted list, I will have to convert it back to a JSON list, because the front end consumes JSON after all.
What I don't like about this approach is the cycle of serialization and deserialization that it entails.
Or:
Should I sort the JSON object first (using, for example, a binary/bubble sort) and then create a strongly typed (deserialized) object with just those two properties, and serialize that back to pass to the front end?
I don't know how performant the bubble sort will be, and whether I would gain any performance at all for large chunks of data.
I also need to keep in mind that this implementation should be able to take other properties into account, such as "availabilitydate", because at a later date the front end could add another filter, like "availabilitydate" ascending.
Any help is much appreciated.
Thanks.
You can deserialize your JSON string (or file) using the Microsoft System.Web.Extensions assembly and the JavaScriptSerializer class.
First, you must have classes corresponding to your JSON. To create them, copy your JSON sample data and, in Visual Studio, go to Edit / Paste Special / Paste JSON As Classes.
Next, use this sample to deserialize the JSON string to typed objects and to sort all Tickets by the Price property using LINQ:
String json = System.IO.File.ReadAllText(@"C:\Data.json");
var root = new System.Web.Script.Serialization.JavaScriptSerializer().Deserialize<Rootobject>(json);
var sortedTickets = root.Tickets.OrderBy(t => t.Price);
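For completeness, a minimal end-to-end sketch; the class shapes mirror what Paste JSON As Classes would generate, trimmed to the relevant fields, so the names here are illustrative. Note that since Price arrives as a JSON string, parsing it to an int keeps the sort numeric rather than lexicographic ("100" would otherwise sort before "20"):

using System;
using System.Linq;
using System.Web.Script.Serialization; // reference System.Web.Extensions

public class Rootobject
{
    public string Artist { get; set; }
    public Ticket[] Tickets { get; set; }
}

public class Ticket
{
    public int Id { get; set; }
    public string Price { get; set; }       // a string in the JSON
    public string Description { get; set; }
    // The other 25-30 properties can be omitted; JSON members with no
    // matching property are simply ignored during deserialization.
}

class Program
{
    static void Main()
    {
        string json = System.IO.File.ReadAllText(@"C:\Data.json");
        var serializer = new JavaScriptSerializer();
        var root = serializer.Deserialize<Rootobject>(json);

        // Sort numerically by Price, then project down to the two
        // fields the front end needs.
        var sorted = root.Tickets
            .OrderBy(t => int.Parse(t.Price))
            .Select(t => new { t.Id, t.Description });

        // Serialize the trimmed, sorted list back to JSON.
        Console.WriteLine(serializer.Serialize(sorted));
    }
}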