I have the following string, which is a MongoDB query stored in a string:
s = "query : {'$and': [{'$or': [{'Component': 'pfr'}, {'Component': 'ng-pfr'}, {'Component': 'common-flow-table'}, {'Component': 'media-mon'}]}, {'Submitted-on': {'$gte': datetime.datetime(2016, 2, 21, 0, 0)}}, {'Submitted-on': {'$lte': datetime.datetime(2016, 2, 28, 0, 0)}}]}"
How can I convert it into a dict or JSON in Python?
Your format is not standard JSON, so you need a hack to parse it.
import json

s = " query : {'names' :['abc','xyz'],'location':'India'}"
# split off the key before the first colon
key, value = s.strip().split(':', 1)
# JSON requires double quotes, so swap the single quotes
r = value.replace("'", '"')
data = {
    key.strip(): json.loads(r)
}
From your comment: the datetime values cause problems. Then I present to you the hack of hacks: the eval function.
import datetime  # so eval can resolve the datetime.datetime(...) calls in the real query

s = " query : {'names' :['abc','xyz'],'location':'India'}"
key, value = s.strip().split(':', 1)
# we can leave out the replacing; single quotes are fine for eval
data = {
    key.strip(): eval(value)
}
NB: eval, especially on unsanitized input, is very unsafe.
NB: hacks like these will break eventually, in the first case for example when a value or key contains a quote character.
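If you do resort to eval, you can at least narrow what it can touch by supplying restricted globals. A minimal sketch, assuming the query only needs the datetime module (the shortened query string here is just for illustration):

import datetime

s = "query : {'Submitted-on': {'$gte': datetime.datetime(2016, 2, 21, 0, 0)}}"

key, value = s.strip().split(':', 1)

# Empty __builtins__ plus an explicit whitelist of names the query may use.
# This reduces the attack surface, but eval is still not safe on hostile input.
data = {
    key.strip(): eval(value, {'__builtins__': {}}, {'datetime': datetime})
}
print(data)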
I have made a new table with three columns
customer_id, media_urls, survey_taste
in a database in pgAdmin, with the types
int, text[], jsonb
respectively.
I have a CSV that I was trying to import into this table using pgAdmin, and the contents of that file look like this:
1,"{'http://example.com','http://example.com'}","{'taste':[1,2,3,4]}"
but I am getting this error
ERROR: invalid input syntax for type json
DETAIL: Token "'" is invalid.
CONTEXT: JSON data, line 1: '...
COPY survey_taste, line 2, column survey_taste: "{'taste': [-0.19101654669350904, 0.08575981750112513, 0.07133783942655376, -0.10579014363010293, 0.0..."
To address your comments in reverse order: to have this entered in one field, you would need to have it as:
'[{"http":"abc","http":"abc"},{"taste":[1,2,3,4]}]'
Per:
select '[{"http":"abc","http":"abc"},{"taste":[1,2,3,4]}]'::json;
json
---------------------------------------------------
[{"http":"abc","http":"abc"},{"taste":[1,2,3,4]}]
As to the quoting issue:
When you pass a dict to the csv writer you will get its Python string representation:
d = {"taste": [1, 2, 3, 4]}
print(d)
{'taste': [1, 2, 3, 4]}
What you need is:
import json
json.dumps(d)
'{"test": [1, 2, 3, 4]}'
Using json.dumps will turn the dict into a proper JSON string representation.
Putting it all together:
# Create a list of dicts
# (note the duplicate 'http' key in the first dict: Python keeps only one entry)
l = [{'http': 'abc', 'http': 'abc'}, {'taste': [1, 2, 3, 4]}]
# Create the JSON string representation
json.dumps(l)
'[{"http": "abc"}, {"taste": [1, 2, 3, 4]}]'
I would like to parse big CSV files in ABAP in the most performant way under the following conditions:
We do not know the structure of the CSV, so the parse result should be a table of string_table or something similar
The parsing should happen in accordance with https://www.rfc-editor.org/rfc/rfc4180
No solution-specific calls
I found a very nice blog, https://blogs.sap.com/2014/09/09/understanding-csv-files-and-their-handling-in-abap/, but each of its suggestions has shortcomings:
Write your own code - the code example is not sufficient
Read the file using KCD_CSV_FILE_TO_INTERN_CONVERT - solution-specific (not available everywhere) and will dump on sufficiently long fields
Use RTTI and dynamic programming along with FM RSDS_CONVERT_CSV - we do not know the structure in advance
Use class CL_RSDA_CSV_CONVERTER - we do not know the structure in advance
I also checked the first available solution on GitHub - https://github.com/thedoginthewok/ZwdCSV . Unfortunately, it has macros in the code (absolutely unacceptable) and also requires you to know the structure in advance.
I also tried to use regular expressions to do the job, but on big files this is too slow.
Even though I am extremely annoyed by this fact, I had to create a solution myself (I cannot believe that I actually did it; it should be in the standard...).
My first solution was a direct copy-paste of Java code into ABAP (https://mkyong.com/java/how-to-read-and-parse-csv-file-in-java/). Unfortunately, as my other question How to iterate over string characters in ABAP in performant way? showed, it is not as easy to iterate over a string in ABAP as it is in Java.
I then tried a split/count approach, and so far it has the best performance. Does anyone know a better way to achieve this?
REPORT z_csv_test.
CLASS lcl_csv_parser DEFINITION CREATE PRIVATE.
PUBLIC SECTION.
TYPES:
tt_string_matrix TYPE STANDARD TABLE OF string_table WITH EMPTY KEY.
CLASS-METHODS:
create
IMPORTING
!iv_delimiter TYPE string DEFAULT '"'
!iv_separator TYPE string DEFAULT ','
!iv_line_separator TYPE abap_cr_lf DEFAULT cl_abap_char_utilities=>cr_lf
RETURNING
VALUE(r_result) TYPE REF TO lcl_csv_parser.
METHODS:
parse
IMPORTING
iv_string TYPE string
RETURNING
VALUE(r_result) TYPE tt_string_matrix,
constructor
IMPORTING
!iv_delimiter TYPE string
!iv_separator TYPE string
!iv_line_separator TYPE string.
PROTECTED SECTION.
PRIVATE SECTION.
DATA:
gv_delimiter TYPE string,
gv_separator TYPE string,
gv_line_separator TYPE string,
gv_escaped_delimiter TYPE string.
METHODS parse_line_to_string_table
IMPORTING
iv_line TYPE string
RETURNING
VALUE(r_result) TYPE string_table.
ENDCLASS.
CLASS lcl_csv_parser IMPLEMENTATION.
METHOD create.
r_result = NEW #(
iv_delimiter = iv_delimiter
iv_line_separator = CONV #( iv_line_separator )
iv_separator = iv_separator ).
ENDMETHOD.
METHOD constructor.
me->gv_delimiter = iv_delimiter.
me->gv_separator = iv_separator.
me->gv_line_separator = iv_line_separator.
me->gv_escaped_delimiter = |{ iv_delimiter }{ iv_delimiter }|.
ENDMETHOD.
METHOD parse.
"get the lines
SPLIT iv_string AT me->gv_line_separator INTO TABLE DATA(lt_lines).
DATA lx_open_line TYPE abap_bool VALUE abap_false.
DATA lv_current_line TYPE string.
LOOP AT lt_lines ASSIGNING FIELD-SYMBOL(<ls_line>).
FIND ALL OCCURRENCES OF me->gv_delimiter IN <ls_line> IN CHARACTER MODE MATCH COUNT DATA(lv_count).
IF ( lv_count MOD 2 ) = 1.
IF lx_open_line = abap_true.
lv_current_line = |{ lv_current_line }{ me->gv_line_separator }{ <ls_line> }|.
lx_open_line = abap_false.
APPEND parse_line_to_string_table( lv_current_line ) TO r_result.
ELSE.
lv_current_line = <ls_line>.
lx_open_line = abap_true.
ENDIF.
ELSE.
IF lx_open_line = abap_true.
lv_current_line = |{ lv_current_line }{ me->gv_line_separator }{ <ls_line> }|.
ELSE.
APPEND parse_line_to_string_table( <ls_line> ) TO r_result.
ENDIF.
ENDIF.
ENDLOOP.
ENDMETHOD.
METHOD parse_line_to_string_table.
SPLIT iv_line AT me->gv_separator INTO TABLE DATA(lt_line).
DATA lx_open_field TYPE abap_bool VALUE abap_false.
DATA lv_current_field TYPE string.
LOOP AT lt_line ASSIGNING FIELD-SYMBOL(<ls_field>).
FIND ALL OCCURRENCES OF me->gv_delimiter IN <ls_field> IN CHARACTER MODE MATCH COUNT DATA(lv_count).
IF ( lv_count MOD 2 ) = 1.
IF lx_open_field = abap_true.
lv_current_field = |{ lv_current_field }{ me->gv_separator }{ <ls_field> }|.
lx_open_field = abap_false.
APPEND lv_current_field TO r_result.
ELSE.
lv_current_field = <ls_field>.
lx_open_field = abap_true.
ENDIF.
ELSE.
IF lx_open_field = abap_true.
lv_current_field = |{ lv_current_field }{ me->gv_separator }{ <ls_field> }|.
ELSE.
APPEND <ls_field> TO r_result.
ENDIF.
ENDIF.
ENDLOOP.
REPLACE ALL OCCURRENCES OF me->gv_escaped_delimiter IN TABLE r_result WITH me->gv_delimiter.
ENDMETHOD.
ENDCLASS.
CLASS lcl_test_csv_parser DEFINITION
FINAL
CREATE PUBLIC .
PUBLIC SECTION.
CLASS-METHODS run.
CLASS-METHODS get_file
RETURNING VALUE(r_result) TYPE string.
PROTECTED SECTION.
PRIVATE SECTION.
ENDCLASS.
CLASS lcl_test_csv_parser IMPLEMENTATION.
METHOD get_file.
DATA lv_file_line TYPE string.
DO 10 TIMES.
lv_file_line = |"1234,{ cl_abap_char_utilities=>cr_lf }567890",{ lv_file_line }|.
ENDDO.
lv_file_line = lv_file_line && cl_abap_char_utilities=>cr_lf.
DATA(lt_file_as_table) = VALUE string_table(
FOR i = 1 THEN i + 1 UNTIL i = 1000000
( lv_file_line ) ).
CONCATENATE LINES OF lt_file_as_table INTO r_result.
ENDMETHOD.
METHOD run.
DATA lv_prepare_start TYPE timestampl.
GET TIME STAMP FIELD lv_prepare_start.
DATA(lv_file) = get_file( ).
DATA lv_prepare_end TYPE timestampl.
GET TIME STAMP FIELD lv_prepare_end.
WRITE |Preparation took { cl_abap_tstmp=>subtract( tstmp1 = lv_prepare_end tstmp2 = lv_prepare_start ) }|.
DATA lv_parse_start TYPE timestampl.
GET TIME STAMP FIELD lv_parse_start.
DATA(lo_parser) = lcl_csv_parser=>create( ).
DATA(lt_file) = lo_parser->parse( lv_file ).
DATA lv_parse_end TYPE timestampl.
GET TIME STAMP FIELD lv_parse_end.
WRITE |Parse took { cl_abap_tstmp=>subtract( tstmp1 = lv_parse_end tstmp2 = lv_parse_start ) }|.
ENDMETHOD.
ENDCLASS.
START-OF-SELECTION.
lcl_test_csv_parser=>run( ).
I'd like to present a different approach that uses find heavily. Compared to your line-based approach it seems to have equivalent performance for unquoted fields, but it performs slightly better if quoted fields are present:
In general, this uses the pattern position = find( off = position + 1 ) to iterate over the string in chunks, and then uses substring to copy ranges into strings. What can be observed here is that in a loop that iterates a million times, every nanosecond saved has an impact on performance, so moving as much work as possible out of the inner loop increases performance significantly. For the "simple" case of 10-digit fields both algorithms perform equally well, but for "longer" 30-digit fields your algorithm gets faster in comparison. For fields with quotes, the scan & concat approach I've used seems to be faster than the "reconstruct" approach.
I guess that although one can achieve small gains through more clever ABAP, further significant optimizations are only possible by utilizing the engine even more.
Anyway, here's the algorithm:
CLASS lcl_csv_parser_find IMPLEMENTATION.
METHOD parse.
DATA line TYPE string_table.
DATA position TYPE i.
DATA(string_length) = strlen( i_string ).
" Dereferencing member fields is slightly slower than variable access, in a close loop this matters
DATA(separators) = me->separators.
DATA(delimiter) = me->delimiter.
CHECK string_length <> 0.
" Checking for delimiters in the DO loop is quite slow.
" By scanning the whole file once and skipping that check if no delimiter is present
" This lead to a slight performance increase of 1s for 1 million rows
DATA(next_delimiter) = find( val = i_string sub = delimiter ).
DO.
DATA(start_position) = position.
DATA(field) = ``.
" Check if field is enclosed in double quotes, as we need to unescape then
IF next_delimiter <> -1 AND i_string+position(1) = delimiter.
start_position = start_position + 1. " literal starts after opening quote
DO.
position = find( val = i_string off = position + 1 sub = delimiter ).
" literal must be closed
" ASSERT position <> -1.
DATA(subliteral_length) = position - start_position.
field = field && substring( val = i_string off = start_position len = subliteral_length ).
DATA(following_position) = position + 1.
IF position = string_length OR i_string+following_position(1) <> delimiter.
" End of literal is reached
position = position + 1. " skip closing quote
EXIT. " DO
ELSE.
" Found escape quote instead
position = following_position + 1.
field = field && me->delimiter.
" continue searching
ENDIF.
" ASSERT sy-index < 1000.
ENDDO.
ELSE.
" Unescaped field, simply find the ending comma or newline
position = find_any_of( val = i_string off = position + 1 sub = separators ).
IF position = -1.
position = string_length.
ENDIF.
field = substring( val = i_string off = start_position len = position - start_position ).
ENDIF.
APPEND field TO line.
" Check if line ended and new line is started
DATA(current) = substring( val = i_string off = position len = 2 ).
IF current = me->line_separator.
APPEND line TO r_result.
CLEAR line.
position = position + 2. " skip newline
ELSE.
" ASSERT i_string+position(1) = me->separator.
position = position + 1.
ENDIF.
" Check if file ended
IF position >= string_length.
RETURN.
ENDIF.
" ASSERT sy-index < 100000001.
ENDDO.
ENDMETHOD.
ENDCLASS.
As a side note: instead of creating a huge table of string fields as stated in condition #1, I would experiment with some kind of "visitor pattern", e.g. pass an instance of such an interface to the parser:
INTERFACE if_csv_visitor.
METHODS begin_line.
METHODS end_line.
METHODS visit_field
IMPORTING
i_field TYPE string.
ENDINTERFACE.
In a lot of cases you'll write the CSV fields into a structure anyway, so one can save allocating this quite large table.
And for further reference, here's the whole report:
*&---------------------------------------------------------------------*
*& Report Z_CSV
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
REPORT Z_CSV.
* --------------------- Generic CSV Parser ----------------------------*
CLASS lcl_csv_parser DEFINITION ABSTRACT.
PUBLIC SECTION.
TYPES:
t_string_matrix TYPE STANDARD TABLE OF string_table WITH EMPTY KEY.
METHODS:
parse ABSTRACT
IMPORTING
i_string TYPE string
RETURNING
VALUE(r_result) TYPE t_string_matrix,
constructor
IMPORTING
i_delimiter TYPE string DEFAULT '"'
i_separator TYPE string DEFAULT ','
i_line_separator TYPE abap_cr_lf DEFAULT cl_abap_char_utilities=>cr_lf.
PROTECTED SECTION.
DATA:
delimiter TYPE string,
separator TYPE string,
line_separator TYPE string,
escaped_delimiter TYPE string,
separators TYPE string.
ENDCLASS.
CLASS lcl_csv_parser IMPLEMENTATION.
METHOD constructor.
me->delimiter = i_delimiter.
me->separator = i_separator.
me->line_separator = i_line_separator.
me->escaped_delimiter = |{ i_delimiter }{ i_delimiter }|.
me->separators = i_separator && i_line_separator.
ENDMETHOD.
ENDCLASS.
* --------------------------- Line based CSV Parser ------------------------ *
CLASS lcl_csv_parser_line DEFINITION INHERITING FROM lcl_csv_parser.
PUBLIC SECTION.
METHODS parse REDEFINITION.
PRIVATE SECTION.
METHODS parse_line_to_string_table
IMPORTING
i_line TYPE string
RETURNING
VALUE(r_result) TYPE string_table.
ENDCLASS.
CLASS lcl_csv_parser_line IMPLEMENTATION.
METHOD parse.
"get the lines
SPLIT i_string AT me->line_separator INTO TABLE DATA(lines).
DATA open_line TYPE abap_bool VALUE abap_false.
DATA current_line TYPE string.
LOOP AT lines ASSIGNING FIELD-SYMBOL(<line>).
FIND ALL OCCURRENCES OF me->delimiter IN <line> IN CHARACTER MODE MATCH COUNT DATA(count).
IF ( count MOD 2 ) = 1.
IF open_line = abap_true.
current_line = |{ current_line }{ me->line_separator }{ <line> }|.
open_line = abap_false.
APPEND parse_line_to_string_table( current_line ) TO r_result.
ELSE.
current_line = <line>.
open_line = abap_true.
ENDIF.
ELSE.
IF open_line = abap_true.
current_line = |{ current_line }{ me->line_separator }{ <line> }|.
ELSE.
APPEND parse_line_to_string_table( <line> ) TO r_result.
ENDIF.
ENDIF.
ENDLOOP.
ENDMETHOD.
METHOD parse_line_to_string_table.
SPLIT i_line AT me->separator INTO TABLE DATA(fields).
DATA open_field TYPE abap_bool VALUE abap_false.
DATA current_field TYPE string.
LOOP AT fields ASSIGNING FIELD-SYMBOL(<field>).
FIND ALL OCCURRENCES OF me->delimiter IN <field> IN CHARACTER MODE MATCH COUNT DATA(count).
IF ( count MOD 2 ) = 1.
IF open_field = abap_true.
current_field = |{ current_field }{ me->separator }{ <field> }|.
open_field = abap_false.
APPEND current_field TO r_result.
ELSE.
current_field = <field>.
open_field = abap_true.
ENDIF.
ELSE.
IF open_field = abap_true.
current_field = |{ current_field }{ me->separator }{ <field> }|.
ELSE.
APPEND <field> TO r_result.
ENDIF.
ENDIF.
ENDLOOP.
REPLACE ALL OCCURRENCES OF me->escaped_delimiter IN TABLE r_result WITH me->delimiter.
ENDMETHOD.
ENDCLASS.
*--------------- Find based CSV Parser ------------------------------------*
CLASS lcl_csv_parser_find DEFINITION INHERITING FROM lcl_csv_parser.
PUBLIC SECTION.
METHODS parse REDEFINITION.
ENDCLASS.
CLASS lcl_csv_parser_find IMPLEMENTATION.
METHOD parse.
DATA line TYPE string_table.
DATA position TYPE i.
DATA(string_length) = strlen( i_string ).
" Dereferencing member fields is slightly slower than variable access, in a close loop this matters
DATA(separators) = me->separators.
DATA(delimiter) = me->delimiter.
CHECK string_length <> 0.
" Checking for delimiters in the DO loop is quite slow.
" By scanning the whole file once and skipping that check if no delimiter is present
" This lead to a slight performance increase of 1s for 1 million rows
DATA(next_delimiter) = find( val = i_string sub = delimiter ).
DO.
DATA(start_position) = position.
DATA(field) = ``.
" Check if field is enclosed in double quotes, as we need to unescape then
IF next_delimiter <> -1 AND i_string+position(1) = delimiter.
start_position = start_position + 1. " literal starts after opening quote
DO.
position = find( val = i_string off = position + 1 sub = delimiter ).
" literal must be closed
" ASSERT position <> -1.
DATA(subliteral_length) = position - start_position.
field = field && substring( val = i_string off = start_position len = subliteral_length ).
DATA(following_position) = position + 1.
IF position = string_length OR i_string+following_position(1) <> delimiter.
" End of literal is reached
position = position + 1. " skip closing quote
EXIT. " DO
ELSE.
" Found escape quote instead
position = following_position + 1.
field = field && me->delimiter.
" continue searching
ENDIF.
" ASSERT sy-index < 1000.
ENDDO.
ELSE.
" Unescaped field, simply find the ending comma or newline
position = find_any_of( val = i_string off = position + 1 sub = separators ).
IF position = -1.
position = string_length.
ENDIF.
field = substring( val = i_string off = start_position len = position - start_position ).
ENDIF.
APPEND field TO line.
" Check if line ended and new line is started
DATA(current) = substring( val = i_string off = position len = 2 ).
IF current = me->line_separator.
APPEND line TO r_result.
CLEAR line.
position = position + 2. " skip newline
ELSE.
" ASSERT i_string+position(1) = me->separator.
position = position + 1.
ENDIF.
" Check if file ended
IF position >= string_length.
RETURN.
ENDIF.
" ASSERT sy-index < 100000001.
ENDDO.
ENDMETHOD.
ENDCLASS.
* -------------------- Tests -------------------------------------------------------- *
CLASS lcl_test_csv_parser DEFINITION
FINAL
CREATE PUBLIC .
PUBLIC SECTION.
CLASS-METHODS run.
CLASS-METHODS get_file_complex
RETURNING VALUE(r_result) TYPE string.
CLASS-METHODS get_file_simple
RETURNING VALUE(r_result) TYPE string.
CLASS-METHODS get_file_long
RETURNING VALUE(r_result) TYPE string.
CLASS-METHODS get_file_longer
RETURNING VALUE(r_result) TYPE string.
CLASS-METHODS get_file_mixed
RETURNING VALUE(r_result) TYPE string.
PROTECTED SECTION.
PRIVATE SECTION.
ENDCLASS.
CLASS lcl_test_csv_parser IMPLEMENTATION.
METHOD get_file_complex.
DATA(file_line) =
repeat( val = |"1234,{ cl_abap_char_utilities=>cr_lf }7890",| occ = 9 ) &&
|"1234,{ cl_abap_char_utilities=>cr_lf }7890"| &&
cl_abap_char_utilities=>cr_lf.
r_result = repeat( val = file_line occ = 1000000 ).
ENDMETHOD.
METHOD get_file_simple.
DATA(file_line) =
repeat( val = |1234567890,| occ = 9 ) &&
|1234567890| &&
cl_abap_char_utilities=>cr_lf.
r_result = repeat( val = file_line occ = 1000000 ).
ENDMETHOD.
METHOD get_file_long.
DATA(file_line) =
repeat( val = |12345678901234567890,| occ = 4 ) &&
|12345678901234567890| &&
cl_abap_char_utilities=>cr_lf.
r_result = repeat( val = file_line occ = 1000000 ).
ENDMETHOD.
METHOD get_file_longer.
DATA(file_line) =
repeat( val = |1234567890123456789012345678901234567890,| occ = 2 ) &&
|1234567890123456789012345678901234567890| &&
cl_abap_char_utilities=>cr_lf.
r_result = repeat( val = file_line occ = 1000000 ).
ENDMETHOD.
METHOD get_file_mixed.
DATA(file_line) =
|1234567890,1234567890,"1234,{ cl_abap_char_utilities=>cr_lf }7890",1234567890,1234567890,1234567890,"1234,{ cl_abap_char_utilities=>cr_lf }7890",1234567890,1234567890,1234567890| &&
cl_abap_char_utilities=>cr_lf.
r_result = repeat( val = file_line occ = 1000000 ).
ENDMETHOD.
METHOD run.
DATA prepare_start TYPE timestampl.
GET TIME STAMP FIELD prepare_start.
TYPES:
BEGIN OF t_file,
name TYPE string,
content TYPE string,
END OF t_file,
t_files TYPE STANDARD TABLE OF t_file WITH EMPTY KEY.
DATA(files) = VALUE t_files(
( name = `simple` content = get_file_simple( ) )
( name = `long` content = get_file_long( ) )
( name = `longer` content = get_file_longer( ) )
( name = `complex` content = get_file_complex( ) )
( name = `mixed` content = get_file_mixed( ) )
).
DATA prepare_end TYPE timestampl.
GET TIME STAMP FIELD prepare_end.
WRITE |Preparation took { cl_abap_tstmp=>subtract( tstmp1 = prepare_end tstmp2 = prepare_start ) }|. SKIP 2.
WRITE: 'File', 15 'Line Parse', 30 'Find Parse', 45 'Match'. NEW-LINE.
ULINE.
LOOP AT files INTO DATA(file).
WRITE file-name UNDER 'File'.
DATA line_start TYPE timestampl.
GET TIME STAMP FIELD line_start.
DATA(line_parser) = NEW lcl_csv_parser_line( ).
DATA(line_result) = line_parser->parse( file-content ).
DATA line_end TYPE timestampl.
GET TIME STAMP FIELD line_end.
WRITE |{ cl_abap_tstmp=>subtract( tstmp1 = line_end tstmp2 = line_start ) }s| UNDER 'Line Parse'.
DATA find_start TYPE timestampl.
GET TIME STAMP FIELD find_start.
DATA(find_parser) = NEW lcl_csv_parser_find( ).
DATA(find_result) = find_parser->parse( file-content ).
DATA find_end TYPE timestampl.
GET TIME STAMP FIELD find_end.
WRITE |{ cl_abap_tstmp=>subtract( tstmp1 = find_end tstmp2 = find_start ) }s| UNDER 'Find Parse'.
" WRITE COND #( WHEN line_result = find_result THEN 'yes' ELSE 'no') UNDER 'Match'.
NEW-LINE.
ENDLOOP.
ENDMETHOD.
ENDCLASS.
START-OF-SELECTION.
lcl_test_csv_parser=>run( ).
I'm trying to construct JSON with a string that contains "\n" in it, like this:
ver_str= 'Package ID: version_1234\nBuild\nnumber: 154\nBuilt\n'
proj_ver_str = 'Version_123'
comb = '{"r_content": {0}, "s_version": {1}}'.format(ver_str,proj_ver_str)
json_content = json.loads()
d =json.dumps(json_content )
I'm getting this error:
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Dev/python/new_tester/simple_main.py", line 18, in <module>
comb = '{"r_content": {0}, "s_version": {1}}'.format(ver_str,proj_ver_str)
KeyError: '"r_content"'
The error arises not because of newlines in your values, but because of { and } characters in your format string other than the placeholders {0} and {1}. If you want to have an actual { or a } character in your string, double them.
Try replacing the line
comb = '{"r_content": {0}, "s_version": {1}}'.format(ver_str,proj_ver_str)
with
comb = '{{"r_content": {0}, "s_version": {1}}}'.format(ver_str,proj_ver_str)
However, this will give you a different error on the next line, loads() missing 1 required positional argument: 's'. This is because you presumably forgot to pass comb to json.loads().
Replacing json.loads() with json.loads(comb) gives you another error: json.decoder.JSONDecodeError: Expecting value: line 1 column 15 (char 14). This tells you that you've given json.loads malformed JSON to parse. If you print out the value of comb, you see the following:
{"r_content": Package ID: version_1234
Build
number: 154
Built
, "s_version": Version_123}
This isn't valid JSON, because the string values aren't surrounded by quotes. So a JSON parsing error is to be expected.
At this point, let's take a look at what your code is doing and what you seem to want it to do. It seems you want to construct a JSON string from your data, but your code puts together a JSON string, parses it to a dict, and then formats it back as a JSON string.
If you want to create a JSON string from your data, it's far simpler to create a dict with your values and use json.dumps on that:
d = json.dumps({"r_content": ver_str, "s_version": proj_ver_str})
I'm facing an issue while fetching keys and values from the data using regular expressions when the JSON contains \ and " characters.
{
"KeyOne":"Value One",
"KeyTwo": "Value \\ two",
"KeyThree": "Value \" Three",
"KeyFour": "ValueFour\\"
}
This is sample data; from it I want to read the keys and values. How can I achieve this with regular expressions?
Note: I'm deserializing this JSON data on the server side (SAP ABAP).
On releases earlier than 7.2 (from memory) you can use the class /ui2/cl_json.
If on 7.3 or later, use the kernel sXML classes (used in the sample code below), which support JSON. They are orders of magnitude faster than /ui2/cl_json.
You can use the identity transformation approach where the source structure is known and you can create that structure in ABAP, or where it already has an ABAP equivalent defined. Otherwise, just traverse the JSON document.
The example string was easily parsed.
EDIT: Add sample code
REPORT zjsondemo.
CLASS lcl DEFINITION CREATE PUBLIC.
PUBLIC SECTION.
METHODS json_stru_known.
METHODS json_stru_traverse.
ENDCLASS.
CLASS lcl IMPLEMENTATION.
METHOD json_stru_known.
DATA l_src_json TYPE string.
DATA l_mara TYPE mara.
WRITE: / 'DEMO 1 Known structure Identity transformation '.
l_src_json = `{"MARA":{"MATNR":"012345678", "MATKL": "DUMMY" }}`.
WRITE: / 'Convert to MARA -> ', l_src_json.
CALL TRANSFORMATION id SOURCE XML l_src_json
RESULT mara = l_mara. "
WRITE: / 'MARA - MATNR', l_mara-matnr,
/ ' MATKL', l_mara-matkl.
TYPES:
BEGIN OF lty_foo_bar,
KeyOne TYPE string,
KeyTwo TYPE string,
KeyThree TYPE string,
KeyFour TYPE string,
END OF lty_foo_bar.
DATA:
lv_json_string TYPE string,
ls_data TYPE lty_foo_bar.
" in this example we use upper case attribute names
"because we map to SAP target
" structure which has upper case names.
" if you need lowercase variables then you can not map straight to an
" SAP type. Then you need to use the traverse technique. See example 2
lv_json_string = |\{| &&
|"KEYONE":"Value One",| &&
|"KEYTWO": "Value \\\\ two", | &&
|"KEYTHREE": "Value \\" Three", | &&
|"KEYFOUR": "ValueFour\\\\" | &&
|\}|.
lv_json_string = `{"JUNK":` && lv_json_string && `}`.
CALL TRANSFORMATION id SOURCE XML lv_json_string
RESULT junk = ls_data. "
WRITE: / ls_data-keyone, ls_data-keytwo, ls_data-keythree, ls_data-keyfour.
ENDMETHOD.
METHOD json_stru_traverse.
DATA l_src_json TYPE string.
DATA: lo_node TYPE REF TO if_sxml_node.
DATA: lif_element TYPE REF TO if_sxml_open_element,
lif_element_close TYPE REF TO if_sxml_close_element,
lif_value_node TYPE REF TO if_sxml_value,
l_val TYPE string,
l_attr TYPE if_sxml_attribute=>attributes,
l_att_val TYPE string.
FIELD-SYMBOLS: <attr> LIKE LINE OF l_attr.
WRITE: / 'DEMO 2 Traverse any json document'.
l_src_json = `{"MATNR":"012345678", "MATKL": "DUMMY", "SOMENODE": "With this value" }`.
WRITE: / 'Parse as JSON with 3 nodes -> ', l_src_json.
DATA(reader) = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( l_src_json ) ).
lo_node = reader->read_next_node( ). " {
IF lo_node IS INITIAL.
EXIT.
ENDIF.
DO 3 TIMES.
lif_element ?= reader->read_next_node( ).
l_attr = lif_element->get_attributes( ).
LOOP AT l_attr ASSIGNING <attr>.
l_att_val = <attr>->get_value( ).
WRITE: / 'Attribute:', l_att_val.
ENDLOOP.
lif_value_node ?= reader->read_next_node( ).
l_val = lif_value_node->get_value( ).
WRITE: '=>', l_val.
lif_element_close ?= reader->read_next_node( ).
ENDDO.
ENDMETHOD.
ENDCLASS.
START-OF-SELECTION.
DATA lo_lcl TYPE REF TO lcl.
CREATE OBJECT lo_lcl.
lo_lcl->json_stru_known( ).
lo_lcl->json_stru_traverse( ).
The SAP system is supplied with many example programs.
Search for demo*json.
See also the SAP documentation on JSON parsing.
As #mrzasa and #joanis said in their comments: Do not use RegEx to parse JSON!
For small objects or when performance is not a concern, you can use /ui2/cl_json:
TYPES:
BEGIN OF lty_foo_bar,
KeyOne TYPE string,
KeyTwo TYPE string,
KeyThree TYPE string,
KeyFour TYPE string,
END OF lty_foo_bar.
DATA:
lv_json_string TYPE string,
ls_data TYPE lty_foo_bar.
lv_json_string = |\{| &&
|"KeyOne":"Value One",| &&
|"KeyTwo": "Value \\\\ two", | &&
|"KeyThree": "Value \\" Three", | &&
|"KeyFour": "ValueFour\\\\" | &&
|\}|.
/ui2/cl_json=>deserialize(
EXPORTING
json = lv_json_string
CHANGING
data = ls_data ).
ls_data-KeyOne contains 'Value One' and so on.
For larger objects and/or better performance, check the sXML approach from #phil soady's answer above. The correct handling of upper- and lowercase letters still causes headaches in ABAP anyway.
So I have the following dictionaries that I get by parsing a text file:
keys = ["scientific name", "common names", "colors"]
values = ["somename1", ["name11", "name12"], ["color11", "color12"]]
keys = ["scientific name", "common names", "colors"]
values = ["somename2", ["name21", "name22"], ["color21", "color22"]]
and so on. I am dumping the key-value pairs to a JSON file via a dictionary, using a for loop where I go through each keys/values pair one by one:
for loop starts
d = dict(zip(keys, values))
with open("file.json", 'a') as j:
json.dump(d, j)
If I open the saved JSON file, I see the following contents:
{"scientific name": "somename1", "common names": ["name11", "name12"], "colors": ["color11", "color12"]}{"scientific name": "somename2", "common names": ["name21", "name22"], "colors": ["color21", "color22"]}
Is this the right way to do it?
The purpose is to query the common name or colors for a given scientific name. So then I do
with open("file.json", "r") as j:
data = json.load(j)
I get the error json.decoder.JSONDecodeError: Extra data.
I think this is because I am not dumping the dictionaries to JSON correctly in the for loop. I probably have to insert some square brackets programmatically; just doing json.dump(d, j) won't suffice.
JSON may only have one root element. This root element can be [], {} or most other datatypes.
In your file, however, you get multiple root elements:
{...}{...}
This isn't valid JSON, and the error Extra data refers to the second {}, where valid JSON would end instead.
You can write multiple dicts to a JSON string, but you need to wrap them in an array:
[{...},{...}]
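A quick illustration with toy strings (the values here are made up):

import json

print(json.loads('[{"a": 1}, {"b": 2}]'))  # one root element (an array): works

try:
    json.loads('{"a": 1}{"b": 2}')         # two root elements: invalid
except json.JSONDecodeError as e:
    print(e)                               # Extra data: line 1 column 9 (char 8)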
Now on to how I would fix your code. First, I rewrote what you posted, because it was rather pseudo-code and didn't run directly.
import json
inputs = [(["scientific name", "common names", "colors"],
["somename1", ["name11", "name12"], ["color11", "color12"]]),
(["scientific name", "common names", "colors"],
["somename2", ["name21", "name22"], ["color21", "color22"]])]
for keys, values in inputs:
d = dict(zip(keys, values))
with open("file.json", 'a') as j:
json.dump(d, j)
with open("file.json", 'r') as j:
print(json.load(j))
As you correctly realized, this code fails with
json.decoder.JSONDecodeError: Extra data: line 1 column 105 (char 104)
The way I would write it is:
import json
inputs = [(["scientific name", "common names", "colors"],
["somename1", ["name11", "name12"], ["color11", "color12"]]),
(["scientific name", "common names", "colors"],
["somename2", ["name21", "name22"], ["color21", "color22"]])]
jsonData = list()
for keys, values in inputs:
d = dict(zip(keys, values))
jsonData.append(d)
with open("file.json", 'w') as j:
json.dump(jsonData, j)
with open("file.json", 'r') as j:
print(json.load(j))
Also, for Python's json library, it is important that you write the entire JSON file in one go, meaning with 'w' instead of 'a'.
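Since the stated purpose is to query the common names or colors for a given scientific name, it may be convenient to index the loaded list by that key. A small sketch building on the file written above:

import json

with open("file.json", "r") as j:
    data = json.load(j)  # a list of dicts, as written above

# Index the records by scientific name for direct lookup
by_name = {record["scientific name"]: record for record in data}

print(by_name["somename1"]["common names"])  # ['name11', 'name12']
print(by_name["somename2"]["colors"])        # ['color21', 'color22']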