I want to create an endpoint that receives JSON data and parses it as an array of strings.
POST /
{
  "keys": ["foo", "bar"]
}
I'm running into problems with the type system. Here is what I tried (note the .as(Array(String)) cast), but it does not compile:
require "kemal"
def print_keys(keys : Array(String))
puts "Got keys: #{keys}"
end
post "/" do |env|
keys = env.params.json["keys"].as(Array(String)) # <-- ERROR
print_keys(keys)
end
Kemal.run
The error message is:
8 | keys = env.params.json["keys"].as(Array(String)) # <-- ERROR
^
Error: can't cast (Array(JSON::Any) | Bool | Float64 | Hash(String, JSON::Any) | Int64 | String | Nil) to Array(String)
If I change the code to cast to String instead of Array(String), it compiles without problems. Why does it make a difference to the .as method whether the type is Array(String) or String?
How can the code be changed to parse arrays of strings?
I found an example in the documentation, which uses JSON.mapping. In my concrete example, it could be written as follows:
require "kemal"
def print_keys(keys : Array(String))
puts "Got keys: #{keys}"
end
class KeyMappings
JSON.mapping({
keys: Array(String)
})
end
post "/" do |env|
json = KeyMappings.from_json env.request.body.not_nil!
print_keys(json.keys)
end
Kemal.run
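For reference, the union in the error message does contain Array(JSON::Any), and .as can only cast to a type that could actually occur in the value's union, which is why String compiles but Array(String) does not. A minimal sketch of the cast-and-convert route (assuming env.params.json yields values of exactly the union type shown in the error):

post "/" do |env|
  # Cast to Array(JSON::Any), which is in the union, then
  # convert each JSON::Any element to a String.
  keys = env.params.json["keys"].as(Array(JSON::Any)).map(&.as_s)
  print_keys(keys)
end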
I am new to textX. I am trying to create a grammar for defining data types whose fields can be of a simple type or of the type of another data type. The grammar description is:
Library:
    data_types *= DataType
;
DataType: name=ID "{"
    fields*=Field
"}" ;
Field: type=([DataType] | ID) name=ID;
//Type: [DataType] | ID;
An example of a model following this grammar would be:
vec {
    int64 a
    int64 b
    int64 c
}
matrix {
    vec a
    vec b
}
I want to link the type of the field to either a data type that is already declared, or to some simple string. However, when compiling the above grammar with textx generate dummy.tx --target dot, I get the error Error: None:9:13: error: Expected ''((\\')|[^'])*'' or '"((\\")|[^"])*"' or re_match or rule_ref or '[' at position dummy.tx:(9, 13) => 'eld: type=*([DataType'..
Is there any way to accomplish what I want? I have tried putting the type declaration in a separate rule, as seen in the commented-out line, but that did not help. Any suggestion or hint would be highly appreciated.
A standard approach is to use custom classes and create builtins for all types that are not created by the users themselves. It is best to show how it is done in code using your example. Note the use of registration, so that the language with registered builtins is available to the textx CLI command. Also see the Entity example, as the same technique is used there.
from textx import metamodel_from_str
from textx.registration import (language, register_language,
                                metamodel_for_language)

# We use registration support to register the language.
# This way it will be available to the textx CLI command.
@language('library', '.lib')
def library_lang():
    "Library language."
    grammar = r'''
    Library:
        data_types *= DataType
    ;
    DataType: name=ID "{"
        fields*=Field
    "}" ;
    Field: type=[Type] name=ID;
    Type: DataType | BuiltInType;
    BuiltInType: name=ID;
    '''

    # Here we use our own class for builtin types so we can
    # create our own instances of them.
    # See the textX Entity example for more.
    class BuiltInType:
        def __init__(self, parent, name):
            self.parent = parent
            self.name = name

    # Create all builtin instances.
    builtins = {
        'int64': BuiltInType(None, 'int64'),
    }

    return metamodel_from_str(grammar,
                              # Register custom classes and builtins.
                              classes=[BuiltInType],
                              builtins=builtins)

# This should really be done through setup.{cfg,py}.
# Here it is done through the registration API so that the
# example is self-contained.
register_language(library_lang)

model_str = r'''
vec {
    int64 a
    int64 b
    int64 c
}
matrix {
    vec a
    vec b
}
'''

# Now we can get the registered language metamodel by name and
# parse our model.
model = metamodel_for_language('library').model_from_str(model_str)

# ... do something with the model
assert len(model.data_types) == 2
assert model.data_types[0].name == 'vec'
assert model.data_types[0].fields[0].name == 'a'
assert model.data_types[0].fields[0].type.name == 'int64'
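If it helps to confirm that references to user-defined types resolve the same way, here are two extra assertions one could add to the check above (not from the original answer, but they follow from the grammar, since Field references resolve to the matching Type instance):

# The 'vec'-typed fields of 'matrix' resolve to the vec DataType itself.
assert model.data_types[1].name == 'matrix'
assert model.data_types[1].fields[0].type is model.data_types[0]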
I'm facing an issue while fetching keys and values from the data using regular expressions when the JSON contains \ and " characters.
{
"KeyOne":"Value One",
"KeyTwo": "Value \\ two",
"KeyThree": "Value \" Three",
"KeyFour": "ValueFour\\"
}
This is sample data; from it I want to read the keys and values. How can I achieve this with regular expressions?
Note: I'm deserializing this JSON data on the server side (SAP ABAP).
On releases earlier than 7.2 (from memory) you can use the class /ui2/cl_json.
If on 7.3 or later, use the kernel sXML writer, which supports JSON; it is orders of magnitude faster than /ui2/cl_json.
You can use the identity transformation approach where the source structure is known and you can create that structure in ABAP, or it already has an ABAP equivalent defined. Otherwise, just traverse the JSON document.
The example string was easily parsed.
EDIT: Add sample code
REPORT zjsondemo.

CLASS lcl DEFINITION CREATE PUBLIC.
  PUBLIC SECTION.
    METHODS json_stru_known.
    METHODS json_stru_traverse.
ENDCLASS.

CLASS lcl IMPLEMENTATION.
  METHOD json_stru_known.
    DATA l_src_json TYPE string.
    DATA l_mara TYPE mara.

    WRITE: / 'DEMO 1 Known structure Identity transformation'.
    l_src_json = `{"MARA":{"MATNR":"012345678", "MATKL": "DUMMY" }}`.
    WRITE: / 'Convert to MARA -> ', l_src_json.
    CALL TRANSFORMATION id SOURCE XML l_src_json
                           RESULT mara = l_mara.
    WRITE: / 'MARA - MATNR', l_mara-matnr,
           / '       MATKL', l_mara-matkl.

    TYPES:
      BEGIN OF lty_foo_bar,
        keyone   TYPE string,
        keytwo   TYPE string,
        keythree TYPE string,
        keyfour  TYPE string,
      END OF lty_foo_bar.

    DATA:
      lv_json_string TYPE string,
      ls_data        TYPE lty_foo_bar.

    " In this example we use upper-case attribute names because we
    " map to an SAP target structure, which has upper-case names.
    " If you need lower-case attribute names you cannot map straight
    " to an SAP type; use the traverse technique instead (example 2).
    lv_json_string = |\{| &&
      |"KEYONE":"Value One",| &&
      |"KEYTWO": "Value \\\\ two", | &&
      |"KEYTHREE": "Value \\" Three", | &&
      |"KEYFOUR": "ValueFour\\\\" | &&
      |\}|.
    lv_json_string = `{"JUNK":` && lv_json_string && `}`.
    CALL TRANSFORMATION id SOURCE XML lv_json_string
                           RESULT junk = ls_data.
    WRITE: / ls_data-keyone, ls_data-keytwo, ls_data-keythree, ls_data-keyfour.
  ENDMETHOD.

  METHOD json_stru_traverse.
    DATA l_src_json TYPE string.
    DATA: lo_node TYPE REF TO if_sxml_node.
    DATA: lif_element       TYPE REF TO if_sxml_open_element,
          lif_element_close TYPE REF TO if_sxml_close_element,
          lif_value_node    TYPE REF TO if_sxml_value,
          l_val             TYPE string,
          l_attr            TYPE if_sxml_attribute=>attributes,
          l_att_val         TYPE string.
    FIELD-SYMBOLS: <attr> LIKE LINE OF l_attr.

    WRITE: / 'DEMO 2 Traverse any JSON document'.
    l_src_json = `{"MATNR":"012345678", "MATKL": "DUMMY", "SOMENODE": "With this value" }`.
    WRITE: / 'Parse as JSON with 3 nodes -> ', l_src_json.

    DATA(reader) = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( l_src_json ) ).

    lo_node = reader->read_next_node( ). " {
    IF lo_node IS INITIAL.
      EXIT.
    ENDIF.

    DO 3 TIMES.
      lif_element ?= reader->read_next_node( ).
      l_attr = lif_element->get_attributes( ).
      LOOP AT l_attr ASSIGNING <attr>.
        l_att_val = <attr>->get_value( ).
        WRITE: / 'Attribute:', l_att_val.
      ENDLOOP.
      lif_value_node ?= reader->read_next_node( ).
      l_val = lif_value_node->get_value( ).
      WRITE: '=>', l_val.
      lif_element_close ?= reader->read_next_node( ).
    ENDDO.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA lo_lcl TYPE REF TO lcl.
  CREATE OBJECT lo_lcl.
  lo_lcl->json_stru_known( ).
  lo_lcl->json_stru_traverse( ).
The SAP system is supplied with many example programs. Search for demo*json.
See also the SAP documentation on JSON parsing.
As @mrzasa and @joanis said in their comments: do not use RegEx to parse JSON!
For small objects or when performance is not a concern, you can use /ui2/cl_json:
TYPES:
  BEGIN OF lty_foo_bar,
    KeyOne   TYPE string,
    KeyTwo   TYPE string,
    KeyThree TYPE string,
    KeyFour  TYPE string,
  END OF lty_foo_bar.

DATA:
  lv_json_string TYPE string,
  ls_data        TYPE lty_foo_bar.

lv_json_string = |\{| &&
  |"KeyOne":"Value One",| &&
  |"KeyTwo": "Value \\\\ two", | &&
  |"KeyThree": "Value \\" Three", | &&
  |"KeyFour": "ValueFour\\\\" | &&
  |\}|.

/ui2/cl_json=>deserialize(
  EXPORTING
    json = lv_json_string
  CHANGING
    data = ls_data ).
ls_data-KeyOne contains 'Value One' and so on.
For larger objects and/or better performance, check the sXML approach from @phil soady's answer above. The correct handling of upper- and lower-case letters still causes headaches in ABAP anyway.
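As a side note on the case handling: /ui2/cl_json also has a pretty_name option that maps between camelCase JSON attribute names and ABAP field names. A minimal sketch, reusing lv_json_string and ls_data from above (check the exact behaviour of the pretty_mode constants on your release):

/ui2/cl_json=>deserialize(
  EXPORTING
    json        = lv_json_string
    " pretty_mode-camel_case maps camelCase JSON names onto ABAP field names.
    pretty_name = /ui2/cl_json=>pretty_mode-camel_case
  CHANGING
    data        = ls_data ).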
I'm trying to send a Dict to JavaScript via a port, to store the value in localStorage and retrieve it the next time the Elm app starts via flags.
The code snippets below show the dict that is sent as well as the raw JSON value received through the flag. The JSON decoding fails with the error message shown at the bottom.
The issue seems to be the extra backslashes (as in \"{\\"Left\\") contained in the raw flag value. Interestingly, console.log shows that the flag value passed by JavaScript is "dict1:{"Left":"fullHeightVerticalCenter","Right":"fullHeightVerticalCenter","_default":"fullHeightVerticalBottom"}" as intended, so the extra backslashes seem to be added by Elm, but I can't figure out why. I'd also be interested in a better way to pass a dict to and from JavaScript.
import Json.Decode as JD
import Json.Encode as JE

dict1 =
    Dict.fromList
        [ ( "_default", "fullHeightVerticalBottom" )
        , ( "Left", "fullHeightVerticalCenter" )
        , ( "Right", "fullHeightVerticalCenter" )
        ]

type alias FlagsJEValue =
    { dict1 : String }

port setStorage : FlagsJEValue -> Cmd msg

-- inside the update function:
setStorage { dict1 = JE.encode 0 (dictEncoder JE.string model.dict1) }

dictEncoder enc dict =
    Dict.toList dict
        |> List.map (\( k, v ) -> ( k, enc v ))
        |> JE.object

--

type alias Flags =
    { dict1 : Dict String String }

flagsDecoder : Decoder Flags
flagsDecoder =
    JD.succeed Flags
        |> required "dict1" (JD.dict JD.string)

-- inside `init`:
case JD.decodeValue MyDecoders.flagsDecoder raw_flags of
    Err e ->
        let
            _ = Debug.log "raw flag value" (Debug.toString (JE.encode 2 raw_flags))
            _ = Debug.log "flags error msg" (Debug.toString e)
        in
        ... omitted ...

    Ok flags ->
        ... omitted ...
-- raw flag value
"{\n \"dict1\": \"{\\\"Left\\\":\\\"fullHeightVerticalCenter\\\",\\\"Right\\\":\\\"fullHeightVerticalCenter\\\",\\\"_default\\\":\\\"fullHeightVerticalBottom\\\"}\"\n}"
-- flags error msg
"Failure \"Json.Decode.oneOf failed in the following 2 ways:\\n\\n\\n\\n
(1) Problem with the given value:\\n \\n \\\"{\\\\\\\"Left\\\\\\\":\\\\\\\"fullHeightVerticalCenter\\\\\\\",\\\\\\\"Right\\\\\\\":\\\\\\\"fullHeightVerticalCenter\\\\\\\",\\\\\\\"_default\\\\\\\":\\\\\\\"fullHeightVerticalBottom\\\\\\\"}\\\"\\n \\n Expecting an OBJECT\\n\\n\\n\\n
(2) Problem with the given value:\\n \\n \\\"{\\\\\\\"Left\\\\\\\":\\\\\\\"fullHeightVerticalCenter\\\\\\\",\\\\\\\"Right\\\\\\\":\\\\\\\"fullHeightVerticalCenter\\\\\\\",\\\\\\\"_default\\\\\\\":\\\\\\\"fullHeightVerticalBottom\\\\\\\"}\\\"\\n \\n Expecting null\" <internals>"
You don't need to use JE.encode there.
You can just use your dictEncoder to produce a Json.Encode.Value and pass that directly to setStorage.
The problem you're encountering is that you've encoded the dict to a JSON string (using JE.encode) and then sent that string over a port, and the port has encoded that string as JSON again. You see extra backslashes because the JSON string is double-encoded.
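A minimal sketch of that change, reusing the dictEncoder and model.dict1 from the question:

port setStorage : JE.Value -> Cmd msg

-- inside update: pass the encoder output directly, without JE.encode
setStorage (dictEncoder JE.string model.dict1)

On the way back in, the flag then arrives as a real JSON object rather than a string, so required "dict1" (JD.dict JD.string) should decode it without any change.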
I've created a basic client and server that pass a string, which I've now changed to JSON. But the JSON string is only parsable before it gets sent through TCP. After it's sent, the string version is identical (after a chomp), but on the server side it no longer processes the JSON correctly. Here is some of my code (with other bits trimmed).
Some of the client code
require 'json'
require 'socket'

foo = {'a' => 1, 'b' => 2, 'c' => 3}
puts foo.to_s + "......."
json = foo.to_json
puts foo['b'] # => outputs the correct '2' answer

client = TCPSocket.open('localhost', 2000)
client.puts json
client.close
Some of the server code
require 'socket'
require 'json'

server = TCPServer.open(2000)
while true
  client = server.accept # Accept client
  response = client.gets
  print response
  response = response.chomp
  response.to_json
  puts response['b'] # => outputs 'b'
end
The output 'b' should be '2' instead. How do I fix this?
Thanks
In your server you wrote response.to_json. This turns a string into JSON and then throws the result away. And I don't like the .chomp, either.
Try
response = client.gets
hash = JSON.parse(response)
Now hash is a Ruby Hash object with your data in it, and hash['b'] should work correctly.
The problem is that .to_json does not parse JSON inside a string and replace itself with the result. It is used to convert the string into a format that is an acceptable JSON value.
require 'json'
string = "abc"
puts string
puts string.to_json
This will output:
abc
"abc"
The method is added to the String class by the JSON generator and it uses it internally to generate the JSON document.
But why does your response['b'] return "b"?
Because Ruby strings have a method called [] that can be used to:
Return a substring: "abc"[0,2] => "ab"
Return a single character from index: "abc"[1] => "b"
Return a substring if the string contains it: "abc"["bc"] => "bc", "abc"["fg"] => nil
Return a regexp match: "abc"[/^a([a-z])c/, 1] => "b"
and possibly some other ways I can't think of right now.
So this happens because your response is a string that has the character "b" in it:
response = "something with a b"
puts response["b"]
# outputs b
puts response["x"]
# outputs a blank line because response does not contain "x".
Instead of .to_json your code has to call JSON.parse or JSON.load:
data = JSON.parse(response)
puts data['b']
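Putting both sides together, a minimal sketch of the corrected round trip (client and server as separate processes, same port and data as in the question):

# server.rb
require 'socket'
require 'json'

server = TCPServer.open(2000)
loop do
  client = server.accept
  data = JSON.parse(client.gets) # parse the received line into a Hash
  puts data['b']                 # => 2
  client.close
end

# client.rb
require 'socket'
require 'json'

foo = {'a' => 1, 'b' => 2, 'c' => 3}
client = TCPSocket.open('localhost', 2000)
client.puts foo.to_json # serialize before sending
client.close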
I have the following code block in my REPL
#r "../packages/FSharp.Data.2.2.1/lib/net40/FSharp.Data.dll"
open FSharp.Data
[<Literal>]
let uri = "http://www.google.com/finance/option_chain?q=AAPL&output=json"
type OptionChain = JsonProvider<uri>
When I run it, FSI returns:
Error 1 The type provider 'ProviderImplementation.JsonProvider'
reported an error: Cannot read sample JSON from
'http://www.google.com/finance/option_chain?q=AAPL&output=json':
Invalid JSON starting at character 1, snippet =
---- {expiry:{y:2
----- json =
------ {expiry:{y:2015,m:5,d:8},expirations: [{y:2015,m:5,d:8},{y:2015,m:5,d:15},
This JSON is valid according to two other sites. Is it a bug in the type provider?
The output isn't valid JSON, because some keys are not quoted: in JSON every object key must be a double-quoted string, so {expiry:{y:2015,... is a JavaScript object literal rather than JSON.
{expiry:{y:2015,m:5,d:8},expirations:[{y:2015,m:5,d:8},{y:2015,m:5,d:15},{y:2015,m:5,d:22},{y:2015,m:5,d:29},{y:2015,m:6,d:5},{y:2015,m:6,d:12},{y:2015,m:6,d:19},{y:2015,m:6,d:26},{y:2015,m:7,d:17},{y:2015,m:8,d:21},{y:2015,m:10,d:16},{y:2016,m:1,d:15},{y:2017,m:1,d:20}],
puts:[{cid:"43623726334021",s:"AAPL150508P00085000",e:"OPRA",p:"-",c:"-",b:"-",a:"-",oi:"-",vol:"-",strike:"85.00",expiry:"May 8, 2015"},
...
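For comparison, a valid-JSON rendering of the first fragment would quote every key:

{"expiry": {"y": 2015, "m": 5, "d": 8}, "expirations": [{"y": 2015, "m": 5, "d": 8}, ...]}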