Clojure exception handling - exception not caught when parsing bad XML

I'm having some issues with clojure.data.xml: when parsing bad XML, the exception thrown is not caught. I suspect the exception is being wrapped at run time, but my attempts to unwrap it have been unsuccessful. Can anyone point out to me why this may be happening?
(defn parse-xml-from-string
  "takes in valid xml as a string and turns it into
  #clojure.data.xml data, if bad xml returns false"
  [xml]
  (try
    (do (parse (java.io.StringReader. xml)))
    (catch javax.xml.stream.XMLStreamException e false)
    (catch Exception ex
      (cond (isa? (class (.getCause ex)) javax.xml.stream.XMLStreamException) false))))
Method call:
(viva-api.helpers.validation/parse-xml-from-string "<?xml version=\"1.0\"encoding=\"UTF-8\"?><foo><bar><baz>The baz value</baz></bar></foos>")
Output:
#clojure.data.xml.Element{:tag :foo, :attrs {}, :content (user=> XMLStreamException ParseError at [row,col]:[1,84]
Message: The end-tag for element type "foo" must end with a '>' delimiter. com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next (XMLStreamReaderImpl.java:598)

I think the problem you are seeing is related to the laziness of the value returned by parse. According to its docstring "Parses the source, which can be an InputStream or Reader, and returns a lazy tree of Element records. [...]".
(ns xml
  (:use clojure.data.xml))

(defn parse-xml-from-string
  "takes in valid xml as a string and turns it into
  #clojure.data.xml data, if bad xml returns false"
  [xml]
  (try
    (parse (java.io.StringReader. xml))
    (catch javax.xml.stream.XMLStreamException ex
      false)))

(parse-xml-from-string "<bla/>") ;= #clojure.data.xml.Element{:tag :bla, :attrs {}, :content ()}
(parse-xml-from-string "<bla")   ;= false
(parse-xml-from-string "<bla>")  ; throws XMLStreamException

(def x (parse-xml-from-string "<bla>")) ; doesn't throw an exception unless it's evaluated
x ; throws XMLStreamException
EDIT
The value returned from parse is a lazy tree built top-down from an Element record and based on a lazy sequence of Event objects, as mentioned in the docstring for the event-tree function. The laziness lies in the :content field of the record, which is realized when the field is accessed. One way I've found to force the realization of the whole tree is to call the str function on it; this feels hacky and looks ugly, so if anyone has a better idea, please provide a better solution.
(defn parse-xml-from-string
  "takes in valid xml as a string and turns it into
  #clojure.data.xml data, if bad xml returns false"
  [xml]
  (try
    (let [x (parse-str xml)]
      (str x)
      x)
    (catch javax.xml.stream.XMLStreamException ex
      false)))
This seems like going to great lengths to avoid laziness, which is, as I understand it, one of the main reasons to use clojure.data.xml. Since you seem to want your whole XML string parsed at once, maybe the clojure.xml/parse function is better suited for your needs.
(require '[clojure.xml :as xml]) ; clojure.xml, not clojure.data.xml

(defn my-parse-str
  [s]
  (try
    (xml/parse (java.io.ByteArrayInputStream. (.getBytes s)))
    (catch Exception e false)))
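For example (results shown approximately; clojure.xml/parse builds the whole tree eagerly via SAX, so a parse error on malformed input surfaces inside the try and is caught):
(my-parse-str "<foo><bar/></foo>") ;= {:tag :foo, :attrs nil, :content [{:tag :bar, :attrs nil, :content nil}]}
(my-parse-str "<foo>")             ;= false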

Related

"Hello" |> printfn generates an error in F#

On https://tryfsharp.fsbolero.io/,
printfn "Hello"
works as expected without errors. However, using the pipe operator,
"Hello" |> printfn
gives the error:
The type 'string' is not compatible with the type 'Printf.TextWriterFormat'
My understanding of the pipe operator's behavior is that f(a) is equivalent to a |> f.
Why does the latter generate the error? Thanks.
Yes, the pipe operator does what you think it does. However, printfn is "special" in that it takes a kind of "formattable string" (there are several kinds of these), and the compiler does its magic only when the format string appears as a direct argument.
In other words, your "Hello" in the first example is not really a string, it is a Printf.TextWriterFormat object, magically created by the compiler.
Still, you can do what you want by using an explicit format string. You'll see this approach quite a bit in real-world code:
"Hello" |> printfn "%s"
Here, %s means: give me a string. The compiler magic again takes the format string, here "%s", this time expecting an argument of type string, and turns it into a function.
Note 1: this "surprise effect" is being considered, and the community is working towards adding a printn function (i.e., without the f, which stands for format) that just takes a plain string: https://github.com/fsharp/fslang-suggestions/issues/1092
Note 2: if you do want to pass printfn's format argument around as a value, you can make the type explicit, but this is done very rarely:
let x = Printf.TextWriterFormat<unit> "Hello"
x |> printfn // now it's legal
Note 3: you might wonder why the compiler doesn't apply its magic to the left-hand side of |> as well. The reason is that |> is not intrinsic to the compiler, but just another overridable operator. Fixing this is therefore technically very hard (there can be any number of operators on the left-hand side), though it has been considered at certain times.
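To illustrate, here is a sketch of roughly how the pipe operator is defined (simplified from its FSharp.Core definition):
// roughly how FSharp.Core defines the pipe operator
let inline (|>) x f = f x
// Because "Hello" reaches printfn through this ordinary function, the compiler
// never sees it as a direct argument of printfn, so the automatic conversion to
// Printf.TextWriterFormat does not happen.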
Another alternative
In the comments, the OP said that they didn't like the idea of having to use printfn "%i" and the like, and ended up writing print, printInt etc. functions.
If you go that route, you can write your code in a class with only static methods, while using function-style naming. Then just open the static type (open type declarations were introduced in F# 5).
module MyPrinters =
    // create a static type (no constructors)
    type Printers =
        static member print x = printfn "%s" x // string
        static member print x = printfn "%i" x // integer
        static member print x = printfn "%M" x // decimal
        static member print x = printfn "%f" x // float

module X =
    // open the static type
    open type MyPrinters.Printers

    let f() = print 42
    let g() = print "test"

Rubyist way to test for a hash key in an array

I am parsing some JSON which has been converted to Ruby data structures. I have records that look either like this:
"uros"=>[{"ure"=>"zip*less", "prs"=>[{"mw"=>"ˈzip-ləs", "sound"=>{"audio"=>"ziples01", "ref"=>"c", "stat"=>"1"}}]
or like this:
"uros"=>[{"ure"=>"gas chromatographic", "fl"=>"adjective"}]
I want to pull out the hash value for the key audio when it appears.
What is the clean way to test for the presence of said key in Ruby? Here are my attempts thus far. The variable entry represents the enclosing data structure:
if entry["uros"] != nil # Nope
#if entry["uros"]["prs"] != nil # Nope
#if not entry["uros"]["prs"]["sound"]["audio"].first.instance_of? nil # Nope
#if ! entry["uros"]["prs"].instance_of? nil # Nope
puts entry["uros"].first["prs"].first["sound"]["audio"]
end
The error message I get is either:
undefined method `first' for nil:NilClass (NoMethodError)
or conversely:
undefined method `[]' for nil:NilClass (NoMethodError)
How would you do this?
Hash has a method for checking the presence of a key:
if entry.key?('uros')
But the problem with your code is that you don't check for the existence of the nested keys. You check for "uros", but then "prs" might not exist.
Here's a better version that allows nils at every step:
audio = entry.dig('uros', 0, 'prs', 0, 'sound', 'audio')
You can use the safe navigation operator to search around the data structure without getting a bunch of NoMethodErrors. The safe navigation operator will call the method, or return nil if called on nil.
audio = entry["uros"]
  &.first
  &.[]("prs")
  &.first
  &.dig("sound", "audio")
This might look funny, but remember that [] is just another method. If any part of that method chain returns nil, the whole thing will return nil.
entry["uros"]&.first is roughly equivalent to...
entry["uros"].first if entry["uros"] != nil

Elm 'Json.Decode.succeed': how is it used in a decode pipeline if it is supposed to always return the same value?

I'm learning Elm and one thing that has puzzled me is 'Json.Decode.succeed'. According to the docs
succeed : a -> Decoder a
Ignore the JSON and produce a certain Elm value.
decodeString (succeed 42) "true" == Ok 42
decodeString (succeed 42) "[1,2,3]" == Ok 42
decodeString (succeed 42) "hello" == Err ...
I understand that (although, as a beginner, I don't yet see its use). But this method is also used in a Decode pipeline, thus:
somethingDecoder : Maybe Wookie -> Decoder Something
somethingDecoder maybeWookie =
    Json.Decode.succeed Something
        |> required "caterpillar" Caterpillar.decoder
        |> required "author" (Author.decoder maybeWookie)
What is going on here? That is, if 'succeed' ignores the JSON that's passed to it, how is it used to read JSON and turn it into Elm values? Any clues appreciated!
Just to start, the intuition for a decoder pipeline is that it acts like a curried function where piping with required and optional applies arguments one by one. Except that everything (the function, its arguments, and the return value) is wrapped in Decoders.
So as an example:
succeed Something
    |> required (succeed 42)
    |> required (succeed "foo")
is equivalent to
succeed (Something 42 "foo")
and
decodeString (succeed (Something 42 "foo")) whatever
will return Ok (Something 42 "foo") as long as whatever is valid JSON.
When everything succeeds it's just a really convoluted function call. The more interesting aspect of decoders, and the reason we use them in the first place, is in the error path. But since 'succeed' is what's of interest here, we'll ignore that and save a lot of time, text and brain cells. Just know that without considering the error path this will all seem very contrived.
Anyway, let's try to recreate this to see how it works.
Decode.map2
The key to the pipelines, apart from the pipe operator, is the Decode.map2 function. You've probably already used it, or its siblings, if you've tried writing JSON decoders without using pipelines. We can implement our example above using map2 like this:
map2 Something
    (succeed 42)
    (succeed "foo")
This will work exactly like the example above. But the problem with this, from a user POV, is that if we need to add another argument we also have to change map2 to map3. And also Something isn't wrapped in a decoder, which is boring.
Calling functions wrapped in Decoders
The reason this is useful anyway is because it gives us access to several values at the same time, and the ability to combine them in whatever way we want. We can use this to call a function inside a Decoder with an argument inside a Decoder and have the result also wrapped in a Decoder:
map2 (\f x -> f x)
    (succeed String.fromInt)
    (succeed 42)
Currying and partial application
Unfortunately this still has the problem of needing to change the map function if we need more arguments. If only there was a way to apply arguments to a function one at a time... like if we had currying and partial application. Since we have a way to call functions wrapped in decoders now, what if we return a partially applied function instead and apply the remaining arguments later?
map2 (\f x -> f x)
    (succeed Something)
    (succeed 42)
will return a Decoder (String -> Something), so now we just have to rinse and repeat with this and the last argument:
map2 (\f x -> f x)
    (map2 (\f x -> f x)
        (succeed Something)
        (succeed 42))
    (succeed "")
Et voila, we have now recreated JSON decode pipelines! Although it might not look like it on the surface.
Ceci n'est pas une pipe
The final trick is to use map2 with the pipe operator. The pipe is essentially defined as \x f -> f x. See how similar this looks to the function we've been using? The only difference is that the arguments are swapped around, so we need to swap the order we pass arguments as well:
map2 (|>)
    (succeed "")
    (map2 (|>)
        (succeed 42)
        (succeed Something))
and then we can use the pipe operator again to reach the final form
succeed Something
    |> map2 (|>) (succeed 42)
    |> map2 (|>) (succeed "")
It should now be apparent that the required used throughout this answer is essentially just an alias for map2 (|>).
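As a sketch, here is that simplified required written out (the real Json.Decode.Pipeline calls this combinator custom, and its required adds a field lookup on top of it):
-- the simplified required used throughout this answer
required : Decoder a -> Decoder (a -> b) -> Decoder b
required =
    map2 (|>)

-- the real library's required additionally reads a named field first,
-- roughly: required name decoder = custom (field name decoder)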
And that's all there is to it!

Getting a property from a JavaScript object

I have the following environment to start from:
(def field-name "blah")

(def obj (js* "{
  list: [1,2,3,4,5],
  blah: \"vtha\",
  o: { answer: 42 }
}"))
How do I get (in the idiomatic way) the blah field using the field-name var?
(aget obj field-name)
works, but the docs say it is intended for arrays.
You can use goog.object/get, and I think this is the idiomatic way to access the properties.
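A minimal sketch, assuming a ClojureScript namespace that requires goog.object (and using the #js literal instead of js* for the object):
(ns example.core ; hypothetical namespace
  (:require [goog.object :as gobj]))

(def field-name "blah")
(def obj #js {:list #js [1 2 3 4 5]
              :blah "vtha"
              :o    #js {:answer 42}})

(gobj/get obj field-name) ;= "vtha"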
I would also recommend binaryage/cljs-oops, which addresses this very problem.

Destructuring macro for ST-JSON JSO object

I have another question regarding decoding JSON in Common Lisp. I settled on ST-JSON as my tool. I am able to get a JSO object containing the JSON data, and I can access all fields with st-json:getjso. I wanted to write a macro, similar in principle to destructuring-bind, that would provide local bindings to variables named after JSON fields (I have since started doubting whether this is a good idea, but that's a different question). I came up with the following:
(defmacro destructure-jso (jso &body body)
  (let (keys values)
    (st-json:mapjso #'(lambda (key value)
                        (push key keys)
                        (push value values))
                    jso)
    `(destructuring-bind ,keys ,values
       ,@body)))
But when I try to use it on a JSO object, I get the error "The value PARAMS is not of the expected type STRUCTURE.", where PARAMS is the object. Can someone explain this to me?
Thanks.
Apparently, you are using destructure-jso like this:
(let ((params (st-json:read-json-from-string "{\"foo\":42,\"bar\":\"baz\"}")))
  (destructure-jso params
    (list foo bar)))
However, destructure-jso, being a macro, is handled at macro expansion time, long before the JSON even gets parsed. params is passed to your macro as a symbol, without being evaluated; and even if its evaluation were attempted, it would still be unbound at that point.
So, if you want to write a destructure-jso, you will need the list of keys at macro expansion time. You could pass the list in a normal way:
> (defmacro destructure-jso-2 (vars json &body body)
    `(let ,(mapcar #'(lambda (var)
                       (list var `(getjso ,(string-downcase (symbol-name var)) ,json)))
                   vars)
       ,@body))
DESTRUCTURE-JSO-2

> (let ((params (st-json:read-json-from-string "{\"foo\":42,\"bar\":\"baz\"}")))
    (destructure-jso-2 (foo bar)
        params
      (list foo bar)))
(42 "baz")
Or, if you like, use a "template" JSON for creating the mappings:
> (defmacro destructure-jso-3 (template json &body body)
    (let (bindings)
      (st-json:mapjso #'(lambda (key val)
                          (declare (ignore val))
                          (push (list (intern (string-upcase key)) `(getjso ,key ,json))
                                bindings))
                      (st-json:read-json-from-string template))
      `(let ,bindings
         ,@body)))
DESTRUCTURE-JSO-3

> (let ((params (st-json:read-json-from-string "{\"foo\":42,\"bar\":\"baz\"}")))
    (destructure-jso-3 "{\"foo\":null,\"bar\":null}"
        params
      (list foo bar)))
(42 "baz")
Here, the variable bindings come from the first (template) JSON, values from the second one. The template JSON is parsed at macroexpansion time, the params JSON every time your code is executed.
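To make that concrete, here is a sketch of roughly what the destructure-jso-3 call above macroexpands to (the binding order may differ, since the keys are pushed onto a list):
(let ((bar (getjso "bar" params))
      (foo (getjso "foo" params)))
  (list foo bar))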
Whether either of these is a useful approach for you or not, I do not know.