Inspired by this article, I was playing with translating functions from list-comprehension style to combinatory style, and I found something interesting.
-- Example 1: List Comprehension
*Main> [x|(x:_)<-["hi","hello",""]]
"hh"
-- Example 2: Combinatory
*Main> map head ["hi","hello",""]
"hh*** Exception: Prelude.head: empty list
-- Example 3: List Comprehension (translated from Example 2)
*Main> [head xs|xs<-["hi","hello",""]]
"hh*** Exception: Prelude.head: empty list
It seems strange that example 1 does not throw an exception, because (x:_) pattern matches one of the definitions of head. Is there an implied filter (not . null) when using list comprehensions?
See the section on list comprehensions in the Haskell report. So basically
[x|(x:_)<-["hi","hello",""]]
is translated as
let ok (x:_) = [ x ]
    ok _     = [ ]
in concatMap ok ["hi","hello",""]
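To check the translation (my own sketch, not part of the report), you can define ok as above and run it through concatMap; it reproduces the comprehension's result:

```haskell
-- Sketch: the desugaring from the Haskell report, written out as a program.
ok :: String -> String
ok (x:_) = [x]   -- a match contributes a one-element list
ok _     = []    -- a failed match contributes nothing

main :: IO ()
main = putStrLn (concatMap ok ["hi", "hello", ""])  -- prints "hh"
```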
P.S. Since list comprehensions can be translated into do expressions, a similar thing happens with do expressions, as detailed in the section on do expressions. So the following will also produce the same result:
do (x:_) <- ["hi","hello",""]
   return x
Pattern match failures are handled specially in list comprehensions. In case the pattern fails to match, the element is dropped. Hence you just get "hh" but nothing for the third list element, since the element doesn't match the pattern.
This is due to the definition of the fail function which is called by the list comprehension in case a pattern fails to match some element:
fail _ = []
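To see this in action, here is a sketch of how the do expression above desugars for the list monad, with an explicit case standing in for the pattern-match machinery (the empty-list branch is exactly what fail produces for lists):

```haskell
-- Sketch of the desugared do expression for the list monad.
-- A pattern-match failure becomes the empty list, mirroring fail _ = [].
result :: String
result = ["hi", "hello", ""] >>= \ys ->
  case ys of
    (x:_) -> return x  -- pattern matched: keep the head
    _     -> []        -- pattern failed: contribute nothing

main :: IO ()
main = putStrLn result  -- prints "hh"
```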
The correct parts of this answer are courtesy of kmc of #haskell fame. All the errors are mine, don't blame him.
Yes. When you qualify a list comprehension by pattern matching, the values which don't match are filtered out, getting rid of the empty list in your Example 1. In Example 3, the empty list matches the pattern xs so is not filtered, then head xs fails. The point of pattern matching is the safe combination of constructor discrimination with component selection!
You can achieve the same dubious effect with an irrefutable pattern, lazily performing the selection without the discrimination.
Prelude> [x|(~(x:_))<-["hi","hello",""]]
"hh*** Exception: <interactive>:1:0-30: Irrefutable pattern failed for pattern (x : _)
List comprehensions neatly package uses of map, concat, and thus filter.
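As a side note (my own addition, not from the answers above): the same safe head-taking can be written combinator-style with listToMaybe and mapMaybe from Data.Maybe, avoiding partial head entirely:

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

-- Safe combinator-style equivalent of [x | (x:_) <- xss]:
-- listToMaybe gives Just the head, or Nothing for an empty list,
-- and mapMaybe drops the Nothings.
safeHeads :: [[a]] -> [a]
safeHeads = mapMaybe listToMaybe

main :: IO ()
main = putStrLn (safeHeads ["hi", "hello", ""])  -- prints "hh"
```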
I wonder if there's a way to return a JSON object in SWI-Prolog, such that the predicate names become the keys, and the instantiated variables become the values. For example:
get_fruit(JS_out) :-
    apple(A),
    pear(P),
    to_json(..., JS_out).  % How to write this part?
apple("Gala").
pear("Bartlett").
I'm expecting JS_out to be:
JS_out = {"apple": "Gala", "pear": "Bartlett"}.
I couldn't figure out how to achieve this by using prolog_to_json/3 or other built-in predicates. While there are lots of posts on reading JSON into Prolog, I can't find many for the other way around. Any help is appreciated!
Given hard coded facts as shown, the simple solution is:
get_fruit(JS_out) :- apple(A), pear(P), JS_out = {"apple" : A, "pear" : P}.
However, in Prolog, you don't need the extra variable. You can write this as:
get_fruit({"apple" : A, "pear" : P}) :- apple(A), pear(P).
You could generalize this somewhat based upon two fruits of any kind:
get_fruit(Fruit1, Fruit2, {Fruit1 : A, Fruit2 : B}) :-
    call(Fruit1, A),
    call(Fruit2, B).
With a bit more work, it could be generalized to any number of fruits.
As an aside, it is a common beginner's mistake to think that is/2 is some kind of general assignment operator, but it is not. It is strictly for arithmetic expression evaluation and assumes that the second argument is a fully instantiated and evaluable arithmetic expression using arithmetic operators supported by Prolog. The first argument is a variable or a numeric value. Anything not meeting these criteria will always fail or generate an error.
org-element-parse-buffer returns a huge tree even for a small Org file. I want to transform this tree into JSON. Apparently, json.el uses recursive functions to traverse cons cells, and as Elisp doesn't support tail recursion, invocation of json-encode quickly runs out of stack. If I increase max-lisp-eval-depth and max-specpdl-size, Emacs crashes.
How do I work around that and transform a huge tree structure into JSON? In general, how do I work around the problem when I have a huge data structure and a recursive function that may run out of stack?
Yes, json.el functions are recursive, but recursive functions called on the Org-Element tree cause a stack overflow not because org-element-parse-buffer returns a huge AST, but because it returns a circular list. A tree-recursive function on a circular list is like a squirrel in a cage.
I guess the idea behind using self-references in the AST it returns is that as you traverse it, at any point you can go back to the parent by simply running plist-get on the keyword :parent. I imagine this usage for traversing the AST up and down:
(let ((xs '#1=(:text "foo" :child (:text "bar" :parent #1#))))
  (plist-get
   (plist-get
    xs
    :child)    ; (:text "bar" :parent (:text "foo" :child #0))
   :parent))   ; (:text "foo" :child (:text "bar" :parent #0))
But JSON doesn't support circular lists, so you need to remove these self-references from the AST before trying to convert to any data serialization format. I haven't found the way to elegantly remove circular references in the AST, so I resorted to a dirty hack:
Convert the AST to a string
Remove references with regular expressions
Convert the string back to an Elisp data structure
Suppose I have an Org file called test.org with the following content:
* Heading
** Subheading
Text
Then variable tree contains the parsed Org data from this buffer: (setq tree (with-current-buffer "test.org" (org-element-parse-buffer))). Then to prepare this data for JSON export, I just run:
(car (read-from-string (replace-regexp-in-string ":parent #[0-9]+?" "" (prin1-to-string tree))))
Even with all mentions of :parent removed, the new AST is still valid, so if the new AST is in variable tree2, then the following 3 expressions are equivalent:
(org-element-interpret-data tree2)
(with-current-buffer "test.org" (buffer-substring-no-properties 1 (buffer-end 1)))
"* Heading\n** Subheading\nText\n"
Note that for some reason org-element-interpret-data removes preceding whitespace, so the above is not technically true when your Org file contains indented lines.
Now all you need to do is to encode the new non-circular AST into JSON and write it into a file:
(f-write (json-encode tree2) 'utf-8 "test.json")
Notes
Elisp's cons cells are pairs with 2 slots: car and cdr. If the cdr of each cell points at another cons cell, we get a linked list. If car and cdr each point at a plain value, we get a dotted pair. Therefore (1 . (2 . (3 . nil))) is equivalent to (1 2 3). But a cdr (or a car, for that matter) might point at any other cons cell, including one that appeared earlier in the list, giving rise to a circular linked list.
Exercise: create a complex tree data structure with several self-references to different subtrees. Then try traversing this tree and jumping by the self-references to get the idea.
With ->> threading macro from dash list manipulation library the expression is equivalent to:
(->> tree prin1-to-string (replace-regexp-in-string ":parent #[0-9]+?" "") read-from-string car)
(buffer-substring-no-properties 1 (buffer-end 1)) is like (buffer-string), but without annoying text properties attached.
f-write is a function that writes text to files from f third-party file manipulation library.
tl;dr
Here's how to remove references to parent and structure in an Org tree before encoding it to JSON:
(let* ((tree (org-element-parse-buffer 'object nil)))
  (org-element-map tree (append org-element-all-elements
                                org-element-all-objects
                                '(plain-text))
    (lambda (x)
      (if (org-element-property :parent x)
          (org-element-put-property x :parent "none"))
      (if (org-element-property :structure x)
          (org-element-put-property x :structure "none"))))
  (json-encode tree))
I feel like I understand MAKE as being a constructor for a datatype. It takes two arguments: the first is the target datatype, and the second is a "spec".
In the case of objects it's fairly obvious that a block of Rebol data can be used as the "spec" to get back a value of type object!
>> foo: make object! [x: 10 y: 20 z: func [value] [print x + y + value] ]
== make object! [
x: 10
y: 20
]
>> print foo/x
10
>> foo/z 1
31
I know that if you pass an integer when you create a block, it will preallocate enough underlying memory to hold a block of that length, despite being empty:
>> foo: make block! 10
== []
That makes some sense. If you pass a string in, then you get the string parsed into Rebol tokens...
>> foo: make block! "some-set-word: {String in braces} some-word 12-Dec-2012"
== [some-set-word: "String in braces" some-word 12-Dec-2012]
Not all types are accepted, and again I'll say so far... so good.
>> foo: make block! 12-Dec-2012
** Script error: invalid argument: 12-Dec-2012
** Where: make
** Near: make block! 12-Dec-2012
By contrast, the TO operation is defined very similarly, except it is for "conversion" instead of "construction". It also takes a target type as its first parameter, and then a "spec". But it acts differently on values:
>> foo: to block! 10
== [10]
>> foo: to block! 12-Dec-2012
== [12-Dec-2012]
That seems reasonable. If it received a non-series value, it wrapped it in a block. If you try an any-block! value with it, I'd imagine it would give you a block! series with the same values inside:
>> foo: to block! quote (a + b)
== [a + b]
So I'd expect a string to be wrapped in a block, but it just does the same thing MAKE does:
>> foo: to block! "some-set-word: {String in braces} some-word 12-Dec-2012"
== [some-set-word: "String in braces" some-word 12-Dec-2012]
Why is TO so redundant with MAKE, and what is the logic behind their distinction? Passing integers into to block! gets the number inside a block (instead of triggering the special construction mode), and dates go into to block! making the date in a block instead of an error as with MAKE. So why wouldn't one want a to block! of a string to put that string inside a block?
Also: beyond reading the C sources for the interpreter, where is the comprehensive list of specs accepted by MAKE and TO for each target type?
MAKE is a constructor, TO is a converter. The reason that we have both is that for many types that operation is different. If they weren't different, we could get by with one operation.
MAKE takes a spec that is supposed to be a description of the value you're constructing. This is why you can pass MAKE a block and get values like objects or functions that aren't block-like at all. You can even pass an integer to MAKE and have it be treated like an allocation directive.
TO takes a value that is intended to be more directly converted to the target type (this value being called "spec" is just an unfortunate naming mishap). This is why the values in the input more directly correspond to the values in the output. Whenever there is a sensible default conversion, TO does it. That is why many types don't have TO conversions defined between them, the types are too different conceptually. We have fairly comprehensive conversions for some types where this is appropriate, such as to strings and blocks, but have carefully restricted some other conversions that are more useful to prohibit, such as from none to most types.
In some cases of simple types, there really isn't a complex way to describe the type. For them, it doesn't hurt to have the constructors just take self-describing values as their specs. Coincidentally, this ends up being the same behavior as TO for the same type and values. This doesn't hurt, so it's not useful to trigger an error in this case.
There are no comprehensive docs for the behavior of MAKE and TO because in Rebol 3 their behavior isn't completely finalized. There is still some debate in some cases about what the proper behavior should be. We're trying to make things more balanced, without losing any valuable functionality. We've already done a lot of work improving none and binary conversions, for instance. Once they are more finalized, and once we have a place to put them, we'll have more docs. In the meanwhile most of the Rebol 2 behavior is documented, and most of the changes so far for Rebol 3 are in CureCode.
Also: beyond reading the C sources for the interpreter, where is the comprehensive list of specs accepted by MAKE and TO for each target type?
It may not be that useful, since it is Red-specific:
comparison-matrix
conversion-matrix
But it does at least mention whether the behaviour is similar to or different from Rebol's.
I asked a related question here: Clojure: How do I turn clojure code into a string that is evaluatable? The answer there mostly works, but lists are translated to raw parens, which fails.
The answer was great, but I realized that is not exactly what I needed. I simplified the example for Stack Overflow, but I am not just writing out data; I am trying to write out function definitions and other things which contain structures that contain lists. So here is a simple example (co-opted from the last question).
I want to generate a file which contains the function:
(defn aaa []
  (fff :update {:bbb "bbb" :xxx [1 2 3] :yyy (3 5 7)}))
Everything after the :update is a structure I have access to when writing the file, so I call str on it and it emerges in that state. This is fine, but when I load-file this generated function, the list tries to call 3 as a function (as it is the first element in the list).
So I want a file which contains my function definition that I can then call load-file and call the functions defined in it. How can I write out this function with associated data so that I can load it back in without clojure thinking what used to be lists are now function calls?
You need to quote the structure prior to obtaining the string representation:
(list 'quote foo)
where foo is the structure.
Three additional remarks:
traversing the code to quote all lists / seqs would not do at all, since the top-level (defn ...) form would also get quoted;
lists are not the only potentially problematic type -- symbols are another one (+ vs. #<core$_PLUS_ clojure.core$_PLUS_#451ef443>);
rather than using (str foo) (even with foo already quoted), you'll probably want to print out the quoted foo -- or rather the entire code block with the quoted foo inside -- using pr / prn.
The last point warrants a short discussion. pr explicitly promises to produce a readable representation if *print-readably* is true, whereas str only produces such a representation for Clojure's compound data structures "by accident" (of the implementation) and still only if *print-readably* is true:
(str ["asdf"])
; => "[\"asdf\"]"
(binding [*print-readably* false]
  (str ["asdf"]))
; => "[asdf]"
The above behaviour is due to clojure.lang.RT/printString's (that's the method Clojure's data structures ultimately delegate their toString needs to) use of clojure.lang.RT/print, which in turn chooses output format depending on the value of *print-readably*.
Even with *print-readably* bound to true, str may produce output inappropriate for clojure.lang.Reader's consumption: e.g. (str "asdf") is just "asdf", while the readable representation is "\"asdf\"". Use (with-out-str (pr foo)) to obtain a string object containing the representation of foo, guaranteed readable if *print-readably* is true.
Try this instead...
(defn aaa []
  (fff :update {:bbb "bbb" :xxx [1 2 3] :yyy (list 3 5 7)}))
Wrap it in a call to quote to read it without evaluating it.
I have two lists of tuples which are as follows: [(String,Integer)] and [(Float,Integer)]. Each list has several tuples.
For every Integer in the second list, I need to check whether it matches an Integer in the first list, and if it does, return the corresponding String. The function needs to return a list of Strings, i.e. [String], with all the results.
I have already defined a function which returns a list of Integers from the second list (for the comparison on the integers in the first list).
This should be solvable using higher-order functions. I've spent a considerable amount of time playing with map and filter but haven't found a solution!
You have a list of Integers from the second list. Let's call this ints.
Now you need to do two things: first, filter the (String, Integer) list so that it only contains pairs whose integers occur in the ints list, and second, turn this list into just a list of Strings.
These two steps correspond to the filter and map respectively.
First, you need a function to filter by. This function should take a (String, Integer) pair and return whether the integer is in the ints list. So it should have the type:
check :: (String, Integer) -> Bool
Writing this should not be too difficult. Once you have it, you can just filter the first list by it.
Next, you need a function to transform a (String, Integer) pair into a String. This will have type:
extract :: (String, Integer) -> String
This should also be easy to write. (A standard function like this actually exists, but if you're just learning it's healthy to figure it out yourself.) You then need to map this function over the result of your previous filter.
I hope this gives you enough hints to get the solution yourself.
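One possible completion of the two steps, as a sketch (check and extract are the names suggested above; the wrapper name lookupNames is my own, and ints stands for the list of Integers you already computed):

```haskell
-- Sketch: filter by membership in ints, then map out the Strings.
lookupNames :: [Integer] -> [(String, Integer)] -> [String]
lookupNames ints pairs = map extract (filter check pairs)
  where
    check :: (String, Integer) -> Bool
    check (_, i) = i `elem` ints   -- keep pairs whose Integer occurs in ints

    extract :: (String, Integer) -> String
    extract (s, _) = s             -- the standard function fst does the same

main :: IO ()
main = print (lookupNames [2, 7] [("a", 2), ("b", 3), ("c", 7)])  -- ["a","c"]
```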
One can see in this example how important it is to describe the problem accurately, not only to others but foremost to oneself.
You want the Strings from the first list, whose associated Integer does occur in the second list.
With such problems it is important to do the solutions in small steps. Most often one cannot write down a function that does it right away, yet this is what many beginners think they must do.
Start out by writing the type signature you need for your function:
findFirsts :: [(String, Integer)] -> [(Float, Integer)] -> [String]
Now, from the problem description, we can deduce, that we essentially have two things to do:
Transform a list of (String, Integer) to a list of String
Select the entries we want.
Hence, the basic skeleton of our function looks like:
findFirsts sis fis = map ... selected
  where
    selected = filter isWanted sis

    isWanted :: (String, Integer) -> Bool
    isWanted (_, i) = ....
You'll need the functions fst, elem and snd to fill out the empty spaces.
Side note: I personally would prefer to solve this with a list comprehension, which results often in better readable (for me, anyway) code than a combination of map and filter with nontrivial filter criteria.
Half of the problem is to get the string list if you have a single integer. There are various possibilities to do this, e.g. using filter and map. However you can combine both operations using a "fold":
findAll x axs = foldr extract [] axs
  where
    extract (a, y) runningList
      | x == y    = a : runningList
      | otherwise = runningList

-- usage:
findAll 2 [("a",2),("b",3),("c",2)]
-- ["a","c"]
For a fold you have a start value (here []) and an operation that combines the running values successively with all list elements, either starting from the left (foldl) or from the right (foldr). Here this operation is extract, and you use it to decide whether to add the string from the current element to the running list or not.
Having this part done, the other half is trivial: You need to get the integers from the (Float,Integer) list, call findAll for all of them, and combine the results.
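Putting the two halves together might look like this (a sketch; findFirsts is the signature from the earlier answer, and the example data is my own):

```haskell
-- findAll from above, plus the trivial second half:
-- collect the Integers from the (Float, Integer) list and
-- run findAll for each of them, concatenating the results.
findAll :: Integer -> [(String, Integer)] -> [String]
findAll x axs = foldr extract [] axs
  where
    extract (a, y) runningList
      | x == y    = a : runningList
      | otherwise = runningList

findFirsts :: [(String, Integer)] -> [(Float, Integer)] -> [String]
findFirsts sis fis = concatMap (\i -> findAll i sis) (map snd fis)

main :: IO ()
main = print (findFirsts [("a", 2), ("b", 3), ("c", 2)] [(1.0, 2), (4.5, 3)])
-- ["a","c","b"]
```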