There are plenty of questions about Ruby pretty-printing of recursive structures à la JSON (i.e., just scalars, arrays and hashes), and the answers point to json, pp, awesome_print, etc. However, I have not seen a way to pretty-print a hash in Ruby syntax that would, in addition, please classical Ruby linters. Something like
> pretty({a: [1, 2, {b: 3, c: 4}], d: {e: {'f g': 42}}})
=> "{a: [1, 2, {b: 3, c: 4}], d: {e: {'f g': 42}}}"
awesome_print comes close:
> ({a: [1, 2, {b: 3, c: 4}], d: {e: {'f g': 42}}}).
ai(plain: true, multiline: false, ruby19_syntax: true)
=> "{ a: [ 1, 2, { b: 3, c: 4 } ], d: { e: { \"f g\": 42 } } }"
but I couldn't find a way to get rid of the inner spaces around braces and brackets, and it uses double quotes for a constant string, which RuboCop dislikes.
I can write my pretty-printer myself, but I'm surprised there's no COTS^h^h^h^hgem that does that. Did I miss something?
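For the record, a hand-rolled version is short. Below is only a sketch (`pretty` and `pretty_key` are my own hypothetical names, not from any gem): it covers just scalars, arrays and hashes, emits Ruby 1.9 hash syntax without inner spaces, and single-quotes strings; it does no escaping, so strings containing quotes are out of scope:

```ruby
# Sketch of a linter-friendly pretty-printer (not an existing gem).
# Handles only scalars, arrays and hashes; no quote escaping.
def pretty(obj)
  case obj
  when Hash
    '{' + obj.map { |k, v| "#{pretty_key(k)}: #{pretty(v)}" }.join(', ') + '}'
  when Array
    '[' + obj.map { |v| pretty(v) }.join(', ') + ']'
  when String
    "'#{obj}'"
  else
    obj.inspect
  end
end

# Quote symbol keys only when they are not plain identifiers.
def pretty_key(key)
  key.to_s =~ /\A[a-zA-Z_]\w*\z/ ? key : "'#{key}'"
end
```

With the example above, `pretty({a: [1, 2, {b: 3, c: 4}], d: {e: {'f g': 42}}})` produces the desired single-quoted, space-free output.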
I am currently trying to solve the following problem.
I must find all ways to pair up the elements of a set (with an even number of elements) such that:
No two pairs have common elements
All elements are in a pair
As an example:
pairingsSet({0, 1, 2, 3})
should return something like
{
  {{0, 1}, {2, 3}},
  {{0, 2}, {1, 3}},
  {{0, 3}, {1, 2}},
}
And a more verbose example:
pairingsSet({0, 1, 2, 3, 4, 5})
should return something like
{
  {{0, 1}, {2, 3}, {4, 5}},
  {{0, 1}, {2, 4}, {3, 5}},
  {{0, 1}, {2, 5}, {3, 4}},
  {{0, 2}, {1, 3}, {4, 5}},
  {{0, 2}, {1, 4}, {3, 5}},
  ...
  {{0, 5}, {1, 3}, {2, 4}},
  {{0, 5}, {1, 4}, {2, 3}},
}
I can tell that the easiest way to solve this problem is with recursion, but I can't quite put my finger on how to do it.
I ordered the sets above because it helped me think about a solution, but the order does not matter. I will likely figure out the answer soon; I posted this question in case anyone else encounters a similar problem. If anyone figures out an alternative answer, though, I would love to see it!
(I am currently working on a solution in Go although solutions in other languages are very much welcome)
Here is my solution in Go:
func allPairingSetsForAlphabet(alphabet []int) [][][2]int {
    if len(alphabet) == 2 {
        return [][][2]int{{{alphabet[0], alphabet[1]}}}
    }
    first := alphabet[0]
    rest := alphabet[1:]
    var pairingsSet [][][2]int
    for i, v := range rest {
        pair := [2]int{first, v}
        withoutV := make([]int, len(rest)-1)
        copy(withoutV[:i], rest[:i])
        copy(withoutV[i:], rest[i+1:])
        // recursive call
        otherPairingsSet := allPairingSetsForAlphabet(withoutV)
        for _, otherPairings := range otherPairingsSet {
            thisPairing := make([][2]int, len(otherPairings)+1)
            thisPairing[0] = pair
            copy(thisPairing[1:], otherPairings)
            pairingsSet = append(pairingsSet, thisPairing)
        }
    }
    return pairingsSet
}
Essentially it performs the following steps:
If there are only two remaining elements in the alphabet, return a pairing set containing the single pairing of those two elements (i.e. {{{0, 1}}})
Pick an element (we will call it first)
Make a new set which contains all elements of the alphabet except first (we will call this rest)
For each element (v) in rest, we:
Make a pair {first, v} (we will call this pair)
Create a subset of rest which contains all elements of rest except v (we will call this withoutV)
Make a recursive call, allPairingSetsForAlphabet(withoutV)
For each pairing (pairing) that this call returns, add {pair} ∪ pairing to the result.
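Since solutions in other languages were welcomed, the same recursion fits in a few lines of Ruby. This is only a sketch (the name pairings_set is mine), assuming a non-empty array of even size:

```ruby
# Sketch: all ways to partition an even-sized array into pairs,
# mirroring the Go recursion above. Assumes alphabet.size is even and >= 2.
def pairings_set(alphabet)
  # Base case: two elements form exactly one pairing.
  return [[alphabet]] if alphabet.size == 2

  first, *rest = alphabet
  rest.each_with_index.flat_map do |v, i|
    without_v = rest[0...i] + rest[i + 1..-1]
    # Prepend the pair {first, v} to every pairing of the remainder.
    pairings_set(without_v).map { |pairing| [[first, v]] + pairing }
  end
end
```

For 2n elements the number of pairings is (2n − 1)!! = 1 · 3 · 5 · …, so four elements give the three pairings from the first example and six elements give 15.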
I found a few SO posts on related issues, but they were unhelpful. I finally figured it out, so here is how to read the contents of a .json file in SWI-Prolog. Say the path is /home/xxx/dnns/test/params.json, and I want to turn the JSON object in that file into a Prolog dict:
{
"type": "lenet_1d",
"input_channel": 1,
"output_size": 130,
"batch_norm": 1,
"use_pooling": 1,
"pooling_method": "max",
"conv1_kernel_size": 17,
"conv1_num_kernels": 45,
"conv1_stride": 1,
"conv1_dropout": 0.0,
"pool1_kernel_size": 2,
"pool1_stride": 2,
"conv2_kernel_size": 12,
"conv2_num_kernels": 35,
"conv2_stride": 1,
"conv2_dropout": 0.514948804688646,
"pool2_kernel_size": 2,
"pool2_stride": 2,
"fcs_hidden_size": 109,
"fcs_num_hidden_layers": 2,
"fcs_dropout": 0.8559119274655482,
"cost_function": "SmoothL1",
"optimizer": "Adam",
"learning_rate": 0.0001802763794651928,
"momentum": null,
"data_is_target": 0,
"data_train": "/home/xxx/data/20180402_L74_70mm/train_2.h5",
"data_val": "/home/xxx/data/20180402_L74_70mm/val_2.h5",
"batch_size": 32,
"data_noise_gaussian": 1,
"weight_decay": 0,
"patience": 20,
"cuda": 1,
"save_initial": 0,
"k": 4,
"save_dir": "DNNs/20181203090415_11_created/k_4"
}
To read a JSON file with SWI-Prolog, query
?- use_module(library(http/json)). % to enable json_read_dict/2
?- FPath = '/home/xxx/dnns/test/params.json', open(FPath, read, Stream), json_read_dict(Stream, Dicty).
You'll get
FPath = '/home/xxx/dnns/test/params.json',
Stream = <stream>(0x7fa664401750),
Dicty = _12796{batch_norm:1, batch_size:32, conv1_dropout:0.0,
  conv1_kernel_size:17, conv1_num_kernels:45, conv1_stride:1,
  conv2_dropout:0.514948804688646, conv2_kernel_size:12,
  conv2_num_kernels:35, conv2_stride:1, cost_function:"SmoothL1",
  cuda:1, data_is_target:0, data_noise_gaussian:1,
  data_train:"/home/xxx/data/20180402_L74_70mm/train_2.h5",
  data_val:"/home/xxx/data/20180402_L74_70mm/val_2.h5",
  fcs_dropout:0.8559119274655482, fcs_hidden_size:109,
  fcs_num_hidden_layers:2, input_channel:1, k:4,
  learning_rate:0.0001802763794651928, momentum:null,
  optimizer:"Adam", output_size:130, patience:20,
  pool1_kernel_size:2, pool1_stride:2, pool2_kernel_size:2,
  pool2_stride:2, pooling_method:"max",
  save_dir:"DNNs/20181203090415_11_created/k_4", save_initial:0,
  type:"lenet_1d", use_pooling:1, weight_decay:0}.
where Dicty is the desired dictionary.
If you want to define this as a predicate, you could do:
:- use_module(library(http/json)).
get_dict_from_json_file(FPath, Dicty) :-
    open(FPath, read, Stream),
    json_read_dict(Stream, Dicty),
    close(Stream).
Even DEC-10 Prolog, released 40 years ago, could handle JSON just as a normal term. There should be no need for a specialized library or parser for JSON, because Prolog can parse it directly.
?- X = {"a":3, "b":"hello", "c":undefined, "d":null}.
X = {"a":3, "b":"hello", "c":undefined, "d":null}.
?-
Where I'm at
For this example, consider Friends.Repo.
Table Person has fields :id, :name, :age
Example Ecto query:
iex> from(x in Friends.Person, where: {x.id, x.age} in [{1,10}, {2, 20}, {1, 30}], select: [:name])
When I run this, I get relevant results. Something like:
[
  %{name: "abc"},
  %{name: "xyz"}
]
But when I try to interpolate the query it throws the error
iex> list = [{1,10}, {2, 20}, {1, 30}]
iex> from(x in Friends.Person, where: {x.id, x.age} in ^list, select: [:name])
** (Ecto.Query.CompileError) Tuples can only be used in comparisons with literal tuples of the same size
I'm assuming I need to do some sort of type casting on the list variable. The docs mention it here: "When interpolating values, you may want to explicitly tell Ecto what is the expected type of the value being interpolated".
What I need
How do I achieve this for a complex type like this? How do I type cast for a "list of tuples, each of size 2"? Something like [{:integer, :integer}] doesn't seem to work.
If not the above, any alternatives for running a WHERE (col1, col2) in ((val1, val2), (val3, val4), ...) type of query using Ecto Query?
Unfortunately, the error means exactly what it says: only literal tuples are supported.
I was unable to come up with a more elegant and less fragile solution, but we always have a sledgehammer as a last resort: generate and execute the raw query.
list = [{1,10}, {2, 20}, {1, 30}]
#⇒ [{1, 10}, {2, 20}, {1, 30}]
values =
Enum.join(for({id, age} <- list, do: "(#{id}, #{age})"), ", ")
#⇒ "(1, 10), (2, 20), (1, 30)"
Repo.query(~s"""
SELECT name FROM persons
JOIN (VALUES #{values}) AS j(v_id, v_age)
ON id = v_id AND age = v_age
""")
The above should return an {:ok, %Postgrex.Result{}} tuple on success. (Note that this interpolates values straight into the SQL, so it is only safe when the list contains trusted integers.)
You can do it with a separate array for each field and unnest, which zips the arrays into rows with a column for each array:
ids = [1, 2, 1]
ages = [10, 20, 30]

from x in Friends.Person,
  inner_join: j in fragment("SELECT DISTINCT * FROM unnest(?::int[], ?::int[]) AS j(id, age)", ^ids, ^ages),
  on: x.id == j.id and x.age == j.age,
  select: [:name]
Another way of doing it is using JSON:
list = [%{id: 1, age: 10},
        %{id: 2, age: 20},
        %{id: 1, age: 30}]

from x in Friends.Person,
  inner_join: j in fragment("SELECT DISTINCT * FROM jsonb_to_recordset(?) AS j(id int, age int)", ^list),
  on: x.id == j.id and x.age == j.age,
  select: [:name]
Update: I just noticed the mysql tag. The above was written for Postgres, but maybe it can serve as a base for a MySQL version.
I'm trying to pass a clojurescript map to a webworker.
Before I pass it, it is of type PersistentArrayMap.
cljs.core.PersistentArrayMap {meta: null, cnt: 3, arr: Array(6), __hash: null, cljs$lang$protocol_mask$partition0$: 16647951…}
However, when it gets to the worker, it's just a plain old Object
Object {meta: null, cnt: 3, arr: Array(6), __hash: null, cljs$lang$protocol_mask$partition0$: 16647951…}
with the data seemingly intact. At this point I'd like to turn it back into a PersistentArrayMap so that I can work with it in cljs again.
Using clj->js and js->clj doesn't really work because it doesn't distinguish between keywords and strings, so some data is lost.
What's the best way to handle this situation? It's possible that I'm going about this in the entirely wrong way.
The built-in solution is to serialize the data to EDN and read it back. clj->js is inherently lossy and should be avoided.
You start by turning the object into a string and sending that over to the worker.
(pr-str {:a 1})
;; => "{:a 1}"
In the worker you read it back via cljs.reader/read-string
(cljs.reader/read-string "{:a 1}")
;; => {:a 1}
This will usually be good enough but the transit-cljs library will be slightly faster. Depending on the amount of data you plan on sending it may be worth the extra dependency.
Did you use keywordize-keys in the code?
(def json "{\"foo\": 1, \"bar\": 2, \"baz\": [1,2,3]}")
(def a (.parse js/JSON json))
;;=> #js {:foo 1, :bar 2, :baz #js [1 2 3]}
(js->clj a)
;;=> {"foo" 1, "bar" 2, "baz" [1 2 3]}
(js->clj a :keywordize-keys true)
;;=> {:foo 1, :bar 2, :baz [1 2 3]}
Full documentation is here.
I'm getting this string as a query result from my database:
"%Sample.Struct{list: [], total: \"0.00\", day: 6, id: \"8vfts6\"}"
Is there any way to convert this one back to map?
I'm getting this error decoding it with poison
** (Poison.SyntaxError) Unexpected token: %
(poison) lib/poison/parser.ex:56: Poison.Parser.parse!/2
(poison) lib/poison.ex:83: Poison.decode!/2
I can't fix the way data is added to the database; I must find a proper way to retrieve key/value data from it. (This is just a sample of a more complex result.)
As mentioned in the comments, you should not use Code.eval_string. But there is a way to safely convert this string to an Elixir struct, using the Code module:
iex(1)> encoded = "%Sample.Struct{list: [], total: \"0.00\", day: 6, id: \"8vfts6\"}"
"%Sample.Struct{list: [], total: \"0.00\", day: 6, id: \"8vfts6\"}"
First, get the AST from the string, but use pattern matching to ensure it is the struct you are looking for ({:__aliases__, _, [:Sample, :Struct]}). Any other (potentially malicious) code will fail this match:
iex(2)> {:ok, {:%, _, [{:__aliases__, _, [:Sample, :Struct]}, {:%{}, _, keymap}]} = ast} = Code.string_to_quoted(encoded)
{:ok,
{:%, [line: 1],
[{:__aliases__, [line: 1], [:Sample, :Struct]},
{:%{}, [line: 1], [list: [], total: "0.00", day: 6, id: "8vfts6"]}]}}
Here you have the full AST for your struct, and the keymap. You may now be tempted to use eval_quoted with the AST to get the struct you need:
iex(3)> {struct, _} = Code.eval_quoted(ast)
{%Sample.Struct{day: 6, id: "8vfts6", list: [], total: "0.00"}, []}
iex(4)> struct
%Sample.Struct{day: 6, id: "8vfts6", list: [], total: "0.00"}
But it is still not safe! Someone may put a side-effecting function into the string, like "%Sample.Struct{list: IO.puts \"Something\"}", which will be executed during evaluation. So you would still need to check the keymap first to make sure it contains only safe data.
Or you may just use keymap directly, without evaluating anything:
iex(5)> struct(Sample.Struct, keymap)
%Sample.Struct{day: 6, id: "8vfts6", list: [], total: "0.00"}