How to abstract `reg-sub` in re-frame - ClojureScript

In my code, there is duplication like this:
(reg-sub
 :hello-john
 (fn [db [_ say-hi]]
   (str (get-in db [say-hi]) "hello John")))

(reg-sub
 :hello-jack
 (fn [db [_ say-hi]]
   (str (get-in db [say-hi]) "hello Jack")))
This pattern is quite tedious, so I tried to factor it out with the following code in sub.cljs:
(for [[x y] [[:hello-john "hello John"]
             [:hello-jack "hello Jack"]]]
  (reg-sub
   x
   (fn [db [_ say-hi]]
     (str (get-in db [say-hi]) y)))
But it doesn't work as expected. Thanks for reading this, I appreciate any help :)

Why not
(reg-sub
 :say-hello
 (fn [db [_ person say-hi]]
   (str (get-in db [say-hi]) "hello " person)))
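In a view, that single subscription is then parameterized at the call site. A minimal usage sketch (re-frame.core/subscribe is assumed to be required, and :greeting stands in for whatever db key you pass as say-hi):
;; hypothetical usage in a view
@(subscribe [:say-hello "John" :greeting])   ;; result ends with "hello John"
@(subscribe [:say-hello "Jack" :greeting])   ;; result ends with "hello Jack"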

Your second code block is missing a closing parenthesis.
Another thing is that for is lazy - it won't be evaluated by itself. Replace it with doseq.
Finally, a minor thing - don't use (get-in db [say-hi]), instead use (get db say-hi). And if say-hi in your case is always a keyword, you can use (say-hi db).
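Putting those points together, a minimal sketch of the registration loop (assuming re-frame.core/reg-sub is required, and using get instead of get-in as suggested):
;; doseq is eager, so every pair is actually registered at load time
(doseq [[id greeting] [[:hello-john "hello John"]
                       [:hello-jack "hello Jack"]]]
  (reg-sub
   id
   (fn [db [_ say-hi]]
     (str (get db say-hi) greeting))))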

Related

Sorting Json struct from vibe.d

I faced a problem with JSON keys being sorted in the wrong order. I use MongoDB and I need to send a create-user command built from form data.
vibe-d JSON:
Json a2 = Json([
    "createUser": Json(req.form["user"]),
    "pwd": Json(req.form["password"]),
    "roles": Json([
        Json([
            "role": Json(req.form["access"]),
            "db": Json("3dstore")
        ])
    ])
]);
logInfo(a2.toString());
Output:
[main(Wbp2) INF] {"roles":[{"role":"readWrite","db":"3dstore"}],"createUser":"111","pwd":"1"}
std.json:
JSONValue a2 = JSONValue([
    "createUser": JSONValue(req.form["user"]),
    "pwd": JSONValue(req.form["password"]),
    "roles": JSONValue([
        JSONValue([
            "role": JSONValue(req.form["access"]),
            "db": JSONValue("3dstore")
        ])
    ])
]);
logInfo(a2.toString());
Output:
[main(vVOX) INF] {"createUser":"111","pwd":"1","roles":[{"db":"3dstore","role":"readWrite"}]}
Therefore I get an error in mongo output:
"errmsg" : "no such command: 'roles'"
Any ideas?

Convert this JSON to a tree and find the path to parent

data = {
    "saturn": ["planet", "american_car", "car"],
    "american_car": ["car", "gas_driven_automobile"],
    "planet": ["large_object", "celestial_body"],
    "large_object": [],
    "gas_driven_automobile": ["gas_powered_road_vehicle", "car"],
    "car": ["vehicle", "motor_vehicle"],
    "vehicle": [],
    "motor_vehicle": [],
    "gas_powered_road_vehicle": [],
    "celestial_body": []
};
I need to write an algorithm where, if I give the input "saturn", I get all the possible paths from saturn up to the different parents. For example:
saturn -> planet -> large_object
saturn -> american_car -> car -> vehicle
saturn -> american_car -> car -> motor_vehicle
saturn -> american_car -> gas_driven_automobile -> gas_powered_road_vehicle
saturn -> american_car -> gas_driven_automobile -> car -> vehicle
and all the other possible paths.
I was thinking of somehow converting this to a tree and then using a library to calculate the path from the child to the parent.
I'm working on writing an algorithm, but I can't figure out how to start on converting this to a tree.
Using jq, you can simply define a recursive function:
def parents($key):
  if has($key)
  then if .[$key] == [] then [] else .[$key][] as $k | [$k] + parents($k) end
  else []
  end;
To use it to produce the "->"-style output, invoke jq with the -r command-line option, and call the above function like so:
["saturn"] + parents("saturn")
| join(" -> ")
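For example, with the object saved in a file (data.json is just an assumed filename here), the whole thing can be run as:
# -r prints raw strings instead of JSON-quoted ones
jq -r '
  def parents($key):
    if has($key)
    then if .[$key] == [] then [] else .[$key][] as $k | [$k] + parents($k) end
    else []
    end;
  ["saturn"] + parents("saturn") | join(" -> ")
' data.json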
More economically:
def lineages($key):
  [$key] + (lineages(.[$key][]) // []);
lineages("saturn") | join(" -> ")

Converting a CSV to RDF where one column is a set of values

I want to convert a CSV to RDF.
One of the columns of that CSV is, in fact, a set of values joined with a separator character (in my case, the space character).
Here is a sample CSV (with header):
col1,col2,col3
"A","B C D","John"
"M","X Y Z","Jack"
I would like the conversion process to create RDF similar to this:
:A :aProperty :B, :C, :D; :anotherProperty "John".
:M :aProperty :X, :Y, :Z; :anotherProperty "Jack".
I usually use Tarql for CSV conversion. It is fine for iterating per row, but it has no feature to sub-iterate "inside" a column value.
SPARQL-Generate may help (with iter:regex and sub-generate, as far as I understand), but I cannot find any example that matches my use case.
PS: maybe RML can help too, but I have no prior knowledge of this technology.
You can accomplish this with RML and FnO.
First, we need to access each row, which can be accomplished with RML. RML allows you to iterate over each row of the CSV file (ql:CSV) with a LogicalSource. Specifying the iterator (rml:iterator) is not needed, since the default iterator in RML is a row-based iterator. This results in the following RDF (Turtle):
<#LogicalSource>
    a rml:LogicalSource;
    rml:source "data.csv";
    rml:referenceFormulation ql:CSV.
The actual triples are generated with the help of a TriplesMap, which uses the LogicalSource to retrieve the data from each CSV row:
<#MyTriplesMap>
    a rr:TriplesMap;
    rml:logicalSource <#LogicalSource>;
    rr:subjectMap [
        rr:template "http://example.org/{col1}";
    ];
    rr:predicateObjectMap [
        rr:predicate ex:aProperty;
        rr:objectMap <#FunctionMap>;
    ];
    rr:predicateObjectMap [
        rr:predicate ex:anotherProperty;
        rr:objectMap [
            rml:reference "col3";
        ];
    ].
The col3 CSV column will be used to create the following triple:
<http://example.org/A> <http://example.org/ns#anotherProperty> "John".
However, the string in the CSV column col2 needs to be split first. This can be achieved with FnO (the Function Ontology) and an RML processor which supports the execution of FnO functions. One such RML processor is the RML Mapper, but other processors can be used too. The following RDF is needed to invoke an FnO function which splits the input string on the space separator, with our LogicalSource as input data:
<#FunctionMap>
    fnml:functionValue [
        rml:logicalSource <#LogicalSource>; # our LogicalSource
        rr:predicateObjectMap [
            rr:predicate fno:executes;
            rr:objectMap [
                rr:constant grel:string_split # function to use
            ];
        ];
        rr:predicateObjectMap [
            rr:predicate grel:valueParameter;
            rr:objectMap [
                rml:reference "col2" # input string
            ];
        ];
        rr:predicateObjectMap [
            rr:predicate grel:p_string_sep;
            rr:objectMap [
                rr:constant " "; # space separator
            ];
        ];
    ].
The FnO functions supported by the RML Mapper are available here:
https://rml.io/docs/rmlmapper/default-functions/
You can find the function name and its parameters on that page.
Mapping rules
@base <http://example.org> .
@prefix rml: <http://semweb.mmlab.be/ns/rml#> .
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ql: <http://semweb.mmlab.be/ns/ql#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix fnml: <http://semweb.mmlab.be/ns/fnml#> .
@prefix fno: <https://w3id.org/function/ontology#> .
@prefix grel: <http://users.ugent.be/~bjdmeest/function/grel.ttl#> .
@prefix ex: <http://example.org/ns#> .

<#LogicalSource>
    a rml:LogicalSource;
    rml:source "data.csv";
    rml:referenceFormulation ql:CSV.

<#MyTriplesMap>
    a rr:TriplesMap;
    rml:logicalSource <#LogicalSource>;
    rr:subjectMap [
        rr:template "http://example.org/{col1}";
    ];
    rr:predicateObjectMap [
        rr:predicate ex:aProperty;
        rr:objectMap <#FunctionMap>;
    ];
    rr:predicateObjectMap [
        rr:predicate ex:anotherProperty;
        rr:objectMap [
            rml:reference "col3";
        ];
    ].

<#FunctionMap>
    fnml:functionValue [
        rml:logicalSource <#LogicalSource>;
        rr:predicateObjectMap [
            rr:predicate fno:executes;
            rr:objectMap [
                rr:constant grel:string_split
            ];
        ];
        rr:predicateObjectMap [
            rr:predicate grel:valueParameter;
            rr:objectMap [
                rml:reference "col2"
            ];
        ];
        rr:predicateObjectMap [
            rr:predicate grel:p_string_sep;
            rr:objectMap [
                rr:constant " ";
            ];
        ];
    ].
Output
<http://example.org/A> <http://example.org/ns#aProperty> "B".
<http://example.org/A> <http://example.org/ns#aProperty> "C".
<http://example.org/A> <http://example.org/ns#aProperty> "D".
<http://example.org/A> <http://example.org/ns#anotherProperty> "John".
<http://example.org/M> <http://example.org/ns#aProperty> "X".
<http://example.org/M> <http://example.org/ns#aProperty> "Y".
<http://example.org/M> <http://example.org/ns#aProperty> "Z".
<http://example.org/M> <http://example.org/ns#anotherProperty> "Jack".
Note: I contribute to RML and its technologies.
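If you want to run the mapping rules above yourself, the Java-based RML Mapper can execute them from the command line. A typical invocation looks roughly like the following (the exact flags can differ between releases, so check the mapper's --help):
# assumes the mapping above is saved as mapping.ttl next to data.csv
java -jar rmlmapper.jar -m mapping.ttl -o output.nq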
You can test the following SPARQL-Generate query on the playground at https://ci.mines-stetienne.fr/sparql-generate/playground.html and check that it behaves as expected:
BASE <http://data.example.com/>
PREFIX : <http://example.com/>
PREFIX iter: <http://w3id.org/sparql-generate/iter/>
PREFIX fun: <http://w3id.org/sparql-generate/fn/>

GENERATE {
  <{?col1}> :anotherProperty ?col3.
  GENERATE {
    <{?col1}> :aProperty <{ ?value }> ;
  }
  ITERATOR iter:Split( ?col2 , " " ) AS ?value .
}
ITERATOR iter:CSVStream("http://example.com/file.csv", 20, "*") AS ?col1 ?col2 ?col3
The Tabular Data Model and related specs target this use case, although as I recall, we didn't provide for combinations of valueUrl and separator to have sub-columns generate multiple URIs.
The metadata to describe this would be something like the following:
{
  "@context": "http://www.w3.org/ns/csvw",
  "url": "test.csv",
  "tableSchema": {
    "columns": [{
      "name": "col1",
      "titles": "col1",
      "datatype": "string",
      "required": true
    }, {
      "name": "col2",
      "titles": "col2",
      "datatype": "string",
      "separator": " "
    }, {
      "name": "col3",
      "titles": "col3",
      "datatype": "string",
      "propertyUrl": "http://example.com/anotherProperty",
      "valueUrl": "http://example.com/{col3}"
    }],
    "primaryKey": "col1",
    "aboutUrl": "http://example.com/{col1}"
  }
}

How to convert JSON to node.js object

I have a JSON tree like this that is being posted to my node.js (we'll call it message for the sake of this question):
var message = ["layer1": [
"color": "Blue",
"size": "small",
"layer2": [
"item1": "TEST"
]
]
]
How can I make it so I can access individual nodes and values in node.js, something like this:
var sample1 = message.layer1
var sample2 = message.layer1.layer2.item1
if I were to console.log(sample1) it would look like this:
["color": "Blue",
"size": "small",
"layer2": [
"item1": "TEST"
]
]
and console.log(sample2) would look like this:
"TEST"
Is this possible?
The syntax of your message variable is not valid JavaScript (your message seems to be an array, but it has key: value pairs, which are not allowed in JavaScript array literals).
You have to replace "[" with "{" and "]" with "}" in your message to get a JavaScript object. Then your sample1 and sample2 variables should work.
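Concretely, the corrected literal from the question would look like this (same keys, only the brackets change); and if the data actually arrives as a JSON string in the request body, JSON.parse gives you the same kind of object:
// the question's structure written as a valid object literal
var message = {
  layer1: {
    color: "Blue",
    size: "small",
    layer2: {
      item1: "TEST"
    }
  }
};
var sample1 = message.layer1;              // { color: 'Blue', size: 'small', layer2: { item1: 'TEST' } }
var sample2 = message.layer1.layer2.item1; // "TEST"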

AppleScript functions

Is there a difference between the on and to keywords when declaring functions in AppleScript? It seems like they're interchangeable from what I've seen. Is that the case, or would one be more useful than the other in some situations?
on and to are equivalent. See https://developer.apple.com/library/mac/documentation/AppleScript/Conceptual/AppleScriptLangGuide/reference/ASLR_handlers.html:
( on | to ) handlerName ¬
[ [ of | in ] directParamName ] ¬
[ ASLabel userParamName ]... ¬
[ given userLabel:userParamName [, userLabel:userParamName ]...]
[ statement ]...
end [ handlerName ]
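For instance, these two handlers behave identically; only the declaring keyword differs (the handler names are just examples):
-- declared with "on"
on greet(someName)
    return "Hello, " & someName
end greet

-- declared with "to"; otherwise identical
to sayGoodbye(someName)
    return "Goodbye, " & someName
end sayGoodbye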