Even though my actual data and shapes graphs are completely different, I want to understand the general idea of sh:order in SHACL, so I took the examples from the official documentation. Assume that my data graph looks like this:
@prefix ex: <http://example.org/ns#> .

ex:Bob
  a ex:Person ;
  ex:sibling ex:John .

ex:Alice
  a ex:Person ;
  ex:parent ex:Bob .

ex:John
  a ex:Person ;
  ex:sibling ex:Bob .

ex:Jane
  a ex:Person ;
  ex:parent ex:John .
And the SHACL rules:
@prefix ex: <http://example.org/ns#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:
  sh:declare [
    sh:prefix "ex" ;
    sh:namespace "http://example.org/ns#" ;
  ] .

ex:FamilyShape
  a sh:NodeShape ;
  sh:targetClass ex:Person ;
  sh:rule [
    a sh:SPARQLRule ;
    rdfs:label "Infer uncles, i.e. male siblings of the parents of $this" ;
    sh:prefixes ex: ;
    sh:order 1 ;
    sh:construct """
      CONSTRUCT {
        ?p a ex:Person .
        ?p ex:parent ?f .
        ?f ex:sibling ?s .
        ?p ex:uncle ?s .
      }
      WHERE {
        ?p a ex:Person .
        ?p ex:parent ?f .
        ?f ex:sibling ?s .
      }
    """ ;
  ] ;
  sh:rule [
    a sh:SPARQLRule ;
    rdfs:label "Infer cousins, i.e. the children of the uncles" ;
    sh:prefixes ex: ;
    sh:order 2 ;
    sh:construct """
      CONSTRUCT {
        ?p a ex:Person .
        ?p ex:parent ?f .
        ?f ex:sibling ?s .
        ?p ex:uncle ?uncle .
        ?p ex:cousin ?cousin .
      }
      WHERE {
        ?p a ex:Person .
        ?p ex:parent ?f .
        ?f ex:sibling ?s .
        ?p ex:uncle ?uncle .
        ?cousin ex:parent ?uncle .
      }
    """ ;
  ] .
What I want is, firstly, to create the ex:uncle predicate and, secondly, relying on this result, to create the ex:cousin predicate. Up to this point everything works perfectly, as long as I set sh:order correctly. The unexpected result happens when I change the logic of the rules.
For example, in the next shape the first rule filters out a node (my aim is to delete a specific node), and the second rule is applied afterwards.
@prefix ex: <http://example.org/ns#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:
  sh:declare [
    sh:prefix "ex" ;
    sh:namespace "http://example.org/ns#" ;
  ] .

ex:FamilyShape
  a sh:NodeShape ;
  sh:targetClass ex:Person ;
  sh:rule [
    a sh:SPARQLRule ;
    sh:prefixes ex: ;
    sh:order 1 ;
    sh:construct """
      CONSTRUCT {
        ?p a ex:Person .
        ?p ex:parent ?f .
      }
      WHERE {
        ?p a ex:Person .
        OPTIONAL {
          ?p ex:parent ?f .
          OPTIONAL {
            ?f ex:sibling ?s .
            FILTER NOT EXISTS { ?f ex:sibling ?s }
          }
        }
      }
    """ ;
  ] ;
  sh:rule [
    a sh:SPARQLRule ;
    sh:prefixes ex: ;
    sh:order 2 ;
    sh:construct """
      CONSTRUCT {
        ?p a ex:Person .
        ?p ex:parent ?f .
        ?p ex:uncle ?s .
      }
      WHERE {
        ?p a ex:Person .
        ?p ex:parent ?f .
        ?f ex:sibling ?s .
      }
    """ ;
  ] .
More precisely, I want to create an ex:uncle predicate only where an ex:sibling predicate is present. I expect the result below:
ex:Bob
  a ex:Person .

ex:Alice
  a ex:Person ;
  ex:parent ex:Bob .

ex:John
  a ex:Person .

ex:Jane
  a ex:Person ;
  ex:parent ex:John .
However, the actual output is different:
ex:Bob
  a ex:Person ;
  ex:sibling ex:John .

ex:Alice
  a ex:Person ;
  ex:parent ex:Bob ;
  ex:uncle ex:John .

ex:John
  a ex:Person ;
  ex:sibling ex:Bob .

ex:Jane
  a ex:Person ;
  ex:parent ex:John ;
  ex:uncle ex:Bob .
I am using TopBraid's SHACL library (version 1.3.2).
From this I assume that SHACL always feeds the initial data into each rule (step) and treats the rule's output only as an extension to it. What I expected is that each subsequent rule (ordered by sh:order) takes the result of the previous execution as its input.
I read the official documentation, but it does not say anything about this.
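For what it is worth, SHACL-AF rules are CONSTRUCT-style inferences: they only ever add triples to the data graph and cannot remove any, which is why the original ex:sibling statements survive the first rule. If you drive the TopBraid library programmatically and want every pass to see the output of the previous one, one option is a fixed-point loop around RuleUtil.executeRules. This is only a sketch against the org.topbraid.shacl 1.3.x API as I understand it; the file names are placeholders:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;
import org.topbraid.shacl.rules.RuleUtil;

public class ChainedRules {
    public static void main(String[] args) {
        Model data = RDFDataMgr.loadModel("data.ttl");     // placeholder file names
        Model shapes = RDFDataMgr.loadModel("shapes.ttl");

        // Fixed-point loop: re-run all rules, feed the inferences back into
        // the data graph, and stop once a pass produces no new triples.
        long before;
        do {
            before = data.size();
            Model inferred = RuleUtil.executeRules(data, shapes, null, null);
            data.add(inferred);
        } while (data.size() > before);

        data.write(System.out, "TURTLE");
    }
}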
Related
In my case there are SingleChoice (SC) and MultipleChoice (MC) questions. SC questions have a set of answers (as blank nodes) that must contain exactly one "points" and one "text" property. MC questions have a set of answers that must contain exactly one "points", one "text" and one "pointsNegative" property. Example as Turtle:
prefix ex ...

ex:SC a ex:SingleChoice ;
  ex:hasAnswers [
    a ex:Answer ;
    ex:text "Answer 1" ;
    ex:points 5 ;
  ], [ ...sameAsAbove ], ... .

ex:MC a ex:MultipleChoice ;
  ex:hasAnswers [
    a ex:Answer ;
    ex:text "Answer 1" ;
    ex:points 5 ;
    ex:pointsNegative 1 ;
  ], [ ...sameAsAbove ], ... .
I managed to write SHACL shapes that validate all instances of class ex:Answer. But I can't distinguish which question type (SC or MC) these instances belong to when validating them with these shapes:
ex:AnswerShape
  a sh:NodeShape ;
  sh:targetClass ex:Answer ;
  sh:property [
    a sh:PropertyShape ;
    sh:path ex:text ;
    sh:minCount 1 ;
    sh:maxCount 1 ;
    sh:datatype xsd:string ;
  ] .
E.g. if I add another PropertyShape for ex:pointsNegative, the shape will fail for all answers of an SC question (as these don't have ex:pointsNegative). I could omit the minCount restriction, but then answers to MC questions might have no ex:pointsNegative property.
How do I get different constraints applied to instances of class ex:Answer, depending on their links (whether they belong to an SC or an MC question)? Is this even possible with SHACL?
Solution 1 - Create dedicated answer types
An easy solution would be to use different classes for the answers, e.g. ex:SingleChoiceAnswer and ex:MultipleChoiceAnswer. That way you can create dedicated shapes for each answer type.
# answers
prefix ex ...

ex:SC a ex:SingleChoice ;
  ex:hasAnswers [
    a ex:SingleChoiceAnswer ;
    ex:text "Answer 1" ;
    ex:points 5 ;
  ], [ ...sameAsAbove ], ... .

ex:MC a ex:MultipleChoice ;
  ex:hasAnswers [
    a ex:MultipleChoiceAnswer ;
    ex:text "Answer 1" ;
    ex:points 5 ;
    ex:pointsNegative 1 ;
  ], [ ...sameAsAbove ], ... .
# shapes
ex:SingleChoiceAnswerShape
  a sh:NodeShape ;
  sh:targetClass ex:SingleChoiceAnswer ;
  sh:property [
    a sh:PropertyShape ;
    ...
  ] .

ex:MultipleChoiceAnswerShape
  a sh:NodeShape ;
  sh:targetClass ex:MultipleChoiceAnswer ;
  sh:property [
    a sh:PropertyShape ;
    ...
  ] .
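For illustration, the MC answer shape could be completed like this; ex:pointsNegative follows the data above, while sh:datatype xsd:integer is an assumption about the point values:

ex:MultipleChoiceAnswerShape
  a sh:NodeShape ;
  sh:targetClass ex:MultipleChoiceAnswer ;
  sh:property [
    a sh:PropertyShape ;
    sh:path ex:pointsNegative ;   # required exactly once per MC answer
    sh:minCount 1 ;
    sh:maxCount 1 ;
    sh:datatype xsd:integer ;     # assumed datatype
  ] .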
Solution 2 - SHACL Property Paths
Another solution, which works without changing the original schema, is to use property paths. That way you can target the choice types and declare dedicated property shapes for them.
# shapes
ex:SingleChoiceShape
  a sh:NodeShape ;
  sh:targetClass ex:SingleChoice ;
  sh:property [
    a sh:PropertyShape ;
    sh:path ( ex:hasAnswers ex:text ) ;
    ...
  ] .

ex:MultipleChoiceShape
  a sh:NodeShape ;
  sh:targetClass ex:MultipleChoice ;
  sh:property [
    a sh:PropertyShape ;
    sh:path ( ex:hasAnswers ex:pointsNegative ) ;
    ...
  ] .
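A filled-in sketch of the second shape, again assuming xsd:integer for the point values. Note that with a sequence path the cardinality counts apply to all values reachable from the question node, i.e. across all of its answers taken together, not per individual answer:

ex:MultipleChoiceShape
  a sh:NodeShape ;
  sh:targetClass ex:MultipleChoice ;
  sh:property [
    a sh:PropertyShape ;
    sh:path ( ex:hasAnswers ex:pointsNegative ) ;  # all pointsNegative values of all answers
    sh:minCount 1 ;
    sh:datatype xsd:integer ;                      # assumed datatype
  ] .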
When trying to access an SmbFile via a DFS URL, the jcifs library fails, but when I use the UNC path returned by dfsutil it works.
NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication(domain, user, pass);
SmbFile folder = new SmbFile(path, auth);
If path is set to
smb://mydomain.example.com/ourdfs/go/to/my/folder
the call fails with
Exception in thread "main" jcifs.smb.SmbException: The network name cannot be found.
But it is successful when invoked with the resolved name
dfsutil diag viewdfspath \\mydomain.example.com\ourdfs\go\to\my\folder
The DFS Path <\\mydomain.example.com\ourdfs\go\to\my\folder>
resolves to -> <\\someserver.example.com\sharename$\my\folder>
Then the following url works for path
smb://someserver.example.com/sharename$/my/folder
How do I set up jcifs to handle DFS properly, i.e. without having to translate URLs through dfsutil?
The solution is to set the WINS configuration. IPCONFIG /ALL will show the required information:
Connection-specific DNS Suffix . : MYDOMAIN.EXAMPLE.COM
Description . . . . . . . . . . . : Ethernet Connection
Physical Address. . . . . . . . . : DE-AD-BE-EF-F0-0D
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 10.10.1.42(Preferred)
Subnet Mask . . . . . . . . . . . : 255.0.0.0
Lease Obtained. . . . . . . . . . : December 3, 2018 09:03:04 AM
Lease Expires . . . . . . . . . . : December 9, 2018 09:03:04 AM
Default Gateway . . . . . . . . . : 10.10.1.1
DHCPv4 Class ID . . . . . . . . . : O-mobile
DHCP Server . . . . . . . . . . . : 10.10.11.13
DNS Servers . . . . . . . . . . . : 10.10.4.48
10.10.4.56
Primary WINS Server . . . . . . . : 10.10.1.59
Secondary WINS Server . . . . . . : 10.10.2.58
NetBIOS over Tcpip. . . . . . . . : Enabled
The configuration item then has to be set as follows:
jcifs.netbios.wins=10.10.1.59
or by setting it programmatically with jcifs.Config.setProperty(), as sketched below.
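A minimal sketch with the legacy jcifs 1.x API; the property has to be set before the first SMB class is used so that it is picked up during initialization:

import jcifs.Config;
import jcifs.smb.NtlmPasswordAuthentication;
import jcifs.smb.SmbFile;

// Configure the WINS server first, before any SmbFile is created.
Config.setProperty("jcifs.netbios.wins", "10.10.1.59");

NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication(domain, user, pass);
SmbFile folder = new SmbFile("smb://mydomain.example.com/ourdfs/go/to/my/folder", auth);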
I need a regular expression to separate the left and right parts of lines with this ". . . . . :" separator pattern, for example:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . : alumnus.co.in
Description . . . . . . . . . . . : Microsoft ISATAP Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
and store them in two variables. I have written this regular expression:
regexp {([[a-z]*[0-9]*.*[0-9]*[a-z]*]*" "):([[a-z]*[0-9]*.*[0-9]*[a-z]*]*)} 6*rag5hu. . :4ku5-1a543m match a b
but it is not working.
Any help will be appreciated.
I would do this:
set text {Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . : alumnus.co.in
Description . . . . . . . . . . . : Microsoft ISATAP Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes}
foreach line [split $text \n] {
    if {[regexp {^(.+?)(?: \.)+ : (.+)$} $line -> name value]} {
        puts "$name => $value"
    }
}
outputs
Media State => Media disconnected
Connection-specific DNS Suffix => alumnus.co.in
Description => Microsoft ISATAP Adapter
Physical Address. => 00-00-00-00-00-00-00-E0
DHCP Enabled. => No
Autoconfiguration Enabled => Yes
This uses a non-greedy quantifier (+?), and that makes every quantifier in the regex non-greedy. You then need the anchors so that the bits you want to capture contain all the text you need.
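To see the difference, compare a version where the first quantifier is greedy: in Tcl's RE engine that makes the whole expression prefer longer matches, so the name capture swallows most of the dot leaders. A quick illustrative run:

regexp {^(.+)(?: \.)+ : (.+)$} {Media State . . . . . . . . . . . : Media disconnected} -> name value
puts "$name => $value"
# Media State . . . . . . . . . . => Media disconnected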
Borrowing the definition of text:
package require textutil

foreach line [split $text \n] {
    lassign [::textutil::splitx [string trim $line] {\s*(?:\. )+:\s*}] a b
    puts "a: $a\nb: $b"
}
Giving the output
a: Media State
b: Media disconnected
a: Connection-specific DNS Suffix
b: alumnus.co.in
a: Description
b: Microsoft ISATAP Adapter
a: Physical Address
b: 00-00-00-00-00-00-00-E0
a: DHCP Enabled
b: No
a: Autoconfiguration Enabled
b: Yes
Documentation: foreach, lassign, package, puts, split, string, textutil (package)
I have the following RDF (Turtle) file; it was generated from a CSV file with a CSV2RDF conversion process written in Java. I need to publish this RDF on the web following Linked Data principles. How can I do that? Thanks.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix csvw: <http://www.w3.org/ns/csvw#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix schema: <http://schema.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<_:G> a csvw:TableGroup ;
  csvw:table <_:table0> .

<_:table0> a csvw:Table ;
  csvw:url <file:///D:\\Junhua\\10.5.2016 prototype\\tree-ops - Copy.csv> ;
  csvw:row <_:row0> .

<_:row0> a csvw:Row ;
  csvw:rownum "1"^^xsd:int ;
  csvw:url <file:///D:\\Junhua\\10.5.2016 prototype\\tree-ops - Copy.csv#row=2> ;
  csvw:describes <_:sDef0> .

<_:sDef0> <_:col[0]> "Ming" ;
  <_:col[1]> "Professor" ;
  <_:col[2]> "Celtis australis" ;
  <_:col[3]> "10k" ;
  <_:col[4]> "Software Engineering" .

<_:table0> csvw:row <_:row1> .

<_:row1> a csvw:Row ;
  csvw:rownum "2"^^xsd:int ;
  csvw:url <file:///D:\\Junhua\\10.5.2016 prototype\\tree-ops - Copy.csv#row=3> ;
  csvw:describes <_:sDef1> .

<_:sDef1> <_:col[0]> "Tang" ;
  <_:col[1]> "Lecturer" ;
  <_:col[2]> "Liquidambar styraciflua" ;
  <_:col[3]> "5k" ;
  <_:col[4]> "Database Management" .

<_:table0> csvw:row <_:row2> .

<_:row2> a csvw:Row ;
  csvw:rownum "3"^^xsd:int ;
  csvw:url <file:///D:\\Junhua\\10.5.2016 prototype\\tree-ops - Copy.csv#row=4> ;
  csvw:describes <_:sDef2> .

<_:sDef2> <_:col[0]> "Fang" ;
  <_:col[1]> "Assistant Professor" ;
  <_:col[2]> "Bangla text" ;
  <_:col[3]> "7k" ;
  <_:col[4]> "Semantic Management" .
You may want to read the W3C Best Practices for Publishing Linked Data document.
Off the top of my head, you should tweak your conversion process:

- Eliminate some of the blank nodes so that the data can be retrieved over the web; hash URIs would be a good choice.
- file:/// URIs are also no good, because they are meaningless for external consumers.
- You should include some links to other datasets like DBpedia or Wikidata; the links are what defines Linked Data.

Finally, for starters, the publishing itself could be as simple as serving your Turtle file as static content.
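As a sketch of the first two points, here is one row republished with hash URIs under a made-up namespace, plus an illustrative link into DBpedia (all IRIs and property names below are invented for the example):

@prefix ex: <http://data.example.org/dataset#> .

ex:row1 a ex:Record ;
  ex:name "Ming" ;
  ex:title "Professor" ;
  ex:species "Celtis australis" ;
  ex:salary "10k" ;
  ex:course "Software Engineering" ;
  ex:speciesRef <http://dbpedia.org/resource/Celtis_australis> .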
I have installed a Jena Fuseki server on OpenShift.
The --config services.ttl configuration file is as shown below.
What I observe is the following:
If I perform a SPARQL update from the Control Panel, I get Update Succeeded and some TDB files do change on the server (in ./app-root/data/DB/).
However, when I perform a SPARQL query such as SELECT ?s ?p ?o WHERE { ?s ?p ?o. }, again in the Control Panel, I get zero statements back. The same is true for this GET request:
http://<obfuscated>.rhcloud.com/ds/query?query=SELECT+%3Fs+%3Fp+%3Fo+WHERE+{+%3Fs+%3Fp+%3Fo.+}&output=text&stylesheet=
The log file on OpenShift contains these entries:
INFO [24] GET http://<obfuscated>.rhcloud.com/ds/query?query=SELECT+%3Fs+%3Fp+%3Fo+WHERE+{+%3Fs+%3Fp+%3Fo.+}+&output=text&stylesheet=
INFO [24] Query = SELECT ?s ?p ?o WHERE { ?s ?p ?o. }
INFO [24] exec/select
INFO [24] 200 OK (2 ms)
So it appears as if RDF statements can be written to TDB but not retrieved. If I try the same on a local installation of Fuseki, the problem does not manifest.
What else can I do to diagnose and resolve this problem with Fuseki on OpenShift?
UPDATE: Apparently the problem does not manifest if I INSERT statements into a named GRAPH (as opposed to the default graph).
@prefix : <#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
[] rdf:type fuseki:Server ;
  fuseki:services (
    <#service>
  ) .

[] ja:loadClass "com.hp.hpl.jena.tdb.TDB" .
tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
tdb:GraphTDB rdfs:subClassOf ja:Model .

<#service> a fuseki:Service ;
  fuseki:name "ds" ;
  fuseki:serviceQuery "sparql" ;
  fuseki:serviceQuery "query" ;
  fuseki:serviceUpdate "update" ;
  fuseki:serviceUpload "upload" ;
  fuseki:serviceReadWriteGraphStore "data" ;
  fuseki:dataset <#dataset> ;
  .

<#dataset> a tdb:DatasetTDB ;
  tdb:location "../data/DB" ;
  tdb:unionDefaultGraph true ;
  .
tdb:unionDefaultGraph true turned out to be the culprit. From the documentation:
An assembler can specify that the default graph for query is the union
of the named graphs. This is done by adding tdb:unionDefaultGraph.
Since this does not mention the default graph itself as part of the union, I take it that with this configuration the default graph seen by queries is nothing but the union of the named graphs. Hence updates that do not name a graph end up in the stored default graph and are invisible to queries.
The described problem disappears with the alternative configuration tdb:unionDefaultGraph false.
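For reference, the corrected dataset definition; leaving the property out entirely has the same effect, since false is the default:

<#dataset> a tdb:DatasetTDB ;
  tdb:location "../data/DB" ;
  tdb:unionDefaultGraph false ;
  .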