I have JSON data being passed to a Dust template, and I want to compare multiple keys against the same value. For example, I have JSON like:
"data": {
"abc": "true",
"xyz": "true",
"uno": "true"
}
Is there a way, apart from using the deprecated "if" helper, to compare all of them at once?
I don't want to do
{?data.abc}
{?data.xyz}
{?data.uno}
<DO something when all of them are true>
{/data.uno}
{/data.xyz}
{/data.abc}
Is there a better way to express the above conditions?
P.S. for dust-helper version 1.5.0 or lower.
After talking to a few developers and researching a lot: there are no dustjs helpers or filters designed for such a use case in dust-helpers version 1.5.0 or lower.
Having said that, the following code seems to work pretty well:
{#select key=abc}
{#eq value="true"/}
{#eq key=xyz value="true"/}
{#eq key=uno value="true"/}
{#any}One of them is "true"{/any}
{#none}None of them is "true"{/none}
{/select}
P.S. I couldn't compare actual boolean values, but if I pass the boolean true as the string "true", it works perfectly.
XQuery 3.1 introduced several JSON functions. I was wondering if these functions were designed with advanced JSON editing in mind.
As far as I can tell, these functions only work for simple JSONs, like for instance...
let $json:={"a":1,"b":2} return map:put($json,"c",3)
{
"a": 1,
"b": 2,
"c": 3
}
and
let $json:={"a":1,"b":2,"c":3} return map:remove($json,"c")
{
"a": 1,
"b": 2
}
The moment the JSON gets a bit more complex:
let $json:={"a":{"x":1,"y":2},"b":2} return map:put($json?a,"z",3)
{
"x": 1,
"y": 2,
"z": 3
}
let $json:={"a":{"x":1,"y":2,"z":3},"b":2} return map:remove($json?a,"z")
{
"x": 1,
"y": 2
}
Obviously map:put() and map:remove() do exactly what you tell them to do: select the "a"-object and add or remove an entry.
However, when I want to edit a JSON document, I'd like to edit the entire document. And as far as I know that's not possible with the current implementation. Or is it? At least something like map:put($json,$json?a?z,3) or map:remove($json,$json?a?z) doesn't work.
For the removal of the "z"-attribute I did come up with a custom recursive function (which only works in this particular use-case)...
declare function local:remove($map,$key){
if ($map instance of object()) then
map:merge(
map:keys($map)[.!=$key] ! map:entry(.,local:remove($map(.),$key))
)
else
$map
};
let $json:={"a":{"x":1,"y":2,"z":3},"b":2} return
local:remove($json,"z")
...with the expected output...
{
"a": {
"x": 1,
"y": 2
},
"b": 2
}
...but I wasn't able to create a custom "add"-function.
I imagine advanced JSON editing can be done with some pretty advanced custom functions, but instead I would very much like to see something like map:put($json,$json?a?z,3) work, or otherwise an extra option which lets map:put() output the entire JSON document, like map:put($json?a?z,3, <extra-option> ).
Or... I'd have to settle with the notion that XQuery isn't the right choice of course.
You're correct that doing what I call a deep update of a map is quite difficult with XQuery 3.1 (and indeed XSLT 3.0) as currently defined, and it's not easy to define language constructs with clean semantics for it. I attempted to design such a construct as an XSLT extension instruction (see https://saxonica.com/documentation10/index.html#!extensions/instructions/deep-update), but I don't think it's anywhere near a perfect solution.
I wanted the same thing, so I wrote my own surrogate XSLT functions, tan:map-put() and tan:map-remove(), which do deep map replacement and removal:
https://github.com/Arithmeticus/XML-Pantry/tree/master/maps-and-arrays
These can be incorporated in an XSLT workflow via xsl:include or xsl:import, or in an XQuery one via fn:transform(). Some of the other functions may be useful, too. If these functions don't do exactly what you want, they might catalyze your own variation.
In XQuery 3.1, you are supposed to write a recursive function for such things. You could put all your functions in a module file, and then load the module when you need them...
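To illustrate that recursive pattern outside XQuery, here is a sketch in Ruby following the same shape as the question's local:remove (the helper names deep_remove and deep_put are illustrative, not from any library):

```ruby
# Recursive deep update over nested hashes, mirroring the question's
# local:remove. Non-hash values are returned unchanged.

# Remove every occurrence of +key+, at any depth.
def deep_remove(obj, key)
  return obj unless obj.is_a?(Hash)
  obj.reject { |k, _| k == key }
     .transform_values { |v| deep_remove(v, key) }
end

# Replace the value of every existing occurrence of +key+, at any depth.
def deep_put(obj, key, value)
  return obj unless obj.is_a?(Hash)
  obj.to_h { |k, v| [k, k == key ? value : deep_put(v, key, value)] }
end

json = { "a" => { "x" => 1, "y" => 2, "z" => 3 }, "b" => 2 }
deep_remove(json, "z") # => {"a"=>{"x"=>1, "y"=>2}, "b"=>2}
deep_put(json, "z", 4) # => {"a"=>{"x"=>1, "y"=>2, "z"=>4}, "b"=>2}
```

The same structure translates back to XQuery: recurse through map entries, rebuilding each map with map:merge().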
Besides that, Xidel has an object editing extension from before JSONiq and XPath 3.1. For a global mutable variable (without let), you can write:
$json:={"a":{"x":1,"y":2,"z":3},"b":2},
(($json).a).z := 4
$json:={"a":{"x":1,"y":2,"z":3},"b":2},
$json("a")("z") := 4
From a comment by @ChristianGrün:
If updates are required, we tend to convert JSON data to XML.
I'm a Xidel user, and last week (with a little help from @BeniBela) I had a look at whether this could be done with json-to-xml(), Xidel's own x:replace-nodes() and xml-to-json(). The answer is yes. Thanks for the hint.
For reference, and for anyone interested, here's one example.
To change key "c" in {"x":{"a":1,"b":2,"c":3},"y":2} to "d":
$ xidel -s '{"x":{"a":1,"b":2,"c":3},"y":2}' -e '
xml-to-json(
x:replace-nodes(
json-to-xml(
serialize($json,{"method":"json"})
)//fn:map[@key="x"]/fn:number[@key="c"]/@key,
attribute key {"d"}
)
)
'
{"x":{"a":1,"b":2,"d":3},"y":2}
Xidel online tester.
I am new to Ruby and have been stuck on this for a while now. I am getting a JSON response as shown below, and I aim to find the top-level key whose nested value matches one I specify.
For example, I am getting the response below:
{
"00:00:00:CC:00:CC": {
"family": "lladdr"
},
"10.0.0.20": {
"family": "inet",
"prefixlen": "24",
"netmask": "255.255.255.0",
"broadcast": "10.0.0.255",
"scope": "Global"
},
"ff00::f00:00ff:fff0:00f0": {
"family": "inet6",
"prefixlen": "64",
"scope": "Link",
"tags": []
}
}
I need to get the value of the parent where the key family has a value equal to inet. In this case, I just want 10.0.0.20 as output when family equals inet.
I went through multiple questions here, and Google did not help. I understand that I will need to parse the JSON using JSON.parse, and then use maybe find or select to get my answer, but I was not able to get it working.
I am not sure if there is any other way to do this, like you would in Bash using grep or awk. One hack might be to use something like foo[46..54], which would output the IP, but I believe that would be a bad way of solving this.
Use Hash#invert
Assuming that your Hash is already stored in response using JSON#parse, one way to solve the problem is to invert the Hash with the Hash#invert method. For example:
# Return an Array of IPv4, then pop the last/only String value.
response.invert.select { |h| h.values.include? 'inet' }.values.pop
#=> "10.0.0.20"
This is quick and simple, and it works with your provided data. However, there are some minor caveats.
Caveats
It assumes there is only one IPv4 address in the response Hash. If you have more than one key with inet as a value, don't use pop; deal with the resulting Array as you see fit. For example:
response.invert.select { |h| h.values.include? 'inet' }.values
#=> ["10.0.0.20"]
It assumes the key for each top-level JSON object is an IP address; we're not really validating anything.
It works for the JSON you have, but it doesn't solve for arbitrarily nested or varied data structures. If you have different kinds of inputs, consider it "some assembly required."
If there is no inet family, {}.values.pop will return nil. Make sure you plan for that in your application.
None of these are show-stoppers for your particular use case, but they are certainly worth keeping in mind. Your mileage may vary.
If you want the entry whose family is inet:
result = JSON.parse(response)
family = result.detect { |k,v| v['family'] == 'inet' }
family[0] # 10.0.0.20
Note that detect returns a two-element [key, value] array, so family[0] is the key.
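In case more than one address can have family inet, a sketch using select instead of detect (the response here is trimmed from the question's data):

```ruby
require 'json'

response = '{
  "00:00:00:CC:00:CC":       { "family": "lladdr" },
  "10.0.0.20":               { "family": "inet",  "prefixlen": "24" },
  "ff00::f00:00ff:fff0:00f0": { "family": "inet6", "prefixlen": "64" }
}'

# Keep every top-level entry whose "family" is "inet", then take the keys.
inet_addresses = JSON.parse(response)
                     .select { |_addr, attrs| attrs['family'] == 'inet' }
                     .keys
# inet_addresses == ["10.0.0.20"]
```

Unlike detect, this always returns an Array, so it degrades gracefully to [] when nothing matches.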
I am failing to query a string attribute whose value is numeric. Example:
//entity in orion
{
"id": "Test.2",
"type": "Test",
"nombre": "1"
}
//query
http://<some-ip>:<some-port>/v2/entities?type=Test&q=nombre==1
//response
[]
I changed the attribute to store a number instead, and then the query works well. Anyway, it should be possible to query numeric string values, shouldn't it?
EDIT
I found that this problem will be addressed in version 0.26.
As described in the issue cited by @nespapu, NGSIv2 will allow that possibility in the following way:
//query
http://<some-ip>:<some-port>/v2/entities?type=Test&q=nombre=='1'
However, the current Orion version at the time of writing this (0.24.0) doesn't implement such functionality yet.
EDIT: implemented since Orion 1.3.0
Referencing https://www.rfc-editor.org/rfc/rfc6902#appendix-A.14:
A.14. ~ Escape Ordering
An example target JSON document:
{
"/": 9,
"~1": 10
}
A JSON Patch document:
[
{"op": "test", "path": "/~01", "value": 10}
]
The resulting JSON document:
{
"/": 9,
"~1": 10
}
I'm writing an implementation of this RFC, and I'm stuck on this. What is this trying to achieve, and how is it supposed to work?
Assuming the answer to the first part is "Allowing json key names containing /s to be referenced," how would you do that?
The ~ character is a special character in JSON Pointer, hence we need to "encode" it as ~0. To quote jsonpatch.com:
If you need to refer to a key with ~ or / in its name, you must escape the characters with ~0 and ~1 respectively. For example, to get "baz" from { "foo/bar~": "baz" } you’d use the pointer /foo~1bar~0
So essentially,
[
{"op": "test", "path": "/~01", "value": 10}
]
when decoded yields
[
{"op": "test", "path": "/~1", "value": 10}
]
~0 expands to ~, so /~01 expands to /~1.
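A minimal sketch of that decoding in Ruby. Per RFC 6901, "~1" must be translated to "/" before "~0" is translated to "~"; the helper name decode_token is made up:

```ruby
# Decode one JSON Pointer reference token (RFC 6901, section 4):
# first turn "~1" into "/", then "~0" into "~". Doing it in the
# reverse order would wrongly expand "~01" into "/".
def decode_token(token)
  token.gsub('~1', '/').gsub('~0', '~')
end

decode_token('~01')        # => "~1"  (the key from the A.14 example)
decode_token('foo~1bar~0') # => "foo/bar~"
```

This ordering is exactly what the A.14 example exercises: /~01 must resolve to the "~1" key, not to "/".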
My guess is that the example exists to show you shouldn't "double expand": the decoded /~1 must not be expanded again to //, and thus must not match the document's "/" key (which is what would happen if you double-expanded). Nor should you expand literals in the source document: the "~1" key is literally that, and not equivalent to the expanded "/". But that's only my guess about the intention of this example; the real intention may be different.
The example is indeed really bad, in particular since it uses a "test" operation but doesn't specify the result of that operation. Other examples, like the next one at A.15, at least say their test operation must fail; A.14 doesn't tell you whether the operation should succeed or not. I assume they meant the operation should succeed, which implies /~01 should match the "~1" key. That's probably all there is to this example.
If I were to write an implementation, I'd probably not worry too much about this example and just look at what other implementations do, to check if I'm compatible with them. It's also a good idea to look for test suites of other projects; for example, I found one from http://jsonpatch.com/ at https://github.com/json-patch/json-patch-tests
I think the example provided in the RFC isn't exactly well thought out, especially since it tries to document a feature only through an example, which is vague at best, without providing any kind of commentary.
You might be interested in the interpretation presented in the following documents:
Documentation of Rackspace API
Documentation of OpenStack API
These seem awfully similar, and I think that's due to the nature of the relation between Rackspace and OpenStack:
OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA (...)
The OpenStack documentation actually provides some useful details, including the grammar it accepts and the rationale behind introducing these tokens, as opposed to the RFC itself.
Edit: it turns out JSON Pointer has its own RFC 6901, and the OpenStack and Rackspace specifications above are consistent with it.
A coworker and I are in a heated debate regarding the design of a REST service. For most of our API, GET calls to collections return something like this:
GET /resource
[
{ "id": 1, ... },
{ "id": 2, ... },
{ "id": 3, ... },
...
]
We now must implement a call to a collection of properties whose identifying attribute is "name" (not "id" as in the example above). Furthermore, there is a finite set of properties and the order in which they are sent will never matter. The spec I came up with looks like this:
GET /properties
[
{ "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
{ "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
{ "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
...
]
My coworker thinks it should be a map:
GET /properties
{
"{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
"{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
"{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
...
}
I cite consistency with the rest of the API as the reason to format the response collection my way, while he cites that this particular collection is finite and the order does not matter. My question is, which design best adheres to RESTful design and why?
IIRC how you return the properties of a resource does not matter in a RESTful approach.
http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
From an API client point of view, I would prefer your solution, since it explicitly states that the name of a property is XYZ.
Your coworker's solution implies it is the name, but how would I know for sure (without reading the API documentation)? Try not to assume anything regarding your consuming clients; just because you know what it means (and it probably is easy enough to guess what it means), it might not be so obvious for your clients.
On top of that, it could break consuming clients if you ever decide to change that value from a name back to an ID, which in this case has already happened in the past. All the clients would then need to change their code, whereas with your solution they would not, unless they need the newly added id (or some other property).
To me the approach would depend on how you need to use the data. Are the property names known beforehand by the consuming system, such that a map lookup could be used to directly access the record you want without needing to iterate over each item? Would there be a method such as...
GET /properties/{PROPERTY_NAME}
If you need to look up properties by name and that sort of method is NOT available, then I would agree with the map approach, otherwise, I would go with the array approach to provide consistent results when querying the resource for a full collection.
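Note that a client wanting map-style lookups can also index the array response itself. A sketch in Ruby, with made-up property data standing in for the {PROPERTY_*} placeholders:

```ruby
require 'json'

# Hypothetical array-style response body from GET /properties.
body = '[
  { "name": "timeout", "value": "30", "description": "Request timeout" },
  { "name": "retries", "value": "3",  "description": "Retry count" }
]'

# Build a name => property map from the array-style response.
properties = JSON.parse(body).to_h { |prop| [prop['name'], prop] }
properties['timeout']['value'] # => "30"
```

So keeping the array format on the wire costs the client only one line when it prefers keyed access.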
I think returning a map is fine as long as the result is not paginated or sorted server side.
If you need the result to be paginated and sorted on the server side, going for the list approach is a much safer bet, as not all clients might preserve the order of a map.
In fact, in JavaScript there is no built-in guarantee that maps will stay sorted (see also https://stackoverflow.com/a/5467142/817385).
The client would need to implement some logic to restore the sort order, which can become especially painful when server and client are using different collations for sorting.
Example
// server sent response sorted with german collation
var map = {
'ä':{'first':'first'},
'z':{'second':'second'}
}
// but we sort the keys with the default Unicode collation algorithm
Object.keys(map).sort().forEach(function(key){console.log(map[key])})
// Object {second: "second"}
// Object {first: "first"}
A bit late to the party, but for whoever stumbles upon this with similar struggles...
I would definitely agree that consistency is very important, and I would generally say that an array is the most appropriate way to represent a list. APIs should also be designed to be useful in general, preferably without optimizing for a specific use case. Sure, optimizing could make the use case you're facing today a bit easier to implement, but it will probably make you want to hit yourself when you're implementing a different one tomorrow. All that being said, for quite some applications the map-formed response would of course just be easier (and possibly faster) to work with.
Consider:
GET /properties
[
{ "name": "{PROPERTY_NAME}", "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
...
]
and
GET /properties/*
{
"{PROPERTY_NAME}": { "value": "{PROPERTY_VALUE}", "description": "{PROPERTY_DESCRIPTION}" },
...
}
So / gives you a list whereas /* gives you a map. You might read the * in /* as a wildcard for the identifier, so you're actually requesting the entities rather than the collection. The keys in the response map are simply the expansions of that wildcard.
This way you can maintain consistency across your API while the client can still enjoy the map-format response when preferred. Also you could probably implement both options with very little extra code on your server side.