How to search a JSON db

Do you by any chance know how I should structure my search to access the 'a', 'b', 'c' in the following:
{"_default": {"1": {"Pc": "2429546524130460032", "Pf": "2429519276857919232", "Points": [{"P": "2428316170619108992", "a": "0.0690932185744956", "b": "2.6355498567408557", "c": "0.4369495787854096"}...
Where there are a total of 10 Points in several thousand objects (the "1" at the beginning is the first object). I am able to access "Pc" and "Pf", but if I try:
db.search(Point.Points['a'] == '0.0690932185744956')
I get an empty set.
Thoughts?
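In case it helps, here is a sketch of one way such a query is usually written, assuming this is TinyDB (the "_default" table and the Query-style search suggest it): any() matches documents where at least one element of the Points list satisfies the nested condition.

from tinydb import TinyDB, Query

db = TinyDB('db.json')  # 'db.json' is a placeholder path
Point = Query()

# Match documents where at least one element of the Points list
# has an "a" field equal to the given string.
results = db.search(Point.Points.any(Query().a == '0.0690932185744956'))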

Related

MySQL: Update specific values in JSON array of objects

I'm using MySQL 5.7.12, and have a JSON column with the following data:
[{"tpe": "I", "val": 1}, {"tpe": "C", "val": 2}, {"tpe": "A", "val": 3}]
I would like to UPDATE val from 2 to 20 WHERE tpe='C'.
Here is my attempt:
UPDATE user SET data = JSON_SET(data->"$[1]", '$.val', 20);
This does update the value, but it trims the other elements of the array and the column becomes just a single JSON object. Here is how it looks after the update:
{"tpe": "C", "val": 20}
How can I get this right?
Second question: is there a way to dynamically get the JSON object in the array so I don't have to hard-code "$[1]"? I tried to use JSON_SEARCH, without luck.
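For reference, a rough sketch of how this is usually handled (untested here; the table and column names are taken from the question): pass the whole column as the first argument to JSON_SET and put the array index inside the path, and use JSON_SEARCH/JSON_UNQUOTE to build that path dynamically.

-- Keep the whole array: modify the full document, addressing the element by path
UPDATE user SET data = JSON_SET(data, '$[1].val', 20);

-- Dynamic variant: JSON_SEARCH returns the path of the matching "tpe" (e.g. "$[1].tpe"),
-- which is unquoted and rewritten to point at the sibling "val" key.
UPDATE user
SET data = JSON_SET(
  data,
  REPLACE(JSON_UNQUOTE(JSON_SEARCH(data, 'one', 'C', NULL, '$[*].tpe')), '.tpe', '.val'),
  20
);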

Count elements in nested JSON with jq

I am trying to count all elements in a nested JSON document with jq.
Given the following JSON document:
{"a": true, "b": [1, 2], "c": {"a": {"aa":1, "bb": 2}, "b": "blue"}}
I want to calculate the result 6.
In order to do this, I tried the following:
echo '{"a": true, "b": [1, 2], "c": {"a": {"aa":1, "bb": 2}, "b": "blue"}}' \
| jq 'reduce (.. | if (type == "object" or type == "array")
then length else 0 end) as $counts
(1; . + $counts)'
# Actual output: 10
# Desired output: 6
However, this counts the encountered objects and arrays as well, and therefore yields 10 as opposed to the desired output of 6.
So, how can I only count the document's elements/leaf-nodes?
Thanks in advance for your help!
Edit: What would be an efficient approach to count empty arrays and objects as well?
You can use the scalars filter to find leaf nodes. Scalars are all "simple" JSON values, i.e. null, true, false, numbers and strings. Alternatively you can compare the type of each item and use length to determine if an object or array has children.
I've expanded your input data a little to distinguish a few more corner cases:
Input:
{
  "a": true,
  "b": [1, 2],
  "c": {
    "a": {
      "aa": 1,
      "bb": 2
    },
    "b": "blue"
  },
  "d": [],
  "e": [[], []],
  "f": {}
}
This has 15 JSON entities:
5 of them are arrays or objects with children.
4 of them are empty arrays or objects.
6 of them are scalars.
Depending on what you're trying to do, you might consider only scalars to be "leaf nodes", or you might consider both scalars and empty arrays and objects to be leaf nodes.
Here's a filter that counts scalars:
[..|scalars]|length
Output:
6
And here's a filter that counts all entities which have no children. It just checks for all the scalar types explicitly (there are only six possible types for a JSON value) and if it's not one of those it must be an array or object, where we can check how many children it has with length.
[
  ..|
  select(
    (type|IN("boolean","number","string","null")) or
    length==0
  )
]|
length
Output:
10
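For reference, assuming the expanded document above is saved as input.json (the filename is just a placeholder), the two filters can be run like this:

jq '[..|scalars]|length' input.json
# => 6
jq '[..|select((type|IN("boolean","number","string","null")) or length==0)]|length' input.json
# => 10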

How can I spread an object's properties in jq?

If I need to access the properties of an object, I'm currently accessing each property manually:
echo '{"a": {"a1":1, "a2": 2}, "b": 3}' | jq '{a1:.a.a1, a2: .a.a2,b}'
{
  "a1": 1,
  "a2": 2,
  "b": 3
}
I'd like to avoid specifying every property. Is there an equivalent to the Object spread operator in JS, something like jq '{...a, b}'?
You can add objects together to combine their contents. If a key exists in both the left and right objects, the value from the right object will remain.
echo '{"a": {"a1":1, "a2": 2}, "b": 3}' | jq '.a+{b}'
{
  "a1": 1,
  "a2": 2,
  "b": 3
}
If you want a completely generic solution:
[..|objects|with_entries(select(.value|type!="object"))]|add
Or if you want a depth-first approach, replace add by reverse|add.
The above of course comes with the understanding that add resolves conflicts in a lossy way. If you don’t want any lossiness, choose a different method for combining objects, or maybe don’t combine them at all.
Here is a solution that only examines the top-level values, without referring to any key by name:
with_entries(if .value|type=="object" then .value|to_entries[] else . end)
For the example, this produces:
{
  "a1": 1,
  "a2": 2,
  "b": 3
}
Note that even though this solution doesn't use add explicitly, it comes with a similar caveat about key collisions.
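For completeness, the full pipeline for the example input, mirroring the echo-based commands above:

echo '{"a": {"a1":1, "a2": 2}, "b": 3}' | jq 'with_entries(if .value|type=="object" then .value|to_entries[] else . end)'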

Can JSON schemas specify cross-field constraints?

Let's say I have a simple contact database with a "Last Called" history, stored in JSON format:
{
  "contacts": [
    {"id": 10001, "name": "Fred"},
    {"id": 10006, "name": "Helen"},
    {"id": 10009, "name": "John"},
    {"id": 10030, "name": "Tara"},
    {"id": 10101, "name": "Jason"}
  ],
  "history": [
    {"id": 10006, "time": 1513567986},
    {"id": 10001, "time": 1513567243},
    {"id": 10101, "time": 1513566511},
    {"id": 10030, "time": 1513565012},
    {"id": 10006, "time": 1513562390}
  ]
}
I'd like to thoroughly validate this. Specifying the overall structure (e.g. make sure that each contact has both a numeric id and a string name) and value limitations (e.g. ids are numbers between 10000 and 99999) is straightforward. However, I'd like to implement more sophisticated checks, e.g.:
Each contacts object has a unique id value
Each history object's id value matches a contact's id value
Can these constraints be specified using a JSON schema? (Note that the solution shouldn't depend on the two id fields being relatively close to each other in the structure; in a complete application they might each be a couple of levels deep in separate value trees.)
AFAIK, you can't do this with JSON Schema. You can check that the items of an array are unique (uniqueItems), but not that a single property such as id is unique across those items, and there is no way to cross-reference values from another part of the document.
You may have already thought of this, but one option would be to loop through the objects and validate these constraints by hand. For example, you can add the ids to a Set and compare the Set's size with the array's length.
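A minimal sketch of that hand-rolled check in JavaScript, assuming the document has already been parsed into a variable named data shaped like the example above (the variable name is illustrative):

// Collect contact ids into a Set; duplicates collapse, so comparing
// the Set's size against the array length detects non-unique ids.
const contactIds = new Set(data.contacts.map(c => c.id));
const idsAreUnique = contactIds.size === data.contacts.length;

// Every history entry must reference an id that exists in contacts.
const historyIsValid = data.history.every(h => contactIds.has(h.id));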

Replacing JSON file with CSV for d3js

http://bl.ocks.org/robschmuecker/7880033
I'm new to JavaScript and d3. The above example is a dendrogram. I can create my own. However, if I wanted to use it for something like employee data, it seems like it would be a pain to always have to edit the JSON, unless I'm missing some easier trick.
A CSV in Excel, which I've used in other charts, seems like it would work well. Is it possible to replace the flare.json with a CSV containing the data? If so, how?
No, it's not possible directly. To know why, you'll have to understand the way the function d3.csv creates an array. Suppose you have this CSV:
foo,bar,baz
21,33,5
1,14,42
When parsed, it will generate a single array of objects, without nested arrays or nested objects. The first row defines the key names, and the other rows the values. This is the array generated for that CSV:
[
  {"foo": 21, "bar": 33, "baz": 5},
  {"foo": 1, "bar": 14, "baz": 42}
]
Or, if you don't change the type, with the numbers as strings:
[
  {"foo": "21", "bar": "33", "baz": "5"},
  {"foo": "1", "bar": "14", "baz": "42"}
]
You will not get anywhere close to what you want, which is an array of objects containing arrays containing objects containing arrays, and so on.
You can modify this array later to create the nested children you need (look at #torresomar comment below), but it's way easier to simply edit your JSON.
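As a rough sketch of that post-processing step, assuming d3 v4+ and a CSV with name and parent columns (the column names, the file name, and the use of d3.stratify are assumptions, not something the linked example uses): d3.stratify can rebuild the hierarchy that the dendrogram layout expects from the flat rows.

d3.csv("employees.csv", function(error, rows) {
  if (error) throw error;
  // rows is the flat array described above; stratify rebuilds the tree
  // by matching each row's "parent" value against another row's "name".
  var root = d3.stratify()
      .id(function(d) { return d.name; })
      .parentId(function(d) { return d.parent; })
      (rows);
  // root is a hierarchy node that can be handed to d3.tree() or d3.cluster().
});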