Search PSQL JSONB array of tuples - json

I have a JSONB column "deps" like this
[
[{
"name": "A"
}, "823"],
[{
"name": "B"
}, "332"],
[{
"name": "B"
}, "311"]
]
I want to set a column "stats" to NULL for all rows where the JSON array in the column "deps" contains a tuple with "name" "B". In the example above the column "deps" has two such tuples.
Is it possible?
The dictionary {"name": "B"} always comes first in the tuple.
Would the same search in this JSON be faster:
[{
"id": {
"name": "A"
},
"value": "823"
}, {
"id": {
"name": "B"
},
"value": "332"
},
{
"id": {
"name": "B"
},
"value": "311"
}
]

You could use the @? PostgreSQL JSON operator together with a jsonpath expression to check whether any tuple in your JSONB blob has {"name": "B"} as its first value.
Here is an example with the JSONB blob from the stated question:
-- returns 'true' if any first tuple value contains an object with key "name" and value "B"
SELECT '[[{"name": "A"}, "823"], [{"name": "B"}, "332"], [{"name": "B"}, "311"]]'::JSONB @? '$[*][0].name ? (@ == "B")';
Now you can combine this with your UPDATE logic:
UPDATE my_table
SET stats = NULL
WHERE deps @? '$[*][0].name ? (@ == "B")';
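For the second structure from the question, the equivalent check only needs a different path. A minimal sketch (untested against your data; without an index I would not expect the layout alone to change the speed much):
-- same update against the alternative layout [{"id": {...}, "value": ...}, ...]
UPDATE my_table
SET stats = NULL
WHERE deps @? '$[*] ? (@.id.name == "B")';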

You can use the containment operator @> if the content has the structure of your first example:
update the_table
set stats = null
where deps @> '[[{"name": "B"}]]'
For the structure in the second example, you would need to use:
where deps @> '[{"id": {"name": "B"}}]'
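If these updates have to scan a large table, a GIN index on the column can support both the @> check above and the @? jsonpath check from the previous answer. A minimal sketch, reusing the table and column names from the examples (the index name is made up):
-- jsonb_path_ops GIN indexes support the @>, @? and @@ operators
create index idx_the_table_deps on the_table using gin (deps jsonb_path_ops);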

Related

jq merge array of keys with array of values

I have a JSON fragment with an array of keys and a separate array of values. Key 1 should match up with Value 1, and so on. I'm trying to reshape it with jq but am not having much luck.
Original JSON:
{
"result": {
"event.KeyValues{}.Key": [
"name",
"gender",
"employee",
"email"
],
"event.KeyValues{}.Value": [
"tyler",
"male",
"yes",
"tyler#nowhere.com"
],
"foo": "1",
"bar": "2"
}
}
Desired Output:
{
"name": "tyler",
"gender": "male",
"employee": "yes",
"email": "tyler#nowhere.com"
}
Use transpose to pair keys and values. Then you can make an object out of each pair and add them together to get the desired structure.
.result
| [."event.KeyValues{}.Key", ."event.KeyValues{}.Value"]
| transpose
| map({(.[0]): .[1]})
| add
Online demo

Will there be a performance overhead when using an index having Object_Pairs (in case of a covered query) - Couchbase

Suppose I create an index on OBJECT_PAIRS(values).val.data.
Will my index store the “values” field as an array (with elements name for the ID and val for the data, due to OBJECT_PAIRS)?
If so, and if my N1QL query is a covered query (fetching only OBJECT_PAIRS(values).val.data via the SELECT clause), will there still be a performance overhead? I am under the impression that in this case the index would already contain the “values” field as an array, so no OBJECT_PAIRS transformation would take place and the overhead would be avoided; only for a non-covered query would the actual document be accessed and the OBJECT_PAIRS transformation applied to the “values” field.
Couchbase document:
"values": {
"item_1": {
"data": [{
"name": "data_1",
"value": "A"
},
{
"name": "data_2",
"value": "XYZ"
}
]
},
"item_2": {
"data": [{
"name": "data_1",
"value": "123"
},
{
"name": "data_2",
"value": "A23"
}
]
}
}
}
UPDATE:
Suppose we plan to create an index on OBJECT_PAIRS(values)[*].val.data and OBJECT_PAIRS(values)[*].name:
Index: CREATE INDEX idx01 ON ent_comms_tracking(ARRAY { value.name, value.val.data} FOR value IN object_pairs(values) END)
Query: SELECT ARRAY { value.name, value.val.data} FOR value IN object_pairs(values) END as values_array FROM bucket
Can you please paste your full create index statement?
Creating an index on OBJECT_PAIRS(values).val.data indexes nothing.
You can check this by creating a primary index and then running the query below:
SELECT OBJECT_PAIRS(`values`).val FROM mybucket
Output is:
[
{}
]
OBJECT_PAIRS(`values`) returns an array of objects containing the attribute name/value pairs of the `values` object:
SELECT OBJECT_PAIRS(`values`) FROM mybucket
[
{
"$1": [
{
"name": "item_1",
"val": {
"data": [
{
"name": "data_1",
"value": "A"
},
{
"name": "data_2",
"value": "XYZ"
}
]
}
},
{
"name": "item_2",
"val": {
"data": [
{
"name": "data_1",
"value": "123"
},
{
"name": "data_2",
"value": "A23"
}
]
}
}
]
}
]
Since the result is an array, val cannot be referenced on it directly; you have to go through the array elements, as in the sketch below.
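A minimal sketch of iterating the array elements instead, using the same ARRAY ... FOR ... IN ... END construct that the index in the question's update already uses (bucket name mybucket as above; the alias all_data is made up):
SELECT ARRAY pair.val.data FOR pair IN OBJECT_PAIRS(`values`) END AS all_data FROM mybucket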

MYSQL JSON - extracting part of JSON field using WHERE on its sibling key value

I'm trying to extract a subset of a JSON document based on an adjacent key's value.
My JSON string:
[
{
"_metadata": {
"id": 1
},
"_children": [
"A",
"B",
"C"
]
},
{
"_metadata": {
"id": 2
},
"_children": [
"X",
"Y",
"Z"
]
}
]
Is it possible to return just ["X", "Y", "Z"] with a WHERE clause like $._metadata.id = "2"?
Thank you!
One option is:
SELECT
`der`.`_children`
FROM
JSON_TABLE(
@`json`,
'$[*]'
COLUMNS(
`id` INT PATH '$._metadata.id',
`_children` JSON PATH '$._children'
)
) `der`
WHERE
`der`.`id` = 2;
See dbfiddle.
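For reference, the query reads the document from the user variable @`json`. A small setup sketch so it can be run as-is (the document is the sample from the question):
-- sample document from the question, stored in a user variable
SET @`json` = '[{"_metadata": {"id": 1}, "_children": ["A", "B", "C"]}, {"_metadata": {"id": 2}, "_children": ["X", "Y", "Z"]}]';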

How to query a JSON column in a Postgres table to retrieve the values of a particular JSON key from all rows

We are using Postgres, and we have a table with a column of type JSON that holds documents like the sample used in the answer below.
I want all the values of the JSON key student_id from all the JSON documents.
Each row of that column contains such a document, and each document contains that key; I am trying to get all the values for that key from all the documents (all rows of that column).
You are probably looking for json_array_elements.
Find everything you need in the json functions page.
For your sample you could do something like this:
select json_array_elements(json_array_elements('{
"srs_student_information": {
"header": {
"name": "kkkk"
},
"beginning_segment": {
"age": 12
},
"loop_id_sls": [{
"student_level_details": {
"class": "12"
},
"parent_details": [{
"name": "assa"
}],
"student_identification": [{
"student_id_qual": "BM",
"student_id": "00547311"
}, {
"student_id_qual": "CN",
"student_id": "467931496024"
}, {
"student_id_qual": "CN",
"student_id": "467931496035"
}, {
"student_id_qual": "CN",
"student_id": "467931496046"
}]
}]
}
}'::json -> 'srs_student_information' -> 'loop_id_sls'
) -> 'student_identification'
) ->> 'student_id' AS student_id;
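To run this over the table rather than a literal, point the same expression at the JSON column. A sketch with hypothetical table and column names (student_docs, doc):
-- table and column names here are placeholders
select json_array_elements(json_array_elements(doc -> 'srs_student_information' -> 'loop_id_sls') -> 'student_identification') ->> 'student_id' as student_id
from student_docs;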

How do I give a where clause or condition to get a value from a JSON response using REST Assured

I need to extract the value of id where name == 'ABC'. How can I do that?
here is the example of response:
{
"Text": [
{
"id": "123",
"name": "ABC"
},
{
"id": "456",
"name": "XYZ"
},
{
"id": "789",
"name": "DEF"
}
]
}
So extracting the value of id where name == 'ABC' should return the id value 123.
I need to use Jayway REST Assured.
Use the GPath findAll feature:
when().
get("/restapi").
then().
body("text.findAll{ it.name == 'ABC' }.id", hasItem("123"));