I have a column of type jsonb. The data in this column looks like this:
{
"random_number1":
{
"random_number2":
{
"Param1": 2,
"Param2": 0,
"Param3": 0,
"Param4": 6,
"Param5": 3
}
}
}
How do I write a select for this column if I want, e.g., all rows where "Param3" = 6?
I tried something like this:
SELECT * FROM table WHERE column->'Param3' #> '6'::jsonb;
It depends on your expectations.
Get the value of a specified path:
select *
from my_table
where my_col->'random_number1'->'random_number2'->>'Param3' = '6'
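If the two outer key names are known and fixed, the same check can also be written as a containment test, which a GIN index on the column can serve; a minimal sketch, assuming the random_number1/random_number2 keys from your sample:
select *
from my_table
-- @> matches even when the inner object has other keys (Param1, Param2, ...)
where my_col @> '{"random_number1": {"random_number2": {"Param3": 6}}}'::jsonb;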
Get the value of the key Param3 of any object on the third level:
select t.*
from my_table t,
jsonb_each(my_col) as value1(key1, value1),
jsonb_each(value1) as value2(key2, value2)
where jsonb_typeof(my_col) = 'object'
and jsonb_typeof(value1) = 'object'
and value2->>'Param3' = '6';
In the second case you may want to use distinct, as the query may yield duplicate rows.
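If you are on PostgreSQL 12 or later (an assumption about your version), a jsonpath predicate covers the "any object on the third level" case without the double jsonb_each expansion; a sketch:
select *
from my_table
-- the two wildcards stand for the unknown first- and second-level keys
where my_col @@ '$.*.*.Param3 == 6';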
In PostgreSQL I can't find a function in the docs that would let me combine n JSON entities while summing the values of any key/value pairs they share.
English isn't my main language, so I suspect I just don't know the right terms to search for.
In other words,
from a table with 2 columns
name      data
'didier'  {'vinyl': 2, 'cd': 3}
'Anne'    {'cd': 1, 'tape': 4}
'Pierre'  {'cd': 1, 'tape': 9, 'mp3': 2}
I want to produce the following result :
{'vinyl': 2, 'cd': 5, 'tape': 13, 'mp3': 2}
That is, with a "combine and sum" function.
Thanks in advance for any idea
Didier
Using the the_table CTE for illustration: first 'normalize' the data column, then sum per item type (k), and finally aggregate the sums back into a JSONB object.
with the_table("name", data) as
(
values
('didier', '{"vinyl": 2, "cd": 3}'::jsonb),
('Anne', '{"cd" : 1, "tape" : 4}'),
('Pierre', '{"cd" : 1, "tape": 9, "mp3":2}')
)
select jsonb_object_agg(k, v) from
(
select lat.k, sum((lat.v)::integer) v
from the_table
cross join lateral jsonb_each(data) as lat(k, v)
group by lat.k
) t;
-- {"cd": 5, "mp3": 2, "tape": 13, "vinyl": 2}
I have a table with a JSON field which contains an array of JSON objects. I need to select objects matching some condition.
Create and fill a table:
CREATE TABLE test (
id INT AUTO_INCREMENT PRIMARY KEY,
json_list JSON
);
INSERT INTO test(json_list) VALUES
("{""list"": [{""type"": ""color"", ""value"": ""red""}, {""type"": ""shape"", ""value"": ""oval""}, {""type"": ""color"", ""value"": ""green""}]}"),
("{""list"": [{""type"": ""shape"", ""value"": ""rect""}, {""type"": ""color"", ""value"": ""olive""}]}"),
("{""list"": [{""type"": ""color"", ""value"": ""red""}]}")
;
Now I need to select all objects with type = color from all rows.
I want to see this output:
id extracted_value
1 {"type": "color", "value": "red"}
1 {"type": "color", "value": "green"}
2 {"type": "color", "value": "olive"}
3 {"type": "color", "value": "red"}
It would be good to get this too:
id color
1 red
1 green
2 olive
3 red
I can't change the DB or JSON.
I'm using MySQL 5.7
My current solution
My solution is to cross join the table with an index set and then extract each element of the JSON array.
I don't like it: if an array can contain many objects, the index set has to cover every index up to the maximum, and the query is slow because evaluation of the JSON value doesn't stop when the end of the array is reached.
SELECT
test.id,
JSON_EXTRACT(test.json_list, CONCAT('$.list[', ind.ind, ']')),
ind.ind
FROM
test
CROSS JOIN
(SELECT 0 AS ind UNION ALL SELECT 1 AS ind UNION ALL SELECT 2 AS ind) ind
WHERE
JSON_LENGTH(json_list, "$.list") > ind.ind
AND JSON_EXTRACT(json_list, CONCAT('$.list[', ind.ind, '].type')) = "color";
It is easy to get only the values by changing the JSON_EXTRACT path. But is there a better way?
Edits
Added a check on the length of json_list.list. This filtered out 67% of the derived table rows in this case.
SELECT JSON_EXTRACT(json_list, '$.list[*]')
FROM `test`
where JSON_CONTAINS(json_list, '{"type":"color"}', '$.list')
So the current best solution is mine:
SELECT
test.id,
JSON_EXTRACT(test.json_list, CONCAT('$.list[', ind.ind, ']')),
ind.ind
FROM
test
CROSS JOIN
(SELECT 0 AS ind UNION ALL SELECT 1 AS ind UNION ALL SELECT 2 AS ind) ind
WHERE
JSON_LENGTH(json_list, "$.list") > ind.ind
AND JSON_EXTRACT(json_list, CONCAT('$.list[', ind.ind, '].type')) = "color";
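For what it's worth, if an upgrade beyond MySQL 5.7 ever becomes possible: MySQL 8.0's JSON_TABLE expands the array without a manufactured index set and stops at the end of each array. A sketch against the same test table (the column aliases jtype/val are my own):
SELECT t.id, jt.val AS color
FROM test t,
     JSON_TABLE(t.json_list, '$.list[*]'
       COLUMNS (
         jtype VARCHAR(20) PATH '$.type',
         val   VARCHAR(20) PATH '$.value'
       )
     ) AS jt
WHERE jt.jtype = 'color';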
Gleaning from several articles online, including one using a CTE and one without, I have been able to get the data I need, including a count of the results. However, I need this count to be in a specific place in the JSON object. Basically, I know how to get a rowset into a specific JSON structure with FOR JSON PATH, ROOT ('data'), etc.
However, I do not know how to get "recordsFiltered" into the root of my JSON output. This count is derived using COUNT(*) OVER () AS recordsFiltered.
Basically, I need my structure to look like the example below. How do I get "recordsFiltered" into the root $. of the JSON result without it repeating a billion times under the "data":[] section?
The best idea I can come up with is to create a temporary table and then use that to structure the JSON. But I want to do it the fancy SQL way, if one exists, using SELECT statements or CTEs where applicable.
{
"draw": 1,
"recordsTotal": 57,
"recordsFiltered": 57, // <<<--- need records filtered HERE
"data": [
{
"DT_RowId": "row_3",
"recordsFiltered": "69,420", // <<<---- NOT HERE!!!
"first_name": "Angelica",
"last_name": "Ramos",
"position": "System Architect",
"office": "London",
"start_date": "9th Oct 09",
"salary": "$2,875"
},
...
]
}
Here is the example SQL code:
SELECT
    COUNT(*) OVER () AS recordsFiltered,
    id,
    a,
    b
FROM t1
WHERE
    (@Search IS NULL OR
    id LIKE '%'+@Search+'%' OR
    a LIKE '%'+@Search+'%' OR
    b LIKE '%'+@Search+'%')
ORDER BY
    CASE
        WHEN @SortDir = 'ASC' THEN
            CASE @SortCol
                WHEN 0 THEN id
                WHEN 1 THEN a
                WHEN 2 THEN b
            END
    END asc,
    CASE
        WHEN @SortDir = 'desc' THEN
            CASE @SortCol
                WHEN 0 THEN id
                WHEN 1 THEN a
                WHEN 2 THEN b
            END
    END DESC
OFFSET @DisplayStart ROWS
FETCH NEXT @DisplayLength ROWS ONLY
for json path, root ('data')
Looks like you need to generate your table results first, then use two (or more?) sub-queries.
Here's a simplified example:
declare #tbl table (ID int identity, Col1 varchar(50), Col2 int)
insert into #tbl (Col1, Col2) values ('A',1),('B',2),('C',3)
select
(select count(1) from #tbl) as 'total',
(select * from #tbl for json path) as 'data'
for json path
produces:
[
{
"total": 3,
"data": [
{
"ID": 1,
"Col1": "A",
"Col2": 1
},
{
"ID": 2,
"Col1": "B",
"Col2": 2
},
{
"ID": 3,
"Col1": "C",
"Col2": 3
}
]
}
]
Without knowing the rest of your code/schema, here's my guess at your needed query:
select
    *
into
    #MyTable
from
    t1
WHERE
    (@Search IS NULL OR
    id LIKE '%'+@Search+'%' OR
    a LIKE '%'+@Search+'%' OR
    b LIKE '%'+@Search+'%')
select
    (select count(*) from #MyTable) as recordsFiltered,
    (
    select
        id,
        a,
        b
    from
        #MyTable
    ORDER BY
        CASE
            WHEN @SortDir = 'ASC' THEN
                CASE @SortCol
                    WHEN 0 THEN id
                    WHEN 1 THEN a
                    WHEN 2 THEN b
                END
        END asc,
        CASE
            WHEN @SortDir = 'desc' THEN
                CASE @SortCol
                    WHEN 0 THEN id
                    WHEN 1 THEN a
                    WHEN 2 THEN b
                END
        END DESC
    OFFSET @DisplayStart ROWS
    FETCH NEXT @DisplayLength ROWS ONLY
    for json path
    ) as [data]
for json path
Using a CTE:
with cte as (
select
    *
from
    t1
WHERE
    (@Search IS NULL OR
    id LIKE '%'+@Search+'%' OR
    a LIKE '%'+@Search+'%' OR
    b LIKE '%'+@Search+'%')
)
select
    (select count(*) from cte) as recordsFiltered,
    (
    select
        id,
        a,
        b
    from
        cte
    ORDER BY
        CASE
            WHEN @SortDir = 'ASC' THEN
                CASE @SortCol
                    WHEN 0 THEN id
                    WHEN 1 THEN a
                    WHEN 2 THEN b
                END
        END asc,
        CASE
            WHEN @SortDir = 'desc' THEN
                CASE @SortCol
                    WHEN 0 THEN id
                    WHEN 1 THEN a
                    WHEN 2 THEN b
                END
        END DESC
    OFFSET @DisplayStart ROWS
    FETCH NEXT @DisplayLength ROWS ONLY
    for json path
    ) as [data]
for json path
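One detail worth noting: the outermost FOR JSON PATH wraps the result in [ ... ], as the simplified example's output shows. If your consumer expects a bare object at the root, the WITHOUT_ARRAY_WRAPPER option removes the wrapper; reusing the simplified example:
declare @tbl table (ID int identity, Col1 varchar(50), Col2 int)
insert into @tbl (Col1, Col2) values ('A',1),('B',2),('C',3)
select
    (select count(1) from @tbl) as 'total',
    (select * from @tbl for json path) as 'data'
for json path, without_array_wrapper
-- produces {"total":3,"data":[...]} with no enclosing [ ]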
In my postgres database I have json that looks similar to this:
{
"myArray": [
{
"myValue": 1
},
{
"myValue": 2
},
{
"myValue": 3
}
]
}
Now I want to rename myValue to otherValue. I can't be sure about the length of the array! Preferably I would like to use something like jsonb_set with a wildcard as the array index, but that does not seem to be supported. So what is the nicest solution?
You have to decompose the whole jsonb object, modify the individual elements, and build the object back up.
This custom function will be helpful:
create or replace function jsonb_change_keys_in_array(arr jsonb, old_key text, new_key text)
returns jsonb language sql as $$
select jsonb_agg(case
    -- element has no old_key: keep it unchanged
    when value->old_key is null then value
    -- otherwise drop the old key and re-add its value under the new key
    else (value - old_key) || jsonb_build_object(new_key, value->old_key)
end)
from jsonb_array_elements(arr)
$$;
Use:
with my_table (id, data) as (
values(1,
'{
"myArray": [
{
"myValue": 1
},
{
"myValue": 2
},
{
"myValue": 3
}
]
}'::jsonb)
)
select
id,
jsonb_build_object(
'myArray',
jsonb_change_keys_in_array(data->'myArray', 'myValue', 'otherValue')
)
from my_table;
id | jsonb_build_object
----+------------------------------------------------------------------------
1 | {"myArray": [{"otherValue": 1}, {"otherValue": 2}, {"otherValue": 3}]}
(1 row)
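To persist the rename rather than just select it, a sketch using jsonb_set on the same my_table layout (jsonb_set leaves any other top-level keys intact):
update my_table
set data = jsonb_set(
    data,
    '{myArray}',
    jsonb_change_keys_in_array(data->'myArray', 'myValue', 'otherValue')
);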
Using JSON functions is definitely the most elegant approach, but you can get by with character replacement. Cast the json(b) to text, perform the replace, then cast it back to json(b). In this example I included the quotes and the colon so that the text replace targets the JSON keys without clashing with values.
CREATE TABLE mytable ( id INT, data JSONB );
INSERT INTO mytable VALUES (1, '{"myArray": [{"myValue": 1},{"myValue": 2},{"myValue": 3}]}');
INSERT INTO mytable VALUES (2, '{"myArray": [{"myValue": 4},{"myValue": 5},{"myValue": 6}]}');
SELECT * FROM mytable;
UPDATE mytable
SET data = REPLACE(data :: TEXT, '"myValue":', '"otherValue":') :: JSONB;
SELECT * FROM mytable;
http://sqlfiddle.com/#!17/1c28a/9/4
I'm storing a Java class A as A_DOC in a CLOB column in my database.
The structure of A is like:
{
  id: 123,
  var1: "abc",
  subvalues: [
    {
      id: 1,
      value: "a"
    },
    {
      id: 1,
      value: "b"
    }
    ...
  ]
}
I know I can do things like
select json_query(a.A_DOC, '$.subvalues.value') from table_name a;
and so on, but I'm looking for a way to count the number of elements in the subvalues array through an SQL query. Is this possible?
The size() item method used here exists in Oracle 18 only:
SELECT json_query('[19, 15, [16,2,3]]','$[*].size()' WITH ARRAY WRAPPER) FROM dual;
SELECT json_value('[19, 15, [16,2,3]]','$.size()') FROM dual;
You can use JSON_TABLE:
SELECT
id, var1, count(sub_id) subvalues
FROM
JSON_TABLE (
to_clob('{ id: 123, var1: "abc", subvalues : [{ id: 1, value: "a" }, { id: 2, value: "b" } ]}'),
'$'
COLUMNS (
id NUMBER PATH '$.id',
var1 VARCHAR2(100) PATH '$.var1',
NESTED PATH '$.subvalues[*]'
COLUMNS (
sub_id NUMBER PATH '$.id'
)
)
)
GROUP BY id, var1
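And a sketch of the same approach applied to the question's own table (table_name and A_DOC are the names from the question; the grouping columns come from the JSON itself):
SELECT jt.id, jt.var1, COUNT(jt.sub_id) AS subvalues_count
FROM table_name a,
     JSON_TABLE(a.A_DOC, '$'
       COLUMNS (
         id   NUMBER        PATH '$.id',
         var1 VARCHAR2(100) PATH '$.var1',
         NESTED PATH '$.subvalues[*]'
           COLUMNS (
             sub_id NUMBER PATH '$.id'
           )
       )
     ) jt
GROUP BY jt.id, jt.var1;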