I have clients with contracts stored in JSON format in a MariaDB database (10.3.21).
The parent keys are the IDs of contract types; cb is a checkbox flag for whether the contract is checked (1) or not (0); p and pt are price-related and not really relevant to my question.
The simplified JSON is structured as follows:
{
"1": {
"cb": "0",
"p": "1",
"pt": "m"
},
"2": {
"cb": "1",
"p": "395",
"pt": "y"
},
"3": {
"cb": "0",
"p": "",
"pt": "m"
},
"7": {
"cb": "1",
"p": "120",
"pt": "m"
}
}
I can query the database to get all the companies that have a specific contract type:
SELECT
`id`
, `company`
FROM
`db`.`clients`
WHERE
JSON_VALUE(`contracts`, '$.2.cb')=1
But I want to query the JSON so that I get an array of parent keys where the child key cb = 1.
For this JSON that would give ["2","7"] as the result.
I thought of something along the lines of the following, but it's not working.
I get an empty result set instead of what I hoped for.
SELECT
`id`
, `company`
, JSON_QUERY(`contracts`, '$') AS `contracttypes`
FROM
`db`.`clients`
WHERE
JSON_VALUE(`contracts`, '$.%.cb')=1
I can't find (yet) whether wildcards in paths are possible and, if so, what the syntax should be to get the results I want.
Alternative approach
I tried an alternative approach with better results, but I'm not there yet.
SELECT
`id`
, `company`
, json_search(`contracts`,'all','1') AS `contracttypes`
FROM
`db`.`clients`
Results: ["$.1.p", "$.2.cb", "$.7.cb"]
The keys of the cb items are what I want, but the first path, which matched because of the p item, is not wanted.
Trying json_search on a key together with its value, like the following, gives a null result:
json_search(`contracts`,'all','"cb":"1"')
/* or with curlies */
json_search(`contracts`,'all','{"cb":"1"}')
Try:
SELECT
REGEXP_REPLACE(
JSON_SEARCH(@`json`, 'all', '1', NULL, '$**.cb'),
'[$.]|[.cb]',
SPACE(0)
) `json_result`;
See dbfiddle.
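Applied to the table from the question, the same idea might look like the following sketch. It relies on the contract-type keys being purely numeric, so stripping the characters $, ., c and b from the search result leaves only the keys:
SELECT
`id`
, `company`
, REGEXP_REPLACE(
JSON_SEARCH(`contracts`, 'all', '1', NULL, '$**.cb'),
'[$.]|[.cb]',
SPACE(0)
) AS `contracttypes`
FROM
`db`.`clients`;
Note that JSON_SEARCH returns NULL when nothing matches and a single path (rather than an array) when exactly one cb equals "1", so the output shape can vary per row.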
Related
I have a table which has ID and JSON columns. ID is an auto-incrementing column. Here is my sample data.
Row 1
1 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "1", "Value": "$1.00", "Code": "A", "Desc": "asdf" },
{ "ID1": "2", "Value": "$1.00", "Code": "B", "Desc": "pqr" },
{ "ID1": "3", "Value": "$1.00", "Code": "C", "Desc": "xyz" }
]
}
Row 2
2 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "76", "Value": "$1.00", "Code": "X", "Desc": "asdf" },
{ "ID1": "25", "Value": "$1.00", "Code": "Y", "Desc": "pqr" },
{ "ID1": "52", "Value": "$1.00", "Code": "Z", "Desc": "lmno" },
{ "ID1": "52", "Value": "$1.00", "Code": "B", "Desc": "xyz" }
]
}
and it keeps going. The HData section can contain any number of items.
On this JSON I need to update Value to "$2.00" where "Code" is "B". I should be able to do this in two scenarios. My parameter inputs are @id = 2, @code = "B", @value = "$2.00". @id will sometimes be null. So:
If @id is null, the update statement should go through all records and update Value to "$2.00" for all items inside the HData section that have Code = "B".
If @id = 2, the update statement should update only the row whose ID is 2, for the items where Code = "B".
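For example, with @id = 2 the Code "B" item in row 2 should end up as:
{ "ID1": "52", "Value": "$2.00", "Code": "B", "Desc": "xyz" }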
Appreciate your help in advance.
Thanks
See DB Fiddle for an example.
declare @id bigint = 2
, @code nvarchar(8) = 'B'
, @value nvarchar(8) = '$2.00'
update a
set json = JSON_MODIFY(json, '$.HData[' + HData.[key] + '].Value', @value)
from so75416277 a
CROSS APPLY OPENJSON (json, '$.HData') HData
CROSS APPLY OPENJSON (HData.Value, '$')
WITH (
ID1 bigint
, Value nvarchar(8)
, Code nvarchar(8)
, [Desc] nvarchar(8)
) as HDataItem
WHERE id = @id
AND HDataItem.Code = @code
The update / set statement says we want to replace the value of json with a newly generated value; it functions exactly the same as it would in any other context, e.g. update a set json = 'something' from so75416277 a where a.column = 'some condition'.
The JSON_MODIFY does the manipulation of our json.
The first input is the original json field's value
The second is the path to the value to be updated.
The third is the new value
'$.HData[' + HData.[key] + '].Value' says we go from our JSON's root ($), find the HData field, filter the array of values for the one we're after (i.e. key here is the array item's index), then use the Value field of this item.
key is a special term; where we don't have a WITH block accompanying our OPENJSON statement we get back 3 items: key, value and type; key being the identifier, value being the content, and type saying what sort of content that is.
CROSS APPLY allows us to perform logic on a value from a single DB row to return potentially multiple rows; e.g. like a join, but against its own contents.
OPENJSON (json, '$.HData') HData says to extract the HData field from our json column, and return this with the table alias HData; as we've not included a WITH, this HData alias has 3 fields: key, value, and type, as mentioned above (this is the same key we used in our JSON_MODIFY).
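For illustration, here is a minimal standalone query (with a made-up inline JSON literal) showing those three default columns:
-- key is the array index, value is each item's JSON text, type is 5 (object)
SELECT [key], [value], [type]
FROM OPENJSON(N'{"HData":[{"Code":"A"},{"Code":"B"}]}', '$.HData');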
The next OPENJSON works on HData.Value; i.e. the contents of the array item under HData. Here we take the object from this array (i.e. that's the root from the current context; hence $), and use WITH to parse it into a specific structure; i.e. ID1, Value, Code, and Desc (brackets around Desc as it's a keyword). We give this the alias HDataItem.
Finally we filter for the bit of the data we're interested in; i.e. on id to get the row we want to update, then on HDataItem.Code so we only update those array items with code 'B'.
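As a sanity check, the same FROM / WHERE can be run as a plain SELECT first to see exactly which array items the update will touch; a sketch using the same table, aliases and variables as above:
SELECT a.id, HData.[key] AS HDataIndex, HDataItem.Code, HDataItem.Value
FROM so75416277 a
CROSS APPLY OPENJSON (json, '$.HData') HData
CROSS APPLY OPENJSON (HData.Value, '$')
WITH (
ID1 bigint
, Value nvarchar(8)
, Code nvarchar(8)
, [Desc] nvarchar(8)
) as HDataItem
WHERE id = @id
AND HDataItem.Code = @code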
Try the stored procedure below.
CREATE PROC usp_update_75416277
(
@id Int = null,
@code Varchar(15),
@value Varchar(15)
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQLStr Varchar(MAX)=''
;WITH CTE
AS
( SELECT ROW_NUMBER() OVER (PARTITION BY YourTable.Json ORDER BY (SELECT NULL)) RowNo, *
FROM YourTable
CROSS APPLY OPENJSON(YourTable.Json,'$.HData')
WITH (
ID1 Int '$.ID1',
Value Varchar(20) '$.Value',
Code Varchar(20) '$.Code',
[Desc] Varchar(20) '$.Desc'
) HData
WHERE (@id IS NULL OR ID = @id)
)
SELECT @SQLStr=@SQLStr+' UPDATE YourTable
SET [JSON]=JSON_MODIFY(YourTable.Json,
''$.HData['+CONVERT(VARCHAR(15),RowNo-1)+'].Value'',
'''+CONVERT(VARCHAR(MAX),@value)+''') '+
'WHERE ID ='+CONVERT(Varchar(15),CTE.ID) +' '
FROM CTE
WHERE Code=@code
AND (@id IS NULL OR ID = @id)
EXEC(@SQLStr)
END
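A quick sketch of calling it for the two scenarios from the question (assuming YourTable has been swapped for your actual table name when creating the procedure):
-- Scenario 1: @id is null, so every row's Code 'B' items get updated
EXEC usp_update_75416277 @id = NULL, @code = 'B', @value = '$2.00';
-- Scenario 2: only the row with ID = 2 is updated
EXEC usp_update_75416277 @id = 2, @code = 'B', @value = '$2.00';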
I have a JSON column "jobs" that looks like this:
[
{
"id": "1",
"done": "100",
"target": "100",
"startDate": "123123132",
"lastAction": "123123132",
"status": "0"
},
{
"id": "2",
"done": "10",
"target": "20",
"startDate": "2312321",
"lastAction": "2312321",
"status": "1"
}
]
I want to filter the array by object key values. For example: find all items that have target > done, status != 0, and lastAction is yesterday, to get a response like this:
[
{
"id": "1",
"done": "19",
"target": "100",
"startDate": "123123132",
"lastAction": "123123132",
"status": "0"
}
]
I know I can extract the data into a JSON_TABLE() to do the filtering, but then I don't get the original object back (unless I recreate it) and the solution is not dynamic.
Can this kind of array filtering really be done in MySQL?
SELECT JSON_PRETTY(JSON_EXTRACT(jobs.jobs, CONCAT('$[', j.rownum-1, ']'))) AS object
FROM jobs
CROSS JOIN JSON_TABLE(
jobs.jobs, '$[*]' COLUMNS(
rownum for ordinality,
done int path '$.done',
target int path '$.target',
status int path '$.status'
)
) as j
WHERE j.target > j.done AND j.status != 0;
You also mentioned a condition on lastAction, but the example values you gave are not valid dates, so I'll leave that enhancement to you. The example above demonstrates the technique.
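If lastAction turns out to hold Unix epoch seconds (an assumption, since the sample values are not valid dates), the extra condition might look roughly like this:
SELECT JSON_PRETTY(JSON_EXTRACT(jobs.jobs, CONCAT('$[', j.rownum-1, ']'))) AS object
FROM jobs
CROSS JOIN JSON_TABLE(
jobs.jobs, '$[*]' COLUMNS(
rownum for ordinality,
done int path '$.done',
target int path '$.target',
status int path '$.status',
lastAction bigint path '$.lastAction' -- assumed to be epoch seconds
)
) as j
WHERE j.target > j.done
AND j.status != 0
AND DATE(FROM_UNIXTIME(j.lastAction)) = CURDATE() - INTERVAL 1 DAY;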
Yes, it is possible using the JSON_EXTRACT and JSON_SEARCH functions.
Let's say your table is named tbl_Jobs and the jobs column is of type JSON.
SELECT * FROM tbl_Jobs
WHERE JSON_EXTRACT(jobs, "$[*].target") = JSON_EXTRACT(jobs, "$[*].done")
AND JSON_EXTRACT(jobs, "$[*].status") != 0
AND JSON_SEARCH(jobs, 'one', DATE_SUB(CURDATE(), INTERVAL 1 DAY), NULL, "$[*].lastAction") IS NOT NULL
I have a table, let's call it myTable, with the following structure:
ID | data
____________
uuid | jsonb
The data in the jsonb field is an array structured in the following way:
[
{
"valueA": "500",
"valueB": "ABC",
},
{
"valueA": "300",
"valueB": "CDE",
}
]
What I want to do is transform that data by converting valueB into an object, with a newKey that corresponds to the current value of "valueB".
This is the result I want:
[
{
"valueA": "500",
"valueB": {"newKey": "ABC"},
},
{
"valueA": "300",
"valueB": {"newKey": "CDE"},
}
]
I tried doing it with the following query:
UPDATE myTable
SET data = (
SELECT jsonb_agg (
jsonb_insert(elems, '{valueB, newKey}', elems->'valueB')
)
FROM jsonb_array_elements(data) elems
);
Unfortunately, it doesn't seem to do anything.
Another idea I have is to create a new field, initialize it as an object, then delete the old one and rename the new one, but it seems there must be a way to do what I want directly?
Solved using jsonb_build_object()
UPDATE myTable
SET data = (
SELECT jsonb_agg (
jsonb_insert(elems, '{valueB}', jsonb_build_object('newKey', elems->'valueB'))
)
FROM jsonb_array_elements(data) elems
);
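For comparison, a minimal sketch of the same rewrite using jsonb_set, which simply overwrites the existing valueB key instead of inserting (same table and column names as above):
UPDATE myTable
SET data = (
SELECT jsonb_agg (
jsonb_set(elems, '{valueB}', jsonb_build_object('newKey', elems->'valueB'))
)
FROM jsonb_array_elements(data) elems
);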
I have a table "Product" with two columns:
Id - Bigint primary key
data - Jsonb
Here is an example of the JSON:
{
"availability": [
{
"qty": 10,
"price": 42511,
"store": {
"name": "my_best_store",
"hours": null,
"title": {
"en": null
},
"coords": null,
"address": null,
I insert the JSON into the "data" column.
Here is the SQL that finds "my_best_store":
select *
from product
where to_tsvector(product.data) @@ to_tsquery('my_best_store')
Nice, it works fine.
But I need to find "my_best_store" only in the "availability" section.
I tried this, but the result is empty:
select *
from product
where to_tsvector(product.data) @@ to_tsquery('availability & my_best_store')
Assuming you want to search in the name attribute, you can do the following:
select p.*
from product p
where exists (select *
from jsonb_array_elements(p.data -> 'availability') as t(item)
where to_tsvector(t.item -> 'store' ->> 'name') @@ to_tsquery('my_best_store'))
With Postgres 12, you can simplify that to:
select p.*
from product p
where to_tsvector(jsonb_path_query_array(data, '$.availability[*].store.name')) @@ to_tsquery('my_best_store')
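To see what that path expression actually feeds into to_tsvector, you can run it on its own (Postgres 12+); against the sample row it returns a JSON array of the store names:
select jsonb_path_query_array(data, '$.availability[*].store.name') as names
from product;
-- e.g. ["my_best_store"]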
I have the following table in Postgres 9.4:
CREATE TABLE public.neuro (
nid int4 NOT NULL DEFAULT nextval('neuro_nid_seq'::regclass),
data jsonb,
CONSTRAINT neuro_pkey PRIMARY KEY (nid) NOT DEFERRABLE INITIALLY IMMEDIATE
);
with the records:
{"item2": {"1": "0", "uid": "0", "nota": "weqcqwe qwe wq", "fecha": "23-02-2015", "examen": "aesc", "puntaje": "0", "paciente": "103636426"}}
{"item2": {"1": "0", "uid": "0", "nota": "text", "fecha": "23-02-2015", "examen": "aesc", "puntaje": "0", "paciente": "103636426"}}
{"item3": {"1": "3", "2": "1", "3": "3", "uid": "0", "fecha": "23-02-2015", "examen": "fab", "puntaje": "7", "paciente": "103636426"}}
...
How do I select all records with examen = 'aesc'? I tried using the ->> and @> operators. How can I select some fields in the same way? I need to keep the initial "itemX" key in a JSON column.
Since your JSON objects seem to be nested exactly one level deep, you can unnest one level with jsonb_each() and search the value part, ignoring the dynamic key name:
SELECT n.*
FROM public.neuro n, jsonb_each(n.data) d
WHERE d.value->>'examen' = 'aesc'
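To also pull out individual fields while keeping the dynamic "itemX" name, the same jsonb_each() join exposes both the key and the nested object; a sketch using the columns from the sample data:
SELECT n.nid,
       d.key AS item,
       d.value->>'examen' AS examen,
       d.value->>'puntaje' AS puntaje
FROM public.neuro n, jsonb_each(n.data) d
WHERE d.value->>'examen' = 'aesc'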
Related:
How to filter rows on nested values in a json column?
How do I query using fields inside the new PostgreSQL JSON datatype?