How to remove an element from a jsonb integer array in PostgreSQL

In Postgres, in a table called "photo" I have a jsonb column called "id_us" containing a JSON integer array, simply like this one: [1,2,3,4]
I would like to find the query to remove the element 3, for example.
The closest I could get is this:
SELECT jsonb_set(id_us, '',
       (SELECT jsonb_agg(val)
        FROM jsonb_array_elements(p.id_us) x(val)
        WHERE val <> jsonb '3')
      ) AS id_us
FROM photo p;
Any idea how to solve this?
Thank you!

You can use a subquery containing the JSONB_AGG() function while filtering out the element at index 3 (indexing starts from 1), such as:
WITH p AS
(
  SELECT JSONB_AGG(j) AS js
  FROM photo
  CROSS JOIN JSONB_ARRAY_ELEMENTS(id_us)
       WITH ORDINALITY arr(j, idx)
  WHERE idx != 3
)
UPDATE photo
SET id_us = js
FROM p;
Demo
Edit: if you need to remove by value rather than by index, as mentioned in the comment, just use the variable j cast to an integer:
WITH p AS
(
  SELECT JSONB_AGG(j) AS js
  FROM photo
  CROSS JOIN JSONB_ARRAY_ELEMENTS(id_us)
       WITH ORDINALITY arr(j, idx)
  WHERE j::INT != 18
)
UPDATE photo
SET id_us = js
FROM p;
Demo
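Note that both CTEs above aggregate the array elements of every photo row into one array, which is fine for the single-row demo; for a table with several rows, a correlated subquery (a sketch under the same assumptions, including the jsonb-to-int cast available since Postgres 11) keeps each row's array separate:
UPDATE photo p
SET id_us = COALESCE(
    (SELECT JSONB_AGG(j)
     FROM JSONB_ARRAY_ELEMENTS(p.id_us) arr(j)
     WHERE j::INT != 18),
    '[]'::jsonb);  -- JSONB_AGG over zero rows yields NULL, hence the COALESCE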
P.S.: with JSONB_SET(), the comma-separated slot of the removed element, quotes included, will still remain, as in the following:
WITH p AS
(
  SELECT ('{' || idx - 1 || '}')::TEXT[] AS idx
  FROM photo
  CROSS JOIN JSONB_ARRAY_ELEMENTS(id_us)
       WITH ORDINALITY arr(j, idx)
  WHERE j::INT = 18
)
UPDATE photo
SET id_us = JSONB_SET(id_us, idx, '""')
FROM p;
SELECT * FROM photo;
id_us
-----------------
[127, 52, "", 44]

I've run across a similar issue, and it stems from the - operator. This operator is overloaded to accept either text or integer, but it acts differently for each type: text removes by value, while an integer removes by index. But what if your value IS an integer? Then you're out of luck...
If possible, you can try changing your jsonb integer array to a jsonb string array (of integers), and then the - operator should work smoothly.
e.g.
'[1,2,3]'::jsonb - 2 = '[1,2]' -- removes index 2
'[1,2,3]'::jsonb - '2' = '[1,2,3]' -- removes values == '2' (but '2' != 2)
'["1","2","3"]'::jsonb - 2 = '["1","2"]' -- removes index 2
'["1","2","3"]'::jsonb - '2' = '["1","3"]' -- removes values == '2'

Query SQL database with JSON Value

Here is my JSON:
[{"Key":"schedulerItemType","Value":"schedule"},{"Key":"scheduleId","Value":"82"},{"Key":"scheduleEventId","Value":"-1"},{"Key":"scheduleTypeId","Value":"2"},{"Key":"scheduleName","Value":"Fixed Schedule"},{"Key":"moduleId","Value":"5"}]
I want to query the database by the FileMetadata column.
I've tried this:
SELECT * FROM FileSystemItems WHERE JSON_VALUE(FileMetadata, '$.Key') = 'scheduleId' and JSON_VALUE(FileMetadata, '$.Value') = '82'
but it doesn't work!
I had it working with just a dictionary key/value pair, but I needed to return the data differently, so now I am adding it to the JSON with explicit Key and Value properties.
What am I doing wrong?
With the sample data given, you'd have to supply an array index to query the element at index 1 (array indexes are 0-based), e.g.:
select *
from dbo.FileSystemItems
where json_value(FileMetadata, '$[1].Key') = 'scheduleId'
and json_value(FileMetadata, '$[1].Value') = '82'
If the scheduleId key can appear at arbitrary positions in the array then you can restructure the query to use OPENJSON instead, e.g.:
select *
from dbo.FileSystemItems
cross apply openjson(FileMetadata) with (
    [Key] nvarchar(50) N'$.Key',
    Value nvarchar(50) N'$.Value'
) j
where j.[Key] = N'scheduleId'
and j.Value = N'82'
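To sanity-check the shredding, you can run OPENJSON against the question's sample document directly (a sketch; OPENJSON needs SQL Server 2016+ and compatibility level 130 or higher):
declare @FileMetadata nvarchar(max) =
    N'[{"Key":"schedulerItemType","Value":"schedule"},{"Key":"scheduleId","Value":"82"}]';
select j.[Key], j.Value
from openjson(@FileMetadata) with (
    [Key] nvarchar(50) N'$.Key',
    Value nvarchar(50) N'$.Value'
) j
where j.[Key] = N'scheduleId'
  and j.Value = N'82';  -- returns one row: scheduleId, 82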

Retrieving json elements with a specific key name from a complex nested structure in postgres

I have a complex nested JSON structure in a Postgres json field. I want to list all element values with key '$type', no matter where in the nested structure they appear. The structure contains arrays nested within arrays, several levels deep. What is the SQL query I should use?
The table structure is:
create table if not exists documents
(
    id text not null
        constraint documents_pkey primary key,
    value json not null
)
This recursive function extracts all attributes from a complex jsonb object:
create or replace function jsonb_extract_all(jsonb_data jsonb, curr_path text[] default '{}')
returns table(path text[], value text)
language plpgsql as $$
begin
  if jsonb_typeof(jsonb_data) = 'object' then
    return query
      select (jsonb_extract_all(val, curr_path || key)).*
      from jsonb_each(jsonb_data) e(key, val);
  elsif jsonb_typeof(jsonb_data) = 'array' then
    return query
      select (jsonb_extract_all(val, curr_path || ord::text)).*
      from jsonb_array_elements(jsonb_data) with ordinality e(val, ord);
  else
    return query
      select curr_path, jsonb_data::text;
  end if;
end $$;
Example usage:
with my_table(data) as (
select
'{
"$type": "a",
"other": "x",
"nested_object": {"$type": "b"},
"array_1": [{"other": "y"}, {"$type": "c"}],
"array_2": [{"$type": "d"}, {"other": "z"}]
}'::jsonb
)
select f.*
from my_table
cross join jsonb_extract_all(data) f
where path[cardinality(path)] = '$type';
path | value
-----------------------+-------
{$type} | "a"
{array_1,2,$type} | "c"
{array_2,1,$type} | "d"
{nested_object,$type} | "b"
(4 rows)
You can use a recursive query. I have done most of the work here:
with recursive dived(jkey, jval, jtype) as (
select t.key, t.value,
json_typeof(t.value) jtype
from json_each('{"id":"243769","name":"domains","type":"TABLE","adata":{"sfield":"name"},"fields":{"id":{"ind":1,"enum":null,"refs":[null,null],"reqd":true,"type":"int4","constr":["p",null],"default":null},"name":{"ind":2,"enum":null,"refs":[null,null],"reqd":true,"type":"text","constr":["u",null],"default":null},"appid":{"ind":5,"enum":null,"refs":["apps","id"],"reqd":true,"type":"int4","constr":[null,null],"default":null},"userid":{"ind":8,"enum":null,"refs":["users","id"],"reqd":true,"type":"int8","constr":[null,null],"default":null},"createdat":{"ind":6,"enum":null,"refs":[null,null],"reqd":true,"type":"timestamptz","constr":[null,null],"default":null},"updatedat":{"ind":7,"enum":null,"refs":[null,null],"reqd":true,"type":"timestamptz","constr":[null,null],"default":null},"subdomainforward":{"ind":4,"enum":null,"refs":[null,null],"reqd":false,"type":"text","constr":[null,null],"default":null},"wilcardsubdomain":{"ind":3,"enum":null,"refs":[null,null],"reqd":false,"type":"bool","constr":[null,null],"default":null}},"schema":"web","relchecks":0,"relhasrules":false,"relhastriggers":true,"relrowsecurity":false,"relforcerowsecurity":false}'::json) t
union all
select t.key, t.value,
json_typeof(t.value) jtype
from dived, json_each(dived.jval) as t
where dived.jtype in ('object' /*, 'array'*/)
)
select * from dived where jkey = 'yourkey' limit 100;
You will simply need to add a CASE WHEN or some similar logic for arrays and json_array_elements.
Iterating through nested arrays with JSON is not too difficult with a recursive query, but I find it tedious.
Place the CASE WHEN in front of the json_each, as something like:
CASE WHEN dived.jtype = 'array' then
json_array_elements(dived.jval) t
It may be possible to handle the situation within the CASE WHEN; otherwise you may need a separate branch specifically for arrays and then a union with the object keys/values, as in the sketch below.
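For instance, here is a runnable sketch of that idea (hypothetical sample data, reusing the dived naming from above); the CASE expressions feed json_each an empty object and json_array_elements an empty array whenever the row is not of the matching type, so neither function is ever called on the wrong kind of value:
with recursive dived(jkey, jval, jtype) as (
    select t.key, t.value, json_typeof(t.value)
    from json_each('{"$type":"a","nested":{"$type":"b"},"arr":[{"$type":"c"}]}'::json) t
    union all
    select c.jkey, c.jval, json_typeof(c.jval)
    from dived
    cross join lateral (
        -- descend into objects ...
        select e.key, e.value
        from json_each(case when dived.jtype = 'object'
                            then dived.jval else '{}' end) e
        union all
        -- ... and into arrays, keeping the parent key
        select dived.jkey, a.value
        from json_array_elements(case when dived.jtype = 'array'
                                      then dived.jval else '[]' end) a
    ) c(jkey, jval)
)
select jkey, jval from dived where jkey = '$type';  -- yields "a", "b", "c"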
You also may find more info here:
Collect Recursive JSON Keys In Postgres
I hope this helps!

Finding duplicates in ABAP internal table via grouping

We all know these excellent ABAP statements which allow finding unique values in a one-liner:
it_unique = VALUE #( FOR GROUPS value OF <line> IN it_itab
GROUP BY <line>-field WITHOUT MEMBERS ( value ) ).
But what about extracting duplicates? Can one utilize the GROUP BY syntax for that task, or are table comprehensions more useful here?
The only (though not very elegant) way I found is:
LOOP AT lt_marc ASSIGNING FIELD-SYMBOL(<fs_marc>)
     GROUP BY ( matnr = <fs_marc>-matnr
                werks = <fs_marc>-werks )
     ASSIGNING FIELD-SYMBOL(<group>).
  members = VALUE #( FOR m IN GROUP <group> ( m ) ).
  IF lines( members ) > 1.
    "throw error
  ENDIF.
ENDLOOP.
Is there more beautiful way of finding duplicates by arbitrary key?
So, I'll just post it as an answer, since Florian and I weren't able to come up with anything better. If somebody is able to improve it, please do.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA duplicates TYPE tt_materials.
LOOP AT materials INTO DATA(material)
     GROUP BY ( id     = material-matnr
                status = material-pstat
                size   = GROUP SIZE )
     ASCENDING REFERENCE INTO DATA(group_ref).
  CHECK group_ref->*-size > 1.
  duplicates = VALUE tt_materials( BASE duplicates
                                   FOR <status> IN GROUP group_ref ( <status> ) ).
ENDLOOP.
Given
TYPES: BEGIN OF key_row_type,
         matnr TYPE matnr,
         werks TYPE werks_d,
       END OF key_row_type.
TYPES key_table_type TYPE
      STANDARD TABLE OF key_row_type
      WITH DEFAULT KEY.
TYPES: BEGIN OF group_row_type,
         matnr TYPE matnr,
         werks TYPE werks_d,
         size  TYPE i,
       END OF group_row_type.
TYPES group_table_type TYPE
      STANDARD TABLE OF group_row_type
      WITH DEFAULT KEY.
TYPES tt_materials TYPE STANDARD TABLE OF marc WITH DEFAULT KEY.
DATA(materials) = VALUE tt_materials(
    ( matnr = '23' werks = 'US' maabc = 'B' )
    ( matnr = '42' werks = 'DE' maabc = 'A' )
    ( matnr = '42' werks = 'DE' maabc = 'B' ) ).
When
DATA(duplicates) =
  VALUE key_table_type(
    FOR key IN VALUE group_table_type(
      FOR GROUPS group OF material IN materials
      GROUP BY ( matnr = material-matnr
                 werks = material-werks
                 size  = GROUP SIZE )
      WITHOUT MEMBERS ( group ) )
    WHERE ( size > 1 )
    ( matnr = key-matnr
      werks = key-werks ) ).
Then
cl_abap_unit_assert=>assert_equals(
  act = duplicates
  exp = VALUE key_table_type( ( matnr = '42' werks = 'DE' ) ) ).
Readability of this solution is so bad that you should only ever use it in a method with a revealing name like collect_duplicate_keys.
Also note that the statement's length increases with a growing number of key fields, as the GROUP SIZE addition requires listing the key fields one by one as a list of simple types.
What about the classics? I'm not sure whether they are deprecated, but my first thought is to create a clone of the table, run DELETE ADJACENT DUPLICATES on it, and then just compare the lines( ) of both...
I'll be eager to read new options.

MySQL update with sequential number yields NULL

I want to update the values of a column with the name of a related entry appended with a number for each row. I've seen a couple of other questions which give roughly the same answer, but when I try the below, I get NULL added instead.
SET @i = 0;
UPDATE matrix_data md
SET col_id_4 = concat((SELECT title from titles t WHERE t.entry_id = md.entry_id), (@i := @i + 1));
If I replace (@i := @i + 1) with a static value, then the update works OK.
The col_id_4 column is set to text. Does the above only work with numeric column types? And if so, how do I achieve what I want to do?
How about taking the primary key of the titles table instead of using the iterator, filling col_id_4 with a concatenation of titles.title and titles.entry_id:
UPDATE matrix_data
INNER JOIN titles ON matrix_data.entry_id = titles.entry_id
SET matrix_data.col_id_4 = CONCAT(titles.title, "_", titles.entry_id)
Or maybe it’s a type issue; casting the iterator to char(50) as we concatenate with the string should work. Tested in a rudimentary database, not #craftcms or #eecms specifically.
SET @i := 0;
UPDATE matrix_data
INNER JOIN titles ON matrix_data.entry_id = titles.entry_id
SET matrix_data.col_id_4 = CONCAT(titles.title, "_", CAST(@i := @i + 1 AS CHAR(50)))
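On MySQL 8.0+ you can get the same numbering without user variables at all, via ROW_NUMBER() in a derived table (a sketch against the same assumed schema):
UPDATE matrix_data md
JOIN (SELECT entry_id,
             ROW_NUMBER() OVER (ORDER BY entry_id) AS rn
      FROM matrix_data) seq ON seq.entry_id = md.entry_id
JOIN titles t ON t.entry_id = md.entry_id
SET md.col_id_4 = CONCAT(t.title, '_', seq.rn);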

SQL Server 2008: Error converting data type nvarchar to float

Presently troubleshooting a problem where running this SQL query:
UPDATE tblBenchmarkData
SET OriginalValue = DataValue, OriginalUnitID = DataUnitID,
DataValue = CAST(DataValue AS float) * 1.335
WHERE
FieldDataSetID = '6956beeb-a1e7-47f2-96db-0044746ad6d5'
AND ZEGCodeID IN
(SELECT ZEGCodeID FROM tblZEGCode
WHERE(ZEGCode = 'C004') OR
(LEFT(ZEGParentCode, 4) = 'C004'))
Results in the following error:
Msg 8114, Level 16, State 5, Line 1
Error converting data type nvarchar to float.
The really odd thing is, if I change the UPDATE to a SELECT to inspect them, the values retrieved are numerical:
SELECT DataValue
FROM tblBenchmarkData
WHERE FieldDataSetID = '6956beeb-a1e7-47f2-96db-0044746ad6d5'
AND ZEGCodeID IN
(SELECT ZEGCodeID
FROM tblZEGCode WHERE(ZEGCode = 'C004') OR
(LEFT(ZEGParentCode, 4) = 'C004'))
Here are the results:
DataValue
2285260
1205310
Would like to use TRY_PARSE or something like that; however, we are running on SQL Server 2008 rather than SQL Server 2012. Does anyone have any suggestions? TIA.
It would be helpful to see the schema definition of tblBenchmarkData, but you could try using ISNUMERIC in your query. Something like:
SET DataValue = CASE WHEN ISNUMERIC(DataValue) = 1
                     THEN CAST(DataValue AS float) * 1.335
                     ELSE 0 END
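Dropped into the question's UPDATE, that would look like this (a sketch; note that ISNUMERIC can also pass strings such as '$12' that CAST(... AS float) still rejects, as the next answer discusses):
UPDATE tblBenchmarkData
SET OriginalValue = DataValue,
    OriginalUnitID = DataUnitID,
    DataValue = CASE WHEN ISNUMERIC(DataValue) = 1
                     THEN CAST(DataValue AS float) * 1.335
                     ELSE 0 END
WHERE FieldDataSetID = '6956beeb-a1e7-47f2-96db-0044746ad6d5'
  AND ZEGCodeID IN (SELECT ZEGCodeID FROM tblZEGCode
                    WHERE ZEGCode = 'C004'
                       OR LEFT(ZEGParentCode, 4) = 'C004');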
The order of execution does not always match one's expectations.
A WHERE clause generally does not guarantee that the calculations in the select list will be applied only to the rows that match it: SQL Server may easily decide to do a bulk calculation and then filter out the unwanted rows.
That said, you can easily write try_parse yourself:
create function dbo.try_parse(@v nvarchar(30))
returns float
with schemabinding, returns null on null input
as
begin
    -- caveat: ISNUMERIC also accepts strings such as '$12' or '1,234'
    -- that CAST(... AS float) still rejects; see the 'E0' trick below
    if isnumeric(@v) = 1
        return cast(@v as float);
    return null;
end;
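Usage is then straightforward; a couple of illustrative inputs:
SELECT dbo.try_parse(N'2285260');  -- 2285260
SELECT dbo.try_parse(NULL);        -- NULL (returns null on null input)
SELECT dbo.try_parse(N'n/a');      -- NULL instead of a conversion error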
So starting with your update query that's giving an error (please forgive me for rewriting it for my own clarity):
UPDATE B
SET
OriginalValue = DataValue,
OriginalUnitID = DataUnitID,
DataValue = CAST(DataValue AS float) * 1.335
FROM
dbo.tblBenchmarkData B
INNER JOIN dbo.tblZEGCode Z
ON B.ZEGCodeID = Z.ZEGCodeID
WHERE
B.FieldDataSetID = '6956beeb-a1e7-47f2-96db-0044746ad6d5'
AND (
Z.ZEGCode = 'C004' OR
Z.ZEGParentCode LIKE 'C004%'
)
I think you'll find that a SELECT statement with exactly the same expressions will give the same error:
SELECT
OriginalValue,
DataValue NewOriginalValue,
OriginalUnitID,
DataUnitID NewOriginalUnitID,
DataValue,
CAST(DataValue AS float) * 1.335 NewDataValue
FROM
dbo.tblBenchmarkData B
INNER JOIN dbo.tblZEGCode Z
ON B.ZEGCodeID = Z.ZEGCodeID
WHERE
B.FieldDataSetID = '6956beeb-a1e7-47f2-96db-0044746ad6d5'
AND (
Z.ZEGCode = 'C004' OR
Z.ZEGParentCode LIKE 'C004%'
)
This should show you the rows that can't convert:
SELECT
B.*
FROM
dbo.tblBenchmarkData B
INNER JOIN dbo.tblZEGCode Z
ON B.ZEGCodeID = Z.ZEGCodeID
WHERE
B.FieldDataSetID = '6956beeb-a1e7-47f2-96db-0044746ad6d5'
AND (
Z.ZEGCode = 'C004' OR
Z.ZEGParentCode LIKE 'C004%'
)
AND IsNumeric(DataValue) = 0
-- AND IsNumeric(DataValue + 'E0') = 0 -- try this if the prior doesn't work
The trick in the last commented line is to tack on things to the string to force only valid numbers to be numeric. For example, if you wanted only integers, IsNumeric(DataValue + '.0E0') = 0 would show you those that aren't.
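For instance (illustrative values; worth verifying against your server's ISNUMERIC quirks):
SELECT ISNUMERIC('$12');           -- 1, yet CAST('$12' AS float) fails
SELECT ISNUMERIC('$12' + 'E0');    -- 0: the suffix rules out currency strings
SELECT ISNUMERIC('42' + '.0E0');   -- 1: '42.0E0' is a valid float literal
SELECT ISNUMERIC('4.2' + '.0E0');  -- 0: '4.2.0E0' is not, so only integers pass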