Update the value of JSON elements in PostgreSQL - json

I have a table with the following structure:
create table instances(
    id bigint,
    createdate timestamp,
    createdby bigint,
    lastmodifieddate timestamp,
    lastmodifiedby bigint,
    context text
)
The field context contains JSON data, e.g.:
insert into instances values
(1, '2020-06-01 22:10:04', 20112,'2020-06-01 22:10:04',20112,
'{"id":1,"details":[{"binduserid":90182}]}')
I need to replace every value 90182 of the JSON element binduserid with 1000619 using a Postgres query.
I have achieved this by using the REPLACE function:
update instances
set context = replace(context, '"binduserid":90182','"binduserid":1000619')
Is there any other way to do this using Postgres JSON functions?

First of all, consider storing the column as JSON or JSONB: those types are designed to hold such data properly and to support working with it productively, with no conversions needed between types (much like holding a DATE value as DATE rather than as a STRING).
For this case I treat the context column as being of JSONB data type.
You can use the JSONB_SET() function to get the desired result. Its first argument (the target) can be put into array format through the JSONB_BUILD_ARRAY() function, so that indexes (such as 0 in '{0,details}' for this case) can be used to manipulate the document easily, as in the DML statement below:
UPDATE instances
SET context =
JSONB_SET(JSONB_BUILD_ARRAY(context), '{0,details}','[{"binduserid":1000619}]')
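Alternatively, here is a sketch that targets the nested key directly with JSONB_SET(), without wrapping the document in an array. It assumes context is a text column and that details always holds exactly one element:
UPDATE instances
SET context = jsonb_set(context::jsonb, '{details,0,binduserid}', '1000619')::text
WHERE context::jsonb #>> '{details,0,binduserid}' = '90182';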

Related

How do I update data inside a stringified JSON object in SQL?

So I have three databases: an Oracle one, a SQL Server one, and a Postgres one. I have a table that has two columns, name and value, both text. The value is a stringified JSON object. I need to update a nested value.
This is what I currently have:
name: 'MobilePlatform',
value:
'{
"iosSupported":true,
"androidSupported":false,
}'
I want to add {"enableTwoFactorAuth": false} into it.
In PostgreSQL you should be able to do this:
UPDATE mytable
SET value = jsonb_set(value::jsonb, '{enableTwoFactorAuth}', 'false')
WHERE name = 'MobilePlatform';
In Postgres, the plain concatenation operator || for jsonb could do it:
UPDATE mytable
SET value = value::jsonb || '{"enableTwoFactorAuth":false}'::jsonb
WHERE name = 'MobilePlatform';
If a top-level key "enableTwoFactorAuth" already exists, it is replaced. So it's an "upsert" really.
Or use jsonb_set() for manipulating nested values.
The cast back to text works implicitly as an assignment cast. (The result is in standard format; insignificant whitespace is effectively removed.)
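For example, a sketch with jsonb_set(), assuming a hypothetical nested settings object that already exists in the document and should hold the flag:
UPDATE mytable
SET value = jsonb_set(value::jsonb, '{settings,enableTwoFactorAuth}', 'false')
WHERE name = 'MobilePlatform';
-- note: jsonb_set() does not create intermediate objects; the '{settings}' level must already exist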
If the content is valid JSON, the storage type should be json to begin with. In Postgres, jsonb would be preferable as it's easier to manipulate, but that's not directly portable to the other two RDBMS mentioned.
(Or, possibly, a normalized design without JSON altogether.)
For Oracle 21:
update mytable
set json_col = json_transform(
json_col,
INSERT '$.value.enableTwoFactorAuth' = 'false'
)
where json_exists(json_col, '$?(@.name == "MobilePlatform")')
;
With json_col being a JSON column, or a VARCHAR2/CLOB column with an IS JSON constraint.
(But it must be JSON if you want a multivalue index on json_value.name:
create multivalue index ix_json_col_name on mytable t ( t.json_col.name.string() );
)
Two of the databases you are using support the JSON data type, so it doesn't make sense to keep the value as a stringified JSON object in a text column.
Oracle: https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/json-in-oracle-database.html
PostgreSQL: https://www.postgresql.org/docs/current/datatype-json.html
Apart from these, MS SQL Server also provides methods to work with JSON data.
MS SQL Server: https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver16
Using a JSON type column in any of the above databases would enable you to use their JSON functions to perform the tasks that you are looking for.
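For instance, in SQL Server a sketch with JSON_MODIFY() might look like this (assuming value holds valid JSON text):
UPDATE mytable
SET value = JSON_MODIFY(value, '$.enableTwoFactorAuth', CAST(0 AS bit))
WHERE name = 'MobilePlatform';
-- the CAST to bit makes JSON_MODIFY write a JSON false rather than the string "false"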
If you have to use text only, you can use REPLACE to add the key/value pair at the end of your JSON:
update dataTable set value = REPLACE(value, '}', ',"enableTwoFactorAuth": false}') where name = 'MobilePlatform'
Here dataTable is the name of the table. (Note that REPLACE rewrites every '}' in the string, so this is only safe for a flat, single-object value like the one above.)
A cleaner and less risky way would be to connect to the db from the application and use JSON methods such as JSON.parse in JavaScript or json.loads in Python. This gives you a JSON object (a dictionary in the case of Python) to work on. You can look for similar methods in other languages as well.
But I would suggest using JSON columns instead of text to store JSON values wherever possible.

Read Json Value from a SQL Server table

I have a JSON value stored in a SQL Server table as ntext:
JSON (column: json_val):
[{"prime":{"image":{"id":"123","logo":"","productId":"4000","enable":true},"accountid":"78","productId":"16","parentProductId":"","aprx":"4.599"}}]
select JSON_VALUE(cast(json_val as varchar(8000)), '$.prime.aprx') as px
from table_1
where id = 1
Whenever I execute it, I receive a null. What's wrong with the query?
Thanks for your help!
The JSON string is an array with a single item. You need to specify the array index to retrieve a specific item, e.g.:
declare @t table (json_val nvarchar(4000))
insert into @t
values ('[{"prime":{"image":{"id":"123","logo":"","productId":"4000","enable":true},"accountid":"78","productId":"16","parentProductId":"","aprx":"4.599"}}]')
select JSON_VALUE(cast(json_val as varchar(8000)), '$[0].prime.aprx') as px
from @t
This returns 4.599
If you want to search all array entries, you'll have to use OPENJSON. If you need to do that though ...
Avoid JSON if possible
JSON storage is not an alternative to a proper table design, though. JSON fields can't be indexed directly, so filtering on a specific field will always result in a full table scan. Given how regular this JSON string is, you should consider using proper tables instead.
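For illustration, a sketch of what a normalized alternative might look like (all names hypothetical):
CREATE TABLE prime (
    id int IDENTITY(1,1) PRIMARY KEY,
    accountid int NOT NULL,
    productId int NOT NULL,
    parentProductId int NULL,
    aprx decimal(4,3) NOT NULL
);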
As Panagiotis said in the comments:
As for the JSON path, this JSON string is an array with a single element.
Instead, therefore, you can use OPENJSON, which inspects each array element:
DECLARE @JSON nvarchar(MAX) = N'[{"prime":{"image":{"id":"123","logo":"","productId":"4000","enable":true},"accountid":"78","productId":"16","parentProductId":"","aprx":"4.599"}}]';
SELECT aprx
FROM (VALUES(@JSON))V(json_val)
CROSS APPLY OPENJSON(V.json_val)
WITH (aprx decimal(4,3) '$.prime.aprx');
As also mentioned, your JSON should already be in a string data type (it should be, and probably is, nvarchar(MAX)), so there's no reason to CAST it.

Generate UUID for Postgres JSON document

I'm inserting into a Postgres table with a JSON document and I want to generate a unique ID for the document. I can do that on my own, of course, but I was wondering if there was a way to have PG do it.
INSERT INTO test3 (data) VALUES ('{"key": "value", "unique": ????}')
The docs seem to indicate that JSON records fit into various SQL data types, but I don't see how that actually works.
How about just concatenating? Assuming your column is of type json/jsonb, something like the following should work:
INSERT INTO test3 (data) VALUES (('{"key": "value", "unique": "' || uuid_generate_v4() || '"}')::jsonb)
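Note that uuid_generate_v4() requires the uuid-ossp extension. As a sketch of the same idea without manual string assembly, jsonb_build_object() together with the built-in gen_random_uuid() (core since Postgres 13, available via pgcrypto before that) should also work:
INSERT INTO test3 (data)
VALUES (jsonb_build_object('key', 'value', 'unique', gen_random_uuid()));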
If you're looking to generate a UUID and store it at the same time as a value within a JSON data field, here is something some may find to be a little more sane:
WITH
    -- Create a temporary view named "new_entry" containing your data
    new_entry
    -- This is how you name the view's columns
    ("key", "unique")
    AS (
        VALUES
            -- This is the actual row returned by the view
            (
                'value',
                uuid_generate_v4()
            )
    )
INSERT INTO
    test3(
        data
    )
SELECT
    -- Convert row to JSON. Column name = key, column value = value.
    ROW_TO_JSON(new_entry.*)
FROM
    new_entry
First, we're creating a temporary view named new_entry, which contains all of the data we want to store in a JSON data field.
Second, we're grabbing that entry and passing it to the ROW_TO_JSON function, which converts it to a valid JSON data type. Once converted, the row is inserted into the test3 table.
My reasoning for the "sanity" is that, more than likely, your JSON object will end up containing more than just two key/value pairs. Rather, you'll end up with a handful of keys and values, and it'll be up to you to ensure you don't miss any quotes and that you escape user input appropriately. Why glue all of this together manually when you can have Postgres do it for you (with the help of ROW_TO_JSON()) while at the same time making it easier to read and debug?

How to find the minimum value in a jsonb data using postgres?

I have data in this jsonb format:
data
{"20161110" : {"6" : ["12", "11", "171.00"],"12" : ["13", "11", "170.00"],"18" : ["16", "11", "174.00"]}}
I want to find the minimum value out of the prices; in this case, 170.00.
I have tried indexing and am able to find the data for specific terms (6, 12, 18), but not the minimum out of them.
What I have tried:
(data::json->(select key from json_each_text(data::json) limit 1))::json#>>'{6,2}'
which gives me the result for the 6th term, that is 171.00.
If you want the minimum value of the third element in the arrays, then you will have to unpack the JSON document to get to the array to compare values. That goes somewhat like this (and assuming you indeed have a jsonb column and a primary key called id):
SELECT id, min((arr ->> 2)::numeric) AS min_price
FROM ( SELECT id, jdoc
FROM my_table, jsonb_each(data) d (key, jdoc) ) sub,
jsonb_each(jdoc) doc (key, arr)
GROUP BY 1;
In PostgreSQL there are table functions, functions that return a set of rows, like jsonb_each(). You should use these functions in the FROM list. These table functions can implicitly refer to columns from tables defined earlier in the list, like FROM my_table, jsonb_each(my_table.data), in which case a link between the two sources is made as if a join condition were specified between the two; in practice, the function gets called once for each of the rows of the source table and the function output is added to the list of available columns.
The JSON functions work only on the level of the JSON document that is explicitly specified. That could be the entire document (my_table.data in this case) or down to some path that you can specify. I am assuming here that the first key is a date value and that you therefore do not know the key in advance. The same goes for the sub-document. In these cases you use functions like jsonb_each(). The array position you apparently know exactly, so you can just index the array to find the price information. Note that these are apparently also in JSON format, so you should get the price as a text value with the ->> operator and then cast that to numeric so you can feed it to the min() function.
I created this function for this. Hope it helps.
CREATE OR REPLACE FUNCTION min_json(data json)
returns numeric AS $$
BEGIN
    RETURN (
        SELECT min(value)
        FROM (
            SELECT (hstore(json_each_text(data)) -> 'value')::numeric AS value
        ) AS t
    );
END;
$$ language plpgsql;
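A quick sanity check of the idea (assumes the hstore extension is installed; it works on a flat object whose values are numeric strings, so the nested document from the question would first need to be unpacked as shown above):
SELECT min_json('{"a": "3", "b": "1.5"}'::json);
-- returns 1.5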

Cast JSON to HSTORE in Postgres 9.3+?

I've read the docs and it appears that there's no discernible way to perform an ALTER TABLE ... ALTER COLUMN ... USING statement to directly convert a json type column to an hstore type. There's no function available (that I'm aware of) to perform the cast.
The next best alternative I have is to create a new column of type hstore, copy my JSON data to that new column using some external tool, drop the old json column and rename the new hstore column to the old column's name.
Is there a better way?
What I have so far is:
$ CREATE TABLE blah (unstructured_data JSON);
$ ALTER TABLE blah ALTER COLUMN unstructured_data
TYPE hstore USING CAST(unstructured_data AS hstore);
ERROR: cannot cast type json to hstore
Unfortunately, PostgreSQL doesn't allow all kinds of expressions within the USING clause of ALTER TABLE ... SET DATA TYPE ... (for example, sub-queries are disallowed).
But you can write a function to overcome this; you just need to decide what to do with advanced types (in the object's values), like arrays and objects. Here is an example, which simply converts them to strings:
CREATE OR REPLACE FUNCTION my_json_to_hstore(json)
RETURNS hstore
IMMUTABLE
STRICT
LANGUAGE sql
AS $func$
SELECT hstore(array_agg(key), array_agg(value))
FROM json_each_text($1)
$func$;
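A quick sanity check of the function (again assuming the hstore extension is installed):
SELECT my_json_to_hstore('{"a": 1, "b": "x", "c": null}'::json);
-- "a"=>"1", "b"=>"x", "c"=>NULL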
After that, you can use this in your ALTER TABLE, like:
ALTER TABLE blah
ALTER COLUMN unstructured_data
SET DATA TYPE hstore USING my_json_to_hstore(unstructured_data);
There is "trap" for repeated keys - allowed by both json and hstore input, but unfortunately resolved differently (!). Consider this example value:
json '{"double_key":"key1","foo":null,"double_key":"key2"}'
In json, 'double_key is effectively 'key2'. The manual:
Because the json type stores an exact copy of the input text, it will preserve semantically-insignificant white space between tokens, as well as the order of keys within JSON objects. Also, if a JSON object within the value contains the same key more than once, all the key/value pairs are kept. (The processing functions consider the last value as the operative one.)
Bold emphasis mine.
In hstore, however, for the same order of key/value pairs, 'double_key' might effectively be 'key1'. The manual:
Each key in an hstore is unique. If you declare an hstore with duplicate keys, only one will be stored in the hstore and there is no guarantee as to which will be kept:
Typically that's the first instance of a key, but that's an implementation detail that might change.
A simple and fast option to always preserve the effective, operative value: cast to jsonb before the conversion. The manual again:
[...] jsonb does not preserve white space, does not preserve the order of object keys, and does not keep duplicate object keys. If duplicate keys are specified in the input, only the last value is kept.
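A quick demonstration of that effect:
SELECT '{"double_key":"key1","foo":null,"double_key":"key2"}'::jsonb;
-- {"foo": null, "double_key": "key2"}   -- the last value wins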
Modifying @pozs's conversion function:
CREATE OR REPLACE FUNCTION json2hstore(json)
RETURNS hstore AS
$func$
SELECT hstore(array_agg(key), array_agg(value))
FROM jsonb_each_text($1::jsonb) -- !
$func$ LANGUAGE sql IMMUTABLE STRICT;
Requires Postgres 9.4 or later. Postgres 9.3 has the json type, but not jsonb yet. A no-op in PL/v8 might be an alternative there, like @jpmc mentioned.
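Used in the type change, analogous to the earlier example:
ALTER TABLE blah
ALTER COLUMN unstructured_data
SET DATA TYPE hstore USING json2hstore(unstructured_data);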