I'm working on a Web project where the client application communicates with the DB via JSON.
The initial implementation took place with SQL Server 2012 (NO JSON support, hence we implemented a stored function that handled the parsing) and now we are moving to 2016 (YES JSON support).
So far, we are reducing processing time by a significant factor (in some cases, over 200 times faster!).
There are some interactions that contain arrays that need to be converted into tables. To achieve that, the OPENJSON function does ALMOST what we need.
In some of these (array-based) cases, records within the arrays have one or more fields that are also OBJECTS (in this particular case, also arrays), for instance:
[{
"Formal_Round_Method": "Floor",
"Public_Round_Method": "Closest",
"Formal_Precision": "3",
"Public_Precision": "3",
"Formal_Significant_Digits": "3",
"Public_Significant_Digits": "3",
"General_Comment": [{
"Timestamp": "2018-07-16 09:19",
"From": "1",
"Type": "Routine_Report",
"Body": "[To + Media + What]: Comment 1",
"$$hashKey": "object:1848"
}, {
"Timestamp": "2018-07-16 09:19",
"From": "1",
"Type": "User_Comment",
"Body": "[]: Comment 2",
"$$hashKey": "object:1857"
}, {
"Timestamp": "2018-07-16 09:19",
"From": "1",
"Type": "Routine_Report",
"Body": "[To + Media + What]: Comment 3",
"$$hashKey": "object:1862"
}]
}, {
"Formal_Round_Method": "Floor",
"Public_Round_Method": "Closest",
"Formal_Precision": "3",
"Public_Precision": "3",
"Formal_Significant_Digits": "3",
"Public_Significant_Digits": "3",
"General_Comment": []
}]
Here, General_Comment is also an array.
When running the command:
SELECT *
FROM OPENJSON(@_l_Table_Data)
WITH ( Formal_Round_Method NVARCHAR(16) '$.Formal_Round_Method' ,
Public_Round_Method NVARCHAR(16) '$.Public_Round_Method' ,
Formal_Precision INT '$.Formal_Precision' ,
Public_Precision INT '$.Public_Precision' ,
Formal_Significant_Digits INT '$.Formal_Significant_Digits' ,
Public_Significant_Digits INT '$.Public_Significant_Digits' ,
General_Comment NVARCHAR(4000) '$.General_Comment'
) ;
[@_l_Table_Data is a variable holding the JSON string]
we are getting the column General_Comment = NULL even though there is data in there (at least in the first element of the array).
I guess that I should be using a different syntax for those columns that may contain OBJECTS and not SIMPLE VALUES, but I have no idea what that syntax should be.
I found a Microsoft page that actually solves the problem.
Here is how the query should look:
SELECT *
FROM OPENJSON(@_l_Table_Data)
WITH ( Formal_Round_Method NVARCHAR(16) '$.Formal_Round_Method' ,
Public_Round_Method NVARCHAR(16) '$.Public_Round_Method' ,
Formal_Precision INT '$.Formal_Precision' ,
Public_Precision INT '$.Public_Precision' ,
Formal_Significant_Digits INT '$.Formal_Significant_Digits' ,
Public_Significant_Digits INT '$.Public_Significant_Digits' ,
General_Comment NVARCHAR(MAX) '$.General_Comment' AS JSON
) ;
So, you need to add AS JSON at the end of the column definition and (God knows why) the type MUST be NVARCHAR(MAX).
Very simple indeed!!!
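If you then want to flatten the nested array into rows, the AS JSON column can be fed into a second OPENJSON via APPLY. Here is a minimal sketch based on the sample above (OUTER APPLY rather than CROSS APPLY, so the record whose General_Comment is an empty array still shows up, with NULL comment columns):
SELECT main.Formal_Round_Method ,
main.Public_Round_Method ,
gc.[Timestamp] ,
gc.[Type] ,
gc.Body
FROM OPENJSON(@_l_Table_Data)
WITH ( Formal_Round_Method NVARCHAR(16) '$.Formal_Round_Method' ,
Public_Round_Method NVARCHAR(16) '$.Public_Round_Method' ,
General_Comment NVARCHAR(MAX) '$.General_Comment' AS JSON
) AS main
OUTER APPLY OPENJSON(main.General_Comment)
WITH ( [Timestamp] NVARCHAR(16) '$.Timestamp' ,
[From] INT '$.From' ,
[Type] NVARCHAR(32) '$.Type' ,
Body NVARCHAR(4000) '$.Body'
) AS gc ;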
Create Function ParseJson:
CREATE OR ALTER FUNCTION [dbo].[ParseJson] (@JSON NVARCHAR(MAX))
RETURNS @Unwrapped TABLE
(
[id] INT IDENTITY, --just used to get a unique reference to each json item
[level] INT, --the hierarchy level
[key] NVARCHAR(100), --the key or name of the item
[Value] NVARCHAR(MAX),--the value, if it is a null, int,binary,numeric or string
type INT, --0 TO 5, the JSON type, null, numeric, string, binary, array or object
SQLDatatype sysname, --whatever the datatype can be parsed to
parent INT, --the ID of the parent
[path] NVARCHAR(4000) --the path as used by OpenJSON
)
AS begin
INSERT INTO @Unwrapped ([level], [key], Value, type, SQLDatatype, parent,
[path])
VALUES
(0, --the level
NULL, --the key,
@json, --the value,
CASE WHEN Left(ltrim(@json),1)='[' THEN 4 ELSE 5 END, --the type
'json', --SQLDataType,
0 , --no parent
'$' --base path
);
DECLARE @ii INT = 0,--the level
@Rowcount INT = -1; --the number of rows from the previous iteration
WHILE @Rowcount <> 0 --while we are still finding levels
BEGIN
INSERT INTO @Unwrapped ([level], [key], Value, type, SQLDatatype, parent,
[path])
SELECT [level] + 1 AS [level], new.[Key] AS [key],
new.[Value] AS [value], new.[Type] AS [type],
-- SQL Prompt formatting off
/* In order to determine the datatype of a JSON value, the best approach is to determine
the datatype that can be parsed. In JSON, an array of objects can contain attributes that aren't
consistent either in their name or value. */
CASE
WHEN new.Type = 0 THEN 'bit null'
WHEN new.[type] IN (1,2) then COALESCE(
CASE WHEN TRY_CONVERT(INT,new.[value]) IS NOT NULL THEN 'int' END,
CASE WHEN TRY_CONVERT(NUMERIC(14,4),new.[value]) IS NOT NULL THEN 'numeric' END,
CASE WHEN TRY_CONVERT(FLOAT,new.[value]) IS NOT NULL THEN 'float' END,
CASE WHEN TRY_CONVERT(MONEY,new.[value]) IS NOT NULL THEN 'money' END,
CASE WHEN TRY_CONVERT(DateTime,new.[value],126) IS NOT NULL THEN 'Datetime2' END,
CASE WHEN TRY_CONVERT(Datetime,new.[value],127) IS NOT NULL THEN 'Datetime2' END,
'nvarchar')
WHEN new.Type = 3 THEN 'bit'
WHEN new.Type = 5 THEN 'object' ELSE 'array' END AS SQLDatatype,
old.[id],
old.[path] + CASE WHEN old.type = 5 THEN '.' + new.[Key]
ELSE '[' + new.[Key] COLLATE DATABASE_DEFAULT + ']' END AS path
-- SQL Prompt formatting on
FROM @Unwrapped old
CROSS APPLY OpenJson(old.[Value]) new
WHERE old.[level] = @ii AND old.type IN (4, 5);
SELECT @Rowcount = @@ROWCOUNT;
SELECT @ii = @ii + 1;
END;
return
END
For Usage:
SELECT * FROM dbo.ParseJson(@jsonString);
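For example, with a small hypothetical @json document (the output columns are those declared in the function's RETURNS table):
DECLARE @json NVARCHAR(MAX) = N'{"a": 1, "b": {"c": [10, 20]}}';
SELECT [id], [level], [key], [Value], type, SQLDatatype, parent, [path]
FROM dbo.ParseJson(@json);
-- Returns one row per element at every nesting level, with [path] values
-- such as $.a, $.b, $.b.c, $.b.c[0] and $.b.c[1].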
I have a table which has ID and JSON columns. ID is an auto-incrementing column. Here is my sample data.
Row 1
1 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "1", "Value": "$1.00", "Code": "A", "Desc": "asdf" },
{ "ID1": "2", "Value": "$1.00", "Code": "B", "Desc": "pqr" },
{ "ID1": "3", "Value": "$1.00", "Code": "C", "Desc": "xyz" }
]
}
Row 2
2 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "76", "Value": "$1.00", "Code": "X", "Desc": "asdf" },
{ "ID1": "25", "Value": "$1.00", "Code": "Y", "Desc": "pqr" },
{ "ID1": "52", "Value": "$1.00", "Code": "Z", "Desc": "lmno" },
{ "ID1": "52", "Value": "$1.00", "Code": "B", "Desc": "xyz" }
]
}
and it keeps going. The HData section is unbounded; it can contain any number of items.
On this JSON I need to update Value = "$2.00" where "Code" is "B". I should be able to do this in two scenarios. My parameter inputs are @id=2, @code="B", @value="$2.00". @id will sometimes be null. So,
If @id is null, then the update statement should go through all records and update Value="$2.00" for all items inside the HData section which have Code="B".
If @id = 2, then the update statement should update only the row whose ID is 2, for the items which have Code="B".
Appreciate your help in advance.
Thanks
See DB Fiddle for an example.
declare @id bigint = 2
, @code nvarchar(8) = 'B'
, @value nvarchar(8) = '$2.00'
update a
set json = JSON_MODIFY(json, '$.HData[' + HData.[key] + '].Value', @value)
from so75416277 a
CROSS APPLY OPENJSON (json, '$.HData') HData
CROSS APPLY OPENJSON (HData.Value, '$')
WITH (
ID1 bigint
, Value nvarchar(8)
, Code nvarchar(8)
, [Desc] nvarchar(8)
) as HDataItem
WHERE id = @id
AND HDataItem.Code = @code
The update / set statement says we want to replace the value of json with a newly generated value; it functions exactly as it would in any other context, e.g. update a set json = 'something' from so75416277 a where a.column = 'some condition'.
The JSON_MODIFY does the manipulation of our json.
The first input is the original json field's value
The second is the path to the value to be updated.
The third is the new value
'$.HData[' + HData.[key] + '].Value' says we go from our JSON's root ($), find the HData field, filter the array of values for the one we're after (i.e. key here is the array item's index), then use the Value field of this item.
key is a special term; where we don't have a WITH block accompanying our OPENJSON statement we get back 3 items: key, value and type; key being the identifier, value being the content, and type saying what sort of content that is.
CROSS APPLY allows us to perform logic on a value from a single DB row to return potentially multiple rows; e.g. like a join but against its own contents.
OPENJSON (json, '$.HData') HData says to extract the HData field from our json column, and return this with the table alias HData; as we've not included a WITH, this HData column has 3 fields: key, value, and type, as mentioned above (this is the same key we used in our JSON_MODIFY).
The next OPENJSON works on HData.Value; i.e. the contents of the array item under HData. Here we take the object from this array (i.e. that's the root from the current context; hence $), and use WITH to parse it into a specific structure; i.e. ID1, Value, Code, and Desc (brackets around Desc as it's a keyword). We give this the alias HDataItem.
Finally we filter for the bit of the data we're interested in; i.e. on id to get the row we want to update, then on HDataItem.Code so we only update those array items with code 'B'.
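If you want to try this end to end, here is a minimal, self-contained setup under the same names used above (so75416277, with a trimmed version of the question's sample rows):
CREATE TABLE so75416277 (id bigint, json nvarchar(max));
INSERT INTO so75416277 (id, json) VALUES
(1, N'{"HeaderInfo": {"Name": "ABC"}, "HData": [{"ID1": "1", "Value": "$1.00", "Code": "A", "Desc": "asdf"}, {"ID1": "2", "Value": "$1.00", "Code": "B", "Desc": "pqr"}]}'),
(2, N'{"HeaderInfo": {"Name": "ABC"}, "HData": [{"ID1": "76", "Value": "$1.00", "Code": "X", "Desc": "asdf"}, {"ID1": "52", "Value": "$1.00", "Code": "B", "Desc": "xyz"}]}');
-- After running the update above with @id = 2, @code = 'B', @value = '$2.00':
SELECT id, JSON_QUERY(json, '$.HData') AS HData FROM so75416277;
-- Row 2's Code "B" item now reads "Value": "$2.00"; row 1 is untouched.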
Try the below SP.
CREATE PROC usp_update_75416277
(
@id Int = null,
@code Varchar(15),
@value Varchar(15)
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQLStr Varchar(MAX)=''
;WITH CTE
AS
( SELECT ROW_NUMBER()OVER(PARTITION BY YourTable.Json ORDER BY (SELECT NULL))RowNo,*
FROM YourTable
CROSS APPLY OPENJSON(YourTable.Json,'$.HData')
WITH (
ID1 Int '$.ID1',
Value Varchar(20) '$.Value',
Code Varchar(20) '$.Code',
[Desc] Varchar(20) '$.Desc'
) HData
WHERE (@id IS NULL OR ID =@id)
)
SELECT @SQLStr=@SQLStr+' UPDATE YourTable
SET [JSON]=JSON_MODIFY(YourTable.Json,
''$.HData['+CONVERT(VARCHAR(15),RowNo-1)+'].Value'',
'''+CONVERT(VARCHAR(MAX),@value)+''') '+
'WHERE ID ='+CONVERT(Varchar(15),CTE.ID) +' '
FROM CTE
WHERE Code=@code
AND (@id IS NULL OR ID =@id)
EXEC( @SQLStr)
END
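A hypothetical call matching the question's parameters (note the procedure queries a table literally named YourTable, so substitute your own table name before running):
-- Update only the row whose ID is 2
EXEC usp_update_75416277 @id = 2, @code = 'B', @value = '$2.00';
-- Update every row (@id = NULL walks all records)
EXEC usp_update_75416277 @id = NULL, @code = 'B', @value = '$2.00';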
In Oracle 12c, having a column with JSON data in this format:
{
"user_name": "Dave",
"phone_number": "13326415",
"married": false,
"age": 18
}
How can I select it in this format:
key val
-------------- ----------
"user_name" "Dave"
"phone_number" "13326415"
"married" "false"
"age" "18"
As stated in the comment, there is no way to get the keys of a JSON object using just SQL. With PL/SQL you can create a pipelined function to get the information you need. Below is a very simple pipelined function that will get the keys of a JSON object and return the type of each key, as well as the key name and the value.
First, you will need to create the types that will be used by the function
CREATE OR REPLACE TYPE key_value_table_rec FORCE AS OBJECT
(
TYPE VARCHAR2 (100),
key VARCHAR2 (200),
VALUE VARCHAR2 (200)
);
/
CREATE OR REPLACE TYPE key_value_table_t AS TABLE OF key_value_table_rec;
/
Next, create the pipelined function that will return the information in the format of the types defined above.
CREATE OR REPLACE FUNCTION get_key_value_table (p_json CLOB)
RETURN key_value_table_t
PIPELINED
AS
l_json json_object_t;
l_json_keys json_key_list;
l_json_element json_element_t;
BEGIN
l_json := json_object_t (p_json);
l_json_keys := l_json.get_keys;
FOR i IN 1 .. l_json_keys.COUNT
LOOP
l_json_element := l_json.get (l_json_keys (i));
PIPE ROW (key_value_table_rec (
CASE
WHEN l_json_element.is_null THEN 'null'
WHEN l_json_element.is_boolean THEN 'boolean'
WHEN l_json_element.is_number THEN 'number'
WHEN l_json_element.is_timestamp THEN 'timestamp'
WHEN l_json_element.is_date THEN 'date'
WHEN l_json_element.is_string THEN 'string'
WHEN l_json_element.is_object THEN 'object'
WHEN l_json_element.is_array THEN 'array'
ELSE 'unknown'
END,
l_json_keys (i),
l_json.get_string (l_json_keys (i))));
END LOOP;
RETURN;
EXCEPTION
WHEN OTHERS
THEN
CASE SQLCODE
WHEN -40834
THEN
--JSON format is not valid
NULL;
ELSE
RAISE;
END CASE;
END;
/
Finally, you can call the pipelined function from a SELECT statement
SELECT * FROM TABLE (get_key_value_table (p_json => '{
"user_name": "Dave",
"phone_number": "13326415",
"married": false,
"age": 18
}'));
TYPE KEY VALUE
__________ _______________ ___________
string user_name Dave
string phone_number 13326415
boolean married false
number age 18
If your JSON values are stored in a column in a table, you can view the keys/values using CROSS JOIN
WITH
sample_table (id, json_col)
AS
(SELECT 1, '{"key1":"val1","key_obj":{"nested_key":"nested_val"},"key_bool":false}'
FROM DUAL
UNION ALL
SELECT 2, '{"key3":3.14,"key_arr":[1,2,3]}' FROM DUAL)
SELECT t.id, j.*
FROM sample_table t CROSS JOIN TABLE (get_key_value_table (p_json => t.json_col)) j;
ID TYPE KEY VALUE
_____ __________ ___________ ________
1 string key1 val1
1 object key_obj
1 boolean key_bool false
2 number key3 3.14
2 array key_arr
I need to pull the data from a separate JSON file into my SQL table, however I can't seem to come right with a result.
It keeps returning 'NULL' and I don't quite understand why - I suspect it is the JSON path listed in the OPENJSON() but can't seem to come right.
JSON
{
"isMoneyClient": false,
"showPower": true,
"removeGamePlay": true,
"visualTweaks": {
"0": {
"value": true,
"name": "Clock"
},
"1": {
"value": true,
"name": "CopperIcon"
}
}
}
SQL
DECLARE @JSON VARCHAR(MAX)
SELECT @JSON = BulkColumn
FROM OPENROWSET
(BULK 'C:\config.json', SINGLE_CLOB)
AS A
UPDATE dbo.CommonBaseline
SET CommonBaseline.Config_isMoneyClient = isMoneyClient ,
CommonBaseline.Config_showPower = showPower ,
CommonBaseline.Config_removeGamePlay = removeGamePlay
FROM OPENJSON (@JSON)
WITH (
isMoneyClient Varchar (50),
showPower Varchar (50),
removeGamePlay Varchar (50)
)
AS REQUESTED I HAVE BELOW THE COMMON BASELINE SCHEMA
CREATE TABLE CommonBaseline (
ServerID int NOT NULL PRIMARY KEY,
Config_isMoneyClient varchar(255) ,
Config_showPower varchar(255) ,
Config_removeGamePlay varchar(255)
);
Update:
This is an attempt to improve this answer. When you omit the column path definitions in an OPENJSON() call with a WITH clause, the matches between the keys in the JSON expression and the column names are case sensitive. This is another possible reason for unexpected NULL results, because you don't use column paths in your OPENJSON() call.
Example:
DECLARE @json nvarchar(max) = N'{
"isMoneyClient": false,
"showPower": true,
"removeGamePlay": true,
"visualTweaks": {
"0": {
"value": true,
"name": "Clock"
},
"1": {
"value": true,
"name": "CopperIcon"
}
}
}'
-- NULL results.
-- The column names (IsMoneyClient) and
-- the keys in JSON expression (isMoneyClient) don't match.
SELECT *
FROM OPENJSON (@json) WITH (
IsMoneyClient Varchar (50),
ShowPower Varchar (50),
RemoveGamePlay Varchar (50)
)
-- Expected results. The correct column paths are used
SELECT *
FROM OPENJSON (@json) WITH (
IsMoneyClient Varchar(50) '$.isMoneyClient',
ShowPower Varchar(50) '$.showPower',
RemoveGamePlay Varchar(50) '$.removeGamePlay'
)
Original answer:
One possible explanation for your unexpected result is the fact that, in some cases, even if your JSON content is invalid, OPENJSON() reads only part of this content. I'm able to reproduce this with the next example:
Statement:
-- JSON
-- Valid JSON is '[{"name": "A"},{"name": "B"}]'
DECLARE @json nvarchar(max) = N'
{"name": "A"},
{"name": "B"}
'
-- Read JSON content
SELECT * FROM OPENJSON(@json, '$')
SELECT * FROM OPENJSON(@json, '$') WITH (
[name] nvarchar(100) '$.name'
)
Output (OPENJSON() reads only {"name": "A"} part of the JSON input):
----------------
key value type
----------------
name A 1
----
name
----
A
One solution here is to check your JSON content with ISJSON:
IF ISJSON(@json) = 1 PRINT 'Valid JSON' ELSE PRINT 'Not valid JSON';
If it is possible, try to fix the input JSON:
Statement:
-- JSON
-- Valid JSON is '[{"name": "A"},{"name": "B"}]'
DECLARE @json nvarchar(max) = N'
{"name": "A"},
{"name": "B"}
'
-- Read JSON content
SELECT * FROM OPENJSON(CONCAT('[', @json, ']'), '$') WITH (
[name] nvarchar(100) '$.name'
)
Output:
----
name
----
A
B
Please check the table schema of CommonBaseline, specifically the datatypes of the columns Config_isMoneyClient, Config_showPower, and Config_removeGamePlay.
In the scenario below it is working fine:
DECLARE @JSON VARCHAR(MAX)
SELECT @JSON = BulkColumn
FROM OPENROWSET
(BULK 'e:\config.json', SINGLE_CLOB)
AS A
or
SELECT @JSON = '{
"isMoneyClient": false,
"showPower": true,
"removeGamePlay": true,
"visualTweaks": {
"0": {
"value": true,
"name": "Clock"
},
"1": {
"value": true,
"name": "CopperIcon"
}
}
}'
declare @CommonBaseline as Table
( id int,
Config_isMoneyClient bit,
Config_showPower bit,
Config_removeGamePlay bit )
insert into @CommonBaseline ( id ) values ( 1 ) ---- please check that your table CommonBaseline has at least 1 record to update
UPDATE @CommonBaseline
SET Config_isMoneyClient = isMoneyClient ,
Config_showPower = showPower ,
Config_removeGamePlay = removeGamePlay
FROM OPENJSON (@JSON)
WITH (
isMoneyClient Varchar (50),
showPower Varchar (50),
removeGamePlay Varchar (50)
)
select * from @CommonBaseline
Note: please check that you already have your row in CommonBaseline, since you are using UPDATE.
We recently upgraded our backend DB from SQL Server 2012 to SQL Server 2016, which I realized supports JSON objects. One of our web services returns data in the below format, and I am trying to consume it directly using the OPENJSON function.
{
"RESULT_1": {
"columns": ["col1", "col2", "col3", "col4"],
"data": [
["0", null, "12345", "other"],
["1", "a", "54321", "MA"],
["0", null, "76543", "RI"]
]
}
}
I want to make sure that I read the column names from the "columns section" and make sure that the correct data is read and pushed in SQL Server 2016.
DECLARE @Json_Array2 nvarchar(max) = '{
"RESULT_1": {
"columns": ["col1", "col2", "col3", "col4"],
"data": [
["0", null, "12345", "other"],
["1", "a", "54321", "MA"],
["0", null, "76543", "RI"]
]
} }';
SELECT [key], value
FROM OPENJSON(@Json_Array2,'$.RESULT_1.columns')
SELECT [key], value
FROM OPENJSON(@Json_Array2,'$.RESULT_1.data')
With above statements, I am able to extract the columns and the data array. Would it be possible to dump this data in a flat table (already existing) with the same column names?
I am able to see all the values of one particular row by:
SELECT [key], value
FROM OPENJSON(@Json_Array2,'$.RESULT_1.data[0]')
key value
0 0
1 NULL
2 12345
3 other
Any pointers are appreciated.
Thank you
If I understand correctly, you are trying to extract all elements from the "data" array:
Col1 Col2 Col3 Col4
0 NULL 12345 other
1 a 54321 MA
0 NULL 76543 RI
Assuming your JSON structure will not change, you can achieve that using the following query:
SET NOCOUNT ON
IF OBJECT_ID ('tempdb..##ToPvt') IS NOT NULL DROP TABLE ##ToPvt
DECLARE @Cmd NVARCHAR(MAX)=''
DECLARE @Table TABLE (Col1 nvarchar(100), Col2 nvarchar(100), Col3 nvarchar(100) , Col4 nvarchar(100))
DECLARE @Json_Array2 nvarchar(max) = '{
"RESULT_1": {
"columns": ["col1", "col2", "col3", "col4"],
"data": [
["0", null, "12345", "other"],
["1", "a", "54321", "MA"],
["0", null, "76543", "RI"]
]
} }';
;WITH Cols ([Key], [Value]) as
(
SELECT [key], value
FROM OPENJSON(@Json_Array2,'$.RESULT_1.columns')
)
, [Rows] as
(
SELECT ROW_NUMBER () OVER (ORDER BY [Key]) Seq, [key], value
FROM OPENJSON(@Json_Array2,'$.RESULT_1.data')
)
,ToPvt as
(
SELECT c.[Key]+1 Cols, x.Value , 'col'+CONVERT(VARCHAR(10),ROW_NUMBER () OVER (PARTITION BY c.Value ORDER BY t.[Key])) Seq
FROM [Rows] t
INNER JOIN Cols C
ON t.[key] = c.[key]
CROSS APPLY OPENJSON(t.value) X
)
SELECT *
INTO ##ToPvt
FROM ToPvt
;WITH Final (Cmd) as
(
SELECT DISTINCT 'SELECT [col1], [col2], [col3],[col4] FROM ##ToPvt
PIVOT
(
MAX([Value]) FOR Seq IN ([col1], [col2], [col3],[col4])
) T
WHERE Cols = '+CONVERT(VARCHAR(10),Cols)+'
;'
FROM ##ToPvt
)
SELECT @Cmd += Cmd
FROM Final
INSERT INTO @Table
EXEC sp_executesql @Cmd
SELECT *
FROM @Table
I'm using PostgreSQL 9.4.5. I'd like to update a jsonb column.
My table is structured this way:
CREATE TABLE my_table (
gid serial PRIMARY KEY,
"data" jsonb
);
JSON strings are like this:
{"files": [], "ident": {"id": 1, "country": null, "type ": "20"}}
The following SQL doesn't do the job (syntax error - SQL state = 42601):
UPDATE my_table SET "data" -> 'ident' -> 'country' = 'Belgium';
Is there a way to achieve that?
OK, here are two functions:
create or replace function set_jsonb_value(p_j jsonb, p_key text, p_value jsonb) returns jsonb as $$
select jsonb_object_agg(t.key, t.value) from (
select
key,
case
when jsonb_typeof(value) = 'object' then set_jsonb_value(value, p_key, p_value)
when key = p_key then p_value
else value
end as value from jsonb_each(p_j)) as t;
$$ language sql immutable;
The first one just changes the value of an existing key, regardless of the key path:
postgres=# select set_jsonb_value(
'{"files": [], "country": null, "ident": {"id": 1, "country": null, "type ": "20"}}',
'country',
'"foo"');
set_jsonb_value
--------------------------------------------------------------------------------------
{"files": [], "ident": {"id": 1, "type ": "20", "country": "foo"}, "country": "foo"}
(1 row)
create or replace function set_jsonb_value(p_j jsonb, p_path text[], p_value jsonb) returns jsonb as $$
select jsonb_object_agg(t.key, t.value) from (
select
key,
case
when jsonb_typeof(value) = 'object' then set_jsonb_value(value, p_path[2:1000], p_value)
when key = p_path[1] then p_value
else value
end as value from jsonb_each(p_j)
union all
select
p_path[1],
case
when array_length(p_path,1) = 1 then p_value
else set_jsonb_value('{}', p_path[2:1000], p_value) end
where not p_j ? p_path[1]) as t;
$$ language sql immutable;
The second one changes the value of an existing key using the specified path, or creates it if the path does not exist:
postgres=# select set_jsonb_value(
'{"files": [], "country": null, "ident": {"id": 1, "type ": "20"}}',
'{ident,country}'::text[],
'"foo"');
set_jsonb_value
-------------------------------------------------------------------------------------
{"files": [], "ident": {"id": 1, "type ": "20", "country": "foo"}, "country": null}
(1 row)
postgres=# select set_jsonb_value(
'{"files": [], "country": null, "ident": {"id": 1, "type ": "20"}}',
'{ident,foo,bar,country}'::text[],
'"foo"');
set_jsonb_value
-------------------------------------------------------------------------------------------------------
{"files": [], "ident": {"id": 1, "foo": {"bar": {"country": "foo"}}, "type ": "20"}, "country": null}
(1 row)
Hope it will help someone who uses PostgreSQL < 9.5.
Disclaimer: Tested on PostgreSQL 9.5
In PG 9.4 you are out of luck with "easy" solutions like jsonb_set() (9.5). Your only option is to unpack the JSON object, make the changes and re-build the object. That sounds very cumbersome and it is indeed: JSON is horrible to manipulate, no matter how advanced or elaborate the built-in functions.
CREATE TYPE data_ident AS (id integer, country text, "type" integer);
UPDATE my_table
SET "data" = json_build_object('files', "data"->'files', 'ident', ident.j)::jsonb
FROM (
SELECT gid, json_build_object('id', j.id, 'country', 'Belgium', 'type', j."type") AS j
FROM my_table
JOIN LATERAL jsonb_populate_record(null::data_ident, "data"->'ident') j ON true) ident
WHERE my_table.gid = ident.gid;
In the SELECT clause "data"->'ident' is unpacked into a record (for which you need to CREATE TYPE a structure). Then it is built right back into a JSON object with the new country name. In the UPDATE that "ident" object is re-joined with the "files" object and the whole thing cast to a jsonb.
A pure thing of beauty -- just so long as speed is not your thing...
My previous solution relied on 9.5 functionality.
I would recommend instead either going with abelisto's solutions above or using pl/perlu, plpythonu, or plv8js to write JSON mutators in a language that has better support for them.
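For completeness, once you are on 9.5 or later, the built-in jsonb_set(target, path text[], new_value jsonb) reduces the original question to a one-liner:
UPDATE my_table
SET "data" = jsonb_set("data", '{ident,country}', '"Belgium"'::jsonb);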