I need to globally replace a particular string that occurs in multiple places in a nested JSON structure that's stored as jsonb in a Postgres table. For example:
{
"location": "tmp/config",
"alternate_location": {
"name": "config",
"location": "tmp/config"
}
}
...should become:
{
"location": "tmp/new_config",
"alternate_location": {
"name": "config",
"location": "tmp/new_config"
}
}
I've tried:
UPDATE files SET meta_data = to_json(replace(meta_data::TEXT, 'tmp/config', 'tmp/new_config'));
Unfortunately this results in malformed JSON, with triple-escaped quotes.
Any ideas how to do this?
Use a simple cast to jsonb instead of to_json(), e.g.:
with files(meta_data) as (
values(
'{
"location": "tmp/config",
"alternate_location": {
"name": "config",
"location": "tmp/config"
}
}'::jsonb)
)
select replace(meta_data::text, 'tmp/config', 'tmp/new_config')::jsonb
from files;
replace
--------------------------------------------------------------------------------------------------------
{"location": "tmp/new_config", "alternate_location": {"name": "config", "location": "tmp/new_config"}}
(1 row)
Use UPDATE:
UPDATE files
SET meta_data = replace(meta_data::TEXT, 'tmp/config', 'tmp/new_config')::jsonb;
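If only some rows actually contain the old path, a WHERE clause keeps the statement from rewriting every row. A minimal sketch, assuming the column is named meta_data as above (note that replace() operates on the raw text, so it would also match the string inside keys, not only values):
-- only touch rows whose jsonb text actually contains the old path
UPDATE files
SET meta_data = replace(meta_data::text, 'tmp/config', 'tmp/new_config')::jsonb
WHERE meta_data::text LIKE '%tmp/config%';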
I am trying to develop a database table structure from the following JSON structure. The "requiredFields" are straightforward and I have a table set up for those. However, I am stuck on how to create a schema to store "configuration" and its nested properties in a SQL database. Any help will be appreciated. Thanks.
{
"requiredFields": [
"hello",
"world"
],
"configuration": {
"hello": {
"fallbacks": [
{
"type": "constant",
"value": "30"
}
]
},
"world": {
"fallbacks": [
{
"type": "fromInputFile",
"value": "patientFirstName"
},
{
"type": "fromInputFile",
"value": "subscriberFirstName"
},
{
"type": "constant",
"value": "alpha"
}
]
}
}
}
It looks like you could store that in a single table:
CREATE TABLE ConfigurationFallbacks (
field nvarchar(100),
type nvarchar(100),
value nvarchar(100)
);
You may also want to add another table to store just the field values, and the table above would be foreign-keyed to that.
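A sketch of what that parent table could look like (the Fields table and its population from requiredFields are illustrative assumptions, not something given in the question):
-- Hypothetical parent table holding just the field names
CREATE TABLE Fields (
    field nvarchar(100) PRIMARY KEY
);

-- Populate it from the requiredFields array; ConfigurationFallbacks.field
-- could then reference Fields.field with a foreign key
INSERT Fields (field)
SELECT [value]
FROM OPENJSON(@json, '$.requiredFields');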
You can insert like this:
INSERT ConfigurationFallbacks (field, type, value)
SELECT
keys.[key],
j.type,
j.value
FROM OPENJSON(@json, '$.configuration') keys
CROSS APPLY OPENJSON(keys.value, '$.fallbacks')
WITH (
type nvarchar(100),
value nvarchar(100)
) j;
The first OPENJSON call does not have a schema, so it returns a set of key value pairs.
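To see what that schemaless call produces, you could run it on its own; each row is one key under $.configuration, with the nested fragment in the value column (the output shown here is indicative):
SELECT [key], [value], [type]
FROM OPENJSON(@json, '$.configuration');
-- key    value                                               type
-- hello  {"fallbacks":[{"type":"constant","value":"30"}]}    5
-- world  {"fallbacks":[{"type":"fromInputFile", ... }]}      5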
In my DB I have a column storing JSON. The JSON looks like this:
{
"views": [
{
"id": "1",
"sections": [
{
"id": "1",
"isToggleActive": false,
"components": [
{
"id": "1",
"values": [
"02/24/2021"
]
},
{
"id": "2",
"values": []
},
{
"id": "3",
"values": [
"5393",
"02/26/2021 - Weekly"
]
},
{
"id": "5",
"values": [
""
]
}
]
}
]
}
]
}
I want to create a migration script that will extract a value from this JSON and store it in its own column.
In the JSON above, in that components array, I want to extract the second value from the component with an ID of "3" (among other things, but this is a good example). So, I want to extract the value "02/26/2021 - Weekly" to store in its own column.
I was looking at the JSON_VALUE docs, but I only see examples for specifying indexes for the JSON properties. I can't figure out what kind of JSON path I'd need. Is this even possible to do with JSON_VALUE?
EDIT: To clarify, the views and sections arrays can use static indexes, so I can use views[0].sections[0] for them. Currently, this is all I have in my SQL query:
SELECT
*
FROM OPENJSON(@jsonInfo, '$.views[0].sections[0]')
You need to use OPENJSON to break out the inner array, then filter it with a WHERE, and finally select the correct value with JSON_VALUE:
SELECT
JSON_VALUE(components.value, '$.values[1]')
FROM OPENJSON (@jsonInfo, '$.views[0].sections[0].components') components
WHERE JSON_VALUE(components.value, '$.id') = '3'
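Since the goal is a migration that stores this value in its own column, the same query can drive an UPDATE. This is only a sketch: the table dbo.MyTable, the JSON column JsonColumn, and the target column WeeklyValue are hypothetical names for illustration:
-- copy the extracted value into its own column, row by row
UPDATE t
SET t.WeeklyValue = x.val
FROM dbo.MyTable AS t
CROSS APPLY (
    SELECT JSON_VALUE(components.value, '$.values[1]') AS val
    FROM OPENJSON(t.JsonColumn, '$.views[0].sections[0].components') components
    WHERE JSON_VALUE(components.value, '$.id') = '3'
) AS x;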
I have a JSON document stored in a single column of type jsonb inside PostgreSQL, which looks like below:
{
"resourceType": "Bundle",
"type": "transaction",
"entry": [
{
"fullUrl": "urn:uuid:100",
"resource": {
"resourceType": "Encounter",
"id": "110",
"status": "planned",
"priority": {
"coding": [
{
"code": "ASAP"
}
]
},
"subject": {
"reference": "Patient/123"
},
"appointment": [
{
"reference": "Appointment/12213#42"
}
],
"diagnosis": [
{
"condition": {
"reference": "Condition/condReferenceValue"
},
"use": {
"coding": [
{
"system": "http://terminology.hl7.org/CodeSystem/diagnosis-role",
"code": "AD"
},
{
"system": "http://terminology.hl7.org/CodeSystem/diagnosis-role",
"code": "DD"
}
]
}
}
],
"hospitalization": {
"preAdmissionIdentifier": {
"system": "https://system.html"
}
},
"location": [
{
"location": {
"display": "Mumbai"
},
"status": "active"
},
{
"status": "planned"
}
]
},
"request": {
"method": "POST",
"url": "Encounter"
}
}
]
}
Now, I want to update the value of reference under the subject attribute. So, I tried the below statement but it throws an error:
update fhir.testing set names = jsonb_set(names,'{"subject":{"reference"','"Patient/1"',true) where id = 10;
Error:
SQL Error [22P02]: ERROR: malformed array literal: "{"subject":{"reference""
Detail: Unexpected array element.
I referred to this link but it didn't work out for me. How can I do it?
I don't use Postgres that much, but from what I read in the relevant jsonb_set example in the documentation of JSON functions (and since you want to update), shouldn't it be
jsonb_set(names, '{entry,0,subject,reference}', '"Patient/1"', false)
instead of
jsonb_set(names,'{"subject":{"reference"','"Patient/1"',true)
jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean]) → jsonb
Returns target with the section designated by path replaced by
new_value, or with new_value added if create_missing is true (default
is true) and the item designated by path does not exist. As with the
path oriented operators, negative integers that appear in path count
from the end of JSON arrays.
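As a quick illustration of that last point about negative path indexes (an example of mine, not from the question):
select jsonb_set('{"entry":[{"a":1},{"a":2}]}', '{entry,-1,a}', '99');
-- {"entry": [{"a": 1}, {"a": 99}]}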
EDIT
To explain the path used in jsonb_set, check this example.
jsonb_set('[{"f1":1,"f2":null},2,null,3]', '{0,f1}','[2,3,4]', false)
returns
[{"f1":[2,3,4],"f2":null},2,null,3]
As I understand it, if a sub-element in a complex JSON document is an array, you need to specify its index, e.g. 0, 1, 2, ...
EDIT
Always look very carefully at the structure of the JSON document. I am adding this because I did not initially see that subject is a child of resource, and that is what is causing your error.
So the correct path is actually '{entry,0,resource,subject,reference}'
The correct query for your requirement is:
update fhir.testing
set names= jsonb_set(names, '{entry,0,resource,subject,reference}', '"Patient/1"' , false)
where id = 10;
Explanation
jsonb_set takes 4 parameters:
target_json (jsonb) - accepts jsonb data. In your case it is the names column.
path (text[]) - accepts a text array. In your case it is '{entry,0,resource,subject,reference}'.
new_value (jsonb) - in your case you want to change it to '"Patient/1"'.
create_missing (boolean) - in your case it should be false, as you want to replace the existing value. If you want the reference to be created with the given value when it is not found, then set it to true.
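A quick way to sanity-check the path before running the UPDATE is to extract the current value with the #>> operator (purely a verification sketch):
select names #>> '{entry,0,resource,subject,reference}'
from fhir.testing
where id = 10;
-- should return Patient/123 for the document shown above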
The value is not valid JSON; try this out:
update fhir.testing set names = jsonb_set(names, '{"entry": [{"resource": {"subject":{"reference":"Patient/1"} }}]}',true) where id = 10;
You have to create valid JSON, closing every { and every [; yours was
'{"subject":{"reference"'
{
"metadata": {
"id": "2",
"uri": "3",
"type": "2"
},
"Number": "2323600002913",
"Date": "04/21/2009",
"postingDate": "00/00/0000",
"ata": {
"results": [
{
"metadata": {
"id": "r",
"uri": "e2",
"type": "s2"
},
"item": "000010",
"data":"ad"
}
]
}
}
I want to remove the metadata property from the above JSON message, and the output should be like below:
{
"Number": "2323600002913",
"Date": "04/21/2009",
"postingDate": "00/00/0000",
"ata": {
"results": [
{
"item": "000010",
"data":"ad"
}
]
}
}
I tried removeProperty(), which works for the root-level metadata, but the metadata inside results is not removed.
How can I use replace() in this case, or anything else, to remove only the metadata properties?
The simplest way is to use inline code, because even with a removeProperty() expression to remove the metadata under results, it will return the results array data rather than the whole JSON data, and you would then have to combine them, which is not convenient.
With inline code, you can refer to my picture below. The variable json is the value from the trigger body; just delete the node or key and return the json variable. This way, even if you want to delete many metadata entries in the array, you can add a for loop to delete them; just think of it as plain JS code.
Update: if you want to get the value from a variable, there is no supported expression to read a variable's value, so use the below expression.
var json = workflowContext.actions.Initialize_variable.inputs.variables[0].value;
And for how to loop over the array in the JSON, refer to my picture below.
I have a JSON with the following structure
{
"name": "name",
"id": [
"abcdef"
],
"input_dataobjects": [
{
"id": "someid1",
"name": "somename1",
"provider": "someprovider",
"datatype": "somedatatype1"
},
{
"name": "some_name2",
"datatype": "some_datatype2",
"id": "some_id2"
}
]
}
What I am trying to achieve
In input_dataobjects, if datatype == somedatatype1 then name = sonemewname1.
I could use the index into input_dataobjects since my JSON always has the same structure, but is there a different way to achieve this by parsing through input_dataobjects and finding the index to replace? I am using jq for the JSON operations.
I tried using the index, like .input_dataobjects[0].name="someting", because I always know the position of the datatype.
The simplest and perhaps most efficient solution to the problem as stated is:
.input_dataobjects |=
map( if .datatype == "somedatatype1"
then .name = "sonemewname1"
else . end )