I'm using a PostgreSQL DB for persistence. One of my table's columns has the json data type, and the stored data looks like this:
{
  "Terms": [
    {
      "no": 1,
      "name": "Vivek",
      "salary": 123
    },
    {
      "no": 2,
      "name": "Arjun",
      "salary": 123
    },
    {
      "no": 3,
      "name": "Ashok",
      "salary": 123
    }
  ]
}
I need to update any of no, name, or salary in the first Terms object only.
I load the data with native queries, and for better performance I should use a native query for the UPDATE as well. I tried PostgreSQL's jsonb_set function for the update, but was unable to get it to update anything.
I tried:
UPDATE table_name
SET terms = jsonb_set(terms->'Terms','{0,name}','"VVVV"',FALSE)
WHERE some condition
and the response message in pgAdmin is:
Query returned successfully: 0 rows affected, x msec execution time.
Can anyone help me with this one?
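For reference, "0 rows affected" means the WHERE clause matched no rows at all: PostgreSQL reports every matched row as affected, even when jsonb_set leaves the value unchanged. Separately, jsonb_set returns a modified copy of the whole value you pass it, so using terms->'Terms' as the target would overwrite the column with just the array; the path needs to start at Terms instead. A minimal sketch of the corrected statement, assuming the column is named terms and has the json type (hence the casts):

UPDATE table_name
SET terms = jsonb_set(terms::jsonb, '{Terms,0,name}', '"VVVV"', FALSE)::json
WHERE some condition;

The same path form works for the other keys, e.g. '{Terms,0,salary}' with a jsonb value such as '999'.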
I want to take the data from here: https://raw.githubusercontent.com/usnistgov/oscal-content/master/examples/ssp/json/ssp-example.json
which I've pulled into a MySQL table called "ssp_models", into a JSON column called 'json_data', and I need to add a new 'name' and 'type' entry into the 'parties' node, with a new uuid in the same format as the example.
So in my MySQL database table, "ssp_models", I have that record. Note that I should be able to write the data by somehow referencing "66c2a1c8-5830-48bd-8fdd-55a1c3a52888" as the record to modify.
All the examples I've seen online seem to force me to read the entire JSON out into a variable, make the addition, and then cram it back into the json_data column, which seems costly, especially with large JSON datasets.
Isn't there a simple way I can say
"INSERT INTO ssp_models JSON_INSERT <somehow burrow down to 'system-security-plan'.metadata.parties (name, type) VALUES ('Raytheon', 'organization') WHERE uuid = '66c2a1c8-5830-48bd-8fdd-55a1c3a52888'
I was looking at this other stackoverflow example for inserting into JSON:
How to create and insert a JSON object using MySQL queries?
However, that's basically useful when you are starting from scratch, vs. needing to add JSON data to data that already exists.
You may want to read https://dev.mysql.com/doc/refman/8.0/en/json-function-reference.html and explore each of the functions, and try them out one by one, if you're going to continue working with JSON data in MySQL.
I was able to do what you describe this way:
update ssp_models set json_data = json_array_append(
    json_data,
    '$."system-security-plan".metadata.parties',
    json_object('name', 'Bingo', 'type', 'farmer')
)
where uuid = '66c2a1c8-5830-48bd-8fdd-55a1c3a52888';
Then I checked the data:
mysql> select uuid, json_pretty(json_data) from ssp_models\G
*************************** 1. row ***************************
uuid: 66c2a1c8-5830-48bd-8fdd-55a1c3a52888
json_pretty(json_data): {
"system-security-plan": {
"uuid": "66c2a1c8-5830-48bd-8fdd-55a1c3a52888",
"metadata": {
"roles": [
{
"id": "legal-officer",
"title": "Legal Officer"
}
],
"title": "Enterprise Logging and Auditing System Security Plan",
"parties": [
{
"name": "Enterprise Asset Owners",
"type": "organization",
"uuid": "3b2a5599-cc37-403f-ae36-5708fa804b27"
},
{
"name": "Enterprise Asset Administrators",
"type": "organization",
"uuid": "833ac398-5c9a-4e6b-acba-2a9c11399da0"
},
{
"name": "Bingo",
"type": "farmer"
}
]
}
}
}
I started with data like yours, but for this test, I truncated everything after the parties array.
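Since your goal was also to give the new party a uuid in the same format as the existing entries, here is a sketch of the same statement using MySQL's built-in UUID() function to generate one (the 'Raytheon' / 'organization' values are the ones from your question):

update ssp_models set json_data = json_array_append(
    json_data,
    '$."system-security-plan".metadata.parties',
    json_object('name', 'Raytheon', 'type', 'organization', 'uuid', uuid())
)
where uuid = '66c2a1c8-5830-48bd-8fdd-55a1c3a52888';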
Background: I work for a company that basically sells passes. Every order placed by a customer will contain N passes.
Issue: I have these JSON event-transaction files coming into an S3 bucket on a daily basis from DocumentDB (MongoDB). Each JSON file is associated with the relevant type of event (insert, modify or delete) for every document key (which is an order in my case). The example below illustrates an "insert" event that came through to the S3 bucket:
{
  "_id": {
    "_data": "11111111111111"
  },
  "operationType": "insert",
  "clusterTime": {
    "$timestamp": {
      "t": 11111111,
      "i": 1
    }
  },
  "ns": {
    "db": "abc",
    "coll": "abc"
  },
  "documentKey": {
    "_id": {
      "$uuid": "abcabcabcabcabcabc"
    }
  },
  "fullDocument": {
    "_id": {
      "$uuid": "abcabcabcabcabcabc"
    },
    "orderNumber": "1234567",
    "externalOrderId": "12345678",
    "orderDateTime": "2020-09-11T08:06:26Z[UTC]",
    "attraction": "abc",
    "entryDate": {
      "$date": "2020-09-13"
    },
    "entryTime": {
      "$date": "04000000"
    },
    "requestId": "abc",
    "ticketUrl": "abc",
    "tickets": [
      {
        "passId": "1111111",
        "externalTicketId": "1234567"
      },
      {
        "passId": "222222222",
        "externalTicketId": "122442492"
      }
    ],
    "_class": "abc"
  }
}
As we see above, every JSON file might contain N passes, and every pass is in turn associated with an external ticket id, which is a separate column. I want to use Pentaho Kettle to read these JSON files and load the data into the DW. I am aware of the JSON Input step and the Row Normalizer step, which could transpose the "PassID 1", "PassID 2", "PassID 3"..."PassID N" columns into a single "Pass" column, and I would have to apply similar logic to the other column, "External ticket id". The problem with that approach is that it is quite static: in the JSON Input step I would need to "tell" Pentaho in advance how many passes are coming. What if tomorrow I have an order with 10 different passes? How can I do this dynamically to ensure the job will not break?
If you want a tabular output like

TicketUrl   Pass            ExternalTicketID
---------   -------------   ----------------
abc         PassID1Value1   ExTicketIDvalue1
abc         PassID1Value2   ExTicketIDvalue2
abc         PassID1Value3   ExTicketIDvalue3
and make the incoming values dynamic based on the JSON input file's values, then you can build a transformation for it. I found everything in the JSON Input step works dynamically; a sketch of the field configuration is below.
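A minimal sketch of the JSON Input step field configuration that achieves this; the field names on the left are assumptions, but the [*] wildcard in the JSONPath expressions is what makes the step emit one row per tickets element, however many an order has:

Field name          Path
----------------    ------------------------------------------
passId              $.fullDocument.tickets[*].passId
externalTicketId    $.fullDocument.tickets[*].externalTicketId

An order with 2 tickets then yields 2 rows and one with 10 yields 10, with no Row Normalizer and no change to the transformation. Note that the JSON Input step expects all of its paths to return the same number of values per file, so a single-occurrence field such as $.fullDocument.ticketUrl is best read in a separate JSON Input step and joined back onto the rows.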
I have a json column in a Postgres table.
The column contains the following JSON data:
{
  "data": {
    "id": "1234",
    "sites": [
      {
        "site": {
          "code": "1",
          "display": "Site1"
        }
      },
      {
        "site": {
          "code": "2",
          "display": "Site2"
        },
        "externalSite": true
      },
      {
        "site": {
          "code": "3",
          "display": "Site3"
        }
      }
    ]
  }
}
I need to create an update query that adds another attribute ('newAttribute' in the sample below) to all array items that have "externalSite": true; after running the update query, the second array element will be:
{
  "site": {
    "code": "2",
    "display": "Site2"
  },
  "externalSite": true,
  "newAttribute": true
}
The following query returns the array elements that need to be updated:
select * from myTable, jsonb_array_elements(data -> 'data' -> 'sites') sites
where sites -> 'externalSite' = 'true'
What is the syntax of the update query?
Thanks
Kobi
Assuming your table is called test and your column is called data, you can update it like so:
UPDATE test SET data = jsonb_set(
    data::jsonb,
    '{data,sites}',
    -- rebuild the sites array, tagging every element that has externalSite
    (SELECT jsonb_agg(CASE WHEN site ? 'externalSite'
                           THEN site || '{"newAttribute": true}'::jsonb
                           ELSE site
                      END)
     FROM jsonb_array_elements((data #> '{data,sites}')::jsonb) AS ja(site))
);
Note that I cast the data to jsonb, as there are more functions and operators available for manipulating jsonb than plain json. Also note that the new attribute is added as the JSON boolean true rather than the string "true", matching your expected output.
You can run the inner SELECT on its own to see what it is doing, but the basic idea is to re-create the sites array: jsonb_array_elements expands it into one row per element, the CASE expression appends the newAttribute key wherever externalSite exists, jsonb_agg aggregates the elements back into an array, and jsonb_set replaces the original sites array entirely with this newly computed version.
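A sketch of that preview SELECT, under the same test/data naming assumptions (on a multi-row table, add a WHERE clause to limit it to one row, otherwise jsonb_agg merges every row's arrays together):

SELECT jsonb_agg(CASE WHEN site ? 'externalSite'
                      THEN site || '{"newAttribute": true}'::jsonb
                      ELSE site
                 END) AS new_sites
FROM test
CROSS JOIN LATERAL jsonb_array_elements((data #> '{data,sites}')::jsonb) AS ja(site);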
I have a CouchDB database, which uses the Mango query language (which seems to be the same as Cloudant's query language).
I'm trying to search and compare two fields to each other and only return the relevant results when they're equal.
For example:
{
  "_id": "ACCEPT0",
  "_rev": "1-92ea4e727271aefd0a2befed0d4bb736",
  "OfferID": "OFFER0"
}
{
  "_id": "ACCEPT1",
  "_rev": "3-986ca6e717b225ac909d644de54d5f7d",
  "OfferID": "OFFER3"
}
{
  "_id": "OFFER0",
  "_rev": "1-2af5f5c7b1c59dd3f0997f748a367cb2",
  "From": "merchant1",
  "To": "customer1"
}
{
  "_id": "OFFER1",
  "_rev": "6-f0927c5d4f9fd8a2d2b602f1c265d6d5",
  "From": "merchant1",
  "To": "customer2"
}
I'm trying to come up with a query which will, in this example, return "OFFER0", since "OFFER0" exists in an "OfferID" field.
EDIT (clarification): The query needs to be able to select all the _id's which begin with OFFER and which exist in an OfferID field.
I know I can set this up with a view (as seen from: Cloudant query to return records where 2 fields are equal), but I need this in a Mango query as it'll be running over Hyperledger
You can easily return documents whose _id field starts with "OFFER" with the following query:
{
  "selector": {
    "_id": {
      "$regex": "^OFFER"
    }
  }
}
but this is likely to be inefficient, because Cloudant has to scan the whole database, testing each document's _id field against that regular expression.
A better way to design your data may be to have a type field which distinguishes between the document types in your database e.g.
{
  "_id": "OFFER0",
  "_rev": "1-2af5f5c7b1c59dd3f0997f748a367cb2",
  "type": "offer",
  "From": "merchant1",
  "To": "customer1"
}
and then a query to return all documents where type = 'offer' becomes:
{
  "selector": {
    "type": "offer"
  }
}
I don't fully understand the part of the question where you say "which exist in an OfferID field." but it's important to note that Cloudant Query & Mango can only query single documents - you can't say "get me all the documents which are offers, where another document has a certain property". Include all the data you need in each document and then you'll be able to query it cleanly.
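One addition to the type-field approach: Cloudant Query / CouchDB Mango will still scan the database for that selector unless type is indexed. A minimal sketch of the index definition, POSTed to the database's /_index endpoint (the index name here is an arbitrary choice):

{
  "index": {
    "fields": ["type"]
  },
  "name": "type-json-index",
  "type": "json"
}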
I am trying to update my MySQL table and insert JSON data into my table's JSON-datatype column using JSON_INSERT. Here is the structure of my column:
{
  "Data": [
    {
      "Devce": "ios",
      "Status": 1
    }
  ]
}
This is the query I am using to insert more data to this field.
UPDATE table SET `Value` = JSON_INSERT
(`Value`,'$.Data','{\"Device\":\"ios\",\"Status\":1}') WHERE Meta = 'REQUEST_APP'
This is supposed to update the field to this:
{
  "Data": [
    {
      "Devce": "ios",
      "Status": 1
    },
    {
      "Devce": "ios",
      "Status": 1
    }
  ]
}
But instead the result is:
0 rows affected. (Query took 0.0241 seconds.)
Any help regarding this would be appreciated.
JSON_ARRAY_APPEND serves your purpose better; see the JSON_ARRAY_APPEND docs. (JSON_APPEND is its deprecated alias and was removed in MySQL 8.0.)
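As a sketch against your query, keeping your table and column names: JSON_ARRAY_APPEND appends a value to the array at '$.Data', and JSON_OBJECT builds a real nested object. Your original statement had two problems: JSON_INSERT ignores paths that already exist (so '$.Data' was left untouched, hence 0 rows affected), and the escaped string argument would have been stored as a quoted JSON string rather than an object.

UPDATE `table`
SET `Value` = JSON_ARRAY_APPEND(
    `Value`,
    '$.Data',                                  -- append to the existing Data array
    JSON_OBJECT('Device', 'ios', 'Status', 1)  -- a JSON object, not a quoted string
)
WHERE Meta = 'REQUEST_APP';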