I want to update data in bulk: I have over 50 rows to be updated, held in an array of objects, in Node.js, using something like
https://github.com/felixge/node-mysql
and "How do I do a bulk insert in mySQL using node.js".
var updateData = [
  { a: '15', b: 1, c: '24', d: 9, e: 1, f: 0, g: 0, h: 5850, i: 78 },
  { a: '12', b: 1, c: '21', d: 9, e: 1, f: 0, g: 0, h: 55, i: 78 },
  { a: '13', b: 1, c: '34', d: 9, e: 1, f: 0, g: 0, h: 58, i: 78 },
  { a: '14', b: 1, c: '45', d: 9, e: 1, f: 0, g: 0, h: 585, i: 78 },
  { a: '16', b: 1, c: '49', d: 9, e: 1, f: 0, g: 0, h: 85, i: 78 }
];
My query is: update table set a = updateData.a, b = updateData.b, c = updateData.c, d = updateData.d, e = updateData.e, f = updateData.f where e = updateData.e
As far as I know, there is no direct way to bulk-update records in MySQL. But there is a workaround for this: you can execute multiple UPDATE statements in one query to achieve the desired result.
To do this, allow the connection to execute multiple statements when you create it, as this is disabled by default.
var connection = mysql.createConnection({
  host: dbConfig.host,
  user: dbConfig.user,
  password: dbConfig.password,
  database: dbConfig.database,
  multipleStatements: true
});
Then, construct the bulk update query in the syntax below from the input data you have.
Query1; Query2; Query3;
Say, for Instance,
update table set a='15', b=1, c='24', d=9, e=1, f=0, g=0, h=5850, i=78;
update table set a='12', b=1, c='21', d=9, e=1, f=0, g=0, h=55, i=78;
Then, execute the query as usual,
connection.query(sqlQuery, params, callback);
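For example, here is one sketch of building sqlQuery from updateData (the table name table and the choice of a as the key column in the WHERE clause are assumptions, since the question does not make the key explicit):
var mysql = require('mysql');

// Build one semicolon-separated string of UPDATE statements, one per row.
// mysql.format() escapes each value, so no separate params array is needed here.
var sqlQuery = updateData.map(function (row) {
  return mysql.format(
    'UPDATE `table` SET b = ?, c = ?, d = ?, e = ?, f = ?, g = ?, h = ?, i = ? WHERE a = ?',
    [row.b, row.c, row.d, row.e, row.f, row.g, row.h, row.i, row.a]
  );
}).join('; ');

connection.query(sqlQuery, function (err, results) {
  if (err) throw err;
  console.log('updated ' + updateData.length + ' rows');
});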
Hope this helps.
You can accomplish this by enabling the multiple statements feature in your MySQL connection. Then you can loop through your updateData and construct MySQL statements separated by ';'. You can see an example of this in this answer.
It's really not easy to bulk-update data using node-mysql, but here is an alternative if you can use the .map function in the frontend. Let me show you what I did with mine:
Just make a single update API endpoint and call it like this from your frontend:
updateData.map((item, key) => {
  // axios has no "path" method; use patch (or post/put, depending on your API)
  return axios.patch('/api/update', {
    a: item.a,
    b: item.b,
    c: item.c
  })
    .then(() => console.log('updated'))
    .catch((err) => console.log(err))
})
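Note that .map as written only fires the requests; if you need to know when every update has finished, you can collect the promises (same /api/update endpoint as above):
// Wait for all update requests to complete (or fail fast on the first error).
Promise.all(
  updateData.map((item) =>
    axios.patch('/api/update', { a: item.a, b: item.b, c: item.c })
  )
)
  .then(() => console.log('all rows updated'))
  .catch((err) => console.log(err))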
There are a couple of ifs, but:
if you have a unique constraint on column e, and
if you have default values for all columns in the target table which are not affected by this query,
then you can use this slightly nasty way:
const sql = `insert into table (a,b,c,d,e,f)
values ?
on duplicate key update
a = values(a),
b = values(b),
c = values(c),
d = values(d),
f = values(f)`
Then use the query variant with passed values:
connection.query(sql, [values], callback)
where values is an array of arrays holding the values for the a, b, c, d, e, f columns (note the extra wrapping array: it makes the single ? placeholder expand to the whole bulk VALUES list).
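For completeness, here is a sketch of reshaping the question's updateData into that shape; the column order must match the column list in the INSERT statement:
// Reshape the array of objects into an array of arrays, in (a,b,c,d,e,f) order.
const values = updateData.map((row) => [row.a, row.b, row.c, row.d, row.e, row.f])

// The extra wrapping array makes the single ? expand to the whole bulk VALUES list.
connection.query(sql, [values], (err, results) => {
  if (err) throw err
  console.log(results.affectedRows, 'rows written')
})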
A little late answering, but using MySQL JSON_TABLE can help. Here's a working example:
UPDATE person a
INNER JOIN (
  SELECT
    personId, addressType, addressId
  FROM JSON_TABLE('
    [
      {"personId": 318, "addressType": "Primary", "addressId": 712},
      {"personId": 319, "addressType": "Shipping", "addressId": 712}
    ]',
    '$[*]' COLUMNS(
      personId INT PATH '$.personId',
      addressType VARCHAR(10) PATH '$.addressType',
      addressId INT PATH '$.addressId')
  ) jt
) b
ON a.personId = b.personId
SET
  a.addressId = b.addressId,
  a.addressType = b.addressType;
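If you are driving this from node-mysql, here is a sketch of passing the rows in as a single query parameter (the table and column names follow the person/address example above rather than the original a..i columns; the mysql package substitutes ? on the client side, so the serialized JSON ends up as an ordinary string literal inside JSON_TABLE):
const sql = `
  UPDATE person a
  INNER JOIN (
    SELECT personId, addressType, addressId
    FROM JSON_TABLE(?, '$[*]' COLUMNS(
      personId INT PATH '$.personId',
      addressType VARCHAR(10) PATH '$.addressType',
      addressId INT PATH '$.addressId')) jt
  ) b ON a.personId = b.personId
  SET a.addressId = b.addressId, a.addressType = b.addressType`

// rows is an array of objects shaped like the JSON literal above
connection.query(sql, [JSON.stringify(rows)], callback)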
Related
I have JSON stored in a table. The JSON is nested and has the following structure
[
{
"name": "abc",
"ques": [
{
"qId": 100
},
{
"qId": 200
}
]
},{
"name": "xyz",
"ques": [
{
"qId": 100
},
{
"qId": 300
}
]
}
]
I am trying to update the qId value in the JSON: wherever qId is 100, I want to change it to 101. This is what I tried:
Update TABLE_NAME
set COLUMN_NAME = jsonb_set(COLUMN_NAME, '{ques,qId}', '101')
WHERE COLUMN_NAME->>'qId'=100
1st solution, simple but to be used carefully
Convert your JSON data to text and use the replace function:
Update TABLE_NAME
set COLUMN_NAME = replace(COLUMN_NAME :: text,'"qId": 100}', '"qId": 101}') :: jsonb
2nd solution, more elegant but more complex
jsonb_set cannot make several replacements in the same jsonb data at the same time. To do so, you need to create your own aggregate based on the jsonb_set function:
CREATE OR REPLACE FUNCTION jsonb_set(x jsonb, y jsonb, path text[], new_value jsonb)
RETURNS jsonb LANGUAGE sql AS $$
  SELECT jsonb_set(COALESCE(x, y), path, new_value);
$$;

CREATE OR REPLACE AGGREGATE jsonb_set_agg(x jsonb, path text[], new_value jsonb)
  (stype = jsonb, sfunc = jsonb_set);
Then you get your result with the following query:
UPDATE TABLE_NAME
SET COLUMN_NAME =
( SELECT jsonb_set_agg(COLUMN_NAME :: jsonb, array[(a.id - 1) :: text, 'ques', (b.id - 1) :: text], jsonb_build_object('qId', 101))
FROM jsonb_path_query(COLUMN_NAME :: jsonb, '$[*]') WITH ORDINALITY AS a(content, id)
CROSS JOIN LATERAL jsonb_path_query(a.content->'ques', '$[*]') WITH ORDINALITY AS b(content, id)
WHERE (b.content)->'qId' = to_jsonb(100)
)
Note that this query is not universal: it must break down the jsonb data according to its structure.
Note that jsonb_array_elements can be used in place of jsonb_path_query, but you will get an error with jsonb_array_elements when the jsonb data is not an array, whereas you won't get any error with jsonb_path_query in lax mode, which is the default mode.
Full test results in dbfiddle
You must specify the whole path to the value.
In this case your JSON is an array, so you need to address which element of this array you are trying to modify.
A direct approach (over your example) would be:
jsonb_set(
  jsonb_set(
    COLUMN_NAME,
    '{0,ques,0,qId}',
    '101'
  ),
  '{1,ques,0,qId}',
  '101'
)
Of course, if you want to modify every element of arrays of different lengths, you would need to elaborate on this approach, disassembling the array so that you can modify every contained element.
Given the following two table columns jsonb type:
dividend_actual
{
"dividends": [
{
"amount": "2.9800",
"balanceDate": "2020-06-30T00:00:00Z"
},
{
"amount": "4.3100",
"balanceDate": "2019-06-30T00:00:00Z"
}
],
"lastUpdated": "2020-11-16T14:50:51.289649512Z",
"providerUpdateDate": "2020-11-16T00:00:00Z"
}
dividend_forecast
{
"dividends": [
{
"amount": "2.3035",
"balanceDate": "2021-06-01T00:00:00Z"
},
{
"amount": "3.0452",
"balanceDate": "2022-06-01T00:00:00Z"
},
{
"amount": "3.1845",
"balanceDate": "2023-06-01T00:00:00Z"
}
],
"lastForecasted": "2020-11-13T00:00:00Z",
"providerUpdateDate": "2020-11-16T00:00:00Z"
}
I would like to merge both dividends arrays from dividend_actual and dividend_forecast, but before merging them I want to add an extra field (forecast) to every single object.
I did try the following:
SELECT
dividends
FROM
stock_financial AS f
INNER JOIN instrument AS i ON i.id = f.instrument_id,
jsonb_array_elements(
(f.dividend_forecast->'dividends' || jsonb '{"forecast": true}') ||
(f.dividend_actual->'dividends' || jsonb '{"forecast": false}')
) AS dividends
WHERE
i.symbol = 'ASX_CBA'
ORDER BY
dividends ->>'balanceDate' DESC;
The above query gives me the following results:
{"forecast":true}
{"forecast":false}
{"amount":"3.1845","balanceDate":"2023-06-01T00:00:00Z"}
{"amount":"3.0452","balanceDate":"2022-06-01T00:00:00Z"}
{"amount":"2.3035","balanceDate":"2021-06-01T00:00:00Z"}
{"amount":"2.9800","balanceDate":"2020-06-30T00:00:00Z"}
{"amount":"4.3100","balanceDate":"2019-06-30T00:00:00Z"}
But what I need instead is the following output:
{"amount":"3.1845","balanceDate":"2023-06-01T00:00:00Z","forecast":true}
{"amount":"3.0452","balanceDate":"2022-06-01T00:00:00Z","forecast":true}
{"amount":"2.3035","balanceDate":"2021-06-01T00:00:00Z","forecast":true}
{"amount":"2.9800","balanceDate":"2020-06-30T00:00:00Z","forecast":false}
{"amount":"4.3100","balanceDate":"2019-06-30T00:00:00Z","forecast":false}
It turns out that it is not possible to update multiple JSON objects within a JSON array in a single operation by default.
To be able to do that, a Postgres function needs to be created:
-- the params are the same as in aforementioned `jsonb_set`
CREATE OR REPLACE FUNCTION update_json_array_elements(target jsonb, path text[], new_value jsonb)
RETURNS jsonb language sql AS $$
-- aggregate the jsonb from parts created in LATERAL
SELECT jsonb_agg(updated_jsonb)
-- split the target array to individual objects...
FROM jsonb_array_elements(target) individual_object,
-- operate on each object and apply jsonb_set to it. The results are aggregated in SELECT
LATERAL jsonb_set(individual_object, path, new_value) updated_jsonb
$$;
The above function was suggested by kubak in this answer: https://stackoverflow.com/a/53712268/782390
Combined with this query:
SELECT
dividends
FROM
stock_financial AS f
INNER JOIN instrument AS i ON i.id = f.instrument_id,
jsonb_array_elements(
update_json_array_elements(f.dividend_forecast->'dividends', '{forecast}', 'true') ||
update_json_array_elements(f.dividend_actual->'dividends', '{forecast}', 'false')
) AS dividends
WHERE
i.symbol = 'ASX_CBA'
ORDER BY
dividends ->>'balanceDate' DESC;
I then get the following output, which is exactly what I need:
{"amount":"3.1845","forecast":true,"balanceDate":"2023-06-01T00:00:00Z"}
{"amount":"3.0452","forecast":true,"balanceDate":"2022-06-01T00:00:00Z"}
{"amount":"2.3035","forecast":true,"balanceDate":"2021-06-01T00:00:00Z"}
{"amount":"2.9800","forecast":false,"balanceDate":"2020-06-30T00:00:00Z"}
{"amount":"4.3100","forecast":false,"balanceDate":"2019-06-30T00:00:00Z"}
I am using MS SQL Server, and one column holds JSON data. I want to update the part of that JSON which is an array, by passing the id.
{
"customerName":"mohan",
"custId":"e35273d0-c002-11e9-8188-a1525f580dfd",
"feeds":[
{
"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e",
"feedName":"ccsdcdscsdc",
"format":"Excel",
"sources":[
{
"sourceId":69042417,
"name":"TV 2 Livsstil"
},
{
"sourceId":69042419,
"name":"Turk Max"
}
]
},
{
"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e",
"feedName":"dfgdfgdfgdfgsdfg",
"format":"XmlTV",
"sources":[
{
"sourceId":69042417,
"name":"TV 2 Livsstil"
},
{
"sourceId":69042419,
"name":"Turk Max"
}
]
}
]
}
Suppose I pass customerId and feedId: it should replace the whole feed with the feed I have passed in.
I tried the query below, but it did not help.
UPDATE
ExtractsConfiguration.dbo.Customers
SET
configJSON = JSON_MODIFY(configJSON,'$.feeds[]',{"feedName":"ccsdcdscsdc"})
WHERE
CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND json_query(configJSON,'$.feeds[].feedId'='57f221d0-c310-11e9-8af7-cf1cf42fc72e');
This, @mohan, is a tricky one, and I took it on as a challenge to myself. There is a way to update a nested JSON object's value like you're asking; however, it's not as straightforward as it seems.
Because you're working within an array, you need the array's index in order to update a nested value. In your case you don't know the index within the array, however, you do have a key-value you can reference, in this case, your feedName.
In order to update your value, you first need to "unpack" your JSON so that you can filter for a specific feedName, "ccsdcdscsdc" in your example.
Here is an example that you can run in SSMS that will get you moving in the right direction.
The first thing I created was a @Customers TABLE variable to mimic the data structure you showed in your example, and I inserted your sample data.
DECLARE @Customers TABLE ( CustomerId VARCHAR(50), configJSON VARCHAR(MAX) );
INSERT INTO @Customers ( CustomerID, configJSON ) VALUES ( '9ee07040-c001-11e9-b29a-55eb3439cd7c', '{"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]}' );
Running a SELECT against @Customers returns the following:
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| CustomerId | configJSON |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 9ee07040-c001-11e9-b29a-55eb3439cd7c | {"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]} |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Next, I matched your rules for the update: Update a nested JSON value that is restricted to a specific CustomerId (9ee07040-c001-11e9-b29a-55eb3439cd7c) and a feedName (ccsdcdscsdc).
Like I mentioned, we need to "unpack" the JSON first because we don't know the specific key (index) value that should be updated. The easiest way to accomplish both tasks (unpack/update) is to use a Common Table Expression (CTE).
So, here's how I did that:
;WITH Config_CTE AS (
SELECT * FROM @Customers AS Customer
CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
WHERE
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );
The CTE allows us to "unpack" (I made this word up as it seemed fitting) the JSON contained in configJSON, which then allows us to apply a filter against the feedName.
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
You'll also note that we included the CustomerId rule:
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
Both the CustomerId and feedName could easily be SQL variables.
So, what did this do? If we were to look at the Config_CTE resultset ( by changing the UPDATE... to SELECT * FROM Config_CTE ) we would see:
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+
| CustomerId | configJSON | key | value | type |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+
| 9ee07040-c001-11e9-b29a-55eb3439cd7c | {"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]} | 0 | {"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]} | 5 |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+
There is a bunch of information here, but what we really care about is the "key" column as this contains the feed index ( in this case 0 ) that we want to update.
With that, I was able to complete the request and UPDATE format from "Excel" to "MS Excel" for the "feed" with the feedName of "ccsdcdscsdc".
This guy ( note the use of Config_CTE.[key] ):
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );
Did it work? Let's look at the updated table's data.
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| CustomerId | configJSON |
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 9ee07040-c001-11e9-b29a-55eb3439cd7c | {"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"MS Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]} |
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Here's the updated JSON "beautified" (pretty sure I didn't make up that one).
{
"customerName": "mohan",
"custId": "e35273d0-c002-11e9-8188-a1525f580dfd",
"feeds": [{
"feedId": "57f221d0-c310-11e9-8af7-cf1cf42fc72e",
"feedName": "ccsdcdscsdc",
"format": "MS Excel",
"sources": [{
"sourceId": 69042417,
"name": "TV 2 Livsstil"
}, {
"sourceId": 69042419,
"name": "Turk Max"
}]
}, {
"feedId": "59bbd360-c312-11e9-8af7-cf1cf42fc72e",
"feedName": "dfgdfgdfgdfgsdfg",
"format": "XmlTV",
"sources": [{
"sourceId": 69042417,
"name": "TV 2 Livsstil"
}, {
"sourceId": 69042419,
"name": "Turk Max"
}]
}]
}
Well, there you have it, format for feedName "ccsdcdscsdc" has been updated from "Excel" to "MS Excel". I was not clear on what you were trying to update, so I used format for my testing/example.
I hope this gets you moving in the right direction with your task. Happy coding!
Here's the complete example that can be run in SSMS:
-- CREATE A CUSTOMERS TABLE TO MIMIC SCHEMA --
DECLARE @Customers TABLE ( CustomerId VARCHAR(50), configJSON VARCHAR(MAX) );
INSERT INTO @Customers ( CustomerID, configJSON ) VALUES ( '9ee07040-c001-11e9-b29a-55eb3439cd7c', '{"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]}' );
-- SHOW CURRENT DATA --
SELECT * FROM @Customers;
-- UPDATE "format" FROM "Excel" to "MS Excel" FOR feedName: ccsdcdscsdc --
WITH Config_CTE AS (
SELECT * FROM @Customers AS Customer
CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
WHERE
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );
-- SHOW UPDATED DATA --
SELECT * FROM @Customers;
EDIT:
i wanted to update the feed with the given feedId with the whole new
feed
To replace one "feed" with an entirely new feed, you may do the following:
-- REPLACE AN ENTIRE JSON ARRAY OBJECT --
DECLARE @MyNewJson NVARCHAR(MAX) = '{"feedId": "this_is_an_entirely_new_node","feedName": "ccsdcdscsdc","format": "NewFormat","sources": [{"sourceId": 1,"name": "New Source 1"},{"sourceId": 2,"name": "New Source 2"}]}';
WITH Config_CTE AS (
SELECT * FROM @Customers AS Customer
CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
WHERE
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + ']', JSON_QUERY( @MyNewJson ) );
After running this, the feeds now appear as:
{
"customerName": "mohan",
"custId": "e35273d0-c002-11e9-8188-a1525f580dfd",
"feeds": [
{
"feedId": "this_is_an_entirely_new_node",
"feedName": "ccsdcdscsdc",
"format": "NewFormat",
"sources": [
{
"sourceId": 1,
"name": "New Source 1"
},
{
"sourceId": 2,
"name": "New Source 2"
}
]
},
{
"feedId": "59bbd360-c312-11e9-8af7-cf1cf42fc72e",
"feedName": "dfgdfgdfgdfgsdfg",
"format": "XmlTV",
"sources": [
{
"sourceId": 69042417,
"name": "TV 2 Livsstil"
},
{
"sourceId": 69042419,
"name": "Turk Max"
}
]
}
]
}
Note the use of JSON_QUERY( @MyNewJson ) in the UPDATE. This is important.
From Microsoft's Docs:
JSON_QUERY without its optional second parameter returns only the
first argument as a result. Since JSON_QUERY always returns valid
JSON, FOR JSON knows that this result does not have to be escaped.
If you were to pass @MyNewJson without the JSON_QUERY, your new JSON would be escaped ( e.g., "customerName" becomes \"customerName\" ) as if it were being stored as plain text. JSON_QUERY will return unescaped, valid JSON, which is necessary in your case.
Also note that the only change I made to replace the entire feed vs. a single item value was switching
'$.feeds[' + Config_CTE.[key] + '].format'
to
'$.feeds[' + Config_CTE.[key] + ']'.
I'm using Sequelize 5 in a NodeJS/Express/Angular7 application using MySQL.
I have a table of images with a bidirectional hasMany relationship to a table of keywords through a join table.
I want to find all images that include keyword IDs in an array of IDs [keywordsAnd], excluding images that have keyword IDs in an array to exclude [keywordsNot].
The code right now is as follows:
Image.findAll({
  include: [{
    model: Keywords,
    as: 'Keywords',
    attributes: ['id', 'name'],
    where: {
      id: {
        [Op.and]: [
          { [Op.in]: keywordsAnd },
          { [Op.notIn]: keywordsNot }
        ]
      }
    }
  }]
});
This correctly fetches all images that have IDs in the keywordsAnd array, but the keywordsNot array is completely ignored.
This is the generated SQL (line breaks added for readability)
const objectsAnd: [ 430 ]
const objectsNot: [ 779 ]
Executing (default):
SELECT `image`.`id`, `image`.`status`, `image`.`image_type`, `image`.`embed`,
`image`.`raw_path`, `image`.`thumb_path`, `image`.`detail_path`, `image`.`story_title`,
`image`.`image_title`, `image`.`original_filename`, `image`.`description`,
`image`.`geo_info`, `image`.`year_taken`, `image`.`credit_info`,
`image`.`width`, `image`.`height`, `image`.`format`, `image`.`duration`,
`image`.`created_at`
AS `createdAt`, `image`.`updated_at` AS `updatedAt`, `keywords`.`id` AS `keywords.id`,
`keywords`.`name` AS `keywords.name`, `keywords`.`created_at` AS `keywords.createdAt`,
`keywords`.`updated_at` AS `keywords.updatedAt`, `keywords->image_keywords`.`id` AS
`keywords.image_keywords.id`, `keywords->image_keywords`.`image_id` AS
`keywords.image_keywords.image_id`, `keywords->image_keywords`.`keyword_id` AS
`keywords.image_keywords.keyword_id`, `keywords->image_keywords`.`created_at` AS
`keywords.image_keywords.createdAt`, `keywords->image_keywords`.`updated_at` AS
`keywords.image_keywords.updatedAt`
FROM `image` AS `image` INNER JOIN ( `image_keywords` AS `keywords->image_keywords`
INNER JOIN `keywords` AS `keywords`
ON `keywords`.`id` = `keywords->image_keywords`.`keyword_id`)
ON `image`.`id` = `keywords->image_keywords`.`image_id`
AND (`keywords`.`id` IN (430) AND `keywords`.`id` NOT IN (779));
I'm not clear if this is an issue with how I'm structuring the query, or a bug, or a complete misunderstanding on my part as to how this should work, but I'm really hoping for some guidance.
Given the sample data below, are you expecting results of ONLY image 1 (and NOT image 2)?
image id    keyword id
1           430
2           430
2           779
If so, then the problem is with your query. You would need to select the image ids which have keyword id 779 and use that as the basis of your NOT IN statement.
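In Sequelize terms, one way to express that is sketched below; it assumes the Sequelize class is in scope and uses the image_keywords join table and columns visible in the generated SQL, so adjust the names to your models:
// Sketch: keep the include for the AND keywords, and exclude whole images that
// own any of the NOT keywords via a literal subquery on the join table.
// keywordsNot must contain numeric ids only -- don't interpolate untrusted strings.
Image.findAll({
  include: [{
    model: Keywords,
    as: 'Keywords',
    attributes: ['id', 'name'],
    where: { id: { [Op.in]: keywordsAnd } }
  }],
  where: {
    id: {
      [Op.notIn]: Sequelize.literal(
        '(SELECT image_id FROM image_keywords WHERE keyword_id IN (' + keywordsNot.join(',') + '))'
      )
    }
  }
});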
I would like to update fields of the following JSON array (stored in a column with JSONB datatype) based on the objectId.
[
{
objectId: 'gDKn1jM5d',
objectType: 'type1',
posX: 50,
posY: 100,
},
{
objectId: '4dg5E8BDv',
objectType: 'type2',
posX: 50,
posY: 100,
},
{
objectId: 'ZmCwOf5N2',
objectType: 'type3',
posX: 100,
posY: 150,
}
]
In MongoDB I can use a simple update statement, but I was not able to find a way to do this in Postgres.
For example I would like to update all array elements with objectId 'ZmCwOf5N2' to the posX value 300 (that means it would only affect the 3rd array item).
I'm looking for a plain SQL statement in order to execute the update.
The postgres version is 11.
It is not possible for me to install extensions because I'm using a database as a service provider. However, in case there is no easy way to accomplish the update statement, I would be able to add a postgres function using e.g. C code.
UPDATE tbl t
SET js =
(
SELECT jsonb_agg(CASE WHEN elem->>'objectId' = 'ZmCwOf5N2'
THEN jsonb_set(elem, '{posX}', to_jsonb(int '300'))
ELSE elem
END) AS js1
FROM jsonb_array_elements(t.js) elem
)
WHERE t.js #> '[{"objectId": "ZmCwOf5N2"}]';
Note that this:
- adds the 'posX' key if it's missing
- updates rows even where nothing changes
To only update existing keys and only update the row if the update actually changes the value:
UPDATE tbl t
SET js =
(
SELECT jsonb_agg(CASE WHEN elem->>'objectId' = 'ZmCwOf5N2'
THEN jsonb_set(elem, '{posX}', to_jsonb(int '300'), false) -- !
ELSE elem
END) AS js1
FROM jsonb_array_elements(t.js) elem
)
WHERE t.js #> '[{"objectId": "ZmCwOf5N2"}]'
AND js <>
(
SELECT jsonb_agg(CASE WHEN elem->>'objectId' = 'ZmCwOf5N2'
THEN jsonb_set(elem, '{posX}', to_jsonb(int '300'), false)
ELSE elem
END) AS js1
FROM jsonb_array_elements(t.js) elem
); --!
See:
How to update complex jsonb column?
Update key value in jsonb array of objects