MySQL - append new array elements to JSON column

I have a table with a JSON column whose default is an empty array [].
Old table:

id  | myJson
----|--------
A1  | [1, 2]
A12 | []

I want the table updated to the below:

id  | myJson
----|------------------
A1  | [1, 2, 321, 432]
A12 | [222]
Tried:

INSERT INTO table (id, myJson)
VALUES ("A1", "[321, 432]"), ("A12", "[222]")
ON DUPLICATE KEY UPDATE myJson = JSON_ARRAY_APPEND(myJson, "$", myJson)

This query, and the others tried so far, did not produce the desired result.
What query can I use to append the new arrays to the old ones, as shown in the tables?

What version of MySQL are you using? Note that in your attempt, myJson inside ON DUPLICATE KEY UPDATE refers to the column's existing value, so JSON_ARRAY_APPEND appends the old array to itself rather than adding the new values.
One option is to use JSON_MERGE_PRESERVE or JSON_MERGE_PATCH (as needed):
INSERT INTO `table` (`id`, `myJson`)
VALUES ('A1', '[321, 432]'), ('A12', '[222]') AS `new`
ON DUPLICATE KEY UPDATE
`table`.`myJson` = JSON_MERGE_PRESERVE(`table`.`myJson`, `new`.`myJson`);
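Note that the row alias syntax (VALUES ... AS `new`) requires MySQL 8.0.19 or later, which is why the version matters. For older 8.0 releases, a sketch using the VALUES() function (deprecated as of 8.0.20, but available there) performs the same merge:

-- Sketch for MySQL 8.0 releases before 8.0.19: VALUES(`myJson`)
-- refers to the value proposed for insertion by the VALUES list.
INSERT INTO `table` (`id`, `myJson`)
VALUES ('A1', '[321, 432]'), ('A12', '[222]')
ON DUPLICATE KEY UPDATE
`myJson` = JSON_MERGE_PRESERVE(`myJson`, VALUES(`myJson`));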

Related

Insert query for foreign key MySQL Python script

I want to write a script for data migration to another DB. My Python script first reads and then inserts in bulk, but I have an issue with the foreign key: incident_number is the primary key of the table I read from and the foreign key of the table I insert into. How do I insert incident_number properly?
mycursor.execute("SELECT incident_category_id,incident_number,org_id_id FROM `internal_incident_incidentcreationinfo`")
IncidentMaster = []
columns = tuple([d[0] for d in mycursor.description])
for row in mycursor:
    IncidentMaster.append(dict(zip(columns, row)))
for incident_data in IncidentMaster:
    sql = "INSERT INTO internal_incident_historicalmultiplecategorylist(incident_category_id,incident_number_id,org_id_id) VALUES(%s, %s, %s)"
    values = [(str(incident_data['incident_category_id']), str(incident_data['incident_number']), str(incident_data['org_id_id']))]
    mycursor1.executemany(sql, values)
    connection1.commit()
The problem is the field name: the SELECT table calls it incident_number, while the INSERT table calls it incident_number_id.
You should collect all the values into a single list, then call executemany() once with that complete list. mycursor.fetchall() will do this automatically for you.
There's no need to convert the values to strings.
mycursor.execute("SELECT incident_category_id,incident_number,org_id_id FROM `internal_incident_incidentcreationinfo`")
values = mycursor.fetchall()
sql = "INSERT INTO internal_incident_historicalmultiplecategorylist(incident_category_id,incident_number_id,org_id_id) VALUES(%s, %s, %s)"
mycursor1.executemany(sql, values)
connection1.commit()

Transform records of a column from int to JSON?

I have a column whose records are of int datatype. I want to first convert them to varchar so I can represent them as JSON inside the table. Is this possible?
Edit:
I created a test table so I could explain better what I meant with my question.
This is the query I made:
SELECT TOP (1000) [test_id], [ToJsonTestValue]
FROM [Test].[dbo].[Test]
What I wish is to convert the column "ToJsonTestValue" to actual JSON. It is of datatype int; the intent is to alter it to varchar and then represent it as JSON.
Solution
I was overthinking this one. After altering the column to varchar (as in the answer below), I just needed an UPDATE like this:
UPDATE dbo.TestTwo
SET ToJsonTestValue = '["' + ToJsonTestValue + '"]'
The output is then a JSON array string such as ["1"].
Original answer:
If I understand the question correctly, what you may try is to generate a JSON content for each row using FOR JSON PATH. The following basic example is a possible solution to your problem:
Table:
CREATE TABLE Test (
TestId int,
ToJsonTestValue int
)
INSERT INTO Test (TestId, ToJsonTestValue)
VALUES
(2, 1),
(3, 1),
(4, 2),
(5, 3)
ALTER and UPDATE table:
ALTER TABLE Test ALTER COLUMN ToJsonTestValue varchar(1000)
UPDATE Test
SET ToJsonTestValue = (SELECT ToJsonTestValue FOR JSON PATH)
Result:
TestId  ToJsonTestValue
---------------------------------
2       [{"ToJsonTestValue":"1"}]
3       [{"ToJsonTestValue":"1"}]
4       [{"ToJsonTestValue":"2"}]
5       [{"ToJsonTestValue":"3"}]
Update:
Note that with FOR JSON you can't generate a JSON array of scalar values ([1, 2, 3]), but you may try an approach using JSON_MODIFY() (of course, concatenating strings to build the expected output is always an option):
ALTER TABLE Test ALTER COLUMN ToJsonTestValue varchar(1000)
UPDATE Test
SET ToJsonTestValue = JSON_MODIFY('[]', 'append $', ToJsonTestValue)
Result:
TestId  ToJsonTestValue
-----------------------
2       ["1"]
3       ["1"]
4       ["2"]
5       ["3"]

MySQL JSON reverse search

I have a MySQL table with a column of type json. The values of this column are JSON arrays, not JSON objects. I need to find the records of this table where at least one value in the JSON array is a substring of a given string/phrase.
Let's suppose the table is looks like this:
create table if not exists test(id int, col json);
insert into test values (1, '["ab", "cd"]');
insert into test values (2, '["ef", "gh", "ij"]');
insert into test values (3, '["xyz"]');
If the input string/phrase is "acf ghi z", the second row must be returned as the result, because "gh" is a substring of the input. I read a lot about json_contains, json_extract, json_search, and even json_overlaps, but couldn't manage to solve this problem.
What is the correct sql syntax to retrieve the related rows?
MySQL version is 8.0.20
You can use json_table() to extract the JSON array as rows in a table. Then just filter:
select *
from test t cross join
json_table(t.col, '$[*]' columns (str varchar(255) path '$')) j
where 'acf ghi z' like concat('%', j.str, '%');
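One side note: the cross join yields one output row per matching array element, so a row whose array contains several matches appears more than once. A sketch with DISTINCT collapses the duplicates:

-- Sketch: DISTINCT keeps a single copy of each row even when
-- several of its array elements match the input phrase.
select distinct t.*
from test t cross join
     json_table(t.col, '$[*]' columns (str varchar(255) path '$')) j
where 'acf ghi z' like concat('%', j.str, '%');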

Query Postgres for number of items in JSON

I am running Postgres 9.3 and have a problem with a query involving a JSON column that I cannot seem to crack.
Let's assume this is the table:
# CREATE TABLE aa (a int, b json);
# INSERT INTO aa VALUES (1, '{"f1":1,"f2":true}');
# INSERT INTO aa VALUES (2, '{"f1":2,"f2":false,"f3":"Hi I''m \"Dave\""}');
# INSERT INTO aa VALUES (3, '{"f1":3,"f2":true,"f3":"Hi I''m \"Popo\""}');
I now want to create a query that returns all rows that have exactly three items/keys in the root node of the JSON column (i.e., rows 2 and 3). Whether the JSON is nested doesn't matter.
I tried to use json_object_keys and json_each but couldn't get it to work.
json_each(json) should do the job. Counting only root elements:
SELECT aa.*
FROM aa, json_each(aa.b) elem
GROUP BY aa.a -- possible, because it's the PK!
HAVING count(*) = 3;
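An equivalent per-row formulation (a sketch against the same table; json_object_keys() is available in 9.3 as well) counts the keys in a correlated subquery instead of grouping:

-- Sketch: json_object_keys() returns one row per root key,
-- so counting its rows per aa row gives the key count.
SELECT aa.*
FROM aa
WHERE (SELECT count(*) FROM json_object_keys(aa.b)) = 3;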

Cannot extract element from a scalar - PostgreSQL error

I'm getting this error when trying to access data in a JSON object; does anybody know what is causing it?
This is the query:
SELECT id, data FROM cities WHERE data->'location'->>'population' = '270816'
This is the JSON object:
location": {
"population": 270816,
"type": "city"
}
Any help would be really appreciated. Thanks.
I was able to get this SELECT to work in Postgres 9.3.1. Here's an sqlfiddle which illustrates that.
Here is the DDL and INSERT statement I used in the sqlfiddle:
create table cities
(
id serial,
data json
);
insert into cities (data) VALUES
('{
"location": {
"population": 270816,
"type": "city"
}
}'
);
What version of Postgres are you using? How are you inserting the JSON? What's the DDL for your cities table?
I suspect it may be an issue with the way you are inserting the JSON data. Try inserting it similarly to the way I do in the sqlfiddle above and see if that works for you, i.e. as a pure SQL string, but one with valid JSON inside, into a column defined as json.
Just had what sounds like the same issue on Postgres 9.6.6. Improper string escaping caused mysterious JSONB behavior. Using pgAdmin4,
CREATE TABLE json_test (json_data JSONB);
INSERT INTO json_test (json_data) VALUES ('"{\"id\": \"test1\"}"');
INSERT INTO json_test (json_data) VALUES ('{"id": "test2"}');
SELECT json_data, json_data->>'id' as id FROM json_test;
returns pgAdmin4 output showing a baffling failure to find id test2. It turns out the pgAdmin4 display is misleading; the situation becomes clear using the text display from psql:
db=> CREATE TABLE json_test (json_data JSONB);
CREATE TABLE
db=> INSERT INTO json_test (json_data) VALUES ('"{\"id\": \"test1\"}"');
INSERT 0 1
db=> INSERT INTO json_test (json_data) VALUES ('{"id": "test2"}');
INSERT 0 1
db=> SELECT json_data, json_data->>'id' as id FROM json_test;
json_data | id
-----------------------+-------
"{\"id\": \"test1\"}" |
{"id": "test2"} | test2
(2 rows)
Here it is obvious that the first row was inserted as a string that merely looks like JSON, not as a nested JSON object.
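If such string-wrapped rows have already been stored, a possible cleanup (a sketch, assuming the json_test table above) is to unwrap the string scalars and re-parse them as real JSON:

-- Sketch: jsonb_typeof() identifies rows stored as JSON strings;
-- #>> '{}' unwraps the string scalar to its raw text, which the
-- ::jsonb cast then parses into a proper JSON object.
UPDATE json_test
SET json_data = (json_data #>> '{}')::jsonb
WHERE jsonb_typeof(json_data) = 'string';

SELECT json_data, json_data->>'id' AS id FROM json_test;
-- Both rows now return their id.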