How do you escape double quotes and brackets when querying a JSON column containing an array of objects?
This works when I run it...
SELECT boarded, boarded->>"$[0].division.id" AS divisionId FROM onboarder
But this doesn't...
SELECT boarded, boarded->>"$[*].division.id" AS divisionId FROM onboarder
I thought double arrows escaped everything and brought back only the value. This is what I have...
The ->> operator does not escape anything. It just converts the result to a scalar data type, like text or integer. It's the equivalent of doing JSON_UNQUOTE(JSON_EXTRACT(...)):
Unquotes JSON value and returns the result as a utf8mb4 string.
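To see what "unquoting" actually means, here is a small client-side sketch in Python, using the standard json module as a stand-in for MySQL's behavior (the one-row document is hypothetical): extraction yields a JSON value, which for a string is still quoted; unquoting strips the quotes and leaves a plain scalar string.

```python
import json

# Hypothetical document mirroring the question's data
doc = '[{"division": {"id": "8ac7a"}}]'

# JSON_EXTRACT(doc, '$[0].division.id') yields a JSON value:
# the *quoted* string "8ac7a" -- still a JSON document
extracted = json.dumps(json.loads(doc)[0]["division"]["id"])
print(extracted)   # "8ac7a"  (quotes included)

# JSON_UNQUOTE(...) -- the extra step ->> performs -- converts it
# to a plain scalar string
unquoted = json.loads(extracted)
print(unquoted)    # 8ac7a
```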
We can demonstrate the difference between -> and ->> by using the result set to create a new table, and inspecting the data types it creates.
create table t as SELECT boarded, boarded->"$[*].division.id" AS divisionId FROM onboarder;
show create table t;
CREATE TABLE `t` (
`boarded` json DEFAULT NULL,
`divisionId` json DEFAULT NULL
);
select * from t;
+------------------------------------+--------------+
| boarded | divisionId |
+------------------------------------+--------------+
| [{"division": {"id": "8ac7a..."}}] | ["8ac7a..."] |
+------------------------------------+--------------+
Note that divisionId is a JSON document, which is an array.
If we use ->> this is what happens:
create table t as SELECT boarded, boarded->>"$[*].division.id" AS divisionId FROM onboarder;
show create table t;
CREATE TABLE `t` (
`boarded` json DEFAULT NULL,
`divisionId` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin
);
select * from t;
+------------------------------------+--------------+
| boarded | divisionId |
+------------------------------------+--------------+
| [{"division": {"id": "8ac7a..."}}] | ["8ac7a..."] |
+------------------------------------+--------------+
There's no visible difference, because the square brackets are still present. But the latter is stored as a longtext data type.
Re your comment:
How can I return divisionId as a value so that it's not an array and quoted?
You used $[*] in your query, and the meaning of this pattern is to return all elements of the array, as an array. To get a single value, you need to query a single element of the array, as in your first example:
boarded->>'$[0].division.id'
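The difference between the two paths can be mimicked outside MySQL. A minimal Python sketch (with made-up ids) of what $[*] and $[0] each select:

```python
import json

# Hypothetical column value with two array elements
boarded = json.loads(
    '[{"division": {"id": "8ac7a"}}, {"division": {"id": "9bd8b"}}]'
)

# $[*].division.id -- every element: the result is always an array
all_ids = [obj["division"]["id"] for obj in boarded]
print(all_ids)     # ['8ac7a', '9bd8b']

# $[0].division.id -- one element: the result is a single scalar
first_id = boarded[0]["division"]["id"]
print(first_id)    # 8ac7a
```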
This would be so much easier if you didn't use JSON, but stored your data in a traditional table, with one division on its own row, and each field of the division in its own column.
CREATE TABLE divisions (
division_id VARCHAR(...) PRIMARY KEY,
merchant ...
);
CREATE TABLE merchants (
merchant_id ... PRIMARY KEY,
division_id VARCHAR(...),
FOREIGN KEY (division_id) REFERENCES divisions(division_id)
);
The more examples I see of how developers try to use JSON in MySQL, the more I'm convinced it's a bad idea.
I created a table with the following types:
CREATE OR REPLACE TABLE original_table (
id INT NOT NULL PRIMARY KEY,
f FLOAT,
d DOUBLE
);
And I inserted:
INSERT INTO original_table VALUES(1, 2.2, 2.2);
If I want to make a simple copy of this table, I can use the CREATE TABLE ... SELECT statement, but in addition I would like to apply an expression to each field at the time of making the "copy", that is:
CREATE TABLE backup_table SELECT f + 1 as "f", d + 1 as "d" FROM original_table;
Both fields are assigned as DOUBLE:
SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_SCHEMA = 'test_db' AND
TABLE_NAME = 'backup_table' AND
(COLUMN_NAME = 'd' or COLUMN_NAME = 'f');
+-------------+-----------+
| COLUMN_NAME | DATA_TYPE |
+-------------+-----------+
| f           | double    |
| d           | double    |
+-------------+-----------+
But the field "f" was calculated with single precision (FLOAT), while the field "d" was calculated with double precision (DOUBLE), yet both were saved as DOUBLE. Is there some rule at work here? Should field "f" have been assigned FLOAT rather than DOUBLE?
The only thing I could find in the documentation was:
In this case, the column in the table resulting from the expression
has type INT or BIGINT depending on the length of the integer
expression. If the maximum length of the expression does not fit in an
INT, BIGINT is used instead. The length is taken from the max_length
value of the SELECT result set metadata (see C API Basic Data
Structures)...
this too:
Some conversion of data types might occur. For example, the
AUTO_INCREMENT attribute is not preserved, and VARCHAR columns can
become CHAR columns. Retained attributes are NULL (or NOT NULL) and,
for those columns that have them, CHARACTER SET, COLLATION, COMMENT,
and the DEFAULT clause.
But I didn't find anything related to floating point expressions.
Note: I know that I can specify the data type explicitly as the documentation says, but what I mainly want to know is what the default behavior is, as it happens with expressions that have integer types.
Thanks.
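For what it's worth, the rounding difference between the two precisions is easy to see outside MySQL. A quick Python sketch of what a FLOAT column does to the value 2.2 (this illustrates the precision side of the question, not MySQL's type-derivation rule):

```python
import struct

# Round-trip 2.2 through IEEE-754 single precision,
# the way a FLOAT column would store it
as_float = struct.unpack('f', struct.pack('f', 2.2))[0]

print(2.2)        # the DOUBLE value, prints as 2.2
print(as_float)   # slightly above 2.2: single precision
                  # cannot represent 2.2 exactly
```

So when `f + 1` is evaluated in double precision, the single-precision rounding of the stored value becomes visible, whereas `d + 1` starts from the exact-looking double.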
I have a table of users similar to the statement below, (relevant fields included)
CREATE TABLE Users (
ID INT NOT NULL AUTO_INCREMENT,
Username VARCHAR(30) NOT NULL,
IsActive BIT NOT NULL DEFAULT 1,
PRIMARY KEY (ID)
);
When I query the table with a simple SELECT statement, I get the fields back exactly as expected.
+----+-----------+----------+
| ID | Username  | IsActive |
+----+-----------+----------+
| 42 | CoolGuy92 | 1        |
+----+-----------+----------+
That's all well and good. If, though, I try to run the following query, and send its output to the browser for use, I get this error on the client side when trying to parse the return: Uncaught SyntaxError: Unexpected token in JSON at position X.
SELECT JSON_OBJECT(
"ID", Users.ID,
"Username", Users.Username,
"IsActive", Users.IsActive
)
FROM Users
What is causing this parsing error?
The parsing error occurs because your IsActive column is the BIT data type. BIT columns have a charset and collation of binary, and are thus stored as binary: an apparent value of 1 is actually stored as 0x01. During a normal query, MySQL outputs the value of a BIT column using the schema's default character set, often treating it as an integer. So in most cases an apparent value of 1 becomes 0x31 and 0 becomes 0x30, which keeps the returned values readable and easily usable as numbers or strings.
In the case of JSON_OBJECT though,
Strings produced by converting JSON values have a character set of utf8mb4 and a collation of utf8mb4_bin
By making this conversion, the field's literal value is used and is output as \u0001. This is not escaped in any way, so when your browser attempts to parse the output, it fails when it reaches such a value.
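The failure is easy to reproduce outside the browser with any strict JSON parser. A Python sketch (the payload below is a hand-built stand-in for what the answer describes MySQL emitting, with the raw control character inside the string):

```python
import json

# Raw, unescaped U+0001 inside a JSON string -- invalid JSON,
# since control characters must be escaped
bad = '{"ID": 42, "IsActive": "\x01"}'
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print("parse failed:", e)

# After IsActive + 0, the value arrives as a plain number
good = '{"ID": 42, "IsActive": 1}'
print(json.loads(good))   # {'ID': 42, 'IsActive': 1}
```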
To get around this, you can cast your BIT columns to another numeric data type in your JSON_OBJECT call, or simply add 0 to them.
SELECT JSON_OBJECT(
"ID", Users.ID,
"Username", Users.Username,
"IsActive", Users.IsActive + 0
)
FROM Users
Just started playing with JSON_VALUE in SQL Server. I am able to pull values from name/value pairs of JSON but I happen to have an object that looks like this:
["first.last@domain.com"]
When I attempt what works for name/value pairs:
SELECT TOP 1
jsonemail,
JSON_VALUE(jsonemail, '$') as pleaseWorky
FROM MyTable
I get back the full input, not first.last@domain.com. Am I out of luck? I don't control the upstream source of the data. I think it's a string collection being converted into a JSON payload. If it were name: first.last@domain.com I would be able to get it with $.name.
Thanks in advance.
It is a JSON array, so you just need to specify its index, i.e. 0.
Please try the following solution.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, jsonemail NVARCHAR(MAX));
INSERT INTO @tbl (jsonemail) VALUES
('["first.last@domain.com"]');
-- DDL and sample data population, end
SELECT ID
, jsonemail AS [Before]
, JSON_VALUE(jsonemail, '$[0]') as [After]
FROM @tbl;
Output
+----+---------------------------+-----------------------+
| ID | Before | After |
+----+---------------------------+-----------------------+
| 1 | ["first.last@domain.com"] | first.last@domain.com |
+----+---------------------------+-----------------------+
From the docs:
Array elements. For example, $.product[3]. Arrays are zero-based.
So you need JSON_VALUE(..., '$[0]') when the root is an array and you want the first value.
To break it out into rows, you would need OPENJSON:
SELECT TOP 1
jsonemail
,j.[value] as pleaseWorky
FROM MyTable
CROSS APPLY OPENJSON(jsonemail) j
From the docs I see an example:
SELECT json_mergepatch(po_document, '{"Special Instructions":null}'
RETURNING CLOB PRETTY)
FROM j_purchaseorder;
But when I try this code in SQL Developer, I get a squiggly line under CLOB and an error when I run the query.
It works in Oracle 18c:
SELECT json_mergepatch(
po_document,
'{"Special Instructions":null}'
RETURNING CLOB PRETTY
) AS updated_po_document
FROM j_purchaseorder;
Which for the test data:
CREATE TABLE j_purchaseorder( po_document CLOB CHECK ( po_document IS JSON ) );
INSERT INTO j_purchaseorder ( po_document )
VALUES ( '{"existing":"value", "Special Instructions": 42}' );
Outputs:
UPDATED_PO_DOCUMENT
-------------------
{
  "existing" : "value"
}
Removing the Special Instructions attribute as per the documentation you linked to:
When merging object members that have the same field:
If the patch field value is null then the field is dropped from the source — it is not included in the result.
Otherwise, the field is kept in the result, but its value is the result of merging the source field value with the patch field value. That is, the merging operation in this case is recursive — it dives down into fields whose values are themselves objects.
db<>fiddle here
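For reference, json_mergepatch follows JSON Merge Patch (RFC 7386) semantics. A sketch of the algorithm in Python, independent of Oracle:

```python
def json_merge_patch(target, patch):
    """RFC 7386 merge-patch: null in the patch deletes a field,
    objects merge recursively, anything else replaces wholesale."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)       # null drops the field
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

doc = {"existing": "value", "Special Instructions": 42}
print(json_merge_patch(doc, {"Special Instructions": None}))
# {'existing': 'value'}
```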
I have a user table with user_id and user_details columns. It contains JSON data stored as a string, as shown below:
[{"name":"question-1","value":"sachin","label":"Enter your name?"},
{"name":"question-2","value":"abc@example.com","label":"Enter your email?"},
{"name":"question-3","value":"xyz","label":"Enter your city?"}]
I have tried json_extract, but it only returns the result when the JSON contains a single object, as shown below:
{"name":"question-1","value":"sachin","label":"Enter your name?"}
then it returns the result as:
Name | Label
question-1 | Enter your name?
Expected Result :
I want to extract every name and label from the JSON in a SQL query.
Example-1:
Consider that we have the following data in user_details column,
[{"name":"question-1","value":"sachin","label":"Enter your name?"},
{"name":"question-2","value":"abc@example.com","label":"Enter your email?"},
{"name":"question-3","value":"xyz","label":"Enter your city?"}]
then the SQL query should return the result in the following format:
Name | Label
question-1 | Enter your name?
question-2 | Enter your email?
question-3 | Enter your city?
How to get this using JSON_EXTRACT in MySQL?
I assume that you are not using a table.
SET @data = '[{"name":"question-1","value":"sachin","label":"Enter your name?"},
{"name":"question-2","value":"abc@example.com","label":"Enter your email?"},
{"name":"question-3","value":"xyz","label":"Enter your city?"}]';
SELECT JSON_EXTRACT(@data,'$[*].name') AS "name", JSON_EXTRACT(@data,'$[*].label') AS "label";
It will return:
name | label
["question-1", "question-2", "question-3"] | ["Enter your name?", "Enter your email?", "Enter your city?"]
SQL should be like below according to your table and column name:
SELECT JSON_EXTRACT(user_details,'$[*].name') AS "name", JSON_EXTRACT(user_details,'$[*].label') AS "label" FROM user;
You can match them up by looping over the arrays. I do not know if this is the best way, but it satisfies my needs.
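For example, pairing the two parallel arrays client-side in Python (zipping them row by row):

```python
import json

# The two arrays returned by the JSON_EXTRACT query above
names  = json.loads('["question-1", "question-2", "question-3"]')
labels = json.loads('["Enter your name?", "Enter your email?", "Enter your city?"]')

# Pair element i of each array to rebuild the rows
for name, label in zip(names, labels):
    print(name, "|", label)
```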
Another approach, given in How to extract rows from a json array using the mysql udf json_extract 0.4.0?, is to parse the JSON yourself with common_schema. Pretty tricky if you are not used to complex SQL.
You could create your own aggregated table, as proposed in the topic List all array elements of a MySQL JSON field, if you know how many elements the field will contain, but I guess this is not your case.
However, as mentioned in both answers, it seems better not to store such JSON lists in your SQL database. Maybe you could make a related table containing one row per dictionary and link it to your main table with a foreign key.
I was working on a report where there was a big JSON array in one column. I modified the data model to store a 1-to-many relationship instead of storing everything in one single column. To do this, I had to use a WHILE loop in a stored procedure, since I do not know the maximum size:
DROP PROCEDURE IF EXISTS `test`;
DELIMITER #
CREATE PROCEDURE `test`()
PROC_MAIN:BEGIN
DECLARE numNotes int;
DECLARE c int;
DECLARE pos varchar(10);
SET c = 0;
SET numNotes = (SELECT
ROUND (
(
LENGTH(debtor_master_notes)
- LENGTH( REPLACE ( debtor_master_notes, "Id", "") )
) / LENGTH("Id")
) AS countt FROM debtor_master
order by countt desc Limit 1);
DROP TEMPORARY TABLE IF EXISTS debtorTable;
CREATE TEMPORARY TABLE debtorTable(debtor_master_id int(11), json longtext, note int);
WHILE(c <numNotes) DO
SET pos = CONCAT('$[', c, ']');
INSERT INTO debtorTable(debtor_master_id, json, note)
SELECT debtor_master_id, JSON_EXTRACT(debtor_master_notes, pos), c+1
FROM debtor_master
WHERE debtor_master_notes IS NOT NULL AND debtor_master_notes LIKE '%[%' AND JSON_EXTRACT(debtor_master_notes, pos) IS NOT NULL;
SET c = c + 1;
END WHILE;
SELECT * FROM debtorTable;
END PROC_MAIN #
DELIMITER ;
You don't use JSON_EXTRACT(). You use JSON_TABLE():
mysql> create table mytable ( id serial primary key, data json);
Query OK, 0 rows affected (0.01 sec)
mysql> insert into mytable set data = '[{"name":"question-1","value":"sachin","label":"Enter your name?"},
'> {"name":"question-2","value":"abc@example.com","label":"Enter your email?"},
'> {"name":"question-3","value":"xyz","label":"Enter your city?"}]';
Query OK, 1 row affected (0.00 sec)
mysql> SELECT j.* FROM mytable,
JSON_TABLE(data, '$[*]' COLUMNS (
name VARCHAR(20) PATH '$.name',
label VARCHAR(50) PATH '$.label'
)) AS j;
+------------+-------------------+
| name | label |
+------------+-------------------+
| question-1 | Enter your name? |
| question-2 | Enter your email? |
| question-3 | Enter your city? |
+------------+-------------------+
JSON_TABLE() requires MySQL 8.0.4 or later. If you aren't running at least that version, you will have to upgrade.
Honestly, if you need to access the individual fields, it's less work to store your data in normal columns, and avoid using JSON.