Force check constraints to be evaluated before computed columns - json

I want to have a JSON column in a table, and a (persisted) computed column that extracts useful information from the JSON data.
I want to use a "strict" JSON path, but I also want to check that the path exists in the JSON so that the error message is specific to the table and isn't just about the illegal JSON path.
CREATE TABLE DataWithJSON (
    DataID BIGINT,
    DataJSON NVARCHAR(MAX) CONSTRAINT CK_DataWithJSON_DataJSON CHECK (
        ISJSON(DataJSON) = 1
        AND JSON_VALUE(DataJSON, 'lax $.Data.ID') IS NOT NULL
    ),
    DataJSONID AS JSON_VALUE(DataJSON, 'strict $.Data.ID') PERSISTED
);
INSERT INTO DataWithJSON (DataID, DataJSON)
VALUES (666, N'{"Data":{"Name":"Tydýt"}}');
This code returns (on my machine) a somewhat mysterious error message:
Msg 13608, Level 16, State 2, Line xx Property cannot be found on the specified JSON path.
I would like to see a more specific message:
Msg 547, Level 16, State 0, Line yy The INSERT statement conflicted with the CHECK constraint "CK_DataWithJSON_DataJSON". The conflict occurred in database "DB", table "schema.DataWithJSON", column 'DataJSON'.
Is it possible to achieve this with table constraints alone, or am I out of luck and do I have to check the JSON in a stored procedure/application before inserting into the table?
One solution would be to use a "lax" path in the computed column, which, hopefully, is not the only solution. I will fall back to it if no other can be found.
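For reference, that fallback would just change the path mode in the computed column (a one-line sketch of the fallback idea mentioned above):
DataJSONID AS JSON_VALUE(DataJSON, 'lax $.Data.ID') PERSISTED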

You can't control the order in which check constraints and computed columns are evaluated, but you can use a CASE expression in the computed column definition so that the JSON_VALUE(... 'strict' ...) part is only evaluated if the check constraint would pass.
CREATE TABLE DataWithJSON (
    DataID BIGINT,
    DataJSON NVARCHAR(MAX) CONSTRAINT CK_DataWithJSON_DataJSON CHECK (
        ISJSON(DataJSON) = 1
        AND JSON_VALUE(DataJSON, 'lax $.Data.ID') IS NOT NULL
    ),
    DataJSONID AS CASE
                      WHEN ISJSON(DataJSON) = 1
                           AND JSON_VALUE(DataJSON, 'lax $.Data.ID') IS NOT NULL
                      THEN JSON_VALUE(DataJSON, 'strict $.Data.ID')
                  END PERSISTED
);
Msg 547, Level 16, State 0, Line 9
The INSERT statement conflicted with the CHECK constraint "CK_DataWithJSON_DataJSON". The conflict occurred in database "Foo", table "dbo.DataWithJSON", column 'DataJSON'. The statement has been terminated.
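To illustrate (a quick sketch; the second row's values are made up for the example), the INSERT from the question now fails with the constraint violation above, while a document that actually contains $.Data.ID succeeds:
INSERT INTO DataWithJSON (DataID, DataJSON)
VALUES (666, N'{"Data":{"Name":"Tydýt"}}');         -- fails: CK_DataWithJSON_DataJSON

INSERT INTO DataWithJSON (DataID, DataJSON)
VALUES (667, N'{"Data":{"ID":42,"Name":"Tydýt"}}'); -- succeeds; DataJSONID = N'42'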


How to insert a json object with ORACLE 19 and 21

Because I don't use Oracle 21, I can't use the JSON type in the definition of a table.
CREATE TABLE TABLE_TEST_QUERY_2
(
    TTQ_NR                 INTEGER GENERATED BY DEFAULT AS IDENTITY,
    TTQ_QUERY_TO_BE_TESTED VARCHAR2 (4000 BYTE),
    TTQ_RESULT             CLOB,
    --RESULT JSON, UPGRADE oracle 21
    TTQ_TTQ_CREATION_DATE  DATE DEFAULT SYSDATE,
    TTQ_ALREADY_TESTED     INTEGER DEFAULT 0,
    TTQ_TEST_PASSED        INTEGER,
    PRIMARY KEY (TTQ_NR),
    CONSTRAINT RESULT CHECK (TTQ_RESULT IS JSON)
)
I want to add a JSON object in TTQ_RESULT, not a string representing a JSON.
I have a way to transform a JSON into a CLOB:
select to_clob(utl_raw.cast_to_raw (json_object('a' value 2))) from dual;
But it's not working if I try to insert the CLOB created from a JSON into the table:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 TTQ_RESULT
VALUES to_clob(utl_raw.cast_to_raw (json_object(a value '2')));
[Error] Execution (3: 13): ORA-03001: unimplemented feature
code (Oracle 18)
Update:
I've tried to add a JSON on db<>fiddle with Oracle 21, using the JSON type to define the column.
CREATE TABLE TABLE_TEST_QUERY_2
(
    TTQ_NR                 INTEGER GENERATED BY DEFAULT AS IDENTITY,
    TTQ_QUERY_TO_BE_TESTED VARCHAR2 (4000 BYTE),
    TTQ_RESULT             JSON,
    TTQ_TTQ_CREATION_DATE  DATE DEFAULT SYSDATE,
    TTQ_ALREADY_TESTED     INTEGER DEFAULT 0,
    TTQ_TEST_PASSED        INTEGER,
    PRIMARY KEY (TTQ_NR)
)
INSERT INTO TABLE_TEST_QUERY_2 TTQ_RESULT
VALUES json_object('a' value 2);
I have the same error:
ORA-03001: unimplemented feature
Maybe these 2 problems are related.
code (Oracle 21)
Your first problem is because you are using the wrong syntax: you have omitted the brackets from around the column identifiers and the column values:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES ( to_clob(utl_raw.cast_to_raw (json_object(a value '2'))));
Which fixes the unimplemented feature exception but now you get:
ORA-00984: column not allowed here
This is because you are using a different query from the SELECT: you have changed json_object('a' value 2) to json_object(a value '2'), and the query cannot find a column a.
If you fix that by using the original code from the SELECT, with 'a' as a string literal and not a column identifier:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES ( to_clob(utl_raw.cast_to_raw (json_object('a' value 2))));
You will then get the error:
ORA-02290: check constraint (FIDDLE_FCJHJVMCPHKXUCUPDUSV.RESULT) violated
Because converting to a RAW and then to a CLOB will mangle the value.
You need something much simpler:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES (json_object('a' value 2));
or:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES (EMPTY_CLOB() || json_object('a' value 2));
Which both work.
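As a quick sanity check (a hypothetical verification query, not from the original answer; JSON_QUERY is available from Oracle 12.2 onwards), you can read the stored document back:
SELECT t.TTQ_NR,
       JSON_QUERY(t.TTQ_RESULT, '$') AS result_json
FROM   BV_OWN.TABLE_TEST_QUERY_2 t;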
db<>fiddle here

postgres force json datatype

When working with the JSON datatype, is there a way to ensure the input JSON has certain elements? I don't mean a primary key; I want the JSON that gets inserted to at least have the id and name elements. It can have more, but at a minimum id and name must be there.
This function checks what you want:
create or replace function json_has_id_and_name(val json)
returns boolean language sql as $$
    select coalesce(
        (
            select array['id', 'name'] <@ array_agg(key)
            from json_object_keys(val) key
        ),
        false)
$$;
select json_has_id_and_name('{"id":1, "name":"abc"}'), json_has_id_and_name('{"id":1}');
 json_has_id_and_name | json_has_id_and_name
----------------------+----------------------
 t                    | f
(1 row)
You can use it in a check constraint, e.g.:
create table my_table (
    id    int primary key,
    jdata json check (json_has_id_and_name(jdata))
);
insert into my_table values (1, '{"id":1}');
ERROR: new row for relation "my_table" violates check constraint "my_table_jdata_check"
DETAIL: Failing row contains (1, {"id":1}).
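If the column can be declared jsonb instead of json (an alternative sketch, assuming PostgreSQL 9.4+; the table name is changed to avoid clashing with the example above), the built-in ?& operator already tests that all the listed top-level keys exist, so no helper function is needed:
create table my_table_jsonb (
    id    int primary key,
    jdata jsonb check (jdata ?& array['id', 'name'])
);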

Why does MySQL version 6.0 keep throwing out errors?

I am new to MySQL and NetBeans 7.3.1. I got a database set up and the connection all ready to go. I created one table and that worked out all right, except it wouldn't let me use auto_increment.
Now I am trying to create a table using the information from:
http://docs.oracle.com/cd/E19957-01/mysql-refman-6.0/tutorial.html#example-auto-increment
and no matter what I do, I keep getting error messages, even though I am doing just what the tutorial says.
Error code -1, SQL state 42X01: Syntax error: Encountered "(" at line 2, column 13.
Line 1, column 1
Error code -1, SQL state 42X05: Table/View 'ANIMALS' does not exist.
Line 8, column 1
Error code -1, SQL state 42X05: Table/View 'ANIMALS' does not exist.
Line 13, column 1
Execution finished after 0 s, 3 error(s) occurred.
What I would like to know is: is there something wrong with the code in the tutorial, or is there a setting somewhere in the setup of MySQL in NetBeans that I need to configure to get the code to work?
CREATE TABLE animals (
    grp ENUM('fish','mammal','bird') NOT NULL,
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp,id)
) ENGINE=MyISAM;

INSERT INTO animals (grp,name) VALUES
    ('mammal','dog'),('mammal','cat'),
    ('bird','penguin'),('fish','lax'),('mammal','whale'),
    ('bird','ostrich');

SELECT * FROM animals ORDER BY grp,id;

Create a SQL table to import (and convert) .CSV containing MySQL tstamp

I just don't seem to find a solution for my problem! I need to import this into SQL Server.
The 2nd column (and a few more) from a .CSV MySQL export contains the tstamp field, which I need converted.
I created the table, but the bulk import did not work. I got the following error message:
Msg 4864, Level 16, State 1, Line 4
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 1 (tstamp).
Msg 4864, Level 16, State 1, Line 4
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 3, column 1 (tstamp).
Here is the code.
-- Recreate the table
CREATE TABLE [Majestic].[dbo].hdiyouth
(
    tstamp datetime NOT NULL
)
GO

-- Bulk insert the data from the csv file
-- Ensure the file(s) is/are closed!
BULK INSERT [Majestic].[dbo].hdiyouth
FROM 'C:\Path\CSV\hdiyouth.csv'
WITH
(
    FIRSTROW = 2,
    FIELDTERMINATOR = ';',
    ROWTERMINATOR = '\n'
)
GO
Try checking out LOAD DATA: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
A bit down the page there is an example of how to convert a column before inserting:
mysql> LOAD DATA INFILE '/tmp/bit_test.txt'
    -> INTO TABLE bit_test (@var1) SET b = CAST(@var1 AS UNSIGNED);
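Adapted to this question's file, a hypothetical sketch (it assumes you reload on the MySQL side first, that the tstamp column holds a Unix epoch, and a semicolon-separated file with a header row):
LOAD DATA INFILE '/tmp/hdiyouth.csv'
INTO TABLE hdiyouth
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(@ts)
SET tstamp = FROM_UNIXTIME(@ts);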
How about creating an SSIS package to do that?
This link may help you
The problem is that you have the value "0" in the tstamp and tstamp_updated columns, which are of datatype timestamp, right?
MySQL supports NULL values in timestamp columns, also represented by '0000-00-00 00:00:00'. SQL Server does not support this. Don't get me wrong, it supports NULL values in timestamp columns, but not MySQL's '0'. The easiest way to solve this may be using SSIS, as Diego suggested. I personally solved this issue by converting the MySQL NULL values to '1970-01-01', which is the minimum value for timestamp columns.
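If you want to stay in plain T-SQL, a staging sketch (the staging table and column names are assumptions): bulk load the raw text first, then map MySQL's zero values while converting.
-- Hypothetical staging table: load the raw text untouched.
CREATE TABLE [Majestic].[dbo].hdiyouth_staging (tstamp_raw VARCHAR(30));

BULK INSERT [Majestic].[dbo].hdiyouth_staging
FROM 'C:\Path\CSV\hdiyouth.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ';', ROWTERMINATOR = '\n');

-- Map MySQL's "0" / zero-date to the chosen minimum before converting.
INSERT INTO [Majestic].[dbo].hdiyouth (tstamp)
SELECT CASE WHEN tstamp_raw IN ('0', '0000-00-00 00:00:00')
            THEN '1970-01-01'
            ELSE tstamp_raw END
FROM [Majestic].[dbo].hdiyouth_staging;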

How to handle mysql #1062 - Duplicate entry error when creating a large table

I am working on a table having around 5 million records. I'm loading records from a csv file.
There is a unique column, url.
While inserting, if the url is already in the table, I want to make a change in the new url value and then do the insertion.
Example:
Try inserting a record with a url of "book". If "book" already exists, the new record should have a url of "book-1" (then "book-2", and so on).
Result: the url values "book-1", "book-2"... are in the table in addition to the initial value "book".
I have figured out that there are 2 ways to do so:
1. Before inserting each record, check whether the url value already exists; if it does, make the required changes in the new url value and insert. I am afraid that this will result in poor performance.
2. Insert records without checking if the url value already exists. If the url value already exists, handle the "mysql #1062 - Duplicate entry" error, make the required changes in the url value, and retry the insertion.
Is this possible? If so, how?
If this is a one-off problem, I'd like to recommend an ad-hoc MySQL solution:
1. If your table isn't MyISAM, convert to MyISAM.
2. Temporarily create an auto_increment integer column named url_suffix.
3. Temporarily delete the unique constraint on the url column.
4. Create the multiple-column index (url, url_suffix) and ensure that there are no other indexes that use url_suffix (see the DDL sketch after this list).
5. Insert all of your rows, allowing duplicate URLs. You'll notice that the auto_increment url_suffix column is keyed on the url now, so the first occurrence of a particular url will have url_suffix 1, the next 2, and so on.
6. Do an update like the following, then delete your temporary url_suffix column and put your unique constraint back.
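A hypothetical DDL sketch for steps 2-4 (the unique index name url is an assumption):
ALTER TABLE urls DROP INDEX url;  -- step 3: drop the unique constraint

-- steps 2 and 4: an AUTO_INCREMENT column must be part of a key; on MyISAM,
-- making it the second column of (url, url_suffix) turns it into a per-url counter
ALTER TABLE urls
    ADD COLUMN url_suffix INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ADD KEY (url, url_suffix);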
Query to update all the rows:
UPDATE urls
SET url = IF(url_suffix = 1, url, CONCAT(url, '-', url_suffix - 1));
In fact, you could skip step 6, keep the auto_increment field so you could easily add duplicate URLs in the future, and simply fetch your URLs like this:
SELECT IF(url_suffix = 1, url, CONCAT(url, '-', url_suffix - 1)) AS url
FROM urls
Your data would look something like this:
url     url_suffix
------------------
that    1
that    2
this    1
this    2
this    3
those   1
You have a problem here: a simple trigger will prove inefficient when inserting, because the values go from 'book' to 'book-1', 'book-2', etc. The easiest way to do this would be to have a new column which contains a numeric value defaulting to 0. This could be done in a stored procedure, i.e.:
DELIMITER $$
CREATE PROCEDURE `insertURL`(inURL VARCHAR(255))
BEGIN
    DECLARE thisSuffix INT UNSIGNED DEFAULT 0;
    -- We have to get this ID first, as MySQL won't let you select from the table you are inserting to
    SELECT COALESCE(MAX(url_suffix) + 1, 0) INTO thisSuffix FROM urls WHERE url_column = inURL;
    -- Now the ID is retrieved, insert
    INSERT INTO urls (
        url_column,
        url_suffix
    ) VALUES (
        inURL,
        thisSuffix
    );
    -- And then select the generated URL
    SELECT IF(thisSuffix > 0, CONCAT(inURL, '-', thisSuffix), inURL) AS outURL;
END$$
DELIMITER ;
Which is then invoked using
CALL insertURL('book');
It will then return 'book' if the suffix is 0, or 'book-1' if the suffix is greater than 0.
For purposes of testing, my table design was:
CREATE TABLE `urls` (
    `url_column` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
    `url_suffix` tinyint(3) UNSIGNED NOT NULL,
    PRIMARY KEY (`url_column`, `url_suffix`)
);
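A quick smoke test against the test table above (a sketch; the expected results follow from the procedure's COALESCE/MAX logic and are shown as comments):
CALL insertURL('book');  -- returns 'book'   (suffix 0)
CALL insertURL('book');  -- returns 'book-1' (suffix 1)
CALL insertURL('book');  -- returns 'book-2' (suffix 2)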