MySQL - Why is Safe Update Mode blocking this UPDATE command?

I have an actor table that looks like this:
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|        1 | Jack       | Nicholson | 2019-06-02 00:00:00 |
+----------+------------+-----------+---------------------+
Column actor_id is a primary key with auto increment.
When I try to update the table like so:
UPDATE actor
SET last_name = 'foo'
WHERE last_update > '2019-06-02 00:00:00';
I get blocked by MySQL's safe update mode with this error:
Error Code: 1175. You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column. To disable safe mode, toggle the option in Preferences -> SQL Editor and reconnect.
Indeed, column last_update is not a KEY column, so based on this SO answer I've come up with the following workaround:
CREATE TEMPORARY TABLE IF NOT EXISTS ids AS (SELECT actor_id FROM actor WHERE last_update > '2019-06-02 00:00:00');
UPDATE actor
SET last_name = 'foo'
WHERE actor_id IN (SELECT actor_id FROM ids);
But again I'm blocked with a 1175 error.
Why is safe update mode blocking me here? Can I work around it without disabling safe update mode?

You can work around this error by making the column a KEY column. In other words, create an index (aka key) on the column.
mysql> set sql_safe_updates=ON;
mysql> UPDATE actor SET last_name = 'foo' WHERE last_update > '2019-06-02 00:00:00';
ERROR 1175 (HY000): You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column
mysql> alter table actor add key (last_update);
Query OK, 0 rows affected (0.04 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> UPDATE actor SET last_name = 'foo' WHERE last_update > '2019-06-02 00:00:00';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0 Changed: 0 Warnings: 0
The point of the error is to prevent you from unintentionally locking every row in the table when you have a condition on a non-indexed column.
The way locking works, InnoDB locks all rows that the query examines to test the condition, not just the rows that satisfy the condition. If you run a query whose condition tests an unindexed column, it has to examine every row in the table, which probably locks far more rows than you intended it to lock.
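If you'd rather not add an index at all, note that safe update mode also permits UPDATE statements that include a LIMIT clause. A minimal sketch of that alternative (the cap of 1000 is an arbitrary placeholder; pick a bound you know exceeds the number of affected rows):
UPDATE actor
SET last_name = 'foo'
WHERE last_update > '2019-06-02 00:00:00'
LIMIT 1000;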

Related

Why do I get a different number of hits in a MySQL query if I use an index or not

I am new to MySQL and I do not understand why, if I use an index on a JSON column, the result set differs from the one I get without the index.
I have a simple table:
CREATE TABLE jsontest (
jsondata JSON
);
The table is filled with 50000 JSON documents, and one field in the JSON is:
"allowedNfTypes": ["aaa", "bbb", "ccc"]
In some cases this field's array has 1, 2, or 3 values (out of about 10 string options - let's say from "aaa" to "iii").
In some cases the field does not exist at all.
Then if I execute:
mysql> SELECT * FROM jsontest WHERE "AMF" MEMBER OF(jsondata->'$.allowedNfTypes');
I get:
10045 rows in set (0.21 sec)
Then I created an index:
CREATE INDEX allowedNfTypes_index ON jsontest((CAST(jsondata->'$.allowedNfTypes' AS CHAR(128) ARRAY)))
And the same query
SELECT * FROM jsontest WHERE "AMF" MEMBER OF(jsondata->'$.allowedNfTypes');
returns far fewer hits:
1402 rows in set (0.03 sec)
Any idea why?
I have downloaded your data file.
CLI output copy:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 8.0.23 MySQL Community Server - GPL
mysql> CREATE TABLE jsontest (
-> jsondata JSON
-> );
Query OK, 0 rows affected (0.07 sec)
mysql> LOAD DATA INFILE 'C:\\ProgramData\\MySQL\\MySQL Server 8.0\\Uploads\\test_data.json' INTO TABLE jsontest;
Query OK, 50000 rows affected (15.50 sec)
Records: 50000 Deleted: 0 Skipped: 0 Warnings: 0
mysql> SELECT COUNT(*) FROM jsontest WHERE "AMF" MEMBER OF(jsondata->'$.allowedNfTypes');
+----------+
| COUNT(*) |
+----------+
|    10045 |
+----------+
1 row in set (0.32 sec)
mysql> CREATE INDEX allowedNfTypes_index ON jsontest((CAST(jsondata->'$.allowedNfTypes' AS CHAR(128) ARRAY)));
Query OK, 0 rows affected (0.88 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> SELECT COUNT(*) FROM jsontest WHERE "AMF" MEMBER OF(jsondata->'$.allowedNfTypes');
+----------+
| COUNT(*) |
+----------+
|    10045 |
+----------+
1 row in set (0.28 sec)
The original query (SELECT * FROM ... rather than SELECT COUNT(*) FROM ...) was also tested - the result is the same.
The issue was not reproduced.
After updating to the current version, 8.0.28, the issue is reproduced. The number of rows selected with the index is the same - 1402.
After adding an auto-incremented primary key (ALTER TABLE jsontest ADD COLUMN id INT AUTO_INCREMENT PRIMARY KEY;) the number of rows selected with the index changes to 1448.
Comparing the rows selected with and without the index, I have found that:
There are a lot of duplicates in jsondata->'$.allowedNfTypes'.
For each unique jsondata->'$.allowedNfTypes' value, only the rows whose newly created unique id is less than approximately 6800 are selected with the index; all other rows with a duplicated value are not selected.
Deleting the rows whose id is above some N decreases the number of rows returned by the query without the index and does not change the number returned by the query with the index (tested N = 40000, 25000, 10000, 7500, 7000, 6800). When N is 6797 or less, the two counts become equal.
I tried adding a generated column that extracts this property as a JSON array (ALTER TABLE jsontest ADD COLUMN allowedNfTypes JSON AS (CAST(jsondata->'$.allowedNfTypes' AS JSON));), creating a multi-valued index on that column, and using it instead of the whole JSON document. The number of rows selected with the index increases to 1498. The value of N is 5097.
Then I added the index described by the OP (so the table contains two indices - one on the original column and one on the generated column) - and the query returns 10045! I dropped the generated column and its index - still 10045. I dropped the index on the original column and re-created it - again 1498.
There were a lot of additional experiments, but their results are less interesting.
This looks like a bug - a reproducible one. I think you can repeat my experiment (and/or perform your own) and report this to the MySQL bug tracker.
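If you want to confirm that the multi-valued index itself is what changes the result, one sketch worth running (table and index names as in the question) is to compare the same count with the index ignored versus used; EXPLAIN should report allowedNfTypes_index in its key column when the index is actually chosen:
SELECT COUNT(*) FROM jsontest IGNORE INDEX (allowedNfTypes_index)
WHERE "AMF" MEMBER OF(jsondata->'$.allowedNfTypes');
EXPLAIN SELECT COUNT(*) FROM jsontest
WHERE "AMF" MEMBER OF(jsondata->'$.allowedNfTypes');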

ON DUPLICATE KEY UPDATE - decrement value in MySQL

The following seems odd to me:
INSERT INTO sometable (UNIQUEVALUE,NUMERICVALUE) VALUES ('valuethatexists','100') ON DUPLICATE KEY UPDATE NUMERICVALUE = NUMERICVALUE+VALUES(NUMERICVALUE);
Assume your NUMERICVALUE is at 0.
The above would change it to 100 - which does work.
If, however, you then input -100, it does not work properly.
INSERT INTO sometable (UNIQUEVALUE,NUMERICVALUE) VALUES ('valuethatexists','-100') ON DUPLICATE KEY UPDATE NUMERICVALUE = NUMERICVALUE+VALUES(NUMERICVALUE);
The above statement should return it to 0. It does not, in my case. It remains at 100.
Am I missing something?
Edit: This goes wrong somewhere else. I am doing this with PHP. The actual code exhibiting this bug looks like this:
Edit 2: This had nothing to do with PHP either. The problem was that the NUMERICVALUE column was UNSIGNED in my production environment, meaning VALUES(NUMERICVALUE) was clamped from -100 to 0 before it was used.
On my MySQL server (5.7.12), it does work as expected:
mysql> CREATE TABLE sometable (
UNIQUEVALUE VARCHAR(16) NOT NULL PRIMARY KEY,
NUMERICVALUE INT NOT NULL);
Query OK, 0 rows affected (0.01 sec)
mysql> INSERT INTO sometable (UNIQUEVALUE,NUMERICVALUE)
VALUES ('valuethatexists','100')
ON DUPLICATE KEY UPDATE NUMERICVALUE = NUMERICVALUE+VALUES(NUMERICVALUE);
Query OK, 1 row affected (0.01 sec)
mysql> SELECT * FROM sometable;
+-----------------+--------------+
| UNIQUEVALUE     | NUMERICVALUE |
+-----------------+--------------+
| valuethatexists |          100 |
+-----------------+--------------+
1 row in set (0.00 sec)
mysql> INSERT INTO sometable (UNIQUEVALUE,NUMERICVALUE)
VALUES ('valuethatexists','-100')
ON DUPLICATE KEY UPDATE NUMERICVALUE = NUMERICVALUE+VALUES(NUMERICVALUE);
Query OK, 2 rows affected (0.00 sec)
mysql> SELECT * FROM sometable;
+-----------------+--------------+
| UNIQUEVALUE     | NUMERICVALUE |
+-----------------+--------------+
| valuethatexists |            0 |
+-----------------+--------------+
1 row in set (0.00 sec)
Which version of MySQL are you using? Can you execute the exact statements above and see if you have different results?
While Benjamin's answer is correct, the root of the issue turned out to be that the NUMERICVALUE column was UNSIGNED, so whenever I passed -100, it was clamped to 0 before it was evaluated as VALUES(NUMERICVALUE). Whether this should be considered a bug, I don't know.
Obviously the result of the final evaluation should never be negative, but I'm not sure how wise it is to silently turn the input into 0. I already had logic in place ensuring the value in question would never drop below 0, by never passing a negative value larger than what was already in the row.
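For anyone who wants to reproduce the clamping, here is a minimal sketch (the table name unsigned_demo is mine; this assumes a non-strict sql_mode, since with STRICT_ALL_TABLES the -100 raises an out-of-range error instead of being silently clamped):
CREATE TABLE unsigned_demo (
UNIQUEVALUE VARCHAR(16) NOT NULL PRIMARY KEY,
NUMERICVALUE INT UNSIGNED NOT NULL);
INSERT INTO unsigned_demo VALUES ('valuethatexists', 100);
-- -100 is clamped to 0 before VALUES() sees it, so the row stays at 100:
INSERT INTO unsigned_demo (UNIQUEVALUE, NUMERICVALUE)
VALUES ('valuethatexists', -100)
ON DUPLICATE KEY UPDATE NUMERICVALUE = NUMERICVALUE + VALUES(NUMERICVALUE);
SELECT * FROM unsigned_demo;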

Can I make mysql table columns case insensitive?

I am new to mysql (and sql in general) and am trying to see if I can make data inserted into a column in a table case insensitive.
I am storing data like state names, city names, etc. So I want to have a unique constraint on these types of data and on top of that make them case insensitive so that I can rely on the uniqueness constraint.
Does mysql support a case-insensitive option on either the column during table creation or alternatively when setting the uniqueness constraint on the column? What is the usual way to deal with such issues? I would appreciate any alternate ideas/suggestions to deal with this.
EDIT: As suggested, COLLATE I think only applies to queries on the inserted data. But to really take advantage of the uniqueness constraint, I want case insensitivity enforced on INSERT. For example, I want MySQL to reject insertions of California, california, and cALifornia as duplicates of each other. If I understand the uniqueness constraint properly, having it on StateName will still allow all of these inserts.
By default, MySQL is case-insensitive, because its default collations are case-insensitive.
CREATE TABLE test
(
name VARCHAR(20),
UNIQUE(name)
);
mysql> INSERT INTO test VALUES('California');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO test VALUES('california');
ERROR 1062 (23000): Duplicate entry 'california' for key 'name'
mysql> INSERT INTO test VALUES('cAlifornia');
ERROR 1062 (23000): Duplicate entry 'cAlifornia' for key 'name'
mysql> INSERT INTO test VALUES('cALifornia');
ERROR 1062 (23000): Duplicate entry 'cALifornia' for key 'name'
mysql> SELECT * FROM test;
+------------+
| name       |
+------------+
| California |
+------------+
1 row in set (0.00 sec)
Use BINARY when you need case-sensitivity.
To make a column case-sensitive in MySQL, the BINARY keyword is used as follows:
mysql> CREATE TABLE test
-> (
-> name varchar(20) BINARY,
-> UNIQUE(name)
-> );
Query OK, 0 rows affected (0.00 sec)
mysql>
mysql> INSERT INTO test VALUES('California');
Query OK, 1 row affected (0.00 sec)
mysql>
mysql> INSERT INTO test VALUES('california');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO test VALUES('cAlifornia');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO test VALUES('cALifornia');
Query OK, 1 row affected (0.00 sec)
mysql>
mysql> SELECT * FROM test;
+------------+
| name       |
+------------+
| California |
| cALifornia |
| cAlifornia |
| california |
+------------+
4 rows in set (0.00 sec)
You can use the COLLATE clause: http://dev.mysql.com/doc/refman/5.0/en/case-sensitivity.html
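If you don't want to rely on the server's default collation, you can also pin an explicitly case-insensitive collation on the column itself, so the UNIQUE index enforces case-insensitive uniqueness regardless of configuration. A sketch (table and column names are mine; utf8mb4_0900_ai_ci is the MySQL 8.0 default, and on older versions utf8_general_ci plays the same role):
CREATE TABLE states (
StateName VARCHAR(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci, -- _ci = case-insensitive
UNIQUE (StateName)
);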

MySQL gender table field

If I want to create a gender field in my table, how do I make sure that my database doesn't accept any value apart from "M" or "F" ?
$sqlCommand = "CREATE TABLE members (
id int(11) NOT NULL auto_increment,
...
...
...
...
gender
)";
Thank you
No triggers, no enums or other daemonic activities.
You can use a FOREIGN KEY to a reference table with just 2 rows:
CREATE TABLE Gender_Ref
( gender CHAR(1) NOT NULL,
PRIMARY KEY (gender)
) ENGINE = InnoDB ;
INSERT INTO Gender_Ref (gender)
VALUES
('F'), ('M') ;
CREATE TABLE members
( id int(11) NOT NULL auto_increment,
...
...
gender CHAR(1) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (gender)
REFERENCES Gender_Ref (gender)
) ENGINE = InnoDB ;
It's also good advice to "lock" the reference table so the application code has only read access. (That's usually good for most reference tables, and if you have an Admin application, you can of course give it write access to the reference tables as well.)
As pointed out in the comments, you can use ENUM like so:
gender ENUM('F','M') NOT NULL
However, you have to be careful as this will still accept the empty string too (although you'll get a warning for that):
mysql> create table t (g enum('M','F') not null);
Query OK, 0 rows affected (0.12 sec)
mysql> insert into t values ('M');
Query OK, 1 row affected (0.00 sec)
mysql> insert into t values ('');
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+----------------------------------------+
| Level   | Code | Message                                |
+---------+------+----------------------------------------+
| Warning | 1265 | Data truncated for column 'g' at row 1 |
+---------+------+----------------------------------------+
1 row in set (0.00 sec)
mysql> select * from t;
+---+
| g |
+---+
| M |
|   |
+---+
2 rows in set (0.00 sec)
To ensure this does not happen, you could consider setting the sql_mode (http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html) to a more restrictive value:
mysql> set sql_mode = strict_all_tables;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into t values ('');
ERROR 1265 (01000): Data truncated for column 'g' at row 1
However, you should investigate if this is a suitable option for you. Many existing applications (wordpress etc) don't like messing with the sql_mode so if your code is a plugin to those systems you want to avoid setting it.
You can choose to set the sql_mode server wide or session wide; The first option would be more robust, but requires configuring MySQL in a non default way, and is likely to affect other applications. Setting at the session level immediately after you open the connection should work just fine, but will clutter your application code. Pick your poison.
ypercube's suggestion to use a foreign key is also good, and is more portable to other RDBMSes than ENUM. However, you'll have to ensure your tables are both managed by the InnoDB engine. This is becoming more and more the standard so it's not a bad choice.
(if you're really paranoid, you should really ensure that the application only has read access to the gender reference table)
You could use the enum type enum('M','F').
You can use a trigger to check whether the value is correct, or you can use an enum type enum('M','F').
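Since the trigger route keeps getting mentioned without code, here is a minimal sketch of it (MySQL 5.5+, where SIGNAL is available; the trigger name is mine, and a matching BEFORE UPDATE trigger would be needed to guard updates as well). On MySQL 8.0.16+ a plain CHECK (gender IN ('M','F')) constraint is enforced and simpler still:
DELIMITER //
CREATE TRIGGER members_gender_check
BEFORE INSERT ON members
FOR EACH ROW
BEGIN
  IF NEW.gender NOT IN ('M', 'F') THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'gender must be M or F';
  END IF;
END//
DELIMITER ;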

MySQL bulk INSERT or UPDATE

Is there any way to perform a query like INSERT OR UPDATE in bulk on the MySQL server?
INSERT IGNORE ...
won't work, because if the row already exists, it will simply ignore it and not insert anything.
REPLACE ...
won't work, because if the row already exists, it will first DELETE it and then INSERT it again, rather than updating it.
INSERT ... ON DUPLICATE KEY UPDATE
will work, but it can't be used in bulk.
So I'd like to know if there's any command like INSERT ... ON DUPLICATE KEY UPDATE that can be issued in bulk (more than one row at the same time).
You can insert/update multiple rows using INSERT ... ON DUPLICATE KEY UPDATE. The documentation has the following example:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
Or am I misunderstanding your question?
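One caveat worth adding for modern servers: as of MySQL 8.0.20 the VALUES() function in this position is deprecated, and 8.0.19+ lets you alias the inserted rows instead, along these lines:
INSERT INTO t1 (a,b,c) VALUES (1,2,3),(4,5,6) AS new
ON DUPLICATE KEY UPDATE c = new.a + new.b;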
One possible way to do this is to create a temporary table, insert the data into it, and then run one query with a join to insert the records that don't exist, followed by an update to the rows that do exist. The basics would be something like this:
CREATE TABLE MyTable_Temp LIKE MyTable;
LOAD DATA INFILE ..... INTO TABLE MyTable_Temp;
UPDATE MyTable
INNER JOIN MyTable_Temp
ON MyTable.ID = MyTable_Temp.ID
SET MyTable.Col1 = MyTable_Temp.Col1, MyTable.Col2 = MyTable_Temp.Col2, ...;
INSERT INTO MyTable (ID, Col1, Col2, ...)
SELECT MyTable_Temp.ID, MyTable_Temp.Col1, MyTable_Temp.Col2, ...
FROM MyTable_Temp
LEFT JOIN MyTable
ON MyTable_Temp.ID = MyTable.ID
WHERE MyTable.ID IS NULL;
DROP TABLE MyTable_Temp;
The syntax may not be exact, but this should give you the basics. Also, I know it's not pretty, but it gets the job done.
Update
I swapped the order of the insert and update, because doing insert first causes all the inserted rows to be updated when the update is called. If you do update first, only the existing records are updated. This should mean a little less work for the server, although the results should be the same.
Although this question has been answered correctly already (MySQL does support this via ON DUPLICATE KEY UPDATE with the expected multiple-value-set syntax), I'd like to expand on it by providing a demonstration that anyone with MySQL can run:
CREATE SCHEMA IF NOT EXISTS `test`;
DROP TABLE IF EXISTS test.new_table;
CREATE TABLE test.new_table (`Key` int(11) NOT NULL AUTO_INCREMENT, PRIMARY KEY (`Key`)) ENGINE=InnoDB AUTO_INCREMENT=106 DEFAULT CHARSET=utf8;
SELECT * FROM test.new_table;
INSERT INTO test.new_table VALUES (1),(2),(3),(4),(5) ON DUPLICATE KEY UPDATE `Key`=`Key`+100;
SELECT * FROM test.new_table;
INSERT INTO test.new_table VALUES (1),(2),(3),(4),(5) ON DUPLICATE KEY UPDATE `Key`=`Key`+100;
SELECT * FROM test.new_table;
The output is as follows:
Empty set (0.00 sec)
Query OK, 5 rows affected (0.00 sec)
Records: 5 Duplicates: 0 Warnings: 0
+-----+
| Key |
+-----+
|   1 |
|   2 |
|   3 |
|   4 |
|   5 |
+-----+
5 rows in set (0.00 sec)
Query OK, 10 rows affected (0.00 sec)
Records: 5 Duplicates: 5 Warnings: 0
+-----+
| Key |
+-----+
| 101 |
| 102 |
| 103 |
| 104 |
| 105 |
+-----+
5 rows in set (0.00 sec)
Try adding an insert trigger that does a pre-flight check and cancels the insert on duplicate key (after updating the existing row).
Not sure it'll scale well for bulk inserts, let alone work for load data infile, but it's the best I can think of. :-)
If you were using Oracle or Microsoft SQL, you could use the MERGE. However, MySQL does not have a direct correlation to that statement. There is the single-row solution that you mentioned but, as you pointed out, it doesn't do bulk very well. Here is a blog post I found on the difference between Oracle and MySQL and how to do what Oracle does with MERGE in MySQL:
http://blog.mclaughlinsoftware.com/2009/05/25/mysql-merge-gone-awry/
It isn't a pretty solution, and it probably isn't as complete a solution as you would like, but I believe it is the best there is.
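To tie the approaches above together: a common MySQL stand-in for MERGE is to bulk-load a staging table and then upsert from it in a single statement, since INSERT ... SELECT combines with ON DUPLICATE KEY UPDATE. A sketch reusing the hypothetical names from the temporary-table answer:
INSERT INTO MyTable (ID, Col1, Col2)
SELECT ID, Col1, Col2
FROM MyTable_Temp
ON DUPLICATE KEY UPDATE
  Col1 = VALUES(Col1),
  Col2 = VALUES(Col2);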