I have two separate update queries, i.e., "Update Query #1" and "Update Query #2". I would like to combine these two queries into a single compound query.
QUERY #1:
/* remove bad dates in [addr.stuupd] */
UPDATE addr
SET addr.STUUPD = NULL
WHERE addr.STUUPD = '0000-00-00 00:00:00';
QUERY #2:
/* remove bad dates in [loan.DDBTUPD] date field */
UPDATE loan
SET loan.DDBTUPD = NULL
WHERE loan.DDBTUPD = '0000-00-00 00:00:00';
I have taken the liberty of assuming that the real problem here is to ensure that both tables are updated at the same time, and neither update fails. In that case I would use a transaction.
"Compound queries" normally refers only to SELECT statements. Reliable handling of multiple updates is provided by transactions, because they support rollback if one part of the update fails.
NB: transactions require a transactional storage engine such as InnoDB; they have no effect on MyISAM tables.
You can change your tables to InnoDB with:
mysql> alter table `addr` engine = InnoDB ;
mysql> alter table `loan` engine = InnoDB ;
Before...
mysql> select * from loan ;
+--------+---------------------+
| loanid | DDBTUPD |
+--------+---------------------+
| 1 | 0000-00-00 00:00:00 |
| 2 | 0000-00-00 00:00:00 |
| 3 | 0000-00-00 00:00:00 |
+--------+---------------------+
The transaction...
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> UPDATE addr SET addr.STUUPD= NULL Where addr.STUUPD='0000-00-00 00:00:00' ;
Query OK, 4 rows affected (0.01 sec)
mysql> UPDATE loan SET loan.DDBTUPD= NULL Where loan.DDBTUPD='0000-00-00 00:00:00' ;
Query OK, 3 rows affected (0.00 sec)
At this point, you will be able to see the results of the update, but other users will not.
mysql> select * from loan ;
+--------+---------+
| loanid | DDBTUPD |
+--------+---------+
| 1 | NULL |
| 2 | NULL |
| 3 | NULL |
+--------+---------+
You will need to commit the transaction:
mysql> COMMIT ;
Query OK, 0 rows affected (0.01 sec)
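If either UPDATE had failed, or you decided not to keep the changes, you could discard both with ROLLBACK instead of committing. A minimal sketch of that path, using the same statements as above:

START TRANSACTION;
UPDATE addr SET addr.STUUPD = NULL WHERE addr.STUUPD = '0000-00-00 00:00:00';
UPDATE loan SET loan.DDBTUPD = NULL WHERE loan.DDBTUPD = '0000-00-00 00:00:00';
-- something went wrong, or you changed your mind: neither table keeps the changes
ROLLBACK;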
I suspect that peterm's solution below will run into problems if one of the tables does not have dates equal to '0000-00-00 00:00:00' and the other table does. peterm's query might not null out everything as expected. I'm happy to be proven wrong.
It's technically possible with a query like this:
UPDATE addr a JOIN loan l
ON a.stuupd = '0000-00-00 00:00:00'
AND l.ddbtupd = '0000-00-00 00:00:00'
SET a.stuupd = NULL,
l.ddbtupd = NULL
Here is a SQLFiddle demo.
But the question remains: for what practical reason?
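To illustrate the concern raised above about the JOIN approach: if only one of the tables actually contains zero dates, the inner JOIN produces no row combinations and neither table is updated. A small sketch (the *_demo tables are made up, and inserting a zero date assumes an sql_mode that permits it, as in the original data):

CREATE TABLE addr_demo (stuupd DATETIME);
CREATE TABLE loan_demo (ddbtupd DATETIME);
INSERT INTO addr_demo VALUES ('0000-00-00 00:00:00');  -- addr has a bad date
INSERT INTO loan_demo VALUES ('2013-01-01 00:00:00');  -- loan does not

UPDATE addr_demo a JOIN loan_demo l
    ON a.stuupd = '0000-00-00 00:00:00'
   AND l.ddbtupd = '0000-00-00 00:00:00'
   SET a.stuupd = NULL,
       l.ddbtupd = NULL;
-- 0 rows affected: the bad date in addr_demo is untouched,
-- because no loan_demo row satisfied the join condition.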
I want to show rows that have updated_at more than 3 hours ago. MySQL seems to be completely ignoring the ORDER BY clause. Any idea why?
Edit: as pointed out by Sebastian, this only occurs in certain timezones, like GMT+5 or GMT+8.
mysql> SET time_zone='+08:00';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE DATABASE test1; USE test1;
Query OK, 1 row affected (0.01 sec)
Database changed
mysql> CREATE TABLE `boxes` (
-> `box_id` int unsigned NOT NULL AUTO_INCREMENT,
-> `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
-> PRIMARY KEY (`box_id`)
-> ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.01 sec)
mysql> INSERT INTO `boxes` (`box_id`, `updated_at`) VALUES
-> (1, '2020-08-22 05:25:35'),
-> (2, '2020-08-26 18:49:05'),
-> (3, '2020-08-23 03:28:30'),
-> (4, '2020-08-23 03:32:55');
Query OK, 4 rows affected (0.00 sec)
Records: 4 Duplicates: 0 Warnings: 0
mysql> SELECT NOW();
+---------------------+
| NOW() |
+---------------------+
| 2020-08-26 20:49:59 |
+---------------------+
1 row in set (0.00 sec)
mysql> SELECT b.box_id, updated_at, (b.updated_at < NOW() - INTERVAL 3 HOUR) AS more_than_3hr
-> FROM boxes b
-> ORDER BY more_than_3hr DESC;
+--------+---------------------+---------------+
| box_id | updated_at | more_than_3hr |
+--------+---------------------+---------------+
| 1 | 2020-08-22 05:25:35 | 1 |
| 2 | 2020-08-26 18:49:05 | 0 | <--- WHY IS THIS HERE???
| 3 | 2020-08-23 03:28:30 | 1 |
| 4 | 2020-08-23 03:32:55 | 1 |
+--------+---------------------+---------------+
4 rows in set (0.00 sec)
Expectation: the rows with "1" should show up first.
Actual results: ORDER BY is ignored, and the resultset is sorted by primary key
I have a hunch it has something to do with MySQL storing timestamps as UTC and displaying them in the current timezone. My current timezone is GMT+8. However, it still doesn't make sense -- I am sorting the results based on the aliased expression, and the expression's value is clearly shown in the resultset.
MySQL version 8.0.21.
I also tried moving the expression to the ORDER BY clause, and the results are the same.
I don't know exactly why, but MySQL seems to do the comparison in the wrong time zone behind the scenes, so the displayed values are correct while the comparisons are invalid (for specific time zones).
When you query a TIMESTAMP value, MySQL converts the UTC value back to
your connection’s time zone. Note that this conversion does not take
place for other temporal data types such as DATETIME.
https://www.mysqltutorial.org/mysql-timestamp.aspx/
Changing the column type from TIMESTAMP to DATETIME fixes the problem.
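For the example table above, that change could look something like this (a sketch; keep or adjust the NULL/ON UPDATE attributes to match your schema):

ALTER TABLE `boxes`
  MODIFY `updated_at` datetime NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP;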
Another solution may be converting the comparison result to a plain number, for example with FORMAT():
SELECT b.box_id, updated_at, FORMAT((b.updated_at < NOW() - INTERVAL 3 HOUR),0) AS more_than_3hr
FROM boxes b
ORDER BY more_than_3hr DESC;
From the documentation:
https://dev.mysql.com/doc/refman/8.0/en/user-variables.html
HAVING, GROUP BY, and ORDER BY, when referring to a variable that is assigned a value in the select expression list do not work as expected because the expression is evaluated on the client and thus can use stale column values from a previous row.
Basically, you can't use an alias you created with "AS" in your sorting.
The solution is to repeat the verbose expression from the AS clause in the ORDER BY. Yeah, it's verbose. 🤷♂️ It is what it is.
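Applied to the query above, that suggestion means repeating the comparison in the ORDER BY instead of the alias (a sketch of the workaround, not a tested fix):

SELECT b.box_id, updated_at, (b.updated_at < NOW() - INTERVAL 3 HOUR) AS more_than_3hr
FROM boxes b
ORDER BY (b.updated_at < NOW() - INTERVAL 3 HOUR) DESC;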
I'm implementing a custom table-based sequence generator for a MySQL 5.7.16 database with the InnoDB engine.
The sequence_table looks as follows:
+-------------+-----------+
|sequence_name|next_value |
+-------------+-----------+
| first_seq | 1 |
+-------------+-----------+
| second_seq | 1 |
+-------------+-----------+
The sequence_name column is the primary key.
This sequence table contains multiple sequences for different consumers.
I use the following strategy for the sequence updates:
1. Select the current sequence value: select next_val from sequence_table where sequence_name=?.
2. Add the allocation size to the current sequence value.
3. Update the sequence value only if its current value still matches the value selected in step 1: update sequence_table set next_val=? where sequence_name=? and next_val=?.
4. If the update is successful, return the increased sequence value; otherwise repeat the process from step 1 (see the sketch after this list).
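A minimal sketch of that compare-and-swap flow in plain SQL, assuming an allocation size of 20 and using the column names from the statements above (the @current variable and the 'first_seq' value are just for illustration):

-- step 1: read the current value
SELECT next_val INTO @current FROM sequence_table WHERE sequence_name = 'first_seq';

-- steps 2 and 3: write back current + allocation size, but only if no other session raced us
UPDATE sequence_table
   SET next_val = @current + 20
 WHERE sequence_name = 'first_seq'
   AND next_val = @current;

-- step 4: if this returns 0, another session won the race; retry from step 1
SELECT ROW_COUNT();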
The documentation contains the following information:
UPDATE ... WHERE ... sets an exclusive next-key lock on every record
the search encounters. However, only an index record lock is required
for statements that lock rows using a unique index to search for a
unique row. 14.5.3 Locks Set by Different SQL Statements in InnoDB
The part about setting a next-key lock on every record the search encounters is a bit confusing.
As you can see, I match the primary key in the WHERE clause of the UPDATE statement.
Is it possible that the search may encounter more than one record and therefore lock multiple rows in this sequence table?
In other words, will the update in the 3rd step of the algorithm block just one or multiple rows?
You didn't mention which transaction isolation level you're planning to use.
Let's assume you're using REPEATABLE READ (under READ COMMITTED no such problem should exist).
From here:
For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE),
UPDATE, and DELETE statements, locking depends on whether the
statement uses a unique index with a unique search condition, or a
range-type search condition
and
For a unique index with a unique search condition, InnoDB locks only
the index record found, not the gap before it
So at least in theory it should lock only a single record and no next-key lock will be used.
More quotes from other docs pages to back my thoughts:
innodb-next-key-locks:
A next-key lock is a combination of a record lock on the index record and a gap lock on the gap before the index record.
gap locks:
Gap locking is not needed for statements that lock rows using a unique index to search for a unique row.
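To see this in practice, one option is to open two sessions against the sequence table and confirm that a primary-key UPDATE in one session does not block an update of the other row; a rough sketch (not a full test script):

-- session 1
START TRANSACTION;
UPDATE sequence_table SET next_val = next_val + 20 WHERE sequence_name = 'first_seq';
-- leave the transaction open for a moment

-- session 2: completes immediately, because only the 'first_seq' record is locked
UPDATE sequence_table SET next_val = next_val + 20 WHERE sequence_name = 'second_seq';

-- session 1
COMMIT;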
Don't grab the sequence numbers inside the main transaction; do it before the START TRANSACTION.
Do the task in a single statement with autocommit=ON.
Both of those make it much faster and less likely to block.
(Your code was missing BEGIN/COMMIT and FOR UPDATE. I got rid of those rather than explaining the issues.)
Set up test:
mysql> CREATE TABLE so49197964 (
-> name VARCHAR(22) NOT NULL,
-> next_value INT UNSIGNED NOT NULL,
-> PRIMARY KEY (name)
-> ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.02 sec)
mysql> INSERT INTO so49197964 (name, next_value)
-> VALUES
-> ('first', 1), ('second', 1);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM so49197964;
+--------+------------+
| name | next_value |
+--------+------------+
| first | 1 |
| second | 1 |
+--------+------------+
2 rows in set (0.00 sec)
Grab 20 nums from 'first' and fetch the starting number:
mysql> UPDATE so49197964
-> SET next_value = LAST_INSERT_ID(next_value) + 20
-> WHERE name = 'first';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> SELECT LAST_INSERT_ID();
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 1 |
+------------------+
1 row in set (0.00 sec)
mysql> SELECT * FROM so49197964;
+--------+------------+
| name | next_value |
+--------+------------+
| first | 21 |
| second | 1 |
+--------+------------+
2 rows in set (0.00 sec)
Grab another 20:
mysql> UPDATE so49197964
-> SET next_value = LAST_INSERT_ID(next_value) + 20
-> WHERE name = 'first';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> SELECT LAST_INSERT_ID();
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 21 |
+------------------+
1 row in set (0.00 sec)
mysql> SELECT * FROM so49197964;
+--------+------------+
| name | next_value |
+--------+------------+
| first | 41 |
| second | 1 |
+--------+------------+
2 rows in set (0.00 sec)
I am doing this:
insert into table_name(maxdate) values
((select max(date1) from table1)), -- goes in row1
((select max(date2) from table2)), -- goes in row2
.
.
.
((select max(date500) from table500));--goes in row500
Is it possible that the insertion order might change? E.g., when I do
select maxdate from table_name limit 500;
will I get something like this (i.e., out of order):
date1 date2 ... date253 date191 ... date500
Short answer:
No, not possible.
If you want to double-check:
mysql> create table letest (f1 varchar(50), f2 varchar(50));
Query OK, 0 rows affected (0.00 sec)
mysql> insert into letest (f1,f2) values
( (SELECT SLEEP(5)), 'first'),
( (SELECT SLEEP(1)), 'second');
Query OK, 2 rows affected, 1 warning (6.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select * from letest;
+------+--------+
| f1 | f2 |
+------+--------+
| 0 | first |
| 0 | second |
+------+--------+
2 rows in set (0.00 sec)
mysql>
SLEEP(5) causes the first row to be inserted after 5 seconds; SLEEP(1) causes the second row to be inserted after 5+1 seconds.
That is why the query takes 6 seconds, and it shows the rows are inserted in the order they are listed.
The warning that you see is
mysql> show warnings;
+-------+------+-------------------------------------------------------+
| Level | Code | Message |
+-------+------+-------------------------------------------------------+
| Note | 1592 | Statement may not be safe to log in statement format. |
+-------+------+-------------------------------------------------------+
1 row in set (0.00 sec)
This can affect you only if you are using a master-slave setup, because the statement is not safe for statement-based binary logging. For more info on this, see http://dev.mysql.com/doc/refman/5.1/en/replication-rbr-safe-unsafe.html
Later edit: please consider leaving a comment if you find this answer not useful.
Yes, very possible.
You should consider a database table unordered, and a SELECT statement without an ORDER BY clause as well. Every DBMS can choose how to implement tables (often even depending on the storage engine) and how to return the rows. Sure, many DBMSs happen to return your data in the order you inserted it, but never rely on that.
The order of the retrieved data may depend on the execution plan, and may even differ when running the same query multiple times, especially when only retrieving part of the data (TOP/LIMIT).
If you want to impose an order, add a field that orders your data. Yes, an auto-increment primary key will be enough in many cases. If you think you'll want to change the order someday, add another field.
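For instance, a minimal sketch of that idea applied to the question (the maxdates table name and its columns are made up; table1/table2 are the hypothetical source tables from the question):

CREATE TABLE maxdates (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- reflects insertion order
    sort_order INT NOT NULL,                                      -- explicit, changeable ordering field
    maxdate    DATETIME NULL
) ENGINE=InnoDB;

INSERT INTO maxdates (sort_order, maxdate) VALUES
    (1, (SELECT MAX(date1) FROM table1)),
    (2, (SELECT MAX(date2) FROM table2));

-- always state the order you want explicitly
SELECT maxdate FROM maxdates ORDER BY sort_order LIMIT 500;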
I want to create a temporary table from a SELECT statement in MySQL. It involves several JOINs, and it can produce NULL values that I want MySQL to store as zeroes. It sounds like an easy problem (simply default to zero), but MySQL (5.6.12) fails to apply the default value.
For example, take the following two tables:
mysql> select * from TEST1;
+------+------+
| a | b |
+------+------+
| 1 | 2 |
| 4 | 25 |
+------+------+
2 rows in set (0.00 sec)
mysql> select * from TEST2;
+------+------+
| b | c |
+------+------+
| 2 | 100 |
| 3 | 100 |
+------+------+
2 rows in set (0.00 sec)
A left join gives:
mysql> select TEST1.*,c from TEST1 left join TEST2 on TEST1.b=TEST2.b;
+------+------+------+
| a | b | c |
+------+------+------+
| 1 | 2 | 100 |
| 4 | 25 | NULL |
+------+------+------+
2 rows in set (0.00 sec)
Now, if I want to save these values in a temporary table (changing NULL to zero), this is the code I would use:
mysql> create temporary table TEST_JOIN (a int, b int, c int default 0 not null)
select TEST1.*,c from TEST1 left join TEST2 on TEST1.b=TEST2.b;
ERROR 1048 (23000): Column 'c' cannot be null
What am I doing wrong? The worst part is that this code used to work before I did a system-wide upgrade (I don't remember which version of MySQL I had, but surely it was lower than my current 5.6). It used to produce the behavior I would expect: if it's NULL, use the default, not the frustrating error I'm getting now.
From the documentation of 5.6 (unchanged since 4.1):
Inserting NULL into a column that has been declared NOT NULL. For
multiple-row INSERT statements or INSERT INTO ... SELECT statements,
the column is set to the implicit default value for the column data
type. This is 0 for numeric types, the empty string ('') for string
types, and the “zero” value for date and time types. INSERT INTO ...
SELECT statements are handled the same way as multiple-row inserts
because the server does not examine the result set from the SELECT to
see whether it returns a single row. (For a single-row INSERT, no
warning occurs when NULL is inserted into a NOT NULL column. Instead,
the statement fails with an error.)
My current workaround is to store the NULL values in the temporary table and then replace them with zeroes, but that seems rather cumbersome with many columns (and terribly inefficient). Is there a better way to do it?
BTW, I cannot simply ignore some columns in the query (as suggested for another question), because it's a multirow query.
IFNULL(`my_column`,0);
That would set NULLs to 0. Other values stay as is.
Just wrap your values/column names with IFNULL and it will convert them to whatever default value you put into the function. E.g. 0. Or "european swallow", or whatever you want.
Then you can keep strict mode on and still handle NULLs gracefully.
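Applied to the temporary table from the question, that would look something like this (a sketch based on the TEST1/TEST2 example above):

CREATE TEMPORARY TABLE TEST_JOIN (a INT, b INT, c INT DEFAULT 0 NOT NULL)
SELECT TEST1.*, IFNULL(c, 0) AS c
FROM TEST1
LEFT JOIN TEST2 ON TEST1.b = TEST2.b;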
I'm trying to understand a huge performance difference that I'm seeing in equivalent code. Or at least code I think is equivalent.
I have a table with about 10 million records on it. It contains a field, which is indexed defined as:
USPatentNum char(8)
If I set a variable within MySQL to a value, the query takes over 218 seconds. The exact same query with a string literal takes under a quarter of a second.
In the code below, the first select statement (with where USPatentNum = @pn;) takes forever, but the second, with the literal value
(where USPatentNum = '5288812';), is nearly instant.
mysql> select @pn := '5288812';
+------------------+
| #pn := '5288812' |
+------------------+
| 5288812 |
+------------------+
1 row in set (0.00 sec)
mysql> select patentId, USPatentNum, grantDate from patents where USPatentNum = @pn;
+----------+-------------+------------+
| patentId | USPatentNum | grantDate |
+----------+-------------+------------+
| 306309 | 5288812 | 1994-02-22 |
+----------+-------------+------------+
1 row in set (3 min 38.17 sec)
mysql> select @pn;
+---------+
| @pn |
+---------+
| 5288812 |
+---------+
1 row in set (0.00 sec)
mysql> select patentId, USPatentNum, grantDate from patents where USPatentNum = '5288812';
+----------+-------------+------------+
| patentId | USPatentNum | grantDate |
+----------+-------------+------------+
| 306309 | 5288812 | 1994-02-22 |
+----------+-------------+------------+
1 row in set (0.21 sec)
Two questions:
Why is the use of @pn so much slower?
Can I change the select statement so that the performance will be the same?
Declare @pn as char(8) before setting its value.
I suspect it is treated as a varchar the way you do it now. If so, the performance loss is because MySQL can't match the index against your variable.
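User variables outside a stored program cannot be declared with a type, so one way to experiment with this suggestion is to force the comparison back to the column's type (a sketch; whether it restores the indexed lookup depends on your server version and collations):

select patentId, USPatentNum, grantDate
from patents
where USPatentNum = CAST(@pn AS CHAR(8));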
It doesn't matter whether you use a constant or @var. You got different results because the second time MySQL served the result from the cache. If you run your scenario again but swap the order of the constant query and the @var query, you will see the same pattern (with the other one): the first will be slow, the second fast.
Hope it helps