SQL WHERE Clause for any value? - mysql

I'm executing SQL against a MySQL database and trying to write a command to delete all rows in a table. When I run "DELETE FROM MyTable" I get error 1175, because a safety feature that is on by default forces you to specify a WHERE clause on a key column.
I could turn this feature off, but I kind of like it, so I'm trying to modify my command to have the same effect by adding the WHERE clause. So, this works:
DELETE FROM MyTable WHERE MyID > 0
It deletes all rows where MyID is greater than zero, but I'm looking for a truly universal condition that would work for any MyID values. I tried these:
DELETE FROM MyTable WHERE MyID = ANY
DELETE FROM MyTable WHERE MyID = *
But those are syntax errors. The docs on the WHERE clause say it accepts =, <, >, >=, <=, <>, BETWEEN, LIKE, and IN, but nothing that looks like it would match any value.
Is there a proper syntax for the WHERE conditional that accepts any value?
Thank you.

Use
TRUNCATE MyTable;
then you don't need any condition at all (note that it requires the DROP privilege rather than DELETE):
TRUNCATE TABLE empties a table. It requires the DROP privilege. Logically, TRUNCATE TABLE is similar to a DELETE statement that deletes all the rows, or a sequence of DROP TABLE and CREATE TABLE statements.
see manual

The error is related to the sql_safe_updates client option (see https://dev.mysql.com/doc/refman/8.0/en/mysql-tips.html#safe-updates).
One way to circumvent the safe updates mode is to use a LIMIT clause on your query. You can use any value, even a value that is far larger than the number of rows in your table.
DELETE FROM MyTable LIMIT 9223372036854775807;
You can also disable the safe updates mode, then run the DELETE normally:
SET sql_safe_updates=0;
DELETE FROM MyTable; -- returns no error
TRUNCATE TABLE is a better solution if you want to delete all the rows.

Make it a compound condition.
WHERE id IS NULL OR id IS NOT NULL;
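Spelled out against the question's table (a sketch, keeping the answer's condition):
DELETE FROM MyTable WHERE MyID IS NULL OR MyID IS NOT NULL;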
Then again, maybe the safety condition is there for an important reason that you should not get around.

MySQL not updating information_schema, unless I manually run ANALYZE TABLE `myTable`

I need to get the last id (primary key) of a table (InnoDB), and to do so I perform the following query:
SELECT (SELECT `AUTO_INCREMENT` FROM `information_schema`.`TABLES` WHERE `TABLE_SCHEMA` = 'mySchema' AND `TABLE_NAME` = 'myTable') - 1;
which returns the wrong AUTO_INCREMENT. The problem is that the TABLES table of information_schema is not updated with the current value unless I run the following query:
ANALYZE TABLE `myTable`;
Why doesn't MySQL update information_schema automatically, and how could I fix this behavior?
Running MySQL Server 8.0.13 X64.
Q: Why doesn't MySQL update information_schema automatically, and how could I fix this behavior?
A: InnoDB has traditionally held the auto_increment counter in memory without persisting it to disk (MySQL 8.0 does persist it across restarts, but information_schema still serves cached statistics rather than live values).
The behavior of metadata queries (e.g. SHOW TABLE STATUS) is also influenced by the settings of the innodb_stats_on_metadata and innodb_stats_persistent variables.
https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_stats_on_metadata
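On the question's MySQL 8.0 specifically, the values served from information_schema are cached dictionary statistics, expiring after 86400 seconds by default. A sketch of reading a fresh value, using the schema and table names from the question:
SET SESSION information_schema_stats_expiry = 0; -- disable the statistics cache for this session
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mySchema' AND TABLE_NAME = 'myTable';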
Forcing an ANALYZE every time we query metadata can be a drain on performance.
Other than the settings of those variables, or forcing statistics to be collected by manually executing ANALYZE TABLE, I don't think there's a "fix" for the issue.
(I think that mostly because I don't think it's a problem that needs to be fixed.)
To get the highest value of an auto_increment column in a table, the normative pattern is:
SELECT MAX(`ai_col`) FROM `myschema`.`mytable`
What puzzles me is why we need to retrieve this particular piece of information. What are we going to use it for?
Certainly, we aren't going to use that in application code to determine a value that was assigned to a row we just inserted. There's no guarantee that the highest value isn't from a row that was inserted by some other session. And we have the LAST_INSERT_ID() mechanism to retrieve the value of the row our session just inserted.
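A minimal sketch of that mechanism (the name column is made up):
INSERT INTO myTable (name) VALUES ('example');
SELECT LAST_INSERT_ID(); -- the id this session's insert was assigned, unaffected by other sessions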
If we go with the ANALYZE TABLE to refresh statistics, there's still a small window of time between that and a subsequent SELECT... another session could slip in another INSERT, so the value we get from the gathered stats could be "out of date" by the time we retrieve it.
SELECT * FROM tbl ORDER BY insert_datetime DESC LIMIT 1;
will get you all the data from the "latest" inserted row. No need to deal with AUTO_INCREMENT, no need for subqueries, no ANALYZE, no information_schema, no extra fetch once you have the id, no etc, etc.
Yes, you do need an index on the column that you use to determine what is "latest". Yes, id could be used, but it should not be. AUTO_INCREMENT values are guaranteed to be unique, but nothing else.

How can I set a maximum number of rows in MySQL table?

I need to set a maximum limit of rows in my MySQL table. The documentation tells us that one can use the following SQL to create a table:
CREATE TABLE `table_with_limit` (
`id` int(11) DEFAULT NULL
) ENGINE=InnoDB MAX_ROWS=100000
But the MAX_ROWS property is not a hard limit ("store no more than 100,000 rows and delete the rest") but a hint to the storage engine that this table will have AT LEAST 100,000 rows.
The only possible way I see to solve the problem is to use a BEFORE INSERT trigger which would check the count of rows in the table and delete the older rows. But I'm pretty sure this is a huge overhead :/
Another solution is to clear the table with a cron script every N minutes. This is the simplest way, but it still means another system to watch over.
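(A sketch of that idea without an external system, using MySQL's own event scheduler in place of cron; the table name and cap come from above, while the 5-minute interval and the assumption of an auto-increment id are made up:)
SET GLOBAL event_scheduler = ON;
CREATE EVENT trim_table_with_limit
ON SCHEDULE EVERY 5 MINUTE
DO
DELETE FROM table_with_limit
WHERE id <= (
SELECT id FROM (
SELECT id FROM table_with_limit ORDER BY id DESC LIMIT 1 OFFSET 100000
) AS cutoff -- id of the 100,001st-newest row, or NULL while under the cap
);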
Does anyone know a better solution? :)
Add a restriction on inserting new records into the table: raise an error when a new record is about to be added.
DELIMITER $$
CREATE TRIGGER trigger1
BEFORE INSERT
ON table1
FOR EACH ROW
BEGIN
SELECT COUNT(*) INTO @cnt FROM table1;
IF @cnt >= 25 THEN
CALL sth(); -- raise an error
END IF;
END
$$
DELIMITER ;
Note that the COUNT operation may be slow on big InnoDB tables.
From MySQL 5.5 you can use the SIGNAL / RESIGNAL statements to raise an error.
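A sketch of that 5.5+ variant, assuming the same table1 and the same hypothetical 25-row cap as above (the trigger name is made up to avoid clashing with the one before):
DELIMITER $$
CREATE TRIGGER trigger1_signal
BEFORE INSERT
ON table1
FOR EACH ROW
BEGIN
DECLARE cnt INT;
SELECT COUNT(*) INTO cnt FROM table1;
IF cnt >= 25 THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Row limit reached for table1';
END IF;
END
$$
DELIMITER ;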
Create a table with 100,000 rows, pre-filling one of the fields with a "time-stamp" in the past.
Select the oldest record and update its "time-stamp" when "creating" (updating) a record.
Only use SELECT and UPDATE - never INSERT or DELETE.
A reverse index on the "time-stamp" field makes the select/update fast.
There is no way to limit the maximum number of rows in a MySQL table, unless you write a trigger to do it.
I'm just making up an answer off the top of my head. My assumption is that you want something like a 'bucket' where you put in records, and that you want to empty it before it hits a certain row count.
After an insert statement, run SELECT LAST_INSERT_ID();, which will get you the auto-increment id of the record. Yes, you still have to run an extra query, but it's cheap. Once you reach a certain count, truncate the table and reset the auto-increment id.
Otherwise you can't have a 'capped' table in MySQL, as you would need pre-defined actions (do we reject the record? do we truncate the table? etc.).
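A sketch of that bucket pattern (the bucket table and payload column are hypothetical; the cap check lives in application code):
INSERT INTO bucket (payload) VALUES ('...');
SELECT LAST_INSERT_ID(); -- compare this against the cap in application code
-- once it crosses 100000:
TRUNCATE TABLE bucket; -- empties the table and resets the AUTO_INCREMENT counter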

Is it possible to insert a new row at top of MySQL table?

All rows in a MySQL table are inserted like this:
1
2
3
Is there any way to insert a new row at the top of the table so that the table looks like this?
3
2
1
Yes, yes, I know "order by", but let me explain the problem. I have a dating website and users can search profiles by sex, age, city, etc. There are more than 20 search criteria and it's not possible to create indexes for each possible combination. So, if I use "order by", the search usually ends with "using temporary, using filesort" and this causes a very high server load. If I remove "order by", the oldest profiles are shown first and users have to go to the last page to see the new profiles. That's very bad because the first pages of search results always look the same and users have a feeling that there are no new profiles. That's why I asked this question. If it's not possible to insert the last row at the top of the table, can you suggest anything else?
The order in which the results are returned when there's no ORDER BY clause depends on the RDBMS. In the case of MySQL, or at least most of its engines, if you don't explicitly specify the order it will be ascending, from oldest to newest entries. Where the row is located "physically" doesn't matter. I'm not sure all MySQL engines work that way, though. For example, in PostgreSQL the "default" order shows the most recently updated rows first. This might be the way some of the MySQL engines work too.
Anyway, the point is - if you want the results ordered, always specify the sort order; don't just depend on some default that seems to work. In your case you want something trivial - the users in descending order - so just use:
SELECT * FROM users ORDER BY id DESC
I think you just need to make sure that if you always need to show the latest data first, all of your indexes need to specify the date/time field first, and all of your queries order by that field first.
If ORDER BY is slowing everything down then you need to optimise your queries or your database structure, I would say.
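A sketch of that advice, with a hypothetical profiles table and created_at column - the index lets the ORDER BY be served without a filesort:
ALTER TABLE profiles ADD INDEX idx_created_at (created_at);
SELECT * FROM profiles
WHERE city = 'Prague' -- whatever search criteria apply
ORDER BY created_at DESC
LIMIT 20;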
Maybe if you add the id 'by hand' and give it a negative value, but I (and probably nobody else) would recommend you do that:
Regular insert, e.g.
insert into t values (...);
Update with set, e.g.
update t set id = -id where id = last_insert_id();
Normally you specify an auto-incrementing primary key.
However, you can just specify the primary key like so:
CREATE TABLE table1 (
id INT PRIMARY KEY DEFAULT 1, -- no auto_increment, but has a default value
other fields .....
);
Now add a BEFORE INSERT trigger that changes the primary key.
DELIMITER $$
CREATE TRIGGER ai_table1_each BEFORE INSERT ON table1 FOR EACH ROW
BEGIN
DECLARE new_id INTEGER;
SELECT COALESCE(MIN(id), 0) -1 INTO new_id FROM table1;
SET NEW.id = new_id;
END $$
DELIMITER ;
Now your id will start at -1 and run down from there.
The insert trigger will make sure no concurrency problems occur.
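A quick check of the trigger's behavior (other_col is a hypothetical column):
INSERT INTO table1 (other_col) VALUES ('first');
INSERT INTO table1 (other_col) VALUES ('second');
SELECT id, other_col FROM table1 ORDER BY id;
-- returns -2 'second', then -1 'first': the newest row sorts first in plain ascending order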
I know that a lot of time has passed since the above question was asked. But I have something to add to the comments:
I'm using MySQL version: 5.7.18-0ubuntu0.16.04.1
When no ORDER BY clause is used with SELECT, it is noticeable that records are displayed, regardless of the order in which they were added, in the table's Primary Key sequence.

Quickest way to delete enormous MySQL table

I have an enormous MySQL (InnoDB) database with millions of rows in the sessions table that were created by an unrelated, malfunctioning crawler running on the same server as ours. Unfortunately, I have to fix the mess now.
If I try to truncate table sessions; it seems to take an inordinately long time (upwards of 30 minutes). I don't care about the data; I just want to have the table wiped out as quickly as possible. Is there a quicker way, or will I have to just stick it out overnight?
(As this turned up high in Google's results, I thought a little more instruction might be handy.)
MySQL has a convenient way to create empty tables like existing tables, and an atomic table rename command. Together, this is a fast way to clear out data:
CREATE TABLE new_foo LIKE foo;
RENAME TABLE foo TO old_foo, new_foo TO foo;
DROP TABLE old_foo;
Done
The quickest way is to use DROP TABLE to drop the table completely and recreate it using the same definition. If you have no foreign key constraints on the table then you should do that.
If you're using MySQL version greater than 5.0.3, this will happen automatically with a TRUNCATE. You might get some useful information out of the manual as well, it describes how a TRUNCATE works with FK constraints. http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html
EDIT: TRUNCATE is not the same as a drop or a DELETE FROM. For those that are confused about the differences, please check the manual link above. TRUNCATE will act the same as a drop if it can (if there are no FK's), otherwise it acts like a DELETE FROM with no where clause.
EDIT: If you have a large table, your MariaDB/MySQL is running with a binlog_format as ROW and you execute a DELETE without a predicate/WHERE clause, you are going to have issues to keep up the replication or even, to keep your Galera nodes running without hitting a flow control state. Also, binary logs can get your disk full. Be careful.
The best way I have found of doing this with MySQL is:
DELETE from table_name LIMIT 1000;
Or 10,000 (depending on how fast it happens).
Put that in a loop until all the rows are deleted.
Please do try this as it will actually work. It will take some time, but it will work.
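A sketch of that loop as a stored procedure (table_name is a placeholder; ROW_COUNT() reports how many rows the last DELETE removed):
DELIMITER $$
CREATE PROCEDURE purge_table_in_batches()
BEGIN
REPEAT
DELETE FROM table_name LIMIT 1000;
UNTIL ROW_COUNT() = 0 END REPEAT;
END
$$
DELIMITER ;
CALL purge_table_in_batches();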
Couldn't you grab the schema, drop the table, and recreate it?
DROP TABLE should be the fastest way to get rid of it.
Have you tried to use "drop"? I've used it on tables over 20GB and it always completes in seconds.
If you just want to get rid of the table altogether, why not simply drop it?
Truncate is fast, usually on the order of seconds or less. If it took 30 minutes, you probably had a case of some foreign keys referencing the table you were truncating. There may also be locking issues involved.
Truncate is about as efficient a way to empty a table as there is, but you may have to remove the foreign key references first, unless you want the referencing tables scrubbed as well.
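For reference, dropping and re-adding the blocking foreign key might look like this (all names are hypothetical):
ALTER TABLE child_table DROP FOREIGN KEY fk_child_sessions;
TRUNCATE TABLE sessions;
ALTER TABLE child_table ADD CONSTRAINT fk_child_sessions
FOREIGN KEY (session_id) REFERENCES sessions (id);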
We had these issues. We no longer use the database as a session store with Rails 2.x; we use the cookie store. However, dropping the table is a decent solution. You may want to consider stopping the mysql service, temporarily disabling logging, starting things up in safe mode and then doing your drop/create. When done, turn logging back on.
I'm not sure why it's taking so long. But perhaps try a rename, and recreate a blank table. Then you can drop the "extra" table without worrying how long it takes.
searlea's answer is nice, but as stated in the comments, you lose the foreign keys in the process.
This solution is similar: the truncate is executed within a second, but you keep the foreign keys.
The trick is that we disable/enable the FK checks.
SET FOREIGN_KEY_CHECKS=0;
CREATE TABLE NewFoo LIKE Foo;
insert into NewFoo SELECT * from Foo where What_You_Want_To_Keep;
truncate table Foo;
insert into Foo SELECT * from NewFoo;
SET FOREIGN_KEY_CHECKS=1;
Extended answer - Delete all but some rows
My problem was: because of a crazy script, my table was filled with 7,000,000 junk rows. I needed to delete 99% of the data in this table, which is why I needed to copy What I Want To Keep into a tmp table before deleting.
The Foo rows I needed to keep depended on other tables, which have foreign keys and indexes.
Something like this:
insert into NewFoo SELECT * from Foo where ID in (
SELECT distinct FooID from TableA
union SELECT distinct FooID from TableB
union SELECT distinct FooID from TableC
)
but this query was always timing out after 1 hour.
So I had to do it like this:
CREATE TEMPORARY TABLE tmpFooIDS ENGINE=MEMORY AS (SELECT distinct FooID from TableA);
insert into tmpFooIDS SELECT distinct FooID from TableB;
insert into tmpFooIDS SELECT distinct FooID from TableC;
insert into NewFoo SELECT * from Foo where ID in (select ID from tmpFooIDS);
In theory, because the indexes were set up correctly, I think both ways of populating NewFoo should have performed the same, but in practice they didn't.
This is why, in some cases, you could do it like this:
SET FOREIGN_KEY_CHECKS=0;
CREATE TABLE NewFoo LIKE Foo;
-- Alternative way of keeping some data.
CREATE TEMPORARY TABLE tmpFooIDS ENGINE=MEMORY AS (SELECT * from Foo where What_You_Want_To_Keep);
insert into tmpFooIDS SELECT ID from Foo left join Bar where OtherStuff_You_Want_To_Keep_Using_Bar;
insert into NewFoo SELECT * from Foo where ID in (select ID from tmpFooIDS);
truncate table Foo;
insert into Foo SELECT * from NewFoo;
SET FOREIGN_KEY_CHECKS=1;

How to insert the value derived from AUTO_INCREMENT into another column in the same INSERT query?

I have an id column which is a primary key with AUTO_INCREMENT. I need the value that is generated to be inserted into the id column, as well as into another column (which isn't set to AUTO_INCREMENT, and isn't unique).
Currently I use the mysql_insert_id() function to get the id and simply run an update query after the insert, but I was wondering if I could do this without running the second update query.
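For reference, that two-query pattern in plain SQL (myTable, some_col and copy_of_id are hypothetical names):
INSERT INTO myTable (some_col) VALUES ('x');
UPDATE myTable SET copy_of_id = id WHERE id = LAST_INSERT_ID();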
An AFTER INSERT trigger?
If I recall correctly, the automatically generated ID isn't even created until after the insert has been performed. Your two-query way is probably the only one, short of perhaps diving into a stored procedure.
You could define a trigger along the lines of:
delimiter //
CREATE TRIGGER upd_check AFTER INSERT ON mainTable
FOR EACH ROW
BEGIN
UPDATE dependingTable
SET dependingTable.column = NEW.id;
END;//
delimiter ;
I am not exactly sure WHEN the AUTO_INCREMENT value is generated, but you could try the following, since if it works it'll save you an update (if the column you want the value replicated to is in the same row as the inserted row):
CREATE TRIGGER upd_check BEFORE INSERT ON mainTable
FOR EACH ROW
SET NEW.column = NEW.id
The only way I can see you doing it with a single query is to use the information schema. In information_schema there is a table called TABLES, where you can access the AUTO_INCREMENT column. That contains the NEXT insert id for that table. You can access this via a nested select; just give the user used to connect to the database read access to that table. This will only work with InnoDB engines as far as I can tell, since that way the nested select used to populate the second id field will be part of the greater transaction of the insert.
Here's what your query might look like:
INSERT INTO fooTable VALUES (0, (SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'fooTable'));
Also if you're worried about read access and security issues, just remember this is the same info you can get by running a show table status. Speaking of which, I tried to see if you could project the show commands/queries via a select and you can't, which totally sucks, because that would have been a much cleaner solution.
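For reference, that metadata check looks like this; the next insert id appears in the Auto_increment column of the output:
SHOW TABLE STATUS LIKE 'fooTable';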