update a random row from a table - mysql

I've got this code here:
UPDATE starinformation SET starOwner = -1 WHERE (
SELECT starinformation.starID
ORDER BY RAND( )
LIMIT 1
)
I want to update a random row in a table with information pulled from that same table, and affect only 1 row.

There are a lot of problems with this approach:
MySQL does not allow a modifying outer query (UPDATE/DELETE) to select from the same table in a subquery
ORDER BY RAND() LIMIT 1 is a major performance killer: It will create an intermediate result of ALL rows - i.e. if your table contains 1 megarows, you will touch every single one of them.
My recommendation is to
always have a sortable single-field PK
Run SELECT FLOOR(RAND()*COUNT(*)) AS num FROM starinformation, returning $num (FLOOR keeps the offset between 0 and COUNT(*)-1)
Run SELECT [pk-field] FROM starinformation ORDER BY [pk-field] LIMIT $num,1, returning $pk (MySQL's UPDATE accepts a row count but no offset in its LIMIT clause, so fetch the key first)
Run UPDATE starinformation SET starOwner = -1 WHERE [pk-field] = $pk
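If you do want this as a single statement despite the RAND() sort cost, the usual workaround for the same-table restriction is to materialize the pick in a derived table. A sketch, assuming starID is the primary key:
UPDATE starinformation AS s
JOIN (
    -- the derived table is materialized first, so the same-table restriction does not apply
    SELECT starID FROM starinformation ORDER BY RAND() LIMIT 1
) AS pick ON pick.starID = s.starID
SET s.starOwner = -1;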


Generate random sample from huge table, with conditions

I have a table (>500GB) from which I need to select 5000 random rows where table.condition = True and 5000 random rows where table.condition = False. My attempts until now used tablesample, but, unfortunately, any WHERE clause is only applied after the sample has been generated. So the only way I see this working is by doing the following:
Generate 2 empty temporary_tables -- temporary_table_true and temporary_table_false -- with the structure of the main table, so I can add rows iteratively.
create temp table temporary_table_true as select
table.condition, table.b, table.c, ... table.z
from table LIMIT 0;
create temp table temporary_table_false as select
table.condition, table.b, table.c, ... table.z
from table LIMIT 0;
Create a loop that only stops when both of my temporary_tables contain 5000 rows.
Inside that loop I generate a batch of 100 random samples from table in each iteration. From those random rows I insert the ones with table.condition = True into my temporary_table_true and the ones with table.condition = False into my temporary_table_false.
Could you guys give me some help here?
Are there any better approaches?
If not, any idea on how I could code parts 2. and 3.?
Add a column to your table and populate it with random numbers.
ALTER TABLE `table` ADD COLUMN rando FLOAT DEFAULT NULL;
UPDATE `table` SET rando = RAND() WHERE rando IS NULL;
Then do
SELECT *
FROM `table`
WHERE rando > RAND() * 0.9
AND condition = 0
ORDER BY rando
LIMIT 5000
Do it again for condition = 1 and Bob's your uncle. It will pull rows in random order starting from a random row.
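For instance, the second pull is the same query with the condition flipped:
SELECT *
FROM `table`
WHERE rando > RAND() * 0.9
AND condition = 1
ORDER BY rando
LIMIT 5000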
A couple of notes:
0.9 is there to improve the chances you'll actually get 5000 rows and not some lesser number.
You may have to add LIMIT 1000 to the UPDATE statement and run it a whole bunch of times to populate the complete rando column: trying to update all the rows in a big table can generate a huge transaction and swamp your server for a long time.
If you need to generate another random sample, run the UPDATE or UPDATEs again.
The textbook solution would be to run two queries, one for rows with `true` and one for rows with `false`:
SELECT * FROM mytable WHERE `condition`=true ORDER BY RAND() LIMIT 5000;
SELECT * FROM mytable WHERE `condition`=false ORDER BY RAND() LIMIT 5000;
The WHERE clause applies first, to reduce the matching rows, then it sorts the subset of rows randomly and picks up to 5000 of them. The result is a random subset.
This solution has the advantage that it returns a pretty evenly distributed set of random rows, it automatically handles cases like an unknown proportion of true to false in the table, and it even handles the case where one of the condition values matches fewer than 5000 rows.
The disadvantage is that it's incredibly expensive to sort such a large set of rows, and an index does not help you sort by a nondeterministic expression like RAND().
You could do this with window functions if you need it to be a single SQL query, but it would still be very expensive.
SELECT t.*
FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY `condition` ORDER BY RAND()) AS rownum
FROM mytable
) AS t
WHERE t.rownum <= 5000;
Another alternative that does not use a random sort operation would be to do a table-scan, and pick a random subset of rows. But you need to know roughly how many rows match each condition value, so that you can estimate the fraction of these that would make ~5000 rows. Say for example there are 1 million rows with the true value and 500k rows with the false value:
SELECT * FROM mytable WHERE `condition`=true AND RAND()*1000000 < 5000;
SELECT * FROM mytable WHERE `condition`=false AND RAND()*500000 < 5000;
This is not guaranteed to return exactly 5000 rows, because of the randomness. But probably pretty close. And a table-scan is still quite expensive.
The answer from O.Jones gives me another idea. If you can add a column, then you can add an index on that column.
ALTER TABLE mytable
ADD COLUMN rando FLOAT DEFAULT NULL,
ADD INDEX (`condition`, rando);
UPDATE mytable SET rando = RAND() WHERE rando IS NULL;
Then you can use indexed searches. Again, you need to know how many rows match each value to do this.
SELECT * FROM mytable
WHERE `condition`=true AND rando < 5000/1000000
ORDER BY `condition`, rando
LIMIT 5000;
SELECT * FROM mytable
WHERE `condition`=false AND rando < 5000/500000
ORDER BY `condition`, rando
LIMIT 5000;
The ORDER BY in this case should be a no-op if the index I added is used. The rows will be read in index order anyway, and MySQL's optimizer will not do any work to sort them.
This solution will be much faster, because it doesn't have to sort anything, and doesn't have to do a table-scan. MySQL has an optimization to bail out of a query once the LIMIT has been satisfied.
But the disadvantage is that it doesn't return a different random result when you run the SELECT again, or if different clients run the query. You would have to use UPDATE to re-randomize the whole table to get a different result. This might not be suitable depending on your needs.
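The re-randomizing step in that case is simply the UPDATE over all rows:
UPDATE mytable SET rando = RAND();
Run it whenever you need a fresh sample, then repeat the SELECTs above.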

best way for select row mysql

Which of the following notations is better?
SELECT id,name,data FROM table WHERE id = X
OR
SELECT id,name,data FROM table WHERE id = X LIMIT 1
I think it should not have "LIMIT".
Thanks!
If there is a unique constraint on id then they will be exactly the same.
If there isn't a unique constraint (which I would find highly surprising on a column called id) then which is better depends on what you want to do:
If you want to find all rows matching the condition, don't use LIMIT.
If you want to find any row matching the condition (and you don't care which), use LIMIT 1.
Always use LIMIT with a SELECT statement when you are fetching a single record, because it lets the server stop early. So use:
SELECT id,name,data FROM table WHERE id = X LIMIT 1
For example:
If there are 1000 records in your table and id is not indexed, then with
SELECT id,name,data FROM table WHERE id = X
the server will traverse all 1000 records even after it finds that id.
But if you use LIMIT like this
SELECT id,name,data FROM table WHERE id = X LIMIT 1
then it will stop executing when it finds the first record.
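As the first answer notes, a unique constraint makes the two forms identical. If id is meant to identify a single row, the real fix is an index; a sketch (the index name is illustrative):
ALTER TABLE `table` ADD UNIQUE INDEX idx_id (id);
With that in place the lookup stops at the first match whether or not you write LIMIT 1.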

Select all except top N in MySQL

I have a table with data that I don't need to keep for very long, so every night I want to remove all rows except the last 20.
To do that, I found the following query:
DELETE FROM Table WHERE ID NOT IN (
SELECT id FROM (
SELECT TOP 10 ID FROM Table
) AS x
)
MySQL doesn't support the TOP keyword, so I rewrote it to use LIMIT instead:
DELETE FROM Table WHERE ID NOT IN (
SELECT id FROM (
SELECT ID FROM Table ORDER BY ID DESC LIMIT 10
) AS x
)
Unfortunately, MySQL doesn't seem to support LIMIT within subqueries like this. So what do I do now?
How do I select all except the 10 rows with the highest ID?
I could probably just delete all the records that are older than a day or something, but it feels like I should be able to do it this way.
As MySQL has foibles when reading from the same table as you are deleting, the simplest option is often to use a temp table.
INSERT INTO yourTempTable
SELECT id FROM yourTable
ORDER BY id DESC LIMIT 10;

DELETE FROM yourTable WHERE id NOT IN (SELECT id FROM yourTempTable);
Or many variations thereof (using joins instead of IN, etc).
The main consideration isn't about how to write the second query, it's about race conditions.
Your data could be changed by another process between the temp table and the delete. If that is possible and matters, you need to wrap it all in a transaction and slap a table lock on yourTable.
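A sketch of that lock-wrapped variant (untested; names assumed from above, and a TEMPORARY table is used because temporary tables stay usable under LOCK TABLES):
LOCK TABLES yourTable WRITE;
CREATE TEMPORARY TABLE yourTempTable AS
    SELECT id FROM yourTable ORDER BY id DESC LIMIT 10;
DELETE FROM yourTable WHERE id NOT IN (SELECT id FROM yourTempTable);
DROP TEMPORARY TABLE yourTempTable;
UNLOCK TABLES;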
Another way:
DELETE t
FROM
TableX AS t
CROSS JOIN
( SELECT Id
FROM TableX
ORDER BY Id DESC
LIMIT 1 OFFSET 9
) AS tenth
WHERE t.Id < tenth.id
There are a few ways to do this; I would prefer this one:
create table tmp_table select * from your_table order by id desc limit 20;
truncate table your_table;
insert into your_table select * from tmp_table;
drop table tmp_table;
It seems very lengthy at first glance,
but personally I think it is understandable,
and very low risk (plus, it should be more efficient than doing a JOIN)
PS: TRUNCATE does reset the AUTO_INCREMENT counter, but re-inserting the kept rows with their original ids pushes it back above the highest kept id
PS2: this approach assumes there are no writes during the copy/truncate; you can take a table lock to preserve data integrity

How can I speed up a MySQL query with a large offset in the LIMIT clause?

I'm getting performance problems when LIMITing a mysql SELECT with a large offset:
SELECT * FROM table LIMIT m, n;
If the offset m is, say, larger than 1,000,000, the operation is very slow.
I do have to use limit m, n; I can't use something like id > 1,000,000 limit n.
How can I optimize this statement for better performance?
Perhaps you could create an indexing table which provides a sequential key relating to the key in your target table. Then you can join this indexing table to your target table and use a where clause to more efficiently get the rows you want.
#create table to store sequences
CREATE TABLE seq (
seq_no int not null auto_increment,
id int not null,
primary key(seq_no),
unique(id)
);
#create the sequence
TRUNCATE seq;
INSERT INTO seq (id) SELECT id FROM mytable ORDER BY id;
#now get 1000 rows from offset 1000000
SELECT mytable.*
FROM mytable
INNER JOIN seq USING(id)
WHERE seq.seq_no BETWEEN 1000000 AND 1000999;
If records are large, the slowness may be coming from loading the data. If the id column is indexed, then just selecting it will be much faster. You can then do a second query with an IN clause for the appropriate ids (or could formulate a WHERE clause using the min and max ids from the first query.)
slow:
SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 50000
fast:
SELECT id FROM table ORDER BY id DESC LIMIT 10 OFFSET 50000
SELECT * FROM table WHERE id IN (1,2,3...10)
There's a blog post somewhere on the internet about how the selection of the rows to show should be as compact as possible, thus: just the ids; producing the complete result should in turn fetch all the data you want for only the rows you selected.
Thus, the SQL might be something like (untested, I'm not sure it actually will do any good):
select A.* from table A
inner join (select id from table order by whatever limit m, n) B
on A.id = B.id
order by A.whatever
If your SQL engine is too primitive to allow this kind of SQL statement, or if it doesn't improve anything, it might be worthwhile to break this single statement into multiple statements and capture the ids into a data structure.
Update: I found the blog post I was talking about: it was Jeff Atwood's "All Abstractions Are Failed Abstractions" on Coding Horror.
I don't think there's any need to create a separate indexing table if your table already has a primary key. If so, then you can order by this primary key and then use values of the key to step through:
SELECT * FROM myBigTable WHERE id > :OFFSET ORDER BY id ASC;
Another optimisation would be not to use SELECT * but just the ID, so that it can simply read the index and doesn't have to then locate all the data (reducing IO overhead). If you need some of the other columns, then perhaps you could add these to the index so that they are read with the primary key (which will most likely be held in memory and therefore not require a disc lookup), although this will not be appropriate for all cases so you will have to have a play.
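A sketch of that covering-index idea (the name column is made up for illustration; with InnoDB the clustered primary key already contains all columns, so this matters mostly when a secondary index drives the scan):
ALTER TABLE myBigTable ADD INDEX idx_id_name (id, name);
SELECT id, name
FROM myBigTable
WHERE id > :OFFSET
ORDER BY id ASC
LIMIT 1000;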
Paul Dixon's answer is indeed a solution to the problem, but you'll have to maintain the sequence table and ensure that there are no row gaps.
If that's feasible, a better solution would be to simply ensure that the original table has no row gaps, and starts from id 1. Then grab the rows using the id for pagination.
SELECT * FROM table A WHERE id >= 1 AND id <= 1000;
SELECT * FROM table A WHERE id >= 1001 AND id <= 2000;
and so on...
I have run into this problem recently. There were two parts to the fix. First I had to use an inner select in my FROM clause that did my limiting and offsetting for me on the primary key only:
$subQuery = DB::raw("( SELECT id FROM titles WHERE id BETWEEN {$startId} AND {$endId} ORDER BY title ) as t");
Then I could use that as the from part of my query:
$query = DB::query() // assumed opening; the original snippet began with the column list
    ->select(
        'titles.id',
        'title_eisbns_concat.eisbns_concat',
        'titles.pub_symbol',
        'titles.title',
        'titles.subtitle',
        'titles.contributor1',
        'titles.publisher',
        'titles.epub_date',
        'titles.ebook_price',
        'publisher_licenses.id as pub_license_id',
        'license_types.shortname',
        $coversQuery
    )
    ->from($subQuery)
    ->leftJoin('titles', 't.id', '=', 'titles.id')
    ->leftJoin('organizations', 'organizations.symbol', '=', 'titles.pub_symbol')
    ->leftJoin('title_eisbns_concat', 'titles.id', '=', 'title_eisbns_concat.title_id')
    ->leftJoin('publisher_licenses', 'publisher_licenses.org_id', '=', 'organizations.id')
    ->leftJoin('license_types', 'license_types.id', '=', 'publisher_licenses.license_type_id');
The first time I created this query I had used OFFSET and LIMIT in MySQL. This worked fine until I got past page 100; then the offset started getting unbearably slow. Changing that to BETWEEN in my inner query sped it up for any page. I'm not sure why MySQL hasn't sped up OFFSET, but BETWEEN seems to reel it back in.

SQL query: Delete all records from the table except latest N?

Is it possible to build a single mysql query (without variables) to remove all records from the table, except latest N (sorted by id desc)?
Something like this, only it doesn't work :)
delete from table order by id ASC limit ((select count(*) from table ) - N)
Thanks.
You cannot delete the records that way, the main issue being that you cannot use a subquery to specify the value of a LIMIT clause.
This works (tested in MySQL 5.0.67):
DELETE FROM `table`
WHERE id NOT IN (
SELECT id
FROM (
SELECT id
FROM `table`
ORDER BY id DESC
LIMIT 42 -- keep this many records
) foo
);
The intermediate subquery is required. Without it we'd run into two errors:
SQL Error (1093): You can't specify target table 'table' for update in FROM clause - MySQL doesn't allow you to refer to the table you are deleting from within a direct subquery.
SQL Error (1235): This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery' - You can't use the LIMIT clause within a direct subquery of a NOT IN operator.
Fortunately, using an intermediate subquery allows us to bypass both of these limitations.
Nicole has pointed out this query can be optimised significantly for certain use cases (such as this one). I recommend reading that answer as well to see if it fits yours.
I know I'm resurrecting quite an old question, but I recently ran into this issue, but needed something that scales to large numbers well. There wasn't any existing performance data, and since this question has had quite a bit of attention, I thought I'd post what I found.
The solutions that actually worked were Alex Barrett's double sub-query/NOT IN method (similar to Bill Karwin's) and Quassnoi's LEFT JOIN method.
Unfortunately both of the above methods create very large intermediate temporary tables and performance degrades quickly as the number of records not being deleted gets large.
What I settled on utilizes Alex Barrett's double sub-query (thanks!) but uses <= instead of NOT IN:
DELETE FROM `test_sandbox`
WHERE id <= (
SELECT id
FROM (
SELECT id
FROM `test_sandbox`
ORDER BY id DESC
LIMIT 1 OFFSET 42 -- keep this many records
) foo
);
It uses OFFSET to get the id of the Nth record and deletes that record and all previous records.
Since ordering is already an assumption of this problem (ORDER BY id DESC), <= is a perfect fit.
It is much faster, since the temporary table generated by the subquery contains just one record instead of N records.
Test case
I tested the three working methods and the new method above in two test cases.
Both test cases use 10000 existing rows, while the first test keeps 9000 (deletes the oldest 1000) and the second test keeps 50 (deletes the oldest 9950).
+-----------+------------------------+----------------------+
| | 10000 TOTAL, KEEP 9000 | 10000 TOTAL, KEEP 50 |
+-----------+------------------------+----------------------+
| NOT IN | 3.2542 seconds | 0.1629 seconds |
| NOT IN v2 | 4.5863 seconds | 0.1650 seconds |
| <=,OFFSET | 0.0204 seconds | 0.1076 seconds |
+-----------+------------------------+----------------------+
What's interesting is that the <= method sees better performance across the board, but actually gets better the more you keep, instead of worse.
Unfortunately for all the answers given by other folks, you can't DELETE and SELECT from a given table in the same query.
DELETE FROM mytable WHERE id NOT IN (SELECT MAX(id) FROM mytable);
ERROR 1093 (HY000): You can't specify target table 'mytable' for update
in FROM clause
Nor can MySQL support LIMIT in a subquery. These are limitations of MySQL.
DELETE FROM mytable WHERE id NOT IN
(SELECT id FROM mytable ORDER BY id DESC LIMIT 1);
ERROR 1235 (42000): This version of MySQL doesn't yet support
'LIMIT & IN/ALL/ANY/SOME subquery'
The best answer I can come up with is to do this in two stages:
SELECT id FROM mytable ORDER BY id DESC LIMIT n;
Collect the id's and make them into a comma-separated string:
DELETE FROM mytable WHERE id NOT IN ( ...comma-separated string... );
(Normally interpolating a comma-separated list into an SQL statement introduces some risk of SQL injection, but in this case the values are not coming from an untrusted source; they are known to be integer values from the database itself.)
note: Though this doesn't get the job done in a single query, sometimes a more simple, get-it-done solution is the most effective.
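For completeness, the same two-stage idea can stay entirely inside MySQL using GROUP_CONCAT and a prepared statement. A sketch, assuming integer ids and a non-empty table:
SELECT GROUP_CONCAT(id) INTO @keep
FROM (SELECT id FROM mytable ORDER BY id DESC LIMIT 10) AS k;
-- build and run the DELETE with the id list inlined
SET @sql = CONCAT('DELETE FROM mytable WHERE id NOT IN (', @keep, ')');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;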
DELETE i1.*
FROM items i1
LEFT JOIN
(
SELECT id
FROM items ii
ORDER BY
id DESC
LIMIT 20
) i2
ON i1.id = i2.id
WHERE i2.id IS NULL
If your id is incremental then use something like
delete from `table` where id < (select maxid - N from (select max(id) as maxid from `table`) as x)
(the extra derived table works around MySQL's refusal to select from the table being deleted from)
To delete all the records except the last N you may use the query reported below.
It's a single query but with many statements so it's actually not a single query the way it was intended in the original question.
Also you need a variable and a built-in (in the query) prepared statement, because MySQL does not allow a variable directly in a LIMIT clause.
Hope it may be useful anyway...
nnn are the rows to keep and theTable is the table you're working on.
I'm assuming you have an auto-incrementing column named id
SELECT @ROWS_TO_DELETE := COUNT(*) - nnn FROM `theTable`;
SELECT @ROWS_TO_DELETE := IF(@ROWS_TO_DELETE<0,0,@ROWS_TO_DELETE);
PREPARE STMT FROM "DELETE FROM `theTable` ORDER BY `id` ASC LIMIT ?";
EXECUTE STMT USING @ROWS_TO_DELETE;
The good thing about this approach is performance: I've tested the query on a local DB with about 13,000 records, keeping the last 1,000. It runs in 0.08 seconds.
The script from the accepted answer...
DELETE FROM `table`
WHERE id NOT IN (
SELECT id
FROM (
SELECT id
FROM `table`
ORDER BY id DESC
LIMIT 42 -- keep this many records
) foo
);
Takes 0.55 seconds. About 7 times more.
Test environment: mySQL 5.5.25 on a late 2011 i7 MacBookPro with SSD
DELETE FROM `table` WHERE ID NOT IN
(SELECT maxid FROM (SELECT MAX(ID) AS maxid FROM `table`) AS x)
(this keeps only the single newest row; the intermediate derived table avoids the same-table error)
Try the query below:
DELETE FROM tablename WHERE id < (SELECT * FROM (SELECT (MAX(id)-10) FROM tablename ) AS a)
the inner sub query returns MAX(id)-10, and the outer query deletes all the records below that value, keeping the top 10 (assuming ids without gaps).
What about:
SELECT del.id FROM table del
LEFT JOIN table keep
ON del.id < keep.id
GROUP BY del.id HAVING count(keep.id) >= N;
It returns the rows that have at least N newer rows, i.e. exactly the ones to delete.
Could be useful?
Using id for this task is not an option in many cases, for example a table with twitter statuses. Here is a variant with a specified timestamp field, keeping the newest 150000 rows:
delete from table
where access_time <=
(
select access_time from
(
select access_time from table
order by access_time desc limit 150000,1
) foo
)
Just wanted to throw this into the mix for anyone using Microsoft SQL Server instead of MySQL. The keyword 'Limit' isn't supported by MSSQL, so you'll need to use an alternative. This code worked in SQL 2008, and is based on this SO post. https://stackoverflow.com/a/1104447/993856
-- Keep the last 10 most recent passwords for this user.
DECLARE @UserID int; SET @UserID = 1004
DECLARE @ThresholdID int -- Position of 10th password.
SELECT @ThresholdID = UserPasswordHistoryID FROM
(
SELECT ROW_NUMBER()
OVER (ORDER BY UserPasswordHistoryID DESC) AS RowNum, UserPasswordHistoryID
FROM UserPasswordHistory
WHERE UserID = @UserID
) sub
WHERE (RowNum = 10) -- Keep this many records.
DELETE UserPasswordHistory
WHERE (UserID = @UserID)
AND (UserPasswordHistoryID < @ThresholdID)
Admittedly, this is not elegant. If you're able to optimize this for Microsoft SQL, please share your solution. Thanks!
If you need to delete the records based on some other column as well, then here is a solution:
DELETE
FROM articles
WHERE id IN
(SELECT id
FROM
(SELECT id
FROM articles
WHERE user_id = :userId
ORDER BY created_at DESC LIMIT 500, 10000000) abc)
AND user_id = :userId
This should work as well:
DELETE `table` FROM `table`
LEFT JOIN (
SELECT id
FROM (
SELECT id
FROM `table`
ORDER BY id DESC
LIMIT N
) AS Temp
) AS Temp2 ON `table`.id = Temp2.id
WHERE Temp2.id IS NULL
DELETE FROM `table` WHERE id NOT IN (
SELECT id FROM (
SELECT id FROM `table` ORDER BY id DESC LIMIT 0, 10
) AS keep_rows
)
Stumbled across this and thought I'd update.
This is a modification of something that was posted before. I would have commented, but unfortunately I don't have 50 reputation...
LOCK TABLES TestTable WRITE, TestTable AS TestTableRead READ;
DELETE FROM TestTable
WHERE ID <= (
SELECT ID
FROM TestTable AS TestTableRead -- (the 'AS' declaration is required for some reason)
ORDER BY ID DESC
LIMIT 1 OFFSET 42 -- keep this many records
);
UNLOCK TABLES;
Using WHERE with <= and OFFSET avoids the NOT IN sub-query and its LIMIT restriction.
You also cannot normally read from and write to the same table in the same query, as you may modify entries as they're being used. The locks work around this, and they also make it safe for parallel access to the database by other processes.
For performance and further explanation see the linked answer.
Tested with mysql Ver 15.1 Distrib 10.5.18-MariaDB
For further details on locks, see here
Why not
DELETE FROM table ORDER BY id DESC LIMIT 1, 123456789
Just delete all but the first row (order is DESC!), using a very, very large number as the second LIMIT argument. See here
(Beware: MySQL's DELETE accepts only a single row count in its LIMIT clause, with no offset, so stock MySQL rejects this form.)
Answering this after a long time... I came across the same situation and, instead of using the answers mentioned, came up with the below:
DELETE FROM table_name ORDER BY ID LIMIT 10
This will delete the first 10 records (the lowest ids) and keep the latest records.