I need to select, say, 2,000,000 records at random from a very large database. I have looked at previous questions, so please do not mark this question as a duplicate; I need clarification. Most answers suggest using the ORDER BY RAND() function, so my query would be:
SELECT DISTINCT no
FROM table
WHERE name != "null"
ORDER BY RAND()
LIMIT 2000000;
I want each record to be selected at random. I am not sure I understand the effect of ORDER BY RAND() here, but I am afraid it will pick a random starting record, say 3498, and then continue the selection from there, so the next records would be 3499, 3500, 3501, etc.
I want each record to be random, not to start the order from a random record.
How can I select 2,000,000 records where each record is selected at random? Can you explain in simple terms what exactly ORDER BY RAND() does?
Note that I use Google BigQuery, so performance should not be a big problem here. I just want to achieve the requirement of selecting 2,000,000 random records.
SELECT x
FROM T
ORDER BY RAND()
is equivalent to
SELECT x
FROM (
SELECT x, RAND() AS r
FROM T
)
ORDER BY r
The query generates a random value for each row, then uses that random value to order the rows. If you include a limit:
SELECT x
FROM T
ORDER BY RAND()
LIMIT 10
This randomly selects 10 rows from the table.
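Applied to the query in the question, the same rewrite looks roughly like this (a sketch reusing the question's table and column names; the DISTINCT is pushed into an inner subquery so duplicates are removed before each remaining value gets its own random key):
SELECT no
FROM (
  SELECT no, RAND() AS r
  FROM (SELECT DISTINCT no FROM table WHERE name != "null") AS d
) AS s
ORDER BY r
LIMIT 2000000;
Each distinct no value gets one random number, the values are ordered by it, and the first 2,000,000 are kept, so every returned record is chosen at random rather than being a contiguous block.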
What I'm trying to achieve is to return a random sample of x size from a dataset, then order it based on a column. This is what I have tried:
SELECT *
FROM Table
WHERE integerField > 0
ORDER BY RAND(), integerField DESC
LIMIT 100
The idea here is that it will first order the table randomly, effectively shuffling it, and then order the first 100 rows returned by integerField. I believe the problem is that it does not apply the limit before the ordering, so I'm either going to get 100 random rows back or the first 100 rows of the table ordered by the integer field (in this example, it's the former).
Is there a way to achieve this in a single query, or will the output have to be manually parsed through external logic/additional queries?
Solution: Utilise a subquery to collect the initial randomised sample, then order it:
(SELECT * FROM Table
WHERE integerField > 0
ORDER BY RAND() LIMIT 100)
ORDER BY integerField DESC
Credit to Akina and jarlh for the pointer to use a subquery.
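The same idea can also be written with an explicit derived table, which some engines require when a subquery is ordered afterwards (a sketch using the question's names; the alias s is illustrative):
SELECT s.*
FROM (
  SELECT * FROM Table
  WHERE integerField > 0
  ORDER BY RAND()
  LIMIT 100
) AS s
ORDER BY s.integerField DESC;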
I'm currently working on a multi-threaded program (in Java) that needs to select random rows in a database in order to update them. This is working well, but I have started to encounter some performance issues with my SELECT query.
I tried multiple solutions before finding this website:
http://jan.kneschke.de/projects/mysql/order-by-rand/
I tried the following solution:
SELECT * FROM Table
JOIN (SELECT FLOOR( COUNT(*) * RAND() ) AS Random FROM Table)
AS R ON Table.ID > R.Random
WHERE Table.FOREIGNKEY_ID IS NULL
LIMIT 1;
It selects a single row whose id is above the randomly generated number. This works pretty well (an average of less than 100 ms per request on 150k rows). But after my program processes a row, its FOREIGNKEY_ID will no longer be NULL (it will be updated with some value).
The problem is that my SELECT will "forget" some rows that have an id below the randomly generated id, and I won't be able to process them.
So I tried to adapt my query, like this:
SELECT * FROM Table
JOIN (SELECT FLOOR(
(SELECT COUNT(id) FROM Table WHERE FOREIGNKEY_ID IS NULL) * RAND() )
AS Random FROM Table)
AS R ON Table.ID > R.Random
WHERE Table.FOREIGNKEY_ID IS NULL
LIMIT 1;
With that query, there is no more problem of skipping rows, but performance decreases drastically (an average of 1 s per request on 150k rows).
I could simply execute the fast one while I still have a lot of rows to process, and switch to the slow one when only a few rows remain, but that would be a "dirty" fix in the code, and I would prefer an elegant SQL query that can do the job.
Thank you for your help, please let me know if I'm not clear or if you need more details.
For your method to work more generally, you want max(id) rather than count(*):
SELECT t.*
FROM Table t JOIN
(SELECT FLOOR(MAX(id) * RAND() ) AS Random FROM Table) r
ON t.ID > r.Random
WHERE t.FOREIGNKEY_ID IS NULL
ORDER BY t.ID
LIMIT 1;
The ORDER BY is usually added to be sure that the "next" id is returned; without it, MySQL could in theory always return the row with the maximum id in the table.
The problem is gaps in the ids. It is easy to create distributions where certain rows are never returned: say the four ids are 1, 2, 3, and 1000. Your method will never get 1000. The version above will almost always get it.
Perhaps the simplest solution to your problem is to run the first query multiple times until it gets a valid row. The next suggestion would be an index on (FOREIGNKEY_ID, ID), which the subquery can use. That might speed up the query.
I tend to favor something more along these lines:
SELECT t.id
FROM Table t
WHERE t.FOREIGNKEY_ID IS NULL AND
RAND() < 1.0 / 1000
ORDER BY RAND()
LIMIT 1;
The purpose of the WHERE clause is to reduce the volume considerably, so the ORDER BY doesn't take much time.
Unfortunately, this will require scanning the table, so you probably won't get responses in the 100 ms range on a 150k table. You can reduce that to an index scan with an index on t(FOREIGNKEY_ID, ID).
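That index could be created along these lines (a sketch; the index name is illustrative, and the table and column names are taken from the question):
CREATE INDEX idx_foreignkey_id ON `Table` (FOREIGNKEY_ID, ID); -- backticks because TABLE is a reserved word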
EDIT:
If you want a reasonable chance of a uniform distribution and performance that does not increase as the table gets larger, here is another idea, which -- alas -- requires a trigger.
Add a new column to the table called random, which is initialized with rand(). Build an index on random. Then run a query such as:
select t.*
from ((select t.*
       from t
       where random >= @random
       order by random
       limit 10
      ) union all
      (select t.*
       from t
       where random < @random
       order by random desc
       limit 10
      )
     ) t
order by rand()
limit 1;
The idea is that the subqueries can use the index to choose a set of 20 rows that are pretty arbitrary -- 10 before and after the chosen point. The rows are then sorted (some overhead, which you can control with the limit number). These are randomized and returned.
The idea is that if you choose random numbers, there will be arbitrary gaps, and these would make the chosen rows not quite uniform. However, by taking a larger sample around the chosen value, the probability of any one row being selected should approach a uniform distribution. The uniformity would still have edge effects, but these should be minor on a large amount of data.
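For completeness, the column, index, and trigger this approach relies on could be set up roughly like this (a sketch assuming MySQL and the table name t from the query above; the index and trigger names are illustrative):
ALTER TABLE t ADD COLUMN random DOUBLE;
UPDATE t SET random = RAND();            -- initialize existing rows
CREATE INDEX idx_t_random ON t (random);
CREATE TRIGGER t_random_bi BEFORE INSERT ON t
FOR EACH ROW SET NEW.random = RAND();    -- keep new rows populated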
Your IDs are probably going to contain gaps. Anything that works with COUNT(*) is not going to be able to find all the IDs.
A table with records with IDs 1, 2, 3, 10, 11, 12, 13 has only 7 records. Doing a random pick with COUNT(*) will often result in a miss, since records 4, 5 and 6 do not exist, and the query will then pick the nearest existing ID above the random value, which is 10. This is not only unbalanced (it will pick 10 far too often) but it will also never pick records 11-13.
To get a fair, uniformly distributed random selection of records, I would suggest loading the IDs of the table first. Even for 150k rows, loading a set of integer ids will not consume a lot of memory (<1 MB):
SELECT id FROM table;
You can then use a function like Collections.shuffle to randomize the order of the IDs. To get the rest of the data, you can select records one at a time, or for example 10 at a time:
SELECT * FROM table WHERE id = :id
Or:
SELECT * FROM table WHERE id IN (:id1, :id2, :id3)
This should be fast if the id column has an index, and it will give you a proper random distribution.
If a prepared statement can be used, then this should work:
SELECT @skip := FLOOR(RAND() * COUNT(*)) FROM Table WHERE FOREIGNKEY_ID IS NULL;
PREPARE STMT FROM 'SELECT * FROM Table WHERE FOREIGNKEY_ID IS NULL LIMIT ?, 1';
EXECUTE STMT USING @skip;
LIMIT in a SELECT statement can be used to skip rows.
We have a table of 50k items and we display it on a search page with a random sort and 10 items per page. We also need to apply some filters.
RAND() with or without a seed is very slow. Note that items have three categories. The first category should be displayed first in random order, then the second category, also in random order.
Generating a random number between 0 and max_id is not working because of pagination and the previously mentioned constraints.
Randomizing the records with PHP makes items always appear on the same page.
Is there a better solution to speed up this random search?
Here are a few tips; I hope they help:
Put indexes on the main fields you are filtering on.
Reduce the number of columns in your SELECT query (only select the columns you need).
Recheck your joins.
Recheck your conditions.
Recheck your GROUP BY / HAVING / ORDER BY clauses.
Tip: Don't seed your RAND() call unless you're trying to test with a reproducible sequence of items.
This is tricky to do nearly perfectly without a lot of programming. In the meantime here are a couple of things to do.
First, try this. Instead of doing SELECT * FROM t ORDER BY RAND() LIMIT 10, use the following kind of subquery (the UNION branches need their own parentheses, and the extra derived table works around MySQL's restriction on using LIMIT inside an IN subquery):
SELECT * FROM t
WHERE id IN (
    SELECT id FROM (
        (SELECT id FROM t WHERE category = 1 ORDER BY RAND() LIMIT 10)
        UNION ALL
        (SELECT id FROM t WHERE category = 2 ORDER BY RAND() LIMIT 10)
    ) AS picked
)
ORDER BY RAND()
This should save some time on the ORDER BY RAND() LIMIT 10 operation because it only has to shuffle the id values, not the whole record. But it's not an algorithmic change, just a volume-of-data change: it still has to shuffle the whole list of id values. So it's a quick patch, not a real fix.
Second, if you can write a PHP function that will generate a text string with, let's say, 100 random numbers between 1 and max_id, you could try this to get your first category.
SELECT * FROM t WHERE id IN
( SELECT id FROM
  ( SELECT DISTINCT id FROM t
    WHERE category = 1 AND id IN (num, num, num, ..., num, num)
    LIMIT 10 ) AS picked
)
ORDER BY RAND()
This will give you ten, or fewer, randomly chosen records in the named category, pretty cheaply. Notice that you must provide many more than ten random numbers in your (num, num, num, num) list because not all the num values will be valid for rows with category = 1.
If you need more than one category, just use a similar query in a UNION to get the other category.
Both these approaches' performance will be improved by a compound index on (category, id).
Notice there's an extra ORDER BY RAND() at the end of each of those approaches' queries. That's because the lists of id values generated by the subqueries are likely to be in a non-random order.
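For reference, the compound index mentioned above could be added like this (a sketch; the index name is illustrative, and t is the table from the queries):
CREATE INDEX idx_category_id ON t (category, id);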
I have been looking on the web for how to select a random row from a big table. I have found various results, but then I analyzed my data and figured out that the best way for me is to count the rows and select a random one of those with LIMIT.
While testing, I started to wonder why this works:
SET @t = CEIL(RAND()*(SELECT MAX(id) FROM logo));
SELECT id
FROM logo
WHERE
current_status_id=29 AND
logo_type_id=4 AND
active='y' AND
id>=@t
ORDER BY id
LIMIT 1;
and gives random results, but why does this always return the same 4 or 5 results?
SELECT id
FROM logo
WHERE
current_status_id=29 AND
logo_type_id=4 AND
active='y' AND
id>=CEIL(RAND()*(SELECT MAX(id) FROM logo))
ORDER BY id
LIMIT 1;
The table has MANY fields (almost 100) and quite a few indexes, with over 14 million records and counting. When I select a random row, I almost never select from the whole table; I always have to filter on various field values (all indexed).
Could it be a bug of my MySQL server version (5.6.13-log Source distribution)?
One possibility is that this statement in the documentation:
RAND() in a WHERE clause is re-evaluated every time the WHERE is executed.
is simply not always true. It is true when you do:
where rand() < 0.01
to get an approximate 1% sample of the rows. Perhaps the MySQL optimizer says something like "Oh, I'll evaluate the subquery to get one value back. And, just to be more efficient, I'll multiply that value by rand() once, before defining the constant."
If I had to guess, that would be the case.
Another possibility is that the data is arranged so that the values you are looking for have one row with a large id. Or, it could be that there are lots of rows with small ids at the very beginning, and then a very large gap.
Your method of getting a random row, by the way, is not guaranteed to return a result when you are doing filtering. I don't know if that is important to you.
EDIT:
Check to see if this version works as you expect:
SELECT id
FROM logo cross join
(SELECT MAX(id) as maxid FROM logo) c
WHERE current_status_id = 29 AND
logo_type_id = 4 AND
active = 'y' AND
id >= RAND() * maxid
ORDER BY id
LIMIT 1;
If so, the problem is that the max id is being calculated and then multiplied by rand() just once, as execution of the query begins.
I have a query that looks like this:
SELECT article FROM table1 ORDER BY publish_date LIMIT 20
How does ORDER BY work? Will it order all records, then get the first 20, or will it get 20 records and order them by the publish_date field?
If it's the latter, you're not guaranteed to really get the 20 most recent articles.
It will order first, then get the first 20. A database will also process anything in the WHERE clause before ORDER BY.
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants (except when using prepared statements).
With two arguments, the first argument specifies the offset of the first row to return, and the second specifies the maximum number of rows to return. The offset of the initial row is 0 (not 1):
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
In other words, LIMIT row_count is equivalent to LIMIT 0, row_count.
All details on: http://dev.mysql.com/doc/refman/5.0/en/select.html
Just as @James says, it will order all records, then get the first 20 rows.
Since that is the case, you are guaranteed to get the 20 earliest-published articles; the newer ones will not be shown.
In your situation, I recommend that you add DESC to ORDER BY publish_date if you want the newest articles; then the newest article will be first.
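Applied to the query from the question, that looks like this:
SELECT article FROM table1 ORDER BY publish_date DESC LIMIT 20;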
If you need to keep the result in ascending order, and still only want the 10 newest articles, you can ask MySQL to sort your result twice.
The query below sorts the result descending and limits it to 10 rows (that is the query inside the parentheses). It is still in descending order, which is not what we want, so we ask MySQL to sort it one more time. Now we have the newest result on the last row.
select t.article
from
(select article, publish_date
from table1
order by publish_date desc limit 10) t
order by t.publish_date asc;
If you need all columns, it is done this way:
select t.*
from
(select *
from table1
order by publish_date desc limit 10) t
order by t.publish_date asc;
I use this technique when I manually write queries to examine the database for various things. I have not used it in a production environment, but now that I have benchmarked it, the extra sorting does not impact the performance.
You can add ASC or DESC at the end of the ORDER BY to get the earliest or latest records.
For example, this will give you the latest records first:
ORDER BY stamp DESC
Append the LIMIT clause after ORDER BY
If there is a suitable index, in this case on the publish_date field, then MySQL need not scan the whole index to get the 20 records requested - the 20 records will be found at the start of the index. But if there is no suitable index, then a full scan of the table will be needed.
There is a MySQL Performance Blog article from 2009 on this.
You can use this query:
SELECT article FROM table1 ORDER BY publish_date LIMIT 0,10
where 0 is the starting offset and 10 is the number of records to return.
LIMIT is usually applied as the last operation, so the result will first be sorted and then limited to 20. In fact, sorting will stop as soon as the first 20 sorted results are found.
It could be simplified to this (FETCH FIRST is standard SQL syntax; in MySQL you would keep using LIMIT 20):
SELECT article FROM table1 ORDER BY publish_date DESC FETCH FIRST 20 ROWS ONLY;
You can also pass multiple comma-separated columns to ORDER BY, like: ORDER BY publish_date, tab2, tab3 DESC, etc.