Slow Query Time in MySQL

The following query is overloading my system, and the problem seems to be the RAND(). I have seen other posts dealing with similar issues, but can't quite get their solutions working here. The query runs against a 10M+ row table. I know the ORDER BY RAND() is the issue, but the workarounds I've read about seem to rely on the auto-increment column, and mine (items.ID) increments by 2, not 1.
SELECT stores.phone, stores.storeID, stores.name, stores.ZIP,
stores.state,stores.city, storeID, GEOCODES.lon, GEOCODES.lat
FROM items
LEFT JOIN stores on stores.storeID = items.store_ID
LEFT JOIN GEOCODES on GEOCODES.address = CONCAT(stores.address1,', ',stores.ZIP)
WHERE stores.phone IS NOT NULL
GROUP BY items.store_ID
ORDER BY RAND( )
LIMIT 200
The other post I was trying to follow was "How can I optimize MySQL's ORDER BY RAND() function?", but I can't figure out how to adapt it to this query. Please note that this is done in PHP.

If I were you, I would LIMIT first and then ORDER BY RAND() on the limited result; that way you aren't pulling everything out and randomizing it. I have used this exact method to speed up my queries dramatically:
SELECT *
FROM
( SELECT stores.phone, stores.storeID, stores.name, stores.ZIP,
stores.state,stores.city, storeID, GEOCODES.lon, GEOCODES.lat
FROM items
LEFT JOIN stores on stores.storeID = items.store_ID
LEFT JOIN GEOCODES on GEOCODES.address = CONCAT(stores.address1,', ',stores.ZIP)
WHERE stores.phone IS NOT NULL
GROUP BY items.store_ID
LIMIT 200
) t
ORDER BY RAND( )
Some proof:
CREATE TABLE digits AS (...)  -- a digits table with 1 million rows
1000000 row(s) affected Records: 1000000 Duplicates: 0 Warnings: 0
1.869 sec
SELECT * FROM digits ORDER BY RAND() LIMIT 200
200 row(s) returned
0.465 sec / 0.000 sec
SELECT * FROM (SELECT * FROM digits LIMIT 200)t ORDER BY RAND()
200 row(s) returned
0.000 sec / 0.000 sec

Using RAND() in your query has serious performance implications; avoiding it will speed up your query a lot.
Also, since you're using PHP, randomizing the order with shuffle() in PHP may be a significantly quicker alternative to doing it in MySQL.
See: http://php.net/manual/en/function.shuffle.php
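One way to read that suggestion, building on the LIMIT-first query above: drop ORDER BY RAND() from the SQL entirely and randomize the fetched rows in PHP instead (a sketch only; the $rows variable name is just illustrative):
SELECT stores.phone, stores.storeID, stores.name, stores.ZIP,
       stores.state, stores.city, GEOCODES.lon, GEOCODES.lat
FROM items
LEFT JOIN stores ON stores.storeID = items.store_ID
LEFT JOIN GEOCODES ON GEOCODES.address = CONCAT(stores.address1, ', ', stores.ZIP)
WHERE stores.phone IS NOT NULL
GROUP BY items.store_ID
LIMIT 200
-- fetch the 200 rows into PHP, then call shuffle($rows) before using them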

Related

The SQL query below is taking too much time. How can I make it faster?

SELECT
call_id
,call_date
,call_no
,call_amountdue
,rechargesamount
,call_penalty
,callpayment_received
,calldiscount
FROM `call`
WHERE calltype = 'Regular'
AND callcode = 98
AND call_connect = 1
AND call_date < '2018-01-01'
ORDER BY
`call_date` DESC
,`call_id` DESC
limit 1
Indexes already exist on call_date, callcode, calltype, and call_connect.
The table has 10 million records and the query takes about 2 minutes.
How can I get results within 3 seconds?
INDEX (calltype, callcode, call_connect, -- in any order
call_date, -- next
call_id) -- last
This will make it possible to find the one row that is desired without having to step over other rows.
Since you seem to have INDEX(calltype), drop it; it will get in the way and is redundant anyway. The rest of the indexes you mentioned will be ignored for this query.
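Spelled out as DDL, the suggested composite index might look like this (the index name is just illustrative):
ALTER TABLE `call`
    ADD INDEX calltype_code_connect_date_id (calltype, callcode, call_connect, call_date, call_id);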
More discussion in Index Cookbook

Query Limit from MYSQL

I have a database with about 500K rows in it. I want to get random rows from roughly the first 1 to 5000 rows, with the result limited to about 100 rows.
My current query is below:
'SELECT * FROM user where status='0' LIMIT 10,100'
What should I change or use to get a limited number of random rows, so I can get a fast result without heavy memory consumption?
Thanks
A database table is an unordered set, so you'll have to provide some ordering to get rows 1 to 5000 (otherwise those will just be any 5000 rows), perhaps based on the user ID.
Once you have that, you can limit the rows in a subquery, sort them by RAND(), and take the first 100 like this:
select *
from (select
*
from user
where status = 0
order by /* some column(s), e.g. user_id */
limit 5000 -- the first 5000 rows
) t order by rand() limit 100;
This query, by contrast, gives you 100 random rows drawn from all ~500K rows:
select * from user where status='0' order by rand() limit 100

Optimize query mysql search

I have the following SQL, but its execution is very slow, taking about 45 seconds. The table has 15 million records. How can I improve it?
SELECT A.*, B.ESPECIE
FROM
(
SELECT
A.CODIGO_DOCUMENTO,
A.DOC_SERIE,A.DATA_EMISSAO,
A.DOC_NUMERO,
A.CF_NOME,
A.CF_SRF,
A.TOTAL_DOCUMENTO,
A.DOC_MODELO
FROM MOVIMENTO A
WHERE
A.CODIGO_EMPRESA = 1
AND A.CODIGO_FILIAL = 5
AND A.DOC_TIPO_MOVIMENTO = 1
AND A.DOC_MODELO IN ('65','55')
AND (A.CF_NOME LIKE '%TEXT_SEARCH%'
OR A.CF_CODIGO LIKE 'TEXT_SEARCH%'
OR A.CF_SRF LIKE 'TEXT_SEARCH%'
OR A.DOC_SERIE LIKE 'TEXT_SEARCH%'
OR A.DOC_NUMERO LIKE 'TEXT_SEARCH%')
ORDER BY A.DATA_EMISSAO DESC , A.CODIGO_DOCUMENTO DESC
LIMIT 0, 100
) A
LEFT JOIN MODELODOCUMENTOFISCAL B ON A.DOC_MODELO = B.CODMODELO
For this query, I would start with an index on MOVIMENTO(CODIGO_EMPRESA, CODIGO_FILIAL, DOC_MODELO) and MODELODOCUMENTOFISCAL(CODMODELO).
That should speed up the query.
If it doesn't, you may need to consider a full-text search to handle the LIKE clauses. I do note that only one of the patterns has a wildcard at the beginning. Is that intentional?
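As a sketch of what those suggestions could look like (the index names are placeholders, and the FULLTEXT part assumes a MySQL version and storage engine that support full-text indexes on this table):
ALTER TABLE MOVIMENTO
    ADD INDEX idx_empresa_filial_modelo (CODIGO_EMPRESA, CODIGO_FILIAL, DOC_MODELO);
ALTER TABLE MODELODOCUMENTOFISCAL
    ADD INDEX idx_codmodelo (CODMODELO);
-- Optional full-text route for the leading-wildcard LIKE on CF_NOME:
ALTER TABLE MOVIMENTO ADD FULLTEXT INDEX ft_cf_nome (CF_NOME);
-- ...and in the WHERE clause, replace A.CF_NOME LIKE '%TEXT_SEARCH%' with:
--    MATCH(A.CF_NOME) AGAINST ('TEXT_SEARCH' IN BOOLEAN MODE)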

How to get next set of data on select top

I loaded the result set with the top 10 rows as follows:
SELECT * FROM Persons LIMIT 10;
Now how do I select the next 10 rows, the way you can page through Google search results?
Pardon me if the question sounds silly; I didn't find any relevant answer on Google.
You can give the LIMIT clause a starting point, like this:
SELECT * FROM Persons LIMIT 50, 10;
This means the offset is 50 (the first 50 rows are skipped) and the next 10 rows are selected. See also the manual: MySQL SELECT Syntax.
Use OFFSET together with LIMIT to do this:
SELECT * FROM Persons
ORDER BY somecol
OFFSET 10 LIMIT 10 -- skip 10 rows, return the next 10
For MySQL:
SELECT * FROM Persons
ORDER BY somecol
LIMIT 10 OFFSET 10 -- equivalent to LIMIT 10, 10
Note: make sure you add an ORDER BY to get meaningful, stable results.

Recordset Iterating

I want to iterate through records returned from a MySQL database using Perl, but only ten records at a time. The reason is that the server component can only handle 10 items per request.
For example:
If the query returned 35 records, then I would have to send the data in 4 requests:
Request #    # of Records
---------    ------------
        1              10
        2              10
        3              10
        4               5
What is the best way to accomplish the task?
Look at the LIMIT clause for MySQL. You could have a query like:
SELECT * from some_table LIMIT 0, 10;
SELECT * from some_table LIMIT 10, 10;
etc. where the first number after LIMIT is the offset, and the second number is the number of records.
You'd of course first need to do a query for the total count and figure out how many times you'll need to run your select query to get all results.
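For example, the count query would be something along these lines (using the same some_table as above):
SELECT COUNT(*) FROM some_table;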
Alternatively, in Perl you can use an ORM package like DBIx::Class, which can handle paginating through result sets and retrieving them for you automatically.
You can adjust the query to select 10 rows:
select *
from yourtable
order by idcolumn
limit 10;
When iterating over the rows, store the ID of the last row you process. After you've processed those 10 rows, fetch the next 10:
select *
from yourtable
where idcolumn > stored_id
order by idcolumn
limit 10;
Repeat the last query until it returns fewer than 10 rows.
For the first page:
SELECT *
FROM table1
ORDER BY
field
LIMIT 0, 10
For the second page:
SELECT *
FROM table1
ORDER BY
field
LIMIT 10, 10
etc.