The first time I run this SQL it takes 39 seconds; when I run it again and add SQL_NO_CACHE, it does not seem to take effect:
mysql> select count(*) from `deal_expired` where `site`=8&&`area`=122 && endtime<1310444996056;
+----------+
| count(*) |
+----------+
| 497 |
+----------+
1 row in set (39.55 sec)
mysql> select SQL_NO_CACHE count(*) from `deal_expired` where `site`=8&&`area`=122 && endtime<1310444996056;
+----------+
| count(*) |
+----------+
| 497 |
+----------+
1 row in set (0.16 sec)
I tried a variety of methods (including the ones described here), and even restarted the MySQL server and renamed the table, but I still cannot get this SQL to take 39 seconds again.
I switched to a different SQL statement and added SQL_NO_CACHE on the very first run; the problem is the same:
mysql> select SQL_NO_CACHE count(*) from `deal_expired` where `site`=25&&`area`=134 && endtime<1310483196227;
+----------+
| count(*) |
+----------+
| 315 |
+----------+
1 row in set (2.17 sec)
mysql> select SQL_NO_CACHE count(*) from `deal_expired` where `site`=25&&`area`=134 && endtime<1310483196227;
+----------+
| count(*) |
+----------+
| 315 |
+----------+
1 row in set (0.01 sec)
What is the reason?
How can I get the same run time for the same SQL?
I want to find a way to reproduce the 39-second run so that I can optimize this SQL.
BTW: RESET QUERY CACHE, FLUSH QUERY CACHE, FLUSH TABLES, and SET SESSION query_cache_type=OFF do not work.
The MySQL status shows the query cache is disabled:
mysql> SHOW STATUS LIKE "Qcache%";
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Qcache_free_blocks | 0 |
| Qcache_free_memory | 0 |
| Qcache_hits | 0 |
| Qcache_inserts | 0 |
| Qcache_lowmem_prunes | 0 |
| Qcache_not_cached | 0 |
| Qcache_queries_in_cache | 0 |
| Qcache_total_blocks | 0 |
+-------------------------+-------+
8 rows in set (0.04 sec)
mysql> select count(*) from `deal_expired` where `site`=25&&`area`=134 && endtime<1310483196227;
+----------+
| count(*) |
+----------+
| 315 |
+----------+
1 row in set (0.01 sec)
mysql> SHOW STATUS LIKE "Qcache%";
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Qcache_free_blocks | 0 |
| Qcache_free_memory | 0 |
| Qcache_hits | 0 |
| Qcache_inserts | 0 |
| Qcache_lowmem_prunes | 0 |
| Qcache_not_cached | 0 |
| Qcache_queries_in_cache | 0 |
| Qcache_total_blocks | 0 |
+-------------------------+-------+
8 rows in set (0.00 sec)
EXPLAIN shows this SQL uses the site+endtime composite index (named site_endtime):
mysql> explain select count(*) from `deal_expired` where `site`=8&&`area`=122 && endtime<1310444996056;
+--------------+------+-------------------------------+--------------+---------+-------+------+-------------+
| table        | type | possible_keys                 | key          | key_len | ref   | rows | Extra       |
+--------------+------+-------------------------------+--------------+---------+-------+------+-------------+
| deal_expired | ref  | name,url,endtime,site_endtime | site_endtime | 4       | const |  353 | Using where |
+--------------+------+-------------------------------+--------------+---------+-------+------+-------------+
1 row in set (0.00 sec)
The first query should use SQL_NO_CACHE to tell MySQL not to put the result into the cache. The second query uses the cache and then tells MySQL not to cache its own result, which does nothing.
tl;dr - Reverse your queries.
The answer to "How can I get the same SQL run-time?" is - you cannot.
If your query reads some rows, they are cached. Depending on the storage engine in use, those rows end up either in the OS cache (MyISAM) or in the buffer pool (InnoDB). If the rows are cached, running the same query a second time is much faster, because MySQL does not have to read from disk.
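One way to tell which of those is happening on a given run (a minimal sketch, not from the original answer, assuming deal_expired is an InnoDB table; for MyISAM the data sits in the OS file cache, which these counters do not track): compare the buffer pool counters before and after the query. Innodb_buffer_pool_reads counts pages that had to be fetched from disk, while Innodb_buffer_pool_read_requests counts logical reads served from memory.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';   -- note the starting values
SELECT SQL_NO_CACHE COUNT(*) FROM `deal_expired`
WHERE `site`=8 && `area`=122 && endtime<1310444996056;
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';   -- on a warm run, Innodb_buffer_pool_reads barely moves
On a cold run the disk-read counter jumps, which is presumably where the 39 seconds went.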
I was under the impression that including any SQL function that is evaluated at run time would prevent the result from being cached. Have you tried something like the following?
select count(*), now() from `deal_expired` where `site`=8&&`area`=122 && endtime<1310444996056;
see: http://forums.mysql.com/read.php?24,225286,225468#msg-225468
You could try RESET QUERY CACHE (you need the RELOAD privilege), although having just read the link above, this will probably not work either :(
Here is my table:
Clustering_key (primary key, auto-increment)
ID (indexed column)
Data (TEXT column)
Position (indexed column; maintains the order of Data)
My table has 90,000 rows with the same ID equal to 5. I want the first 3 rows with ID equal to 5, and my query looks like this:
Select * from mytable where ID=5 Limit 3;
The ID column is indexed, so I expected MySQL to scan only the first 3 rows, but MySQL scans around 42,000 rows.
Here is the EXPLAIN of the query:
Is there any way to avoid scanning all the rows?
Please give me a solution.
Thanks in advance
I simulated the scenario.
Created the table using
CREATE TABLE mytable (
Clustering_key INT NOT NULL AUTO_INCREMENT,
ID INT NOT NULL,
Data text NOT NULL,
Position INT NOT NULL,
PRIMARY KEY (Clustering_key),
KEY(ID),
KEY(Position)
)
Inserted data with
INSERT INTO mytable (ID,Data,Position) VALUES (5,CONCAT("Data-",5), 7);
INSERT INTO mytable (ID,Data,Position) VALUES (5,CONCAT("Data-",5), 26);
INSERT INTO mytable (ID,Data,Position) VALUES (5,CONCAT("Data-",51), 27);
INSERT INTO mytable (ID,Data,Position) VALUES (5,CONCAT("Data-",56), 28);
INSERT INTO mytable (ID,Data,Position) VALUES (5,CONCAT("Data-",57), 31);
Explain
mysql> explain Select * from mytable where ID=5 Limit 3
+----+-------------+---------+------------+------+---------------+------+---------+-------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+---------+------------+------+---------------+------+---------+-------+------+----------+-------+
| 1 | SIMPLE | mytable | NULL | ref | ID | ID | 4 | const | 5 | 100.00 | NULL |
+----+-------------+---------+------------+------+---------------+------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.00 sec)
Yes, the EXPLAIN shows the rows estimate as 5, not 3.
But that seems to be misleading.
The exact run-time Rows_examined value can be verified by enabling the slow log for all queries (setting long_query_time=0) with the following steps.
Note: set long_query_time=0 only on your own testing database, and you MUST reset the parameter back to its previous value after testing.
- set GLOBAL slow_query_log=1;
- set global long_query_time=0;
- set session long_query_time=0;
mysql> show variables like '%slow%';
+---------------------------+-------------------------------------------------+
| Variable_name | Value |
+---------------------------+-------------------------------------------------+
| log_slow_admin_statements | OFF |
| log_slow_slave_statements | OFF |
| slow_launch_time | 2 |
| slow_query_log | ON |
| slow_query_log_file | /usr/local/mysql/data/slow.log |
+---------------------------+-------------------------------------------------+
5 rows in set (0.10 sec)
mysql> select @@long_query_time;
+-------------------+
| @@long_query_time |
+-------------------+
|          0.000000 |
+-------------------+
Then, in the terminal, execute the query:
mysql> Select * from mytable where ID=5 Limit 3;
+----------------+----+---------+----------+
| Clustering_key | ID | Data | Position |
+----------------+----+---------+----------+
| 5 | 5 | Data-5 | 7 |
| 26293 | 5 | Data-5 | 26 |
| 26294 | 5 | Data-51 | 27 |
+----------------+----+---------+----------+
3 rows in set (0.00 sec)
mysql> Select * from mytable where ID=5 Limit 1;
Check the slow log by inspecting the slow_query_log_file printed above (/usr/local/mysql/data/slow.log).
You can find info like the following:
# Time: 2019-04-26T01:48:19.890846Z
# User@Host: root[root] @ localhost []  Id:  5124
# Query_time: 0.000575 Lock_time: 0.000146 Rows_sent: 3 Rows_examined: 3
SET timestamp=1556243299;
Select * from mytable where ID=5 Limit 3;
# Time: 2019-04-26T01:48:34.672888Z
# User@Host: root[root] @ localhost []  Id:  5124
# Query_time: 0.000182 Lock_time: 0.000074 Rows_sent: 1 Rows_examined: 1
SET timestamp=1556243314;
Select * from mytable where ID=5 Limit 1;
The run-time Rows_examined value is equal to the value of the LIMIT parameter.
The test is done on MySQL 5.7.18.
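As the note above says, once the check is done the parameters should be put back. A minimal sketch, assuming the previous values were the defaults (substitute whatever yours actually were):
SET GLOBAL long_query_time = 10;   -- default value; use your previous setting
SET SESSION long_query_time = 10;
SET GLOBAL slow_query_log = 0;     -- only if the slow log was off before the test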
----------------------------------Another way to verify----------------------------------
mysql> show status like '%Innodb_rows_read%';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| Innodb_rows_read | 13 |
+------------------+-------+
1 row in set (0.00 sec)
mysql> Select * from mytable where ID=5 Limit 1;
+----------------+----+--------+----------+
| Clustering_key | ID | Data | Position |
+----------------+----+--------+----------+
| 5 | 5 | Data-5 | 7 |
+----------------+----+--------+----------+
1 row in set (0.00 sec)
mysql> show status like '%Innodb_rows_read%';
+------------------+-------+
| Variable_name | Value |
+------------------+-------+
| Innodb_rows_read | 14 |
+------------------+-------+
1 row in set (0.00 sec)
You can see that Innodb_rows_read increased by just 1 for LIMIT 1.
If you run a query that does a full table scan, the value increases by the row count of the table.
mysql> select count(*) from mytable;
+----------+
| count(*) |
+----------+
| 126296 |
+----------+
1 row in set (0.05 sec)
mysql> show status like '%Innodb_rows_read%';
+------------------+--------+
| Variable_name | Value |
+------------------+--------+
| Innodb_rows_read | 505204 |
+------------------+--------+
1 row in set (0.00 sec)
mysql> Select * from mytable where Data="Data-5";
+----------------+----+--------+----------+
| Clustering_key | ID | Data | Position |
+----------------+----+--------+----------+
| 5 | 5 | Data-5 | 7 |
| 26293 | 5 | Data-5 | 26 |
| 26301 | 5 | Data-5 | 7 |
+----------------+----+--------+----------+
3 rows in set (0.09 sec)
mysql> show status like '%Innodb_rows_read%';
+------------------+--------+
| Variable_name | Value |
+------------------+--------+
| Innodb_rows_read | 631500 |
+------------------+--------+
1 row in set (0.00 sec)
Both ways confirm that, for a LIMIT query, EXPLAIN's rows figure can be misleading about the number of rows actually examined.
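For completeness, a third way to look at the real rows-examined figure (not used above; a sketch assuming MySQL 5.7+ with the performance_schema statement history enabled, which it is by default):
SELECT SQL_TEXT, ROWS_SENT, ROWS_EXAMINED
FROM performance_schema.events_statements_history
WHERE SQL_TEXT LIKE 'Select * from mytable where ID=5%'
ORDER BY TIMER_START DESC
LIMIT 5;
It reports the same per-statement ROWS_EXAMINED values that the slow log shows.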
I am a newbie to SQL Server. In MySQL, to find the counts of INSERT, UPDATE, DELETE, and SELECT queries, we have the queries below:
SELECT
mysql> SHOW GLOBAL STATUS like '%com_select%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Com_select    | 1     |
+---------------+-------+
1 row in set (0.00 sec)
INSERT
mysql> SHOW GLOBAL STATUS like '%com_insert%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Com_insert        | 0     |
| Com_insert_select | 0     |
+-------------------+-------+
2 rows in set (0.00 sec)
UPDATE
mysql> SHOW GLOBAL STATUS like '%com_update%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Com_update       | 0     |
| Com_update_multi | 0     |
+------------------+-------+
2 rows in set (0.00 sec)
DELETE
mysql> SHOW GLOBAL STATUS like '%com_delete%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Com_delete       | 0     |
| Com_delete_multi | 0     |
+------------------+-------+
2 rows in set (0.00 sec)
Similarly, are there any equivalent queries to find the counts of INSERT, UPDATE, DELETE, and SELECT queries in SQL Server?
Thanks !!!
Edit: The SQL Server DMV sys.dm_os_performance_counters provides Page reads/sec and Page writes/sec; likewise, are there any other counters that can provide insert, update, and delete counts?
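For reference, this is roughly how the counter mentioned in the edit can be read (a sketch, assuming permission to query the DMV; the per-second counters in this view are cumulative, so you sample twice and take the difference):
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page reads/sec', 'Page writes/sec');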
So, I'm trying to find joins that aren't properly using indexes, but the log is being filled with queries that appear to me to be using indexes.
I turned on slow_query_log, turned on log_queries_not_using_indexes, and set long_query_time to 10 seconds.
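For reference, that setup amounts to roughly the following (a sketch; these are the standard dynamic variable names, and the change is not persistent unless it also goes into my.cnf):
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 10;
SET GLOBAL log_queries_not_using_indexes = 1;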
The log starts flooding with lines like this...
# Query_time: 0.320889 Lock_time: 0.000030 Rows_sent: 0 Rows_examined: 338336
SET timestamp=1422564398;
select * from fversions where author=155669 order by entryID desc limit 40;
The query time is below 10 seconds, and from this explain, it seems to be using the primary key as the index.
Why is this query being logged? I can't see the problem queries to add indexes to them. Too much noise.
Thanks in advance!
PS: The answer to this question doesn't seem to apply, since I have a WHERE clause: MySQL why logged as slow query/log-queries-not-using-indexes when have indexes?
mysql> explain select * from fversions where author=155669 order by entryID desc limit 40;
+----+-------------+-----------+-------+---------------+---------+---------+------+------+-------------+
| id | select_type | table     | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+----+-------------+-----------+-------+---------------+---------+---------+------+------+-------------+
|  1 | SIMPLE      | fversions | index | NULL          | PRIMARY | 8       | NULL |   40 | Using where |
+----+-------------+-----------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)
mysql> show variables like 'slow_query_log';
+----------------+-------+
| Variable_name | Value |
+----------------+-------+
| slow_query_log | ON |
mysql> show variables like 'long_query_time';
+-----------------+-----------+
| Variable_name | Value |
+-----------------+-----------+
| long_query_time | 10.000000 |
mysql> show variables like 'log_queries_not_using_indexes';
+-------------------------------+-------+
| Variable_name | Value |
+-------------------------------+-------+
| log_queries_not_using_indexes | ON |
I'm testing the InfiniDB community edition to see if it suits our needs.
I imported about 10 million rows into a single table (loading the data was surprisingly fast), and I'm trying to run some queries on it, but these are the results (with non-cached queries, if query caching even exists in InfiniDB):
Query 1 (very fast):
select * from mytable limit 150000,1000
1000 rows in set (0.04 sec)
Query 2 (immediate):
select count(*) from mytable;
+----------+
| count(*) |
+----------+
| 9429378 |
+----------+
1 row in set (0.00 sec)
Ok it seems to be amazingly fast.. but:
Query 3:
select count(title) from mytable;
.. still going after several minutes
Query 4:
select id from mytable where id like '%ABCD%';
+------------+
| id |
+------------+
| ABCD |
+------------+
1 row in set (11 min 17.30 sec)
I must be doing something wrong; it's not possible that it performs this way on such simple queries. Any ideas?
That shouldn't be the case; there does appear to be something odd going on. See the quick test below.
What is your server configuration: memory/OS/CPU and platform (dedicated, virtual, cloud)?
Could I get the schema declaration and method to load the data?
Which version are you using? Version 4 community has significantly more features than prior versions, i.e. core syntax matches enterprise.
Cheers,
Jim T
mysql> insert into mytable select a, a from (select hex(rand() * 100000) a from lineitem limit 10000000) b;
Query OK, 10000000 rows affected (1 min 54.12 sec)
Records: 10000000 Duplicates: 0 Warnings: 0
mysql> desc mytable;
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id | varchar(32) | YES | | NULL | |
| title | varchar(32) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)
mysql> select * from mytable limit 150000,1000;
+-------+-------+
| id | title |
+-------+-------+
| E81 | E81 |
| 746A | 746A |
. . .
| DFC8 | DFC8 |
| 2C56 | 2C56 |
+-------+-------+
1000 rows in set (0.07 sec)
mysql> select count(*) from mytable;
+----------+
| count(*) |
+----------+
| 10000000 |
+----------+
1 row in set (0.06 sec)
mysql> select count(title) from mytable;
+--------------+
| count(title) |
+--------------+
| 10000000 |
+--------------+
1 row in set (0.09 sec)
mysql> select id from mytable where id like '%ABCD%' limit 1;
+------+
| id |
+------+
| ABCD |
+------+
1 row in set (0.03 sec)
For testing, I have been rebuilding the newly designed MySQL database almost every day recently; I also have a PHP application based on it. From my understanding, some system status values accumulate across every rebuild, such as:
mysql> show global status like '%tmp%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 14062 |
| Created_tmp_files | 437 |
| Created_tmp_tables | 20854 |
+-------------------------+-------+
3 rows in set (0.00 sec)
Created_tmp_disk_tables and Created_tmp_tables keep growing with every rebuild, and surely some other variables do the same thing. How can we safely reset them at every rebuild, so that we aren't misled by these values and can see the real numbers?
Please feel free to let me know if the question is not clear. Thanks.
After testing @dwjv's suggestion of doing FLUSH STATUS, I got:
mysql> flush status;
Query OK, 0 rows affected (0.02 sec)
mysql> show global status like '%tmp%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 14062 |
| Created_tmp_files | 0 |
| Created_tmp_tables | 20856 |
+-------------------------+-------+
3 rows in set (0.00 sec)
The variable Created_tmp_files was cleared, but the other two didn't change.
FLUSH STATUS will only reset the session status and some, but not all, of the global status variables.
mysql> show status like '%tmp%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0 |
| Created_tmp_files | 0 |
| Created_tmp_tables | 0 |
+-------------------------+-------+
3 rows in set (0.00 sec)
Then I followed @Yak's suggestion of 'service mysql restart', and got:
mysql> show global status like '%tmp%';
+-------------------------+-------+
| Variable_name | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 14062 |
| Created_tmp_files | 0 |
| Created_tmp_tables | 20857 |
+-------------------------+-------+
3 rows in set (0.00 sec)
Still same, no change.
All you need to do is restart the server.
FLUSH STATUS;
This will flush many of the global status variables.
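If the counters cannot be reset short of a restart, one workaround (not from the original answers; a sketch assuming MySQL 5.7+, where the counters are exposed in performance_schema.global_status, and a hypothetical snapshot table) is to snapshot the values before each rebuild and look at the difference afterwards:
-- Before the rebuild: snapshot the counters of interest (table name is hypothetical).
CREATE TABLE IF NOT EXISTS tmp_status_snapshot (
  variable_name  VARCHAR(64) PRIMARY KEY,
  variable_value VARCHAR(1024)
);
REPLACE INTO tmp_status_snapshot
  SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM performance_schema.global_status
  WHERE VARIABLE_NAME LIKE 'Created_tmp%';
-- ... run the rebuild ...
-- After the rebuild: the delta is the "real" value for this rebuild.
SELECT g.VARIABLE_NAME,
       CAST(g.VARIABLE_VALUE AS UNSIGNED) - CAST(s.variable_value AS UNSIGNED) AS delta
FROM performance_schema.global_status AS g
JOIN tmp_status_snapshot AS s ON s.variable_name = g.VARIABLE_NAME;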