I apologize in advance if this question is too specific, but I think it describes a fairly typical scenario: joins and GROUP BYs bogging down the database, and the best way to work around them. My specific problem is that I need to create a scoreboard based on:
plays (userid,gameid,score) 40M rows
games (gameid) 100K rows
app_games (appid,gameid) <20 rows; the games are grouped into apps, and there's a total score for each app, which is the sum over all of its associated games
Users can play multiple times, and their best score for each game is recorded. Formulating the query is easy; I've tried several variations, but under load they have a nasty tendency to get stuck in the "Copying to tmp table" state for 30-60 seconds.
What can I do? Are there server variables I should be tweaking, or is there a way to reformulate the query to make it faster? The derived-table version of the query I'm using is as follows (minus a join to a user table to grab the name):
select userID, sum(score) as cumscore from
  (select userID, gameID, max(p.score) as score
   from play p join app_game ag using (gameID)
   where ag.appID = 1 and p.score > 0
   group by userID, gameID) app_stats
group by userID order by cumscore desc limit 0,20;
Or as a temp table:
drop table if exists app_stats;
create temporary table app_stats
select userID, gameID, max(p.score) as score
from play p join app_game ag using (gameID)
where ag.appID = 1 and p.score > 0
group by userID, gameID;

select userID, sum(score) as cumscore from app_stats
group by userID order by cumscore desc limit 0,20;
I have indexes as follows:
show indexes from play;
+-------+------------+----------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
| play | 0 | PRIMARY | 1 | playID | A | 38353712 | NULL | NULL | | BTREE | |
| play | 0 | uk_play_uniqueID | 1 | uniqueID | A | 38353712 | NULL | NULL | YES | BTREE | |
| play | 1 | play_score_added | 1 | dateTimeFinished | A | 19176856 | NULL | NULL | YES | BTREE | |
| play | 1 | play_score_added | 2 | score | A | 19176856 | NULL | NULL | | BTREE | |
| play | 1 | fk_playData_game | 1 | gameID | A | 76098 | NULL | NULL | | BTREE | |
| play | 1 | user_hiscore | 1 | userID | A | 650062 | NULL | NULL | YES | BTREE | |
| play | 1 | user_hiscore | 2 | score | A | 2397107 | NULL | NULL | | BTREE | |
+-------+------------+----------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
I suspect that creating the temp table basically has to scan all the data in the table (and likewise for your do-everything-at-once query). With that much data, that's simply going to take a while.
I'd maintain a separate table with the ID and total score for each player. Whenever you update the play table, also update the summary table. If they get out of sync, just truncate the summary table and re-create the data from the play table. (Or, if you already use Redis in your infrastructure, you could maintain the summary there; it has commands that make this particular thing really fast.)
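A minimal sketch of that summary table (the table name, index name, and the delta value are assumptions, not part of the original schema):

CREATE TABLE app_score_summary (
  appID    INT NOT NULL,
  userID   INT NOT NULL,
  cumscore BIGINT NOT NULL DEFAULT 0,
  PRIMARY KEY (appID, userID),
  KEY idx_app_cumscore (appID, cumscore)
);

-- When a play improves a user's best score for a game, fold the
-- improvement into the running total (50 stands in for the delta
-- your application computes):
INSERT INTO app_score_summary (appID, userID, cumscore)
VALUES (1, 42, 50)
ON DUPLICATE KEY UPDATE cumscore = cumscore + VALUES(cumscore);

-- The scoreboard then becomes a single index-backed read:
SELECT userID, cumscore
FROM app_score_summary
WHERE appID = 1
ORDER BY cumscore DESC
LIMIT 20;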
Instead of making temporary tables, try creating a view. You can query it just like a normal table, and it always reflects the current data in the underlying tables, which is far faster than dropping and re-creating the temp table every time.
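As a sketch, the temporary-table version above could become:

CREATE VIEW app_stats_view AS
SELECT userID, gameID, MAX(p.score) AS score
FROM play p
JOIN app_game ag USING (gameID)
WHERE ag.appID = 1 AND p.score > 0
GROUP BY userID, gameID;

SELECT userID, SUM(score) AS cumscore
FROM app_stats_view
GROUP BY userID
ORDER BY cumscore DESC
LIMIT 0, 20;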
I have thousands of records in my MySQL database, and I use pagination to retrieve just 10 results.
When I add an ORDER BY to my query it slows down, but when I omit it the query runs very fast.
I know the problem comes from the query loading the whole result set, sorting it, and only then taking the 10 records.
I didn't add an index because the column used for ordering is a PK, and if I'm not wrong, MySQL creates an index automatically on every primary key.
Why is the index on my PK, which is the column I'm ordering by, not used?
Is there any alternative solution that performs the sort without loading all the data?
And how can I make newly inserted data appear in the first row of the table rather than at the end?
My SQL query:
select distinct ...... order by appeloffre0_.ID_APPEL_OFFRE desc limit 10
and my indexes:
mysql> show index from appel_offre;
+-------------+------------+--------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------------+------------+--------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| appel_offre | 0 | PRIMARY | 1 | ID_APPEL_OFFRE | A | 13691 | NULL | NULL | | BTREE | | |
| appel_offre | 1 | appel_offre_ibfk_1 | 1 | ID_APPEL_OFFRE_MERE | A | 2 | NULL | NULL | YES | BTREE | | |
| appel_offre | 1 | appel_offre_ibfk_2 | 1 | ID_ACHETEUR | A | 2 | NULL | NULL | | BTREE | | |
| appel_offre | 1 | appel_offre_ibfk_3 | 1 | USER_SAISIE | A | 2 | NULL | NULL | YES | BTREE | | |
| appel_offre | 1 | appel_offre_ibfk_4 | 1 | USER_VALIDATION | A | 4 | NULL | NULL | YES | BTREE | | |
| appel_offre | 1 | ao_fk_3 | 1 | TYPE_MARCHE | A | 2 | NULL | NULL | YES | BTREE | | |
| appel_offre | 1 | ao_fk_5 | 1 | USER_CONTROLE | A | 2 | NULL | NULL | YES | BTREE | | |
+-------------+------------+--------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
7 rows in set (0.03 sec)
No index was chosen, according to the EXPLAIN output:
+----+-------------+---------------+--------+-------------------------------------+--------------------+---------+----------------
| id | select_type | table | type | possible_keys | key | key_len | ref
+----+-------------+---------------+--------+-------------------------------------+--------------------+---------+----------------
| 1 | SIMPLE | appeloffre0_ | ALL | NULL | NULL | NULL | NULL
UPDATE / SOLUTION
The problem was the DISTINCT: when I removed it, the query finally used the index.
Because you already use an index on "USER_VALIDATION", MySQL won't use the ID index instead.
Try rebuilding the USER_VALIDATION index to include the primary key column too (note the PK column is ID_APPEL_OFFRE, and the old index has to be dropped before its name can be reused):

ALTER TABLE appel_offre
  DROP INDEX appel_offre_ibfk_4,
  ADD UNIQUE INDEX appel_offre_ibfk_4 (USER_VALIDATION, ID_APPEL_OFFRE);
Update
Log all Hibernate queries, extract the slow one, and run EXPLAIN in a DB console to understand which execution plan MySQL selects for it. The DB may choose a FULL TABLE SCAN even when you have an index, for example when the index is too large to fit into memory. Try giving it a HINT as explained in this post.
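A sketch of such a hint, pinning the optimizer to the primary key (the SELECT list is elided in the question, so the column list here is an assumption):

SELECT appeloffre0_.*
FROM appel_offre appeloffre0_ FORCE INDEX (PRIMARY)
ORDER BY appeloffre0_.ID_APPEL_OFFRE DESC
LIMIT 10;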
According to the MySQL ORDER BY optimization documentation:
To increase ORDER BY speed, check whether you can get MySQL to use indexes rather than an extra sorting phase. If this is not possible, you can try the following strategies:
• Increase the sort_buffer_size variable value.
• Increase the read_rnd_buffer_size variable value.
• Use less RAM per row by declaring columns only as large as they need to be to hold the values stored in them. For example, CHAR(16) is better than CHAR(200) if values never exceed 16 characters.
• Change the tmpdir system variable to point to a dedicated file system with large amounts of free space. The variable value can list several paths that are used in round-robin fashion; you can use this feature to spread the load across several directories. Paths should be separated by colon characters (“:”) on Unix and semicolon characters (“;”) on Windows, NetWare, and OS/2. The paths should name directories in file systems located on different physical disks, not different partitions on the same disk.
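For instance, the two buffer variables from the first bullets can be raised for the current session before running the query (the values here are illustrative, not recommendations):

SET SESSION sort_buffer_size     = 8 * 1024 * 1024;  -- 8 MB for sorting
SET SESSION read_rnd_buffer_size = 4 * 1024 * 1024;  -- 4 MB for random row reads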
Also make sure DISTINCT doesn't overrule your index. Try removing it and see if it helps.
Add an index to the column by which you are ordering.
You can't add rows to the beginning of a table, just as you can't add rows to the end of one. Database tables are multisets, and multisets are by definition unordered collections; the notion of a first or last element makes no sense for them.
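Any ordering has to be imposed at read time instead; for example, to see the newest rows first (assuming the auto-increment PK reflects insertion order):

SELECT *
FROM appel_offre
ORDER BY ID_APPEL_OFFRE DESC
LIMIT 10;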
Is there any way to get better performance out of this?
select * from p_all where sec='0P00009S33' order by date desc
Query took 0.1578 sec.
The table structure is shown below. There are more than 100 million records in this table.
+------------------+---------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------+---------------+------+-----+---------+-------+
| sec | varchar(10) | NO | PRI | NULL | |
| date | date | NO | PRI | NULL | |
| open | decimal(13,3) | NO | | NULL | |
| high | decimal(13,3) | NO | | NULL | |
| low | decimal(13,3) | NO | | NULL | |
| close | decimal(13,3) | NO | | NULL | |
| volume | decimal(13,3) | NO | | NULL | |
| unadjusted_close | decimal(13,3) | NO | | NULL | |
+------------------+---------------+------+-----+---------+-------+
EXPLAIN result
+----+-------------+-----------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | price_all | ref | PRIMARY | PRIMARY | 12 | const | 1731 | Using where |
+----+-------------+-----------+------+---------------+---------+---------+-------+------+-------------+
How can I speed up this query?
In your example, you do a SELECT *, but you only have an INDEX that contains the columns sec and date.
As a result, MySQL's execution plan looks roughly like the following:
1. Find all rows that have sec = '0P00009S33' in the INDEX. This is fast.
2. Sort the matching rows by date. This is also possibly fast, depending on the size of your sort buffer; there may be room for improvement in tuning sort_buffer_size.
3. Fetch all columns (i.e., the full row) for each matching row found via the INDEX. This is slow!
You can optimize this drastically by reducing the SELECTed fields to the minimum. Example: if you only need the open price, do a SELECT sec, date, open instead of SELECT *.
Once you have identified the minimum set of columns you need, add a combined INDEX that contains exactly these columns (every column involved in the WHERE, SELECT, or ORDER BY clause).
This way you can completely skip the slow part of this query, step 3 above. When the INDEX already contains all the necessary columns, MySQL's optimizer can avoid looking up the full rows and serve your query directly from the INDEX.
Disclaimer: I'm unsure of the order in which MySQL executes these steps; possibly I have 2 and 3 the wrong way round, but that doesn't matter for answering this question.
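A concrete sketch of that advice, assuming only the close price is needed (the index name and column choice are assumptions):

ALTER TABLE p_all
  ADD INDEX idx_sec_date_close (sec, date, close);

-- Served entirely from the index: the WHERE matches the sec prefix,
-- the ORDER BY reads the date part in reverse, and close is included.
SELECT sec, date, close
FROM p_all
WHERE sec = '0P00009S33'
ORDER BY date DESC;

With such a covering index, EXPLAIN should show "Using index" in the Extra column.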
I'm working on how to implement a leaderboard. What I'd like is to be able to sort the table by several different filters (score, number of submissions, average). The table might look like this:
+--------+-----------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------+-----------------------+------+-----+---------+-------+
| userID | mediumint(8) unsigned | NO | PRI | 0 | |
| score | int | YES | MUL | NULL | |
| numSub | int | YES | MUL | NULL | |
+--------+-----------------------+------+-----+---------+-------+
And a sample set of data like so:
+--------+----------+--------+
| userID | score | numSub |
+--------+----------+--------+
| 505610 | 1245 | 2 |
| 544222 | 1458 | 2 |
| 547278 | 245 | 1 |
| 659241 | 12487 | 8 |
| 681087 | 5487 | 3 |
+--------+----------+--------+
My queries will be coming from PHP.
// get the top 100 scores
$q = "select userID, score from table order by score desc limit 0, 100";
This returns a set of userID/score pairs sorted with the highest score first.
I also have a query to sort by numSub (number of submissions)
What I would like is to sort the table by average score, that being score/numSub. The table could be large, so efficiency is important to me.
Thanks in advance!
If efficiency is important, then add a column avgscore and assign it the value of score/numsub. Then, create an index on the column.
You can use an insert/update trigger to do the average calculation automatically when a row is added or modified.
Once your table gets large, the sort is going to take a noticeable amount of time.
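A minimal sketch of that approach, assuming the table is called leaderboard (the original post doesn't name it):

-- Precompute the average and index it:
ALTER TABLE leaderboard
  ADD COLUMN avgscore FLOAT,
  ADD INDEX idx_avgscore (avgscore);

-- Keep it current on insert and update (NULLIF guards against numSub = 0):
CREATE TRIGGER leaderboard_avg_ins BEFORE INSERT ON leaderboard
FOR EACH ROW SET NEW.avgscore = NEW.score / NULLIF(NEW.numSub, 0);

CREATE TRIGGER leaderboard_avg_upd BEFORE UPDATE ON leaderboard
FOR EACH ROW SET NEW.avgscore = NEW.score / NULLIF(NEW.numSub, 0);

-- Sorting by average is then an index-backed read:
SELECT userID, avgscore
FROM leaderboard
ORDER BY avgscore DESC
LIMIT 0, 100;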
As far as I can see, there's no reason to make it more complicated than this:
SELECT userID, score/numsub AS average_score
FROM Table1
ORDER BY score/numsub DESC;
On the web page I'm working on, I need to show some statistics based on user details that are spread across three tables. So I have the following query, which I then join to more tables:
SELECT *
FROM `user` `u`
LEFT JOIN `subscriptions` `s` ON `u`.`user_id` = `s`.`user_id`
LEFT JOIN `devices` `ud` ON `u`.`user_id` = `ud`.`user_id`
GROUP BY `u`.`user_id`
When I execute the query with LIMIT 1000 it takes about 0.05 seconds. Since I'm using the data from all three tables in a lot of queries, I decided to put it inside a VIEW:
CREATE VIEW `user_details` AS ( the same query from above )
And now when I run:
SELECT * FROM user_details LIMIT 1000
it takes about 7-10 seconds.
So my question is: can I do something to optimize the view, given that the underlying query seems pretty quick, or should I use the whole query instead of the view?
Edit: this is what EXPLAIN SELECT * FROM user_details returns:
+----+-------------+------------+--------+----------------+----------------+---------+------------------------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+----------------+----------------+---------+------------------------+--------+-------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 322666 | |
| 2 | DERIVED | u | index | NULL | PRIMARY | 4 | NULL | 372587 | |
| 2 | DERIVED | s | eq_ref | PRIMARY | PRIMARY | 4 | db_users.u.user_id | 1 | |
| 2 | DERIVED | ud | ref | device_id_name | device_id_name | 4 | db_users.u.user_id | 1 | |
+----+-------------+------------+--------+----------------+----------------+---------+------------------------+--------+-------+
4 rows in set (8.67 sec)
and this is what EXPLAIN returns for the query itself:
+----+-------------+-------+--------+----------------+----------------+---------+------------------------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+----------------+----------------+---------+------------------------+--------+-------+
| 1 | SIMPLE | u | index | NULL | PRIMARY | 4 | NULL | 372587 | |
| 1 | SIMPLE | s | eq_ref | PRIMARY | PRIMARY | 4 | db_users.u.user_id | 1 | |
| 1 | SIMPLE | ud | ref | device_id_name | device_id_name | 4 | db_users.u.user_id | 1 | |
+----+-------------+-------+--------+----------------+----------------+---------+------------------------+--------+-------+
3 rows in set (0.00 sec)
Views and joins are extremely bad when it comes to performance. This is more or less true for all relational database management systems. It sounds strange, since that is what those systems are designed for, but it is true nevertheless.
Try to avoid the joins if this query is in heavy use on your page: instead, create a real table (not a view) that is filled from the three tables. You can automate that process using triggers, so that each time a row is inserted into one of the original tables, the trigger propagates the data to the physical user_details table.
This strategy certainly means a one-time setup investment, but you will definitely get much better read performance.
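A rough sketch of that setup (the flattened table's name is a placeholder; carry over whichever subscription and device columns the page actually reads):

CREATE TABLE user_details_flat (
  user_id INT NOT NULL PRIMARY KEY
  -- plus the subscription/device columns you need
);

-- One-time backfill from the existing three tables:
INSERT INTO user_details_flat (user_id)
SELECT u.user_id
FROM `user` u
LEFT JOIN subscriptions s ON u.user_id = s.user_id
LEFT JOIN devices ud ON u.user_id = ud.user_id
GROUP BY u.user_id;

-- Keep it in sync, e.g. when a subscription row is added:
CREATE TRIGGER subscriptions_sync AFTER INSERT ON subscriptions
FOR EACH ROW
  INSERT INTO user_details_flat (user_id)
  VALUES (NEW.user_id)
  ON DUPLICATE KEY UPDATE user_id = NEW.user_id;  -- refresh the copied columns here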
The goal of my query is to get all rows from the pool table where gender = 'f' and the username does not exist in the sent table for campid = 'xxxx'. Here is the query I am using, which returns the correct results:
SELECT `id`
FROM pool
LEFT JOIN sent
ON pool.username = sent.username
AND sent.campid = 'YA1LGfh9'
WHERE sent.username IS NULL
AND pool.gender = 'f'
The problem is that the query takes over 9 minutes to complete. The pool table contains over 10 million rows, and the sent table will eventually grow even larger. I have created indexes on many of the columns, including username and gender, yet MySQL refuses to use any of them for this query; I even tried FORCE INDEX. Here are my indexes from pool and the output of EXPLAIN for my query:
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| pool | 0 | PRIMARY | 1 | id | A | 9326880 | NULL | NULL | | BTREE | |
| pool | 1 | username | 1 | username | A | 9326880 | NULL | NULL | | BTREE | |
| pool | 1 | source | 1 | source | A | 6 | NULL | NULL | | BTREE | |
| pool | 1 | gender | 1 | gender | A | 9 | NULL | NULL | | BTREE | |
| pool | 1 | location | 1 | location | A | 59030 | NULL | NULL | | BTREE | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
6 rows in set (0.00 sec)
mysql> explain SELECT `id` FROM pool FORCE INDEX (username) LEFT JOIN sent ON pool.username = sent.username AND sent.campid = 'YA1LGfh9' WHERE sent.username IS NULL AND pool.gender = 'f';
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------------------+
| 1 | SIMPLE | pool | ALL | NULL | NULL | NULL | NULL | 9326881 | Using where |
| 1 | SIMPLE | sent | ALL | NULL | NULL | NULL | NULL | 351 | Using where; Not exists |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------------------+
2 rows in set (0.00 sec)
Also, here are my indexes for the sent table:
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| sent | 0 | PRIMARY | 1 | primary_key | A | 351 | NULL | NULL | | BTREE | |
| sent | 1 | username | 1 | username | A | 351 | NULL | NULL | | BTREE | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
2 rows in set (0.00 sec)
You can see that no indexes are being used, which is why my query takes far too long. If anyone has a solution that involves reworking the query, please show me an example using my data structure so that I won't have any confusion about how to implement and test it. Thank you.
First, your original query was correct in its placement of everything, including the camp condition. With a LEFT JOIN from pool to sent, pulling a required equality such as the camp into the WHERE clause (as previously suggested) effectively converts the join into an INNER JOIN, requiring a match on both sides. Leave it as you had it.
You already have an index on username on the sent table, but I would do the following.
Build an index on the sent table on (campid, username) as a composite (i.e., multi-column) index. This way the LEFT JOIN is optimized for BOTH conditions.
On your pool table, try a composite index on the 3 fields (gender, username, id); both indexes are sketched below.
By doing this, you avoid having to read through the actual data pages that make up your 10+ million records. Since the index contains the columns being compared, MySQL doesn't have to fetch the full row to examine them; it can use the index entries directly.
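As a sketch (the index names are mine):

CREATE INDEX idx_sent_campid_username ON sent (campid, username);
CREATE INDEX idx_pool_gender_username_id ON pool (gender, username, id);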
Also, for grins, I added the keyword STRAIGHT_JOIN, which tells MySQL to execute the query exactly as written and not try to reorder it. MANY times I've found this significantly improves query performance; only rarely have I had feedback that it did NOT help.
SELECT STRAIGHT_JOIN
p.id
FROM
pool p
LEFT JOIN sent s
ON s.campid = 'YA1LGfh9'
AND p.username = s.username
WHERE
p.gender = 'f'
AND s.username IS NULL
All that said, consider how many records out of the 10+ million you will still be returning: if pool has 10+ million rows and the single camp has only 5,000, you will still be returning almost the entire set.