I have an app that gets its data from a PHP backend API.
Now that more and more people are online and chatting, loading the messages takes a very long time (over 10 s per batch). I would like to know how I can reduce the message loading time.
The code is below. Even when I change the limit from 20 to 5, it shows 5 messages at a time but the loading is still just as slow.
Can you guys help me check this PHP code? Is there something I can change to fix this issue? Thanks.
$stmt = $this->db->prepare("SELECT * FROM messages
    WHERE chatId = :chatId
      AND id < :msgId
      AND removeAt = 0
    ORDER BY id DESC
    LIMIT 20");
// bind the placeholders and run the query (assuming PDO)
$stmt->execute([':chatId' => $chatId, ':msgId' => $msgId]);
$messages = $stmt->fetchAll(PDO::FETCH_ASSOC);
The full code seems too long and isn't allowed to be posted here... please check it at this link: paste.ofcode.org/i6rHbdVjhfscaMVVva9E9L
database table structure:
https://www.imageupload.net/image/msg.OIVDM
https://www.imageupload.net/image/database.OIgHh
You need to learn about indexes. If you set up an index on a column (or columns), MySQL can search and sort queries much faster. Try adding a few indexes.
I would start with the columns removeAt, chatId, fromUserId, toUserId. You can also experiment with composite indexes (removeAt + chatId <-- in this order, or fromUserId + toUserId).
After adding indexes, run the EXPLAIN command on your queries (e.g. EXPLAIN SELECT * FROM messages WHERE chatId = 5 AND id < 10 AND removeAt = 0 ORDER BY id DESC LIMIT 20). In the results you should see which indexes were possible and which index MySQL actually used.
One thing to note: do not add too many indexes ("just in case"), because every extra index slows down inserting new rows.
To add an index, go to the table view in phpMyAdmin (Structure), select the column on which you want to add the index, then press Index below the columns.
You need to add the indexes one by one. If you select more than one column, phpMyAdmin will create a single composite index instead.
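If you'd rather use SQL than phpMyAdmin, the single-column indexes above can be added like this (the index names are arbitrary; the column names come from the query in the question):

```sql
-- single-column indexes, added one at a time
ALTER TABLE messages ADD INDEX idx_removeAt (removeAt);
ALTER TABLE messages ADD INDEX idx_chatId (chatId);

-- a composite (multi-column) index on removeAt + chatId, in that order
ALTER TABLE messages ADD INDEX idx_removeAt_chatId (removeAt, chatId);
```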
INDEX(chatId, removeAt,  -- in either order
      id)                -- third
This index matches the WHERE and the ORDER BY exactly, hence it can also honor the LIMIT 20 cheaply. (It is not fully "covering", since the query selects *, but it pins down exactly the rows needed.)
That is, it will look at only 20 index entries, whereas any other index will require looking at far more rows.
More on optimal indexes: http://mysql.rjweb.org/doc.php/index_cookbook_mysql
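As a concrete statement (table and column names taken from the question; the index name is mine), the index above could be created with:

```sql
ALTER TABLE messages ADD INDEX idx_chat_remove_id (chatId, removeAt, id);
```

Re-running the EXPLAIN from the other answer afterwards should show this index in the "key" column.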
Related
I have this query, which is very simple, but I don't want to use an index here due to some constraints.
So my worry is how to avoid a huge load on the server when a non-indexed column appears in the WHERE clause.
The solution, I feel, is LIMIT.
I am sure the data fits within 1000 rows, so if I use LIMIT I can check the available values.
SELECT *
from tableA
where status='1' and student='$student_no'
order by id desc
limit 1000
Here the student column is not indexed in MySQL, so my worry is that it will cause a huge load on the server.
I tried EXPLAIN and it seems to be OK, but the problem is that the table currently has few rows, and as you know MySQL behaves very differently with more data, like millions of rows.
So what are my options?
Should I add an index on student?
If I add an index, then I don't need 1000 rows in the limit; one row is sufficient. But as I said, the table is going to be several million rows, so the index requires a lot of space. That's why I was thinking of avoiding an index on the student column, hoping that LIMIT 1000 with ORDER BY id DESC would not load the server, since id is indexed.
Any help would be great.
You say:
but i dont want to use index here due to some constraints...
and also say:
how to avoid huge load on server...
If you don't use an index, you'll produce a "huge load" on the server. If you want this query to be less resource-intensive, you need to add an index. For the aforementioned query, the ideal index is:
create index idx_student_status_id on tableA (student, status, id);
This index should make your query very fast, even with millions of rows.
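You can check that MySQL actually picks this index up with EXPLAIN (the student number here is a made-up example):

```sql
EXPLAIN SELECT *
FROM tableA
WHERE status = '1' AND student = '12345'
ORDER BY id DESC
LIMIT 1000;
-- the "key" column of the output should name the new index,
-- and "Extra" should no longer show "Using filesort"
```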
LIMIT 1000 doesn't force the database to search only the first 1000 rows.
It just stops searching after 1000 matches are found.
So on its own it is not a performance tool.
In the query below
SELECT *
from tableA
where status='1' and student='$student_no'
order by id desc
limit 1000
The query will run until it finds 1000 matches.
It does not restrict itself to the first 1000 rows of the table.
So the behaviour of the above query is roughly:
int rows_matched = 0;
while (rows_matched < 1000 && rows_remain()) {
    if (next_row_matches())
        rows_matched++;
}
I have the following query:
SELECT *
FROM shop_user_member_spots
WHERE delete_flag = 0
ORDER BY date_spotted desc
LIMIT 10
When run, it takes a few minutes. The table is around 2.5 million rows.
Here is the table (not designed by me but I am able to make some changes):
And finally, here are the indexes (again, not made by me!):
I've been attempting to get this query running fast for hours now, to no avail.
Here is the output of EXPLAIN:
Any help / articles are much appreciated.
Based on your query, the index you would want is on (delete_flag, date_spotted). You have an index that contains the two columns, but the id column sits between them, which makes the index unhelpful for sorting on date_spotted. Whether MySQL will actually use the index, given Zohaib's answer, I can't say (sorry, I work most often with SQL Server).
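As a sketch (the index name is mine), that index would be:

```sql
ALTER TABLE shop_user_member_spots
    ADD INDEX idx_delete_date (delete_flag, date_spotted);
```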
The problem I see in the EXPLAIN plan is that the index on the spotted date is not being used; instead the filesort mechanism is used to sort. (As for the index on the delete flag: an index only really pays off when the indexed column contains many distinct values, which a yes/no flag does not.)
The MySQL documentation says the index will not be used for the ORDER BY clause if:
The key used to fetch the rows is not the same as the one used in the ORDER BY:
SELECT * FROM t1 WHERE key2=constant ORDER BY key1;
http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
I guess the same is the case here. Although you can try using FORCE INDEX.
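A sketch of what that would look like; the index name idx_date_spotted here is hypothetical, so substitute the real name shown by SHOW INDEX FROM shop_user_member_spots:

```sql
SELECT *
FROM shop_user_member_spots FORCE INDEX (idx_date_spotted)
WHERE delete_flag = 0
ORDER BY date_spotted DESC
LIMIT 10;
```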
Say I've got a table with 100,000 rows, and I want to fetch rows in slabs of 50 with a particular WHERE clause.
The standard way of doing this: select * from table where userid=5 limit 50 offset 90500;
This runs awfully slowly.
Cause: the server still has to generate and discard the 90,500 rows before the offset; the LIMIT is only applied at the last stage.
Any thoughts on how to speed this up? Anyone?
Putting an index on "userid" should really help.
Use a primary key in the ORDER BY, as galz says, e.g. ORDER BY id, assuming your table has an id column.
Alternatively, try using an indexed field (a generalization of 1).
You may also use partitions.
1 - Using ORDER BY you can improve things, but not by much;
2 - Enabling the query cache and then selecting from the cache may improve the query;
3 - Setting an index helps, but again not by much.
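Note that a large OFFSET stays slow even with an index, because the skipped rows are still generated and thrown away. One pattern that avoids this, the same one the messages query at the top of this page uses (id < :msgId), is keyset pagination: remember the last id of the previous slab and seek past it. A sketch, assuming an index on (userid, id) and a table renamed to mytable (TABLE is a reserved word):

```sql
-- first slab
SELECT * FROM mytable WHERE userid = 5
ORDER BY id DESC LIMIT 50;

-- next slab: :lastId is the smallest id returned by the previous slab
SELECT * FROM mytable WHERE userid = 5 AND id < :lastId
ORDER BY id DESC LIMIT 50;
```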
I want to run a simple query to get the n oldest records in the table (it has a creation_date column).
How can I get that without using ORDER BY? It is a very big table, and sorting the entire table just to get n records does not seem convincing.
(Assume n << size of the table.)
When you are concerned about performance, you should probably not discard the use of ORDER BY too early.
Queries like that can be implemented as a Top-N query supported by an appropriate index. It runs very fast because it doesn't need to sort the entire table, nor even the selected rows: the data is already sorted in the index.
example:
select *
from table
where A = ?
order by creation_date
limit 10;
Without an appropriate index it will be slow if you have lots of data. However, if you create an index like this:
create index test on table (A, creation_date );
The query will be able to start fetching the rows in the correct order, without sorting, and stop when the limit is reached.
Recipe: put the WHERE columns in the index, followed by the ORDER BY columns.
If there is no WHERE clause, just put the ORDER BY columns into the index. The ORDER BY must match the index definition, especially if there are mixed ASC/DESC orders.
The indexed Top-N query is the performance king; make sure to use it.
A few links for further reading (all mine):
How to use indexes efficiently in MySQL queries
http://blog.fatalmind.com/2010/07/30/analytic-top-n-queries/ (Oracle centric)
http://Use-The-Index-Luke.com/ (not yet covering Top-N queries, but that's to come in 2011).
I haven't tested this concept before, but try creating an index on the creation_date column, which keeps the entries sorted in ascending order. Then your SELECT query can use ORDER BY creation_date DESC with LIMIT 20 to get the first 20 records. The database engine should realize the index has already done the sorting work, so it won't actually need to sort; all it needs to do is read the last 20 entries from the index.
Worth a try.
Create an index on creation_date and query using ORDER BY creation_date ASC|DESC LIMIT n, and the response will be very fast (in fact it cannot be faster). For the "latest n" scenario you need DESC.
If you want more constraints on this query (e.g. WHERE state='LIVE'), then the query may become very slow again and you'll need to reconsider the indexing strategy.
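For such a constrained case, the WHERE-columns-then-ORDER-BY-columns recipe given elsewhere in this thread would suggest something like this (the table and index names are mine; state comes from the example above):

```sql
CREATE INDEX idx_state_created ON mytable (state, creation_date);

SELECT *
FROM mytable
WHERE state = 'LIVE'
ORDER BY creation_date DESC
LIMIT 10;
```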
You can use GROUP BY if you are grouping some data, and then a HAVING clause to select specific records.
I've got a spicy question about MySQL...
The idea here is to select the n last records from a table, filtering by a property (possibly from another table). That simple.
At this point you want to reply:
let n = 10
SELECT *
FROM huge_table
JOIN another_table
ON another_table.id = huge_table.another_table_id
AND another_table.some_interesting_property
ORDER BY huge_table.id DESC
LIMIT 10
Without the JOIN that's OK: MySQL reads the index from the end and throws me 10 items; execution time is negligible.
With the JOIN, the execution time becomes dependent on the size of the table and in many cases is not negligible, the EXPLAIN stating that MySQL is "Using where; Using index; Using temporary; Using filesort".
MySQL documentation (http://dev.mysql.com/doc/refman/5.1/en/order-by-optimization.html) states that :
"You are joining many tables, and the columns in the ORDER BY are not all from the first nonconstant table that is used to retrieve rows. (This is the first table in the EXPLAIN output that does not have a const join type.)"
explaining why MySQL can't use an index to resolve my ORDER BY and prefers a huge filesort...
My question is: is it natural to use ORDER BY ... LIMIT 10 to get the last items? Do you really do it that way when picking the last 10 cards from an ascending ordered card deck? Personally, I just pick 10 from the bottom...
I tried many possibilities, but all ended with the conclusion that it's really fast to query the 10 first elements and slow to query the 10 last, because of the ORDER BY clause.
Can a "select the last 10" really be fast? Where am I wrong?
Nice question. I think you should make the ORDER BY column, i.e. id, a DESC index.
That should do the trick.
http://dev.mysql.com/doc/refman/5.0/en/create-index.html
With the JOIN you're now restricting rows to some_interesting_property, and the ids in your huge_table may no longer be consecutive... Try an index on another_table (some_interesting_property, id) and also on huge_table (another_table_id, id), and see if your EXPLAIN gives you better hints.
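As concrete statements (the index names are mine):

```sql
ALTER TABLE another_table ADD INDEX idx_prop_id (some_interesting_property, id);
ALTER TABLE huge_table ADD INDEX idx_at_id (another_table_id, id);
```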
I'm having trouble reproducing your situation. Whether I use ASC or DESC with my huge_table/another_table mock-up, my EXPLAINs and execution times all show approximately N rows read and a logical join. Which version of MySQL are you using?
Also, from the EXPLAIN doc, it states that Using index indicates
The column information is retrieved from the table using only information in the index tree without having to do an additional seek to read the actual row
which doesn't correspond with the fact that you're doing a SELECT *, unless you have an index that covers your whole table.
Perhaps you should show your schema, including indexes, and the EXPLAIN output.