MySQL SELECT with index not fast - mysql

I'm facing problems with MySQL and an index. I created an index on a MySQL table with 50M rows.
I'm trying to run this:
SELECT userid FROM `database` WHERE userid = 4 ORDER BY id DESC LIMIT 10;
I created an index on userid, and when I EXPLAIN the query it shows about 92,000 in the rows field, but the first run takes at least 15-40s to return results. If I run the same SELECT again a second later, it is very fast: 0.02s.
I also noticed that if I rewrite the SELECT so it doesn't use the index, EXPLAIN drops to about 3,000 in the rows field, but the timing problem is the same.
One other detail that may matter: the table uses the MyISAM engine.
What’s the problem with my index and my query?
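One likely culprit, offered as a sketch rather than a confirmed diagnosis: an index on userid alone can find the matching rows, but it cannot deliver them in id order, so MySQL still has to fetch and sort all of them. A composite index covering both the filter and the sort column would let the query read the last 10 rows directly; the names below mirror the query above:
ALTER TABLE `database` ADD INDEX idx_userid_id (userid, id);
-- With this index MySQL can walk the (userid, id) entries for userid = 4
-- backwards and stop after 10 rows, avoiding both the sort and most of the I/O.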

Related

Index working on one column but not on another column of the same table in MySQL phpMyAdmin

I have loaded about 500 million records into MySQL via phpMyAdmin. For fast searching I used indexes. There are two columns I want to search on, and I created an index on each. After indexing, searching on one column became very fast, but searching on the other is as slow as before indexing. I have about 30-40 tables and indexed them all; MySQL reports that the indexing succeeded, but searching is still slow. I am not able to figure out what is wrong.
I have added an index on both the n and cnic columns. It works fine on n but not on cnic.
The SELECT statements are the following:
SELECT * FROM TABLE_NAME WHERE n='$VARIABLE'    -- works very fast
SELECT * FROM TABLE_NAME WHERE cnic='$VARIABLE' -- works slowly
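A quick way to check whether the cnic index is actually being used (a sketch assuming the placeholder names above):
EXPLAIN SELECT * FROM TABLE_NAME WHERE cnic = '1234567890123';
-- If the "key" column in the EXPLAIN output is NULL, the cnic index is not used.
-- A common cause is a type mismatch, e.g. a VARCHAR cnic column compared to an
-- unquoted number, which forces a per-row cast and prevents index use.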

MySQL Query across 40,000 rows v/s one time load and for loop

I have a table with 40,000 rows. On the application side, about 20,000 users will need to run a query at almost the same second to find their related row. Which is the better approach here?
Loading all 40,000 rows into a cache and running a for loop over them to find the record?
Simply querying the database.
Here is what the query will look like, where the parameter is the user's IP:
SELECT * FROM iplist WHERE ipfrom <= INET_ATON('xxx.xxx.xx.xx') LIMIT 1;
MySQL already caches the data, in the form of the InnoDB Buffer Pool. As pages of data and indexes are requested, they are copied to RAM, and used for any subsequent queries.
You should define an index for the column you search on, if you don't already have an index or a primary key defined for that column:
ALTER TABLE iplist ADD INDEX (ipfrom);
Then searching for a specific value in that column won't require a table-scan, it will narrow down the search efficiently.
Note that when you use LIMIT, you should also use ORDER BY; otherwise the row you get will be the first one read in index order, which may not always be what you want. If you use ORDER BY redundantly (i.e. the same order in which it reads the index), it will be optimized out.
SELECT * FROM iplist WHERE ipfrom <= INET_ATON(?) ORDER BY ipfrom LIMIT 1;
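One caveat, assuming the intent is a range lookup (the question doesn't say): if each row's ipfrom marks the start of an IP range, the row you want is the one with the largest ipfrom still at or below the address, which means sorting descending:
SELECT * FROM iplist WHERE ipfrom <= INET_ATON(?) ORDER BY ipfrom DESC LIMIT 1;
-- DESC returns the closest range start at or below the given address;
-- the index on ipfrom still satisfies both the comparison and the sort.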

MySQL Locking Tables with millions of rows

I've been running a website that handles a large amount of data. Users save data such as ip, id, and date to the server, and it is stored in a MySQL database. Each entry is stored as a single row in a table.
Right now there are approximately 24 million rows in the table.
Problem 1:
Things are getting slow now; a full table scan can take many minutes, even though I have already indexed the table.
Problem 2:
If a user runs a SELECT against the table, it can block all other users' access to the site (because the table is locked) until the query completes.
Our server:
32 GB RAM
12-core / 24-thread CPU
The table uses the MyISAM engine.
EXPLAIN SELECT SUM(impresn), SUM(rae), SUM(reve), `date` FROM `publisher_ads_hits` WHERE date between '2015-05-01' AND '2016-04-02' AND userid='168' GROUP BY date ORDER BY date DESC
To add to the comment from #Max P.: if you write to MyISAM tables, ALL SELECTs are blocked, because there is only a table-level lock. If you use InnoDB, there is a row lock that locks only the rows it needs. Also, show us the EXPLAIN of your queries; it is possible that you must create some new indexes. MySQL generally uses only one index per table per query, so if you use more fields in the WHERE condition it can be useful to have a COMPOSITE INDEX over those fields.
According to the EXPLAIN output, the query doesn't use an index. Try adding a composite index on (userid, date).
If you have many UPDATE and DELETE operations, try changing the engine to InnoDB.
The basic problem is the full table scan. Some suggestions (see the sketch after this list):
Partition the table based on date, and don't keep more than 6-12 months of data in the live system
Add an index on user_id
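A minimal sketch combining the suggestions above, using the table name from the EXPLAIN (the index name is illustrative):
-- Composite index serving the WHERE (userid + date range) and the GROUP BY/ORDER BY on date:
ALTER TABLE publisher_ads_hits ADD INDEX idx_userid_date (userid, `date`);
-- Switching to InnoDB replaces MyISAM's table-level locks with row-level locks:
ALTER TABLE publisher_ads_hits ENGINE=InnoDB;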

Most efficient query to get last modified record in large table

I have a table with a large number of records ( > 300,000). The most relevant fields in the table are:
CREATE_DATE
MOD_DATE
Those are updated every time a record is added or updated.
I now need to query this table to find the date of the record that was modified last. I'm currently using
SELECT mod_date FROM table ORDER BY mod_date DESC LIMIT 1;
But I'm wondering if this is the most efficient way to get the answer.
I've tried adding a WHERE clause to limit the date to the last month, but it looks like that's actually slower (and I need the most recent date, which could be older than the last month).
I've also tried the suggestion I read elsewhere to use:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'db'
AND TABLE_NAME = 'table';
But since I might be working on a dump of the original, that query might return NULL. And it looks like this is actually slower than the original query.
I can't resort to last_insert_id() because I'm not updating or inserting.
I just want to make sure I have the most efficient query possible.
The most efficient way for this query would be to use an index for the column MOD_DATE.
From How MySQL Uses Indexes (8.3.1):
Indexes are used to find rows with specific column values quickly. Without an index, MySQL must begin with the first row and then read through the entire table to find the relevant rows. The larger the table, the more this costs. If the table has an index for the columns in question, MySQL can quickly determine the position to seek to in the middle of the data file without having to look at all the data. If a table has 1,000 rows, this is at least 100 times faster than reading sequentially.
You can use
SHOW CREATE TABLE `table`;
to get the CREATE statement and see whether an index on MOD_DATE is defined.
To add an Index you can use
CREATE INDEX
CREATE [UNIQUE|FULLTEXT|SPATIAL] INDEX index_name
[index_type]
ON tbl_name (index_col_name,...)
[index_option]
[algorithm_option | lock_option] ...
see http://dev.mysql.com/doc/refman/5.6/en/create-index.html
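For example, a single-column index on MOD_DATE (assuming the table is literally named `table`, as in the question, hence the backticks):
CREATE INDEX idx_mod_date ON `table` (mod_date);
-- With this index, ORDER BY mod_date DESC LIMIT 1 (or MAX(mod_date)) can be
-- answered from the last index entry instead of scanning all 300,000+ rows.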
Make sure that both of those fields are indexed.
Then I would just run:
SELECT MAX(mod_date) FROM `table`;
or MAX(create_date), whichever one you need.
Make sure to create 2 indexes, one on each date field, not a compound index on both.
As for a discussion of the difference between this and using limit, see MIN/MAX vs ORDER BY and LIMIT
Use EXPLAIN:
http://dev.mysql.com/doc/refman/5.0/en/explain.html
This tells you how MySQL executes the statement; with that you can figure out the most efficient approach yourself, since it depends on your DB structure and there is no single universal solution.
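Applied to this question, that would look like the following (assuming the index on mod_date exists):
EXPLAIN SELECT MAX(mod_date) FROM `table`;
-- With an index on mod_date, the Extra column typically reads
-- "Select tables optimized away": the value comes straight from the index.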

mysql query optimization - order by + indexes

I'm trying to learn something about optimizing and indexes because I ran an INSERT ... SELECT query that took 4 minutes to complete. Now I've added multiple indexes, and that seems to have made my query run in 0.160s. What I'm still wondering is why the customer table gets the "Using filesort" message when I order by orderdate in my order table. Query and EXPLAIN:
I've even tried an index on O (Orders) for (orderdate, orderid) and for (orderdate, orderid, customerid). I thought one of them would help, but no dice. Can anyone help me understand why?
There is nothing wrong with having a query that uses "filesort"; all that means is that the results can't be sorted based on an index.
Now the reason why the sort can't be performed on an index is in this case because your ORDER BY contains columns from tables other than the first table in the join queue.
What happens is that as the results are fetched from the query, they are put into a temporary table so they can be sorted later. Since your query result doesn't contain very many rows, that temporary table probably fits in memory.
Adding the initial indexes sped up your query most likely because MySQL was doing a full table scan to fetch the results initially which was very time consuming. Once you added the proper indexes, finding the records is extremely quick. It probably had to do a filesort on a temporary table originally but this was likely no slower or faster than it is now.
If you try moving the join for the Orders table and put it before the join of the Products table, you may be able to eliminate the use of the temporary table and file sort.
Check out what does using filesort mean? and How MySQL Uses Internal Temporary Tables for more information.
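A hypothetical sketch of that reordering; the original query was only shown as an image, so all table and column names here are illustrative:
SELECT C.customername, O.orderdate, P.productname
FROM Orders AS O                                      -- Orders first: its orderdate index
JOIN Customers AS C ON C.customerid = O.customerid    -- can now drive the row order,
JOIN Products AS P ON P.productid = O.productid       -- so no temporary table/filesort
ORDER BY O.orderdate;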