In a table with 1 million rows, if I run the following (after restarting the computer, so nothing is cached):
1. SELECT price,city,state FROM tb1 WHERE zipId=13458;
the result is 23 rows in 0.270s
after I run 'LOAD INDEX INTO CACHE tb1' (key_buffer_size=128M and the total index size for the table is 82M):
2. SELECT price,city,state FROM tb1 WHERE zipId=24781;
the result is 23 rows in 0.252s; Key_reads remains constant, Key_read_requests is incremented by 23
BUT after I load 'zipId' into OS cache, if I run again the query:
3. SELECT price,city,state FROM tb1 WHERE zipId=20548;
the result is 22 rows in 0.006s
This is just a simple example, but I've run tens of tests and combinations, and the results are always the same.
I use MySQL with MyISAM on Windows 7 64-bit, and the query cache is 0.
zipId is a regular index (not the primary key).
SHOULDN'T the key cache be faster than the OS cache?
SHOULDN'T there be a huge difference in speed after I load the index into the cache?
(In my tests there is almost no difference.)
I've read a lot of websites, tutorials, and blogs on this matter, but none of them really discuss the difference in speed. So, any ideas or links will be greatly appreciated.
Thank you.
Under normal query processing, MySQL will scan the index for the WHERE clause value (i.e. zipId = 13458), then use the index entries to look up the corresponding rows in the MyISAM data file (a second disk access). When the table is cached in memory, those accesses are served from memory instead of a real disk read.
The slow part of the query is the lookup from the index into the data file, so loading only the index into memory may not improve the query speed much.
One thing to try is Explain Select on your queries to see how the index is being used.
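For example, running EXPLAIN on the original query (table and column names taken from the question) would look like this:
EXPLAIN SELECT price, city, state FROM tb1 WHERE zipId = 13458;
The "key" column in the output should name the zipId index, and the "rows" column shows how many index entries MySQL expects to examine.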
Edit: since I don't think the answers to your comments will fit in the comment space, I'll answer them here.
MyISAM itself does not cache table data; it relies upon the OS to do the disk caching. How much of your table is cached by the OS depends upon what else you are running on the system and how much data you are reading through. Windows in particular does not give the user much control over what data is cached and for how long.
The OS caches disk blocks (either 4K or 8K chunks) of the index file or the full table file.
SELECT indexed_col FROM tb1 WHERE zipId+0>1
Queries like this, where you apply a function or expression to an indexed column in the predicate (the WHERE clause), can cause MySQL to do a full table scan rather than use any index. As I suggested above, use EXPLAIN SELECT to see what MySQL is doing.
If you want more control over the cache, try using an InnoDB table. The InnoDB engine maintains its own buffer pool, which you can size, and it does a better job of keeping the most recently used data in it.
Related
How is it possible that adding an index to a column slowed down the execution time?
I'm trying to get the query out of the slow query log.
My slow-query settings:
slow_query_log = 1
long_query_time = 1 # seconds
log_queries_not_using_indexes = 1
slow_query_log_file = /var/log/mysql-slow.log
Indexes do not always speed up execution. The effect of an index depends primarily on the "selectivity" of the query: how many rows are processed by the overall query.
In general, reading a table sequentially (a "full table scan") is an efficient operation. The database engine knows which pages it needs to read and can read ahead to get them. Such I/O often occurs in the background while the pages are processed in the foreground, so when the next page is needed there is a good chance it is already in the page cache.
The performance issue with full table scans is that tables are big. So even efficient reads take time. When you are looking for one row in a million ("needle-in-the-haystack" queries), the reads are a waste of time. This is where indexes fix things.
However, say you have 100 records per page and you are reading more than 1% of the records. On average, nearly every page will need to be read, whether you are using an index or a full-table scan (with 1% selectivity, the chance that a given page holds no matching row is roughly 0.99^100, about 37%, so about two thirds of all pages still get touched). The problem is that index reads are less efficient than scan reads: the read-ahead mechanism doesn't help them, because the reads are random.
This problem can be further exacerbated by something called thrashing. If the table does not fit into memory, then each random read is likely to be a cache miss, incurring the overhead of a read from disk. The full table scan just reads the data sequentially, and with a decent read-ahead system there would be no cache misses.
In your example, you could increase the selectivity of the index by including both banner and event in the index (these are compared using equality) and one of the other fields.
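As a sketch only (the table name, index name, and the third column are placeholders, since they aren't given in the question), such a composite index could be created like this:
ALTER TABLE clicks ADD INDEX idx_banner_event_dt (banner, event, click_date);
The two equality columns come first so the index narrows down to the matching rows before the remaining condition is applied.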
Depending on the structure of the data on disk, it might be faster to just load the entire table/column and sort/filter it in RAM (which will likely happen when no index exists) than to traverse a sparse index on disk. I don't know whether this applies to your specific context or whether you have another issue here, though.
I have a pretty simple query over a table with about 14 million records that is taking about 30 minutes to complete. Here is the query:
select a.switch_name, a.recording_id, a.recording_date, a.start_time,
a.recording_id, a.duration, a.ani, a.dnis, a.agent_id, a.campaign,
a.call_type, a.agent_call_result, a.queue_name, a.rec_stopped,
a.balance, a.client_number, a.case_number, a.team_code
from recording_tbl as a
where client_number <> '1234567'
Filtering on client_number seems to be the culprit, and that column does have an index. I'm not sure what else to try.
You can start by creating an index on client_number and see whether it helps, but you'll get the best results by analyzing your problem with the EXPLAIN command.
http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
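For illustration (the index name is just a placeholder; the table and column names come from the question), the two steps might look like:
CREATE INDEX idx_client_number ON recording_tbl (client_number);
EXPLAIN SELECT a.recording_id, a.client_number FROM recording_tbl AS a WHERE a.client_number <> '1234567';
If the EXPLAIN output shows type = ALL with no key, MySQL is still doing a full table scan despite the index, which is what you would expect for a <> predicate that matches almost every row.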
Is the table MyISAM or InnoDB? If InnoDB, increase the InnoDB buffer pool so the entire table can fit into memory. If MyISAM, it should automatically be loaded into memory via the OS cache buffers. Install more RAM. Install faster disk drives. These seem to be your only options, considering you are doing an entire table scan (minus whatever client number appears to be your test client id).
It takes a while to load the tables into RAM as well, so don't expect it as soon as the database starts up.
Your query is doing a full table scan on the one table in the query, recording_tbl. I am assuming this is a table and not a view, because of the "tbl" suffix. If this is a view, then you need to optimize the view.
There is no need to look at the explain. An index is unlikely to be helpful unless 99% or so of the records have a client_number of 1234567. An index might even make things worse, because of a phenomenon called thrashing.
Your problem is either undersized hardware or underallocated resources for the MySQL query engine. I would first look at buffering for the engine, and then the disk hardware and bandwidth to the processor.
Maybe...
where client_number = '1234567'
...would be a bit faster.
If client_number is stored as a numeric field, then
where client_number = 1234567
may be faster, if the string comparison was causing a cast and possibly preventing the index from being used.
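One quick way to check this (the table and column names come from the question; the exact column type is the assumption being tested) is to look at the column's actual type and then compare against a literal of the same type:
SHOW COLUMNS FROM recording_tbl LIKE 'client_number';
SELECT COUNT(*) FROM recording_tbl WHERE client_number <> 1234567;
If the column turns out to be numeric, dropping the quotes avoids any implicit cast in the comparison.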
Why do you need to return 14m rows? (I'm assuming that most records do not have the ID you are searching on).
If you don't need all 14m rows, add LIMIT to the end of your query. Fewer rows -> less memory -> faster query.
Example:
select a.switch_name, a.recording_id, a.recording_date, a.start_time,
a.recording_id, a.duration, a.ani, a.dnis, a.agent_id, a.campaign,
a.call_type, a.agent_call_result, a.queue_name, a.rec_stopped,
a.balance, a.client_number, a.case_number, a.team_code
from recording_tbl as a
where client_number <> '1234567'
LIMIT 1000
Would return the first 1000 rows.
And here's a comparison of how to return the top N rows across different SQL RDBMS:
http://www.petefreitag.com/item/59.cfm
I have a MySQL table with over 30 million records that was originally stored with MyISAM. Here is a description of the table:
I would run the following query against this table which would generally take around 30 seconds to complete. I would change #eid each time to avoid database or disk caching.
select count(fact_data.id)
from fact_data
where fact_data.entity_id=#eid
and fact_data.metric_id=1
I then converted this table to InnoDB without making any other changes, and afterwards the same query returns in under a second every single time I run it. Even when I randomly set #eid to avoid caching, the query returns in under a second.
I've been researching the differences between the two storage engines to try to explain the dramatic improvement in performance, but haven't been able to come up with anything. In fact, much of what I read indicates that MyISAM should be faster.
The queries I'm running are against a local database with no other processes hitting the database at the time of the tests.
That's a surprisingly large performance difference, but I can think of a few things that may be contributing.
MyISAM has historically been viewed as faster than InnoDB, but for recent versions of InnoDB, that is true for a much, much smaller set of use cases. MyISAM is typically faster for table scans of read-only tables. In most other use cases, I typically find InnoDB to be faster. Often many times faster. Table locks are a death knell for MyISAM in most of my usage of MySQL.
MyISAM caches indexes in its key buffer. Perhaps you have set the key buffer too small for it to effectively cache the index for your somewhat large table.
MyISAM depends on the OS to cache table data from the .MYD files in the OS disk cache. If the OS is running low on memory, it will start dumping its disk cache. That could force it to keep reading from disk.
InnoDB caches both indexes and data in its own memory buffer. You can tell the OS not to also use its disk cache if you set innodb_flush_method to O_DIRECT, though this isn't supported on OS X.
InnoDB usually buffers data and indexes in 16kb pages. Depending on how you are changing the value of #eid between queries, it may have already cached the data for one query due to the disk reads from a previous query.
Make sure you created the indexes identically. Use EXPLAIN to check whether MySQL is using the index. Since you included the output of DESCRIBE instead of SHOW CREATE TABLE or SHOW INDEXES FROM, I can't tell whether entity_id is part of a composite index. If it is not the first column of a composite index, it won't be used.
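If it helps, those checks would look something like this (the literal 12345 just stands in for whatever #eid value you are testing with):
SHOW CREATE TABLE fact_data;
SHOW INDEXES FROM fact_data;
EXPLAIN SELECT COUNT(fact_data.id) FROM fact_data WHERE fact_data.entity_id = 12345 AND fact_data.metric_id = 1;
The key column in the EXPLAIN output should name an index whose first column is entity_id; if it doesn't, that index isn't being used for this query.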
If you are using a relatively modern version of MySQL, run the following command before running the query:
set profiling = 1;
That will turn on query profiling for your session. After running the query, run
show profiles;
That will show you the list of queries for which profiles are available. I think it keeps the last 20 by default. Assuming your query was the first one, run:
show profile for query 1;
You will then see the duration of each stage in running your query. This is extremely useful for determining what (e.g., table locks, sorting, creating temp tables, etc.) is causing a query to be slow.
My first suspicion would be that the original MyISAM table and/or indexes became fragmented over time resulting in the performance slowly degrading. The InnoDB table would not have the same problem since you created it with all the data already in it (so it would all be stored sequentially on disk).
You could test this theory by rebuilding the MyISAM table. The easiest way to do this would be to use a "null" ALTER TABLE statement:
ALTER TABLE mytable ENGINE = MyISAM;
Then check the performance to see if it is better.
Another possibility is that the database itself is simply tuned for InnoDB performance rather than MyISAM. For example, InnoDB uses the innodb_buffer_pool_size parameter to decide how much memory to allocate for caching data and indexes, while MyISAM uses the key_buffer_size parameter (and caches only indexes there). If your database has a large InnoDB buffer pool and a small key buffer, InnoDB performance will be better than MyISAM performance, especially for large tables.
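As a rough sketch of what that imbalance looks like in my.cnf (the sizes are purely illustrative, not recommendations for your hardware):
# InnoDB caches both data and indexes here
innodb_buffer_pool_size = 2G
# MyISAM caches only index blocks here; data blocks rely on the OS cache
key_buffer_size = 64M
With settings like these, a large table is served almost entirely from memory under InnoDB but keeps hitting the disk for data rows under MyISAM.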
What are your index definitions? There are ways of creating indexes for MyISAM such that your index columns will not be used when you would expect them to be.
I currently have a table with 10 million rows and need to increase the performance drastically.
I have thought about dividing this one table into 20 smaller tables of 500k rows each, but I could not get an increase in performance.
I have created 4 indexes on 4 columns and converted all the columns to INTs, and I have another column that is a BIT.
My basic query is select primary from from mytable where column1 = int and bitcolumn = b'1'; this is still very slow. Is there anything I can do to increase the performance?
Server Spec
32GB memory, 2TB storage, the standard ini file, and an AMD Phenom II X6 1090T processor
In addition to giving the MySQL server more memory to play with, remove unnecessary indexes and make sure you have an index on column1 (in your case). Add a LIMIT clause to the SQL if possible.
Download this (on your server):
MySQLTuner.pl
Install it, run it, and see what it says; even better, paste the output here.
There is not enough information to reliably diagnose the issue, but you state that you're using "the default" my.cnf / my.ini file on a system with 32G of memory.
From the MySQL Documentation the following pre-configured files are shipped:
Small: System has <64MB memory, and MySQL is not used often.
Medium: System has at least 64MB memory
Large: System has at least 512MB memory and the server will run mainly MySQL.
Huge: System has at least 1GB memory and the server will run mainly MySQL.
Heavy: System has at least 4GB memory and the server will run mainly MySQL.
In the best case, you're using a configuration file that utilizes 1/8th of the memory on your system (that is, if you are using the "Heavy" file, which as far as I recall is not the default one; I think the default is Medium or perhaps Large).
I suggest editing your my.cnf file appropriately.
There are several areas of MySQL for which the memory allocation can be tweaked to maximize performance for your particular case. You can post your my.cnf / my.ini file here for more specific advice. You can also use MySQL Tuner to get some automated advice.
I found something that made a big difference in the query time,
but it may not be useful for all cases, only in mine.
I have a huge table (about 2,350,000 records), and I can predict the exact range of ids I need to look in,
so I added the condition WHERE id > '2300000'. As I said, this is my particular case, but it may help others.
So the full query will be:
SELECT primary from mytable where id > '2300000' AND column1 = int AND bitcolumn = b'1'
The query time was 2~3 seconds, and now it is less than 0.01 seconds.
First of all, your query
select primary from from mytable where column1 = int and bitcolumn = b'1'
has some errors, like two FROM clauses. Second, splitting the table and using unnecessary indexes never helps performance. Some tips to follow:
1) Use a composite index if you repeatedly query some columns together (see the sketch after this list). But take precautions, because in a composite index the order in which the columns are placed matters a lot.
2) The primary key is more helpful if it is on an INT column.
3) Read some articles on indexes and optimization; there are plenty of them, search on Google.
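As a sketch of tip 1 (the index name and the literal 123 are placeholders; the table and column names come from the question):
ALTER TABLE mytable ADD INDEX idx_col1_bit (column1, bitcolumn);
SELECT `primary` FROM mytable WHERE column1 = 123 AND bitcolumn = b'1';
With both filter columns in one index, MySQL can locate the matching rows directly instead of scanning on one column and filtering on the other. Note that the duplicated from is removed here and primary is backquoted, since PRIMARY is a reserved word.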
I'm running:
MySQL v5.0.67
InnoDB engine
innodb_buffer_pool_size = 70MB
Question: What command can I run to ensure that my entire 50 MB database is stored entirely in RAM?
I am curious why you want to store the entire database in memory; my guess is that you don't really need to. The most important thing for me is whether your queries are running well and whether you are tied up on disk access. It is also possible that the OS has cached the disk blocks you need if there is memory available; in that case, even though MySQL might not have them in memory, the OS will. If your queries are not running well, and you can do it, I highly recommend adding more memory if you want it all in RAM. If you have slowdowns, it is more likely that you are running into contention.
show table status
will show you some of the information.
You can get the server IO/buffer/cache statistics from
SHOW GLOBAL STATUS
then run a query that forces each row to be accessed (say, summing the non-empty values of a column that is not indexed) and check whether any additional IO has occurred.
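A sketch of that check using InnoDB's buffer pool counters (some_unindexed_col is a placeholder for any non-indexed column in your table):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
SELECT SUM(some_unindexed_col) FROM yourtable;
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
If the counter barely changes between the two snapshots, the rows were served from the buffer pool rather than from disk.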
I doubt you are caching the entire thing in memory though with only 70MB. You have to take out a lot of cache, temp, and index buffers from that total.
If you run SELECT COUNT(*) FROM yourtable USE INDEX (PRIMARY) then InnoDB will put every page of the PRIMARY index into the buffer pool (assuming there is enough room in it). If the table has secondary indexes and you want to load them into the buffer pool too, craft a similar query that reads from a secondary index.
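For example, assuming a secondary index named idx_zip over a column zip_code (both names are placeholders), something like this forces InnoDB to read every page of that index:
SELECT COUNT(zip_code) FROM yourtable FORCE INDEX (idx_zip);
Counting a column that is contained in the secondary index lets the query be answered from the index alone, so all of its pages get pulled into the buffer pool.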