Find least recently used MySQL table index

I am cleaning up duplicate indexes on the (InnoDB) tables in a MySQL database. The database has about 50 tables.
While I can check for duplicate keys using pt-duplicate-key-checker, I was wondering if there is a tool that could help me find the least recently used indexes across all the tables.
For example, if table "A" has three indexes defined, "index_1", "index_2" and "index_3", and "index_3" is used least frequently, say in only 1 out of every 10,000 queries made against the table, then the output of the script or tool should be "index_3".
Is there a good way or a tool that could help me run this analysis on the database?
Thanks in advance.

Starting with MySQL 5.6, the performance_schema instruments table I/O and computes aggregated statistics by table and by index.
See table performance_schema.table_io_waits_summary_by_index_usage:
http://dev.mysql.com/doc/refman/5.6/en/table-waits-summary-tables.html#table-io-waits-summary-by-index-usage-table
Finding the least recently used index would involve time and timestamps, which are not recorded here.
What the performance schema does measure is the I/O count against each index, so it can be used to find the least often used index, which in practice should be pretty close.
Full example here:
http://sqlfiddle.com/#!9/c129c/4
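As a minimal sketch (the schema name is a placeholder), rarely used indexes can be listed by sorting on the I/O counters:

SELECT object_schema, object_name, index_name, count_star
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE object_schema = 'your_database'   -- placeholder schema name
  AND index_name IS NOT NULL            -- NULL rows represent full table scans
  AND index_name <> 'PRIMARY'
ORDER BY count_star ASC;

Note that these counters reset when the server restarts, so the instance needs to have been running long enough to see representative traffic.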
Note: This question is also cross-posted at
https://dba.stackexchange.com/questions/25406/find-least-recently-used-mysql-table-index

Related

Does using UNION or joins in a query cause slower performance in Sphinx?

I am planning to use Sphinx with MySQL for my current project.
The database engine is MyISAM, since this DB is going to be read-only, with 10-25 million records.
So I would like to know:
Does using UNION or joins in a query cause performance issues in Sphinx?
I am about to design the database, and if unions/joins are going to cause slower performance then I can go for a design optimized for Sphinx instead.
For example, creating one big table with all fields and data, and then creating separate indexes in Sphinx depending on the data to be searched.
Please guide me in the right direction.
Thanks for your time.
Sphinx can't do joins anyway. It can do unions, simply by searching multiple indexes at once.
Or do you mean when building the Sphinx index (i.e. in sql_query)? The indexer only runs those queries to build the indexes in the first place.
As you say the data is read-only, hence no updates, the indexes should never need rebuilding, so it doesn't really matter how slow those build queries are.
In general a Sphinx index will perform very similarly regardless of how many fields it has, so you shouldn't need to split it into different indexes. Just have one multi-purpose index (if that's possible).
You can however shard the index into pieces, so it can be distributed to multiple servers if performance becomes an issue.
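To illustrate the sql_query point above: any join only runs once, when the indexer builds the index, never at search time. The kind of statement you would put in sql_query might look roughly like this (table and column names are made up for illustration):

SELECT p.id, p.title, p.body, c.name AS category_name
FROM products p
JOIN categories c ON c.id = p.category_id;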

Solr for indexing one table vs. a regular MySQL InnoDB table buffer

I'm planning to use Solr for search over one MySQL table; this table will have millions of records.
I normalised this table so it can be indexed and searched by Solr for better performance.
Is this the right approach, or should the search be done against the MySQL table itself, i.e. with a normal SELECT?
Please advise which is better, since it's just one table to be indexed and searched.
Thanks
If your requirement is to "search" only one MySQL table using regular SQL SELECT statements, then Apache Solr seems a bit roundabout and overkill.
With solr, you'd need to periodically import data from the MySQL table into the solr schema, which essentially means you'll be maintaining two copies of the data. So there's also a delay between the time data is changed in the MySQL table and when the solr schema is refreshed from the database.
Aside from that, as far as performance goes, I'd expect that Solr would be nearly as fast searching its schema as MySQL would be at searching its own table. But you'd really need to test that to make that determination for your particular requirements.
You'd likely want to make use of the solr "delta import", specifying appropriate queries to identify rows in the MySQL table that have been inserted and updated since the last import.
If this is the only usage of the MySQL table, then the only indexes you would really need on the MySQL table, as far as solr search/import performance is concerned, would be those indexes needed to optimize the performance of the two "delta import" queries.
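As a rough illustration of the delta-import idea (the table, column and index names here are hypothetical), the delta queries boil down to something like the following, and the only MySQL index they really need is one on the change timestamp:

-- Rows inserted or updated since the last import (the timestamp literal is a
-- placeholder for whatever value the import tool passes in):
SELECT id
FROM documents
WHERE last_modified > '2013-01-01 00:00:00';

-- The index that keeps the delta query fast:
CREATE INDEX idx_documents_last_modified ON documents (last_modified);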
Try using a third-party search service like www.rockitsearch.com. It is an out-of-the-box search solution, with no need to worry about maintaining a Solr cluster.

Modify database files

I have a system that a client designed and the table was originally not supposed to get larger than 10 gigs (maybe 10 million rows) over a few years. Well, they've imported a lot more information than they were thinking and within a month, the table is now up to 208 gigs (900 million rows).
I have very little experience with MySQL and a lot more experience with Microsoft SQL. Is there anything in MySQL that would allow the client to have the database span multiple files so the queries that are run wouldn't have to use the entire table and index? There is a field on the table that could easily be split on, but I wasn't sure how to do this.
The main issue I'm trying to solve is a retrieval query from this table. Inserts aren't a big deal at all since it's all done by a back-end service. I have a test system where the table is about 2 gigs (6 million rows) and my query takes less than a second. When this same query is run on the production system, it takes 20 seconds. I have a feeling that the query itself is fine; it's just the size of the table that's causing the issue. There is an index on this table created specifically for this query, and according to EXPLAIN, it is being used.
If you have any other suggestions/questions, please feel free to ask.
Use partitioning, and especially the DATA DIRECTORY and INDEX DIRECTORY options of CREATE TABLE.
With these options you can put partitions on separate drives if needed. Usually, though, it's enough to partition on a key that you can use in every query, usually time.
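A minimal sketch of what that could look like, partitioning by a date-based key and spreading partitions over separate drives (the table, columns and paths are hypothetical; note that InnoDB only honours DATA DIRECTORY, while MyISAM also supports INDEX DIRECTORY):

CREATE TABLE events (
    id BIGINT NOT NULL,
    created_at DATE NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id, created_at)   -- the partitioning column must be part of every unique key
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2011 VALUES LESS THAN (2012) DATA DIRECTORY = '/disk1/mysql',
    PARTITION p2012 VALUES LESS THAN (2013) DATA DIRECTORY = '/disk2/mysql',
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Queries that filter on the partition key only touch the relevant partitions:
SELECT * FROM events WHERE created_at >= '2012-06-01' AND id = 12345;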
In addition to partitioning which has been mentioned you might also want to run the tuning-primer script to ensure your mysql configuration is optimal.

MySQL speed, table indexes and SELECT/UPDATE/INSERT

We have a MySQL table with more than 7,000,000 (yes, seven million) rows.
We are constantly running a lot of SELECT / INSERT / UPDATE queries, every 5 seconds.
Would it be a good idea to create a MySQL index for that table? Could there be bad consequences like data corruption, losing the MySQL service, etc.?
Little info:
MySQL version 5.1.56
Server CentOS
Table engines are MyISAM
MySQL CPU load between 200% - 400% always
In general, indexes will improve the speed of SELECT operations and will slow down INSERT/UPDATE/DELETE operations, as both the base table and the indexes must be modified when a change occurs.
It is very difficult to say. I would expect that building the index itself might take some time, but after that you should see some improvements. As said by @Joe and @Patrick, it might hurt your modification time, but selects will be faster.
Of course, there are also other ways of improving insert and update performance. You could, for example, batch updates if it is not important to have changes visible immediately.
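Batching writes can be as simple as collapsing many single-row statements into one (the table and values here are just for illustration):

-- One statement and one round trip instead of three separate INSERTs:
INSERT INTO events (id, payload) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');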
The indexes will help dramatically with selects, especially if they match up well with the commonly filtered fields and you have a good, simple primary key. They will help both query time and processing cycles.
The drawback is when you very often update/alter/delete these records, especially the indexed fields. Even in that case, though, it is often worth it.
How much you're going to be reporting (SELECT statements) vs. updating hugely affects both your initial design and your later adjustments once your DB is in the wild. Since you already have what you have, testing will give you the answers you need. If you really do a lot of select queries and a lot of updating, one solution might be to copy data out now and then to a reporting table. Then you can index like crazy with no ill effects.
You have actually asked a large question, and you should study up on this more. The general points I've mentioned above hold for almost all relational DBs, but each particular database (MySQL in your case) also has its own behavior, mainly in how it decides when and where to use indexes.
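As a rough sketch of the reporting-table idea mentioned above (the table, column and index names are hypothetical):

-- Periodic copy into a separate reporting table that can be indexed freely,
-- without slowing down writes on the live table:
CREATE TABLE orders_report LIKE orders;
INSERT INTO orders_report SELECT * FROM orders;
CREATE INDEX idx_report_status_date ON orders_report (status, order_date);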
If you are looking for performance, indexes are the way to go. Indexes speed up your queries. If you have 7 million records, your queries are probably taking many seconds, possibly a minute, depending on your memory size.
Generally speaking, I would create indexes that match the most frequent SELECT statements. Everyone talks about the negative impact of indexes on table size and write speed, but I would ignore those impacts unless you have a table where 95% of the activity is inserts and updates. Even then, if those inserts happen at night and you query during the day, go and create those indexes; your daytime users will appreciate it.
What is the actual time impact of an additional index on an insert or update statement, 0.001 seconds maybe? If the index saves you many seconds per query, I'd guess the additional time required to update the index is well worth it.
The only time I ever had an issue with creating an index (it actually broke the program logic) was when we added a primary key to a table that had previously been created (by someone else) without one, and the program expected the SELECT statement to return the records in the sequence they were created. Creating the primary key changed that: when selecting without any WHERE clause, the records were returned in a different order.
This is obviously wrong design in the first place; nevertheless, if you have an older program and you encounter tables without a primary key, I suggest looking at the code that reads the table before adding one, just in case.
One last thought about creating indexes: the choice of fields and the sequence in which the fields appear in the index both have an impact on the performance of the index.
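For instance (hypothetical table and columns), with a composite index the leftmost columns determine which queries can use it:

CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);

-- Can use the index (the filter starts with the leftmost column):
SELECT * FROM orders WHERE customer_id = 42;
SELECT * FROM orders WHERE customer_id = 42 AND order_date >= '2012-01-01';

-- Cannot use the index efficiently (the leftmost column is missing):
SELECT * FROM orders WHERE order_date >= '2012-01-01';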
I had the same kind of problem that you describe.
I made a few changes and one query went from 11 seconds to a few milliseconds:
1- Upgraded to MariaDB 10.1
2- Changed all my DB tables to the Aria engine
3- Trimmed my.cnf to the strict minimum
4- Upgraded to PHP 7.1 (but this one had only a small impact)
5- On CentOS: ran "yum update" in the terminal or via SSH (keeping everything up to date)
1- MariaDB is an open-source fork of MySQL
2- The Aria engine is the evolution of MyISAM
3- my.cnf usually carries too many changes that affect performance
Here is an example:
[mysqld]
performance-schema=1
general_log=0
slow_query_log=0
max_allowed_packet=268435456
By removing all the extra options from my.cnf, you tell MySQL to use its default values.
In MySQL 5 (5.1, 5.5, 5.6...), when I did that I only noticed a small difference.
But in MariaDB, a small my.cnf like this made a BIG difference.
Throughout all of those changes, the server hardware remained the same.
Hope this helps.

Only MySQL, OR MySQL + SQLite, OR MySQL + own solution

Currently I am building quite a big web system and I need a strong SQL database solution. I chose MySQL over Postgres because some of the tasks need to be read-only (MyISAM engine) and others are massive-write (InnoDB).
I have a question about this read-only part. It has to be extremely fast: a user must get an answer in well under one second.
Let's say we have one well-indexed table named "object" with no more than 10 million rows, and another one named "element" with around 150 million rows.
We also have a table named "element_object" containing information connecting rows from table "element" with rows from table "object" (hundreds of millions of rows).
So we're going to do partitioning on tables "element" and "element_object" and have 8192 tables "element_hash_n{0..8191}a" and 24576 tables "element_object_hash_n{0..8191}_m{0..2}".
Answering a user's question would be a 2-step search (a rough SQL sketch follows the list):
Find the id of the element in the "element_hash_n" tables
Run the main SQL SELECT on table "object", joining with the matching "element_object_hash_n_m" table to filter the result by the ID found in the first step
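In SQL terms the two steps might look roughly like this (the hash suffixes, column names and values are made up for illustration):

-- Step 1: find the element id in the appropriate hash table:
SELECT id FROM element_hash_n42 WHERE name = 'some element';

-- Step 2: main select on "object", joined through the matching link table:
SELECT o.*
FROM object o
JOIN element_object_hash_n42_m1 eo ON eo.object_id = o.id
WHERE eo.element_id = 12345;   -- id found in step 1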
I wonder about the first step:
What would be better:
store all 32k+ tables in MySQL
create one SQLite database and store the 8192 first-step tables there
create 8192 different SQLite files (databases)
create 8192 files in the file system and build my own binary solution to find the ID
I'm sorry for my English; it's not my native language.
I think you are making way too many partitions. If you have more than 32,000 partitions you have a tremendous management overhead. Given the name element_hash_*, it seems as if you want to hash your elements and partition them that way. But a hash will (most likely) give you an even distribution of the data over all partitions, and I can't see how this should improve performance. If your data is accessed across all those partitions, you don't gain anything by having partitions the size of your memory: every query will need to load data from yet another partition.
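For comparison, if hash partitioning is used at all, MySQL's built-in partitioning keeps it inside one table definition instead of thousands of hand-made tables. A rough sketch (the names and partition count are arbitrary, not a recommendation):

CREATE TABLE element (
    id BIGINT NOT NULL,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
)
PARTITION BY HASH (id)
PARTITIONS 64;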
We used partitions on a transaction system where more than 90% of the queries used the current day as a criterion. In such a case partitioning by day worked very well. But we only had 8 partitions, and we moved older data off to another database for long-term storage.
My advice: try to find out what data will be needed that fast and try to group it together, and run your own performance tests. If it is so important to deliver data that fast, there should be enough management support to build a decent test environment.
Maybe your test results will show that you simply can't deliver the data fast enough with a relational database system. If so, you should look at NoSQL (as in "Not only SQL") solutions.
What technology are you building your web system in? You should test this part as well; a super-fast database will not help you much if you lose the time in a poorly performing web application.