I have a database that is about 20 GB in size. I want to know if there are any optimization tips specific to working with a database that is static. By static I don't mean changes infrequently; I mean it won't change at all. Are there any extreme settings or other tweaks that you would normally avoid with a volatile database but that can benefit a truly static one, especially since there will be only SELECT statements and absolutely no INSERT statements? I'm using MyISAM tables.
-- roschler
Since everything is MyISAM, you need to focus on two major things:
KEY CACHE
The main mechanism used for caching is the key cache. It only caches index pages from .MYI files. To size your key cache, run the following query:
SELECT CONCAT(ROUND(KBS/POWER(1024,IF(PowerOfTwo<0,0,IF(PowerOfTwo>3,0,PowerOfTwo)))+0.4999),
SUBSTR(' KMG',IF(PowerOfTwo<0,0,IF(PowerOfTwo>3,0,PowerOfTwo))+1,1)) recommended_key_buffer_size
FROM (SELECT LEAST(POWER(2,32),KBS1) KBS FROM
(SELECT SUM(index_length) KBS1 FROM information_schema.tables
WHERE engine='MyISAM' AND table_schema NOT IN ('information_schema','mysql')) AA) A,
(SELECT 2 PowerOfTwo) B;
This will give the recommended setting for the MyISAM key cache (key_buffer_size) given your current data set; the query caps the recommendation at 4G (4096M). For a 32-bit OS, 4GB is the limit; for 64-bit, 8GB.
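For example, if the query recommends 2G, the setting would go in /etc/my.cnf like this (the 2G figure is only a placeholder; use the value the query actually returns):

```ini
[mysqld]
# Placeholder value -- substitute the recommendation from the sizing query
key_buffer_size = 2G
```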
FULLTEXT Indexes
You should change the stopword list. You may want to change the stop words because MySQL will not index any word on its built-in stopword list of several hundred common English words. Try creating your own stopword list and changing the minimum word length.
Step 1) Create a stop word list of your own. You could add 'a','an', and 'the'.
echo "a" > /var/lib/mysql/custom_stopwords.txt
echo "an" >> /var/lib/mysql/custom_stopwords.txt
echo "the" >> /var/lib/mysql/custom_stopwords.txt
Step 2) Add these options to /etc/my.cnf
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/custom_stopwords.txt
Step 3) service mysql restart
Step 4) Create new FULLTEXT indexes. Any FULLTEXT indexes that existed before the restart of mysql should be rebuilt.
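For example, to rebuild an existing FULLTEXT index so it picks up the new stopword file and minimum word length (table and index names here are illustrative):

```sql
-- Dropping and re-adding the index rebuilds it with the new settings
ALTER TABLE articles DROP INDEX body_ft;
ALTER TABLE articles ADD FULLTEXT INDEX body_ft (body);

-- For MyISAM tables, REPAIR TABLE ... QUICK also rebuilds the indexes
REPAIR TABLE articles QUICK;
```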
Give These Ideas a Try !!!
Related
I want to change the engine of a 2-million-row table from MyISAM to InnoDB. I am afraid of this being a long operation, so I created an InnoDB table with a similar structure and now I want to copy all data from the old one to this new one. What is the fastest way? INSERT ... SELECT? What about START TRANSACTION? Please help. I don't want to hang my server.
Do yourself a favor: copy the whole setup to your local machine and try it all out there. You'll have a much better idea of what you are getting into. Just be aware of potential differences in hardware between your production server and your local machine.
The fastest way is probably the most straightforward way:
INSERT INTO table2 SELECT * FROM table1;
I suspect that you cannot do it any faster than what is built into ALTER TABLE ... ENGINE=InnoDB. And it does have to copy over all the data and rebuild all the indexes.
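That is, the straightforward in-place conversion would be (table name from the question):

```sql
ALTER TABLE table1 ENGINE=InnoDB;
```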
Be sure to have innodb_buffer_pool_size raised to prepare for InnoDB. And lower key_buffer_size to allow room. Suggest 35% and 12% of RAM, respectively, for the transition. After all tables are converted, suggest 70% and a mere 20MB.
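As a rough sketch for a hypothetical 16GB server, those percentages translate into /etc/my.cnf settings like these (figures are illustrative; scale to your own RAM):

```ini
[mysqld]
# During the MyISAM -> InnoDB transition (~35% / ~12% of 16GB)
innodb_buffer_pool_size = 5600M
key_buffer_size = 2000M

# After all tables are converted (~70% / a token 20M):
# innodb_buffer_pool_size = 11G
# key_buffer_size = 20M
```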
One slight speedup is to do some SELECT that fetches the entire table and the entire PRIMARY KEY (if it can be cached). This will do some I/O before really starting. Example: SELECT avg(id) FROM tbl where id is the primary key, and SELECT avg(foo) FROM tbl where foo is not indexed but is numeric. These will force a full scan of the PK index and the data, thereby caching what the ALTER will have to read.
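The two warm-up queries described above, spelled out (tbl, id, and foo stand in for your table, primary key, and an unindexed numeric column):

```sql
-- Full scan of the PRIMARY KEY index, pulling it into the cache
SELECT AVG(id) FROM tbl;

-- Full scan of the data rows, pulling them into the cache
SELECT AVG(foo) FROM tbl;
```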
Other tips on converting: http://mysql.rjweb.org/doc.php/myisam2innodb .
I have a table with 14 million rows and I am trying to perform a full-text search on it. The query is performing really slowly; it takes around 9 seconds for a simple boolean AND query. The same query executes instantly on my private cluster. The table is around 3.1 GB in size. Can someone explain this behavior of the RDS instance?
SELECT count(*)
FROM table_name WHERE id=97
AND match(body) against ('+data +big' IN BOOLEAN MODE)
A high IO rate often indicates insufficient memory or buffers that are too small. A 3GB table, including indexes, should fit entirely in the memory of a (much less than) $500-per-month dedicated server.
MySQL has many different buffers, and as many parameters to fiddle with. The following buffers are the most important, compare their sizes in the two environments:
If InnoDB: innodb_buffer_pool_size
If MyISAM: key_buffer_size and read_buffer_size
Have you added a FULLTEXT index on the body column? If not, try this one; it will surely make a big difference:
ALTER TABLE `table_name` ADD FULLTEXT INDEX `bodytext` (`body`);
Hope it helps
Try this
SELECT count(1)
FROM table_name WHERE id=97
AND match(body) against ('+data +big' IN BOOLEAN MODE)
This should speed it up a little, since you don't have to count all columns, just the rows.
Can you post the explain itself?
Since the DB version, table, indexes, and execution plans are the same, you need to compare machine/cluster configurations. Main points of comparison: available CPU power, cores used in a single transaction, storage read speed, memory size, and read speed/frequency. Amazon provides a variety of configurations, so maybe your private cluster is much more powerful than the Amazon RDS instance config.
To add to the above, you can balance the load between CPU, IO, and memory to increase throughput.
Using match() against(), you perform your search across your entire 3GB fulltext index, and there is no way to force another index in this case.
To speed up your query you need to make your fulltext index lighter so you can:
1 - clean all the useless characters and stopwords from your fulltext index
2 - create multiple fulltext indexes and pick the appropriate one
3 - change fulltext searches to a LIKE clause and force another index such as 'id'.
Try placing id in the fulltext index and say:
match(BODY,ID) against ('+big +data +97' IN BOOLEAN MODE) and id=97
You might also look at sphinx which can be used with MySQL easily.
I'm interested in performing full-text searches in MySQL, but the words I am specifically interested in tend to be short words, or words that will likely appear on the stop list. For example, I might want to search for all entries that begin with "It is".
What is the best approach to this? Should I just manually remove all the stop words and set the min word length to 0? Or is there another way to do this?
Thank you very much.
In my.ini text file (MySQL) :
ft_stopword_file = "" (or point it at an empty file such as "empty_stopwords.txt")
ft_min_word_len = 2
// set your minimum length, but be aware that short minimums (3, 2) will increase the query time dramatically, especially if the fulltext-indexed column fields are large.
Save the file, restart the server.
The next step should be to repair the indexes with this query:
REPAIR TABLE tbl_name QUICK;
However, this will not work if your table is using the InnoDB storage engine. You will have to change it to MyISAM:
ALTER TABLE t1 ENGINE = MyISAM;
So, once again:
Edit my.ini file and save
Restart your server (this cannot be done dynamically)
Change the table engine (if needed) ALTER TABLE tbl_name ENGINE = MyISAM;
Perform the repair: REPAIR TABLE tbl_name QUICK;
Be aware that InnoDB and MyISAM have their speed differences: one reads faster, the other writes faster (read more about that on the internet).
I currently have a table with 10 million rows and need to increase the performance drastically.
I have thought about dividing this one table into 20 smaller tables of 500k rows each, but I could not get an increase in performance.
I have created 4 indexes on 4 columns and converted all the columns to INTs, and I have another column that is a bit.
My basic query is select primary from from mytable where column1 = int and bitcolumn = b'1'. This is still very slow; is there anything I can do to increase the performance?
Server Spec
32GB Memory, 2TB storage, and using the standard ini file, also my processor is AMD Phenom II X6 1090T
In addition to giving the MySQL server more memory to play with, remove unnecessary indexes and make sure you have an index on column1 (in your case). Add a LIMIT clause to the SQL if possible.
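A sketch of both suggestions against the question's table (the composite index covering both filter columns is my assumption, and 42 is a placeholder value):

```sql
-- One composite index can satisfy the whole WHERE clause
ALTER TABLE mytable ADD INDEX idx_col1_bit (column1, bitcolumn);

-- Cap the result set if you don't need every matching row
SELECT `primary` FROM mytable
WHERE column1 = 42 AND bitcolumn = b'1'
LIMIT 100;
```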
Download this (on your server):
MySQLTuner.pl
Install it, run it and see what it says - even better paste the output here.
There is not enough information to reliably diagnose the issue, but you state that you're using "the default" my.cnf / my.ini file on a system with 32G of memory.
From the MySQL Documentation the following pre-configured files are shipped:
Small: System has <64MB memory, and MySQL is not used often.
Medium: System has at least 64MB memory
Large: System has at least 512MB memory and the server will run mainly MySQL.
Huge: System has at least 1GB memory and the server will run mainly MySQL.
Heavy: System has at least 4GB memory and the server will run mainly MySQL.
Best case, you're using a configuration file that utilizes 1/8th of the memory on your system (that is, if you are using the "Heavy" file, which as far as I recall is not the default; I think the default is Medium or perhaps Large).
I suggest editing your my.cnf file appropriately.
There are several areas of MySQL for which the memory allocation can be tweaked to maximize performance for your particular case. You can post your my.cnf / my.ini file here for more specific advice. You can also use MySQL Tuner to get some automated advice.
I made a change that makes a big difference in the query time,
but it may not be useful for all cases, just in mine.
I have a huge table (about 2,350,000 records), and I can predict the range of ids I should be looking in,
so I added the condition WHERE id > '2300000'. As I said, this is my case, but it may help others.
So the full query will be:
SELECT primary from mytable where id > '2300000' AND column1 = int AND bitcolumn = b'1'
The query time was 2~3 seconds, and now it is less than 0.01.
First of all, your query
select primary from from mytable where column1 = int and bitcolumn = b'1'
has some errors, like two from clauses. Second, splitting the table and using unnecessary indexes never helps performance. Some tips to follow:
1) Use a composite index if you repeatedly query some columns together. But take care, because in a composite index the order in which the columns are placed matters a lot.
2) The primary key is more helpful if it's on an INT column.
3) Read some articles on indexes and optimization; there are many, search on Google.
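To illustrate tip 1, the column order in a composite index determines which queries it can serve (table and columns here are hypothetical):

```sql
-- Suppose you often filter on country and city together
ALTER TABLE customers ADD INDEX idx_country_city (country, city);

-- Served by the index (leftmost prefix):
--   WHERE country = 'US'
--   WHERE country = 'US' AND city = 'Austin'
-- NOT served by this index (city alone is not a leftmost prefix):
--   WHERE city = 'Austin'
```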
I need to simulate sql by creating a wrapper over mysql(customer requirement :P), and hence my application requires to create/drop tables(and possibly databases) during runtime.
The frequency of such create/drop operations will not be very high. I'm not a database expert, but I believe that such operations could lead to some side-effects over long term.
Is it advisable to do go ahead with these creation/deletion of databases and what are the possible complications I can run into?
This is only a problem under two scenarios
SCENARIO #1
For InnoDB tables, the InnoDB buffer pool should be optimally set to the sum of all data pages and index pages that make up InnoDB tables.
Even worse is when innodb_file_per_table is disabled (the default).
This produces a file called /var/lib/mysql/ibdata1 which can grow but never shrink, no matter how many times you drop and create databases.
If one forgets to make the necessary changes in /etc/my.cnf, this could also expose innodb buffer pool to under-utilization until the data fills back up.
Changes to make for InnoDB are straightforward.
Run this query
SELECT CONCAT(KeyBuf,'M') BufferPoolSetting
FROM (SELECT CEILING(SumInnoDB/POWER(1024,2)) KeyBuf
      FROM (SELECT SUM(data_length+index_length) SumInnoDB
            FROM information_schema.tables
            WHERE engine='InnoDB'
            AND table_schema NOT IN ('information_schema','mysql')) A) AA;
The output of this query should be used as the innodb_buffer_pool_size in /etc/my.cnf just before you drop all databases and create new ones.
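For instance, if the query returns 1536M, the entry in /etc/my.cnf would look like this (the figure is a placeholder; use your query's output):

```ini
[mysqld]
# Placeholder value -- substitute the output of the sizing query
innodb_buffer_pool_size = 1536M
```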
SCENARIO #2
For MyISAM tables, the key buffer should be optimally set to the sum of all .MYI files.
If one forgets to make the necessary changes in /etc/my.cnf, this could also expose MyISAM key cache (key buffer) to under-utilization until the data fills back up.
Changes to make for MyISAM are straightforward.
Run this query
SELECT CONCAT(KeyBuf,'M') KeyBufferSetting
FROM (SELECT CEILING(SumIndexes/POWER(1024,2)) KeyBuf
      FROM (SELECT SUM(index_length) SumIndexes
            FROM information_schema.tables
            WHERE engine='MyISAM'
            AND table_schema NOT IN ('information_schema','mysql')) A) AA;
The output of this query should be used as the key_buffer_size in /etc/my.cnf just before you drop all databases and create new ones.