Using more memory in MySQL Server

Summary:
I haven't yet been able to get MySQL to use more than one core for a SELECT statement, and it doesn't use more than 10-15 GB of RAM.
The machine:
I have a dedicated database server running MariaDB (MySQL 5.6). The machine is powerful, with 48 cores and 192 GB of RAM.
The data:
I have about 250 million rows in one large table (plus several other tables ranging from 5 to 100 million rows). I have been doing a lot of reading from the tables, sometimes inserting into a new table to denormalize the data a bit. I am not setting this system up as a transactional system; rather, it will be used more like a data warehouse, with few connections.
The problem:
When I look at my server's stats, CPU is at around 70% on one core while a select query is running, and memory is at about 5-8%. There is no I/O waiting, so I am convinced that I have a problem with MySQL memory allocation. After searching for how to increase MySQL's memory usage, I have noticed that the config file may be the way to do it.
The solution I have tried based on my online searching:
I have changed the tables to the MyISAM engine and added many indexes. This has helped performance, but querying these tables is still incredibly slow. The write speed using LOAD DATA INFILE is very fast; however, running a mildly complex select query takes hours or even days.
I have also tried adjusting the following configurations:
key-buffer-size = 64G
read_buffer_size = 1M
join_buffer_size = 4294967295
read_rnd_buffer_size = 2M
key_cache_age_threshold = 400
key_cache_block_size = 800
myisam_data_pointer_size = 7
preload_buffer_size = 2M
sort_buffer_size = 2M
myisam_sort_buffer_size = 10G
bulk_insert_buffer_size = 2M
myisam_repair_threads = 8
myisam_max_sort_file_size = 30G
max-allowed-packet = 256M
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 150
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
These config changes have slightly increased the amount of memory being used, but I would like to be able to use 80% of memory or so... or as much as possible to get maximum performance. Any ideas on how to increase the memory allocation to MySQL?

Since you already have no I/O waiting, you are using a good amount of memory. Your buffers also seem quite big, so I doubt that you can get significant CPU savings by using additional memory. You are limited by the CPU power of a single core.
Two strategies could help:
Use EXPLAIN or query analyzers to find out whether you can optimize your queries to save CPU time. Adding missing indexes could help a lot. Sometimes you might also need composite indexes; see the sketch below.
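A minimal sketch of that workflow (the table and column names here are hypothetical, not from the question):

EXPLAIN SELECT t.cusip, t.trade_date, SUM(t.amount)
FROM trades t
WHERE t.trade_date >= '2014-01-01'
GROUP BY t.cusip, t.trade_date;

-- If EXPLAIN reports a full table scan (type = ALL), a composite index
-- covering the filter and grouping columns may help:
ALTER TABLE trades ADD INDEX idx_date_cusip (trade_date, cusip);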
Evaluate an alternative storage engine (or even database) that is better suited to analytical queries and can use all of your cores. MariaDB supports InfiniDB, but other storage engines and databases are available too, such as Infobright and MonetDB.

Run SHOW GLOBAL VARIABLES LIKE '%thread%'; and you may get some clues about enabling thread-concurrency options.
Testing read_rnd_buffer_size at 16384 instead of 2M with your data may produce a significant reduction in the time required to complete your query.
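A sketch of both checks (the 16384 figure comes from the suggestion above; whether it helps depends entirely on your data):

SHOW GLOBAL VARIABLES LIKE '%thread%';

-- try a much smaller random-read buffer for this session only
SET SESSION read_rnd_buffer_size = 16384;
-- ...re-run the slow SELECT and compare timings...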

Related

What is the best query_cache_size / RAM ratio?

Hi, I want to change my /etc/my.cnf file (MySQL's config file).
What should the values below be for better performance on my queries?
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 16M
Is there an optimal ratio of cache size to RAM? I have 8GB of RAM on my Ubuntu machine.
If there were a well-defined optimum, there would be no need for a configuration option; MySQL would use that optimum by default. The query cache is also only useful in very specific circumstances (you read a lot more from the table than you write to it), because the cache is emptied on a per-table basis every time you write anything to the table. It also only works if you issue the exact same queries, with the same parameters, over and over.
The optimal value for you needs to be measured and depends a lot on your use case. If you have a lot of InnoDB tables, you will get much more use out of the InnoDB buffer pool: innodb_buffer_pool_size. Set this variable as high as possible (on a MySQL-only, InnoDB-only machine this might mean as much as 80% of your available RAM).
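For example, on the 8GB machine from the question, a dedicated InnoDB-only setup might look like this (a sketch, assuming nothing else significant runs on the box):

[mysqld]
# roughly 75-80% of 8GB RAM; the rest is left for the OS and per-connection buffers
innodb_buffer_pool_size = 6G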
We host hundreds of small websites on our 8GB RAM server, which runs both the database and the web server on the same machine, with a mixture of MyISAM and InnoDB tables. Here is our configuration for comparison:
innodb_file_per_table=1
open_files_limit=50000
max_allowed_packet=268435456
innodb_buffer_pool_size=1G
innodb_log_file_size=256M
innodb_flush_method=O_DIRECT
innodb_io_capacity=1000
innodb_old_blocks_time=1000
innodb_open_files=5000
key_buffer_size=16M
read_buffer_size=256K
read_rnd_buffer_size=256K
query_cache_size=256M
query_cache_limit=5M
join_buffer_size=4M
sort_buffer_size=4M
max_heap_table_size=64M
tmp_table_size=64M
table_open_cache=4500
table_definition_cache=4000
thread_cache_size=50
If your machine has a lot of writes, turn the Query cache completely off (type=0, size=0). That is because every write to a table causes all entries in the QC for that table to be removed.
As a corollary to that, having too big a QC can be "slow". I recommend no more than 50M for query_cache_size.
I hope that explains why I did not address your title question about percent of RAM.
This depends on the size of the query results. If you have a query_cache_limit of 5M and a query_cache_size of 256M, a worst-case scenario will let you end up with about 51 query results of 5M each in your cache.
Depending on the type of queries you run most, you are better off setting a smaller query_cache_limit (64K), giving you a total of 4096 smaller query results in the cache. On top of this, the results in the cache are smaller and will not lock the query cache longer than needed.
The query cache of MySQL uses a single thread that locks the cache on every request. If too many requests hit the query cache, overall performance will drop.
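To see how the cache is actually behaving before tuning it, the server exposes counters (a quick check, not part of the answers above):

-- hit, insert, and prune counters for the query cache
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- current settings
SHOW GLOBAL VARIABLES LIKE 'query_cache%';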

How long should it take to build a single column index in MySQL for a 100K row table?

I have been trying to create an index on a varchar(20) column with 100K rows, and it's been running for 30 minutes so far. On an 8-core i7 processor with 16GB of memory and an SSD drive, I just don't understand what's taking so long.
Any ideas? I'm a bit new to MySQL, but this is just a basic vanilla index on a relatively small table. The one other index on the same table took only a few seconds to generate.
How does one debug this sort of thing in MySQL?
What's the total size in memory of the table? If it's big enough that you're getting a lot of hard drive calls, it could still take a while. Also, is your site live while you're doing this?
As far as debugging goes, you could check your SQL process on the system to see how many resources it's using.
Finally, have you looked at creating a multi-column index rather than two single-column indexes?
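A couple of quick checks along those lines (generic commands; 'your_table' is a placeholder):

-- what is the ALTER TABLE / CREATE INDEX thread doing right now?
SHOW FULL PROCESSLIST;

-- rough data and index sizes for the table
SHOW TABLE STATUS LIKE 'your_table';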
It turns out that the default Ubuntu server LAMP install of MySQL allocates incredibly little memory, requiring an enormous amount of disk swapping, even on machines that have obvious excess memory.
Please note that I did not experiment to see which setting(s) solved the issue, but the commands I was running on 100K rows, which previously ran for hours, now only take seconds.
[mysqld]
key_buffer_size = 256M
max_allowed_packet = 16M
# Added
innodb_buffer_pool_size = 2G
innodb_log_buffer_size = 64M
innodb_log_file_size = 64M
skip_name_resolve
query_cache_limit = 16M
query_cache_size = 64M

MySQL (via WAMP) long UPDATE query time

I have spent several weeks crunching on this to no avail, so I'm hopeful you may be able to help. Generally, I have an update query that takes forever to run (I've given up after 12 hours). To knock the obvious out of the way, I have an index on the columns. Also, I am totally self-taught on MySQL, so I may need additional clarification on data / processes etc. This DB is for my personal use, offline. Said another way... this is not my day job. While I enjoy MySQL, I am not a super-user.
First, my system specs...
Laptop Samsung QX410
Windows 7, 64 bit
Intel i5, M 480 @ 2.67 GHz
RAM: 8 GB (7.79 available)
WAMP 2.5 with MySQL v5.6.17
Tables are InnoDB
MySQL setup:
# The MySQL server
[wampmysqld]
port = 3306
socket = /tmp/mysql.sock
key_buffer_size = 512M
max_allowed_packet = 32M
sort_buffer_size = 512K
net_buffer_length = 32K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
basedir=c:/wamp/bin/mysql/mysql5.6.17
log-error=c:/wamp/logs/mysql.log
datadir=c:/wamp/bin/mysql/mysql5.6.17/data
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = C:\mysql\data/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = C:\mysql\data/
innodb_log_arch_dir = C:\mysql\data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 4000M
innodb_additional_mem_pool_size = 32M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 512M
innodb_log_buffer_size = 256M
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 50
Issue in more detail:
I have two tables Trade_List and Cusip_Table and am trying to populate one column in Trade_List (I need to pre-populate this value, since many queries will be run against it).
Trade_List has 11 columns, two of which are relevant.
CUSIP (varchar 45) - generally this is a 9-character alphanumeric code.
TICKER (varchar 45) - generally this is 10 letters or less. I want to populate this.
This table has roughly 10 million rows.
I have removed all indices from this table except one on CUSIP.
Cusip_Table has 5 columns, two of which are relevant.
CUSIP (varchar 45) - generally this is a 9-character alphanumeric code.
TICKER (varchar 45) - generally this is 10 letters or less. This is already populated.
This table has roughly 70,000 rows.
I have an index 'CTDuplicateCheck' on (Cusip, Ticker).
When I run...
Select A.cusip, B.ticker
From Trade_list A, Cusip_table B
Where A.cusip = B.cusip;
... MySQL indicates that the query takes about 13 seconds, but in reality it seems to take about a minute, so I ran profiling on it...
starting 0.000093
checking permissions 0.000006
checking permissions 0.000005
Opening tables 0.000041
init 0.000037
System lock 0.000013
optimizing 0.000015
statistics 0.000041
preparing 0.000030
executing 0.000002
Sending data 10.982211
end 0.000014
query end 0.000010
closing tables 0.000018
freeing items 0.000070
logging slow query 0.000004
cleaning up 0.000019
I don't know what any of this means, but 10 seconds for sending data seems reasonable (the result set is ~9M rows).
Just for kicks, and to make sure the index is working, I ran an 'explain' (shown below). I think this says that my index is working correctly.
id  select_type  table  type   possible_keys     key               key_len  ref                      rows   Extra
1   SIMPLE       B      index  CTDuplicateCheck  CTDuplicateCheck  96       NULL                     53010  Using where; Using index
1   SIMPLE       A      ref    TL1Cusip          TL1Cusip          48       13f_master_data.B.CUSIP  154    Using index
NOTE: 13f_Master_Data is the name of the database.
At any rate, when I run the same query but change it to an update, everything falls apart and it will not complete. I would expect things to run a bit slower, but 12+ hours? I just can't imagine that this is normal for an update query that touches 9M rows. The original INSERT took less than an hour, and the select takes less than a minute. Code for the update is below...
Update Trade_list A, Cusip_table B
Set A.ticker = B.ticker
Where A.cusip = B.cusip;
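One quick way to check whether the update is using the index: MySQL 5.6 added EXPLAIN support for UPDATE statements, so the plan can be inspected directly (a sketch using the tables above; if the multi-table form is not explainable on your exact version, EXPLAIN on the equivalent SELECT join shows the same access path):

EXPLAIN UPDATE Trade_list A, Cusip_table B
SET A.ticker = B.ticker
WHERE A.cusip = B.cusip;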
Stuff I have tried:
Removed almost all indexes from Trade_List. I left one on CUSIP.
Upgraded RAM from 4 GB to 8 GB. This did nothing. Upon further investigation, my CPU and RAM are not limiting factors: CPU generally sits around 30%, and RAM never gets above 5GB utilized. This leads me to believe that the issue is I/O. Is it possible MySQL is doing a full table scan (see the EXPLAIN sketch above)? Why would it not utilize the index?
Changed all types of memory allocations per http://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/ and https://rtcamp.com/tutorials/mysql/mysqltuner/ and http://www.percona.com/blog/2006/09/29/what-to-tune-in-mysql-server-after-installation/. As far as I can tell, this did nothing. Again, I don't think the limiting factor is memory available. Also, I have no doubt that my memory allocations (shown above) are completely screwed up. I had no idea what I was doing and changed things all over the place. That said, I don't think the memory changes made anything any worse.
Upgraded MySQL and WAMP versions (did nothing).
Read and learned a lot about indexes. Candidly, I know very little about MySQL and am totally self-taught. I have learned a lot about memory on this foray, but need someone to step in and tell me where I have totally derailed. This database is for my own offline analysis. I am the only user.
I am happy to provide additional information that may help to analyze the issue. I'm at a total loss on this. The only thing I can come up with is that the system is doing full scans row by row... for every look-up in the update. Though, this could be completely false.
Your thoughts are much appreciated.
PM

myisam_sort_buffer_size vs sort_buffer_size

I am running MySQL on a server with 6GB RAM. I need to know: what is the difference between myisam_sort_buffer_size and sort_buffer_size?
I have the following sizes set:
myisam_sort_buffer_size = 8M
sort_buffer_size = 256M
Please also mention whether these values are fine or need adjustment.
Thanks
sort_buffer_size:
MySQL documentation:
Each session that needs to do a sort allocates a buffer of this size. sort_buffer_size is not specific to any storage engine and applies in a general manner for optimization.
Your sort_buffer_size value seems extremely high. The default is 2M, and I'd recommend going no larger than that, since there is a performance penalty for going higher; some people recommend smaller values such as 256kB. One thing to remember is that this is per client session, not a global value, so large values add up fast: 100 sessions sorting at once at 256M each would try to allocate 25GB, over four times this machine's RAM.
myisam_sort_buffer_size:
MySQL documentation:
The size of the buffer that is allocated when sorting MyISAM indexes during a REPAIR TABLE or when creating indexes with CREATE INDEX or ALTER TABLE.
Your myisam_sort_buffer_size seems fine. It won't be relevant unless you are rebuilding indexes using ALTER TABLE or REPAIR TABLE, etc.
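For instance, operations like these are when myisam_sort_buffer_size matters (hypothetical table and column names):

-- both rebuild MyISAM indexes, which involves the MyISAM index sort
ALTER TABLE my_table ADD INDEX idx_name (name);
REPAIR TABLE my_table;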
These settings are per thread, so check your max_connections value.
E.g., with 15GB of RAM and:
max_connections = 1500
sort_buffer_size = 32M
I'm getting this mysqltuner warning:
[--] Total buffers: 928.0M global + 32.7M per thread (1500 max threads)
[!!] Maximum possible memory usage: 48.8G (312% of installed RAM)
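That figure is just the global buffers plus the per-thread buffers multiplied by max_connections: 928M + 1500 × 32.7M ≈ 48.8G.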
So I did lower it to the default value.
sort_buffer_size = 256K is best. Try this, restart the MySQL server, and monitor for a few hours; you can easily notice the benefit.

mysql tuning variables - current & defaults

I have a pretty vanilla MySQL 5.1 setup, and I am trying to tune it. I found this handy script.
It made the following suggestions:
query_cache_limit (> 1M, or use smaller result sets)
query_cache_size (> 16M)
join_buffer_size (> 128.0K, or always use indexes with joins)
table_cache (> 64)
innodb_buffer_pool_size (>= 14G)
In reading up on what these mean and what they are currently set to, I found that I can run "mysqladmin variables"
My current values are:
query_cache_limit | 1048576
query_cache_size | 16777216
join_buffer_size | 131072
innodb_buffer_pool_size | 8388608
How do I read these? Are they bytes? So is that 1M, 16M, 128K, and 8M?
My box has only 4G of RAM and on a normal day only has a few hundred megs of memory free. Should I follow these suggestions and do:
#innodb_buffer_pool_size = 15G
#table_cache = 128
#join_buffer_size = 32M
#query_cache_size = 64M
#query_cache_limit = 2M
I'm confused by the 15G. Is this a disk-space thing, not a memory thing? If so, the recommendations are not very good, right?
Should I get more memory for my box?
More Info:
- My DB size is 34 gigs, I use all InnoDB, and I have 71 tables; 4 of them are huge, the rest are small. I've been thinking of moving the big ones to SOLR and doing all queries from there, but wanted to see what I can do with basic tuning.
thanks
Joel
You should not set your InnoDB buffer pool higher than your available memory. The script probably recommended that based on the number of records in your table and their physical size. InnoDB performance is very much memory-based: if it cannot fit the indexes in memory, performance is going to drop quickly and noticeably. So setting innodb_buffer_pool_size high is almost always good advice.
InnoDB is not the best table type for everything when it comes to MySQL. Very large tables that generally get a lot of inserts but few reads and updates (i.e. logging) are better off as MyISAM tables. Your very active tables (inserts, updates, deletes, selects) are better off as InnoDB. There may be a flame war on this advice, and it is generic advice.
But that said, no script is going to be able to tell you what your settings should be. It can only make a best guess. The best settings are based on your data access patterns. You really have to read up on what all the variables are. mysqlperformanceblog.com is an excellent place for learning about MySQL, in addition to the manual.
When in MySQL, use SHOW VARIABLES and SHOW STATUS to see what's going on. You can also run SHOW INNODB STATUS, but you may not understand that output if you don't know what the variables are.
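A quick sketch of those commands (the LIKE patterns here are just examples):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';  -- buffer pool hit behaviour
SHOW INNODB STATUS\G  -- older alias for SHOW ENGINE INNODB STATUS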