I'm currently trying to optimize the indexes for a quite large table in a project, and I'm seeing very counterintuitive behavior: the EXPLAIN result and the actual query runtime contradict each other.
The server is running MariaDB version 10.1.26-MariaDB-0+deb9u1 with the following configuration options:
key_buffer_size = 5G
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam_sort_buffer_size = 512M
read_buffer_size = 2M
read_rnd_buffer_size = 1M
query_cache_type = 0
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 0M
join_buffer_size = 8M
sort_buffer_size = 8M
tmp_table_size = 64M
max_heap_table_size = 64M
table_open_cache = 4K
performance_schema = ON
innodb_buffer_pool_size = 30G
innodb_log_buffer_size = 4M
innodb_log_file_size = 1G
innodb_buffer_pool_instances = 10
The table contains about 6.8 million rows summing up to 12.1GB and looks like this:
CREATE TABLE `ad_master_test` (
`ID_AD_MASTER` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
/* Some more attribute fields (mainly integers) ... */
`FK_KAT` BIGINT(20) UNSIGNED NOT NULL,
/* Some more content fields (mainly varchars/integers) ... */
`STAMP_START` DATETIME NULL DEFAULT NULL,
`STAMP_END` DATETIME NULL DEFAULT NULL,
PRIMARY KEY (`ID_AD_MASTER`),
INDEX `TEST1` (`STAMP_START`, `FK_KAT`),
INDEX `TEST2` (`FK_KAT`, `STAMP_START`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=DYNAMIC
AUTO_INCREMENT=14149037;
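(For reference, the size of each of these indexes can be read from InnoDB's persistent statistics table; this is a sketch, and 'your_schema' is a placeholder for the actual schema name:)

```sql
SELECT index_name,
       ROUND(stat_value * @@innodb_page_size / 1024 / 1024) AS size_mb
FROM mysql.innodb_index_stats
WHERE database_name = 'your_schema'   -- placeholder
  AND table_name = 'ad_master_test'
  AND stat_name = 'size';             -- stat_value is a page count
```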
I already simplified the query as far as possible to better illustrate the problem. I'm using FORCE INDEX to illustrate my issue here.
This first index was tuned using the EXPLAIN statement and looks pretty promising (judging by the EXPLAIN output):
SELECT *
FROM `ad_master_test`
FORCE INDEX (TEST1)
WHERE FK_KAT IN
(94169,94163,94164,94165,94166,94167,94168,94170,94171,94172,
94173,94174,94175,94176,94177,94162,99606,94179,94180,94181,
94182,94183,94184,94185,94186,94187,94188,94189,94190,94191,
94192,94193,94194,94195,94196,94197,94198,94199,94200,94201,
94202,94203,94204,94205,94206,94207,94208,94209,94210,94211,
94212,94213,94214,94215,94216,94217,94218,94219,94220,94221,
94222,94223,94224,94225,94226,94227,94228,94229,94230,94231,
94232,94233,94234,94235,94236,94237,94238,94239,94240,94241,
94178,94161)
ORDER BY STAMP_START DESC
LIMIT 24
Results in this explain:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ad_master_test index (NULL) TEST1 14 (NULL) 24 Using where
And this profile:
Status Duration
starting 0.000180
checking permissions 0.000015
Opening tables 0.000041
After opening tables 0.000013
System lock 0.000011
Table lock 0.000013
init 0.000115
optimizing 0.000044
statistics 0.000050
preparing 0.000039
executing 0.000009
Sorting result 0.000016
Sending data 4.827512
end 0.000023
query end 0.000008
closing tables 0.000004
Unlocking tables 0.000014
freeing items 0.000011
updating status 0.000132
cleaning up 0.000021
The second index just has the fields reversed (the way I understood it from https://dev.mysql.com/doc/refman/8.0/en/order-by-optimization.html ), and it looks pretty horrible (judging by the EXPLAIN output):
SELECT *
FROM `ad_master_test`
FORCE INDEX (TEST2)
WHERE FK_KAT IN
(94169,94163,94164,94165,94166,94167,94168,94170,94171,94172,
94173,94174,94175,94176,94177,94162,99606,94179,94180,94181,
94182,94183,94184,94185,94186,94187,94188,94189,94190,94191,
94192,94193,94194,94195,94196,94197,94198,94199,94200,94201,
94202,94203,94204,94205,94206,94207,94208,94209,94210,94211,
94212,94213,94214,94215,94216,94217,94218,94219,94220,94221,
94222,94223,94224,94225,94226,94227,94228,94229,94230,94231,
94232,94233,94234,94235,94236,94237,94238,94239,94240,94241,
94178,94161)
ORDER BY STAMP_START DESC
LIMIT 24
Results in this explain:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ad_master_test range TEST2 TEST2 8 (NULL) 497766 Using index condition; Using filesort
And this profile:
Status Duration
starting 0.000087
checking permissions 0.000007
Opening tables 0.000021
After opening tables 0.000007
System lock 0.000006
Table lock 0.000005
init 0.000058
optimizing 0.000023
statistics 0.000654
preparing 0.000480
executing 0.000008
Sorting result 0.433607
Sending data 0.001681
end 0.000010
query end 0.000007
closing tables 0.000003
Unlocking tables 0.000011
freeing items 0.000010
updating status 0.000158
cleaning up 0.000021
Edit: When not using force index the explain changes as following:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ad_master_test index TEST2 TEST1 14 (NULL) 345 Using where
The profile and runtime stay (as expected) the same as when using FORCE INDEX on the TEST1 index.
/Edit
I honestly can't wrap my head around this. Why do the EXPLAIN output and the actual query performance differ that extremely? What is the server doing during those 5 seconds of "Sending data"?
It looks like there are some TEXT or BLOB or even large VARCHAR columns?? 12.1GB/6.8M = 1.8KB. If you don't need them, don't fetch them; this may speed up any such query. How much RAM do you have?
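To spell out that arithmetic (binary GB assumed):

```sql
SELECT 12.1 * 1024 * 1024 * 1024 / 6800000 AS avg_row_bytes;
-- ≈ 1911 bytes, i.e. roughly 1.8-1.9KB per row on average
```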
The two indexes do seem to take a different time (4.8s vs 0.4s).
(STAMP_START, FK_KAT)
This avoids the "filesort" by scanning the index BTree in the desired order. It has to check every entry for a matching FK_KAT. I think it will stop after 24 (see LIMIT) matching rows, but that could be the first 24 (fast), the last 24 (very slow), or something in between.
(FK_KAT, STAMP_START)
This should go directly to all 82 ids, scan each (assuming FK_KAT is not unique), collecting perhaps hundreds of rows. Then it does a "filesort". (Note: this will be a disk sort if any TEXT columns are being fetched.) Then it delivers the first 24. (Oops; I don't think MariaDB 10.1 has that feature.)
Even though this takes more steps, by avoiding the full index scan it turns out to be faster.
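If you want to verify which case you are in, a couple of probe queries can estimate the work each plan does. These are sketches: the shortened IN-list and the '...' timestamp are placeholders for your actual values.

```sql
-- TEST2 plan: how many rows must be collected and filesorted?
SELECT COUNT(*)
FROM ad_master_test
WHERE FK_KAT IN (94161, 94162 /* , ... rest of the 82 ids ... */);

-- TEST1 plan: find the 24th-newest matching STAMP_START ...
SELECT STAMP_START
FROM ad_master_test
WHERE FK_KAT IN (94161, 94162 /* , ... rest of the 82 ids ... */)
ORDER BY STAMP_START DESC
LIMIT 23, 1;

-- ... then count how many index entries (matching or not) the
-- ordered scan has to walk past before it finds that 24th match:
SELECT COUNT(*) FROM ad_master_test WHERE STAMP_START >= '...';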
Other Notes
key_buffer_size = 5G - Don't use MyISAM. But if you do, change this to 10% of RAM. If you don't, change it to 30M and give 70% of RAM to innodb_buffer_pool_size.
If you want to discuss further, please provide EXPLAIN FORMAT=JSON SELECT ... for each query. This will have the "cost" analysis, which should explain why it picked the worse index.
Another experiment
Instead of SELECT *, run the timings and EXPLAINs with just SELECT ID_AD_MASTER. If that proves to be "fast", then reformulate the query thus:
SELECT b.* -- (or selected columns from `b`)
FROM ( SELECT ID_AD_MASTER FROM ... ) AS a
JOIN ad_master_test AS b USING(ID_AD_MASTER)
ORDER BY STAMP_START DESC ; -- (yes, repeat the ORDER BY)
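Filled out with this question's actual predicate (the inner WHERE/ORDER/LIMIT and the join column are assumptions based on the query above; the IN-list is abbreviated), the reformulation would look roughly like:

```sql
SELECT b.*
FROM ( SELECT ID_AD_MASTER
       FROM ad_master_test
       WHERE FK_KAT IN (94161, 94162 /* , ... rest of the 82 ids ... */)
       ORDER BY STAMP_START DESC
       LIMIT 24
     ) AS a
JOIN ad_master_test AS b USING (ID_AD_MASTER)
ORDER BY b.STAMP_START DESC;
```

The point is that the derived table can be satisfied entirely from the (FK_KAT, STAMP_START) index, so only 24 wide rows are ever fetched from the base table.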
Suggestions to consider for your my.cnf [mysqld] section
(RPS is RatePerSecond)
thread_handling=pool-of-threads # from one-thread-per-connection see refman
max_connections=100 # from 151 because max_used_connections < 60
read_rnd_buffer_size=256K # from 1M to reduce RAM used, < handler_read_rnd_next RPS
aria_pagecache_division_limit=50 # from 100 for WARM cache for < aria_pagecache_reads RPS
key_cache_division_limit=50 # from 100 for WARM cache for < key_reads
key_buffer_size=2G # from 5G Mysqltuner reports 1G used (this could be WRONG-test it)
innodb_io_capacity=30000 # from 200 since you have SSD
innodb_buffer_pool_instances=8 # from 16 for your volume of data
innodb_lru_scan_depth=128 # from 1024 to conserve CPU every SECOND see refman
innodb_buffer_pool_size=36G # from 30G; effective size ~32G once the change buffer below is deducted
innodb_change_buffer_max_size=10 # from 25; % of buffer pool set aside for Del,Ins,Upd activity
Remember: make ONLY one change per day, then monitor. If the results are positive, proceed to the next suggestion; otherwise let me know about any seriously adverse result and which change seemed to cause the problem, please.
Analysis of VARIABLES and GLOBAL STATUS:
Observations:
Version: 10.1.26-MariaDB-0+deb9u1
64 GB of RAM
Uptime = 7d 22:50:19
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
A "Load Average" of 1 (or more) usually indicates an inefficient query. This is further confirmed by the large values for Created_tmp_disk_tables and Handler_read_rnd_next for a "mere" 91 queries per second. Let's see the slowest query. See Recommendations for further investigation.
thread_cache_size = 20
Having gotten rid of MyISAM, there is no need for such a large key_buffer_size; decrease it from 5G to 50M.
I'm not a fan of ROW_FORMAT=COMPRESSED; this has two relevant impacts for your Question: Increased CPU for compress/uncompress, and need for extra buffer_pool space. On the other hand, the GLOBAL STATUS does not indicate that 30GB is "too small". Is there a need for shrinking the disk space usage?
You have turned off some optimizations? Was this in response to some other problem?
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) / _ram ) = (5120M - 1.2 * 25 * 1024) / 65536M = 7.8% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size.
( Key_blocks_used * 1024 / key_buffer_size ) = 25 * 1024 / 5120M = 0.00% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size to avoid unnecessary memory usage.
( innodb_buffer_pool_size / _ram ) = 30720M / 65536M = 46.9% -- % of RAM used for InnoDB buffer_pool
( table_open_cache ) = 4,096 -- Number of table descriptors to cache
-- Several hundred is usually good.
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 6,714,002,432 / (687019 / 3600) / 2 / 1024M = 0.0164 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 687,019 / 60 * 1024M / 6714002432 = 1,831 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size. (Cannot change in AWS.)
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( Innodb_rows_deleted / Innodb_rows_inserted ) = 1,319,619 / 2015717 = 0.655 -- Churn
-- "Don't queue it, just do it." (If MySQL is being used as a queue.)
( innodb_thread_concurrency ) = 0 -- 0 = Let InnoDB decide the best for concurrency_tickets.
-- Set to 0 or 64. This may cut back on CPU.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( innodb_buffer_pool_populate ) = OFF = 0 -- NUMA control
( query_prealloc_size / _ram ) = 24,576 / 65536M = 0.00% -- For parsing. Pct of RAM
( query_alloc_block_size / _ram ) = 16,384 / 65536M = 0.00% -- For parsing. Pct of RAM
( net_buffer_length / max_allowed_packet ) = 16,384 / 16M = 0.10%
( bulk_insert_buffer_size / _ram ) = 8M / 65536M = 0.01% -- Buffer for multi-row INSERTs and LOAD DATA
-- Too big could threaten RAM size. Too small could hinder such operations.
( Created_tmp_tables ) = 19,436,364 / 687019 = 28 /sec -- Frequency of creating "temp" tables as part of complex SELECTs.
( Created_tmp_disk_tables ) = 17,887,832 / 687019 = 26 /sec -- Frequency of creating disk "temp" tables as part of complex SELECTs
-- increase tmp_table_size and max_heap_table_size.
Check the rules for temp tables on when MEMORY is used instead of MyISAM. Perhaps minor schema or query changes can avoid MyISAM.
Better indexes and reformulation of queries are more likely to help.
( Created_tmp_disk_tables / Questions ) = 17,887,832 / 62591791 = 28.6% -- Pct of queries that needed on-disk tmp table.
-- Better indexes / No blobs / etc.
( Created_tmp_disk_tables / Created_tmp_tables ) = 17,887,832 / 19436364 = 92.0% -- Percent of temp tables that spilled to disk
-- Maybe increase tmp_table_size and max_heap_table_size; improve indexes; avoid blobs, etc.
( tmp_table_size ) = 64M -- Limit on size of MEMORY temp tables used to support a SELECT
-- Decrease tmp_table_size to avoid running out of RAM. Perhaps no more than 64M.
( Handler_read_rnd_next ) = 703,386,895,308 / 687019 = 1023824 /sec -- High if lots of table scans
-- possibly inadequate keys
( Handler_read_rnd_next / Com_select ) = 703,386,895,308 / 58493862 = 12,024 -- Avg rows scanned per SELECT. (approx)
-- Consider raising read_buffer_size
( Select_full_join ) = 15,981,913 / 687019 = 23 /sec -- joins without index
-- Add suitable index(es) to tables used in JOINs.
( Select_full_join / Com_select ) = 15,981,913 / 58493862 = 27.3% -- % of selects that are indexless join
-- Add suitable index(es) to tables used in JOINs.
( Select_scan ) = 1,510,902 / 687019 = 2.2 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( sort_buffer_size ) = 8M -- One per thread, malloced at full size until 5.6.4, so keep low; after that bigger is ok.
-- This may be eating into available RAM; recommend no more than 2M.
( binlog_format ) = binlog_format = STATEMENT -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( Threads_created / Connections ) = 3,081 / 303642 = 1.0% -- Rapidity of process creation
-- Increase thread_cache_size (non-Windows)
Abnormally large:
Connection_errors_peer_address = 2
Handler_icp_attempts = 71206 /sec
Handler_icp_match = 71206 /sec
Handler_read_next / Handler_read_key = 283
Handler_read_prev = 12522 /sec
Handler_read_rnd_deleted = 16 /sec
Innodb_rows_read = 1255832 /sec
Key_blocks_unused = 4.24e+6
Performance_schema_table_instances_lost = 32
Select_range / Com_select = 33.1%
Sort_scan = 27 /sec
Tc_log_page_size = 4,096
innodb_lru_scan_depth / innodb_io_capacity = 5.12
innodb_max_dirty_pages_pct_lwm = 0.10%
max_relay_log_size = 100MB
myisam_sort_buffer_size = 512MB
Abnormal strings:
Compression = ON
innodb_cleaner_lsn_age_factor = HIGH_CHECKPOINT
innodb_empty_free_list_algorithm = BACKOFF
innodb_fast_shutdown = 1
innodb_foreground_preflush = EXPONENTIAL_BACKOFF
innodb_log_checksum_algorithm = INNODB
myisam_stats_method = NULLS_UNEQUAL
opt_s__engine_condition_pushdown = off
opt_s__mrr = off
opt_s__mrr_cost_based = off
I know there is plenty of information on this topic, but I think I've tried everything I can and have run out of ideas for increasing my MySQL database performance.
Situation: I use an eTesting platform (taotesting, if anybody knows it). It uses a MySQL database with 8 tables. At the moment one of those tables has ~500k rows; the others are either empty or have ~10-15 rows. At first MySQL performance with this platform was terrible, so I converted the MyISAM tables to InnoDB and made some my.cnf changes. That seemed to improve performance, but not as much as I wanted.
The server has 1 CPU with 4 cores and 6 GB RAM. It's not dedicated to MySQL; it also hosts PHP/Apache/nginx.
What's more, there are about 80% more selects from the database than inserts/updates/deletes.
Any ideas on how to further improve the MySQL configuration (if that's possible) are welcome.
Here's my.cnf:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
# Here is entries for some specific programs
# The following values assume you have at least 32M ram
# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer_size = 32M
max_allowed_packet = 16M
thread_stack = 256K
thread_cache_size = 50
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 200
#table_cache = 1M
sort_buffer_size = 1M
read_buffer_size = 1M
join_buffer_size = 1M
#thread_concurrency = 8
max_heap_table_size = 64M
tmp_table_size = 64M
#
# * Query Cache Configuration
#
query_cache_limit = 2M
query_cache_size = 2M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
#log-queries-not-using-indexes
#log = /var/log/mysql/testing_req_nec.log
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_log_file_size = 1G
innodb_log_buffer_size = 16M
innodb_buffer_pool_instances = 8
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table = 1
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_additional_mem_pool_size = 16M
innodb_max_dirty_pages_pct = 90
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completition
[isamchk]
key_buffer_size = 16M
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
#open_files_limit = 8192
!includedir /etc/mysql/conf.d/
EDIT:
Ok, I thought actual numbers weren't needed since I said the performance I got wasn't enough for me. I'm using JMeter to do performance testing. One example would be: a student can log in, select a test, submit his answers, and end the test. I've tried doing this with ~30 students, and here's what I got:
Login: ~2 seconds (best case scenario), 8s (worst case)
Select test: ~5s - 7s (best case), ~20s - 25s worst case.
Submit answers to question: ~5 - 7s (best case), ~30s worst case.
Test ending, same as other submits.
See, now I would like to get the best-case scenario for more students (more threads). The problem is that this eTesting platform doesn't use a traditional relational DB model (it uses RDF triples, if you've heard of them) and stores them in MySQL tables. There are a lot of queries; one submit is ~80 queries. So if a test has ~15 items, 1 student sends ~2k queries.
And I've tried EXPLAIN. It can't really help, since I can't change the eTesting platform's source code without breaking everything else, nor can I change the table structure (besides changing its engine, and maybe some indexes?).
EDIT:
submit queries example:
http://codeviewer.org/view/code:3d10
tables structure:
http://codeviewer.org/view/code:3d11
mysql> show variables LIKE '%buffer_pool%';
+------------------------------+------------+
| Variable_name | Value |
+------------------------------+------------+
| innodb_buffer_pool_instances | 8 |
| innodb_buffer_pool_size | 4294967296 |
+------------------------------+------------+
2 rows in set (0.00 sec)
Explain on one of more complex queries:
EXPLAIN SELECT count(*) AS count
FROM statements
WHERE (predicate = 'http://www.tao.lu/Ontologies/TAODelivery.rdf#DeliveryExecutionDelivery'
       AND object = 'https://etestas.nec.lt/tao_ssl_dev.rdf#i139266227459751316')
  AND subject IN (SELECT subject FROM statements
                  WHERE predicate = 'http://www.tao.lu/Ontologies/TAODelivery.rdf#DeliveryExecutionSubject'
                    AND object = 'https://etestas.nec.lt/tao_ssl_dev.rdf#i1392637693114892')
  AND subject IN (SELECT subject FROM statements
                  WHERE predicate = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type'
                    AND object IN ('http://www.tao.lu/Ontologies/TAODelivery.rdf#DeliveryExecution'));
+----+--------------------+------------+----------------+---------------+------+---------+-------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+------------+----------------+---------------+------+---------+-------------+------+-------------+
| 1 | PRIMARY | statements | ref | k_po | k_po | 990 | const,const | 1 | Using where |
| 3 | DEPENDENT SUBQUERY | statements | index_subquery | k_sp,k_po | k_sp | 990 | func,const | 1 | Using where |
| 2 | DEPENDENT SUBQUERY | statements | index_subquery | k_sp,k_po | k_sp | 990 | func,const | 1 | Using where |
+----+--------------------+------------+----------------+---------------+------+---------+-------------+------+-------------+
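Since the platform's SQL can't be changed this may be moot, but for comparison: on old MySQL versions, dependent IN (SELECT ...) subqueries like these are often much faster when rewritten as self-joins. A sketch of an equivalent form (equivalent only if each (predicate, object) pair matches a given subject at most once; otherwise add DISTINCT on subject):

```sql
SELECT COUNT(*) AS count
FROM statements AS de
JOIN statements AS subj
  ON  subj.subject   = de.subject
  AND subj.predicate = 'http://www.tao.lu/Ontologies/TAODelivery.rdf#DeliveryExecutionSubject'
  AND subj.object    = 'https://etestas.nec.lt/tao_ssl_dev.rdf#i1392637693114892'
JOIN statements AS typ
  ON  typ.subject   = de.subject
  AND typ.predicate = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type'
  AND typ.object    = 'http://www.tao.lu/Ontologies/TAODelivery.rdf#DeliveryExecution'
WHERE de.predicate = 'http://www.tao.lu/Ontologies/TAODelivery.rdf#DeliveryExecutionDelivery'
  AND de.object    = 'https://etestas.nec.lt/tao_ssl_dev.rdf#i139266227459751316';
```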
You can check MySQL Tuner and Tuning Primer; both scripts can help you optimize your server configuration.
InnoDB performance varies with hardware, OS, and various other factors; check the detailed InnoDB performance documentation for more.
This is really maddening. I have followed every instruction for settings that I have found on the interwebs, and I can't get past this.
Basically, I have a table with about 8 million rows. I need to create a backup of this table like so:
create table mytable_backup like mytable
And that takes several hours on my production server, which is an Amazon EC2 instance running through EngineYard. It takes only minutes on my MacBook Pro. This is another one of those annoying things that MySQL does in the background, and you can't guess how it is making the decision to do something so stupidly slow.
BTW, there is over 330G available in the tmp directory, so that is not the issue.
But here is what "free -m" yields:
deploy#domU-12-31-39-02-35-31 ~ $ free -m
total used free shared buffers cached
Mem: 1740 1728 11 0 14 1354
-/+ buffers/cache: 359 1380
Swap: 895 2 893
I don't know how to read that, but the "11" under the free column doesn't look very good.
I am running:
Server version: 5.0.51-log Gentoo Linux mysql-community-5.0.51
Here is my configuration file:
# /etc/mysql/my.cnf: The global mysql configuration file.
# $Header: /var/cvsroot/gentoo-x86/dev-db/mysql/files/my.cnf-4.1,v 1.3 2006/05/05 19:51:40 chtekk Exp $
# The following options will be passed to all MySQL clients
[client]
port = 3306
[mysql]
character-sets-dir=/usr/share/mysql/charsets
default-character-set=utf8
[mysqladmin]
character-sets-dir=/usr/share/mysql/charsets
default-character-set=utf8
[mysqlcheck]
character-sets-dir=/usr/share/mysql/charsets
default-character-set=utf8
[mysqldump]
character-sets-dir=/usr/share/mysql/charsets
default-character-set=utf8
[mysqlimport]
character-sets-dir=/usr/share/mysql/charsets
default-character-set=utf8
[mysqlshow]
character-sets-dir=/usr/share/mysql/charsets
default-character-set=utf8
[myisamchk]
character-sets-dir=/usr/share/mysql/charsets
[myisampack]
character-sets-dir=/usr/share/mysql/charsets
[mysqld_safe]
err-log = /db/mysql/log/mysql.err
# To allow table cache to be raised
open-files-limit = 4096
[mysqld]
max_connections = 300
innodb_file_per_table = 1
log-slow-queries = /db/mysql/log/slow_query.log
long_query_time = 2000000
ft_min_word_len = 3
max_heap_table_size = 64M
tmp_table_size = 64M
server-id = 1
log-bin = /db/mysql/master-bin
log-bin-index = /db/mysql/master-bin.index
# END master/slave configuration
character-set-server = utf8
default-character-set = utf8
user = mysql
port = 3306
socket = /var/run/mysqld/mysqld.sock
pid-file = /var/run/mysqld/mysqld.pid
log-error = /db/mysql/log/mysqld.err
basedir = /usr
datadir = /db/mysql
key_buffer = 32M
max_allowed_packet = 32M
table_cache = 1024
thread_cache = 512
sort_buffer_size = 100M
net_buffer_length = 64K
read_buffer_size = 1M
read_rnd_buffer_size = 1M
myisam_sort_buffer_size = 100M
myisam_max_sort_file_size = 2G
myisam_repair_threads = 1
language = /usr/share/mysql/english
# security:
# using "localhost" in connects uses sockets by default
# skip-networking
# bind-address = 127.0.0.1
# point the following paths to different dedicated disks
tmpdir = /mnt/mysql/tmp
# log-update = /path-to-dedicated-directory/hostname
# you need the debug USE flag enabled to use the following directives,
# if needed, uncomment them, start the server and issue
# #tail -f /tmp/mysqld.sql /tmp/mysqld.trace
# this will show you *exactly* what's happening in your server ;)
#log = /tmp/mysqld.sql
#gdb
#debug = d:t:i:o,/tmp/mysqld.trace
#one-thread
# the rest of the innodb config follows:
# don't eat too much memory, we're trying to be safe on 64Mb boxes
# you might want to bump this up a bit on boxes with more RAM
innodb_buffer_pool_size = 1275M
# this is the default, increase it if you have lots of tables
innodb_additional_mem_pool_size = 16M
#
# i'd like to use /var/lib/mysql/innodb, but that is seen as a database :-(
# and upstream wants things to be under /var/lib/mysql/, so that's the route
# we have to take for the moment
#innodb_data_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
#innodb_log_group_home_dir = /var/lib/mysql/
# you may wish to change this size to be more suitable for your system
# the max is there to avoid run-away growth on your machine
innodb_data_file_path = ibdata1:20M:autoextend
# we keep this at around 25% of of innodb_buffer_pool_size
# sensible values range from 1MB to (1/innodb_log_files_in_group*innodb_buffer_pool_size)
innodb_log_file_size = 96M
# this is the default, increase it if you have very large transactions going on
innodb_log_buffer_size = 8M
# this is the default and won't hurt you
# you shouldn't need to tweak it
innodb_log_files_in_group = 2
# see the innodb config docs, the other options are not always safe
# This is not good for performance when used with bin_sync. Disabling.
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_lock_wait_timeout = 50
query_cache_size = 16M
query_cache_type = 1
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
# uncomment the next directive if you are not familiar with SQL
#safe-updates
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
ft_min_word_len = 3
[mysqlhotcopy]
interactive-timeout
For what it's worth, 11 megs free is perfectly fine. That's 11 megs of memory not being used for anything, and "wasted" as far as the hardware is concerned. The real number is the "1380" on the -/+ buffers/cache line, which counts cache memory as available along with the 11 megs unused. Caches can be blown away as necessary.
Your system has nearly 1400 MB of RAM available.
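To spell it out (values in MB from the free -m output above):

```sql
SELECT 11 + 14 + 1354 AS effectively_free_mb;
-- 1379, which (to rounding) is the 1380 shown on the "-/+ buffers/cache" line
```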
You could try
CREATE TABLE backup_table ENGINE=MyISAM AS SELECT * FROM production_table;
This should create the table with only the data and none of the keys. You can then add the keys on by doing
alter table backup_table add index(column_name)
I've done this successfully several times, and it is usually about twice as fast as inserting with the keys in place.
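Putting the two steps together (table and column names here are placeholders):

```sql
-- 1. Copy the data without any secondary indexes
CREATE TABLE backup_table ENGINE=MyISAM
  AS SELECT * FROM production_table;

-- 2. Build each index afterwards in one bulk pass
ALTER TABLE backup_table ADD INDEX (column_name);
```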
You have to look at your settings for myisam_max_sort_file_size and myisam_sort_buffer_size.
If the sum of all the keys is less than myisam_max_sort_file_size, the sort will, in the worst case, land in a MyISAM table, which is a good thing.
Otherwise, it will revert to the keycache. That means loading the necessary .MYI index pages into the keycache and traversing those index pages in memory. Nobody wants that!
Your current setting for this variable says 2G.
Look at the keys being built and add them up. If the sum of all the key sizes exceeds 2G, it's keycache all the way! You will have to raise this value. You could raise it for the session to 4G with
SET myisam_max_sort_file_size = 1024 * 1024 * 1024 * 4;
SET myisam_sort_buffer_size = 1024 * 1024 * 1024 * 4;
or you could plant the number directly like this:
SET myisam_max_sort_file_size = 4294967296;
SET myisam_sort_buffer_size = 4294967296;
before doing the ENABLE KEYS;
If you are just interested in backing up the data, why index it to begin with? Try using the ARCHIVE storage engine. It has no indexing whatsoever. Do the following:
CREATE TABLE mytable_backup LIKE mytable;
ALTER TABLE mytable_backup ENGINE=ARCHIVE;
INSERT INTO mytable_backup SELECT * FROM mytable;
I also noticed you are using Amazon EC2. I have never used EC2 before. Run this command:
SHOW ENGINES;
+------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine | Support | Comment | Transactions | XA | Savepoints |
+------------+---------+----------------------------------------------------------------+--------------+------+------------+
| InnoDB | DEFAULT | Supports transactions, row-level locking, and foreign keys | YES | YES | YES |
| MRG_MYISAM | YES | Collection of identical MyISAM tables | NO | NO | NO |
| BLACKHOLE | YES | /dev/null storage engine (anything you write to it disappears) | NO | NO | NO |
| CSV | YES | CSV storage engine | NO | NO | NO |
| MEMORY | YES | Hash based, stored in memory, useful for temporary tables | NO | NO | NO |
| FEDERATED | YES | Federated MySQL storage engine | NO | NO | NO |
| ARCHIVE | YES | Archive storage engine | NO | NO | NO |
| MyISAM | YES | Default engine as of MySQL 3.23 with great performance | NO | NO | NO |
+------------+---------+----------------------------------------------------------------+--------------+------+------------+
If the ARCHIVE storage engine appears in the list and Support is Yes, you have the option to backup to an ARCHIVE table. If not, you must get the myisam_max_sort_file_size and myisam_sort_buffer_size adjusted.