Weird behavior optimizing query indices (MariaDB + InnoDB) - mysql

I'm currently trying to optimize the indices for a quite large table in a project, and I'm experiencing very counterintuitive behavior: the EXPLAIN result and the actual query runtime contradict each other.
The server is running MariaDB version 10.1.26-MariaDB-0+deb9u1 with the following configuration options:
key_buffer_size = 5G
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam_sort_buffer_size = 512M
read_buffer_size = 2M
read_rnd_buffer_size = 1M
query_cache_type = 0
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 0M
join_buffer_size = 8M
sort_buffer_size = 8M
tmp_table_size = 64M
max_heap_table_size = 64M
table_open_cache = 4K
performance_schema = ON
innodb_buffer_pool_size = 30G
innodb_log_buffer_size = 4MB
innodb_log_file_size = 1G
innodb_buffer_pool_instances = 10
The table contains about 6.8 million rows summing up to 12.1 GB and looks like this:
CREATE TABLE `ad_master_test` (
`ID_AD_MASTER` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
/* Some more attribute fields (mainly integers) ... */
`FK_KAT` BIGINT(20) UNSIGNED NOT NULL,
/* Some more content fields (mainly varchars/integers) ... */
`STAMP_START` DATETIME NULL DEFAULT NULL,
`STAMP_END` DATETIME NULL DEFAULT NULL,
PRIMARY KEY (`ID_AD_MASTER`),
INDEX `TEST1` (`STAMP_START`, `FK_KAT`),
INDEX `TEST2` (`FK_KAT`, `STAMP_START`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=DYNAMIC
AUTO_INCREMENT=14149037;
I already simplified the query as far as possible to better illustrate the problem. I'm using FORCE INDEX to pin each index in turn.
This first index was tuned based on the EXPLAIN statement, and it looks pretty promising (judging by the EXPLAIN output):
SELECT *
FROM `ad_master_test`
FORCE INDEX (TEST1)
WHERE FK_KAT IN
(94169,94163,94164,94165,94166,94167,94168,94170,94171,94172,
94173,94174,94175,94176,94177,94162,99606,94179,94180,94181,
94182,94183,94184,94185,94186,94187,94188,94189,94190,94191,
94192,94193,94194,94195,94196,94197,94198,94199,94200,94201,
94202,94203,94204,94205,94206,94207,94208,94209,94210,94211,
94212,94213,94214,94215,94216,94217,94218,94219,94220,94221,
94222,94223,94224,94225,94226,94227,94228,94229,94230,94231,
94232,94233,94234,94235,94236,94237,94238,94239,94240,94241,
94178,94161)
ORDER BY STAMP_START DESC
LIMIT 24
Results in this explain:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ad_master_test index (NULL) TEST1 14 (NULL) 24 Using where
And this profile:
Status Duration
starting 0.000180
checking permissions 0.000015
Opening tables 0.000041
After opening tables 0.000013
System lock 0.000011
Table lock 0.000013
init 0.000115
optimizing 0.000044
statistics 0.000050
preparing 0.000039
executing 0.000009
Sorting result 0.000016
Sending data 4.827512
end 0.000023
query end 0.000008
closing tables 0.000004
Unlocking tables 0.000014
freeing items 0.000011
updating status 0.000132
cleaning up 0.000021
The second index just has the fields reversed (the way I understood it from https://dev.mysql.com/doc/refman/8.0/en/order-by-optimization.html ), and its EXPLAIN output looks pretty horrible:
SELECT *
FROM `ad_master_test`
FORCE INDEX (TEST2)
WHERE FK_KAT IN (94169,94163,94164,94165,94166,94167,94168,94170,94171,94172,94173,94174,94175,94176,94177,94162,99606,94179,94180,94181,94182,94183,94184,94185,94186,94187,94188,94189,94190,94191,94192,94193,94194,94195,94196,94197,94198,94199,94200,94201,94202,94203,94204,94205,94206,94207,94208,94209,94210,94211,94212,94213,94214,94215,94216,94217,94218,94219,94220,94221,94222,94223,94224,94225,94226,94227,94228,94229,94230,94231,94232,94233,94234,94235,94236,94237,94238,94239,94240,94241,94178,94161)
ORDER BY STAMP_START DESC
LIMIT 24
Results in this explain:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ad_master_test range TEST2 TEST2 8 (NULL) 497.766 Using index condition; Using filesort
And this profile:
Status Duration
starting 0.000087
checking permissions 0.000007
Opening tables 0.000021
After opening tables 0.000007
System lock 0.000006
Table lock 0.000005
init 0.000058
optimizing 0.000023
statistics 0.000654
preparing 0.000480
executing 0.000008
Sorting result 0.433607
Sending data 0.001681
end 0.000010
query end 0.000007
closing tables 0.000003
Unlocking tables 0.000011
freeing items 0.000010
updating status 0.000158
cleaning up 0.000021
Edit: When not using FORCE INDEX, the EXPLAIN changes as follows:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ad_master_test index TEST2 TEST1 14 (NULL) 345 Using where
The profile and runtime stay (as expected) the same as when using FORCE INDEX on the TEST1 index.
/Edit
I honestly can't wrap my head around this. Why do the EXPLAIN output and the actual query performance differ so extremely? What does the server do during the 5 seconds of "Sending data"?

It looks like there are some TEXT or BLOB or even large VARCHAR columns? 12.1GB / 6.8M rows = 1.8KB per row. If you don't need those columns, don't fetch them; this may speed up any such query. How much RAM do you have?
The two indexes do take very different times (4.8s vs 0.4s).
(STAMP_START, FK_KAT)
This avoids the "filesort" by scanning the index BTree in the desired order. It has to check every entry for a matching fk_kat. I think it will stop after 24 (see LIMIT) matching rows, but that could be the first 24 (fast), the last 24 (very slow), or something in between.
(FK_KAT, STAMP_START)
This should go directly to all 82 ids, scan each (assuming FK_KAT is not unique), collecting perhaps hundreds of rows. Then do a "filesort". (Note: this will be a disk sort if any TEXT columns are being fetched.) Then deliver the first 24. (Oops; I don't think MariaDB 10.1 has that feature.)
Even though this takes more steps, by avoiding the full index scan it turns out to be faster.
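One way to confirm which plan actually touches fewer rows is to compare the session Handler counters for each FORCE INDEX variant (a sketch reusing the question's table; the IN list is abbreviated here and must be repeated in full):

```sql
-- Reset the session counters, run the query, then inspect
-- how many index/row reads the chosen plan performed.
FLUSH STATUS;
SELECT * FROM ad_master_test FORCE INDEX (TEST2)
WHERE FK_KAT IN (94161, 94162 /* , ...full list... */)
ORDER BY STAMP_START DESC LIMIT 24;
SHOW SESSION STATUS LIKE 'Handler_read%';
```

Running the same pair of statements with FORCE INDEX (TEST1) and comparing the totals shows directly how much of the index each plan had to walk.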
Other Notes
key_buffer_size = 5G - Don't use MyISAM. But if you do, change this to 10% of RAM. If you don't, change it to 30M and give 70% of RAM to innodb_buffer_pool_size.
If you want to discuss further, please provide EXPLAIN FORMAT=JSON SELECT ... for each query. This will have the "cost" analysis, which should explain why it picked the worse index.
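For reference, that would be run like this (a sketch; abbreviate nothing in the real IN list):

```sql
EXPLAIN FORMAT=JSON
SELECT * FROM ad_master_test FORCE INDEX (TEST1)
WHERE FK_KAT IN (94161, 94162 /* , ...full list... */)
ORDER BY STAMP_START DESC LIMIT 24;
```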
Another experiment
Instead of SELECT *, run the timings and EXPLAINs with just SELECT ID_AD_MASTER. If that proves to be "fast", then reformulate the query thus:
SELECT b.* -- (or selected columns from `b`)
FROM ( SELECT ID_AD_MASTER FROM ... ) AS a
JOIN ad_master_test AS b USING(ID_AD_MASTER)
ORDER BY STAMP_START DESC ; -- (yes, repeat the ORDER BY)
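Spelled out with the question's WHERE clause, the deferred-join reformulation would look something like this (a sketch; it assumes the LIMIT can be pushed into the derived table so only 24 ids need to be looked up in the wide table):

```sql
SELECT b.*                    -- (or selected columns from `b`)
FROM ( SELECT ID_AD_MASTER
       FROM ad_master_test
       WHERE FK_KAT IN (94161, 94162 /* , ...full list... */)
       ORDER BY STAMP_START DESC
       LIMIT 24
     ) AS a
JOIN ad_master_test AS b USING (ID_AD_MASTER)
ORDER BY b.STAMP_START DESC;  -- yes, repeat the ORDER BY
```

The inner query touches only the narrow index, so the expensive wide-row fetches happen for at most 24 rows.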

Suggestions to consider for your my.cnf [mysqld] section
(RPS is RatePerSecond)
thread_handling=pool-of-threads # from one-thread-per-connection see refman
max_connections=100 # from 151 because max_used_connections < 60
read_rnd_buffer_size=256K # from 1M to reduce RAM used, < handler_read_rnd_next RPS
aria_pagecache_division_limit=50 # from 100 for WARM cache for < aria_pagecache_reads RPS
key_cache_division_limit=50 # from 100 for WARM cache for < key_reads
key_buffer_size=2G # from 5G Mysqltuner reports 1G used (this could be WRONG-test it)
innodb_io_capacity=30000 # from 200 since you have SSD
innodb_buffer_pool_instances=8 # from 16 for your volume of data
innodb_lru_scan_depth=128 # from 1024 to conserve CPU every SECOND see refman
innodb_buffer_pool_size=36G # from 30G for a larger effective cache size
innodb_change_buffer_max_size=10 # from 25 (percent of the buffer pool set aside for Del,Ins,Upd activities)
Remember: make ONLY one change per day, monitor, and if the results are positive proceed to the next suggestion. Otherwise let me know any seriously adverse result and which change seemed to cause the problem, please.

Analysis of VARIABLES and GLOBAL STATUS:
Observations:
Version: 10.1.26-MariaDB-0+deb9u1
64 GB of RAM
Uptime = 7d 22:50:19
You are not running on Windows.
Running 64-bit version
You appear to be running entirely (or mostly) InnoDB.
The More Important Issues:
A "Load Average" of 1 (or more) usually indicates an inefficient query. This is further confirmed by the large values for Created_tmp_disk_tables and Handler_read_rnd_next for a "mere" 91 queries per second. Let's see the slowest query. See Recommendations for further investigation.
thread_cache_size = 20
Having gotten rid of MyISAM, there is no need for such a large key_buffer_size; decrease it from 5G to 50M.
I'm not a fan of ROW_FORMAT=COMPRESSED; this has two relevant impacts for your Question: Increased CPU for compress/uncompress, and need for extra buffer_pool space. On the other hand, the GLOBAL STATUS does not indicate that 30GB is "too small". Is there a need for shrinking the disk space usage?
You have turned off some optimizations? Was this in response to some other problem?
Details and other observations:
( (key_buffer_size - 1.2 * Key_blocks_used * 1024) / _ram ) = (5120M - 1.2 * 25 * 1024) / 65536M = 7.8% -- Percent of RAM wasted in key_buffer.
-- Decrease key_buffer_size.
( Key_blocks_used * 1024 / key_buffer_size ) = 25 * 1024 / 5120M = 0.00% -- Percent of key_buffer used. High-water-mark.
-- Lower key_buffer_size to avoid unnecessary memory usage.
( innodb_buffer_pool_size / _ram ) = 30720M / 65536M = 46.9% -- % of RAM used for InnoDB buffer_pool
( table_open_cache ) = 4,096 -- Number of table descriptors to cache
-- Several hundred is usually good.
( Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group / innodb_log_file_size ) = 6,714,002,432 / (687019 / 3600) / 2 / 1024M = 0.0164 -- Ratio
-- (see minutes)
( Uptime / 60 * innodb_log_file_size / Innodb_os_log_written ) = 687,019 / 60 * 1024M / 6714002432 = 1,831 -- Minutes between InnoDB log rotations Beginning with 5.6.8, this can be changed dynamically; be sure to also change my.cnf.
-- (The recommendation of 60 minutes between rotations is somewhat arbitrary.) Adjust innodb_log_file_size. (Cannot change in AWS.)
( default_tmp_storage_engine ) = default_tmp_storage_engine =
( Innodb_rows_deleted / Innodb_rows_inserted ) = 1,319,619 / 2015717 = 0.655 -- Churn
-- "Don't queue it, just do it." (If MySQL is being used as a queue.)
( innodb_thread_concurrency ) = 0 -- 0 = Let InnoDB decide the best for concurrency_tickets.
-- Set to 0 or 64. This may cut back on CPU.
( innodb_print_all_deadlocks ) = innodb_print_all_deadlocks = OFF -- Whether to log all Deadlocks.
-- If you are plagued with Deadlocks, turn this on. Caution: If you have lots of deadlocks, this may write a lot to disk.
( innodb_buffer_pool_populate ) = OFF = 0 -- NUMA control
( query_prealloc_size / _ram ) = 24,576 / 65536M = 0.00% -- For parsing. Pct of RAM
( query_alloc_block_size / _ram ) = 16,384 / 65536M = 0.00% -- For parsing. Pct of RAM
( net_buffer_length / max_allowed_packet ) = 16,384 / 16M = 0.10%
( bulk_insert_buffer_size / _ram ) = 8M / 65536M = 0.01% -- Buffer for multi-row INSERTs and LOAD DATA
-- Too big could threaten RAM size. Too small could hinder such operations.
( Created_tmp_tables ) = 19,436,364 / 687019 = 28 /sec -- Frequency of creating "temp" tables as part of complex SELECTs.
( Created_tmp_disk_tables ) = 17,887,832 / 687019 = 26 /sec -- Frequency of creating disk "temp" tables as part of complex SELECTs
-- increase tmp_table_size and max_heap_table_size.
Check the rules for temp tables on when MEMORY is used instead of MyISAM. Perhaps minor schema or query changes can avoid MyISAM.
Better indexes and reformulation of queries are more likely to help.
( Created_tmp_disk_tables / Questions ) = 17,887,832 / 62591791 = 28.6% -- Pct of queries that needed on-disk tmp table.
-- Better indexes / No blobs / etc.
( Created_tmp_disk_tables / Created_tmp_tables ) = 17,887,832 / 19436364 = 92.0% -- Percent of temp tables that spilled to disk
-- Maybe increase tmp_table_size and max_heap_table_size; improve indexes; avoid blobs, etc.
( tmp_table_size ) = 64M -- Limit on size of MEMORY temp tables used to support a SELECT
-- Decrease tmp_table_size to avoid running out of RAM. Perhaps no more than 64M.
( Handler_read_rnd_next ) = 703,386,895,308 / 687019 = 1023824 /sec -- High if lots of table scans
-- possibly inadequate keys
( Handler_read_rnd_next / Com_select ) = 703,386,895,308 / 58493862 = 12,024 -- Avg rows scanned per SELECT. (approx)
-- Consider raising read_buffer_size
( Select_full_join ) = 15,981,913 / 687019 = 23 /sec -- joins without index
-- Add suitable index(es) to tables used in JOINs.
( Select_full_join / Com_select ) = 15,981,913 / 58493862 = 27.3% -- % of selects that are indexless join
-- Add suitable index(es) to tables used in JOINs.
( Select_scan ) = 1,510,902 / 687019 = 2.2 /sec -- full table scans
-- Add indexes / optimize queries (unless they are tiny tables)
( sort_buffer_size ) = 8M -- One per thread, malloced at full size until 5.6.4, so keep low; after that bigger is ok.
-- This may be eating into available RAM; recommend no more than 2M.
( binlog_format ) = binlog_format = STATEMENT -- STATEMENT/ROW/MIXED. ROW is preferred; it may become the default.
( slow_query_log ) = slow_query_log = OFF -- Whether to log slow queries. (5.1.12)
( long_query_time ) = 10 -- Cutoff (Seconds) for defining a "slow" query.
-- Suggest 2
( Threads_created / Connections ) = 3,081 / 303642 = 1.0% -- Rapidity of process creation
-- Increase thread_cache_size (non-Windows)
Abnormally large:
Connection_errors_peer_address = 2
Handler_icp_attempts = 71206 /sec
Handler_icp_match = 71206 /sec
Handler_read_next / Handler_read_key = 283
Handler_read_prev = 12522 /sec
Handler_read_rnd_deleted = 16 /sec
Innodb_rows_read = 1255832 /sec
Key_blocks_unused = 4.24e+6
Performance_schema_table_instances_lost = 32
Select_range / Com_select = 33.1%
Sort_scan = 27 /sec
Tc_log_page_size = 4,096
innodb_lru_scan_depth / innodb_io_capacity = 5.12
innodb_max_dirty_pages_pct_lwm = 0.10%
max_relay_log_size = 100MB
myisam_sort_buffer_size = 512MB
Abnormal strings:
Compression = ON
innodb_cleaner_lsn_age_factor = HIGH_CHECKPOINT
innodb_empty_free_list_algorithm = BACKOFF
innodb_fast_shutdown = 1
innodb_foreground_preflush = EXPONENTIAL_BACKOFF
innodb_log_checksum_algorithm = INNODB
myisam_stats_method = NULLS_UNEQUAL
opt_s__engine_condition_pushdown = off
opt_s__mrr = off
opt_s__mrr_cost_based = off

Related

high CPU with high user connection, mysql database [migrated]

I use a VPS with 56 CPU cores and 64 GB of memory with a MySQL database, but the CPU usage is always high for my apps - user 10.000.
This is in my mysqld.cnf. What's wrong with my settings?
innodb_buffer_pool_size=50G
innodb_change_buffering=all
innodb_log_file_size = 3125M
innodb_log_buffer_size = 3125M
innodb_file_per_table = ON
innodb_log_files_in_group =4
innodb_flush_method = O_DSYNC
innodb_lock_wait_timeout = 50
innodb_buffer_pool_instances = 50
innodb_flush_log_at_trx_commit = 0
innodb_thread_concurrency=112
innodb_stats_on_metadata = OFF
innodb_thread_sleep_delay=1000
innodb_purge_threads=8
innodb_read_io_threads = 32
innodb_write_io_threads = 32
innodb_io_capacity = 5000
innodb_io_capacity_max=15000
key_buffer_size = 4G
max_allowed_packet = 1G
thread_stack = 5M
sort_buffer_size = 50M
read_buffer_size = 50M
read_rnd_buffer_size = 20M
myisam_sort_buffer_size = 20M
join_buffer_size = 1G
myisam-recover-options = BACKUP
max_connections = 1000
max_user_connections = 500
thread_cache_size = 1000
query_cache_limit = 0
query_cache_size = 0
long_query_time = 10
expire_logs_days = 5
max_binlog_size = 200M
innodb_log_buffer_size - I would keep that under 1% of RAM. (innodb_log_file_size looks OK.)
innodb_log_files_in_group - there is some evidence that more than "2" degrades performance.
innodb_flush_method -- this depends on MySQL version and disk filesystem type. (Your choice is rarely picked by other DBAs.)
Did you also set innodb_adaptive_max_sleep_delay? I see that MariaDB abandoned innodb_thread_sleep_delay in 10.5 -- either it was obviated by some improvement, or it was deemed not useful. I don't know MySQL's stand on the two.
innodb_io_capacity_max=15000 -- Do you have a super-duper SSD?
key_buffer_size = 4G: Unless you are using MyISAM (you should not be), set this to only 50M.
thread_stack -- Leave at the default value
long_query_time = 1 to make better use of the slowlog. After a day, analyze it with pt-query-digest. SlowLog
But the real way to deal with high CPU (or Load Average) is to find the 'worst' queries (according to the slowlog) and work on improving them. (Random tuning of settings can only get you into trouble.)
For further analysis of the settings, please provide SHOW GLOBAL STATUS; and SHOW VARIABLES; after running at least a day. ( http://mysql.rjweb.org/doc.php/mysql_analysis#tuning )
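The slowlog settings mentioned above can be changed at runtime (a sketch; add the same lines to the my.cnf [mysqld] section so they survive a restart):

```sql
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;  -- seconds; down from the default of 10
```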

MySQL 5.7 low performance with GROUP BY or DATETIME columns

I'm using MySQL 5.7 almost always and I have never had performance issues.
But the project I'm working on lately is really making me doubt MySQL.
In the last year this project generated a pretty huge DB, with tables sometimes reaching 15 million records.
I'm thinking about partitioning, but I'd need to get rid of the foreign keys first, so that is a whole other problem that I'll probably discuss here with you guys in the near future.
So technically my staff and I are trying to optimize all the tables, indexes and queries so that they perform.
What I've been taught since I was a student is "let the database do what the database is capable of doing", and that's always been my creed.
First of all, I noticed that MySQL does not perform well with DATETIME columns. The first optimization was to convert all the DATETIME columns into integer values (converting datetime into int timestamp values); with this little modification, and by indexing those columns, I significantly decreased the time needed to get the same result.
Moreover, I noticed that with some queries the database is significantly slower when using GROUP BY inside the query than when doing the grouping in the programming language that elaborates the results.
So first the DATETIME columns (maybe treated like strings, which doesn't allow the MySQL core to perform), and in second place GROUP BY, which I think is one of the most used commands.
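For anyone wanting to reproduce the DATETIME-to-integer conversion described above, it would look roughly like this (a sketch; `orders` and `created_at` are made-up names standing in for the real table and column):

```sql
-- Add an integer column, backfill it from the DATETIME, and index it.
ALTER TABLE orders ADD COLUMN created_ts INT UNSIGNED;
UPDATE orders SET created_ts = UNIX_TIMESTAMP(created_at);
ALTER TABLE orders ADD INDEX idx_created_ts (created_ts);
```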
I'm starting thinking that probably what's wrong here is MySQL tuning so here I share my configuration file (my.cnf):
symbolic-links=0
query_cache_size = 32M
thread_cache_size = 8
myisam_sort_buffer_size = 64M
read_rnd_buffer_size = 8M
read_buffer_size = 2M
sort_buffer_size = 2M
table_open_cache = 512
max_allowed_packet = 1M
key_buffer_size = 384M
sql-mode = "STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
max_allowed_packet = 32M
max_connections = 2000
open_files_limit = 10000
tmp_table_size = 64M
max_heap_table_size = 64M
tmpdir = /home/mysql/tmp
default_storage_engine = InnoDB
skip_name_resolve
query_cache_type=0
query_cache_size=0
log_bin
server_id = 1
max_binlog_size = 100M
expire_logs_days = 7
sync_binlog = 0
binlog_format = MIXED
innodb_buffer_pool_size = 26G
innodb_log_file_size = 3GB
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_buffer_pool_instances = 8
innodb_thread_concurrency = 8
innodb_stats_on_metadata = 0
#innodb_buffer_pool_dump_at_shutdown = 1
#innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_pct = 75
innodb_adaptive_hash_index = ON
innodb_adaptive_hash_index_parts = 16
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_file_per_table
innodb_flush_neighbors = 0
innodb_page_cleaners = 8
long_query_time = 4
slow_query_log = 1
So as you can see I'm talking about simple query operations such as GROUP BY and DATETIME columns; that's why I think the problem is probably related to MySQL tuning.
I'm also thinking about migrating to MySQL 8.0 but I need to try this database on a test machine before going on a production environment.
I don't know where I can hit my head to get a solution.

Improve sql code performance for a mariaDB query

I developed an application connected to mariaDB (Ver 15.1 Distrib 10.1.31-MariaDB, for Win32) in DelphiXE8.
I want to improve query performance.
Describe the simplified scenario:
de_User Table (innoDB) (rows 81762)
ID_U INT PRIMARY KEY
Name VARCHAR(30)
INDEX ID_U, Name
de_doc Table (innoDB) (rows 260452)
IDD INT PRIMARY KEY
DataFi Date
UserID INT
...
INDEX IDD, UserID, DataFi
----
CONSTRAINT UserID_LK
FOREIGN KEY de_Doc (UserID)
REFERENCES de_User (ID_U)
ON DELETE CASCADE
ON UPDATE CASCADE
my query
select User.*, Doc.LastDoc
FROM de_Users AS Us
LEFT JOIN (
SELECT UserID,MAX(DataFi) AS LastDoc
FROM de_doc
GROUP BY UserID
) as Doc on Doc.UserID = Us.ID_U
ORDER BY Us.Name ASC, Doc.LastDoc DESC;
--
EXPLAIN select ...
+------+-------------+----------------+-------+---------------+---------------+---------+----------------+--------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------------+-------+---------------+---------------+---------+----------------+--------+---------------------------------+
| 1 | PRIMARY | de_User | ALL | NULL | NULL | NULL | NULL | 81762 | Using temporary; Using filesort |
| 1 | PRIMARY | <derived2> | ref | key0 | key0 | 5 | Base.Us.ID_U | 10 | |
| 2 | DERIVED | de_Doc | index | NULL | UserID_LK| 4 | NULL | 260452 | |
+------+-------------+----------------+-------+---------------+---------------+---------+----------------+--------+---------------------------------+
my.ini
...
# The MySQL server
[mysqld]
...
key_buffer = 4096M
key_buffer_size=1024M
table_open_cache = 2048
query_cache_size = 128M
max_connections = 100
...
max_allowed_packet = 256M
sort_buffer_size = 4096M
net_buffer_length = 16M
read_buffer_size = 256M
myisam_sort_buffer_size = 256M
log_error = "mysql_error.log"
...
# Comment the following if you are using InnoDB tables
innodb_data_home_dir = "C:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "C:/xampp/mysql/data"
innodb_log_arch_dir = "C:/xampp/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 2048M
# DEPRECATED innodb_additional_mem_pool_size = 1024M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 512M
innodb_log_buffer_size = 128M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
...
thread_concurrency = 4
...
[isamchk]
key_buffer = 1024M
sort_buffer_size = 256M
read_buffer = 8M
write_buffer = 16M
[myisamchk]
key_buffer = 1024M
sort_buffer_size = 256M
read_buffer = 8M
write_buffer = 8M
TEST in phpMyAdmin:
83705 total; the query took 1.0000 sec.
If I remove "ORDER BY Doc.LastDoc DESC" it is very fast:
83705 total; the query took 0.0000 sec.
TEST in my application developed with Delphi XE8:
view table, all rows: 2.8 sec.
If I remove "ORDER BY Doc.LastDoc DESC" it is very fast:
view table, all rows: 1.8 sec.
How can I improve performance?
Suggestions for your my.ini [mysqld] SECTION
sort_buffer_size=2M # from 4096M (4G); this is allocated per connection, and the next two are per-connection as well
read_buffer_size=256K # from 256M to reduce volume of data retrieved by 99%
read_rnd_buffer_size=256K # from ? to a reasonable size
These three could be set dynamically (as root) with SET GLOBAL variable_name=value; replace K with *1024 and M with *1024*1024 for kilobytes and megabytes. Please post positive/negative results after a full BUSINESS DAY of uptime.
This is ambiguous: INDEX IDD, UserID, DataFi
Probably User.* was supposed to be Us.*? Be aware that "simplifying" a query may turn it into a different problem.
Probably LEFT JOIN is unnecessary; use JOIN.
You need this composite index on de_doc: INDEX(UserID, DataFi) (DataFi is the column behind the LastDoc alias)
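As a one-liner (a sketch; the index name is made up, and DataFi is the column behind the LastDoc alias):

```sql
ALTER TABLE de_doc ADD INDEX idx_user_datafi (UserID, DataFi);
```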
Do you really want 82K rows in the output? What will the client do with that much data? I ask because if the client will further digest the results, maybe such would be better done in SQL.
When timing, be sure to avoid the Query cache by using SELECT SQL_NO_CACHE.
phpmyadmin probably tacks on a LIMIT, thereby changing what the Optimizer will do!
ORDER BY t1.a, t2.b (different tables) makes it impossible to use an index for ordering. This will prevent any sort of short-circuiting of the query.
After changing these values in my.ini, here is the improved result in phpMyAdmin.
The time to populate the grid in my Delphi application is now 1.9 sec, compared to 2.8 sec before.
My PC has 8 GB of RAM.
Can I reduce the time to populate the grid in Delphi? Maybe I should open a new question for that.
innodb_buffer_pool_size = 2048M
# Set .._log_file_size to 25 % of buffer pool size
BEFORE:
innodb_log_file_size = 64M
(83705 total; the query took 1.0000 sec.)
AFTER:
innodb_log_file_size = 512M
(83705 total; the query took 0.0000 sec.)
If your goal is "groupwise-max", then you left out a clause:
select User.*, Doc.LastDoc
FROM de_Users AS Us
LEFT JOIN
(
SELECT UserID,MAX(DataFi) AS LastDoc
FROM de_doc
GROUP BY UserID
) as Doc ON Doc.UserID = Us.ID_U
AND Doc.LastDoc = Us.DataFi -- this was missing
ORDER BY Us.Name ASC, Doc.LastDoc DESC;
That will also lead to many fewer rows being delivered, hence addressing the performance question.
Try this query and check whether the output is the same as that of your query:
select Us.*, max(Doc.DataFi) as LastDoc
FROM de_Users AS Us
LEFT JOIN de_doc as Doc on Doc.UserID = Us.ID_U
group by Us.ID_U
ORDER BY Us.Name ASC, LastDoc DESC;

MYSQL Slow Insertion and Update in Big Database

I'm having trouble tuning mysql configuration to maximize the speed of insertion and update queries.
The problem occurs when we have to insert the daily data, approximately half a million records every day, and it runs for minutes before it completes.
While it was performing the job I checked and found that it was using less than 5% of the CPU and half of the memory. My question is: how can I increase the speed by tuning MySQL to use all available resources?
Thank you.
Performance
Insert/Update is around 2,000-4,000 records per second on both MyISAM and InnoDB tables
Table#1
Engine: MyISAM
Columns : 21
Existed Rows : 5,400,000
Key : One Unique key on 7 columns and One Primary Key
Table#2
Engine: InnoDB
Columns : 14
Existed Rows : 1,500,000
Key : One Primary Key, One Unique Key on 6 columns, Two Indexes
Insert Method
LOAD DATA LOCAL INFILE
Hardware Specifications
2 x Intel Xeon E5-2640v2 2.1GHz, 20M Cache, 7.2GT/s
RAM 16GB
2 x HDD 300GB 15K RPM,6Gbps SAS 2.5
my.cnf Configuration
[mysqld]
local-infile=1
max_connections = 600
max_user_connections=1000
key_buffer_size = 3584M
myisam_sort_buffer_size = 64M
read_buffer_size = 256K
table_open_cache = 5000
thread_cache_size = 384
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 128M
max_allowed_packet=268435456
net_buffer_length = 16384
max_connect_errors = 10
concurrent_insert = 2
read_rnd_buffer_size = 786432
bulk_insert_buffer_size = 8M
query_cache_limit = 5M
query_cache_size = 1024M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65535
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
max_write_lock_count = 8
log-error
external-locking=FALSE
open_files_limit=50000
#expire-logs-days = 7
innodb_buffer_pool_size = 2024M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_log_file_size = 64M
innodb_flush_method = O_DIRECT
sort_buffer_size = 512K
read_rnd_buffer_size = 1M
tmp_table_size = 1G
max_heap_table_size = 512M
[mysqld_safe]
[mysqldump]
quick
max_allowed_packet = 16M
[isamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M
[myisamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M
#### Per connection configuration ####
sort_buffer_size = 1M
join_buffer_size = 1M
thread_stack = 192K
MysqlTuner Results
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 5G (Tables: 306)
[--] Data in InnoDB tables: 269M (Tables: 441)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 34
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned
-------- Performance Metrics -------------------------------------------------
[--] Up for: 1d 12h 58m 9s (4M q [37.247 qps], 70K conn, TX: 21B, RX: 1B)
[--] Reads / Writes: 67% / 33%
[--] Total buffers: 7.0G global + 2.2M per thread (600 max threads)
[OK] Maximum possible memory usage: 8.3G (53% of installed RAM)
[OK] Slow queries: 0% (72/4M)
[OK] Highest usage of available connections: 2% (15/600)
[OK] Key buffer size / total MyISAM indexes: 3.5G/1.5G
[OK] Key buffer hit rate: 99.6% (304M cached / 1M reads)
[OK] Query cache efficiency: 97.0% (3M cached / 3M selects)
[OK] Query cache prunes per day: 11
[!!] Sorts requiring temporary tables: 14% (1K temp sorts / 9K sorts)
[!!] Temporary tables created on disk: 28% (1K on disk / 4K total)
[OK] Thread cache hit rate: 99% (15 created / 70K connections)
[OK] Table cache hit rate: 74% (1K open / 1K opened)
[OK] Open file limit used: 1% (831/50K)
[OK] Table locks acquired immediately: 99% (755K immediate / 755K locks)
[OK] InnoDB buffer pool / data size: 2.0G/269.9M
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
Temporary table size is already large - reduce result set size
Reduce your SELECT DISTINCT queries without LIMIT clauses
Variables to adjust:
sort_buffer_size (> 512K)
read_rnd_buffer_size (> 1M)
query_cache_size = 1024M
query_cache_type = 1
Those are bad. Every time you write something to a table, the Query cache needs to have all references to that table removed. 1G is much too big; 50M is what I recommend. Also, unless you have demonstrated a need for the Query cache, I recommend turning it OFF.
On the other hand, "Query cache efficiency: 97.0% (3M cached / 3M selects)" says that you are using the QC, and it is effective. So perhaps you should leave it on, but shrink the size.
As for loading -- are you 'replacing' the table, or adding to it? If you are replacing it, then load into a new table, then RENAME TABLE to put it into place.
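The load-and-swap pattern would look roughly like this (a sketch; the table name `daily` and the file path are placeholders):

```sql
CREATE TABLE daily_new LIKE daily;
LOAD DATA LOCAL INFILE '/path/to/daily.csv' INTO TABLE daily_new;
-- RENAME TABLE swaps both names in one atomic step:
RENAME TABLE daily TO daily_old, daily_new TO daily;
DROP TABLE daily_old;
```

Readers never see a half-loaded table; they use the old copy until the atomic rename.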
tmp_table_size = 1G
max_heap_table_size = 512M
These are dangerously high. If multiple threads needed tmp tables at the same time, you could run out of RAM. Put them back to the defaults.
"Temporary tables created on disk: 28%" cannot necessarily be improved by increasing those settings. If there are TEXT or BLOB columns, tmp tables will go to disk. If you like, show us SHOW CREATE TABLE and the naughty SELECTs.
"Run OPTIMIZE TABLE to defragment tables for better performance" -- That tool always says that. It is almost always bogus advice.
Are you loading only via LOAD DATA? You also mentioned UPDATE; please elaborate.
"5% for CPU" -- How many 'cores' do you have? Keep in mind that one MySQL connection will use only one CPU core.
"half of memory" -- That's bogus. MyISAM is using some of the other half for caching data. And nothing else can make use of the space.
Here's a potential optimization (for LOAD DATA): Sort the data by the PRIMARY KEY before doing LOAD DATA. Please provide SHOW CREATE TABLE; there could be further tips in this area.
Do you delete 'old' data? Is that time-based? If so, let's talk about PARTITIONing.
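Time-based pruning with partitions would look roughly like this (a sketch; table, column, and partition names are placeholders, and note that every UNIQUE/PRIMARY key must include the partitioning column):

```sql
ALTER TABLE daily
  PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2018_01 VALUES LESS THAN (TO_DAYS('2018-02-01')),
    PARTITION p2018_02 VALUES LESS THAN (TO_DAYS('2018-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
  );
-- Deleting old data then becomes a cheap metadata operation
-- instead of a huge row-by-row DELETE:
ALTER TABLE daily DROP PARTITION p2018_01;
```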

table_open_cache not working mariadb

Today I was optimizing my MariaDB since my website was running too slowly.
My machine is CentOS 7 with 4 GB of RAM and 3 CPUs.
I ran a script called mysql_tuner.pl and the results were:
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -
MySQL Version 5.5.40-MariaDB x86_64
Uptime = 0 days 0 hrs 0 min 12 sec
Avg. qps = 1
Total Questions = 16
Threads Connected = 1
Warning: Server has not been running for at least 48hrs.
It may not be safe to use these recommendations
To find out more information on how each of these
runtime variables effects performance visit:
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
Visit http://www.mysql.com/products/enterprise/advisors.html
for info about MySQL's Enterprise Monitoring and Advisory Service
SLOW QUERIES
The slow query log is NOT enabled.
Current long_query_time = 10.000000 sec.
You have 0 out of 37 that take longer than 10.000000 sec. to complete
Your long_query_time seems to be fine
BINARY UPDATE LOG
The binary update log is NOT enabled.
You will not be able to do point in time recovery
See http://dev.mysql.com/doc/refman/5.5/en/point-in-time-recovery.html
WORKER THREADS
Current thread_cache_size = 0
Current threads_cached = 0
Current threads_per_sec = 1
Historic threads_per_sec = 1
Your thread_cache_size is fine
MAX CONNECTIONS
Current max_connections = 151
Current threads_connected = 1
Historic max_used_connections = 1
The number of used connections is 0% of the configured maximum.
You are using less than 10% of your configured max_connections.
Lowering max_connections could help to avoid an over-allocation of memory
See "MEMORY USAGE" section to make sure you are not over-allocating
INNODB STATUS
Current InnoDB index space = 110 M
Current InnoDB data space = 1.39 G
Current InnoDB buffer pool free = 71 %
Current innodb_buffer_pool_size = 128 M
Depending on how much space your innodb indexes take up it may be safe
to increase this value to up to 2 / 3 of total system memory
MEMORY USAGE
Max Memory Ever Allocated : 274 M
Configured Max Per-thread Buffers : 419 M
Configured Max Global Buffers : 272 M
Configured Max Memory Limit : 691 M
Physical Memory : 4.00 G
Max memory limit seem to be within acceptable norms
KEY BUFFER
No key reads?!
Seriously look into using some indexes
Current MyISAM index space = 58 M
Current key_buffer_size = 128 M
Key cache miss rate is 1 : 0
Key buffer free ratio = 81 %
Your key_buffer_size seems to be fine
QUERY CACHE
Query cache is supported but not enabled
Perhaps you should set the query_cache_size
SORT OPERATIONS
Current sort_buffer_size = 2 M
Current read_rnd_buffer_size = 256 K
No sort operations have been performed
Sort buffer seems to be fine
JOINS
./mysql_tuner.pl: line 402: export: `2097152': not a valid identifier
Current join_buffer_size = 132.00 K
You have had 0 queries where a join could not use an index properly
Your joins seem to be using indexes properly
OPEN FILES LIMIT
Current open_files_limit = 1024 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine
TABLE CACHE
Current table_open_cache = 400 tables
Current table_definition_cache = 400 tables
You have a total of 801 tables
You have 400 open tables.
Current table_cache hit rate is 16%
, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 16 M
Of 347 temp tables, 9% were created on disk
Created disk tmp tables ratio seems fine
TABLE SCANS
Current read_buffer_size = 128 K
Current table scan ratio = 28 : 1
read_buffer_size seems to be fine
TABLE LOCKING
Current Lock Wait ratio = 0 : 295
Your table locking seems to be fine
So I realized that I should raise table_open_cache. I even confirmed the current value through the MySQL command line:
+--------------------+
| @@table_open_cache |
+--------------------+
|                400 |
+--------------------+
1 row in set (0.00 sec)
MariaDB [(none)]>
OK, so I went into my.cnf and edited it like this:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#table_cache = 1000
#max_open_files = 4000
#max_connections = 800
key_buffer_size = 60M
max_allowed_packet = 1G
table_open_cache = 2000
table_definition_cache = 2000
#sort_buffer_size = 2M
#read_buffer_size = 1M
#read_rnd_buffer_size = 8M
#myisam_sort_buffer_size = 64M
#thread_cache_size = 15
#query_cache_size = 32M
#thread_concurrency = 8
innodb_buffer_pool_size = 2G
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
But table_open_cache is still 400!
My server is picking up all the other variables except table_open_cache.
Results after changing the .cnf file:
TABLE CACHE
Current table_open_cache = 400 tables
Current table_definition_cache = 400 tables
You have a total of 801 tables
You have 400 open tables.
Current table_cache hit rate is 16%
, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
I have tried everything. Any help?
Thank you
Your open_files_limit is only 1024, and mysqld silently scales table_open_cache down when it cannot get enough file descriptors. Increase the limit with
ulimit -n 2000
then restart the server.
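One caveat, assuming the stock CentOS 7 packaging: mysqld runs as a systemd service there, and systemd services do not inherit shell ulimit settings, so the limit may need to go into a drop-in unit file instead (path and unit name assumed):

```ini
# /etc/systemd/system/mariadb.service.d/limits.conf
[Service]
LimitNOFILE=10000
```

followed by `systemctl daemon-reload` and `systemctl restart mariadb`.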