MySQL Heavy Read operations on production database

MySQL version: 5.5.24
DB size: 3 tables with 200 M each.
The DB runs on an AWS EC2 instance with 16 GB of RAM; 8 GB is allocated to MySQL.
As expected, write performance is fine:
100+ INSERT/sec, 100+ UPDATE/sec.
DB Parameters:
innodb_file_per_table | ON
innodb_flush_log_at_trx_commit | 2
innodb_flush_method | O_DIRECT
innodb_read_io_threads | 8
innodb_write_io_threads | 8
innodb_log_buffer_size | 33554432
innodb_log_file_size | 402653184
Issue: I suddenly noticed the following while checking:
mysql> show engine innodb status\G
...
...
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 9924, id 140339369645824, state: sleeping
Number of rows inserted 1686580712, updated 1684027265, deleted 1681594351, read 66159327657
87.88 inserts/s, 88.21 updates/s, 87.77 deletes/s, **117894.90 reads/s**
We were not expecting 100K+ reads/s. Is there an issue? What can we check?
Please suggest.
Thanks for the help.
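For reference, a few generic checks that can show where reads of that volume come from (standard MySQL status variables and commands, not specific to this setup; listed only as a sketch of what one might look at):
-- A fast-growing Handler_read_rnd_next usually indicates full table scans
SHOW GLOBAL STATUS LIKE 'Handler_read%';
-- Buffer pool misses: physical reads vs. read requests served from memory
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- And what is actually executing right now
SHOW FULL PROCESSLIST;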

Related

MySQL Database Repair Taking Extremely Long

I have a MySQL MyISAM table that is approximately 7 GB in size and has quite a few indexes. The table got corrupted yesterday, and the MySQL repair has now been running for 12+ hours.
How long should a repair actually take for such a table? (I can't get the exact row count and size at the moment, because the repair is still running.)
The relevant variables are:
| myisam_max_sort_file_size | 9223372036853727232  |
| myisam_mmap_size          | 18446744073709551615 |
| myisam_recover_options    | FORCE                |
| myisam_repair_threads     | 1                    |
| myisam_sort_buffer_size   | 268435456            |
| read_buffer_size          | 67108864             |
| read_only                 | OFF                  |
| read_rnd_buffer_size      | 4194304              |
I was not able to change any of the global variables because I'm on GoDaddy managed hosting.
The thread state has always shown "Repair by sorting".
Is there any other way I can speed up this repair process?
Thank you
Edit:
My memory and CPU usage can be seen in the image below.
I have also tried restoring the database from a two-day-old backup (onto a new database); it has also been stuck on "Repair with keycache" for the same table for the past 5 hours.
I have tried mysqlcheck and REPAIR TABLE, but not myisamchk: I cannot access the database folder in /var/lib/mysql (Permission denied), and running myisamchk on its own gives "command not found".
It should take minutes. If it hasn't finished after 12 hours, it probably hung and is never going to finish.
MyISAM hasn't really been maintained in over a decade, and it is quite likely you hit a bug. You might stand a better chance with myisamchk if you can get your hands on the raw database files.
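If you give up on the hung repair and retry from SQL rather than myisamchk, a minimal sketch would look like the following (the table name mytable is a placeholder, and KILL needs the real thread id from SHOW PROCESSLIST):
-- Find and kill the stuck repair thread first
SHOW PROCESSLIST;
KILL 12345;   -- placeholder id; use the one shown for the repair thread

-- Retry, starting with an index-only rebuild, then a full repair;
-- USE_FRM is a last resort if the index header itself is damaged
REPAIR TABLE mytable QUICK;
REPAIR TABLE mytable;
-- REPAIR TABLE mytable USE_FRM;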

MySQL Memory Usage with many Tables [closed]

I have a MySQL instance with a few hundred thousand tables. This is currently the best design for my system, since these tables are not related to each other and SELECT queries will only be run against a single table. In addition, users will most likely not access the same tables very often.
I have 16 GB of RAM, but after about a day MySQL consumes about 90% of it, with total system memory use at 99-100%. I have tried numerous things but simply can't get the memory usage down.
My innodb_buffer_pool_size is currently 8 GB, but I had it at 1 GB with the same issue. I also tried reducing open_files_limit, but that didn't help either.
Here is my output for
SHOW GLOBAL STATUS LIKE '%Open_%';
+----------------------------+----------+
| Variable_name | Value |
+----------------------------+----------+
| Com_show_open_tables | 0 |
| Innodb_num_open_files | 431 |
| Open_files | 0 |
| Open_streams | 0 |
| Open_table_definitions | 615 |
| Open_tables | 416 |
| Opened_files | 4606655 |
| Opened_table_definitions | 4598528 |
| Opened_tables | 4661002 |
| Slave_open_temp_tables | 0 |
| Table_open_cache_hits | 30024782 |
| Table_open_cache_misses | 4661002 |
| Table_open_cache_overflows | 4660579 |
+----------------------------+----------+
And here is my mysqld config:
sql-mode=''
innodb_buffer_pool_size = 8G
open_files_limit=100000
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer_size = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
Anyone know how to efficiently handle these thousands of tables?
ADDITIONAL INFO
A) Mysqld: https://pastebin.com/PTiz6uRD
B) SHOW GLOBAL STATUS: https://pastebin.com/K4sCmvFz
C) SHOW GLOBAL VARIABLES: https://pastebin.com/Cc64BAUw
D) MySQLTuner: https://pastebin.com/zLzayi56
E) SHOW ENGINE INNODB STATUS: https://pastebin.com/hHDuw6gY
F) top: https://pastebin.com/6WYnSnPm
TEST AFTER SERVER REBOOT (LITTLE MEMORY CONSUMED):
A) ulimit -a: https://pastebin.com/FmPrAKHU
B) iostat -x: https://pastebin.com/L0G7H8s4
C) df -h: https://pastebin.com/d3EttR19
D) MySQLTuner: https://pastebin.com/T3DYDLg8
The first blocker to remove is at the Linux command line:
ulimit -n 124000
to get past the current limit of 1024 open files. This can be done now; no Linux shutdown/restart is required for it to take effect.
Suggestions to consider for your my.cnf [mysqld] section
table_open_cache=10000 # from 431 available today to a practical upper limit
table_definition_cache=10000 # from 615 available today to a practical upper limit
thread_cache_size=100 # from 8 for V8 refman CAP suggested to avoid OOM
max_heap_table_size=64M # from 16M to reduce created_tmp_disk_tables
tmp_table_size=64M # from 16M should always be equal to max_heap_table_size
innodb_lru_scan_depth=100 # from 1024 to reduce CPU workload every SECOND
innodb_log_buffer_size=512M # from 50M to avoid log rotation every 7 minutes
Considering your situation, I would skip the usual one-change-per-day rule: make all the cnf changes at once and monitor before the next global variable change. A stop/start of the service is required, because MySQL is capped by open_files_limit at runtime; even though you requested 100,000, the ulimit of 1024 restricted you to 1024.
If you copy-paste the block to the END of the [mysqld] section and remove the same-named variables above the block (in the [mysqld] section only), you will get rid of the 'multiple variable' confusion for the next analyst.
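Once the service has been restarted, a quick way to confirm the new settings actually took effect (standard variable and status names, nothing specific to this setup):
-- Effective limits after restart; if open_files_limit is still low, the OS limit
-- won and table_open_cache may have been silently reduced at startup
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';

-- Cache churn: overflows and Opened_tables should stop climbing once the
-- cache is large enough for the working set of tables
SHOW GLOBAL STATUS LIKE 'Table_open_cache%';
SHOW GLOBAL STATUS LIKE 'Opened_tables';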

table_open_cache not working in MariaDB

Today I was tuning my MariaDB server because my website was running too slowly.
My machine is CentOS 7 with 4 GB of RAM and 3 CPUs.
I ran a script called mysql_tuner.pl and the results were:
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -
MySQL Version 5.5.40-MariaDB x86_64
Uptime = 0 days 0 hrs 0 min 12 sec
Avg. qps = 1
Total Questions = 16
Threads Connected = 1
Warning: Server has not been running for at least 48hrs.
It may not be safe to use these recommendations
To find out more information on how each of these
runtime variables effects performance visit:
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
Visit http://www.mysql.com/products/enterprise/advisors.html
for info about MySQL's Enterprise Monitoring and Advisory Service
SLOW QUERIES
The slow query log is NOT enabled.
Current long_query_time = 10.000000 sec.
You have 0 out of 37 that take longer than 10.000000 sec. to complete
Your long_query_time seems to be fine
BINARY UPDATE LOG
The binary update log is NOT enabled.
You will not be able to do point in time recovery
See http://dev.mysql.com/doc/refman/5.5/en/point-in-time-recovery.html
WORKER THREADS
Current thread_cache_size = 0
Current threads_cached = 0
Current threads_per_sec = 1
Historic threads_per_sec = 1
Your thread_cache_size is fine
MAX CONNECTIONS
Current max_connections = 151
Current threads_connected = 1
Historic max_used_connections = 1
The number of used connections is 0% of the configured maximum.
You are using less than 10% of your configured max_connections.
Lowering max_connections could help to avoid an over-allocation of memory
See "MEMORY USAGE" section to make sure you are not over-allocating
INNODB STATUS
Current InnoDB index space = 110 M
Current InnoDB data space = 1.39 G
Current InnoDB buffer pool free = 71 %
Current innodb_buffer_pool_size = 128 M
Depending on how much space your innodb indexes take up it may be safe
to increase this value to up to 2 / 3 of total system memory
MEMORY USAGE
Max Memory Ever Allocated : 274 M
Configured Max Per-thread Buffers : 419 M
Configured Max Global Buffers : 272 M
Configured Max Memory Limit : 691 M
Physical Memory : 4.00 G
Max memory limit seem to be within acceptable norms
KEY BUFFER
No key reads?!
Seriously look into using some indexes
Current MyISAM index space = 58 M
Current key_buffer_size = 128 M
Key cache miss rate is 1 : 0
Key buffer free ratio = 81 %
Your key_buffer_size seems to be fine
QUERY CACHE
Query cache is supported but not enabled
Perhaps you should set the query_cache_size
SORT OPERATIONS
Current sort_buffer_size = 2 M
Current read_rnd_buffer_size = 256 K
No sort operations have been performed
Sort buffer seems to be fine
JOINS
./mysql_tuner.pl: line 402: export: `2097152': not a valid identifier
Current join_buffer_size = 132.00 K
You have had 0 queries where a join could not use an index properly
Your joins seem to be using indexes properly
OPEN FILES LIMIT
Current open_files_limit = 1024 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine
TABLE CACHE
Current table_open_cache = 400 tables
Current table_definition_cache = 400 tables
You have a total of 801 tables
You have 400 open tables.
Current table_cache hit rate is 16%
, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 16 M
Of 347 temp tables, 9% were created on disk
Created disk tmp tables ratio seems fine
TABLE SCANS
Current read_buffer_size = 128 K
Current table scan ratio = 28 : 1
read_buffer_size seems to be fine
TABLE LOCKING
Current Lock Wait ratio = 0 : 295
Your table locking seems to be fine
So I realized that I should raise table_open_cache...
I even confirmed it through the MySQL command line:
+--------------------+
| @@table_open_cache |
+--------------------+
|                400 |
+--------------------+
1 row in set (0.00 sec)
MariaDB [(none)]>
OK, so I went into my.cnf
and edited it like this:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#table_cache = 1000
#max_open_files = 4000
#max_connections = 800
key_buffer_size = 60M
max_allowed_packet = 1G
table_open_cache = 2000
table_definition_cache = 2000
#sort_buffer_size = 2M
#read_buffer_size = 1M
#read_rnd_buffer_size = 8M
#myisam_sort_buffer_size = 64M
#thread_cache_size = 15
#query_cache_size = 32M
#thread_concurrency = 8
innodb_buffer_pool_size = 2G
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
But table_open_cache is still 400!
My server is picking up all the other variables except table_open_cache.
Results after changing the cnf file:
TABLE CACHE
Current table_open_cache = 400 tables
Current table_definition_cache = 400 tables
You have a total of 801 tables
You have 400 open tables.
Current table_cache hit rate is 16%
, while 100% of your table cache is in use
You should probably increase your table_cache
You should probably increase your table_definition_cache value.
I've tried everything. Any help?
Thank you
Increase the open files limit with
ulimit -n 2000
and then restart the server.
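One way to confirm whether a low file-descriptor limit is what keeps clamping table_open_cache (a check only, not a fix; standard variable names):
-- If open_files_limit is still 1024, the server may silently reduce
-- table_open_cache at startup regardless of what my.cnf asks for
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
-- Re-run both after raising the OS limit and restarting; table_open_cache
-- should then report the 2000 you configured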

MySQL LIMIT x,y performance: huge difference between 2 machines

I have a query on an InnoDB item table that contains 400k records (only...). I need to page the results for the presentation layer (60 per page), so I use LIMIT with values depending on the page to display.
The query is (the 110000 offset is just an example):
SELECT i.id, sale_type, property_type, title, property_name, latitude,
       longitude, street_number, street_name, post_code, picture, url,
       score, dw_id, post_date
FROM item i
WHERE picture IS NOT NULL AND picture != ''
  AND sale_type = 0
ORDER BY score DESC
LIMIT 110000, 60;
Running this query on my machine takes about 1 s.
Running it on our test server takes 45-50 s.
The EXPLAIN output is the same on both:
+----+-------------+-------+-------+---------------+-----------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+-----------+---------+------+--------+-------------+
| 1 | SIMPLE | i | index | NULL | IDX_SCORE | 5 | NULL | 110060 | Using where |
+----+-------------+-------+-------+---------------+-----------+---------+------+--------+-------------+
The only configuration differences in SHOW VARIABLES are:
innodb_use_native_aio: enabled on the test server, not on my machine. I tried disabling it and saw no significant change.
innodb_buffer_pool_size: 1G on the test server, 2G on my machine.
The test server has 2 GB of RAM and a 2-core CPU:
mysqld uses > 65% of RAM at all times, but only increases 1-2% while running the above query
mysqld uses 14% of CPU while running the above query, none when idle
My local machine has 8 GB of RAM and an 8-core CPU:
mysqld uses 28% of RAM at all times, and doesn't really increase while running the above query (or only for such a short time that I can't see it)
mysqld uses 48% of CPU while running the above query, none when idle
What can I do to get the same performance on the test server? Is the RAM and/or CPU too low?
UPDATE
I have set up a new test server with the same specs but 8 GB of RAM and a 4-core CPU, and performance jumped to values similar to my machine's. The original server didn't seem to use all of its RAM/CPU, so why was performance so much worse?
One of the surest ways to kill performance is to make MySQL scan an index that doesn't fit in memory. So during a query, it has to load part of the index into the buffer pool, then evict that part and load the other part of the index. Causing churn in the buffer pool like this during a query will cause a lot of I/O load, and that makes it very slow. Disk I/O is about 100,000 times slower than RAM.
So there's a big difference between a 1 GB buffer pool and a 2 GB buffer pool if your index is, say, 1.5 GB.
Another tip: you really don't want to use LIMIT 110000, 60. That causes MySQL to read 110000 rows from the buffer pool (possibly loading them from disk if necessary) just to discard them. There are other ways to page through result sets much more efficiently.
See articles such as Optimized Pagination using MySQL.
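As an illustration of what such articles usually recommend (a sketch, not taken from the linked article; it assumes score plus id gives a stable ordering, and @last_score / @last_id stand in for the values from the last row of the previous page):
-- First page: just take the top 60
SELECT i.id, sale_type, property_type, title, property_name, score
FROM item i
WHERE picture IS NOT NULL AND picture != '' AND sale_type = 0
ORDER BY score DESC, i.id DESC
LIMIT 60;

-- Next page: seek to where the previous page ended instead of using an offset,
-- so MySQL does not have to read and discard 110000 rows
SELECT i.id, sale_type, property_type, title, property_name, score
FROM item i
WHERE picture IS NOT NULL AND picture != '' AND sale_type = 0
  AND (score < @last_score OR (score = @last_score AND i.id < @last_id))
ORDER BY score DESC, i.id DESC
LIMIT 60;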

MySQL Crashing on SQL

For the past 4 days, MySQL has kept crashing while running scripts, roughly once a day.
This is the error log:
key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=39
max_threads=100
threads_connected=34
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 336508 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd: 0x92025f38
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x95dce36c thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d) [0x6b65ad]
/usr/sbin/mysqld(handle_segfault+0x494) [0x3823d4]
[0x110400]
/usr/sbin/mysqld(MYSQLparse(void*)+0x6aa) [0x3b42da]
/usr/sbin/mysqld(mysql_parse(THD*, char const*, unsigned int, char const**)+0x23e) [0x39ce6e]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xf35) [0x39df25]
/usr/sbin/mysqld(do_command(THD*)+0xf3) [0x39f0e3]
/usr/sbin/mysqld(handle_one_connection+0x2a0) [0x38dbd0]
/lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0x93d96e]
/lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xd78a4e]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x86982ef4 is an invalid pointer
thd->thread_id=2906
thd->killed=NOT_KILLED
The box has 2 GB of RAM; by my calculations it shouldn't have a problem with max memory. I've specifically lowered the memory requirements to a minimum, but I'm still getting the errors.
mysql> show variables like "sort_buffer%";
+------------------+---------+
| Variable_name | Value |
+------------------+---------+
| sort_buffer_size | 1048576 |
+------------------+---------+
It crashed today on this SQL query
ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 collate utf8_bin;
Has anyone had a similar experience?
EDIT:
The table in question doesn't actually contain much data; the database has much larger tables:
mysql> desc fielddefaultvalue;
+----------------------+---------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+---------------+------+-----+---------+----------------+
| fielddefaultvalue_Id | bigint(20) | NO | PRI | NULL | auto_increment |
| version | bigint(20) | NO | | NULL | |
| value_field | varchar(2000) | YES | MUL | NULL | |
| optimistic_version | bigint(20) | NO | | NULL | |
| property_fk | bigint(20) | YES | MUL | NULL | |
| esg_fk | bigint(20) | YES | MUL | NULL | |
+----------------------+---------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)
mysql> select count(*) from fielddefaultvalue;
+----------+
| count(*) |
+----------+
| 690 |
+----------+
1 row in set (0.00 sec)
It also fails on multiple inserts (400-500 rows) of little data, but not every time; the same script can run properly once and crash the server the next time.
EDIT 2: After crash recovery the error log also reports:
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
my.cnf
lower_case_table_names = 1
key_buffer = 16M
key_buffer_size = 128M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 100
table_cache = 512
thread_concurrency = 4
sort_buffer_size = 1M
read_buffer_size = 1M
table_open_cache = 512
read_rnd_buffer_size = 8M
innodb_file_per_table = 1
open_files_limit = 65536
default_character_set=utf8
query_cache_limit = 1M
query_cache_size = 64M
expire_logs_days = 10
max_binlog_size = 250M
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
EDIT, 5 hours later:
It just crashed again on the same "regular" script; it's a 25,000-line update script on a date column.
Same error message:
InnoDB: Log scan progressed past the checkpoint lsn 186 4056481576
110620 17:30:52 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Read
The funny thing is, I ran this script earlier today and it didn't fail, but it did now.
The most likely explanation is running out of address space; please post your entire my.cnf.
Running 32-bit OS in production is not a good idea.
However, what you should do is:
Reproduce the fault on the same MySQL version on a non-production machine
Check that you are using a properly supported, current build from Oracle. If you are not, then install one of those and reproduce the problem. If you are running Redhat (or similar), then you can use Oracle's RPMs. They also provide some other distributions' packages and binaries in a tar.gz file. Your package vendor may patch MySQL with some dodgy patches. I never run OEM MySQL builds in production.
You seem to be running 32-bit. Ensure that you're not running out of address-space.
If you can reproduce the bug using a standard Oracle build on a supported operating system, are not running out of memory / address space, and there is no hardware fault, then you can submit the bug to Oracle.
The best idea is to reproduce the test-case with the minimum amount of data / table size.
Sounds like your innodb_log_file_size is not big enough - try with 256 MB in my.cnf:
innodb_log_file_size=256M
You need to shut MySQL down cleanly, remove the old log files, then restart; MySQL will recreate the log files.
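Before resizing, it may help to check the current redo log configuration and, optionally, how fast the log is being written (standard variables and status counters; sample the counter twice about a minute apart and compare):
-- Current redo log size and number of files (group capacity is roughly
-- innodb_log_files_in_group * innodb_log_file_size)
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';

-- Bytes written to the redo log so far; the growth per minute gives a feel
-- for how large the log files need to be
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';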
Strange... I don't know how well optimized ALTER TABLE actually is in MySQL; perhaps it is quite resource-intensive. If the table contains a lot of data, try moving all your data into a temporary table and emptying the main one, then do your ALTER TABLE and push the data back. If it has to do work on each row, you can split the work up like this and do a few records at a time. A sketch of that approach follows below.
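A minimal sketch of that copy-aside / alter / copy-back approach, using the table and column names from the question (try it on a copy first; it truncates the live table while it runs):
-- Copy the rows aside, empty the table, alter it, then copy the rows back
CREATE TABLE FieldDefaultValue_backup LIKE FieldDefaultValue;
INSERT INTO FieldDefaultValue_backup SELECT * FROM FieldDefaultValue;

TRUNCATE TABLE FieldDefaultValue;

ALTER TABLE FieldDefaultValue
  MODIFY value_field VARCHAR(2000) CHARACTER SET utf8 COLLATE utf8_bin;

INSERT INTO FieldDefaultValue SELECT * FROM FieldDefaultValue_backup;
DROP TABLE FieldDefaultValue_backup;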