I have a MySQL MyISAM table which is approximately 7 GB in size and has quite a few indexes. The table got corrupted yesterday, and the MySQL repair has now been running for 12+ hours.
I would like to know how long a MySQL repair actually takes for such a table. (I can't get the exact number of rows and size at the moment, because the repair is running.)
The variables I used are:
| myisam_max_sort_file_size | 9223372036853727232 |
| myisam_mmap_size | 18446744073709551615 |
| myisam_recover_options | FORCE |
| myisam_repair_threads | 1 |
| myisam_sort_buffer_size | 268435456 |
| read_buffer_size | 67108864 |
| read_only | OFF |
| read_rnd_buffer_size | 4194304 |
I was not able to change any of the global variables because I am on GoDaddy managed hosting.
The repair state, as seen in the process list, has always been "Repair by sorting".
Is there any other way I can speed up this repair process?
Thank you
Edit:
My memory and CPU usage can be seen in the image below.
I have also tried restoring the database from a two-day-old backup (onto a new database); it has also been stuck, on "Repair with keycache", on the same table for the past 5 hours.
I have tried mysqlcheck and REPAIR TABLE, but not myisamchk: I cannot access the database folder in /var/lib/mysql (Permission denied), and running myisamchk on its own gives "command not found".
It should take minutes. If it hasn't finished after 12 hours, it probably hung and is never going to finish.
MyISAM hasn't really been maintained in over a decade, and it is quite likely you hit a bug. You might stand a better chance with myisamchk if you can get your hands on the raw database files.
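If you do get access to the files, a minimal sketch of a myisamchk repair (the path is a placeholder; mysqld must be stopped, or the table otherwise guaranteed unused, while it runs):
# -r rebuilds the indexes by sorting; a large sort buffer speeds this up
myisamchk -r --sort_buffer_size=256M /var/lib/mysql/yourdb/yourtable.MYI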
First of all, sorry for repeating this often-asked question; I have gone through the existing questions and answers.
I have a table in a production environment of approximately 159 GB, and I now want to migrate MySQL to another server. But since the total DB size is more than 300 GB, I am not able to migrate it easily, so I am trying to reclaim space by deleting records. I deleted more than 70% of the records from this table and tried OPTIMIZE TABLE, but it gives this error:
mysql> OPTIMIZE TABLE table_name;
+------------+----------+----------+-------------------------------------------------------------------+
| Table      | Op       | Msg_type | Msg_text                                                          |
+------------+----------+----------+-------------------------------------------------------------------+
| table_name | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| table_name | optimize | error    | The table 'table_name' is full                                    |
| table_name | optimize | status   | Operation failed                                                  |
+------------+----------+----------+-------------------------------------------------------------------+
innodb_file_per_table is set to ON
SHOW VARIABLES LIKE '%innodb_file_per_table%';
Variable_name Value
--------------------- --------
innodb_file_per_table ON
MySQL version: 5.7.28-log
I read somewhere that ALTER TABLE would help, but that it slows down all MySQL queries while it runs.
In one answer I read that copying the data into another table, then renaming it and deleting the original (which I assume is roughly what OPTIMIZE TABLE does internally) would help, but doing so needs downtime; a sketch of that approach follows.
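For reference, a minimal sketch of that copy-and-rename approach (table name as in the question; writes arriving between the copy and the rename are lost unless the application is paused):
CREATE TABLE table_name_new LIKE table_name;
INSERT INTO table_name_new SELECT * FROM table_name;   -- long-running, and needs free disk for the copy
RENAME TABLE table_name TO table_name_old, table_name_new TO table_name;   -- atomic swap
DROP TABLE table_name_old;   -- this is what actually frees the space with innodb_file_per_table=ON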
Is there any other way I can achieve this?
The following questions will be answered.
How to enable slow query log in MySQL
How to set slow query time
How to read the logs generated by MySQL
Log analysis is becoming more of a chore day by day. Most tech companies have started using the ELK stack or similar tools for log analysis. But what if you don't have hours to spend setting up ELK and just want to spend some time analysing the logs on your own (manually, that is)?
It is not the best way, but don't underestimate the power of analysing logs from the terminal. We can analyse logs quite efficiently from the terminal, though there are limits to what we can and cannot do. I am posting the basic process of analysing a MySQL slow query log.
(In addition to the 'setup' provided by #MontyPython...)
Run
pt-query-digest, or mysqldumpslow -s t
Either will show you the 'worst' queries first, so stop the output after a few dozen lines.
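For example, a sketch using the log path shown further down (-s t sorts by query time; head keeps only the top of the output):
mysqldumpslow -s t /var/lib/mysql/server-slow.log | head -40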
I prefer long_query_time=1. It's in seconds; you can specify less than 1.
Also, in more recent versions, you need log_output = FILE.
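For example:
set global log_output = 'FILE';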
show variables like '%slow%';
+---------------------------+-----------------------------------+
| Variable_name | Value |
+---------------------------+-----------------------------------+
| log_slow_admin_statements | OFF |
| log_slow_slave_statements | OFF |
| slow_launch_time | 2 |
| slow_query_log | OFF |
| slow_query_log_file | /var/lib/mysql/server-slow.log |
+---------------------------+-----------------------------------+
And then,
show variables like '%long_query%';
+-----------------+----------+
| Variable_name | Value |
+-----------------+----------+
| long_query_time | 5.000000 |
+-----------------+----------+
Change the long query time to whatever you want. Queries taking more than this will be captured in the slow query log.
set global long_query_time = 2.00;
Now, switch on the slow query log.
set global slow_query_log = 'ON';
flush logs;
Go to the terminal and check the directory where the log file is supposed to be.
cd /var/lib/mysql/
ls -lah | grep slow
-rw-rw---- 1 mysql mysql 4.6M Apr 24 08:32 server-slow.log
Opening the file - use one of the following commands:
cat server-slow.log        # print the whole file
tac server-slow.log        # print it in reverse, newest entries first
less server-slow.log       # page through it interactively
more server-slow.log       # a simpler pager
tail -f server-slow.log    # follow new entries as they are written
How many slow queries were logged during a given day? (The date in the pattern is in YYMMDD form; 160411 is 11 April 2016.)
grep 'Time: 160411.*' server-slow.log | cut -c2-18 | uniq -c
I have MySQL
MySQL version: 5.6.16-enterprise-commercial-advanced-log
MySQL Engine: InnoDB
MySQL Data Size: 35GB (including 9GB of indexes)
Which is running on
VM: Red Hat Enterprise Linux Server release 5.9 (Tikanga)
File system: ext3
Storage technology: SAN
Disk data format: RAID-5
Disk type: SAS with Fibre channel
I found that a lot of SELECT queries are taking a long time because of I/O-related operations (even though the necessary indexes are in place and the buffers are sized for them).
mysql> show profile for query 1;
+----------------------+------------+
| Status | Duration |
+----------------------+------------+
| starting | 0.000313 |
| checking permissions | 0.000024 |
| checking permissions | 0.000018 |
| Opening tables | 0.000086 |
| init | 0.000121 |
| System lock | 0.000092 |
| optimizing | 0.000079 |
| statistics | 0.000584 |
| preparing | 0.000070 |
| executing | 0.000014 |
| Sending data | 202.362338 |
| end | 0.000068 |
| query end | 0.000027 |
| closing tables | 0.000049 |
| freeing items | 0.000124 |
| logging slow query | 0.000135 |
| cleaning up | 0.000057 |
+----------------------+------------+
Are the following network latency and throughput numbers good for the above-mentioned DB instance?
$ time dd if=/dev/zero of=foobar bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 1.22617 seconds, 33.4 MB/s
real 0m1.233s
user 0m0.002s
sys 0m0.049s
$ time dd if=foobar of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.026479 seconds, 1.5 GB/s
real 0m0.032s
user 0m0.004s
sys 0m0.024s
$ time dd if=/dev/zero of=foobar bs=128K count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 78.1099 seconds, 16.8 MB/s
real 1m18.241s
user 0m0.012s
sys 0m1.117s
$ time dd if=foobar of=/dev/null bs=128K count=10000
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 0.084886 seconds, 1.9 GB/s
real 0m0.101s
user 0m0.002s
sys 0m0.083s
$ time dd if=/dev/zero of=foobar bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 461.587 seconds, 22.7 MB/s
real 7m42.700s
user 0m0.017s
sys 0m8.229s
$ time dd if=foobar of=/dev/null bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 4.63128 seconds, 2.3 GB/s
real 0m4.634s
user 0m0.003s
sys 0m4.579s
Would the following changes to MySQL system variables give positive results in the context of MySQL I/O tuning?
innodb_flush_method: O_DSYNC (Referred http://bugs.mysql.com/bug.php?id=54306 for read-heavy workload)
Moving from ext3 to XFS file system
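For reference, this is how the first change would look in my.cnf (a sketch only; O_DSYNC helps some read-heavy SAN setups but hurts others, so it should be benchmarked on a test box first):
[mysqld]
innodb_flush_method = O_DSYNC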
It's very hard to answer your question, because with performance problems the answer is generally 'it depends'. Sorry.
The first thing you need to do is understand what's actually going on and why your performance is less than expected. There are a variety of tools for that, especially on a Linux system.
First off, grab a benchmark of your system read and write performance.
The simple test I tend to use is to time a dd:
time dd if=/dev/zero of=/your/san/mount/point/testfile bs=1M count=100
time dd if=/your/san/mount/point/testfile of=/dev/null bs=1M count=100
(increase the 100 to 1000 if it's quick to complete). This will give you an idea of sustained throughput of your storage system.
Testing IO operations per second is a similar thing - do the same, but use a small block size and a large count. 4k block size, 10,000 as the count - again, if it goes a bit too quick, increase the number.
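For example, a sketch of the small-block run (oflag=direct is my addition here, to bypass the page cache so you measure the disk rather than RAM):
time dd if=/dev/zero of=/your/san/mount/point/testfile bs=4k count=10000 oflag=direct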
This will get you an estimate of IOPs and throughput of your storage subsystem.
Now, you haven't been specific as to what type of disks and how many spindles you're using. As an extremely rough rule of thumb, you should expect 75 IOPs from a SATA drive, 150 from an FC or SAS drive, and 1500 from an SSD before performance starts to degrade.
However, as you're using RAID-5, you need to consider the RAID-5 write penalty, which is 4: the array needs 4 operations to service one write IO. (There is no read penalty, but for obvious reasons your 'parity' drive doesn't count as a spindle.) As a worked example, 4 SAS spindles at 150 IOPs each give roughly 600 raw IOPs, but only about 600/4 = 150 write IOPs once the penalty is applied.
How does your workload look? Mostly reads, mostly writes? How many IOPs? And how many spindles? In all honesty, it's most likely that the root of your problem is your expectations of the storage subsystem.
I am running NDB Cluster and I see that, on the MySQL API nodes, there is a very big binary log index table.
+---------------------------------------+--------+-------+-------+------------+---------+
| CONCAT(table_schema, '.', table_name) | rows   | DATA  | idx   | total_size | idxfrac |
+---------------------------------------+--------+-------+-------+------------+---------+
| mysql.ndb_binlog_index                | 83.10M | 3.78G | 2.13G | 5.91G      | 0.56    |
+---------------------------------------+--------+-------+-------+------------+---------+
Is there any recommended way to reduce its size without breaking anything? I understand that this will limit the window for point-in-time recovery, but the data is growing out of hand and I need to do a bit of clean-up.
It looks like this is possible. I don't see anything at http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-replication-pitr.html that says you can't restore based on the last epoch.
Some additional information might be gained by reading this article:
http://www.mysqlab.net/knowledge/kb/detail/topic/backup/id/8309
The mysql.ndb_binlog_index is a MyISAM table. If you are cleaning it,
make sure you don't delete entries of binary logs that you still need.
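A hedged sketch of such a clean-up (the binlog name is a placeholder, and the exact format of the File column varies between versions, so inspect a few rows and take a backup first):
-- Give up point-in-time recovery before this binlog, then drop the matching index rows.
PURGE BINARY LOGS TO 'mysql-bin.000500';
DELETE FROM mysql.ndb_binlog_index WHERE File < './mysql-bin.000500';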
For the past 4 days, MySQL has been crashing while running scripts, roughly once a day.
This is the error log:
key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=39
max_threads=100
threads_connected=34
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 336508 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd: 0x92025f38
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x95dce36c thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d) [0x6b65ad]
/usr/sbin/mysqld(handle_segfault+0x494) [0x3823d4]
[0x110400]
/usr/sbin/mysqld(MYSQLparse(void*)+0x6aa) [0x3b42da]
/usr/sbin/mysqld(mysql_parse(THD*, char const*, unsigned int, char const**)+0x23e) [0x39ce6e]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xf35) [0x39df25]
/usr/sbin/mysqld(do_command(THD*)+0xf3) [0x39f0e3]
/usr/sbin/mysqld(handle_one_connection+0x2a0) [0x38dbd0]
/lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0x93d96e]
/lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xd78a4e]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x86982ef4 is an invalid pointer
thd->thread_id=2906
thd->killed=NOT_KILLED
The box runs on 2 GB of RAM; by my calculations it shouldn't have a problem with maximum memory usage. I've specifically lowered the memory requirements to a minimum, but I'm still getting the errors.
mysql> show variables like "sort_buffer%";
+------------------+---------+
| Variable_name | Value |
+------------------+---------+
| sort_buffer_size | 1048576 |
+------------------+---------+
It crashed today on this SQL query:
ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 collate utf8_bin;
Anyone got any similar experience?
EDIT:
The table in question actually doesn't contain much data; the database has much larger tables:
mysql> desc fielddefaultvalue;
+----------------------+---------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+---------------+------+-----+---------+----------------+
| fielddefaultvalue_Id | bigint(20) | NO | PRI | NULL | auto_increment |
| version | bigint(20) | NO | | NULL | |
| value_field | varchar(2000) | YES | MUL | NULL | |
| optimistic_version | bigint(20) | NO | | NULL | |
| property_fk | bigint(20) | YES | MUL | NULL | |
| esg_fk | bigint(20) | YES | MUL | NULL | |
+----------------------+---------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)
mysql> select count(*) from fielddefaultvalue;
+----------+
| count(*) |
+----------+
| 690 |
+----------+
1 row in set (0.00 sec)
It also fails on multiple inserts (400-500) of small amounts of data, but not every time; the same script can run fine once and crash the server the next time.
EDIT 2: After crash recovery, the error log also reports:
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
my.cnf
lower_case_table_names = 1
key_buffer = 16M
key_buffer_size = 128M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 100
table_cache = 512
thread_concurrency = 4
sort_buffer_size = 1M
read_buffer_size = 1M
table_open_cache = 512
read_rnd_buffer_size = 8M
innodb_file_per_table = 1
open_files_limit = 65536
default_character_set=utf8
query_cache_limit = 1M
query_cache_size = 64M
expire_logs_days = 10
max_binlog_size = 250M
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
EDIT: 5 hours later
It just crashed again on the same "regular" script; it's a 25,000-line update script on a date column.
Same error message:
InnoDB: Log scan progressed past the checkpoint lsn 186 4056481576
110620 17:30:52 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Read
Funny thing is, I ran this script earlier today and it didn't fail, but it did now.
The most likely explanation is running out of address space; please post your entire my.cnf.
Running a 32-bit OS in production is not a good idea.
However, what you should do is:
Reproduce the fault on the same MySQL version on a non-production machine
Check that you are using a properly supported, current build from Oracle. If you are not, install one of those and reproduce the problem. If you are running Red Hat (or similar), you can use Oracle's RPMs. They also provide some other distributions' packages and binaries in a tar.gz file. Your package vendor may apply some dodgy patches to MySQL. I never run OEM MySQL builds in production.
You seem to be running 32-bit. Ensure that you're not running out of address-space.
If you can reproduce the bug using a standard Oracle build on a supported operating system, you are not running out of memory / address space, and there is no hardware fault, then you can submit the bug to Oracle.
The best idea is to reproduce the test-case with the minimum amount of data / table size.
Sounds like your innodb_log_file_size is not big enough - try 256 MB in my.cnf:
innodb_log_file_size=256M
You need to shut MySQL down cleanly, remove the old log files, then restart - MySQL will recreate the log files at the new size.
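A minimal sketch of that procedure (paths assume a typical Linux install; keep the old files around until MySQL has restarted cleanly):
mysqladmin shutdown                       # a clean shutdown is essential
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
# set innodb_log_file_size=256M under [mysqld] in my.cnf, then:
service mysql start                       # InnoDB recreates the log files at the new size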
Strange... I don't know how well optimized ALTER TABLE actually is in MySQL; perhaps it consumes a lot of resources. If the table contains a lot of data, try moving all your data into a temporary table and emptying the main one, then do your ALTER TABLE and push the data back. If it has to do work on each row, you can split the work up like this and do a few records at a time.
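A rough sketch of that approach with the table from the question (take a backup first, and pause writers while the main table is empty):
CREATE TABLE FieldDefaultValue_tmp LIKE FieldDefaultValue;
INSERT INTO FieldDefaultValue_tmp SELECT * FROM FieldDefaultValue;
TRUNCATE TABLE FieldDefaultValue;
ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 COLLATE utf8_bin;
INSERT INTO FieldDefaultValue SELECT * FROM FieldDefaultValue_tmp;  -- or in batches of a few hundred rows
DROP TABLE FieldDefaultValue_tmp;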