MySQL Crashing on SQL

For the past 4 days, MySQL has been crashing while running scripts, roughly once a day.
This is the error log:
key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=39
max_threads=100
threads_connected=34
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 336508 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd: 0x92025f38
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x95dce36c thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d) [0x6b65ad]
/usr/sbin/mysqld(handle_segfault+0x494) [0x3823d4]
[0x110400]
/usr/sbin/mysqld(MYSQLparse(void*)+0x6aa) [0x3b42da]
/usr/sbin/mysqld(mysql_parse(THD*, char const*, unsigned int, char const**)+0x23e) [0x39ce6e]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xf35) [0x39df25]
/usr/sbin/mysqld(do_command(THD*)+0xf3) [0x39f0e3]
/usr/sbin/mysqld(handle_one_connection+0x2a0) [0x38dbd0]
/lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0x93d96e]
/lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xd78a4e]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x86982ef4 is an invalid pointer
thd->thread_id=2906
thd->killed=NOT_KILLED
The box has 2GB of RAM; by my calculations it shouldn't be anywhere near the memory limit. I've specifically lowered the memory requirements to a minimum, but I'm still getting the errors.
mysql> show variables like "sort_buffer%";
+------------------+---------+
| Variable_name | Value |
+------------------+---------+
| sort_buffer_size | 1048576 |
+------------------+---------+
It crashed today on this SQL query:
ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 collate utf8_bin;
Has anyone had a similar experience?
EDIT:
The table in question actually doesn't contain much data; the database has much larger tables:
mysql> desc fielddefaultvalue;
+----------------------+---------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+---------------+------+-----+---------+----------------+
| fielddefaultvalue_Id | bigint(20) | NO | PRI | NULL | auto_increment |
| version | bigint(20) | NO | | NULL | |
| value_field | varchar(2000) | YES | MUL | NULL | |
| optimistic_version | bigint(20) | NO | | NULL | |
| property_fk | bigint(20) | YES | MUL | NULL | |
| esg_fk | bigint(20) | YES | MUL | NULL | |
+----------------------+---------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)
mysql> select count(*) from fielddefaultvalue;
+----------+
| count(*) |
+----------+
| 690 |
+----------+
1 row in set (0.00 sec)
It also fails on batches of 400-500 small inserts, but not every time; the same script can run fine once and crash the server the next time.
EDIT 2: After crash recovery the error log also reports:
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
my.cnf
lower_case_table_names = 1
key_buffer = 16M
key_buffer_size = 128M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 100
table_cache = 512
thread_concurrency = 4
sort_buffer_size = 1M
read_buffer_size = 1M
table_open_cache = 512
read_rnd_buffer_size = 8M
innodb_file_per_table = 1
open_files_limit = 65536
default_character_set=utf8
query_cache_limit = 1M
query_cache_size = 64M
expire_logs_days = 10
max_binlog_size = 250M
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
EDIT: 5 hours later
It just crashed again on the same "regular" script; it's a 25,000-line update script on a date column.
Same error message:
InnoDB: Log scan progressed past the checkpoint lsn 186 4056481576
110620 17:30:52 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Read
The funny thing is, I ran this script earlier today without failure, but it crashed now.

The most likely explanation is that you're running out of address space; please post your entire my.cnf.
Running a 32-bit OS in production is not a good idea.
However, what you should do is:
1. Reproduce the fault on the same MySQL version on a non-production machine.
2. Check that you are using a properly supported, current build from Oracle. If you are not, install one of those and reproduce the problem. If you are running Red Hat (or similar), you can use Oracle's RPMs; they also provide packages for some other distributions, and binaries in a tar.gz file. Your package vendor may have patched MySQL with some dodgy patches; I never run OEM MySQL builds in production.
3. You seem to be running 32-bit, so ensure that you're not running out of address space.
4. If you can reproduce the bug using a standard Oracle build on a supported operating system, you are not running out of memory / address space, and there is no hardware fault, then you can submit the bug to Oracle.
The best idea is to reproduce the test case with the minimum amount of data / table size.
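For example, a minimal repro sketch based on the schema posted in the question (the filler data and row count are illustrative, not your real data):
-- scratch database, same MySQL build as production
CREATE TABLE FieldDefaultValue (
  fielddefaultvalue_Id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  version BIGINT NOT NULL,
  value_field VARCHAR(2000),
  optimistic_version BIGINT NOT NULL,
  property_fk BIGINT,
  esg_fk BIGINT
) ENGINE=InnoDB;

-- ~690 rows of max-width filler, mirroring the reported row count
INSERT INTO FieldDefaultValue (version, value_field, optimistic_version)
SELECT 1, REPEAT('x', 2000), 1 FROM information_schema.columns LIMIT 690;

-- the statement that crashed the server
ALTER TABLE FieldDefaultValue MODIFY value_field VARCHAR(2000)
  CHARACTER SET utf8 COLLATE utf8_bin;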

Sounds like your innodb_log_file_size is not big enough. Try 256 MB in my.cnf:
innodb_log_file_size=256M
You need to shut the server down cleanly, move the old log files out of the way, then restart; MySQL will recreate them at the new size.
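For example (a sketch assuming the default /var/lib/mysql datadir and a service-based init; adjust the paths and service name to your system):
# flush everything on shutdown, then stop cleanly
mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
sudo service mysql stop

# move the old redo logs aside (keep them until MySQL starts cleanly)
sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/

# set the new size in my.cnf under [mysqld]:
#   innodb_log_file_size = 256M

# start MySQL; it recreates the redo logs at the new size
sudo service mysql start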

Strange... I don't know how well optimized ALTER TABLE actually is in MySQL; perhaps it consumes a lot of resources. If the table contains a lot of data, try moving all your data into a temporary table and emptying the main one; then do your ALTER TABLE and push the data back. If it has to do work on each row, you can split the work up this way and do a few records at a time, as sketched below.
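A minimal sketch of that approach (the batch size and id ranges are arbitrary):
-- park the rows elsewhere so the ALTER runs on an empty table
CREATE TABLE FieldDefaultValue_tmp LIKE FieldDefaultValue;
INSERT INTO FieldDefaultValue_tmp SELECT * FROM FieldDefaultValue;
TRUNCATE TABLE FieldDefaultValue;

-- alter the now-empty table
ALTER TABLE FieldDefaultValue MODIFY value_field VARCHAR(2000)
  CHARACTER SET utf8 COLLATE utf8_bin;

-- push the data back in small batches by primary key range
INSERT INTO FieldDefaultValue
  SELECT * FROM FieldDefaultValue_tmp
  WHERE fielddefaultvalue_Id BETWEEN 1 AND 100;
-- ...repeat for the remaining id ranges, then:
DROP TABLE FieldDefaultValue_tmp;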

Related

MySQL Database Repair Taking Extremely Long

I have a MySQL MyISAM table which is approximately 7GB in size and has quite a few indexes. The table got corrupted yesterday, and the MySQL repair has now been running for 12+ hours.
I would like to know how long a MySQL repair should actually take for such a table. (I can't get the exact number of rows and size at the moment, due to the repair running.)
The relevant variables are:
| myisam_max_sort_file_size | 9223372036853727232  |
| myisam_mmap_size          | 18446744073709551615 |
| myisam_recover_options    | FORCE                |
| myisam_repair_threads     | 1                    |
| myisam_sort_buffer_size   | 268435456            |
| read_buffer_size          | 67108864             |
| read_only                 | OFF                  |
| read_rnd_buffer_size      | 4194304              |
I was not able to change any of the global variables because I'm on GoDaddy managed hosting.
The repair has always been in the "Repair by sorting" state.
Is there any other way I can speed up this repair process?
Thank you
Edit:
My memory and CPU usage can be seen in the image below (not reproduced here).
I have also tried restoring the database from a 2-day-old backup onto a new database; it has also been stuck on "Repair with keycache" on the same table for the past 5 hours.
I have tried mysqlcheck and REPAIR TABLE, but not myisamchk: I cannot access the database folder in /var/lib/mysql (Permission Denied), and running myisamchk gives "command not found".
It should take minutes. If it hasn't finished after 12 hours, it probably hung and is never going to finish.
MyISAM hasn't really been maintained in over a decade, and it is quite likely you hit a bug. You might stand a better chance with myisamchk if you can get your hands on the raw database files.
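If you do manage to get shell access to the raw files, something along these lines might work (a sketch; the database and table names are placeholders, and this is likely impossible on managed hosting):
# stop MySQL so the table files are not in use
sudo service mysql stop

# repair the table directly via its .MYI index file;
# a large sort buffer speeds up "Repair by sorting"
sudo myisamchk --recover --sort_buffer_size=256M /var/lib/mysql/yourdb/yourtable.MYI

sudo service mysql start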

MySQL Memory Usage with many Tables [closed]

I have a MySQL server with a few hundred thousand tables. This is currently the best design for my system, since these tables are not related to each other and select queries will only be done on a single table. In addition, users will most likely not access the same tables very often.
I have 16GB of RAM, but after about a day MySQL consumes about 90% of it, with total system memory use at 99-100%. I have tried numerous things, but simply can't get the memory usage down.
My innodb_buffer_pool_size is currently 8G, but I had it at 1G with the same issue. I also tried reducing open_files_limit, but that didn't help either.
Here is my output for
SHOW GLOBAL STATUS LIKE '%Open_%';
+----------------------------+----------+
| Variable_name | Value |
+----------------------------+----------+
| Com_show_open_tables | 0 |
| Innodb_num_open_files | 431 |
| Open_files | 0 |
| Open_streams | 0 |
| Open_table_definitions | 615 |
| Open_tables | 416 |
| Opened_files | 4606655 |
| Opened_table_definitions | 4598528 |
| Opened_tables | 4661002 |
| Slave_open_temp_tables | 0 |
| Table_open_cache_hits | 30024782 |
| Table_open_cache_misses | 4661002 |
| Table_open_cache_overflows | 4660579 |
+----------------------------+----------+
And here is my mysqld config:
sql-mode=''
innodb_buffer_pool_size = 8G
open_files_limit=100000
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
bind-address = 127.0.0.1
key_buffer_size = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
Anyone know how to efficiently handle these thousands of tables?
ADDITIONAL INFO
A) Mysqld: https://pastebin.com/PTiz6uRD
B) SHOW GLOBAL STATUS: https://pastebin.com/K4sCmvFz
C) SHOW GLOBAL VARIABLES: https://pastebin.com/Cc64BAUw
D) MySQLTuner: https://pastebin.com/zLzayi56
E) SHOW ENGINE INNODB STATUS: https://pastebin.com/hHDuw6gY
F) top: https://pastebin.com/6WYnSnPm
TEST AFTER SERVER REBOOT (LITTLE MEMORY CONSUMED):
A) ulimit -a: https://pastebin.com/FmPrAKHU
B) iostat -x: https://pastebin.com/L0G7H8s4
C) df -h: https://pastebin.com/d3EttR19
D) MySQLTuner: https://pastebin.com/T3DYDLg8
The first change is at the Linux command line:
ulimit -n 124000
to get past the current limit of 1024 open files. This can be done now; no Linux shutdown/restart is required for it to take effect.
Suggestions to consider for your my.cnf [mysqld] section
table_open_cache=10000 # from 431 available today to a practical upper limit
table_definition_cache=10000 # from 615 available today to a practical upper limit
thread_cache_size=100 # from 8; CAP of 100 suggested by the v8 refman to avoid OOM
max_heap_table_size=64M # from 16M to reduce created_tmp_disk_tables
tmp_table_size=64M # from 16M should always be equal to max_heap_table_size
innodb_lru_scan_depth=100 # from 1024 to reduce CPU workload every SECOND
innodb_log_buffer_size=512M # from 50M to avoid log rotation every 7 minutes
Considering your situation, I would skip the usual one-change-per-day rule, make all the cnf changes at once, and monitor before the next global variable change. A service stop/start is required because MySQL is capped by open_files_limit: even though you requested 100,000, the runtime ulimit of 1024 limited you.
If you copy-paste the block to the end of the [mysqld] section and remove the same-named variables above it, you will spare the next analyst the confusion of duplicate variables.
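After the restart, verify that the new limit actually took effect, and make the higher ulimit persistent (a sketch assuming a non-systemd init; with systemd you would set LimitNOFILE in the MySQL service unit instead):
mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"

# /etc/security/limits.conf - keep the higher limit across reboots
mysql soft nofile 124000
mysql hard nofile 124000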

MySQL Error 1118 (Row size too large) when restoring Django-mailer database

I dumped a working production database from a django app and am trying to migrate it to my local development environment. The production server runs MySQL 5.1, and locally I have 5.6.
When migrating the django-mailer's "messagelog" table, I'm running into the dreaded Error 1118:
ERROR 1118 (42000) at line 2226: Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
I've read lots of stuff online about this error, but none of it has solved my problem.
N.B. This error is not coming from the creation of the table, but rather the insertion of a row with pretty large data.
Notes:
The innodb_file_format and innodb_file_format_max variables are set to Barracuda.
The ROW_FORMAT is set to DYNAMIC on table creation.
The table does not have very many columns. Schema below:
+----------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| message_data | longtext | NO | | NULL | |
| when_added | datetime | NO | | NULL | |
| priority | varchar(1) | NO | | NULL | |
| when_attempted | datetime | NO | | NULL | |
| result | varchar(1) | NO | | NULL | |
| log_message | longtext | NO | | NULL | |
+----------------+------------+------+-----+---------+----------------+
Again, the error happens ONLY when I try to insert a quite large row (message_data is about 5 megabytes); creating the table works fine, and about 500,000 rows are inserted just fine before the failure.
I'm out of ideas; I've tried DYNAMIC and COMPRESSED row formats, and I've triple-checked the values of the relevant InnoDB variables:
mysql> show variables like "%innodb_file%";
+--------------------------+-----------+
| Variable_name | Value |
+--------------------------+-----------+
| innodb_file_format | Barracuda |
| innodb_file_format_check | ON |
| innodb_file_format_max | Barracuda |
| innodb_file_per_table | ON |
+--------------------------+-----------+
The creation code (from SHOW CREATE TABLE) looks like:
CREATE TABLE `mailer_messagelog` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`message_data` longtext NOT NULL,
`when_added` datetime NOT NULL,
`priority` varchar(1) NOT NULL,
`when_attempted` datetime NOT NULL,
`result` varchar(1) NOT NULL,
`log_message` longtext NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=869906 DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC
According to one of the answers to this question, your problem might be caused by changes in MySQL 5.6 (see the InnoDB Notes on http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html):
InnoDB Notes
Important Change: Redo log writes for large, externally stored BLOB
fields could overwrite the most recent checkpoint. The 5.6.20 patch
limits the size of redo log BLOB writes to 10% of the redo log file
size. The 5.7.5 patch addresses the bug without imposing a limitation.
For MySQL 5.5, the bug remains a known limitation.
As a result of the redo log BLOB write limit introduced for MySQL 5.6,
the innodb_log_file_size setting should be 10 times larger than the
largest BLOB data size found in the rows of your tables plus the
length of other variable length fields (VARCHAR, VARBINARY, and TEXT
type fields). No action is required if your innodb_log_file_size
setting is already sufficiently large or your tables contain no BLOB
data.
Note In MySQL 5.6.22, the redo log BLOB write limit is relaxed to 10%
of the total redo log size (innodb_log_file_size *
innodb_log_files_in_group).
(Bug #16963396, Bug #19030353, Bug #69477)
Does it help if you change innodb_log_file_size to something bigger than 50M? (Changing that variable needs some steps to work correctly; see https://dba.stackexchange.com/questions/1261/how-to-safely-change-mysql-innodb-variable-innodb-log-file-size.)
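To check whether redo log capacity is the problem, compare it against the largest row you need to load (a sketch; the 10x rule comes from the release note quoted above):
SHOW VARIABLES LIKE 'innodb_log_file%';
-- innodb_log_file_size * innodb_log_files_in_group should be at least
-- 10 times the largest BLOB/TEXT value being inserted:
SELECT MAX(LENGTH(message_data)) AS largest_blob FROM mailer_messagelog;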
If this is useful for anybody: the #klasske solution did not work for me, but writing this line in my.cnf did:
innodb_file_format=Barracuda
I encountered the same error in my project. I tried a lot of suggestions, such as increasing innodb_log_file_size and innodb_buffer_pool_size, or even disabling strict mode (innodb_strict_mode=0) in the my.cnf file, but nothing worked for me.
What worked for me was the following:
Changing the offending CharFields with a big max_length to TextFields. For example, models.CharField(max_length=4000) to models.TextField(max_length=4000)
Splitting the table into multiple tables after the first solution wasn't enough on its own.
It was only after doing that I got rid of the error.
Recently, the same error haunted me again on the same project. This time, when I was running python manage.py test. I was confused because I had already split the tables and changed the CharFields to TextFields.
So I created another dummy Django project with a different database from my main project, copied the models.py from the main project into the dummy project, and ran migrate. To my surprise, everything went fine.
It dawned on me that something could be wrong with my main project migrations. Perhaps running manage.py test uses my earlier migrations with the offending CharFields? I don't know for sure.
So I disabled the migrations when running tests by editing settings.py and adding the following snippet at the end of the file. It disables the migrations when testing and solves the error.
import sys

class DisableMigrations(object):
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None

# only disable migrations when running the test suite
if 'test' in sys.argv[1:]:
    MIGRATION_MODULES = DisableMigrations()
Doing that solved the problem for me when testing. I hope someone else finds it useful.
Source for the snippet: settings_test_snippet.py
I hit the same error:
ERROR 1118 (42000) at line 1852: Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.
Adding this to the config fixed it:
[mysqld]
innodb_log_file_size = 512M
innodb_strict_mode = 0
On Ubuntu 16.04, the file to edit is /etc/mysql/mysql.conf.d/mysqld.cnf. It works!
Source: http://dn59-kmutnb.blogspot.com/2017/06/error-1118-42000-at-line-1852-row-size.html

How to put a 188MB MyISAM table into memory

For performance reasons I want to put a 188MB table (rebuilt on disk every day) with ~550,000 rows into a MEMORY table. Whenever I try this, I run into a HEAP error...
My server has 1.3GB of free RAM (32-bit, so only 4GB total).
Have you checked the configured MySQL heap table size? Have a look at this:
mysql> show variables like "%heap%";
+---------------------+----------+
| Variable_name | Value |
+---------------------+----------+
| max_heap_table_size | 16777216 |
+---------------------+----------+
1 row in set (0.02 sec)
The default value is 16MB.
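If the table fits, you can raise the limit for your session and copy the table into MEMORY (a sketch; mytable is a placeholder, and note that MEMORY tables cannot hold BLOB/TEXT columns and typically need noticeably more RAM than the on-disk size):
-- session scope; add to my.cnf [mysqld] to persist across restarts
SET max_heap_table_size = 512*1024*1024;
CREATE TABLE mytable_mem ENGINE=MEMORY SELECT * FROM mytable;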

MySQL: modify stopword list for fulltext search

I've searched a lot, and it's said that I have to edit the my.cnf file to change the stopword list. I renamed my-medium.cnf to my.cnf and added the ft_query_expansion_limit and ft_stopword_file settings, and I have restarted MySQL, but they are not taking effect. I don't have admin privileges.
# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
ft_query_expansion_limit = 10
ft_stopword_file = 'D:/mysql/stopword.txt'
mysql> show variables like '%ft%';
+--------------------------+----------------+
| Variable_name | Value |
+--------------------------+----------------+
| ft_boolean_syntax | + -><()~*:""&| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit | 20 |
| ft_stopword_file | (built-in) |
+--------------------------+----------------+
5 rows in set (0.00 sec)
What can I do to modify the stopword list?
In the my.ini text file (MySQL):
ft_stopword_file = ""    # or point it at an empty file, e.g. "empty_stopwords.txt"
ft_min_word_len = 2
Set your minimum word length as needed, but be aware that indexing shorter words (2-3 characters) will increase query time dramatically, especially if the fulltext-indexed columns are large.
Save the file, restart the server.
The next step should be to repair the indexes with this query:
REPAIR TABLE tbl_name QUICK.
However, this will not work if your table is using the InnoDB storage engine. You will have to change it to MyISAM:
ALTER TABLE t1 ENGINE = MyISAM;
So, once again:
1. Edit my.ini file and save
2. Restart your server (this cannot be done dynamically)
3. Change the table engine (if needed) ALTER TABLE tbl_name ENGINE = MyISAM;
4. Perform repair REPAIR TABLE tbl_name QUICK.
Be aware that InnoDB and MyISAM have different performance characteristics: one reads faster, the other writes faster (read more about that elsewhere).