InnoDB: SHOW ENGINE INNODB STATUS truncating queries in the deadlock section

Recently I've been having trouble troubleshooting deadlocks because the engine log does not display the full queries involved. Hibernate generates some pretty long queries, and in the two cases I've been concerned about, InnoDB limits itself to 1024 characters when printing both the thread information (the line starting with "MySQL thread id...") and the queries themselves ("select * from...").
While looking into this, I found a bug report complaining about the same issue, with an associated patch that should let us fix it. However, there is no DB variable named max_query_len in either 5.0 or 5.1. This is strange to me, as the old limit was 300 characters (and I'm seeing more than that printed), yet the limit does not actually seem to be configurable. My question is what actually happened with this - is there a later patch I'm not seeing that changed the fix? The 5.0.12 changelog does indeed suggest that the issue was fixed:
"SHOW ENGINE INNODB STATUS now can display longer query strings. (Bug #7819)"
but it doesn't seem to me that it actually is fixed. The oldest version we could possibly have been running when the deadlock happened is 5.0.56, so we should at least have the 5.0.12 fix. We do have newer versions running too, though, so a later change could have reintroduced the truncation. Is anyone aware of what happened here to lower the limit on what gets printed?
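For anyone comparing notes, a quick way to confirm the variable is absent and to see the truncation in context, run from the mysql client:

    SHOW VARIABLES LIKE '%query_len%';   -- no max_query_len on 5.0/5.1, per the above
    SHOW ENGINE INNODB STATUS\G          -- deadlock section prints the truncated queries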

Related

Any way to 'hack' an IBD file into MySQL without using IMPORT TABLESPACE? (MySQL 8.0.19 tends to segfault during import)

Due to an NDA I have difficulty providing an actual sample, which also makes filing a bug report fairly useless.
It is a completely normal table, nothing special, with 7 simple indexes on 1-2 columns each.
The problem I have is that I regularly have to transfer a table between two MySQL 8.0.19 servers (the latest Percona stable they provide for Docker), and mysqld crashes with signal 11 (segfault) every single time.
I've done this the same way for a year without issue, and the table has barely changed; the crashing started recently.
I have this issue on 4 servers, 3 of them using Docker, one a normal Debian APT package.
What I have tried:
1) I rebuilt the entire table to ensure the IBD file has no internal corruption.
2) I tried splitting the table into multiple parts; in that case the segfault happens on only one of the parts, though I do not have the time to narrow it down further to a single row.
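For context, the transfer follows the standard transportable tablespace procedure from the manual; a minimal sketch (the table name t is illustrative):

    -- on the source server:
    FLUSH TABLES t FOR EXPORT;        -- quiesces the table and writes t.cfg next to t.ibd
    -- copy t.ibd and t.cfg to the destination host
    UNLOCK TABLES;

    -- on the destination server:
    CREATE TABLE t (...);             -- identical definition, taken from SHOW CREATE TABLE
    ALTER TABLE t DISCARD TABLESPACE;
    -- place the copied t.ibd and t.cfg in the schema directory
    ALTER TABLE t IMPORT TABLESPACE;  -- this is the step that crashes mysqld

The crash report: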
06:55:37 UTC - mysqld got signal 11 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0x7fee7cde6100
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7ff03c0f4d80 thread_stack 0x46000
/usr/sbin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2e) [0x559950e1380e]
/usr/sbin/mysqld(handle_fatal_signal+0x351) [0x55994ff2e641]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110e0) [0x7ff04770e0e0]
/usr/sbin/mysqld(lob::z_index_page_t::get_n_index_entries() const+0x8) [0x5599512a10f8]
/usr/sbin/mysqld(lob::z_index_page_t::import(unsigned long)+0x18) [0x5599512a1628]
/usr/sbin/mysqld(PageConverter::update_page(buf_block_t*, unsigned long&)+0x3e1) [0x559950ff8051]
/usr/sbin/mysqld(PageConverter::operator()(unsigned long, buf_block_t*)+0x322) [0x559950ff86a2]
/usr/sbin/mysqld(fil_tablespace_iterate(dict_table_t*, unsigned long, PageCallback&)+0x9ef) [0x55995122027f]
/usr/sbin/mysqld(row_import_for_mysql(dict_table_t*, dd::Table*, row_prebuilt_t*)+0xdc6) [0x559950ff9ad6]
/usr/sbin/mysqld(ha_innobase::discard_or_import_tablespace(bool, dd::Table*)+0x422) [0x559950ecf742]
/usr/sbin/mysqld(Sql_cmd_discard_import_tablespace::mysql_discard_or_import_tablespace(THD*, TABLE_LIST*)+0x1bc) [0x55994fe787cc]
/usr/sbin/mysqld(mysql_execute_command(THD*, bool)+0x2645) [0x55994fe07b15]
/usr/sbin/mysqld(mysql_parse(THD*, Parser_state*)+0x360) [0x55994fe0ae70]
/usr/sbin/mysqld(dispatch_command(THD*, COM_DATA const*, enum_server_command)+0x1e93) [0x55994fe0d203]
/usr/sbin/mysqld(do_command(THD*)+0x168) [0x55994fe0deb8]
/usr/sbin/mysqld(+0xfcc9c8) [0x55994ff1f9c8]
/usr/sbin/mysqld(+0x23fdeb5) [0x559951350eb5]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4) [0x7ff0477044a4]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7ff04578fd0f]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
My main questions:
1)
Given that I have an exported IBD file (created as in the manual, with a flush and a CFG file) and create the new table using exactly the same syntax as from "SHOW CREATE TABLE": is there no way to access it like a MyISAM table?
2)
Given that MyISAM is heading toward deprecation, I am reluctant to use it for this purpose, even though it would probably make this much easier.
Is there any idea of how long it will still be available?
Like I said at the beginning, I have limitations: I can't provide a reproducible case, and finding the row that causes this is too time-consuming.
Update
Uncompressing the table solved the segmentation faults; I hope it won't happen with the larger tables.
In this case I just lose 30GB of storage, which was an acceptable trade-off.
In case a MySQL developer reads this: the compression of BLOBs seems to have a serious bug somewhere.
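For anyone hitting the same crash, the "uncompressing" workaround from the update would look something like this, assuming the table was using ROW_FORMAT=COMPRESSED (table name illustrative):

    ALTER TABLE t ROW_FORMAT=DYNAMIC;  -- rebuild without compression before exporting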
As for question 1: in a word, no.
InnoDB pages contain a header, and the tablespace ID is encoded in each page header. If this doesn't match what is in the data dictionary, created when the server was initialized, it won't work. IMPORT TABLESPACE rewrites the headers of each page in the imported tablespace.
What you're asking for simply isn't possible with InnoDB.

Table `in use` on a MySQL server in phpMyAdmin

I'm currently working on a website, and there is a bug that has happened a few times already. I know this question has been asked before, but it hasn't been answered.
The problem is the following: a table in the database becomes unusable and is probably corrupted.
When I check in phpMyAdmin, the table is shown as "in use", and I can't open it or read its data (if the data still exists).
Screenshot: https://imgur.com/a/LSTygFX
It is quite possible this bug is due to some interaction with the table, but it is very unlikely that these interactions haven't happened before.
So far, I've done some research and only found one person with a solution.
I can't tell if it works yet, but he found that his table becoming "in use" was due to the storage engine being InnoDB (same for me), so I switched it to MyISAM.
You should never change your InnoDB storage engine to MyISAM, because InnoDB has real benefits over MyISAM in performance and locking.
Benefits aside, converting from InnoDB to MyISAM causes many problems, and the database size also grows. So you should use your previous backup of the database, which was based on the InnoDB storage engine. For the "in use" issue itself: in phpMyAdmin, select the desired database, check the affected tables, and at the bottom, in the "With selected" dropdown, choose "Repair table". Most of the time this resolves the issue; the equivalent SQL is shown below.
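The same thing can be done in SQL directly; note that REPAIR TABLE only works for MyISAM (plus ARCHIVE and CSV), while an InnoDB table is rebuilt with a null ALTER (table name illustrative):

    CHECK TABLE mytable;                 -- reports corruption, works for both engines
    REPAIR TABLE mytable;                -- MyISAM, ARCHIVE, CSV only
    ALTER TABLE mytable ENGINE=InnoDB;   -- rebuilds an InnoDB table in place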

Automatically repair DB table after crash, or switch to InnoDB

Sometimes tables in the MySQL database crash after a server restart or after a lost connection to the database.
This has happened several times over the last 3-4 weeks.
This is what the error message looks like:
145 - Table './xxx/sessions' is marked as crashed and should be
repaired select value from sessions where sesskey =
'60fa1fab3a3d285cfb7bc1c93bb85f64' and expiry > '1395226673'
[TEP STOP]
So far it has been the tables "sessions" and "whos_online" that have crashed. If I repair the tables in phpMyAdmin they work fine again.
After the last crash I changed "sessions" from MyISAM to InnoDB. The table "whos_online" still uses MyISAM.
I use osCommerce 2.2 rc2a and I'm looking for any thoughts and suggestions on this matter.
One solution might be to change both of these tables to InnoDB, since it's supposed to be self-healing (a conversion example is sketched after my questions). Is that a good or bad idea?
Another would be to keep them in MyISAM and do something like this in the PHP file that echoes the error message:

    if (strpos($error, 'is marked as crashed and should be repaired') !== false) {
        // run a table repair script here
    }
Would that be a good or bad idea?
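For the first option, the conversion is a single statement per table, e.g.:

    ALTER TABLE whos_online ENGINE=InnoDB;  -- "sessions" was already converted the same way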
Here are my server specs:
Database: MySQL 5.5.36-cll
PHP Version: 5.3.15
First of all, you should address the issue of tables becoming corrupted in the first place. InnoDB tries to repair broken files, but it cannot work magic; repeated unsafe shutdowns or other accidents may seriously corrupt your database!
If you only have a small website, either variant is fine. MyISAM is a little faster (sometimes), while InnoDB provides some extended features. If the server is strong enough, you will likely encounter no issues. I would stick with InnoDB in most cases, as it keeps data consistent and throws errors if files are broken, unlike MyISAM tables, which keep being used and then sometimes throw errors in production...
But if something severely breaks, it takes more effort to get MySQL with InnoDB back up and running if it does not recover automatically. There are different InnoDB recovery modes, which are only reachable from the shell of your server and require modifying the config file, starting the server by hand, reading its output, and maybe taking some other actions (an example follows). If you have no experience with these things, you might want to stick with MyISAM instead: the server always starts, and if a table is utterly broken, you might need to import a backup, but that is easier than editing config files and reading database output.
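A minimal sketch of what "modifying the config file" means here, assuming the config lives at /etc/mysql/my.cnf (the path varies by distribution):

    [mysqld]
    # start at 1 and raise only if the server still refuses to start;
    # values of 4 and above can permanently lose data
    innodb_force_recovery = 1

With the server started this way, dump the affected tables, then remove the setting and restart normally.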

MySQL lock times in the slow query log

I have an application that has been running fine for quite a while, but recently a couple of items have started popping up in the slow query log.
All the queries are complex, ugly multi-join SELECT statements that could use refactoring. I believe all of them involve BLOBs, meaning their temporary tables get written to disk. The part that makes me curious is why some of them have a lock time associated with them. None of the queries have any specific locking set by the application. As far as I know, by default you can read against locks unless explicitly specified otherwise.
So my question: what scenarios would cause a SELECT statement to have to wait for a lock (and thereby be reported in the slow query log)? Assume both InnoDB and MyISAM environments.
Could the disk interaction be listed as some sort of lock time? If yes, is there documentation that says so?
Thanks in advance.
MyISAM will give you concurrency problems: an entire table is completely locked while an insert is in progress.
InnoDB should have no problem with reads, even while a write/transaction is in progress, thanks to its MVCC.
However, just because a query shows up in the slow query log doesn't mean the query is slow - how many seconds did it take, and how many records were examined?
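The numbers asked about above are recorded in the slow log itself; each entry carries a header along these lines (values illustrative):

    # Query_time: 12.345678  Lock_time: 2.000000 Rows_sent: 10  Rows_examined: 500000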
Put "EXPLAIN" in front of the query to get a breakdown of the examinations going on for the query.
here's a good resource for learning about EXPLAIN (outside of the excellent MySQL documentation about it)
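For illustration, with hypothetical tables (not from the question):

    EXPLAIN SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at > '2014-01-01';

The output shows, per table, which index is used and roughly how many rows are examined.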
I'm not certain about MySQL, but I know that in SQL Server SELECT statements do NOT read past locks by default. Doing so would allow you to read uncommitted data and potentially see duplicate records or miss a record entirely. The reason is that if another process is writing to the table, the database engine may decide it's time to reorganize some data and shift it around on disk, so it moves a record you already read to the end and you see it again, or it moves one from the end up to a spot you've already passed.
There's a guy on the net somewhere who actually wrote a couple of scripts to prove that this happens; I tried them once, and it only took a few seconds before a duplicate showed up. Of course, he designed the scripts in a fashion that made it more likely to happen, but it proves that it definitely can.
This behaviour is okay if your data doesn't need to be accurate, and it can certainly help prevent deadlocks. However, if you're working on an application dealing with something like people's money, it's very bad.
In SQL Server you can use the WITH (NOLOCK) hint to tell your SELECT statement to ignore locks. I'm not sure what the equivalent in MySQL would be, but maybe someone else here can say.

How to track down a Drupal max_allowed_packet error?

One of my staging sites has recently started spewing huge errors on every admin page, along the lines of:
User warning: Got a packet bigger than 'max_allowed_packet' bytes query: UPDATE cache_update SET data = ' ... ', created = 1298434692, expire = 1298438292, serialized = 1 WHERE cid = 'update_project_data' in _db_query() (line 141 of /var/www/vhosts/mysite/mypath/includes/database.mysqli.inc). (where "..." is about 1.5 million characters worth of serialized data)
How should I go about tracking down where the error originates? Would adding debugging code to _db_query do any good, since it gets called so much?
No need to track this down, because I don't think you can fix it.
This is the cache from update.module, containing information about which modules have updated versions and so on. So this is coming from one of the _update_cache_set() calls in that module.
Based on a wild guess, I'd say it is the one in this function: http://api.drupal.org/api/drupal/modules--update--update.fetch.inc/function/_update_refresh/6
It is basically building up a huge array with information about all the projects on your site and trying to store it as a single serialized value.
How many modules do you have installed on this site?
I can think of three ways to "fix" this error:
Increase the max_allowed_packet size (the max_allowed_packet setting in my.cnf; see the example after this list).
Disable update.module. (It's not that useful on a staging/production site anyway, since you need to update on a dev site first.)
Disable some modules ;)
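A sketch of the first option; the 64M value is an assumption, size it above your largest cache row:

    -- at runtime (applies to new connections; resets on server restart):
    SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;

and persistently, in the [mysqld] section of my.cnf:

    max_allowed_packet = 64M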
I had a similar error and went round and round for about an hour.
I increased the memory limit to 512M and still had the issue, figured that was enough, so I went looking elsewhere.
I cleared the caches with drush, still got the error, and then looked at the database tables.
I noticed that all the cache tables were cleared except cache_update. I truncated that table and bam, everything was working normally.
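For reference, the cleanup amounted to a single statement (cache_update is the Drupal core table named in the error above):

    TRUNCATE TABLE cache_update;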
Before I got the memory limit error, I got a max_input_vars error, since I am on PHP 5.4. But this question and answer led me to this fix. Not quite sure how or why it worked, but it did.