Why undo log entries in the MySQL InnoDB buffer pool (MySQL 5.7)

Long transactions generate several old versions of the rows in the undo log, which is stored in the ibdata file. Is there any possibility that undo log entries are also stored in the InnoDB buffer pool?

They cannot be stored "permanently" in the buffer pool, because the buffer pool is lost in a power failure, and the undo data is needed then, too.
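In other words, undo log pages do pass through the buffer pool like any other page; they just cannot live only there. On 5.7 you can observe this directly with a query like the following (the manual warns that querying INNODB_BUFFER_PAGE can be expensive on large buffer pools, so avoid it on busy production servers):
-- Count undo-log pages currently cached in the buffer pool (MySQL 5.7)
SELECT page_type, COUNT(*) AS pages
FROM information_schema.INNODB_BUFFER_PAGE
WHERE page_type = 'UNDO_LOG'
GROUP BY page_type;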

Related

Effect of ALTER TABLE ... IMPORT TABLESPACE on the InnoDB buffer pool in MySQL?

I am trying to understand the impact of IMPORT TABLESPACE (via the ALTER command) on InnoDB's buffer pool.
Since the tablespace is being imported and rows are being inserted, would all pages be brought into the buffer pool during the import, or does IMPORT TABLESPACE skip the buffer pool completely?
The only information I could find is in storage/innobase/row/row0import.cc, in a comment under Phase III - Flush changes to disk, which says:
Ensure that all pages dirtied during the IMPORT make it to disk. The
only dirty pages generated should be from the pessimistic purge of
delete marked records that couldn't be purged in Phase I
The MySQL version is 5.7.
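For context, the transportable-tablespace workflow being asked about looks roughly like this (the table name t1 is illustrative):
-- On the source server: quiesce the table and copy its files
FLUSH TABLES t1 FOR EXPORT;
-- (copy t1.ibd and t1.cfg out of the source data directory)
UNLOCK TABLES;
-- On the destination server:
ALTER TABLE t1 DISCARD TABLESPACE;
-- (place the copied t1.ibd/t1.cfg files into the destination data directory)
ALTER TABLE t1 IMPORT TABLESPACE;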

How does MySQL do the redo operation?

We know that in SQL Server the transaction log file contains both redo and undo records. However, the MySQL transaction log only contains the undo log, so how does MySQL perform the redo operation if it has written only part of the data for a successfully committed transaction when the server goes down? The redo operation is necessary when rebooting the server.
Edit: Sorry, that was a typo. I meant: why doesn't MySQL have the redo log instead of the undo log?
This is not done at the MySQL level; it is taken care of by the various table types that support transactions (e.g. InnoDB).
InnoDB being the most widely used transactional table type in MySQL, my answer focuses on this table type only. For other transactional table types, see their own documentation for more information on redo logs.
The MySQL documentation on InnoDB redo logs describes how InnoDB stores them:
By default, the redo log is physically represented on disk as a set of
files, named ib_logfile0 and ib_logfile1. MySQL writes to the redo log
files in a circular fashion. Data in the redo log is encoded in terms
of records affected; this data is collectively referred to as redo.
The passage of data through the redo log is represented by an
ever-increasing LSN value.
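You can watch that LSN advance on a live server; for example (the numbers shown are illustrative):
SHOW ENGINE INNODB STATUS\G
-- The LOG section of the output contains lines such as:
--   Log sequence number 526819726
--   Last checkpoint at  526712345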

Can MySQL InnoDB corruption occur that the storage engine cannot recover from if there is NO software bug involved?

I am working on an embedded Linux system running MySQL 5.1. In rare cases QA reports systems that fail to come up properly because mysqld does not start. When this happens, the MySQL log looks similar to this excerpt:
150716 14:29:42 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
150716 14:29:42 InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 133478.
InnoDB: Doing recovery: scanned up to log sequence number 0 133478
150716 14:29:42 InnoDB: Started; log sequence number 0 133478
/usr/libexec/mysqld: Unknown error 130
150716 14:29:42 [ERROR] Can't open the mysql.plugin table. Please run the mysql_upgrade script to create it.
150716 14:29:42 [ERROR] Fatal error: Can't open and lock privilege tables: Incorrect file format 'host'
This is probably because this is an embedded device without a power switch: it is powered off by unplugging the network cable, which kills the PoE supply. In that case mysqld is of course terminated quite abnormally.
my.cnf contains nothing fancy besides a size limit of 18 MB.
Question
Is it possible that InnoDB tables get corrupted and recovery is NOT possible if there is NO bug involved (either in MySQL itself or e.g. a faulty fsync() implementation)? Are there situations that can cause a corruption (that the DB cannot be recovered from) even if all software components are working correctly? Can such a DB be safely used in an environment where power failures occur "in normal operation"?
What I am ultimately asking is:
Is there a point searching for a fix to this problem or is there no fix to this problem whatsoever?
Is it possible that InnoDB tables get corrupted if there is NO bug involved (either in MySQL itself or e.g. a faulty fsync() implementation)?
Yes, e.g. hardware failure.
Are there situations that can cause a corruption even if all software components are working correctly?
Yes, e.g. hardware failure.
Can such a DB be safely used in an environment where power failures occur "in normal operation"?
Yes, but only if your hardware works "normally".
What I am ultimately asking is: Is there a point searching for a fix to this problem, or is there no fix to this problem whatsoever?
It is usually very difficult to fix a database once it is corrupted. Keep backups.
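For example, a minimal logical backup (assuming all tables are InnoDB, so --single-transaction yields a consistent snapshot without locking) could be scheduled as:
mysqldump --all-databases --single-transaction --routines --events > backup.sql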
This article may help you:
https://dev.mysql.com/doc/refman/5.6/en/innodb-init-startup-configuration.html
Caution
InnoDB is a transaction-safe (ACID compliant) storage engine for MySQL that has commit, rollback, and crash-recovery capabilities to protect user data. However, it cannot do so if the underlying operating system or hardware does not work as advertised. Many operating systems or disk subsystems may delay or reorder write operations to improve performance. On some operating systems, the very fsync() system call that should wait until all unwritten data for a file has been flushed might actually return before the data has been flushed to stable storage. Because of this, an operating system crash or a power outage may destroy recently committed data, or in the worst case, even corrupt the database because of write operations having been reordered. If data integrity is important to you, perform some “pull-the-plug” tests before using anything in production. On OS X 10.3 and up, InnoDB uses a special fcntl() file flush method. Under Linux, it is advisable to disable the write-back cache.
On ATA/SATA disk drives, a command such as hdparm -W0 /dev/hda may work to disable the write-back cache. Beware that some drives or disk controllers may be unable to disable the write-back cache.
With regard to InnoDB recovery capabilities that protect user data, InnoDB uses a file flush technique involving a structure called the doublewrite buffer, which is enabled by default (innodb_doublewrite=ON). The doublewrite buffer adds safety to recovery following a crash or power outage, and improves performance on most varieties of Unix by reducing the need for fsync() operations. It is recommended that the innodb_doublewrite option remains enabled if you are concerned with data integrity or possible failures. For additional information about the doublewrite buffer, see Section 14.9, “InnoDB Disk I/O and File Space Management”.
Caution
If reliability is a consideration for your data, do not configure InnoDB to use data files or log files on NFS volumes. Potential problems vary according to OS and version of NFS, and include such issues as lack of protection from conflicting writes, and limitations on maximum file sizes.
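In practice, the durability settings the manual alludes to map onto my.cnf options like these (a conservative sketch, not tuning advice; the values shown are the durability-first choices, not necessarily your version's defaults):
[mysqld]
# Flush and sync the redo log on every transaction commit
innodb_flush_log_at_trx_commit = 1
# Keep the doublewrite buffer enabled to survive torn page writes
innodb_doublewrite = 1
# Sync the binary log on every commit as well
sync_binlog = 1
# Bypass the OS page cache so fsync behaves more predictably
innodb_flush_method = O_DIRECT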
Did you upgrade the MySQL version but fail to run mysql_upgrade? That's what the error is saying.
I finally found the root cause. The problem does not actually occur in the InnoDB tables, but in the system tables.
In MySQL 5.1 the system tables are stored using the MyISAM engine, which makes them very fragile on power loss.
For all system tables, the content of the MYI (index) and MYD (data) files was lost.
With this data missing, the rest of the databases of course had problems...
The important hint for me was
mysql.plugin table
I finally looked into the directory containing the system tables and saw that they were using the MyISAM storage engine. The consequences are then quite obvious.
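Instead of poking around the data directory, a query like this shows the same thing:
-- List the storage engine of each system table (MyISAM on MySQL 5.1)
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'mysql';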
(Only) Solution:
Go to a newer version (I used MariaDB in my case).
You cannot use InnoDB as the storage engine for the system tables on MySQL 5.1.

MySQL 5.6 InnoDB running out of single-table tablespace IDs

I am looking for general information regarding this message:
InnoDB: Warning: you are running out of new single-table tablespace id's.
InnoDB: Current counter is 2152000000 and it must not exceed 4294967280!
InnoDB: To reset the counter to zero you have to dump all your tables and
InnoDB: recreate the whole InnoDB installation.
Which counter? How do you query it? Does a full dump and restore fix this problem?
From Percona Support:
When you create a table in InnoDB, that counter is increased by 1.
The tablespace ID appears to be a 32-bit value.
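There is no status variable that exposes the counter directly, but on 5.6 the high-water mark can be approximated from the data dictionary views (dropped tablespaces no longer appear, so the real counter may be higher):
-- Highest tablespace ID currently assigned (MySQL 5.6)
SELECT MAX(space) FROM information_schema.INNODB_SYS_TABLESPACES;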
In my case we created lots of temporary tables. On MySQL 5.6 the default engine for temporary tables is InnoDB, hence the problem. Once I changed it to MyISAM the warnings disappeared.
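The setting involved here is default_tmp_storage_engine (available since MySQL 5.6.3), which controls the engine used for CREATE TEMPORARY TABLE:
-- Revert explicitly created temporary tables to MyISAM (affects new sessions)
SET GLOBAL default_tmp_storage_engine = MyISAM;
-- or persistently in my.cnf under [mysqld]:
--   default_tmp_storage_engine = MyISAM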

How to Sync MySQL Logs

I recently experienced a DDoS attack. It overwhelmed and crashed my server, and upon restart all of the InnoDB tables in every database on the server were corrupted.
I have since rebuilt the databases and all of the tables. I had to recreate innodb_table_stats and innodb_index_stats.
Everything now seems to be running fine and the website is up, but one persistent error keeps coming up. The general log is filling with these quite rapidly:
2014-07-13 15:28:37 7fd70b374700 InnoDB: Error: page 193 log sequence number 526819726
InnoDB: is in the future! Current system log sequence number 156433332.
InnoDB: Your database may be corrupt or you may have copied the InnoDB
InnoDB: tablespace but not the InnoDB log files. See
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: for more information.
I tried changing the number and size of the MySQL redo log files. It all went fine with no errors, but I am still seeing these errors mount up in the new log.
What else can I try to sync up these log sequence numbers? I am pretty new to this backend database work; I am being forced to learn since my host's tech support sucks.
I am currently on: CentOS 6.5 x86_64, Virtuozzo VPS, WHM 11.44.0 (build 22)
MySQL: 5.6.17
"page ... log sequence number" is when the page was last modified. The LSN in the header is larger than one in the redo log.
The easiest way to fix it is to rebuild the table with noop ALTER TABLE.
ALTER TABLE mytable ENGINE InnoDB;
It will rebuild PRIMARY index as well as its secondary indexes. After that the error should go away.
The ALTER is going to block the table, so if it's large and if the site is in production the best option is to rebuild it with pt-online-schema-change. It will do the same, but won't block the table but for brief moment.
pt-online-schema-change --alter "ENGINE=InnoDB" D=sakila,t=actor
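Note that pt-online-schema-change will not actually change anything unless the --execute flag is also given (without it, recent Percona Toolkit versions insist on --dry-run or --execute), so add --execute to the command above to run it for real.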