Error code 3019 in MySQL, how to correct?

Error Code: 3019. Undo Log error: No more space left over in system
tablespace for allocating UNDO log pages. Please add new data file to
the tablespace or check if filesystem is full or enable auto-extension
for the tablespace
Is it because the hard disk space is too low, or because the memory is too low?

When you define a tablespace, you define its size. Now the tablespace for the undo logs is full.
Tablespaces do (intentionally) not grow on their own, so having more hard disk space won't directly help. You need to explicitly increase the allocated tablespace size (or enable auto-extension) to allow it to use more of your hard disk.
Alternatively, you might want to look into why the undo logs are running out of space. Either there are many large open transactions (a forgotten COMMIT?), or the tablespace was too small to begin with (or both); either way you should follow up and correct it.
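For example, you could check for long-running transactions that are pinning undo history, and enable auto-extension for the system tablespace. A minimal sketch; the data file spec shown is the common default, and any change to it must match your existing file sizes exactly (see the manual on resizing the system tablespace):
-- Find open transactions that may be holding undo log space:
SELECT trx_id, trx_started, trx_mysql_thread_id
FROM information_schema.INNODB_TRX
ORDER BY trx_started;
-- In my.cnf, allow the system tablespace to grow as needed:
-- innodb_data_file_path = ibdata1:12M:autoextend
A change to innodb_data_file_path requires a server restart to take effect.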

Related

MySQL InnoDB: Differences between WAL, Double Write Buffer, Log Buffer, Redo Log

I am learning MySQL architecture. I came up with the following illustration:
There are 4 concepts that I don't understand well:
double write buffer
log buffer
write-ahead log
redo log
I have read in many documents that the Write-Ahead Log (WAL) is a mechanism for database durability (see the MySQL WAL design document and the Wikipedia article on WAL).
As in the above image, there are two types of buffers involved when flushing data from the in-memory buffer pool to disk: the doublewrite buffer and the log buffer. Why do we need two buffers, and how are they related to the WAL?
Last but not least, what are the differences between the redo log and the WAL? I understand the WAL helps the database recover when something goes wrong (e.g. a power outage or a server crash). Why do we need a redo log alongside the WAL?
The WAL design document you linked to gives a clue:
All the changes to data files are logged in the WAL (called the redo log in InnoDB).
That means WAL and redo log are two different terms for the same log. There is no difference.
The log buffer is an allocation in RAM. All writes to the redo log are saved in the log buffer first, because it's very fast to save some data in RAM. A transaction could be made of many changes affecting many individual rows, and writing to disk for every one of these rows would be too slow. So changes on their way to the redo log are saved in the log buffer first. Periodically, a group of changes in the log buffer are saved to disk, in the redo log. This happens when:
You commit a transaction
The log buffer is full (the log buffer has a fixed size)
Every 1 second regardless of whether the log buffer is full
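You can inspect the settings that govern this behavior; a quick sketch (the values your server reports will depend on its configuration):
-- Size of the in-RAM log buffer:
SHOW VARIABLES LIKE 'innodb_log_buffer_size';
-- Controls whether the redo log is flushed to disk at every commit (1),
-- written and flushed roughly once per second (0), or written at commit
-- but fsynced roughly once per second (2):
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';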
The doublewrite buffer has a totally different purpose. It is actually a segment of the InnoDB tablespace on disk, not in RAM (I think it's confusing that the term "buffer" is used for storage both in RAM and on disk).
The purpose of the doublewrite buffer is to prevent data corruption from partial page writes while modified pages are copied from the InnoDB buffer pool to the tablespace. That is, if MySQL Server were to crash while InnoDB is writing a given page to disk, it could leave that page on disk only partially written. Even with the redo log, there would be no way to recover this page.
So InnoDB first writes every page to a small subset of the tablespace called the doublewrite buffer. Once it has finished writing that page, it can then save the page again to its proper place in the tablespace. If this fails partially, it's okay, because the page has also been written to the doublewrite buffer. Once the page has been saved to its proper location in the tablespace, the copy of that page in the doublewrite buffer is no longer needed, and it can be overwritten the next time there's a page flush from the buffer pool.
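You can check whether the doublewrite buffer is enabled on your server (it is on by default); a minimal sketch:
SHOW VARIABLES LIKE 'innodb_doublewrite';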

How long do dirty database pages usually stay inside memory before getting flushed back to disk in InnoDB MySQL?

By database pages I mean:
https://dev.mysql.com/doc/internals/en/innodb-page-structure.html
Now these pages get loaded into memory when we issue a query against them, they get changed there only, and they get marked as dirty.
I'm not sure whether this depends on the OS or the database, but my question is: how long do these pages usually stay dirty in memory?
Let's say we have a database for a high-load web server with a lot of traffic, and the buffer size is 1GB or so (I'm not sure how much database servers usually have); how much of this 1GB could be dirty pages?
And if the power is lost with no backup power, then all of the changes to these dirty pages get lost, correct? (Basically I want to know: if a power outage occurs, there is no power backup, and a lot of inserts and queries are happening, what is the estimated percentage of dirty data in memory that is going to get lost?)
For example, is there a chance that these dirty pages ever stay dirty for more than 12 or 24 hours on busy servers?
EDIT: by dirty pages I mean the page is modified in memory, for example one row inside it is updated or deleted.
how long do these pages usually stay dirty in memory?
It's variable. InnoDB has a background thread that flushes dirty pages to disk. It flushes a modest number of pages, then does it again after 1 second.
So if you do a lot of updates in a short space of time, you would make a lot of pages dirty. Then the flushing thread would gradually flush them to disk. The idea is that this helps to stretch the work out over time, so a sudden spike of updates doesn't overwhelm your disk.
But it means that "how long do these pages stay dirty in memory" can vary quite a bit. I think in typical cases, it would be done in a few minutes.
Different versions of MySQL flush in different ways. Years ago, the main background thread flushed a fixed number of pages every 1 second. Then they came up with adaptive flushing, so it would increase the flush rate automatically if it detected you were making a lot of changes. Then they came up with a dedicated thread called the page cleaner. I think it's even possible to configure MySQL to run multiple page cleaner threads, but that's not necessary for most applications.
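If you're curious, you can check how many page cleaner threads your server is configured to run, and the I/O budget that paces background flushing. A sketch, assuming MySQL 5.7 or later; the right values depend on your hardware:
SHOW VARIABLES LIKE 'innodb_page_cleaners';
SHOW VARIABLES LIKE 'innodb_io_capacity';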
You might also be interested in my answers to these past questions:
How to calculate amount of work performed by the page cleaner thread each second?
How to solve mysql warning: "InnoDB: page_cleaner: 1000ms intended loop took XXX ms. The settings might not be optimal "?
Let's say ... the buffer size is 1GB or so (I'm not sure how much database servers usually have)
It really varies and depends on the app. The default innodb buffer pool size out of the box is 128MB, but that's too small for most applications unless it's a test instance.
At my company, we try to maintain the buffer pool at least 10% of the size of data on disk. Some apps need more. The most common size we have is 24GB, but the smallest is 1GB and the largest is 200GB. We manage over 4,000 production MySQL instances.
how much of this 1GB could be dirty pages?
All of them, in theory. MySQL has a config variable called innodb_max_dirty_pages_pct, which you might assume blocks any further dirty pages if you have too many. But it doesn't. You can still modify more pages even if the buffer pool is more dirty (percentage-wise) than that variable allows.
What the variable really does is if the buffer pool is more than that percent full of dirty pages, the rate of flushing dirty pages is increased (IIRC, it doubles the number of pages it flushes per cycle), until the number falls below that percentage threshold again.
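You can watch the actual dirty-page ratio yourself; a minimal sketch using standard status variables:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
-- The threshold described above:
SHOW GLOBAL VARIABLES LIKE 'innodb_max_dirty_pages_pct';
Divide dirty by total to get the current percentage.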
if the power is lost with no backup power, then all of the changes to these dirty pages get lost correct?
Yes, but you won't lose the changes, because they can be reconstructed from the InnoDB redo log -- those two files ib_logfile0 and ib_logfile1 you may have seen in your data dir. Any transaction that created a dirty page must be logged in the redo log during commit.
If you have a power loss (or any other kind of restart of the mysqld process), the first thing InnoDB does is scan the redo log to check that every logged change was flushed before the crash; if not, it loads the original page and reapplies the change from the log to make the dirty page again. That's what InnoDB calls crash recovery.
You can watch this happening. Tail the error log on a test instance of MySQL Server, while you kill -9 the mysqld process. mysqld_safe will restart the mysqld process, which will spew a bunch of information into the error log as it performs crash recovery.
If there were only a small number of dirty pages to recover, this will be pretty quick, perhaps only seconds. If the buffer pool was large and had a lot of dirty pages, it'll take longer. The MySQL Server isn't fully started, and cannot take new client connections, until crash recovery is complete. This has caused many MySQL DBAs many minutes of anxiety while watching the progress of crash recovery. There's no way to predict how long it will take after a crash.
Since the redo log is needed for crash recovery, if the redo log fills up, MySQL must flush some dirty pages. It won't allow dirty pages to be un-flushed and also unrecoverable from the redo log. If this happens, you'll actually see writes paused by InnoDB until it can do a kind of "emergency flush" of the oldest dirty pages. This used to be a problem for MySQL, but with improvements like adaptive flushing and the page cleaner, it can keep up with the pace of changes much better. You'd have to have a really extraordinary number of writes, and an undersized redo log to experience a hard stop on InnoDB while it does a sync flush.
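To see how much headroom your redo log has before this kind of sync flush becomes a risk, you can check its configured size. A sketch; in versions before MySQL 8.0.30, the product of these two variables is your total redo capacity:
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_files_in_group';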
Here's a good blog about flushing: https://www.percona.com/blog/2011/04/04/innodb-flushing-theory-and-solutions/
P.S.: For an obligatory bash against MyISAM, I'll point out that MyISAM doesn't have a redo log, doesn't have crash recovery, and relies on the host OS file buffer during writes to its data files. If your host has a power failure while there are pending writes in the file buffer and not yet written to disk, you will lose them. MyISAM does not have any real support for the Durability property of ACID.
Re your comment:
A page will probably be flushed by the time the redo log recycles. That is, if you have 2x 48MB redo log files (the default size), and you write enough transactions to it to cycle completely through it and start over at the beginning, any pages in the buffer pool made dirty during that time will need to be flushed. A page cannot remain dirty in the BP if the respective transaction in the redo log is overwritten with new transactions.
As far as I understand, it would be virtually impossible for a dirty page to stay dirty in the buffer pool without being flushed for 12-24 hours.
The possible exception, and I'm just speculating about this, is that a given page gets updated again and again before it's flushed. Therefore it remains a recent dirty page for a long time. Again, I don't know for sure if this overcomes the need to flush a page when the redo log recycles.
Regardless, I think it's highly unlikely.
Also, I'm not sure what you mean by forensic. There's no direct way to examine page versions from the buffer pool. To get info about recent changes from InnoDB, you'd need to examine the undo segment to find previous versions of pages, and correlate them with redo log entries. The dirty page and its previous versions can both be in the buffer pool, or on disk. There are no commands or APIs or any data structures to do any of that correlation. So you'd be doing manual dumps of both disk images and memory images, and following pointers manually.
A much easier way of tracing data changes is by examining the stream of changes in the binary log. That's independent of InnoDB.
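For example, you can list the available binary logs and peek at recent events from SQL. A sketch; this assumes binary logging is enabled, and the log file name here is hypothetical (yours will differ):
SHOW BINARY LOGS;
SHOW BINLOG EVENTS IN 'binlog.000042' LIMIT 10;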

Semantics of ib_buffer_pool file in MySQL

MySQL's default storage engine, InnoDB, maintains an internal buffer pool of database pages. In newer versions of MySQL (e.g. 5.7+) the space and page IDs of the pages in the buffer pool are persisted to disk in the "ib_buffer_pool" file.
I'm curious about how this file is constructed, and in particular if the relative young-ness/old-ness of the pages in the buffer pool persists across restarts. In other words, if some page in the pool is younger than some other page, will that relationship hold after the file is written to, and then read from, the disk?
A broader question is the following: how much of the state of the InnoDB buffer pool persists across restarts?
Most of what you ask does not matter.
That file contains pointers, not data blocks. Each "pointer" probably contains the tablespace id (ibdata1 versus individual .ibd files) and block number. It would be handy, but not absolutely necessary, to include the LRU info.
The goal is to quickly refill the RAM-based "buffer pool" after a restart. The buffer pool is a cache; in the past it was simply not reloaded. During normal activity, the blocks in the buffer pool are organized (somewhat) based on "least recently used". This helps prevent bumping out a block "too soon".
If all the block pointers are stored in that file before shutting down, then the buffer pool can be restored to essentially where it was. At restart, this takes some disk activity, but after that, each query should be as fast as if the restart had not occurred.
If, because of whatever, some block is inappropriately reloaded, it will be a minor performance hit, but nothing will be "wrong". That block will soon be bumped out of the buffer pool.
How much state persists across a restart? Well, the absolute requirement is that the integrity of the data in the database be maintained -- even across a power failure. Anything beyond that is just performance optimization. So, to fully answer the question, one needs to understand the iblog* files (needed after a crash; not needed after a clean shutdown), the new tmp table file (not needed), the doublewrite buffer (used to recover from a 'torn page' after an abrupt crash), etc.
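As for the mechanics, the dump and reload are controlled by server variables, and you can trigger a dump on demand. A minimal sketch, assuming MySQL 5.7 or later:
-- Dump the buffer pool's page IDs to the ib_buffer_pool file right now:
SET GLOBAL innodb_buffer_pool_dump_now = ON;
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status';
-- Percentage of the most recently used pages included in the dump:
SHOW VARIABLES LIKE 'innodb_buffer_pool_dump_pct';
Note that innodb_buffer_pool_dump_pct defaults to dumping only the most recently used portion of the pool, which fits the description above: the file records approximate recency information, not page contents.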

Is it possible to create a persistent disk smaller than the snapshot size

I have a disk of size 2TB. Only 300GB is used. I took a snapshot, and the snapshot size shows 2TB.
When I try to create a 1TB disk from the snapshot, it throws an error: "Disk size cannot be smaller than the snapshot size (2,048 GB)"
Am I doing anything wrong here?
When a snapshot is created, it uploads all physical blocks of your disk, which includes control data structures used by the file system you used to format the disk, e.g. ext4 inodes. When you restore the disk, all those bytes must be copied back to the disk at exactly the same positions as before. This is the reason you can't shrink the disk to a smaller size; it would be equivalent to physically removing half of your hard disk, for example.
If you want your files on a smaller disk, you would need to restore the snapshot to a 2TB disk, attach a smaller disk to the same VM, and copy over the data.
Hope it helps,
Fabricio.
When taking a snapshot, GCP captures the data of the disk, and it does so incrementally.
It won't allow you to use the snapshot to create a smaller disk, because it checks the disk size recorded in the metadata for the source instance, rather than comparing against the amount of data actually used; hence it simply aborts the operation of creating a smaller disk.
But I still don't understand how the snapshot became 2TB for a 1TB disk.
The only possibility I can think of is: you had a 2TB disk when the snapshot was taken, and later on you/someone might have shrunk the disk to 1TB. This is a pure guess.
I don't have any logical reason for how the snapshot has twice the size.

Is the size of the log file affecting performance of the database? How to shrink the log file?

I just checked my log file and it's almost 45GB.
I have two questions:
Is it affecting performance of the database in general?
How can I shrink it with a SQL query? (Please give me an example.)
Thank you
Shrinking your Transaction Log file under normal circumstances should not be necessary.
It often means you are in FULL Recovery Mode and not regularly performing Transaction Log backups.
The size of the Log file will not affect performance per se. BUT Virtual Log File (VLF) fragmentation can (and often will) have very adverse effects on performance.
Please see: Transaction Log VLFs - too many or too few?
To determine how many VLFs are in the log, run:
DBCC LOGINFO
You could shrink your Log, but that wouldn't remove the cause of it growing so large in the first place.
The canonical reference is: 8 Steps to better Transaction Log throughput
[Note: If your database is NOT a production database, you could set Recovery Mode to Simple.]
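If you do decide to shrink it despite the caveat above, a minimal sketch (the database and logical file names here are hypothetical; substitute your own, which you can find with sp_helpfile):
-- Back up the log first so the inactive portion can be reused:
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
-- Then shrink the log file to a target size in MB:
USE MyDb;
DBCC SHRINKFILE (N'MyDb_log', 1024);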
Shrinking the Transaction Log
A slowly grown log file will create a large number of virtual log files.
This will slow down database startups, restores, and backup operations.
To fix this, look at this blog: http://www.sqlserveroptimizer.com/2013/02/how-to-speed-up-sql-server-log-file-by-reducing-number-of-virtual-log-file/