Got error 28 from storage engine with MySQL - mysql

I am having an issue with storage. I received this error:
Got error 28 from storage engine
I have checked the storage capacity and it was still available and not full. What can be the reason for this? I have checked everything with no success.
It is possible that I am running out of space in the main mysql data directory, or in the mysql tmp directory. Can someone tell me how to find their locations so I can check them too?

TL;DR
Issue the following commands to inspect the location of your server's data and temporary directories respectively:
SHOW GLOBAL VARIABLES LIKE 'datadir'
SHOW GLOBAL VARIABLES LIKE 'tmpdir'
The values of these variables are typically absolute paths (relative to any chroot jail in which the server is running), but if they happen to be relative paths then they will be relative to the working directory of the process that started the server.
However...
As documented under The MySQL Data Directory (emphasis added):
The following list briefly describes the items typically found in the data directory ...
Some items in the preceding list can be relocated elsewhere by reconfiguring the server. In addition, the --datadir option enables the location of the data directory itself to be changed. For a given MySQL installation, check the server configuration to determine whether items have been moved.
You may therefore also wish to inspect the values of a number of other variables, including:
pid_file
ssl_%
%_log_file
innodb_data_home_dir
innodb_log_group_home_dir
innodb_temp_data_file_path
innodb_undo_directory
innodb_buffer_pool_filename
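For example, most of those can be pulled in a single statement (a sketch; extend the list as needed):
SHOW GLOBAL VARIABLES
 WHERE Variable_name IN ('pid_file', 'innodb_data_home_dir', 'innodb_log_group_home_dir',
                         'innodb_temp_data_file_path', 'innodb_undo_directory',
                         'innodb_buffer_pool_filename')
    OR Variable_name LIKE 'ssl_%'
    OR Variable_name LIKE '%_log_file';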
If your server is not responsive...
You can also inspect the server's startup configuration.
As documented under Specifying Program Options, the server's startup configuration is determined "by examining environment variables, then by processing option files, and then by checking the command line" with later options taking precedence over earlier options.
The documentation also lists the locations of the Option Files Read on Unix and Unix-Like Systems, should you require it. Note that the sections of those files that the server reads are determined by the manner in which the server is started, as described in the second and third paragraphs of Server Command Options.
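For instance (a sketch, assuming the MySQL programs are on your PATH; you may need sufficient privileges to run mysqld), you can ask the tools themselves which option files they would read and what they would pick up from them:
mysqld --verbose --help | grep -A1 'Default options'
my_print_defaults mysqld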

Once you have found the locations where MySQL stores files, run a command in the shell:
df -Ph <pathname>
Where <pathname> is each of the locations you want to test. Some may be on the same disk volume, so they'll show up as the same when reported by df.
[vagrant@localhost ~]$ mysql -e 'select @@datadir'
+-----------------+
| @@datadir       |
+-----------------+
| /var/lib/mysql/ |
+-----------------+
[vagrant@localhost ~]$ df -Ph /var/lib/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 38G 3.7G 34G 10% /
This tells me that the disk volume for my datadir is the root volume, including the top-level directory "/" and everything beneath that. The volume is 10% full, with 34G unused.
If the volume where your datadir resides reaches 100%, then you'll start seeing errno 28 issues when you insert new data and it needs to expand a MySQL tablespace, or write to a log file.
In that case, you need to figure out what's taking so much space. It might be something under the MySQL directory, or like in my case, your datadir might be part of a larger disk volume, where all your other files exist. In that case, any accumulation of files on the system might cause the disk to fill up. For example, log files or temp files even if they're not related to MySQL.
I'd start at the top of the disk volume and use du to figure out which directories are so full.
[vagrant@localhost ~]$ sudo du -shx /*
33M /boot
34M /etc
40K /home
28M /opt
4.0K /tmp
2.0G /usr
941M /vagrant
666M /var
Note: if your df command told you that your datadir is on a separate disk volume, you'd start at that volume's mount point. The space used by one disk volume does not count toward another disk volume.
Now I see that /usr is taking the most space, of top-level directories. Drill down and see what's taking space under that:
[vagrant@localhost ~]$ sudo du -shx /usr/*
166M /usr/bin
126M /usr/include
345M /usr/lib
268M /usr/lib64
55M /usr/libexec
546M /usr/local
106M /usr/sbin
335M /usr/share
56M /usr/src
Keep drilling down level by level.
Usually the culprit ends up being pretty clear. Like if you have some huge 500G log file in /var/log somewhere that has been growing for months.
An example of a typical culprit is the http server logs.
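For instance, a quick way to surface the biggest offenders under /var/log (assuming GNU coreutils, where sort -h understands the human-readable sizes):
sudo du -sh /var/log/* | sort -rh | head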
Re your comments:
It sounds like you have a separate storage volume for your database storage. That's good.
You just added the du output to your question above. I see that in your 1.4T disk volume, the largest single file by far is this:
1020G /vol/db1/mysql/_fool_Gerg_sql_200e_4979_main_16419_2_18.tokudb$
This appears to be a TokuDB tablespace. There's information on how TokuDB handles full disks here: https://www.percona.com/doc/percona-server/LATEST/tokudb/tokudb_faq.html#full-disks
I would not remove those files. I'm not as familiar with TokuDB as I am with InnoDB, but I assume those files are important datafiles. If you remove them, you will lose part of your data and you might corrupt the rest of your data.
I found this post, which explains in detail what the files are used for: https://www.percona.com/blog/2014/07/30/examining-the-tokudb-mysql-storage-engine-file-structure/
The manual also says:
Deleting large numbers of rows from an existing table and then closing the table may free some space, but it may not. Deleting rows may simply leave unused space (available for new inserts) inside TokuDB data files rather than shrink the files (internal fragmentation).
So you can DELETE rows from the table, but the physical file on disk may not shrink. Eventually, you could free enough space that you can build a new TokuDB data file with ALTER TABLE <tablename> ENGINE=TokuDB ROW_FORMAT=TOKUDB_SMALL; (see https://dba.stackexchange.com/questions/48190/tokudb-optimize-table-does-not-reclaim-disk-space)
But this will require enough free disk space to build the new table.
So I'm afraid you have painted yourself into a corner. You no longer have enough disk space to rebuild your large table. You should never let the free disk space get smaller than the space required to rebuild your largest table.
At this point, you probably have to use mysqldump to dump data from your largest table. Not necessarily the whole table, but just what you want to keep (read about the mysqldump --where option). Then DROP TABLE to remove that large table entirely. I assume that will free disk space, where using DELETE won't.
You don't have enough space on your db1 volume to save the dump file, so you'll have to save it to another volume. It looks like you have a larger volume on /vol/cbgb1, but I don't know if it's full.
I'd dump the whole thing, for archiving purposes. Then dump again with a subset.
mkdir /vol/cbgb1/backup
mysqldump fool Gerg | gzip -c > /vol/cbgb1/backup/Gerg-dump-full.sql.gz
mysqldump fool Gerg --where "id > 8675309" | gzip -c > /vol/cbgb1/backup/Gerg-dump-partial.sql.gz
I'm totally guessing at the arguments to --where. You'll have to decide how you want to select for a partial dump.
After the big table is dropped, reload the partial data you dumped:
gunzip -c /vol/cbgb1/backup/Gerg-dump-partial.sql.gz | mysql fool
If there are any commands I've given in my examples that you don't already know well, I suggest you learn them before trying them. Or find someone to pair with who is more familiar with those commands.

Related

How to open and work with a very large .SQL file that was generated in a dump?

I have a very large .SQL file, of 90 GB
It was generated with a dump on a server:
mysqldump -u root -p siafi > /home/user_1/siafi.sql
I downloaded this .SQL file on a computer with Ubuntu 16.04 and MySQL Community Server (8.0.16). It has 8GB of RAM
So I did these steps in Terminal:
# Access
/usr/bin/mysql -u root -p
# I create a database with the same name to receive the .SQL information
CREATE DATABASE siafi;
# I establish the privileges. User reinaldo
GRANT ALL PRIVILEGES ON siafi.* to reinaldo@localhost;
# Enable the changes
FLUSH PRIVILEGES;
# Then I open another terminal and run the command for the created database to receive the data from the .SQL file
mysql --user=reinaldo --password="type_here" --database=siafi < /home/reinaldo/Documentos/Code/test/siafi.sql
I have run these same commands with other, smaller .SQL files of at most 2GB, and it worked normally.
But this 90GB file has been processing for over twelve hours without finishing. I do not know if it's even working.
Please, is there any more efficient way to do this? Maybe splitting the .SQL file?
Break the file up into smaller chunks and process them separately.
You're probably hitting the logging high-water mark and mysql is trying to roll everything back, and that is a slow process.
Split the file into approx 1Gb chunks, breaking on whole lines. Perhaps using:
split -l 1000000 bigfile.sql part.
Then run them in order using your current command.
You'll have to experiment with split to get the size right; you haven't said what your OS is, and split implementations/options vary. split --number=100 may work for you.
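Once split has produced the chunks, a simple loop (a sketch reusing the user and database from your question) will feed them in order, since the part.aa, part.ab, ... names sort correctly:
for f in part.*; do
    echo "Loading $f ..."
    mysql --user=reinaldo --password="type_here" --database=siafi < "$f"
done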
2 things that might be helpful:
Use pv to see how much of the .sql file has already been read. This can give you a progress bar which at least tells you it's not stuck (see the example after this list).
Log into MySQL and use SHOW PROCESSLIST to see what MySQL currently is executing. If it's still running, just let it run to completion.
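For the pv suggestion above, a minimal invocation might look like this (assuming pv is installed, and reusing the paths from your question):
pv /home/reinaldo/Documentos/Code/test/siafi.sql | mysql --user=reinaldo --password="type_here" --database=siafi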
If binary logging is turned on, it might really help to turn it off for the duration of the restore. Another thing that may or may not be helpful... if you have the choice, try to use the fastest disks available. You may have this kind of option if you're running on hosters like Amazon. You're going to really feel the pain if you're (for example) doing this on a standard EC2 host.
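One way to skip binary logging for just the import session (a sketch; SET sql_log_bin requires the SUPER privilege and only affects that one connection) is to prepend the statement to the stream:
( echo "SET sql_log_bin = 0;"; cat /home/reinaldo/Documentos/Code/test/siafi.sql ) | mysql --user=reinaldo --password="type_here" --database=siafi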
You can use third party tools like
https://philiplb.de/sqldumpsplitter3/
It is very easy to use; you can define the chunk size, output location, etc.
Or use this one also
It does the same thing, but the interface is a bit more colorful and easier to use.
https://sqldumpsplitter.net/

How do I make my databases in MySQL go on another drive on my computer?

Say my website takes in lots of data from its users, and it goes to MySQL on my computer, but my computer runs out of space. If I connect a hard drive to my server computer, can I make it so I can put a new database on that hard drive, and all the data gets stored there? It would obviously be attached at all times.
This probably belongs on https://dba.stackexchange.com/ instead, and indeed there's quite a bit of in-depth discussion there about some of the techniques you could use.
One thing I've found helpful when working with large but temporary datasets is to enable innodb_file_per_table which — in my case — helps reclaim disk space when removing these temporary databases.
Moving the entire datadir
You can move the entire directory that MySQL uses to store files; this is called the datadir. Stop the MySQL daemon, move the folder, edit my.cnf so that datadir = points to the new folder location, and start the daemon.
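A rough sketch of that sequence on a typical Linux package install (the service name, paths, and new location are examples; on some distributions you may also need to adjust AppArmor or SELinux policy for the new path):
sudo systemctl stop mysql
sudo rsync -a /var/lib/mysql/ /mnt/newdrive/mysql/
sudo chown -R mysql:mysql /mnt/newdrive/mysql
# in my.cnf, under [mysqld]:  datadir = /mnt/newdrive/mysql
sudo systemctl start mysql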
File-per-table tablespace outside of the datadir
https://dev.mysql.com/doc/refman/5.6/en/tablespace-placing.html
You can use the file-per-table tablespace configuration (innodb_file_per_table in the [mysqld] portion of the configuration file) in conjunction with CREATE TABLE to actually place a tablespace outside the datadir. When you have innodb_file_per_table enabled, you can use statements like CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory'; to put the datadir for that tablespace in a different directory.
Partitioning
Using partitioning, you can break up the databases, tables, and columns in to different storage partitions. This is probably not what you're looking for in this case because they still reside in the datadir. https://dev.mysql.com/doc/refman/5.7/en/partitioning.html
You can export the database via phpMyAdmin and then pass the saved database file onto another computer.
Exporting the database:
https://serverpilot.io/community/articles/how-to-export-a-database-using-phpmyadmin.html
Importing the database:
https://serverpilot.io/community/articles/how-to-import-a-database-using-phpmyadmin.html

MySQL how to change innodb-log-file-size

According to the mysql documentation (Docs), in order to change innodb-log-file-size in step #4 I need to delete the binary logs. I have some concerns and questions about this. My current value for innodb-log-file-size is 5MB. So I would assume my binary log files are 5MB each (max). When I look at the bin-log directory I have a bunch of file names like 'mysql-bin.000001', 'mysql-bin.000002', etc. I believe these are the binary log files, but they are all quite a bit larger than 5MB. There are 2 files (ib_logfile0, ib_logfile1) that are 5 MB. So my question is
Which of those files is my 'binary log'?
Which of those do I need to delete?
Thanks in advance
The InnoDB log is in ib_logfile0 and ib_logfile1. These are the files sized by innodb_log_file_size.
To resize the InnoDB logs, you first need to shut down mysqld cleanly. That will make sure that any changes in the log have already been flushed into your tablespaces. The clean shutdown is important, because if you don't do this step, you have a high chance of losing data.
After you have shut down mysqld cleanly, the ib_logfiles are superfluous. You must rm them to change their size.
As you restart mysqld, InnoDB notices that the files are missing, and creates new files at the new size according to the innodb_log_file_size variable in your my.cnf file. So make sure you edit that file before you restart, or else it'll just create new 5MB files.
MySQL 5.6 makes this process a little bit simpler. You don't need to rm the log files, but you do need to restart mysqld to make a new log file size take effect. The way it works in 5.6 is that if the size of these files is different from the config variable, MySQL automatically does another clean restart (to make sure the files don't contain any changes that are unflushed), and then InnoDB resizes the files upon the final startup.
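Putting the pre-5.6 steps together, the procedure might look roughly like this (a sketch only; the paths, size, and service name are examples, so adapt them to your installation and take a backup first):
mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"   # optional, forces a full flush on shutdown
service mysql stop
# edit /etc/my.cnf: set innodb_log_file_size = 256M under [mysqld]
rm /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
service mysql start                               # new ib_logfiles are created at the new size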
The other files (mysql-bin.000001, etc.) are binary logs. These may grow up to max_binlog_size (which is 1GB by default), but the binary logs vary in size because new logs are created whenever you restart mysqld or execute FLUSH LOGS. Anyway, they have nothing to do with the InnoDB logs.
PS: You might like this article: How to calculate a good InnoDB log file size.
As per the official documentation (and this works for me on MySQL 5.7.30):
To change the number or the size of your InnoDB redo log files, perform the following steps:
Stop the MySQL server and make sure that it shuts down without errors.
Edit my.cnf to change the log file configuration. To change the log file size, configure innodb_log_file_size. To increase the number of log files, configure innodb_log_files_in_group.
Start the MySQL server again.
If InnoDB detects that the innodb_log_file_size differs from the redo log file size, it writes a log checkpoint, closes and removes the old log files, creates new log files at the requested size, and opens the new log files.
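For step 2, the relevant my.cnf entries might look like this (the values are only examples; size them for your own workload):
[mysqld]
innodb_log_file_size = 512M
innodb_log_files_in_group = 2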

Howto: Clean a mysql InnoDB storage engine?

Is it possible to clean a mysql innodb storage engine so it is not storing data from deleted tables?
Or do I have to rebuild a fresh database every time?
Here is a more complete answer with regard to InnoDB. It is a bit of a lengthy process, but can be worth the effort.
Keep in mind that /var/lib/mysql/ibdata1 is the busiest file in the InnoDB infrastructure. It normally houses six classes of information:
Table Data
Table Indexes
Table Metadata (Data Dictionary)
MVCC (Multiversioning Concurrency Control) Data, made up of the Rollback Segments and Undo Space
Double Write Buffer (background writing to prevent reliance on OS caching)
Insert Buffer (managing changes to non-unique secondary indexes)
See the Pictorial Representation of ibdata1
InnoDB Architecture
Many people create multiple ibdata files hoping for better disk-space management and performance, however that belief is mistaken.
Can I run OPTIMIZE TABLE?
Unfortunately, running OPTIMIZE TABLE against an InnoDB table stored in the shared table-space file ibdata1 does two things:
Makes the table’s data and indexes contiguous inside ibdata1
Makes ibdata1 grow because the contiguous data and index pages are appended to ibdata1
You can, however, segregate Table Data and Table Indexes from ibdata1 and manage them independently.
Can I run OPTIMIZE TABLE with innodb_file_per_table?
Suppose you were to add innodb_file_per_table to /etc/my.cnf (my.ini). Can you then just run OPTIMIZE TABLE on all the InnoDB Tables?
Good News: When you run OPTIMIZE TABLE with innodb_file_per_table enabled, this will produce a .ibd file for that table. For example, if you have table mydb.mytable with a datadir of /var/lib/mysql, it will produce the following:
/var/lib/mysql/mydb/mytable.frm
/var/lib/mysql/mydb/mytable.ibd
The .ibd will contain the Data Pages and Index Pages for that table. Great.
Bad News: All you have done is extract the Data Pages and Index Pages of mydb.mytable out of ibdata1. The data dictionary entry for every table, including mydb.mytable, still remains in the data dictionary (See the Pictorial Representation of ibdata1). YOU CANNOT JUST SIMPLY DELETE ibdata1 AT THIS POINT !!! Please note that ibdata1 has not shrunk at all.
InnoDB Infrastructure Cleanup
To shrink ibdata1 once and for all you must do the following:
Dump (e.g., with mysqldump) all databases into a .sql text file (SQLData.sql is used below)
Drop all databases (except for mysql and information_schema). CAVEAT: As a precaution, please run this script to make absolutely sure you have all user grants in place:
mkdir /var/lib/mysql_grants
cp /var/lib/mysql/mysql/* /var/lib/mysql_grants/.
chown -R mysql:mysql /var/lib/mysql_grants
Login to mysql and run SET GLOBAL innodb_fast_shutdown = 0; (This will completely flush all remaining transactional changes from ib_logfile0 and ib_logfile1)
Shutdown MySQL
Add the following lines to /etc/my.cnf (or my.ini on Windows)
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
(Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.
Also: innodb_flush_method=O_DIRECT is not available on Windows)
Delete ibdata* and ib_logfile*. Optionally, you can remove all folders in /var/lib/mysql, except /var/lib/mysql/mysql.
Start MySQL (This will recreate ibdata1 [10MB by default] and ib_logfile0 and ib_logfile1 at 1G each).
Import SQLData.sql
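For reference, the dump in step 1 might be produced like this (a sketch; the output path is arbitrary, and --routines and --events are worth adding if you use stored programs):
mysqldump --all-databases --routines --events > /root/SQLData.sql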
Now, ibdata1 will still grow but only contain table metadata because each InnoDB table will exist outside of ibdata1. ibdata1 will no longer contain InnoDB data and indexes for other tables.
For example, suppose you have an InnoDB table named mydb.mytable. If you look in /var/lib/mysql/mydb, you will see two files representing the table:
mytable.frm (Storage Engine Header)
mytable.ibd (Table Data and Indexes)
With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.
I have done this many times in my career as a MySQL DBA. In fact, the first time I did this, I shrank a 50GB ibdata1 file down to only 500MB!
Give it a try. If you have further questions on this, just ask. Trust me; this will work in the short term as well as over the long haul.
CAVEAT
At Step 6, if mysql cannot restart because the mysql schema was dropped, look back at Step 2. You made a physical copy of the mysql schema there. You can restore it as follows:
mkdir /var/lib/mysql/mysql
cp /var/lib/mysql_grants/* /var/lib/mysql/mysql
chown -R mysql:mysql /var/lib/mysql/mysql
Go back to Step 6 and continue
UPDATE 2013-06-04 11:13 EDT
With regard to setting innodb_log_file_size to 25% of innodb_buffer_pool_size in Step 5, that blanket rule is rather old school.
Back on July 03, 2006, Percona had a nice article explaining how to choose a proper innodb_log_file_size. Later, on Nov 21, 2008, Percona followed up with another article on how to calculate the proper size based on peak workload, keeping one hour's worth of changes.
I have since written posts in the DBA StackExchange about calculating the log size and where I referenced those two Percona articles.
Aug 27, 2012 : Proper tuning for 30GB InnoDB table on server with 48GB RAM
Jan 17, 2013 : MySQL 5.5 - Innodb - innodb_log_file_size higher than 4GB combined?
Personally, I would still go with the 25% rule for an initial setup. Then, as the workload can be more accurately determined over time in production, you could resize the logs during a maintenance cycle in just minutes.
The InnoDB engine does not store deleted data. As you insert and delete rows, unused space is left allocated within the InnoDB storage files. The overall space used will not decrease, but over time the 'deleted and freed' space will be automatically reused by the DB server.
You can further tune and manage the space used by the engine through a manual re-org of the tables. To do this, dump the data in the affected tables using mysqldump, drop the tables, restart the mysql service, and then recreate the tables from the dump files.
I follow this guide for a complete reset (as root):
mysqldump --all-databases --single-transaction | gzip -c > /tmp/mysql.all.sql.gz
service mysql stop
mv /var/lib/mysql /var/lib/mysql.old; mkdir -m700 /var/lib/mysql; chown mysql:mysql /var/lib/mysql
mysql_install_db # mysql 5.5
mysqld --initialize-insecure # mysql 5.7
service mysql start
zcat /tmp/mysql.all.sql.gz | mysql
service mysql restart
What nobody seems to mention is the impact the innodb_undo_log_truncate setting can have.
Take a look at my answer at How to shrink/purge ibdata1 file in MySQL.

What's the quickest way to dump & load a MySQL InnoDB database using mysqldump?

I would like to create a copy of a database with approximately 40 InnoDB tables and around 1.5GB of data with mysqldump and MySQL 5.1.
What are the best parameters (ie: --single-transaction) that will result in the quickest dump and load of the data?
As well, when loading the data into the second DB, is it quicker to:
1) pipe the results directly to the second MySQL server instance and use the --compress option
or
2) load it from a text file (ie: mysql < my_sql_dump.sql)
QUICKLY dumping a quiesced database:
Using the "-T " option with mysqldump results in lots of .sql and .txt files in the specified directory. This is ~50% faster for dumping large tables than a single .sql file with INSERT statements (takes 1/3 less wall-clock time).
Additionally, there is a huge benefit when restoring if you can load multiple tables in parallel, and saturate multiple cores. On an 8-core box, this could be as much as an 8X difference in wall-clock time to restore the dump, on top of the efficiency improvements provided by "-T". Because "-T" causes each table to be stored in a separate file, loading them in parallel is easier than splitting apart a massive .sql file.
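As a sketch (the directory, database name, and thread count are examples; the server itself writes the --tab files, so it needs write access to that directory and secure_file_priv must allow it):
# one .sql file (schema) plus one tab-delimited .txt file (data) per table
mysqldump --tab=/tmp/dumpdir mydb
# recreate the schema, then load the data files in parallel
cat /tmp/dumpdir/*.sql | mysql mydb
mysqlimport --use-threads=8 mydb /tmp/dumpdir/*.txt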
Taking the strategies above to their logical extreme, one could create a script to dump a database widely in parallel. Well, that's exactly what the Maatkit mk-parallel-dump (see http://www.maatkit.org/doc/mk-parallel-dump.html) and mk-parallel-restore tools are: perl scripts that make multiple calls to the underlying mysqldump program. However, when I tried to use these, I had trouble getting the restore to complete without duplicate key errors that didn't occur with vanilla dumps, so keep in mind that your mileage may vary.
Dumping data from a LIVE database (w/o service interruption):
The --single-transaction switch is very useful for taking a dump of a live database without having to quiesce it or taking a dump of a slave database without having to stop slaving.
Sadly, -T is not compatible with --single-transaction, so you only get one or the other.
Usually, taking the dump is much faster than restoring it. There is still room for a tool that takes the incoming monolithic dump file and breaks it into multiple pieces to be loaded in parallel. To my knowledge, such a tool does not yet exist.
Transferring the dump over the Network is usually a win
To listen for an incoming dump on one host run:
nc -l 7878 > mysql-dump.sql
Then on your DB host, run
mysqldump $OPTS | nc myhost.mydomain.com 7878
This reduces contention for the disk spindles on the master from writing the dump to disk, slightly speeding up your dump (assuming the network is fast enough to keep up, a fairly safe assumption for two hosts in the same datacenter). Plus, if you are building out a new slave, this saves the step of having to transfer the dump file after it is finished.
Caveats - obviously, you need to have enough network bandwidth not to slow things down unbearably, and if the TCP session breaks, you have to start all over, but for most dumps this is not a major concern.
Lastly, I want to clear up one point of common confusion.
Despite how often you see these flags in mysqldump examples and tutorials, they are superfluous because they are turned ON by default:
--opt
--add-drop-table
--add-locks
--create-options
--disable-keys
--extended-insert
--lock-tables
--quick
--set-charset
From http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html:
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.
Of those behaviors, "--quick" is one of the most important (skips caching the entire result set in mysqld before transmitting the first row), and can be used with "mysql" (which does NOT turn --quick on by default) to dramatically speed up queries that return a large result set (e.g. dumping all the rows of a big table).
Pipe it directly to another instance, to avoid disk overhead. Don't bother with --compress unless you're running over a slow network, since on a fast LAN or loopback the network overhead doesn't matter.
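For example (the hostnames and database name are placeholders):
mysqldump --single-transaction mydb | mysql --host=replica-host mydb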
I think it will be a lot faster and save you disk space if you try database replication as opposed to using mysqldump. Personally, I use SQLyog Enterprise for my really heavy lifting, but there are also a number of other tools that can provide the same services, unless of course you would like to use only mysqldump.
For innodb, --order-by-primary --extended-insert is usually the best combo. If you're after every last bit of performance and the target box has many CPU cores, you might want to split the resulting dumpfile and do parallel inserts in many threads, up to innodb_thread_concurrency/2.
Also, tweak the innodb_buffer_pool_size on the target to the max you can afford, and increase innodb_log_file_size to 128 or 256 MB (careful with this: you need to remove the old logfiles before restarting the mysql daemon, otherwise it won't restart).
Use the mk-parallel-dump tool from Maatkit.
At least that would probably be faster. I'd trust mysqldump more.
How often are you doing this? Is it really an application performance problem? Perhaps you should design a way of doing this which doesn't need to dump the whole data (replication?)
On the other hand, 1.5G is quite a small database so it probably won't be much of a problem.
mydumper is a good choice, with parallel export, even parallel threads per table, and compressed files; see: