Howto: Clean a mysql InnoDB storage engine? - mysql

Is it possible to clean a mysql innodb storage engine so it is not storing data from deleted tables?
Or do I have to rebuild a fresh database every time?

Here is a more complete answer with regard to InnoDB. It is a bit of a lengthy process, but can be worth the effort.
Keep in mind that /var/lib/mysql/ibdata1 is the busiest file in the InnoDB infrastructure. It normally houses six types of information:
Table Data
Table Indexes
MVCC (Multiversioning Concurrency Control) Data, comprising the Rollback Segments and the Undo Space
Table Metadata (Data Dictionary)
Double Write Buffer (background writing to prevent reliance on OS caching)
Insert Buffer (managing changes to non-unique secondary indexes)
See the Pictorial Representation of ibdata1
InnoDB Architecture
Many people create multiple ibdata files hoping for better disk-space management and performance; however, that belief is mistaken.
Can I run OPTIMIZE TABLE ?
Unfortunately, running OPTIMIZE TABLE against an InnoDB table stored in the shared table-space file ibdata1 does two things:
Makes the table’s data and indexes contiguous inside ibdata1
Makes ibdata1 grow because the contiguous data and index pages are appended to ibdata1
You can, however, segregate Table Data and Table Indexes from ibdata1 and manage them independently.
Can I run OPTIMIZE TABLE with innodb_file_per_table ?
Suppose you were to add innodb_file_per_table to /etc/my.cnf (my.ini). Can you then just run OPTIMIZE TABLE on all the InnoDB Tables?
Good News: When you run OPTIMIZE TABLE with innodb_file_per_table enabled, this will produce a .ibd file for that table. For example, if you have the table mydb.mytable with a datadir of /var/lib/mysql, it will produce the following:
/var/lib/mysql/mydb/mytable.frm
/var/lib/mysql/mydb/mytable.ibd
The .ibd will contain the Data Pages and Index Pages for that table. Great.
Bad News: All you have done is extract the Data Pages and Index Pages of mydb.mytable out of ibdata1. The data dictionary entry for every table, including mydb.mytable, still remains in the data dictionary (See the Pictorial Representation of ibdata1). YOU CANNOT JUST SIMPLY DELETE ibdata1 AT THIS POINT !!! Please note that ibdata1 has not shrunk at all.
InnoDB Infrastructure Cleanup
To shrink ibdata1 once and for all you must do the following:
Step 1: Dump (e.g., with mysqldump) all databases into a .sql text file (SQLData.sql is used below)
Step 2: Drop all databases (except for mysql and information_schema). CAVEAT: As a precaution, please run this script first to make absolutely sure you have all user grants in place:
mkdir /var/lib/mysql_grants
cp /var/lib/mysql/mysql/* /var/lib/mysql_grants/.
chown -R mysql:mysql /var/lib/mysql_grants
Step 3: Log in to mysql and run SET GLOBAL innodb_fast_shutdown = 0; (this will completely flush all remaining transactional changes from ib_logfile0 and ib_logfile1)
Step 4: Shut down MySQL
Step 5: Add the following lines to /etc/my.cnf (or my.ini on Windows):
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
(Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size. Also, innodb_flush_method=O_DIRECT is not available on Windows.)
Step 6: Delete ibdata* and ib_logfile*. Optionally, you can remove all folders in /var/lib/mysql except /var/lib/mysql/mysql.
Step 7: Start MySQL (this will recreate ibdata1 [10MB by default] and ib_logfile0 and ib_logfile1 at 1G each)
Step 8: Import SQLData.sql
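Taken together, the steps above can be sketched as one shell script. This is a hedged sketch, not a drop-in tool: the dump path, the service command, and the /var/lib/mysql datadir are assumptions, the DROP DATABASE statements and the my.cnf edit are left as manual steps, and the function is deliberately left uncalled so it can be reviewed first.

```shell
#!/usr/bin/env bash
# Sketch of the ibdata1 shrink procedure above. Paths and service names
# are assumptions; review every line before running on a real server.
set -euo pipefail

shrink_ibdata() {
  # Step 1: dump everything (routines, triggers, events included)
  mysqldump --all-databases --routines --triggers --events > /root/SQLData.sql
  # Step 2 precaution: keep a physical copy of the grant tables
  mkdir -p /var/lib/mysql_grants
  cp -a /var/lib/mysql/mysql/. /var/lib/mysql_grants/
  chown -R mysql:mysql /var/lib/mysql_grants
  # (Drop the application databases here, e.g. mysql -e "DROP DATABASE ...;")
  # Step 3: flush all remaining transactional changes out of the redo logs
  mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
  # Step 4: shut down
  service mysql stop
  # Step 5 is a manual edit of /etc/my.cnf (innodb_file_per_table, sizes, ...)
  # Step 6: remove the shared tablespace and redo logs
  rm -f /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile*
  # Step 7: start MySQL; ibdata1 and the redo logs are recreated
  service mysql start
  # Step 8: reload the dump
  mysql < /root/SQLData.sql
}

# shrink_ibdata   # uncomment only after reviewing every step
```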
Now, ibdata1 will still grow but only contain table metadata because each InnoDB table will exist outside of ibdata1. ibdata1 will no longer contain InnoDB data and indexes for other tables.
For example, suppose you have an InnoDB table named mydb.mytable. If you look in /var/lib/mysql/mydb, you will see two files representing the table:
mytable.frm (Storage Engine Header)
mytable.ibd (Table Data and Indexes)
With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.
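One way to watch the shrink happen is to compare the .ibd file size before and after. A hedged sketch: ibd_optimize is a hypothetical helper, and it assumes a running server, filesystem access to the datadir, and client credentials configured (e.g. in ~/.my.cnf).

```shell
#!/usr/bin/env bash
# Hypothetical helper: report a table's .ibd size before and after
# OPTIMIZE TABLE. Assumes a running server and readable datadir.
ibd_optimize() {
  local schema=$1 table=$2 datadir
  datadir=$(mysql -Nse "SELECT @@datadir")
  du -h "${datadir}/${schema}/${table}.ibd"        # size before
  mysql -e "OPTIMIZE TABLE \`${schema}\`.\`${table}\`;"
  du -h "${datadir}/${schema}/${table}.ibd"        # size after (may shrink)
}

# e.g.: ibd_optimize mydb mytable
```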
I have done this many times in my career as a MySQL DBA. In fact, the first time I did this, I shrank a 50GB ibdata1 file down to only 500MB!
Give it a try. If you have further questions on this, just ask. Trust me; this will work in the short term as well as over the long haul.
CAVEAT
At Step 7, if MySQL cannot start because the mysql schema was dropped, look back at Step 2: you made a physical copy of the mysql schema there. You can restore it as follows:
mkdir /var/lib/mysql/mysql
cp /var/lib/mysql_grants/* /var/lib/mysql/mysql
chown -R mysql:mysql /var/lib/mysql/mysql
Go back to Step 7 and continue
UPDATE 2013-06-04 11:13 EDT
With regard to setting innodb_log_file_size to 25% of innodb_buffer_pool_size in Step 5: that blanket rule is rather old school.
Back on July 03, 2006, Percona had a nice article on choosing a proper innodb_log_file_size. Later, on Nov 21, 2008, Percona followed up with another article on how to calculate the proper size based on peak workload, keeping one hour's worth of changes.
I have since written posts in the DBA StackExchange about calculating the log size and where I referenced those two Percona articles.
Aug 27, 2012 : Proper tuning for 30GB InnoDB table on server with 48GB RAM
Jan 17, 2013 : MySQL 5.5 - Innodb - innodb_log_file_size higher than 4GB combined?
Personally, I would still go with the 25% rule for an initial setup. Then, as the workload can be more accurately determined over time in production, you could resize the logs during a maintenance cycle in just minutes.
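The 25% starting rule is plain arithmetic; a quick sketch with an assumed 4G buffer pool:

```shell
# The 25% starting rule from the answer above, as plain arithmetic.
# The buffer pool size is an assumption; substitute your own.
buffer_pool_bytes=$((4 * 1024 * 1024 * 1024))   # innodb_buffer_pool_size = 4G
log_file_bytes=$((buffer_pool_bytes / 4))       # 25% of the buffer pool
echo "$log_file_bytes"                          # 1073741824 bytes = 1G
```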

The InnoDB engine does not store deleted data. As you insert and delete rows, unused space is left allocated within the InnoDB storage files. The overall space used will not decrease, but over time the 'deleted and freed' space will be automatically reused by the DB server.
You can further tune and manage the space used by the engine through a manual re-org of the tables. To do this, dump the data in the affected tables using mysqldump, drop the tables, restart the mysql service, and then recreate the tables from the dump files.

I follow this guide for a complete reset (as root):
mysqldump --all-databases --single-transaction | gzip -c > /tmp/mysql.all.sql.gz
service mysql stop
mv /var/lib/mysql /var/lib/mysql.old; mkdir -m700 /var/lib/mysql; chown mysql:mysql /var/lib/mysql
mysql_install_db # mysql 5.5
mysqld --initialize-insecure # mysql 5.7
service mysql start
zcat /tmp/mysql.all.sql.gz | mysql
service mysql restart

What nobody seems to mention is the impact innodb_undo_log_truncate setting can have.
Take a look at my answer at How to shrink/purge ibdata1 file in MySQL.
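For completeness, on MySQL 5.7.5 and later the undo logs can be kept out of ibdata1 and truncated automatically. A minimal my.cnf sketch (the values are illustrative, and innodb_undo_tablespaces only takes effect when the instance is first initialized):

```ini
[mysqld]
# Place undo logs in separate tablespaces (must be set at instance
# initialization) and let the server truncate them automatically.
innodb_undo_tablespaces = 2
innodb_undo_log_truncate = ON
innodb_max_undo_log_size = 1G
```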

Related

error with storage 28 from storage engine with mysql

I am having an issue with storage. I received this error:
Got error 28 from storage engine
I have checked the storage capacity and it was still available and not full. What can be the reason for this? I have checked everything with no success.
It is possible that I am running out of main mysql data directory, or in the mysql tmp. Can someone tell me how to find their place in order to check for it too ?
TL;DR
Issue the following commands to inspect the location of your server's data and temporary directories respectively:
SHOW GLOBAL VARIABLES LIKE 'datadir'
SHOW GLOBAL VARIABLES LIKE 'tmpdir'
The values of these variables are typically absolute paths (relative to any chroot jail in which the server is running), but if they happen to be relative paths then they will be relative to the working directory of the process that started the server.
However...
As documented under The MySQL Data Directory (emphasis added):
The following list briefly describes the items typically found in the data directory ...
Some items in the preceding list can be relocated elsewhere by reconfiguring the server. In addition, the --datadir option enables the location of the data directory itself to be changed. For a given MySQL installation, check the server configuration to determine whether items have been moved.
You may therefore also wish to inspect the values of a number of other variables, including:
pid_file
ssl_%
%_log_file
innodb_data_home_dir
innodb_log_group_home_dir
innodb_temp_data_file_path
innodb_undo_directory
innodb_buffer_pool_filename
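These can be inspected in one pass, since SHOW VARIABLES accepts a WHERE clause; a sketch covering the variables named above:

```sql
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('pid_file',
                        'innodb_data_home_dir',
                        'innodb_log_group_home_dir',
                        'innodb_temp_data_file_path',
                        'innodb_undo_directory',
                        'innodb_buffer_pool_filename')
   OR Variable_name LIKE 'ssl\_%'
   OR Variable_name LIKE '%\_log\_file';
```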
If your server is not responsive...
You can also inspect the server's startup configuration.
As documented under Specifying Program Options, the server's startup configuration is determined "by examining environment variables, then by processing option files, and then by checking the command line" with later options taking precedence over earlier options.
The documentation also lists the locations of the Option Files Read on Unix and Unix-Like Systems, should you require it. Note that the sections of those files that the server reads are determined by the manner in which the server is started, as described in the second and third paragraphs of Server Command Options.
Once you have found the locations where MySQL stores files, run a command in the shell:
df -Ph <pathname>
Where <pathname> is each of the locations you want to test. Some may be on the same disk volume, so they'll show up as the same when reported by df.
[vagrant@localhost ~]$ mysql -e 'select @@datadir'
+-----------------+
| @@datadir       |
+-----------------+
| /var/lib/mysql/ |
+-----------------+
[vagrant@localhost ~]$ df -Ph /var/lib/mysql
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  3.7G   34G  10% /
This tells me that the disk volume for my datadir is the root volume, including the top-level directory "/" and everything beneath that. The volume is 10% full, with 34G unused.
If the volume where your datadir resides reaches 100%, then you'll start seeing errno 28 issues when you insert new data and MySQL needs to expand a tablespace, or write to a log file.
In that case, you need to figure out what's taking so much space. It might be something under the MySQL directory, or like in my case, your datadir might be part of a larger disk volume, where all your other files exist. In that case, any accumulation of files on the system might cause the disk to fill up. For example, log files or temp files even if they're not related to MySQL.
I'd start at the top of the disk volume and use du to figure out which directories are so full.
[vagrant@localhost ~]$ sudo du -shx /*
33M /boot
34M /etc
40K /home
28M /opt
4.0K /tmp
2.0G /usr
941M /vagrant
666M /var
Note: if your df command told you that your datadir is on a separate disk volume, you'd start at that volume's mount point. The space used by one disk volume does not count toward another disk volume.
Now I see that /usr is taking the most space, of top-level directories. Drill down and see what's taking space under that:
[vagrant@localhost ~]$ sudo du -shx /usr/*
166M /usr/bin
126M /usr/include
345M /usr/lib
268M /usr/lib64
55M /usr/libexec
546M /usr/local
106M /usr/sbin
335M /usr/share
56M /usr/src
Keep drilling down level by level.
Usually the culprit ends up being pretty clear, like a huge 500G log file somewhere in /var/log that has been growing for months.
An example of a typical culprit is the http server logs.
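Alongside du, a find/du pipeline can surface the individual large files directly. A hedged sketch (largest_files is a hypothetical helper; -xdev keeps the search on one filesystem, matching how df accounts for space):

```shell
# List the largest regular files under a path, biggest first.
largest_files() {
  local path=$1 count=${2:-10}
  # -xdev stays on one filesystem; du -h gives human-readable sizes
  # that sort -h knows how to order.
  find "$path" -xdev -type f -exec du -h {} + 2>/dev/null |
    sort -rh | head -n "$count"
}

# e.g.: largest_files /var 10
```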
Re your comments:
It sounds like you have a separate storage volume for your database storage. That's good.
You just added the du output to your question above. I see that in your 1.4T disk volume, the largest single file by far is this:
1020G /vol/db1/mysql/_fool_Gerg_sql_200e_4979_main_16419_2_18.tokudb$
This appears to be a TokuDB tablespace. There's information on how TokuDB handles full disks here: https://www.percona.com/doc/percona-server/LATEST/tokudb/tokudb_faq.html#full-disks
I would not remove those files. I'm not as familiar with TokuDB as I am with InnoDB, but I assume those files are important datafiles. If you remove them, you will lose part of your data and you might corrupt the rest of your data.
I found this post, which explains in detail what the files are used for: https://www.percona.com/blog/2014/07/30/examining-the-tokudb-mysql-storage-engine-file-structure/
The manual also says:
Deleting large numbers of rows from an existing table and then closing the table may free some space, but it may not. Deleting rows may simply leave unused space (available for new inserts) inside TokuDB data files rather than shrink the files (internal fragmentation).
So you can DELETE rows from the table, but the physical file on disk may not shrink. Eventually, you could free enough space that you can build a new TokuDB data file with ALTER TABLE <tablename> ENGINE=TokuDB ROW_FORMAT=TOKUDB_SMALL; (see https://dba.stackexchange.com/questions/48190/tokudb-optimize-table-does-not-reclaim-disk-space)
But this will require enough free disk space to build the new table.
So I'm afraid you have painted yourself into a corner. You no longer have enough disk space to rebuild your large table. You should never let the free disk space get smaller than the space required to rebuild your largest table.
At this point, you probably have to use mysqldump to dump data from your largest table. Not necessarily the whole table, but just what you want to keep (read about the mysqldump --where option). Then DROP TABLE to remove that large table entirely. I assume that will free disk space, where using DELETE won't.
You don't have enough space on your db1 volume to save the dump file, so you'll have to save it to another volume. It looks like you have a larger volume on /vol/cbgb1, but I don't know if it's full.
I'd dump the whole thing, for archiving purposes. Then dump again with a subset.
mkdir /vol/cbgb1/backup
mysqldump fool Gerg | gzip -c > /vol/cbgb1/backup/Gerg-dump-full.sql.gz
mysqldump fool Gerg --where "id > 8675309" | gzip -c > /vol/cbgb1/backup/Gerg-dump-partial.sql.gz
I'm totally guessing at the arguments to --where. You'll have to decide how you want to select for a partial dump.
After the big table is dropped, reload the partial data you dumped:
gunzip -c /vol/cbgb1/backup/Gerg-dump-partial.sql.gz | mysql fool
If there are any commands I've given in my examples that you don't already know well, I suggest you learn them before trying them. Or find someone to pair with who is more familiar with those commands.

MySQL backup using AutoMySQLBackup from slave to remote machine

I am trying to set up a daily backup for a MySQL database from a slave server or MySQL instance. My database is a mixture of InnoDB and MyISAM tables. I have installed AutoMySQLBackup on another machine, and I am trying to take a full backup and a daily incremental backup of a MySQL database from that machine with the help of AutoMySQLBackup.
MySQL full backup:
https://dev.mysql.com/doc/mysql-enterprise-backup/3.12/en/mysqlbackup.full.html
Options on Command Line or in Configuration File?
For clarity, the examples in this manual often show some of the
command-line options that are used with the mysqlbackup commands. For
convenience and consistency, you can include those options that remain
unchanged for most backup jobs into the [mysqlbackup] section of the
MySQL configuration file that you supply to mysqlbackup. mysqlbackup
also picks up the options from the [mysqld] section if they are
present there. Putting the options into a configuration file can
simplify backup administration for you: for example, putting port
information into a configuration file, you can avoid the need to edit
your backup scripts each time the database instance switches to a
different port. See Chapter 14, Configuration Files and Parameters for
details about the use of configuration files.
Output in Single Directory or Timestamped Subdirectories?
For convenience, the --with-timestamp option creates uniquely named
subdirectories under the backup directory to hold the output from each
backup job. The timestamped subdirectories make it simpler to
establish retention periods, allowing easy removal and archiving of
backup data that has passed a certain age.
If you do use a single backup directory (that is, if you omit the
--with-timestamp option), either specify a new unique directory name for each backup job, or specify the --force option to overwrite
existing backup files.
For incremental backups that use the --incremental-base option to
specify the directory containing the previous backup, in order to make
the directory names predictable, you might prefer not to use the
--with-timestamp option and instead generate a sequence of directory names with your backup script.
Always Full Backup, or Full Backup plus Incremental Backups?
If your InnoDB data volume is small, or if your database is so busy
that a high percentage of data changes between backups, you might want
to run a full backup each time. However, you can usually save time and
storage space by running periodic full backups and then several
incremental backups in between them, as described in Section 4.3.2,
“Making a Differential or Incremental Backup”.
Use Compression or Not?
Creating a compressed backup can save you considerable storage space
and reduce I/O usage significantly. And with the LZ4 compression
method (introduced since release 3.10), the overhead for processing
compression is quite low. In cases where database backups are moving
from a faster disk system where the active database files sit to a
possibly slower storage, compression will often significantly lower
the overall backup time. It can result in reduced restoration time as
well. In general, we recommend LZ4 compression over no compression for
most users, as LZ4-based backups often finish in a shorter time
period. However, test out MySQL Enterprise Backup within your
environment to determine what is the most efficient approach.
The incremental backup feature is primarily intended for InnoDB tables, or non-InnoDB tables that are read-only or rarely updated. For non-InnoDB files, the entire file is included in an incremental backup if that file changed since the previous backup.
You cannot perform incremental backups with the --compress option.
Incremental backups detect changes at the level of pages in the InnoDB data files, as opposed to table rows; each page that has changed is backed up. Thus, the space and time savings are not exactly proportional to the percentage of changed InnoDB rows or columns.
When an InnoDB table is dropped and you do a subsequent incremental backup, the apply-log step removes the corresponding .ibd file from the full backup directory. Since the backup program cannot have the same insight into the purpose of non-InnoDB files, when a non-InnoDB file is removed between the time of a full backup and a subsequent incremental backup, the apply-log step does not remove that file from the full backup directory. Thus, restoring a backup could result in a deleted file reappearing.
Creating Incremental Backups Using Only the Redo Log
The --incremental-with-redo-log-only might offer some benefits over
the --incremental option for creating an incremental backup:
The changes to InnoDB tables are determined based on the contents of
the InnoDB redo log. Since the redo log files have a fixed size that
you know in advance, it can require less I/O to read the changes from
them than to scan the InnoDB tablespace files to locate the changed
pages, depending on the size of your database, amount of DML activity,
and size of the redo log files.
Since the redo log files act as a circular buffer, with records of
older changes being overwritten as new DML operations take place, you
must take new incremental backups on a predictable schedule dictated
by the size of the log files and the amount of redo data generated for
your workload. Otherwise, the redo log might not reach back far enough
to record all the changes since the previous incremental backup, in
which case mysqlbackup will quickly determine that it cannot proceed
and will return an error. Your backup script should be able to catch
that error and then perform an incremental backup with the
--incremental option instead.
For example:
To calculate the size of the redo log, issue the command SHOW
VARIABLES LIKE 'innodb_log_file%' and, based on the output, multiply
the innodb_log_file_size setting by the value of
innodb_log_files_in_group. To compute the redo log size at the
physical level, look into the datadir directory of the MySQL instance
and sum up the sizes of the files matching the pattern ib_logfile*.
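The logical-level calculation is just a multiplication; a sketch with illustrative values (read your own with SHOW VARIABLES LIKE 'innodb_log_file%'):

```shell
# Total redo log capacity = innodb_log_file_size * innodb_log_files_in_group.
# The values below are illustrative, not defaults.
innodb_log_file_size=$((512 * 1024 * 1024))   # 512M per log file
innodb_log_files_in_group=2
redo_capacity=$((innodb_log_file_size * innodb_log_files_in_group))
echo "$redo_capacity"                          # 1073741824 bytes = 1G
```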
The InnoDB LSN value corresponds to the number of bytes written to the
redo log. To check the LSN at some point in time, issue the command
SHOW ENGINE INNODB STATUS and look under the LOG heading. While
planning your backup strategy, record the LSN values periodically and
subtract the earlier value from the current one to calculate how much
redo data is generated each hour, day, and so on.
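Because the LSN is a byte count, the redo rate falls out of a subtraction; a sketch using illustrative LSN readings:

```shell
# LSN values are byte offsets into the redo stream, so the redo
# generated between two readings is a simple difference.
# The readings below are illustrative.
lsn_morning=2654255717
lsn_evening=2666736714
redo_today=$((lsn_evening - lsn_morning))
echo "$redo_today"   # 12480997 bytes, about 12 MB of redo for the day
```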
Prior to MySQL 5.5, it was common practice to keep the redo logs
fairly small to avoid a long startup time when the MySQL server was
killed rather than shut down normally. With MySQL 5.5 and higher, the
performance of crash recovery is significantly improved, as described
in Optimizing InnoDB Configuration Variables, so that you can make
your redo log files bigger if that helps your backup strategy and your
database workload.
This type of incremental backup is not as forgiving of too-low
--start-lsn values as the standard --incremental option. For example, you cannot make a full backup and then make a series of
--incremental-with-redo-log-only backups all using the same --start-lsn value. Make sure to specify the precise end LSN of the previous backup as the start LSN of the next incremental backup; do
not use arbitrary values.
Note To ensure the LSN values match up exactly between successive
incremental backups, it is recommended that you always use the
--incremental-base option when you use the --incremental-with-redo-log-only option.
To judge whether this type of incremental backup is practical and
efficient for a particular MySQL instance:
Measure how fast the data changes within the InnoDB redo log files.
Check the LSN periodically to decide how much redo data accumulates
over the course of some number of hours or days.
Compare the rate of redo log accumulation with the size of the redo
log files. Use this ratio to see how often to take an incremental
backup, in order to avoid the likelihood of the backup failing because
the historical data are not available in the redo log. For example, if
you are producing 1GB of redo log data per day, and the combined size
of your redo log files is 7GB, you would schedule incremental backups
more frequently than once a week. You might perform incremental
backups every day or two, to avoid a potential issue when a sudden
flurry of updates produced more redo than usual.
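The 1GB-per-day against 7GB-of-log example above reduces to a division; a sketch:

```shell
# Headroom in days before the redo log wraps past the last backup:
# combined redo log size divided by redo generated per day.
# Numbers match the example above (1GB/day, 7GB of log files).
redo_per_day=$((1 * 1024 * 1024 * 1024))
redo_capacity=$((7 * 1024 * 1024 * 1024))
days_of_headroom=$((redo_capacity / redo_per_day))
echo "$days_of_headroom"   # 7 days; schedule backups well inside that window
```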
Benchmark incremental backup times using both the --incremental and
--incremental-with-redo-log-only options, to confirm if the redo log backup technique performs faster and with less overhead than the
traditional incremental backup method. The result could depend on the
size of your data, the amount of DML activity, and the size of your
redo log files. Do your testing on a server with a realistic data
volume and a realistic workload. For example, if you have huge redo
log files, reading them in the course of an incremental backup could
take as long as reading the InnoDB data files using the traditional
incremental technique. Conversely, if your data volume is large,
reading all the data files to find the few changed pages could be less
efficient than processing the much smaller redo log files.
Other Considerations for Incremental Backups
For non-InnoDB files, the entire file is included in an incremental
backup if it has changed since the previous backup, which means the
savings in backup resources are less significant than with InnoDB tables.
When making an incremental backup that is based on a backup (full or
incremental) created using the --no-locking option, use the
--skip-binlog option to skip the backing up of the binary log, as binary log information will be unavailable to mysqlbackup in that
situation.
Examples of Incremental Backups
This example uses mysqlbackup to make an incremental backup of a MySQL server, including all databases and tables. We show two alternatives, one using the --incremental-base option and the other using the --start-lsn option.
With the --incremental-base option, you do not have to keep track of LSN values between one backup and the next. Instead, you can just specify the directory of the previous backup (either full or incremental), and mysqlbackup figures out the starting point for this backup based on the metadata of the earlier one. Because you need a known set of directory names, you might want to use hardcoded names or generate a sequence of names in your own backup script, rather than using the --with-timestamp option.
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--incremental-base=dir:/incr-backup/wednesday \
--incremental-backup-dir=/incr-backup/thursday \
backup
...many lines of output...
mysqlbackup: Backup created in directory '/incr-backup/thursday'
mysqlbackup: start_lsn: 2654255717
mysqlbackup: incremental_base_lsn: 2666733462
mysqlbackup: end_lsn: 2666736714
101208 17:14:58 mysqlbackup: mysqlbackup completed OK!
Note that if your last backup was a single-file instead of a directory backup, you can still use --incremental-base by specifying for dir:directory_path the location of the temporary directory you supplied with the --backup-dir option during the full backup.
As an alternative to specifying --incremental-base=dir:directory_path, you can tell mysqlbackup to query the end_lsn value from the last successful backup as recorded in the backup_history table on the server, using --incremental-base=history:last_backup (this requires that the last backup was made with mysqlbackup connected to the server).
You can also use the --start-lsn option to specify where the incremental backup should start. You have to record the LSN of the previous backup reported by mysqlbackup at the end of the backup:
mysqlbackup: Was able to parse the log up to lsn 2654255716
The number is also recorded in the meta/backup_variables.txt file in the folder specified by --backup-dir during the backup. Supply that number to mysqlbackup using the --start-lsn option; the incremental backup then includes all changes that came after the specified LSN. Since the location of the previous backup is not significant in this case, you can use --with-timestamp to create named subdirectories automatically.
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-backup \
backup
...many lines of output...
mysqlbackup: Backup created in directory '/incr-backup/2010-12-08_17-14-48'
mysqlbackup: start_lsn: 2654255717
mysqlbackup: incremental_base_lsn: 2666733462
mysqlbackup: end_lsn: 2666736714
101208 17:14:58 mysqlbackup: mysqlbackup completed OK!
To create an incremental backup image instead, use the following command, specifying with --incremental-backup-dir a temporary directory for storing the metadata for the backup and some temporary files:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-tmp \
--backup-image=/incr-backup/incremental_image.bi \
backup-to-image
In the following example though, because --backup-image does not provide a full path to the image file to be created, the incremental backup image is created under the folder specified by --incremental-backup-dir:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-images \
--backup-image=incremental_image1.bi \
backup-to-image
https://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/mysqlbackup.incremental.html

Optimize innodb table when innodb_file_per_table disable

I am using InnoDB. I have a problem with app performance. When I run mysqltuner.pl I get:
[--] Data in InnoDB tables: 8G (Tables: 1890)
[!!] Total fragmented tables: 1890
OK, I have run mysqlcheck -p --optimize db.
I have discovered that innodb_file_per_table is disabled, and the database hasn't been reindexed for the last 2 years. How can I do it?
Make a mysqldump?
Stop the mysql service?
Enable innodb_file_per_table in my.cnf?
Start the mysql service?
Import from the mysqldump?
Run mysqlcheck -p --optimize db?
Will everything be ok?
Tables fragmented
Bogus.
All InnoDB tables are always (according to that tool) fragmented.
In reality, only 1 table in 1000 needs to be defragmented.
OPTIMIZE TABLE foo; will defrag a single table. But, again, I say "don't bother".
ibdata1 bloated
On the other hand... If you are concerned about the size of ibdata1, and Data_free (SHOW TABLE STATUS) shows that a large chunk of ibdata1 is "free", then the only cure is painful, as already mentioned: dump, stop, remove ibdata1, restart, reload.
But... If you don't have enough disk space to dump everything, you are in deep weeds.
You have 8GB of tables? (Sum Data_length and Index_length across all InnoDB tables.) If ibdata1 is, say 10GB, don't bother. If it is 100GB, then you have a lot of wasted space. But if you are not running out of disk space, again I say "don't bother".
If you are running out of disk space and ibdata1 seems to have a lot "free", then do the dump etc. But additionally: If you have 1890 tables, probably most are "tiny". Maybe a few are somewhat big? Maybe some table fluctuates dramatically in size (eg, add a million rows, then delete most of them)? I suggest having innodb_file_per_table ON when creating big or fluctuating tables; OFF for tiny tables.
Tiny tables take less space when living in ibdata1; big/fluctuating tables can be cleaned up if needed via OPTIMIZE if ever needed.
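To see which tables are the big or fluctuating candidates before deciding, information_schema can be queried; a sketch (for tables living inside ibdata1, Data_free reports the shared tablespace's free space, so treat it as a rough signal):

```sql
SELECT table_schema, table_name, engine,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE engine = 'InnoDB'
ORDER BY (data_length + index_length) DESC
LIMIT 10;
```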
Yes, that would do it. Take a backup before you start and test the entire process out first on a copy of the host, especially the exact mysqldump command; you don't want databases like information_schema in the dump.
Also ensure that other services cannot connect to your db once you start the export - any data they change would be lost during the import.
There are detailed instructions in this previous question:
Howto: Clean a mysql InnoDB storage engine?

How to shrink ibdata1 file size after moving to innodb_file_per_table?

I've followed the instructions in this thread ( https://dba.stackexchange.com/questions/8982/what-is-the-best-way-to-reduce-the-size-of-ibdata-in-mysql ) successfully, but after 2 months ibdata1 continues to grow. I've tried optimizing tables but it didn't reclaim the lost storage space. How do I shrink the ibdata1 file size? Is there any way to delete it safely?
As you want to reclaim the space from ibdata1 you actually have to
delete the file:
Do a mysqldump of all databases, procedures, triggers etc except the mysql and performance_schema databases
Drop all databases except the above 2 databases
Stop mysql
Delete ibdata1 and ib_log files
Start mysql
Restore from dump
Source

moving InnoDb DB

I have an InnoDB database, innodb_db_1, and I have turned on innodb_file_per_table.
If I go to /var/lib/mysql/innodb_db_1/ I find the files table_name.ibd, table_name.frm, and db.opt.
Now I'm trying to copy these files to another DB, for example innodb_db_2 (/var/lib/mysql/innodb_db_2/), but nothing happens.
If my DB were MyISAM, I could copy the files this way and everything would be OK.
How can I move the DB just by copying the files of an InnoDB database?
Even when you use file-per-table, the tables keep some of their data and metadata in /var/lib/mysql/ibdata1. So you can't just move .ibd files to a new MySQL instance.
You'll have to backup and restore your database. You can use:
mysqldump, included with MySQL; reliable but slow.
mydumper, a community-contributed substitute for mysqldump that supports compression, parallel execution, and other neat features.
Percona XtraBackup, which is free and performs high-speed physical backups of InnoDB (and also supports other storage engines). This is recommended to minimize interruption to your live operations, and also if your database is large.
Re your comment:
No, you cannot just copy .ibd files. You cannot turn off the requirement for ibdata1. This file includes, among other things, a data dictionary which you can think of like a table of contents for a book. It tells InnoDB what tables you have, and which physical file they reside in.
If you just move a .ibd file into another MySQL instance, this does not add it to that instance's data dictionary. So InnoDB doesn't know to look in the new file, or which logical table it belongs to.
If you want a workaround, you could ALTER TABLE mytable ENGINE=MyISAM, move that file and its .frm to another instance, and then ALTER TABLE mytable ENGINE=InnoDB to change it back. Remember to FLUSH TABLES WITH READ LOCK before you move MyISAM files.
But these steps are not for beginners. It would be a lot safer for you to use the backup & restore method unless you know what you're doing. I'm trying to save you some grief.
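For completeness, the engine-swap workaround described above looks roughly like this; mydb.mytable is a placeholder, and as noted, the conversion silently drops foreign key definitions:

```sql
-- On the source server: convert, then freeze the files for copying.
ALTER TABLE mydb.mytable ENGINE=MyISAM;   -- drops any FK definitions!
FLUSH TABLES WITH READ LOCK;              -- keep this session open while copying
-- ...copy mytable.frm / mytable.MYD / mytable.MYI into the target datadir...
UNLOCK TABLES;

-- On the target server, after the copy:
ALTER TABLE mydb.mytable ENGINE=InnoDB;   -- re-add foreign keys by hand afterwards
```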
There is an easy procedure to move a whole MySQL InnoDB installation from PC A to PC B.
The conditions to perform the procedure are:
You need to have innodb_file_per_table option set
You need to be able to make a shutdown of the database
In my case I had to move a whole 150 GB MySQL database (the biggest table was approx. 60 GB). Making SQL dumps and loading them back was not an option (too slow).
So what I did was make a "cold backup" of the MySQL database (see the MySQL docs) and then simply move the files to another computer.
The steps to take after moving the database are described here on DBA Stack Exchange.
I am writing this because (assuming you are able to meet the conditions above) this is by far the fastest (and probably the easiest) method to move a (large) MySQL InnoDB database, and nobody has mentioned it yet.
You can copy MyISAM tables all day long (safely, as long as they are flushed and locked or the server is stopped) but you can't do this with InnoDB, because the two storage engines handle tables and tablespaces very differently.
MyISAM automatically discovers tables by iterating the files in the directory named for the database.
InnoDB has an internal data dictionary stored in the system tablespace (ibdata1). Not only do the tables have to be consistent, there are identifiers in the .ibd files that must match what the data dictionary has stored internally.
Prior to MySQL 5.6, which introduced transportable tablespaces, this wasn't a supported operation. If you are using MySQL 5.6 or later, the link provides information on how this works.
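A rough sketch of the MySQL 5.6 transportable-tablespace flow, with placeholder database and table names (see the linked documentation for the full caveats):

```sql
-- On the source server: quiesce the table and generate a .cfg metadata file.
USE mydb;
FLUSH TABLES t1 FOR EXPORT;
-- ...copy t1.ibd and t1.cfg out of the source datadir while this session holds the lock...
UNLOCK TABLES;

-- On the target server: create an identical empty table, then swap in the files.
CREATE TABLE t1 ( ... ) ENGINE=InnoDB;  -- same definition as on the source
ALTER TABLE t1 DISCARD TABLESPACE;
-- ...copy t1.ibd (and t1.cfg) into the target datadir and fix file ownership...
ALTER TABLE t1 IMPORT TABLESPACE;
```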
The alternatives:
use mysqldump [options] database_name > dumpfile.sql without the --databases option, which will dump the tables in the specified database but will omit any DATABASE commands (DROP DATABASE, CREATE DATABASE and USE), some or all of which, based on the combination of options specified, are normally added to the dump file. You can then import this with mysql [options] < dumpfile.sql.
CREATE TABLE db2.t1 LIKE db1.t1; INSERT INTO db2.t1 SELECT * FROM db1.t1; (for each table; you'll have to add any foreign key constraints back in)
ALTER TABLE on each table, changing it to MyISAM, then flushing and locking the tables with FLUSH TABLES WITH READ LOCK;, copying them over, then altering everything back to InnoDB. Not the greatest idea, since you'll lose all foreign key declarations and have to add them back on the original tables, but it is an alternative.
As far as I know, "hot copying" table files is a very bad idea (I've done it twice, made it work only with MyISAM tables, and only because I had no other choice).
My personal recommendation is: use mysqldump. On your shell:
mysqldump -h yourHost -u yourUser -pYourPassword yourDatabase yourTable > dumpFile.sql
To copy the data from a dump file to another database, on your shell:
mysql -h yourHost -u yourUser -pYourPassword yourNewDatabase < dumpFile.sql
Check: mysqldump — A Database Backup Program.
If you insist on copying InnoDB files by hand, please read this: Backing Up and Recovering an InnoDB Database