I'm trying to take a full dump of my database. While taking a dump, mysqldump skips a few tables, especially those with foreign keys. It's not that every table with foreign keys is skipped. Some specific tables only!
I tried the -f switch. It forced it to include a few tables but still two tables are being skipped.
Is this normal? I mean, does this happen? Does my schema have some problems? How can this be solved?
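One way to narrow this down is to list which tables actually carry foreign keys and compare that against the tables missing from the dump. A minimal sketch, where mydb stands in for the actual database name:

    -- list every table in the schema that has a foreign key defined,
    -- to compare against the tables that mysqldump skipped
    SELECT table_name, constraint_name, referenced_table_name
    FROM information_schema.KEY_COLUMN_USAGE
    WHERE table_schema = 'mydb'            -- placeholder schema name
      AND referenced_table_name IS NOT NULL;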
In reference to @Nikhil's comment on McAfee: I ran into a situation where McAfee was trying to read (and thus was blocking) the temporary files that MySQL creates when in-memory queries spill to on-disk temporary tables. We had to create a specific rule that prevented McAfee from scanning those temporary files so that MySQL wouldn't have issues. In this situation my educated guess would be that McAfee was doing something similar with the mysqldump process.
I tried to search for this case everywhere but couldn't find anything that answers this (probably weird) question: what happens to the incremental backups taken from a MariaDB server using mariabackup if one of the databases is dropped?
Suppose you dropped one of the databases on a MariaDB server and then created an incremental backup afterwards, while the base full backup certainly still includes the dropped database. Does applying the incremental backup when preparing the restore include that removal, or will the dropped database still be present in the fully prepared backup?
PS: I realize that mariabackup uses the InnoDB LSN to back up only the changes/diffs, but do those diffs include the removal of a table or a database?
My guess is that when preparing the incremental backup over the base, it would remove the tables and/or databases that are missing from the latest delta backups, but I might be wrong, which is why I'm asking.
Well, after trying out the scenario I've found that the dropped databases do still exist in the fully prepared backup, but their tables are removed.
So I think database structure changes are also included in the incremental backup: modifications to table columns, foreign keys, and indices, as well as table creation and dropping, are all tracked. Dropping the database itself, however, is NOT tracked, although a dropped database will have all of its tables missing from the backup that results from applying all the incremental backups to the base one.
I need a fresh opinion on the case. Any thoughts are appreciated.
Input: we have a huge Percona MySQL (5.5) database that takes up a couple of terabytes. The tables use the InnoDB engine.
More than half (about 2/3) of that data should be deleted as quickly as possible.
We also have a master-slave configuration.
As the quickest way to achieve that I am considering the following solution:
Execute the following for each table on the slave server (to avoid production downtime); a rough SQL sketch of these steps follows the list:
Stop replication
Select the rows NOT to be deleted into an empty new table that has the same structure as the original table
Rename the original table to "table_old" and the new table to the original name
Drop the renamed table "table_old"
Start replication
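A minimal SQL sketch of those steps, using a made-up table events and a made-up "keep" condition; the real table names and WHERE clause will of course differ:

    -- 1. empty copy with the same structure (note: FK definitions are not copied)
    CREATE TABLE events_new LIKE events;

    -- 2. copy only the rows to keep (hypothetical condition)
    INSERT INTO events_new SELECT * FROM events WHERE created_at >= '2015-01-01';

    -- 3. swap the tables atomically
    RENAME TABLE events TO events_old, events_new TO events;

    -- 4. drop the old data
    DROP TABLE events_old;

Note that CREATE TABLE ... LIKE does not copy foreign key definitions, and foreign keys in other tables that reference events will typically end up pointing at events_old after the rename, which is one reason the answer below suggests dropping the FKs first and recreating them afterwards.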
The problem is that we have a lot of FK constraints. I am also afraid of breaking replication during this process.
Questions:
1) What potential problems can there be with FK constraints in this solution?
2) How do I avoid breaking replication?
3) Opinions? Alternative solutions?
Thank you in advance.
If you can put the DB offline for a while (i.e. no one is accessing it except you), you can go with your solution, but you need to drop the FKs involved beforehand and recreate them afterwards. You should also check for AUTO_INCREMENT columns, whose counters will change with the copy operation.
The FKs are needed if you want the DB to stay online. I had a similar problem with some huge log tables; any attempt to delete all the records at once will probably lock the database or corrupt the table.
So I went for a slower approach: I made a procedure that deletes batches of rows from the tables using the clustered primary key, and then I scheduled it to run every n seconds.
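A minimal sketch of that pattern, with made-up names (a table big_log, clustered primary key id, a purge condition, and a 30-second schedule); the batch size and interval need tuning for the actual workload:

    DELIMITER //
    CREATE PROCEDURE purge_big_log_batch()
    BEGIN
        -- delete one small batch, walking the clustered primary key;
        -- the ORDER BY keeps the statement deterministic for replication
        DELETE FROM big_log
        WHERE created_at < '2014-01-01'    -- hypothetical purge condition
        ORDER BY id
        LIMIT 10000;
    END //
    DELIMITER ;

    -- requires event_scheduler = ON
    CREATE EVENT purge_big_log_event
        ON SCHEDULE EVERY 30 SECOND
        DO CALL purge_big_log_batch();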
We run MySQL 5.05, hosting an older application that still gets lots of love from the users. Unfortunately, I'm nothing more than a hack DBA at best, and I'm very hesitant about my ability to safely migrate to a new version of the database unless absolutely necessary. We are in the process of procuring a new application to take over responsibilities from the old one, but are probably a year or so out.
Anyway, I was patching the application the other day and added a column to a table. The command took a while to complete and in the meantime nearly filled up the drive hosting the datafiles (the table is roughly 25 GB). I believe this was a function of the creation of a temporary table. For reasons I'm not clear on, the space did not become free again after the column was added; i.e., I lost roughly 25 GB of disk space. I believe this was because the database was created with a single datafile; I'm not really sure of the whys, but I do know that I had to free up some space elsewhere to get the drive back to an operable state.
That all being said, I've got the column added, but it is worthless to the application without an index. I have held off adding the index while trying to figure out whether it is going to create another massive, persistent 'temporary' table at index-creation time. Can anyone out there give me insight into:
Will a CREATE INDEX or ALTER TABLE ... ADD INDEX statement result in the creation of a temporary table the same size as the existing table?
How can I recover the space that got added to ibdata1 when I added the column?
Any and all advice is greatly appreciated.
MySQL prior to version 5.1 adds/removes indices on InnoDB tables by building temporary tables. It's very slow and expensive. The only way around this is to either upgrade MySQL to 5.1, or to dump the table with e.g. mysqldump, drop it, recreate it with the new indices, and then restore it from the dump.
You can't shrink ibdata1 at all. Your only solution is to rebuild from scratch. It is possible to configure MySQL so it doesn't use one giant ibdata1 file for all the databases - read that answer and it will explain how to configure MySQL/InnoDB so this doesn't happen again, and also how to safely dump and recreate all your databases.
Ultimately, you probably want to
Make a complete dump of your database
Upgrade to MySQL 5.1 or newer
Turn on InnoDB one-file-per-table mode (a small sketch follows this list)
Restore the dump.
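A small sketch of the one-file-per-table step, assuming MySQL 5.5+ (or 5.1 with the InnoDB plugin); note that the setting only affects tables created after it is enabled, so the dump and restore is still what actually reclaims the space:

    -- check the current setting
    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- on 5.5+ it can be switched on dynamically; on older versions set
    -- innodb_file_per_table in my.cnf and restart the server
    SET GLOBAL innodb_file_per_table = ON;

    -- after the server is rebuilt and the dump restored, each InnoDB table
    -- lives in its own .ibd file and ibdata1 can be recreated at a small size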
I have MySQL Server 5.1.62 installed on a production server. I monitor the MySQL server's error log file every day, and I suddenly found the error below in it:
InnoDB: Cannot delete/update rows with cascading foreign key constraints that exceed max depth of 250
Please drop excessive foreign constraints and try again
I have a database structure with primary key / foreign key relationships and proper update/delete actions, and I need the data in child tables to be deleted when the data in the parent table is deleted by the application or manually (backend).
I have googled this issue but can't find a proper solution. How can I resolve it?
Have a look at this link - Cascade Delete results in "Got error -1 from storage engine". There is a suggestion.
Also, as a workaround you may try to do it without the ON DELETE CASCADE option; just use a DELETE statement that removes records from several tables at once (multiple-table syntax).
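A minimal sketch of that multiple-table DELETE syntax, with made-up parent/child table names and join columns:

    -- remove matching rows from both tables in one statement,
    -- without relying on ON DELETE CASCADE
    DELETE parent, child
    FROM parent
    JOIN child ON child.parent_id = parent.id
    WHERE parent.id IN (1, 2, 3);

    -- if strict FK checks still complain about the deletion order,
    -- delete from the child table first in a separate statement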
The picture of the schema isn't very useful, because it doesn't show any cascading declarations. For example, if deletes are supposed to cascade from tbl_indentmaster to tbl_tepdetails, but deletes are not supposed to cascade from tbl_tepdetails to tbl_tepnoting, then I'd expect some deletes to fail (but with a different error message).
If there is a circular referential constraint causing this, I'd expect it to be caused in part by a cascading reference from tbl_indentmaster to tbl_tepdetails. You might want to try dropping that foreign key constraint for testing. Do that on a test server, not on the production server.
If this started suddenly, and your database worked correctly before, I'd first think about
restoring the database from backup, or
restoring the schema from backup, and reloading the current data, or
checking out the current version and rebuilding the database. (You do have the database schema under version control, don't you?)
I'll assume you don't have a good backup, and that you don't have your schema under version control.
Are you starting with a good database? Run mysqlcheck. Read that documentation carefully. Don't --repair before you have a tested, good backup.
Assuming that your database is good, that cascading deletes ought to work correctly in your database, and that your Google skills are good, I think your best start is to
install MySQL 5.5 or 5.6 on a test server,
load your database onto that test server, and
see whether you can reproduce that specific error.
To load your database onto the test server, dump the contents using mysqldump. Don't copy files at the filesystem level--one or more of them might be corrupt.
Although this might not resolve your issue, it might tell you exactly where the issue is. If it works correctly, you know the problem is probably related to the server version, and that it might be resolved with a version upgrade.
I agree with the original answers by @Devart and @Catcall here, but I'd like to add a few things after exchanging a few comments with the OP.
First, I have reduced the schema image representation to only the tables that are affected by a DELETE query on tbl_indentmaster.
From what I could see there are no circular FK references in this schema diagram.
Also, the OP ran the following query:
DELETE FROM tbl_indentmaster WHERE indentId IN (1,2,3,4,5,6,...,150,151,155,156,....)
That's an awful lot of rows to delete. On inquiring further, the OP says that the query works for smaller subsets of indentIds.
From this I think we can take two possibilities:
There's a bug in MySQL (highly unlikely, but possible) which causes large queries with CASCADE DELETE like yours to fail. Note I am suggesting the possibility of a new bug, not the one already posted. Ideally the number of rows to delete should not matter.
There is a particular indentId entry within tbl_indentmaster which is causing the entire query to fail.
I'd suggest that you first try to diagnose the issue assuming that point (2) is the actual culprit. You can break the DELETE query into smaller chunks and find the offending ids.
If this script is something that has to be periodically executed through code (in a larger application), then you should consider executing the query in smaller chunks there as well (probably 15 ids per query is a good start, IMO). In addition, I'd suggest logging the offending ids to a log file so you know exactly which entries are failing.
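A sketch of what the chunking could look like, reusing the table and column names from the question but with made-up id batches:

    -- run the delete in small batches instead of one huge IN (...) list
    DELETE FROM tbl_indentmaster WHERE indentId IN (1, 2, 3, 4, 5);
    DELETE FROM tbl_indentmaster WHERE indentId IN (6, 7, 8, 9, 10);
    -- ...and so on; a batch that fails can be split further (down to a
    -- single id) to pinpoint the row whose cascade exceeds the depth limit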
I have about 100 databases (all with the same structure, just on different servers) with approximately a dozen tables each. Most tables are small (let's say 100 MB or less). There are occasional edge cases where a table may be large (let's say 4 GB+).
I need to run a series of ALTER TABLE commands on just about every table in each database. Mainly adding some columns to the structure, but also a few changes like changing a column from VARCHAR to TINYTEXT (or vice versa). Also adding a few new indexes (but indexing the new columns, not existing ones, so I assume that isn't a big deal).
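For concreteness, the kinds of statements involved look roughly like this (table and column names are made up):

    -- add a new column
    ALTER TABLE customers ADD COLUMN notes TINYTEXT NULL;

    -- change an existing column's type
    ALTER TABLE customers MODIFY COLUMN nickname TINYTEXT;

    -- index one of the new columns (TEXT-family columns need a prefix length)
    ALTER TABLE customers ADD INDEX idx_notes (notes(32));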
I am wondering how safe this is to do, and if there are any best practices to this process.
First, is there any chance I may corrupt or delete data in the tables? I suspect not, but I need to be certain.
Second, I presume for the larger tables (4GB+), this may be a several-minutes to several-hours process?
Anything and everything I should know about performing ALTER TABLE commands on a production database I am interested in learning.
If it's of any value knowing, I am planning on issuing the commands via phpMyAdmin for the most part.
Thanks -
First off, before applying any changes, make backups. There are two ways you can do it: mysqldump everything, or copy your MySQL data folder.
Secondly, you may want to use mysql from the command line. phpMyAdmin will probably time out (most PHP servers have a timeout of less than 10 minutes), or you might accidentally close the browser.
Here is my suggestion.
You can fail over the apps (make sure there are no connections on any of the DBs).
You can create indexes using CREATE INDEX statements; don't use ALTER TABLE ... ADD INDEX statements.
Do all of this with a script (keep all the statements in a file and run it with SOURCE).
It looks like the table sizes are very small, so this won't create any headaches.
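A small sketch of that suggestion, with made-up table and column names; the statements would live in a file (say add_indexes.sql) that is then run from the mysql client with SOURCE:

    -- contents of add_indexes.sql (placeholder names)
    CREATE INDEX idx_orders_created_at ON orders (created_at);
    CREATE INDEX idx_customers_email ON customers (email);

    -- then, from the mysql command-line client:
    -- SOURCE /path/to/add_indexes.sql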