I have MySQL Server 5.1.62 installed on a production server. I monitor the MySQL server's error log file every day, and I suddenly found the error below in it:
InnoDB: Cannot delete/update rows with cascading foreign key constraints that exceed max depth of 250
Please drop excessive foreign constraints and try again
My database structure has primary key/foreign key relationships with appropriate update/delete actions, and I need the data in child tables to be deleted whenever the data in the parent table is deleted, whether by the application or manually (from the backend).
I have googled this issue but can't find a proper solution. How can I resolve it?
Have a look at this link: Cascade Delete results in "Got error -1 from storage engine". There is a suggestion there.
Also, as a workaround, you may try doing it without the ON DELETE CASCADE option: just use DELETE statements that remove records from several tables (multiple-table syntax).
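A rough sketch of that approach, assuming (from the schema discussed below) that tbl_tepdetails has an indentId column referencing tbl_indentmaster; adjust the names to your real ones:

-- Remove the child rows first, using the multiple-table DELETE syntax:
DELETE d
FROM tbl_tepdetails AS d
JOIN tbl_indentmaster AS m ON d.indentId = m.indentId
WHERE m.indentId IN (1, 2, 3);

-- Then remove the parent rows, with no cascade chain involved:
DELETE FROM tbl_indentmaster
WHERE indentId IN (1, 2, 3);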
The picture of the schema isn't very useful, because it doesn't show any cascading declarations. For example, if deletes are supposed to cascade from tbl_indentmaster to tbl_tepdetails, but deletes are not supposed to cascade from tbl_tepdetails to tbl_tepnoting, then I'd expect some deletes to fail. (But with a different error message.)
If there is a circular referential constraint that's causing this, I'd expect it to be caused in part by a cascading reference from tbl_indentmaster to tbl_tepdetails. You might want to try dropping that foreign key constraint for testing. Do that on a test server, not on the production server.
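To drop it, look up the constraint's real name first; the name below is hypothetical:

SHOW CREATE TABLE tbl_tepdetails;
-- Suppose the output contains: CONSTRAINT `fk_tep_indent` FOREIGN KEY (`indentId`) ...
ALTER TABLE tbl_tepdetails DROP FOREIGN KEY fk_tep_indent;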
If this started suddenly, and your database worked correctly before, I'd first think about
restoring the database from backup, or
restoring the schema from backup, and reloading the current data, or
checking out the current version and rebuilding the database. (You do have the database schema under version control, don't you?)
I'll assume you don't have a good backup, and that you don't have your schema under version control.
Are you starting with a good database? Run mysqlcheck. Read that documentation carefully. Don't --repair before you have a tested, good backup.
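For example, a report-only pass over every database (adjust the credentials to your setup):

# check only; leave --repair alone until you have a tested, good backup
mysqlcheck --check --all-databases -u root -p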
Assuming that your database is good, that cascading deletes ought to work correctly in your database, and that your Google skills are good, I think your best start is to
install MySQL 5.5 or 5.6 on a test server,
load your database onto that test server, and
see whether you can reproduce that specific error.
To load your database onto the test server, dump the contents using mysqldump. Don't copy files at the filesystem level; one or more of them might be corrupt.
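A minimal dump-and-reload sketch; the database name and credentials are placeholders:

# on the production server: logical dump (safe for InnoDB while online)
mysqldump --single-transaction -u root -p your_db > your_db.sql

# on the test server: create the database, then load the dump
mysql -u root -p -e "CREATE DATABASE your_db"
mysql -u root -p your_db < your_db.sql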
Although this might not resolve your issue, it might tell you exactly where the issue is. If it works correctly, you know the problem is probably related to the server version, and that it might be resolved with a version upgrade.
I agree with the original answers by @Devart and @Catcall here, but I'd like to add a few things after exchanging a few comments with the OP.
First, I have reduced the schema image representation to only the tables that are affected by a DELETE query on tbl_indentmaster.
From what I could see there are no circular FK references in this schema diagram.
Also, the OP ran the following query:
DELETE FROM tbl_indentmaster WHERE indentId IN (1,2,3,4,5,6,...,150,151,155,156,....)
That's an awful lot of rows to delete. On enquiring further, the OP claims that the query works for smaller subsets of indentIds.
From this I think there are two possibilities:
There's a bug in MySQL (highly unlikely, but possible) which causes large queries with CASCADE DELETE like yours to fail. Note that I am suggesting the possibility of a new bug, not the one posted already. Ideally the number of rows to delete should not matter.
There is a particular indentId entry within tbl_indentmaster which is causing the entire query to fail.
I'd suggest that you first try to diagnose the issue on the assumption that point (2) is the actual culprit. You can break the DELETE query into smaller chunks and find the offending ids.
If this script is something that has to be executed periodically from code (in a larger application), then you should consider executing the query in smaller chunks there as well (probably 15 ids per query is a good start IMO); see the sketch below. In addition, I'd suggest logging errors with the offending ids in a log file so you know exactly which entries are failing.
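A sketch of the chunked approach in plain SQL; in application code you would loop over the batches, catch the error, and log the failing batch:

-- Run each small batch separately; when one fails, bisect it
-- until you isolate the offending indentId.
DELETE FROM tbl_indentmaster WHERE indentId IN (1, 2, 3, 4, 5);
DELETE FROM tbl_indentmaster WHERE indentId IN (6, 7, 8, 9, 10);
-- ...and so on for the remaining ids.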
I'm in the process of learning MySQL, but have noticed that the results I am getting are occasionally inconsistent with what is described in tutorials and other reference material.
First, I am using MySQL 5.7 (specifically, 5.7.14), and have read that the default table type/storage engine should be InnoDB. However, unless specified otherwise, the tables I create are MyISAM.
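A quick check of what the server uses when CREATE TABLE omits an ENGINE clause:

SHOW VARIABLES LIKE 'default_storage_engine';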
Second, the material I have read indicates that "if a table is renamed, foreign keys pointing to the table are not automatically updated, and must be dropped and recreated manually". However, I have tested this repeatedly and am finding that my foreign keys ARE, in fact, automatically updated. I am not complaining (it seems to me that in most cases this behavior would be preferable), but I am a bit perplexed.
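Roughly the test I ran (the table names here are just illustrative; child.parent_id has a foreign key referencing parent.id):

RENAME TABLE parent TO parent_renamed;
SHOW CREATE TABLE child;
-- with InnoDB, the constraint now shows: REFERENCES `parent_renamed` (`id`)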
Have I read incorrect material (or read the material incorrectly)? Do I have some nonstandard version of MySQL installed? If not, what are some possible explanations for these discrepancies?
I was running my Ruby scripts to load data into MySQL, and got this error:
Mysql::Error: Duplicate entry '4444281482' for key 'PRIMARY'
My primary key is an auto-increment ID (BIGINT). I was running the script in multiple terminals, with different data, using screen, to load into the same table. This problem never happened before, but when it happens, all the scripts in the different terminals tend to hit it. The datasets are different. It seems to happen at random.
What is likely to be the cause?
Why would there be duplicates in an auto-increment field?
You mention that you are running the script from different terminals with different data. According to the MySQL manual, and assuming your engine is InnoDB, since each transaction inserts a different number of rows into a table with an AUTO_INCREMENT column, the engine may not know in advance how many rows will be inserted. This could explain why you are receiving a duplicate-key error. With a table-level lock held to the end of the statement, only one INSERT statement can execute at a time, and the generation of auto-increment numbers won't interleave.
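That locking behavior is governed by innodb_autoinc_lock_mode; you can inspect the current setting like this (changing it requires editing my.cnf and restarting the server):

-- 0 = traditional (table-level lock per statement),
-- 1 = consecutive, 2 = interleaved
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';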
I'm pretty sure I had this problem. It has nothing to do with the client (I mean it's reproducible in my app, the query browser, the CLI client, etc.).
If you don't mind gaps in your id numbering, you can try
ALTER TABLE `tableName` AUTO_INCREMENT = 4444281492;
(Of course you can skip ahead by more than 10 ids, say 100000, to be sure ;) and you can always revert the counter to the old value with the same query.)
This will change your auto-increment counter to a greater number and potentially skip past the invalid ids, although I have no idea what the cause of this issue is (in my case it persisted through a mysqld restart and even an entire machine reboot).
Oh, and I should add: I did this on a dev server. If this is production, I would advise further investigation.
What are some best-practice tips for tinkering, deleting tables, and making reversible changes on a MySQL (non-production) testing server? In my case I'm learning a PHP/MySQL framework.
The only general tool I have in my toolbox is to rename files before I delete them. If there is a problem, I can always return a file to its original name. I would imagine it should be OK to apply the same practice to a database, since clients can lose their connection to a host. Yet how does a web application framework proceed when referential integrity is broken in only one place?
I guess you are referring to transactions. The InnoDB engine in MySQL supports transactions as well as foreign key constraints.
In transactional design, you can execute a bunch of queries that need to run as a single unit in order to be meaningful and to maintain data integrity. A transaction is started; if something goes wrong, it is rolled back, reverting every change made so far; otherwise the entire set of modifications is committed to the database.
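A minimal transaction sketch (the table and column names are hypothetical):

START TRANSACTION;
INSERT INTO orders (customer_id, total) VALUES (42, 99.90);
INSERT INTO order_items (order_id, product_id, qty)
VALUES (LAST_INSERT_ID(), 7, 1);
-- if any statement above had failed, you would issue ROLLBACK instead
COMMIT;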
Foreign keys are constraints for referential data. In a master-detail relationship you cannot, for example, refer to a master record that does not exist. If there is a comments table with a user_id referring to the users.id field, you are not allowed to enter a comment for a non-existent user.
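The comments/users example as rough DDL (the column types are assumptions):

CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE comments (
  id INT AUTO_INCREMENT PRIMARY KEY,
  user_id INT NOT NULL,
  body TEXT,
  FOREIGN KEY (user_id) REFERENCES users (id)
) ENGINE=InnoDB;

-- fails if no user with id 999 exists:
-- INSERT INTO comments (user_id, body) VALUES (999, 'hi');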
Read more here if you will
http://dev.mysql.com/doc/refman/5.0/en/innodb-transaction-model.html
and for foreign keys
http://dev.mysql.com/doc/refman/5.0/en/innodb-foreign-key-constraints.html
A really weird (for me) problem has been occurring lately. In an application that accepts user-submitted data, the following occurs at random:
Rows from the database table where the user-submitted data is stored are disappearing.
Please note that there is NO DELETE, DROP, TRUNCATE, or other SQL statement issued against the table other than INSERT statements.
Could this be a MySQL bug? I did some research on mysql.com (forums, bugs, etc.) and found 2 similar cases, but without getting a solid answer (just suggestions).
Some info you might find useful:
Storage Engine: InnoDB
User Submitted Data sanitized and checked for SQL Injection attempts
I'd appreciate any suggestions or info.
regards,
Here are 3 possibilities:
The data never got to the database in the first place; something happened upstream, so the data never arrived. Maybe intermittent network issues, an overloaded server, or an application bug.
A database transaction was not committed and got rolled back. Maybe a bug in your application code, maybe some invalid data screwed things up, maybe a concurrency exception occurred, etc.
A bug in MySQL.
I'd look at 1. and 2. first.
A table on which you only ever insert (and presumably select) and never update or delete should be really stable. Are you absolutely certain you're protecting thoroughly against SQL injection attacks? Because those could (of course) delete rows and such if successful.
You haven't mentioned which table engine you're using (there are several), but it's well worth running whatever diagnostic tools exist for it on the table in question. For instance, on a MyISAM table, run myisamchk. Or, more generically (this works for several table types), use the CHECK TABLE statement.
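For example (the table name is a placeholder):

-- works for several engines, including InnoDB and MyISAM
CHECK TABLE user_submissions;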
Have you had issues with the underlying storage? It may be worth checking for those.
Activating the binlog and periodically monitoring for DELETE statements can help identify the culprit.
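A sketch of what that looks like; enable log_bin in my.cnf first, and note the log file name below is an example:

-- confirm the binary log is on
SHOW VARIABLES LIKE 'log_bin';
-- then scan recent events for unexpected DELETE statements
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 100;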
One more case to add to the above. There can also be a split between the client-side and server-side parts of an application: client-initiated changes can be processed on the server side by additional code logic.
For example, in our case, a local admin panel updated order records with pay_date = NULL, and the PHP website processed this table to clean up overdue orders. As the PHP logic was developed by another programmer, it looked strange when an order update resulted in records disappearing after some time.
The same goes for cron jobs working on the MySQL database on a schedule.
I'm trying to take a full dump of my database. While taking the dump, mysqldump skips a few tables, especially those with foreign keys. It's not that every table with foreign keys is skipped; only some specific tables!
I tried the -f switch. It forced it to include a few tables, but two tables are still being skipped.
Is this normal? I mean, does this happen? Does my schema have some problems? How can this be solved?
In reference to @Nikhil's comment on McAfee: I ran into a situation where McAfee was trying to read (and thus was blocking) the temporary files that MySQL creates when queries spill from in-memory to on-disk temporary tables. We had to create a specific rule preventing McAfee from scanning those temporary files so that MySQL wouldn't have issues. In this situation, my educated guess would be that McAfee was doing something similar with the mysqldump process.