In our system we have a table that is under constant load overnight (we do many imports at night).
Because we were getting fewer records than usual, we sneaked some checks into the import code, and it turned out that in many update statements the error "Incorrect key file for table './xxx/zzz.MYI'; try to repair it" appears.
I can repair it every day, but the next morning the errors appear again.
How can I find out what the cause is?
Could it be happening because there are too many inserts/updates running at the same time?
Will switching to InnoDB solve the problem?
Does anybody know anything about it?
Also, I collect the errors in the PHP script that does the imports; somehow the table doesn't get labeled as crashed and keeps working.
It's a MyISAM table with about 500,000 rows, and the imports are 12 XML feeds (about 10 MB each) full of car ads that are inserted/updated into the table in question.
Could it be because of the table size?
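For reference, this is roughly what I run each morning to get it back, plus the InnoDB switch I'm asking about (zzz stands in for the real table name):

-- What I run every morning to get the table working again:
REPAIR TABLE zzz;

-- Just to check whether it is flagged as corrupted:
CHECK TABLE zzz;

-- The switch to InnoDB I'm considering:
ALTER TABLE zzz ENGINE=InnoDB;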
thank you!
Related
I have a MySQL table named requests. It's growing so fast. Currently, it has 3 million rows. Also, the current table engine is "InnoDB".
A few days ago, I got this error:
ERROR 1114 (HY000): The table is full
I've resolved the problem (temporarily, I guess) by adding this to the MySQL configuration:
innodb_data_file_path = ibdata1:10M:autoextend
But still, sometimes my server goes down and I have to restart it to make it available again. Recently I emptied that table, and the downtime stopped.
But it will fill up again soon. Any idea how I can fix this issue? Should I use another engine for that table? Should I use a compressed row format for it? Or what?
Note that I sometimes need to select from that table, and it has one index on a column (in addition to the PK).
You are limited by disk space. You must keep an eye on that and take action before you get to "table full".
If you store the data in a MyISAM table, you can go twice as long before getting "table full". If you simply write to a disk file (no database, no queries, etc.), you can squeeze a little more in before "disk full".
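One way to keep an eye on it is a query along these lines against information_schema (the figures for InnoDB tables are estimates, not exact counts):

-- Approximate data + index size per schema, in MB
-- (InnoDB figures are estimates):
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_mb DESC;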
Do you need to "purge" old data? That way you could continue to receive new data without ever hitting "table full". The best way to do that is via InnoDB and PARTITION BY RANGE(TO_DAYS(...)). If you purge anything over a month old, use daily partitions. For, say, 90 days, use weekly partitions. More discussion: http://mysql.rjweb.org/doc.php/partitionmaint
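A rough sketch of what the weekly-partition approach might look like (the created_at column name and the specific week boundaries are just assumptions for illustration; note that MySQL requires the partitioning column to be part of every unique key, including the primary key):

-- Sketch only: assumes a DATETIME column named created_at; the partitioning
-- column must be part of the primary key for this ALTER to succeed.
ALTER TABLE requests
  PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p_w01    VALUES LESS THAN (TO_DAYS('2018-01-08')),
    PARTITION p_w02    VALUES LESS THAN (TO_DAYS('2018-01-15')),
    PARTITION p_future VALUES LESS THAN MAXVALUE
  );

-- Purging a week of old data then becomes a cheap metadata operation
-- instead of a huge DELETE:
ALTER TABLE requests DROP PARTITION p_w01;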
What will you do with the data? Analyze it? Search it? What SQL queries do you envision? Answer those; there could be other tips.
I accidentally wrote code with an infinite loop and added A LOT OF ROWS to a table. The loop ran for around 20-60 seconds (not sure).
So I tried TRUNCATE "lastrequestdone"; but it is not working. I still see the same number of rows in the table.
I tried to drop the table and re-create it, but I found that the rows still existed even after dropping the table and recreating it.
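In case it matters, this is how I checked whether the rows were really still there; from what I understand, the row count phpMyAdmin shows can be an estimate for InnoDB tables, so the exact number has to come from COUNT(*):

-- Exact count (can be slow on a very large table):
SELECT COUNT(*) FROM lastrequestdone;

-- The estimate tools like phpMyAdmin often display (approximate for InnoDB):
SELECT table_rows
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'lastrequestdone';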
I also started to get this message in PHPMyAdmin
The phpMyAdmin configuration storage is not completely configured, some extended features have been deactivated. Find out why.
Or alternately go to 'Operations' tab of any database to set it up there.
and when i click on "find out why"
I get this message
Configuration of pmadb… not OK
General relation features Disabled
Create a database named 'phpmyadmin' and setup the phpMyAdmin configuration storage there.
PHP is really fast; I have no idea how many rows I added...
P.S.: I tried to drop and re-create the table again after 10 minutes, and it is empty now. Sorry for rushing to Stack Overflow, I was scared to death.
But I still get those errors on a yellow background above.
I have inherited a large network database written in Access 2010 VBA.
It is split into front and back ends, with approximately 100 users but only about 5-10 concurrent.
The #Deleted (across all fields) record appears only in one table, which has a Long Integer primary key and around 100k records.
The table doesn't have any relationships so cascade protection isn't the issue.
Every few days a record shows up as #Deleted, which then prevents a form linked to the table (a dynaset) from showing its records.
Compact and repair does remove the #Deleted records, but it also removes the primary key for the table, so that has to be reapplied before saving.
This leads me to believe the record corruption is also affecting the index of the table.
Then, a few days later, the same scenario repeats.
I have recreated the table from scratch, replicating the PK and indexes and transferring the data across, but no luck.
I know running concurrent users can cause data corruption, but this application has run for years without this happening, so I can't believe it has suddenly become a problem.
Does anyone have any ideas as to why this could be happening, or ideas on how to narrow down the potential issues?
I am trying to figure out why under high load (not under normal load) our magento store throws the following error at random intervals:
Payment transaction failed. Reason
SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry
'INV1392428' for key 'UNQ_SALES_FLAT_INVOICE_INCREMENT_ID'
This results in the card being processed but the order not going through. My guess is that transactions are colliding on the DB (we are running InnoDB), but I can't figure out how to set things up so that it "locks" the key properly and keeps duplicates from being created.
Any help is greatly appreciated!
Thanks,
Rick
The increment is done in PHP (Mage_Eav_Model_Entity_Type::Mage_Eav_Model_Entity_Type()), not in the database, so there is a window of time in which it is possible to get two of the same increment IDs. Normally this window is very, very small, though there are two scenarios that could increase it:
1. Extremely high database load that is slowing down save operations.
2. A customization (or possibly Magento itself) that starts a transaction earlier in the save process. Transactions may lock tables and increase the window of opportunity for duplicate increment IDs to be generated. Couple this with #1 and the window for this error to occur will increase.
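If you want to push the generation down to the database so two requests can never read the same value, the usual pattern is to lock the counter row for the duration of the transaction. The sketch below assumes the counter lives in eav_entity_store with an increment_last_id column, and the entity_type_id/store_id values shown are only assumed; verify the names and ids against your schema before using anything like this:

-- Sketch: serialize increment generation by locking the counter row.
START TRANSACTION;

-- Other sessions reading this row FOR UPDATE will block here until we
-- commit, so no two requests can see the same last id.
SELECT increment_last_id
FROM eav_entity_store
WHERE entity_type_id = 6   -- assumed invoice entity type id
  AND store_id = 1         -- assumed store id
FOR UPDATE;

-- Write back the next id (computed in PHP from the value just read):
UPDATE eav_entity_store
SET increment_last_id = 'INV1392429'
WHERE entity_type_id = 6
  AND store_id = 1;

COMMIT;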
My website is experiencing issues at checkout. I'm using Magento Enterprise 1.8 and my checkout module is Idev's Onestepcheckout.
The issue we are seeing is that the eav_entity_store table is taking an exceedingly long time (up to 51 seconds) to return an order number to Mage_Eav_Model_Entity_Type.
What I do know is that the query run to get this is part of a transaction run as 'FOR UPDATE', so the row being accessed is locked until the transaction completes. I've looked at other parts of the code, as well as the PHP code executed while the row is locked (we're using InnoDB, so the lock should be released once the transaction is committed), and I'm just not seeing anything there (or in the slow query logs) that should be causing a lock wait anywhere near 51 seconds.
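For what it's worth, while this was happening I was watching the lock waits with queries along these lines (the information_schema views require MySQL 5.5 or later, so treat this as a sketch of the approach rather than something version-exact):

-- Dump the latest lock/transaction information InnoDB has:
SHOW ENGINE INNODB STATUS;

-- Show which transaction is waiting on which (MySQL 5.5+):
SELECT r.trx_id    AS waiting_trx,
       r.trx_query AS waiting_query,
       b.trx_id    AS blocking_trx,
       b.trx_query AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;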
I have considered that requests may be getting stacked up and slowly creeping up in time as they wait, but I'm seeing the query time jump from 6 ms to 20k ms to 50k ms on consecutive requests (1, 2, 3). It isn't an issue of 100-200 requests stacked up, as there are only a few dozen of these a day.
I'm aware that MySQL uses parent-table locking on foreign keys, but there are no FKs related to this table whatsoever. There are two BTREE indexes that at one point were FKs but have since been altered (that happened years ago). For those who are not Magento-savvy, the eav_entity_store table has fewer than 50 rows and is only 5 columns wide (4 smallint and a varchar). I seriously doubt table size or improper indexing is the culprit. In the spirit of TL;DR, however, I will say that the two BTREE indexes are on the two columns by which we select from this table.
One possibility is that I may need to replace the two indexes with a compound index, as the ONLY reads on this table come from a query that filters on [Column with Index A] AND [Column with Index B]. I simply don't know whether row-level locking would prevent this query from accessing another row in the table with the indexes currently in place.
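To make that concrete, the change I have in mind is something like the following (the existing index names here are placeholders; the two columns are the ones the checkout query filters on):

-- Replace the two single-column BTREE indexes with one composite index
-- matching the lookup pattern (WHERE col_a = ? AND col_b = ?):
ALTER TABLE eav_entity_store
  DROP INDEX idx_entity_type_id,   -- placeholder name
  DROP INDEX idx_store_id,         -- placeholder name
  ADD INDEX idx_entity_type_store (entity_type_id, store_id);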
At this point, I've become convinced that the underlying issue is strictly DB related, but any Magento or MySql advice regarding this would be greatly appreciated. Anybody still actually reading this can hopefully appreciate that I have exhausted a number of options already and am seriously stumped here. Any info that you think may help is welcome. Thanks.
Edit: The exact error we are seeing is:
Error message: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction
Issue solved. It wasn't a problem with MySQL. For some reason, generation of invoice numbers was taking an obscene amount of time. The company doesn't use invoices from Magento, so we turned them off. Problem solved. No full RCA was done on what specifically the problem with invoice generation was.