I have inherited a large network database written in Access 2010 VBA.
It is split into front and back ends, with roughly 100 users but only about 5-10 concurrent.
The #Deleted record (shown across all fields) appears only in one table, which has a Long Integer primary key and around 100k records.
The table doesn't have any relationships, so cascade protection isn't the issue.
Every few days a record shows up as #Deleted, which then prevents a form bound to the table (as a dynaset) from displaying its records.
Compact and Repair does remove the #Deleted records, but it also removes the table's primary key, which then has to be reapplied before saving.
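For what it's worth, re-applying the key can be scripted rather than redone by hand in the table designer. A rough sketch in Access/Jet DDL, with MyTable and ID standing in for the real table and key field names:

    -- Hypothetical Access/Jet DDL, run via CurrentDb.Execute or a DDL query:
    -- re-create the primary key that Compact and Repair dropped.
    ALTER TABLE MyTable ADD CONSTRAINT PrimaryKey PRIMARY KEY (ID);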
This leads me to believe the record corruption is also affecting the table's index.
Then, a few days later, the same scenario repeats.
I have recreated the table from scratch, replicating the PK and indexes and transferring the data across, but no luck.
I know that running concurrent users can cause data corruption, but this application has run for years without this happening, so I can't believe it has suddenly become a problem.
Does anyone have any ideas as to why this could be happening, or how to narrow down the potential causes?
Related
I run a TYPO3 website with nearly 80 tables. TYPO3 doesn't really delete records; it only writes a "1" into the deleted field of the table to mark them. This leads to big tables with many records that are not visible in the application but still have to be processed in every database query.
My question is: how many dead entries can you keep before facing disadvantages like a performance decrease? Is there a known threshold, regardless of server hardware?
Thanks in advance!
TYPO3 includes a scheduler task to clean up old/deleted entries, called:
Table garbage collection: cleans up old records from any table in the database.
See https://docs.typo3.org/c/typo3/cms-scheduler/master/en-us/Installation/BaseTasks/Index.html#table-garbage-collection-task
You can decide which kinds of entries should be cleaned up and at what interval, depending on your use case and your server environment.
It depends. If there are good indexes, the extra rows may not hurt performance much. Are you seeing a slowdown? (There's an old saying: "If it ain't broke, don't fix it.")
Something like DELETE FROM t WHERE deleted may be a viable way to clean up table t. But it may run into issues with FOREIGN KEYs.
How many rows in the tables? If there are millions of rows to DELETE, it gets tricky to do the task without bringing the system to its knees.
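If the cleanup does need to touch a very large number of rows, one hedged approach (assuming MySQL and the deleted = 1 flag TYPO3 writes, with t standing in for the real table name) is to delete in batches so each statement holds locks only briefly:

    -- Batched cleanup sketch: remove soft-deleted rows a chunk at a time and
    -- repeat the statement until it reports 0 affected rows.
    DELETE FROM t
    WHERE deleted = 1
    LIMIT 1000;

Running that repeatedly (for example from the scheduler) keeps each pass short instead of issuing one long, table-locking DELETE.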
I have an application that uses a MySQL database.
Recently I have noticed that some records just disappear from the table.
The table has over 30,000 rows. It is a real pain to find what is missing.
Is there any way to lock these rows so they can't be deleted? This morning I found rows 35748-35754 missing; the same thing happened last month, and I am afraid it will happen again.
I am using the MyISAM storage engine; should I switch to InnoDB? The table is used very often for inserting and reading data, as well as row updates. I switched to InnoDB once, about a year ago, but then the app was very slow, so I had to go back to MyISAM.
Is there a query I can run to show me which IDs are missing from the table? The ID is an auto-increment column.
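A minimal sketch of such a gap-finding query, assuming a hypothetical table name mytable and the auto-increment column id:

    -- List the first missing id after each row whose direct successor is gone.
    SELECT t1.id + 1 AS first_missing_id
    FROM mytable t1
    LEFT JOIN mytable t2 ON t2.id = t1.id + 1
    WHERE t2.id IS NULL
      AND t1.id < (SELECT MAX(id) FROM mytable);

Each first_missing_id marks the start of a gap such as the 35748-35754 range above.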
Any suggestions on how to make this not happen again?
I'm having another issue with a Microsoft Access database. Every so often, some records get corrupted: strange shapes, Chinese characters, and wrong data appear in the records. I found a way to avoid losing the corrupted records by keeping a backup of that table that I update every day. Still, it's a bit of an annoyance, especially when an update is run.
I've looked for different solutions to this problem, but none have really worked. The database can be used by multiple users at the same time. It's an older one that I've had to update a bit. I don't have any Memo fields in the table either.
If you are using an autonumber field as a primary key, that could cause an increased corruption risk if the autonumber seed is reset and begins duplicating existing values. This has since been fixed, but you may need to update your Jet engine service pack.
If you are in a multi-user environment and have not split your database, you should try that. You can split the database from the Database Tools tab on the ribbon, in the "Move Data" section. Splitting can reduce corruption risk by better managing concurrent updates to the same record.
Unfortunately, I can't tell you the exact problem without more information about your tables and relationships. If the corruption commonly follows your update query, I would start by looking through your update routine for errors.
In our system we have a table that comes under heavy load overnight (we run many imports at night).
Because we were getting fewer records than usual, we added some checks to the import code, and it turned out that many update statements fail with the error "Incorrect key file for table './xxx/zzz.MYI'; try to repair it".
I can repair it every day, but the next morning the errors appear again.
How can I find out what the cause is?
Could it be happening because there are too many inserts/updates at the same time?
Will switching to InnoDB solve the problem?
Does anybody know anything about it?
Also, I collect the errors in the PHP code that does the imports; somehow the table doesn't get labeled as crashed and keeps working.
It's a MyISAM table with about 500,000 rows, and the imports are 12 XML feeds (of about 10 MB each) full of car ads that are inserted/updated into the table in question.
Could it be due to the table size?
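For reference, the check/repair cycle and a possible engine switch can both be expressed in plain SQL — a rough sketch, with zzz standing in for the table named in the error:

    -- Check the MyISAM table (data and index file) for corruption:
    CHECK TABLE zzz;

    -- Rebuild the .MYI index if the check reports errors (what the daily repair does):
    REPAIR TABLE zzz;

    -- One possible longer-term fix: convert to InnoDB, which uses row-level
    -- locking and crash recovery instead of the MyISAM index file.
    ALTER TABLE zzz ENGINE=InnoDB;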
Thank you!
Scenario
I have an hourly cron job that inserts roughly 25k entries into a table of about 7 million rows. My primary key is a composite of 5 different fields. I did this so that I wouldn't have to search the table for duplicates prior to insert, assuming the dupes would just fall to the floor on insert. Due to PHP memory issues I was seeing while reading these 25k entries in (downloading multiple JSON files from a URL and constructing insert queries), I break the entries into 2k chunks and insert them at once via INSERT INTO blah (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);. Lastly, I should probably mention I'm on DreamHost, so I doubt my server/db setup is all that great. Oh, and the db is MyISAM (the default).
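One detail worth spelling out: for duplicates to simply fall to the floor rather than abort the whole multi-row statement, the insert presumably uses the IGNORE modifier — a minimal sketch reusing the placeholder names above:

    -- IGNORE turns duplicate-key errors into warnings, so rows that collide
    -- with the composite primary key are skipped instead of failing the batch.
    INSERT IGNORE INTO blah (a, b, c)
    VALUES (1, 2, 3),
           (4, 5, 6),
           (7, 8, 9);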
Problem
Each 2k-chunk insert takes roughly 20-30 seconds (resulting in about a 10-minute total script time, including 2 minutes for downloading 6k JSON files), and while this is happening, user SELECTs from that table appear to be blocked/delayed, making the website unresponsive. My guess is that the slowdown comes from the insert having to index the 5-field PK into a table of 7 million rows.
What I'm considering
I originally thought enabling concurrent inserts/selects would help the unresponsive site, but as far as I can tell, my table is already MyISAM and I have concurrent inserts enabled.
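For reference, whether concurrent inserts can actually kick in is easy to check on the server — a minimal sketch (blah again standing in for the real table):

    -- Show the MyISAM concurrent_insert setting (1/AUTO by default).
    SHOW VARIABLES LIKE 'concurrent_insert';

    -- With the default AUTO setting, concurrent inserts only happen while the
    -- data file has no holes from deleted rows; OPTIMIZE TABLE removes holes.
    OPTIMIZE TABLE blah;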
I read that LOAD DATA INFILE is a lot faster, so I was thinking of maybe inserting all my values into an empty temp table that will be mostly collision-free (besides dupes from the current hour), exporting those with SELECT * INTO OUTFILE, and then using LOAD DATA INFILE, but I don't know if the overhead of inserting and writing negates the speed benefit. Also, the guides I've read talk about further optimizing by disabling my indexes prior to insert, but I think that would break my method of avoiding duplicates on insert...
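A rough sketch of that staging idea, assuming a hypothetical staging table blah_staging with the same structure as blah and a server-writable dump path:

    -- Dump the staged, mostly-deduplicated rows to a flat file...
    SELECT * INTO OUTFILE '/tmp/blah_batch.csv'
      FIELDS TERMINATED BY ',' ENCLOSED BY '"'
      LINES TERMINATED BY '\n'
    FROM blah_staging;

    -- ...then bulk-load them; the IGNORE keyword skips duplicate-key rows,
    -- preserving the dedupe-on-insert behaviour without disabling indexes.
    LOAD DATA INFILE '/tmp/blah_batch.csv'
      IGNORE INTO TABLE blah
      FIELDS TERMINATED BY ',' ENCLOSED BY '"'
      LINES TERMINATED BY '\n';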
It's probably obvious that I'm a bit clueless here; I know just enough to get myself really confused about what to do next. Any advice on how to speed up the inserts, or just keep SELECTs responsive while these inserts are running, would be greatly appreciated.