I have an application which is using a MySQL database.
Recently I have been noticing that some records simply disappear from one of the tables.
The table has over 30,000 rows, so it is a real pain to find out what is missing.
Is there any way to lock these rows so they cannot be deleted? This morning I found that rows 35748 - 35754 were missing; the same thing happened the previous month, and I am afraid it will happen again.
I am using the MyISAM storage engine; should I switch to InnoDB? The table is used very often for inserting and reading data, as well as for row updates. I switched to InnoDB once, but then the app was very slow, so I had to return to MyISAM. That happened a year ago.
Is there a query I can run to show me which records are missing across all IDs in the table? The ID column is an auto-increment. (A sketch of the kind of query I have in mind is below.)
Any suggestions on how to make this not happen again?
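For reference, this is roughly the kind of gap-finding query I have in mind (it assumes the table is called my_table and the auto-increment column is simply id; the names are placeholders):

-- For each id whose successor is missing, report the start and end of the gap
SELECT t1.id + 1 AS gap_start,
       MIN(t2.id) - 1 AS gap_end
FROM my_table t1
JOIN my_table t2 ON t2.id > t1.id
WHERE NOT EXISTS (SELECT 1 FROM my_table t3 WHERE t3.id = t1.id + 1)
GROUP BY t1.id;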
Related
I have two tables, and each of them receives about 1 million records. I am using a cron job every night to insert the records. For the first table I truncate the table first and then insert the records; for the second table I update and insert records according to the primary key. I am using MySQL as my database. My problem is that I need to do this task every day, but I am unable to insert all the data. So what could be a possible solution for this problem?
The important thing is to turn off all the actions and checks MySQL wants to perform while the data is being inserted, such as autocommit, index maintenance, and so on.
https://dev.mysql.com/doc/refman/5.7/en/optimizing-innodb-bulk-data-loading.html
If you do not do this, MySQL does a lot of work after every record that is added, and that overhead keeps accumulating as the process goes on, so the processing and importing become very slow towards the end and the job may not complete within one day.
If you must use MySQL: for the first table, disable the indexes, do the inserts, then re-enable the indexes. This will work faster.
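A minimal sketch of what such a load wrapper could look like (the table name first_table is a placeholder; DISABLE/ENABLE KEYS applies to MyISAM tables, while the SET statements mainly help InnoDB):

ALTER TABLE first_table DISABLE KEYS;   -- MyISAM: stop maintaining non-unique indexes during the load

SET autocommit = 0;                     -- InnoDB: group the inserts into one transaction
SET unique_checks = 0;
SET foreign_key_checks = 0;

-- ... the bulk INSERT / LOAD DATA statements go here ...

COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;
SET autocommit = 1;

ALTER TABLE first_table ENABLE KEYS;    -- rebuild the non-unique indexes in one pass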
Alternatively, MongoDB will be faster, and Redis is very fast.
We have a large MySQL 5.5 database in which many rows are inserted daily and never deleted or updated. There are also users querying the live database. Tables are MyISAM.
But it is effectively impossible to run ANALYZE TABLE because it takes way too long (15 hours, and it sometimes crashes the tables), and so the query optimizer will often pick the wrong index.
We want to try switching to all InnoDB. Will we need to run ANALYZE TABLES or not?
The MySQL docs say:
The cardinality (the number of different key values) in every index of a table is calculated when a table is opened, at SHOW TABLE STATUS and ANALYZE TABLE and on other circumstances (like when the table has changed too much).
But that raises the question: when is a table opened? If that means whenever it is accessed during a connection, then we need do nothing special. But I do not think that is the case for InnoDB.
So what is the best approach? Run ANALYZE TABLE periodically? Perhaps with an increased dive count?
Or will it all happen automatically?
The querying users use apps to get the data, so each run is a separate connection. They generally do NOT expect the rows to be up to date to within just minutes.
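Concretely, the kind of periodic statistics job I am considering looks roughly like this (only a sketch: the table name is a placeholder, and innodb_stats_persistent_sample_pages assumes the persistent-statistics scheme of MySQL 5.6 and later):

SET GLOBAL innodb_stats_persistent_sample_pages = 64;  -- a larger sample ("dive count") gives more accurate cardinality estimates
ANALYZE TABLE big_log_table;                            -- with persistent stats this samples index pages instead of scanning the whole table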
I actually ran an OPTIMIZE TABLE query on one table, and after that I did not perform any operation on the table. I now run OPTIMIZE TABLE again at the end of every month, but the data in the table may change only once every four or eight months. Does this create any problem for the performance of MySQL queries?
If you don't do DML operations on the table, OPTIMIZE TABLE is useless.
OPTIMIZE TABLE clears the table of deleted records, sorts the index pages (bringing the physical order of the pages into line with the logical one) and recalculates the statistics.
For the duration of the command, the table is unavailable both for reading and writing, and the command may take long for large tables.
Did you read the manual about OPTIMIZE? And do you have a problem you want to solve using OPTIMIZE? If not, don't use this statement at all.
If the data doesn't really change over a period of 4-8 months, it should not create any issue with performance for the end-of-month report.
However, if the number of rows that changed in that 4-8 month period is huge, then you would want to rebuild the indexes / analyze the tables so that the queries run fine after the load.
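A small sketch of how you could decide whether OPTIMIZE TABLE is worth running at all (the schema and table names are placeholders):

-- How much reclaimable space has accumulated since the last rebuild?
SELECT table_name, data_free
FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'my_table';

-- Only worth it if data_free is large after many deletes/updates
OPTIMIZE TABLE mydb.my_table;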
I have a table with 8 million records in MySQL.
I want to keep the last week of data and delete the rest; I can take a dump and recreate the table in another schema.
I am struggling to get the queries right, so please share your views and the best approaches to do this. What is the best way to delete so that it will not affect other tables in production?
Thanks.
MySQL offers you a feature called partitioning. You can do a horizontal partition and split your tables by rows. 8 million isn't that much; what is the insertion rate per week?
CREATE TABLE MyVeryLargeTable (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  my_date DATE NOT NULL,
  -- your other columns
  PRIMARY KEY (id, my_date)  -- the partitioning column must be part of every unique key
) PARTITION BY HASH (YEARWEEK(my_date)) PARTITIONS 4;
You can read more about it here: http://dev.mysql.com/doc/refman/5.1/en/partitioning.html
Edit: This one creates 4 partitions, so it will last for 4 weeks; therefore I suggest changing to partitions based on months or years. The partition limit is quite high, but this is really a question of what the insertion rate per week/month/year looks like.
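For month-based partitions, a RANGE-partitioned variant could look roughly like this (the boundary dates are placeholders; an old month can then be dropped instantly with ALTER TABLE ... DROP PARTITION):

CREATE TABLE MyVeryLargeTable (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  my_date DATE NOT NULL,
  -- your other columns
  PRIMARY KEY (id, my_date)
) PARTITION BY RANGE (TO_DAYS(my_date)) (
  PARTITION p2012_01 VALUES LESS THAN (TO_DAYS('2012-02-01')),
  PARTITION p2012_02 VALUES LESS THAN (TO_DAYS('2012-03-01')),
  PARTITION pmax     VALUES LESS THAN MAXVALUE
);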
Edit 2
MySQL 5.0 comes with an Archive engine; you should use this for your archive table ( http://dev.mysql.com/tech-resources/articles/storage-engine.html ). Now, how do you get your data into the archive table? It seems like you have to write a cron job that runs at the beginning of every week, moving the old records to the archive table and deleting them from the original one. You could write a stored procedure for this, but the cron job needs to run on the shell. Keep in mind this could affect your data integrity in some way. What about upgrading to MySQL 5.1?
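A sketch of what that weekly cron step could run (the table and column names follow the example above and are only placeholders):

-- Copy everything older than one week into the archive table, then remove it from the live table
INSERT INTO MyArchiveTable
SELECT * FROM MyVeryLargeTable
WHERE my_date < CURDATE() - INTERVAL 7 DAY;

DELETE FROM MyVeryLargeTable
WHERE my_date < CURDATE() - INTERVAL 7 DAY;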
I have a table in MySQL 5 (InnoDB) that is used as a daemon Processing Queue, thus it is being accessed very often. It is typical to have around 250 000 records inserted per day. When I select records to be processed, they are read using a FOR UPDATE query to eliminate race conditions (everything is Transaction Based).
Now I am developing a "queue archive" and I have stumbled into a serious deadlock problem. I need to delete "executed" records from the table as they are being processed (live), yet the table deadlocks every once in a while if I do so (two to three times per day).
I thought of moving towards delayed deletion (once per day at low-load times), but this will not eliminate the problem, only make it less obvious.
Is there a common-practice in dealing with high-load tables in MySQL?
InnoDB locks all rows it examines, not only those requested.
See this question for more details.
You need to create an index that would exactly match your search condition to get rid of unnecessary locks, and make sure it is used.
Unfortunately, DML queries in MySQL do not accept hints.
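A rough sketch of what that can look like for a queue table (the table, column, and index names are purely illustrative):

-- An index that exactly matches the worker's search condition
ALTER TABLE processing_queue ADD INDEX idx_status_created (status, created_at);

-- The worker then only touches (and locks) the small indexed slice it actually needs
START TRANSACTION;
SELECT id
FROM processing_queue
WHERE status = 'pending'
ORDER BY created_at
LIMIT 10
FOR UPDATE;
-- ... process the rows, DELETE them by id ...
COMMIT;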