MyISAM table-level locking / deadlock - MySQL

I have a set of tables; one of them is InnoDB and the others are MyISAM.
My problem is that my website is not responding at all. When I check the process list in MySQL, I see this:
a query on the table holding the list of all questions is in the "Sending data" state, while some other processes are waiting to update a column of that table and still others are waiting to insert rows into the same table.
This table contains the list of questions, its engine is MyISAM, and it is about 5 GB in size.
How can I resolve this?

There are a couple of solutions for this; some may apply, some may not.
Add indexes to some columns
Change the storage engine to InnoDB
Take a look at your queries and improve their performance
When you add indexes to your table, lookups will be much faster and therefore the table lock won't be held as long.
When you change the storage engine to InnoDB, a lock only affects the row(s) a query is using, so other rows remain available to other queries.
In the queries themselves, a lot of performance can often be gained by removing unnecessary joins or ORDER BY clauses. Maybe you can use temporary tables instead of multiple subselects, etc...
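A minimal sketch of the first two suggestions, assuming a placeholder table questions with a status column and index name idx_status (none of these names come from the question, so adjust them to your schema):

-- Index the column your slow queries filter on:
ALTER TABLE questions ADD INDEX idx_status (status);

-- Switch to InnoDB to get row-level instead of table-level locking:
ALTER TABLE questions ENGINE = InnoDB;

-- Check how a problematic query is executed before and after the change:
EXPLAIN SELECT * FROM questions WHERE status = 'open';

Keep in mind that on a 5 GB MyISAM table both ALTER statements themselves hold a lock while they rebuild the table, so run them in a maintenance window.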

Related

Updating MySQL Innodb Index Statistics

We have a large MySQL 5.5 database in which many rows are inserted daily and never deleted or updated. There are also users querying the live database. Tables are MyISAM.
But it is effectively impossible to run ANALYZE TABLE because it takes way too long (15 hours, and it sometimes crashes the tables), and so the query optimizer will often pick the wrong index.
We want to try switching to all InnoDB. Will we need to run ANALYZE TABLES or not?
The MySQL docs say:
The cardinality (the number of different key values) in every index of a table
is calculated when a table is opened, at SHOW TABLE STATUS and ANALYZE TABLE and
on other circumstances (like when the table has changed too much).
But that raises the question: when is a table opened? If that means accessed during a connection, then we need do nothing special. But I do not think that is the case for InnoDB.
So what is the best approach? Run ANALYZE TABLE periodically? Perhaps with an increased dive count?
Or will it all happen automatically?
The querying users use apps to get the data, so each run is a separate connection. They generally do NOT expect the rows to be up to date within just minutes.
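For illustration, a hedged sketch of the "run ANALYZE TABLE periodically with an increased dive count" idea mentioned above; innodb_stats_sample_pages is the MySQL 5.5 variable name (later versions renamed it), and big_table is a placeholder, so check both against your server:

-- Raise the number of index dives used when statistics are sampled
-- (this is the 5.5 variable name; verify it for your version):
SET GLOBAL innodb_stats_sample_pages = 64;

-- Then refresh statistics from a nightly job or during a quiet period:
ANALYZE TABLE big_table;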

OPTIMIZE TABLE on huge MySQL tables without partitioning

We have a huge MySQL table which is MyISAM. Whenever we run the OPTIMIZE TABLE command, the table is locked and performance is impacted. The table is not read-only, so creating temporary tables and swapping them may not work out. We are also unable to partition the table.
Is there any other way or tool to achieve the OPTIMIZE TABLE functionality without degrading performance? Any suggestion would be of great help.
Thanks in advance.
http://dev.mysql.com/doc/refman/5.5/en/optimize-table.html
For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE, which
rebuilds the table (...)
Therefore, I would not expect any improvement in switching to InnoDB, as Quassnoi probably suggests.
By definition, OPTIMIZE TABLE needs exclusive access to the table, hence the degraded performance while the OPTIMIZE runs.
Nevertheless, there are some steps you can take to reduce the time OPTIMIZE needs, depending on in which way your table is "huge":
if your table has many fields, it might need to be normalized. Conversely, you might want to de-normalize it by spreading the columns across several "narrower" tables linked by one-to-one relations.
if your table has many records, implement "manual" partitioning in your application code. A simple step would be to create an "archive" table that holds rarely updated records; this way you only need to optimize the smaller set of records in the non-archive table (a sketch follows below).
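A rough sketch of that manual archive split, assuming placeholder names big_table / big_table_archive and a created_at column to decide what counts as "rarely updated":

-- Create an archive table with the same structure:
CREATE TABLE big_table_archive LIKE big_table;

-- Move rarely-updated rows (here: older than one year) into the archive:
INSERT INTO big_table_archive
  SELECT * FROM big_table WHERE created_at < NOW() - INTERVAL 1 YEAR;
DELETE FROM big_table WHERE created_at < NOW() - INTERVAL 1 YEAR;

-- OPTIMIZE now only has to rebuild the much smaller active table:
OPTIMIZE TABLE big_table;

Since the INSERT/DELETE pair is not atomic, run the move during a low-traffic window.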
The OPTIMIZE TABLE command locks the table, which degrades performance.
You can use the Percona Toolkit's pt-online-schema-change to rebuild the table instead;
it does not lock the table while the rebuild runs.
See the link below:
https://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html

Will separating an updated column into another table help with optimization (MySQL MyISAM table)?

MySQL MyISAM "Table1" having 70% select , 13% update and 0.67% insert statements approximate.
There is one "count_column(int)" which used to increase count with primary key.(Update statements)
Updating of "count_column" make table select queries in "Waiting for table level lock"
So separating "count_column" in other table will reduce "Waiting for table level lock" or not?
I also need separated column in select statements with join.
Thanks, Yogs
AFAIK your locking problem is the COUNT combined with the INSERTs, not the UPDATE itself - but you must have a huge bunch of SELECTs. Your question is missing quite a few details...
COUNT is heavily optimized on MyISAM tables; if you encounter problems with it, you could consider a count estimate or a MEMORY table holding the value :-\ MyISAM stores an exact row count that the storage engine can return extremely quickly, so you may even have slowed MySQL down with your solution. "Slow" COUNT applies to engines like InnoDB because of their transactional nature.
One other thing to consider: storing the count in a column of the table itself adds a column to every row, which is quite wasteful.
And if you are using triggers to accomplish that you should be aware of http://dev.mysql.com/doc/refman/5.0/en/faqs-triggers.html#qandaitem-B-5-1-12 :)
Moving the frequently updated column into another table will greatly reduce the number of locks on the main table and speed up SELECTs against it. Converting the table to InnoDB can also help (as long as you are not using full-text indexes - they are still not supported by InnoDB in MySQL 5.5), since InnoDB uses row-level locks instead of table-level locks. If you have a lot of queries, take a look at this article about implementing efficient counters.
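A sketch of what that split could look like. Only Table1 comes from the question; question_counts, view_count, question_id and q.id are hypothetical names standing in for your real schema:

-- Small, hot table that absorbs the frequent UPDATEs:
CREATE TABLE question_counts (
  question_id INT NOT NULL PRIMARY KEY,
  view_count  INT NOT NULL DEFAULT 0
) ENGINE = InnoDB;  -- row-level locking for the counter updates

-- The counter update no longer blocks SELECTs on Table1:
UPDATE question_counts SET view_count = view_count + 1 WHERE question_id = 42;

-- Reads pull the count back in with a JOIN, as the question requires:
SELECT q.*, c.view_count
FROM Table1 q
JOIN question_counts c ON c.question_id = q.id;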

UPDATE in MySQL InnoDB with a million records

MySQL InnoDB UPDATE issue:
Once I receive a response (status) for a record, I need to write that response to a very large table (approximately 1 million records, and it will keep growing), and this may happen perhaps 100 times per second. Will there be any performance issue? Or is there any setting I can modify to avoid table locking or slow queries?
Thanks.
It sounds like a design issue.
Instead of storing the flag (which the status-record update changes) in a million data records, you should store a reference in each data record pointing to the status record. Then, when you update the status record, no further DB operation is required. Also, when you scan through the data records, you should JOIN to the status records (if the status needs to be displayed). If status-record changes occur often, this is better than updating millions of data records.
Maybe I'm wrong; you should describe the DB (structure, table record counts) for more accurate answers.
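A minimal sketch of that design, with hypothetical table and column names (statuses, records, status_id, payload) since the question does not give the real schema:

-- One row per distinct status:
CREATE TABLE statuses (
  status_id INT NOT NULL PRIMARY KEY,
  label     VARCHAR(32) NOT NULL
) ENGINE = InnoDB;

-- Data records only reference the status instead of carrying it:
CREATE TABLE records (
  record_id INT NOT NULL PRIMARY KEY,
  status_id INT NOT NULL,
  payload   TEXT,
  KEY idx_status (status_id)
) ENGINE = InnoDB;

-- A shared status change touches one row, not a million:
UPDATE statuses SET label = 'processed' WHERE status_id = 3;

-- Reads JOIN the status in only when it needs to be displayed:
SELECT r.record_id, r.payload, s.label
FROM records r
JOIN statuses s ON s.status_id = r.status_id;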
If you store your table using the MyISAM storage engine, then your table will lock with every update.
However, the InnoDB storage engine is capable of locking individual rows.
If you need to UPDATE multiple records simultaneously, InnoDB may be better.
Any indexes you have on the database (especially clustered indexes) will slow your writes down.
Indexes speed up reading, but they slow down writing. Most databases get read more than written to, but it sounds like yours gets written to much more.
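If writes dominate, it can also be worth auditing which indexes you really need; a small sketch with placeholder names (records, idx_rarely_used):

-- List the indexes that every INSERT/UPDATE has to maintain:
SHOW INDEX FROM records;

-- Dropping an index your reads never use speeds up every write:
ALTER TABLE records DROP INDEX idx_rarely_used;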

Generating a massive 150M-row MySQL table

I have a C program that mines a huge data source (20 GB of raw text) and generates loads of INSERTs to execute on a simple blank table (4 integer columns with 1 primary key). Set up as a MEMORY table, the entire task completes in 8 hours. After finishing, about 150 million rows exist in the table. Eight hours is a completely decent number for me. This is a one-time deal.
The problem comes when trying to convert the MEMORY table back into MyISAM so that (A) I'll have the memory freed up for other processes and (B) the data won't be killed when I restart the computer.
ALTER TABLE memtable ENGINE = MyISAM
I've let this ALTER TABLE query run for over two days now, and it's not done. I've now killed it.
If I create the table initially as MyISAM, the write speed seems terribly poor (especially because the query relies on the ON DUPLICATE KEY UPDATE technique). I can't temporarily turn off the keys: if I did, the table would become over 1,000 times larger, and then I'd have to reprocess the keys and essentially run a GROUP BY on 150,000,000,000 rows. Umm, no.
One of the key constraints to realize: The INSERT query UPDATEs records if the primary key (a hash) exists in the table already.
At the very beginning of an attempt at strictly using MyISAM, I'm getting a rough speed of 1,250 rows per second. Once the index grows, I imagine this rate will tank even more.
I have 16GB of memory installed in the machine. What's the best way to generate a massive table that ultimately ends up as an on-disk, indexed MyISAM table?
Clarification: There are many, many UPDATEs going on from the query (INSERT ... ON DUPLICATE KEY UPDATE val=val+whatever). This isn't, by any means, a raw dump problem. My reasoning for trying a MEMORY table in the first place was for speeding-up all the index lookups and table-changes that occur for every INSERT.
If you intend to make it a MyISAM table, why are you creating it in memory in the first place? If it's only for speed, I think the conversion to a MyISAM table is going to negate any speed improvement you get by creating it in memory to start with.
You say inserting directly into an "on disk" table is too slow (though I'm not sure how you're deciding that when your current method is taking days). You may be able to turn off or remove the uniqueness constraints, use a DELETE query later to re-establish uniqueness, and then re-enable/re-add the constraints. I have used this technique when importing into an InnoDB table in the past, and found that even with the later delete it was much faster overall.
Another option might be to create a CSV file instead of the INSERT statements, and either load it into the table using LOAD DATA INFILE (I believe that is faster than the inserts, but I can't find a reference at present) or use it directly via the CSV storage engine, depending on your needs.
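A hedged sketch of the CSV route. The file path and the table/column names (staging_rows, final_table, hash_key, col_a, col_b, val - four integer columns with hash_key as the primary key) are placeholders, not from the question:

-- Bulk-load the CSV into an unconstrained staging table:
LOAD DATA INFILE '/tmp/mined_rows.csv'
INTO TABLE staging_rows
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(hash_key, col_a, col_b, val);

-- Fold staging into the target while keeping the ON DUPLICATE KEY semantics:
INSERT INTO final_table (hash_key, col_a, col_b, val)
SELECT hash_key, col_a, col_b, val FROM staging_rows
ON DUPLICATE KEY UPDATE val = val + VALUES(val);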
Sorry to keep throwing comments at you (last one, probably).
I just found this article, which provides an example of converting a large table from MyISAM to InnoDB. While this isn't what you are doing, the author uses an intermediate MEMORY table and describes going from memory to InnoDB in an efficient way: ordering the table in memory the way InnoDB expects it to be ordered in the end. If you aren't tied to MyISAM, it might be worth a look since you already have a "correct" memory table built.
I don't use MySQL; I use SQL Server, and this is the process I use to handle a file of similar size. First I dump the file into a staging table that has no constraints. Then I identify and delete the dups from the staging table. Then I search for existing records that might match and put the id field into a column in the staging table. Then I update where the id field column is not null and insert where it is null. One of the reasons I do all the work of getting rid of the dups in the staging table is that it means less impact on the prod table when I run it, and thus it is faster in the end. My whole process runs in less than an hour (and actually does much more than I describe, as I also have to denormalize and clean the data) and affects production tables for less than 15 minutes of that time. I don't have to worry about adjusting any constraints or dropping indexes or any of that since I do most of my processing before I hit the prod table.
Consider whether a similar process might work better for you. Also, could you use some sort of bulk import to get the raw data into the staging table (I pull the 22-gig file I have into staging in around 16 minutes) instead of working row by row?
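The same staging workflow sketched in MySQL terms, reusing the placeholder names from the LOAD DATA example above (staging_rows, final_table, hash_key, val); only the key and the summed value column are shown, so carry any other columns through the same way:

-- 1. Collapse duplicates inside the staging table first:
CREATE TABLE staging_dedup AS
  SELECT hash_key, SUM(val) AS val FROM staging_rows GROUP BY hash_key;

-- 2. Update the rows that already exist in the production table:
UPDATE final_table f
JOIN staging_dedup s ON s.hash_key = f.hash_key
SET f.val = f.val + s.val;

-- 3. Insert the rows that do not exist yet:
INSERT INTO final_table (hash_key, val)
SELECT s.hash_key, s.val
FROM staging_dedup s
LEFT JOIN final_table f ON f.hash_key = s.hash_key
WHERE f.hash_key IS NULL;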