Table lock issue while using sequelize - mysql

I am facing table lock issues when fetching rows in a parent/child setup.
The scenario is below:
TBL_COMMON 1--------------1 TABLE-1
TBL_COMMON 1--------------1 TABLE-2
While inserting a record into TABLE-1, as a validation step we first check that a matching entry exists in TBL_COMMON, so one SELECT is performed.
If it succeeds, the INSERT is performed.
Everything is executed in one method call.
Sometimes the TBL_COMMON table gets locked.
The issue does not occur every time, and I don't know the actual cause.
Can anybody help me out?

Are you using MyISAM for TBL_COMMON? If so, can you try to change that to InnoDB?
InnoDB has row-level locking.
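A quick way to check, and to convert if needed (a sketch, assuming TBL_COMMON is the table's actual name):

-- Check which storage engine TBL_COMMON currently uses
SHOW TABLE STATUS WHERE Name = 'TBL_COMMON';

-- Convert it to InnoDB to get row-level locking
ALTER TABLE TBL_COMMON ENGINE = InnoDB;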

Related

MySQL Insert with lock tables write

The task is to upload a price-list of sorts, so a quick question before I implement this.
If I want to INSERT, say, 1,000 rows at a time out of roughly 100,000 in total, the recommendation is:
http://dev.mysql.com/doc/refman/5.5/en/optimizing-myisam-bulk-data-loading.html
"If you do very many successive inserts, you could do a LOCK TABLES followed by an UNLOCK TABLES once in a while (each 1,000 rows or so) to permit other threads to access table. This would still result in a nice performance gain."
Obviously, while I have the "WRITE LOCK" on the table you can still read the table, right?
The reason I ask is that:
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
says:
Only the session that holds the lock can access the table. No other session can access it until the lock is released.
"Can access it"... my gosh, if that is the case them our entire system would freeze up... we simply cant have that... Is this in fact the case, or did they mean "...No other session can write to the table until the lock is released."?
Ultimately what I want to be able to do is INSERT 100,000 simple rows of data without impacting the system. I have used:
INSERT INTO a VALUES (1,0.00),(2,0.00), ... ..., (999,0.00)
But this often results in no rows added for some reason.
If you lock a table with LOCK TABLES ... WRITE, no other thread can read or write that table.
The best approach is to use multi-row INSERT statements:
INSERT INTO table (f1,f2,f3....) VALUES
(v1,v2,v3...),
(v1,v2,v3...),
...
You will probably need to split your rows into multiple statements with 1-10K rows each (maybe more, depending on your max_allowed_packet and other settings).
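As a sketch of the batching approach (price_list and its columns are made-up names here; tune the batch size to your own limits):

-- One batch per statement, a few thousand rows each
INSERT INTO price_list (item_id, price) VALUES
(1, 0.00),
(2, 0.00),
-- ... more rows ...
(1000, 0.00);

-- Check the packet limit your batched statements must stay under
SHOW VARIABLES LIKE 'max_allowed_packet';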
MyISAM locks the table if it inserts into empty space somewhere in the middle of the table. If you only do inserts (no deletions) on that table, you should be OK with multiple threads inserting and selecting.
If the table has frequent deletions/updates you should consider switching to InnoDB.
As for "no rows added for some reason": there is almost certainly an error somewhere in your statement or code. The statement will either insert the rows or return an error.

make InnoDB only lock the row currently being UPDATEd?

Building on the query in this answer (please note/assume that the GROUP_CONCATs are now also held in user defined variables), when and what will InnoDB lock on table1?
I'd prefer that it only lock the table1 row that it's currently updating and release it upon starting on the next row.
I'd also prefer that when it locks table2 (or its rows), SELECTs will at least be able to read it.
The column being updated is not PK or even indexed.
How can this be achieved, or is it already doing that?
This is in a TRIGGER.
Many thanks in advance!
The lock is held for the entire transaction (the operation is atomic: either all of the rows are updated or none are) and you can't change that without changing the storage engine. However, it does not block reads (unless you are in the SERIALIZABLE isolation level), so SELECT queries will still execute, but they will read the old values. Only SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE will be blocked by an update.
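To illustrate with two concurrent sessions (table1, id, and col stand in for the actual tables and columns in the linked question):

-- Session 1: the UPDATE's row locks are held until COMMIT
START TRANSACTION;
UPDATE table1 SET col = col + 1 WHERE id < 100;

-- Session 2, meanwhile:
SELECT col FROM table1 WHERE id = 5;            -- returns the old value (consistent read)
SELECT col FROM table1 WHERE id = 5 FOR UPDATE; -- blocks until session 1 commits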

mysql Delete and Database Relationships

If I'm trying to delete multiple rows from a table and one of those rows can't be deleted because of a database relationship, what will happen?
Will the rows that aren't constrained by a relationship still be deleted? Or will the entire delete fail?
In MySQL, if you set a foreign key constraint, a query will fail if you try to insert a row that references a non-existent ID, or try to delete a row whose ID is still referenced by another table.
In other words, your delete will fail.
If it's a single delete statement, then the entire delete will fail.
Without an enforced foreign key constraint, all the rows will delete just fine. However, you should then make sure that your program deletes the related rows itself, otherwise orphaned posts/records/whatever may ensue.
There is a more general question here:
If I run an SQL statement that affects multiple rows, and it encounters an error after modifying some of them, what happens?
The answer is essentially "none of them are affected, even the ones that had already succeeded".
What happens internally is rather complicated. InnoDB supports transaction savepoints, and the database creates an implicit savepoint at the beginning of the statement within the current transaction. If the statement fails part way through, a rollback happens back to the implicit savepoint. It then looks as if the statement never happened (except to sessions that insist on using the READ UNCOMMITTED isolation level, which they should not if they care about consistency).
This happens whether you're using explicit transactions or not. If you are using explicit transactions, the current transaction is not rolled back (except on certain types of error, such as deadlock and lock wait timeout, where it must be in order to break the deadlock); instead, it only rolls back as far as the beginning of the statement.
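A minimal demonstration of this atomicity, using hypothetical parent/child tables:

CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child (
    id INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES parent(id)
) ENGINE=InnoDB;

INSERT INTO parent VALUES (1), (2), (3);
INSERT INTO child VALUES (10, 2);

-- Row 2 is still referenced by child, so this statement fails with
-- error 1451, and none of the three rows are deleted (not even 1 and 3)
DELETE FROM parent WHERE id IN (1, 2, 3);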

how to avoid deadlock in mysql

I have the following query (all tables are InnoDB):
INSERT INTO busy_machines(machine)
SELECT machine FROM all_machines
WHERE machine NOT IN (SELECT machine FROM busy_machines)
and machine_name!='Main'
LIMIT 1
It causes a deadlock when I run it in several threads, obviously because of the inner SELECT, right?
The error I get is:
(1213, 'Deadlock found when trying to get lock; try restarting transaction')
How can I avoid the deadlock? Is there a way to change to query to make it work, or do I need to do something else?
The error doesn't happen always, of course, only after running this query lots of times and in several threads.
To my understanding, a SELECT does not acquire locks and should not be the cause of the deadlock.
Each time you insert, update, or delete a row, a lock is acquired. To avoid deadlock, you must make sure that concurrent transactions don't update rows in an order that could conflict. Generally speaking, to avoid deadlock you must always acquire locks in the same order, even across different transactions (e.g. always table A first, then table B).
But if within one transaction you insert into only one table, this condition is met, and it should not normally lead to a deadlock. Are you doing something else in the transaction?
A deadlock can, however, happen if there are missing indexes. When a row is inserted, updated, or deleted, the database needs to check the relational constraints, that is, make sure the relations are consistent. To do so, it checks the foreign keys in the related tables, which may result in locks being acquired on rows other than the one being modified. Be sure, then, to always have indexes on foreign keys (and of course primary keys), otherwise a table lock can be taken instead of a row lock. If table locks happen, lock contention is higher and the likelihood of deadlock increases.
Not sure what happens exactly in your case, but maybe this helps.
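For example, if busy_machines.machine were a foreign key referencing all_machines (hypothetical here, for illustration), you would want the column indexed:

-- Hypothetical: index the foreign key column so constraint checks
-- take row locks rather than escalating to wider locks
ALTER TABLE busy_machines ADD INDEX idx_machine (machine);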
You will probably get better performance if you replace your "NOT IN" with an outer join.
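For instance, a sketch of the single-statement anti-join version, keeping the question's table names:

INSERT INTO busy_machines(machine)
SELECT a.machine
FROM all_machines a
LEFT JOIN busy_machines b ON b.machine = a.machine
WHERE a.machine_name != 'Main'
  AND b.machine IS NULL
LIMIT 1;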
You can also separate this into two queries to avoid inserting and selecting the same table in a single query.
Something like this:
SELECT a.machine
INTO @machine
FROM all_machines a
LEFT OUTER JOIN busy_machines b ON b.machine = a.machine
WHERE a.machine_name != 'Main'
  AND b.machine IS NULL
LIMIT 1;

INSERT INTO busy_machines(machine)
VALUES (@machine);

Deleting rows cause lock timeout

I keep getting these errors when trying to delete rows from a table. The special case here is that I may be running 5 processes at the same time.
The table itself is an InnoDB table with ~4.5 million rows. I do not have an index on the column used in my WHERE clause; the other indexes work as expected.
It's all done within a transaction: first I delete records, then I insert the replacement records, and only if all records are inserted is the transaction committed.
Error message:
Query error: Lock wait timeout exceeded; try restarting transaction while executing DELETE FROM tablename WHERE column=value
Would it help to create an index on the referenced column here? Should I explicitly lock the rows?
I have found some additional information in question #64653 but I don't think it covers my situation fully.
Is it certain that the DELETE statement is causing the error, or could it be one of the other statements in the transaction? The DELETE statement comes first, so it seems logical, but I'm not sure.
An index would definitely help. If you are trying to replace deleted records, I would also recommend modifying your query to use INSERT ... ON DUPLICATE KEY UPDATE instead of a DELETE followed by an INSERT, if possible (this assumes the column you match on is covered by a primary or unique key):
INSERT INTO tableName (column, column2)
VALUES (value, 'value2')
ON DUPLICATE KEY UPDATE
    column2 = 'value2';
An index definitely helps. I once worked on a DB containing user data. There was sometimes a problem with the web front end and user deletion. During the day it worked fine (although it took quite a while), but in the late afternoon it sometimes timed out, because the DB server was under more load due to end-of-day processing.
Whacked an index on the affected column and everything ran smoothly from there on.
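For reference, adding the index is a single statement (using the placeholder names from the error message above; the backticks are needed because COLUMN is a reserved word):

CREATE INDEX idx_column ON tablename (`column`);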