I have a table, proxy, where proxies are stored. The application is multithreaded, and each thread takes a proxy from this table. I need the proxies to be used with roughly equal frequency. There is a field last_usage containing a timestamp with microseconds.
To achieve this I currently lock the table, select the proxy with the oldest last_usage, update last_usage for the selected proxy, and unlock the table.
The table engine is InnoDB.
My other idea is to use the following solution:
SET @uids := null;
UPDATE footable
SET foo = 'bar'
WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) );
SELECT @uids;
I think it should have the same effect, because MySQL should lock the row (or the table) while the UPDATE is executing, so other threads should not be able to select that row.
May I use the second solution for my goal? Which way is better, or can you suggest a better one?
The clean way would be to use two queries in a single transaction:
start transaction;
select foo_id into @foo_id
from foo_table
order by last_usage asc
limit 1
for update;
update foo_table
set last_usage = now()
where foo_id = @foo_id;
commit;
FOR UPDATE is used to lock the selected row until the transaction is committed.
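To see the effect of that lock, imagine a hypothetical second session running the same locking read at the same time; it simply waits:
-- session 2, running concurrently with the transaction above:
start transaction;
select foo_id into @foo_id
from foo_table
order by last_usage asc
limit 1
for update;   -- blocks here until session 1 commits, so both sessions never grab the same row at once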
There is an InnoDB locking mechanism explained here
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking.html
which can be used in your case. In your case, use
SELECT * FROM footable WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) ) FOR UPDATE;
and then, in the same session, update to release the lock:
UPDATE footable
SET foo = 'bar'
WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) );
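Note that the FOR UPDATE lock only lasts until the end of the transaction, so with autocommit enabled it is released as soon as the SELECT finishes. A sketch of the same two statements wrapped in an explicit transaction, so the lock is held until the UPDATE completes:
START TRANSACTION;
SELECT * FROM footable WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) ) FOR UPDATE;
UPDATE footable
SET foo = 'bar'
WHERE fooid > 5
AND ( SELECT @uids := CONCAT_WS(',', fooid, @uids) );
COMMIT;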
I have tables like:
level
id: primary key
order: integer
plan
id: primary key
level: FOREIGN KEY (level)
limit: integer
request:
id: primary key
plan_id: FOREIGN KEY (plan)
When I have a request, I save it to the request table. After that, I count all requests and compare the count with limit in table plan. If they are equal, I insert a row into table plan with the level_id of the level whose order is order + 1 in table level; otherwise I do nothing. I currently implement this with multiple single queries, but now I want to optimize it into a single query. Is this possible? Thanks in advance.
first:
INSERT INTO request(plan_id) SELECT id FROM PLAN WHERE ...
next:
A = SELECT COUNT(request.id) FROM request
WHERE request.plan_id = ...
B = SELECT limit FROM plan
WHERE ...
IF A = B (I am using PHP to compare)
INSERT INTO plan (level_id, order) SELECT id, order FROM level WHERE ..
else
nothing to do
I don't really understand the logic of what you're doing, but to answer your question, you can move what you currently have as the queries for A and B into subqueries inside your INSERT. So something like:
INSERT INTO plan (level_id, `order`)
SELECT id, `order`
FROM level
WHERE {existing where logic here}
AND (
SELECT COUNT(request.id) FROM request
WHERE request.plan_id = ...
) = (
SELECT `limit` FROM plan
WHERE ...
)
If the subqueries aren't equal to each other, the insert just won't do anything.
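One way to see from the client whether the conditional INSERT actually did anything is to check the affected row count right afterwards on the same connection, for example:
SELECT ROW_COUNT();   -- 1 if the counts matched and a row was inserted, 0 otherwise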
I think this logic would be better handled by writing a trigger that watches insertions on table request, like this:
CREATE TRIGGER level_trig AFTER INSERT ON request FOR EACH ROW
BEGIN
  # NEW here represents the newly inserted row in request
  SET @A := (SELECT COUNT(request.id) FROM request WHERE request.plan_id = NEW.plan_id);
  SET @B := (SELECT `limit` FROM plan WHERE plan.id = NEW.plan_id);
  IF @A = @B THEN
    # whatever your insert query was. I'm not very clear about that.
    INSERT INTO plan (level_id, `order`) SELECT id, `order` FROM level WHERE ..
  END IF;
END;
If we don't have a transaction block (SQL Server 2008)
BEGIN TRAN
END TRAN
but just a DELETE, UPDATE, INSERT or INSERT ... SELECT,
is it possible to get a deadlock? If so, can you give me an example?
Yes, a deadlock can occur between 2 different sessions even without an explicit transaction. The example script below generates a deadlock on my test box.
--prep script
CREATE TABLE dbo.Example(
Col1 int NOT NULL CONSTRAINT PK_Example PRIMARY KEY
, Col2 int NOT NULL
, Col3 int NOT NULL
, Col4 char(2000) NULL
);
GO
WITH
t10 AS (SELECT n FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) t(n))
,t1k AS (SELECT 0 AS n FROM t10 AS a CROSS JOIN t10 AS b CROSS JOIN t10 AS c)
,t1m AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS num FROM t1k AS a CROSS JOIN t1k AS b CROSS JOIN t1k AS c)
INSERT INTO dbo.Example(Col1, Col2, Col3)
SELECT num, num % 100, num % 150
FROM t1m
WHERE num <= 1000000;
GO
CREATE INDEX idx_Col2 ON dbo.Example(Col2);
CREATE INDEX idx_Col3 ON dbo.Example(Col3);
GO
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
GO
--run this on session 1 after changing time to a near future value
WAITFOR TIME '12:00:00';
UPDATE dbo.Example SET Col3 = 1 WHERE Col2 = 1;
GO
--run this on session 2 after changing time to same time as session 1
WAITFOR TIME '12:00:00';
UPDATE dbo.Example SET Col2 = 2 WHERE Col3 = 1;
GO
I used a large number of rows in this script because it reliably reproduced a deadlock even on a fast machine. Remember that deadlocks are a matter of timing so I expect one could use fewer rows on a slow box to also reproduce a deadlock.
Even with a small table, efficient queries, and automatic single-statement transactions, deadlocks are possible (albeit unlikely) when queries access the same resource via different access paths. The queries in this example use different indexes so the different locking order can lead to a deadlock.
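Because each of these statements runs in its own implicit transaction, the usual mitigation is simply to catch the deadlock error (1205) and rerun the statement. A rough sketch, using the UPDATE from session 1 as a stand-in for whatever statement needs protecting:
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        UPDATE dbo.Example SET Col3 = 1 WHERE Col2 = 1;   -- the statement to protect
        BREAK;                                            -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1;                  -- chosen as deadlock victim: try again
        ELSE
        BEGIN
            PRINT ERROR_MESSAGE();                        -- some other error, or out of retries
            BREAK;
        END
    END CATCH
END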
I have a stored procedure which simulates a create-or-update.
Here is my algorithm:
SELECT Id INTO rId FROM MY_TABLE WHERE UNIQUE_FIELD = XXX LIMIT 1;
IF (rId is not null) THEN
UPDATE ELSE INSERT
The problem is that I get duplicates. How can I prevent these duplicates? I can't add a UNIQUE INDEX because some fields can be NULL.
Thank you.
EDIT: I'm using InnoDB. Can a row lock help me? Locking the whole table is not acceptable for performance reasons.
Use a transaction along with the FOR UPDATE clause in SELECT. This will just lock that one row, and should not block transactions that use other rows of the table.
START TRANSACTION;
SELECT id as rId FROM my_table
WHERE unique_field = XXX LIMIT 1
FOR UPDATE;
If (rId IS NOT NULL) THEN
UPDATE ...
ELSE
INSERT ...
END;
COMMIT;
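A minimal sketch of how that pseudocode might look as an actual MySQL stored procedure (the procedure name, parameters, and some_column are placeholders for whatever your table really stores):
DELIMITER //
CREATE PROCEDURE upsert_my_table(IN p_unique_field INT, IN p_value INT)
BEGIN
    DECLARE rId INT DEFAULT NULL;

    START TRANSACTION;

    -- lock the matching row (if any) for the rest of the transaction
    SELECT id INTO rId FROM my_table
    WHERE unique_field = p_unique_field LIMIT 1
    FOR UPDATE;

    IF rId IS NOT NULL THEN
        UPDATE my_table SET some_column = p_value WHERE id = rId;
    ELSE
        INSERT INTO my_table (unique_field, some_column) VALUES (p_unique_field, p_value);
    END IF;

    COMMIT;
END//
DELIMITER ;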
I have two tables: fooBarTable, holding a many-to-many relationship with columns fooId and barId, and an InnoDB table fooCounterTable with columns fooId and counter, which counts the occurrences of each fooId in fooBarTable.
When deleting all rows with a given barId from fooBarTable, I need to update fooCounterTable accordingly.
The first thing I tried was this:
UPDATE fooCounterTable SET counter = counter - 1
WHERE fooId IN (SELECT fooId FROM fooBarTable WHERE barId = 42 ORDER BY fooId);
But I got this error:
MySQL error (1205): Lock wait timeout exceeded; try restarting transaction
Updating the table when adding barId's is working fine with this SQL statement:
INSERT INTO `fooCounterTable` (fooId, counter) VALUES (42,1), (100,1), (123,1)
ON DUPLICATE KEY UPDATE counter = counter + 1;
So I thought I'd do the same thing when decreasing the counter, even though it looks stupid to insert 0 values, which should never actually happen:
INSERT INTO `fooCounterTable` (SELECT fooId, 0 FROM fooBarTable WHERE barId = 42 ORDER BY fooId)
ON DUPLICATE KEY UPDATE counter = counter - 1;
This seems to work fine in most cases, but sometimes I get a deadlock:
MySQL error (1213): Deadlock found when trying to get lock; try restarting transaction
So I read about deadlocks and found out about SELECT ... FOR UPDATE and I tried this:
START TRANSACTION;
SELECT fooId FROM fooCounterTable
WHERE fooId IN (SELECT fooId FROM fooBarTable WHERE barId = 42 ORDER BY fooId) FOR UPDATE;
UPDATE fooCounterTable SET counter = counter - 1
WHERE fooId IN (SELECT fooId FROM fooBarTable WHERE barId = 42 ORDER BY fooId);
COMMIT;
which resulted in:
MySQL error (2014): commands out of sync
Can anyone tell me how to resolve my problem?
Update
The last error (2014) occurred because I did not use and free the SELECT statement's results before executing the UPDATE statement, which is mandatory. I fixed that and got rid of error 2014, but I still get deadlocks (error 1205) from time to time and I don't understand why.
Do you know which fooId you have just deleted when firing this query?
If so, it seems like this would work:
UPDATE fooCounterTable SET counter =
(SELECT count(*) FROM fooBarTable WHERE fooId = 42)
WHERE fooId = 42;
I wonder if you really need that counter table, though. If your indexes are set up properly, there shouldn't be much of a speed penalty to a more normalized approach.
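For example (a sketch of the normalized approach, assuming an index on fooBarTable.fooId), the counter can simply be computed when it is needed:
SELECT COUNT(*) AS counter
FROM fooBarTable
WHERE fooId = 42;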
I have a table (ft_ttd) and want to sort it descending by num and insert rating numbers into the rating column.
Initial Table http://dl.dropbox.com/u/3922390/2.png
Something like that:
Result Table http://dl.dropbox.com/u/3922390/1.png
I've created a procedure.
CREATE PROCEDURE proc_ft_ttd_sort()
BEGIN
CREATE TEMPORARY TABLE ft_ttd_sort
(id int (2),
num int (3),
rating int (2) AUTO_INCREMENT PRIMARY KEY);
INSERT INTO ft_ttd_sort (id, num) SELECT id, num FROM ft_ttd ORDER BY num DESC;
TRUNCATE TABLE ft_ttd;
INSERT INTO ft_ttd SELECT * FROM ft_ttd_sort;
DROP TABLE ft_ttd_sort;
END;
When I call it - it works great.
CALL proc_ft_ttd_sort;
After that I created a trigger calling this procedure.
CREATE TRIGGER au_ft_ttd_fer AFTER UPDATE ON ft_ttd FOR EACH ROW
BEGIN
CALL proc_ft_ttd_sort();
END;
Now every time I update the ft_ttd table I get an error.
UPDATE ft_ttd SET num = 9 WHERE id = 3;
ERROR 1422 (HY000): Explicit or implicit commit is not allowed in stored function or trigger.
Any ideas how to make it work? Maybe this process can be optimized?
Thank you!
The CREATE TABLE statement causes an implicit commit, since it's DDL. Basically, the answer is that you can't create a table in a trigger.
http://dev.mysql.com/doc/refman/5.0/en/stored-program-restrictions.html
Triggers can't do it
DDL aside, your trigger-based approach has a few difficulties. First, you want to modify the very table that's been updated, and that's not permitted in MySQL 5.
Second, you really want a statement-level trigger rather than FOR EACH ROW — no need to re-rank the whole table for every affected row — but that's not supported in MySQL 5.
Dynamically compute "rating"
So ... is it enough to just compute rating dynamically, using the usual MySQL user-variable workaround for ROW_NUMBER()?
-- ALTER TABLE ft_ttd DROP COLUMN rating; -- if you like
SELECT id,
num,
@i := @i + 1 AS rating
FROM ft_ttd
CROSS JOIN (SELECT @i := 0 AS zero) d
ORDER BY num DESC;
Unfortunately, you cannot wrap that SELECT in a VIEW (since a view's "SELECT statement cannot refer to system or user variables"). However, you could hide that in a selectable stored procedure:
CREATE PROCEDURE sp_ranked_ft_ttd () BEGIN
SELECT id, num, @i := @i + 1 AS rating
FROM ft_ttd CROSS JOIN (SELECT @i := 0 AS zero) d
ORDER BY num DESC;
END
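Client code would then read the ranking through the procedure:
CALL sp_ranked_ft_ttd();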
Or UPDATE if you must
As a kluge, if you must store rating in the table rather than compute it, you can run this UPDATE as needed:
UPDATE ft_ttd
CROSS JOIN ( SELECT id, @i := @i + 1 AS new_rating
FROM ft_ttd
CROSS JOIN (SELECT @i := 0 AS zero) d
ORDER BY num DESC
) ranked
ON ft_ttd.id = ranked.id SET ft_ttd.rating = ranked.new_rating;
Now instruct your client code to ignore rows where rating IS NULL — those haven't been ranked yet. Better, create a VIEW that does that for you.
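Such a view could be as simple as this (the view name is just a suggestion):
CREATE VIEW ft_ttd_ranked AS
SELECT id, num, rating
FROM ft_ttd
WHERE rating IS NOT NULL;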
Kluging further, you can likely run that UPDATE regularly via CREATE EVENT.
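A rough sketch of such an event (the name and schedule are arbitrary, and the event scheduler must be enabled with SET GLOBAL event_scheduler = ON):
CREATE EVENT ev_rank_ft_ttd
ON SCHEDULE EVERY 1 MINUTE
DO
UPDATE ft_ttd
CROSS JOIN ( SELECT id, @i := @i + 1 AS new_rating
FROM ft_ttd
CROSS JOIN (SELECT @i := 0 AS zero) d
ORDER BY num DESC
) ranked
ON ft_ttd.id = ranked.id SET ft_ttd.rating = ranked.new_rating;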