I don't have real code, sorry, only a problem description.
I would like to understand the best way to solve this problem.
I have 3 queries:
The first one is a long transaction which performs an SQL INSERT statement into a table.
The second query COUNTs the number of rows in that table after the INSERT took place.
The third query UPDATEs one field of the previously inserted record with the count number retrieved by the second query.
So far so good. My 3 queries are executed correctly.
Now suppose these 3 queries are executed inside an API call. If multiple API calls arrive quickly and simultaneously, the second COUNT query retrieves a wrong value, and consequently the third UPDATE also writes a wrong value.
On top of that, I get deadlocks on the INSERT query because, while the INSERT is running, the SELECT COUNT from a second API call tries to read at the same time.
My question is what would be the best approach to solve this kind of problem.
I don't need code; I would just like to understand the best way to go.
Would I need to lock all the tables, for example?
It is unclear what you are doing, but this might be faster:
CREATE TEMPORARY TABLE t ...;      -- all columns except the count
INSERT INTO t ...;                 -- the incoming data
SELECT COUNT(*) INTO @ct FROM t;
INSERT INTO real_table
    (...)                          -- including the count column last
    SELECT ..., @ct FROM t;        -- note how the count is tacked on last
I have a db table with 30 million rows and, even with indexes created, I have issues with query times when doing even simple counts, sums and averages.
I believe that using a trigger would solve this problem, but I wanted to ask the community.
Here is what I am working on but having trouble with:
delimiter //
create trigger after_products_update
after insert on product_stats
for each row begin
select distinct product_cat, count(distinct product), sum(revenue), avg(cost) from products_table;
END; //
delimiter ;
I guess a good closing question would be: am I on the right track here?
Thanks in advance for any help here!
I believe that using a trigger would solve this problem
No, it isn't. In essence, what you are trying to do is run a SELECT over 30M rows after every insert, which shouldn't be done. Generally, triggers are for performing a DML operation on some other table in response to a DML operation on your table.
You should run your SELECT query for verification once all the INSERT statements have been performed.
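For illustration only, here is a rough sketch of the kind of trigger that does fit that pattern: instead of re-scanning 30M rows, it keeps a small pre-aggregated table current on every insert. The table product_cat_stats and its columns are made up and assumed to have product_cat as the primary key.
delimiter //
create trigger products_after_insert
after insert on products_table
for each row
begin
  -- add the new row's numbers to the running totals
  -- in a hypothetical summary table keyed by product_cat
  insert into product_cat_stats (product_cat, product_count, revenue_sum)
  values (NEW.product_cat, 1, NEW.revenue)
  on duplicate key update
    product_count = product_count + 1,
    revenue_sum   = revenue_sum + NEW.revenue;
end//
delimiter ;
Note that averages and distinct counts can't be maintained this simply; for those you would still have to recompute periodically.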
I would like to query a database table for some of its oldest entries and update them with a second query afterwards.
But how can I prevent another process (doing the same thing) from returning the same rows in its SELECT, so that the UPDATE part does not modify the entries twice?
As far as I can see, a simple transaction cannot prevent this from happening.
Use the SELECT ... FOR UPDATE mechanism to do this (see http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html)
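A minimal sketch of that pattern, assuming an InnoDB table named jobs with created_at and processed columns (all names are illustrative, not from the question):
START TRANSACTION;

-- Lock the oldest unprocessed rows; a second worker running the same
-- query blocks here instead of grabbing the same rows.
SELECT id FROM jobs
 WHERE processed = 0
 ORDER BY created_at
 LIMIT 10
   FOR UPDATE;

-- Mark the rows returned above as taken, then release the locks.
UPDATE jobs SET processed = 1 WHERE id IN (...);

COMMIT;
When the first transaction commits, the waiting worker's locking read sees the latest committed data, so the already-marked rows no longer match processed = 0 and it picks up the next batch instead.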
I need a little help with SELECT ... FOR UPDATE (or LOCK IN SHARE MODE).
I have a table with around 400 000 records and I need to run two different processing functions on each row.
The table structure is approximately this:
data (
`id`,
`mtime`, -- When was data1 set last
`data1`,
`data2` DEFAULT NULL,
`priority1`,
`priority2`,
PRIMARY KEY `id`,
INDEX (`mtime`),
FOREIGN KEY ON `data2`
)
Functions are a little different:
first function - has to run in a loop over all records (it is pretty fast), should select records based on priority1; sets data1 and mtime
second function - has to run only once on each record (it is pretty slow), should select records based on priority2; sets data2 and mtime
They shouldn't modify the same row at the same time, but the selects may return the same row in both (priority1 and priority2 have different values), and it's okay for a transaction to wait in that case (I'd expect that to be the only case where it blocks).
I'm selecting data based on following queries:
-- For the first function - not processed first, then the oldest,
-- the same age goes based on priority
SELECT id FROM data ORDER BY mtime IS NULL DESC, mtime, priority1 LIMIT 250 FOR UPDATE;
-- For the second function - only not-yet-processed rows, ordered by priority
SELECT id FROM data WHERE data2 IS NULL ORDER BY priority2 LIMIT 50 FOR UPDATE;
But what I am experiencing is that only one query returns at a time.
So my questions are:
Is it possible to acquire two separate locks in two separate transactions on separate bunch of rows (in the same table)?
Do I have that many collisions between the first and second query (I have trouble debugging this; any hint on how to debug SELECT ... FROM (SELECT ...) WHERE ... IN (SELECT) would be appreciated)?
Can ORDER BY ... LIMIT ... cause any issues?
Can indexes and keys cause any issues?
Key things to check for before getting much further:
Ensure the table engine is InnoDB, otherwise "for update" isn't going to lock the row, as there will be no transactions.
Make sure you're using the "for update" feature correctly. If you select something for update, it's locked to that transaction. While other transactions may be able to read the row, it can't be selected for update, updated or deleted by any other transaction until the lock is released by the original locking transaction.
To keep things clean, try explicitly starting a transaction using "START TRANSACTION", run your select "for update", do whatever you're going to do to the records that are returned, and finish up by explicitly executing a "COMMIT" to close out the transaction.
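In SQL terms, using the question's data table and an illustrative id value, that flow looks roughly like:
START TRANSACTION;
SELECT * FROM data WHERE id = 42 FOR UPDATE;   -- row 42 is now locked by this transaction
-- ... do whatever processing is needed ...
UPDATE data SET data1 = 'done', mtime = NOW() WHERE id = 42;
COMMIT;                                        -- releases the lock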
Order and limit will have no impact on the issue you're experiencing as far as I can tell; whatever was going to be returned by the SELECT will be the rows that get locked.
To answer your questions:
Is it possible to acquire two separate locks in two separate transactions on separate bunch of rows (in the same table)?
Yes, but not on the same rows. Locks can only exist at the row level in one transaction at a time.
Do I have that many collisions between the first and second query (I have trouble debugging this; any hint on how to debug SELECT ... FROM (SELECT ...) WHERE ... IN (SELECT) would be appreciated)?
There could be a short period while the row locks are being acquired, which will delay the second query; however, unless you're running many hundreds of these SELECT ... FOR UPDATE statements at once, it shouldn't cause any significant or noticeable delays.
Can ORDER BY ... LIMIT ... cause any issues?
Not in my experience. They should work just as they always would on a normal select statement.
Can indexes and keys cause any issues?
Indexes should exist as always to ensure sufficient performance, but they shouldn't cause any issues with obtaining a lock.
All points in the accepted answer seem fine except these two:
"whatever was going to be returned by the SELECT will be the rows that get locked." and
"Can indexes and keys cause any issues?
but they shouldn't cause any issues with obtaining a lock."
In fact, all the rows the database internally reads while deciding which rows to select and return will be locked. For example, the query below will lock all rows of the table but might select and return only a few:
select * from table where non_primary_non_indexed_column = ? for update
Since there is no index, the DB has to read the entire table to search for your desired rows and hence locks the entire table.
If you want to lock only one row, you need to specify either its primary key or an indexed column in the WHERE clause. Thus indexing becomes very important for locking only the appropriate rows.
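For example, if id is the primary key, the same kind of locking read touches only the one matching row:
select * from table where id = ? for update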
This is a good reference - https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
I have a table that stores the summed values of a large table. I'm not calculating them on the fly as I need them frequently.
What is the best way to update these values?
I could delete the relevant rows from the table, do a full GROUP BY sum on all the relevant rows and then insert the new data.
Or I could index a timestamp column on the main table, and then only sum the latest values and add them to the existing data. This is more complicated because some sums won't exist yet, so both an insert and an update query would need to run.
I realize that the answer depends on the particulars of the data, but what I want to know is if it is ever worth doing the second method; if there are millions of rows being summed in the first example and only tens in the second, would the second be significantly faster to execute?
You can try triggers on update/delete. There you check the inserted or deleted value and adjust the sum in the second table accordingly.
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
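As a rough sketch of that idea, assuming a made-up detail table `detail` (group_id, value) and a summary table `summary` (group_id, total) keyed by group_id, a delete trigger could subtract the removed value:
create trigger detail_after_delete
after delete on detail
for each row
  -- subtract the removed value from that group's running total
  update summary set total = total - OLD.value where group_id = OLD.group_id;
An AFTER INSERT twin that adds NEW.value, and an AFTER UPDATE one that applies the difference, would complete the picture.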
For me there are several ways:
Make a view, which would always be up-to-date (I don't know if you can do materialized views in MySQL)
Make a table that is kept up-to-date using a trigger (on update/delete/insert, for example) or using a batch run at night (so the data will be 1 day old); see the sketch after this list
Make a stored procedure that retrieves and computes only the data needed.
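For the nightly-batch option, one possible sketch uses the MySQL event scheduler (it must be enabled, and the table names are the same made-up detail/summary ones as in the trigger sketch above):
CREATE EVENT refresh_summary
ON SCHEDULE EVERY 1 DAY
DO
  -- rebuild the pre-aggregated rows once a day
  REPLACE INTO summary (group_id, total)
  SELECT group_id, SUM(value) FROM detail GROUP BY group_id;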
I would do something like this (INSERT ... ON DUPLICATE KEY UPDATE):
mysql_query("
    INSERT INTO sum_table (col1, col2)
    SELECT id, SUM(value)
    FROM `table`
    GROUP BY id
    ON DUPLICATE KEY UPDATE col2 = VALUES(col2)
");
Please let me know if you need more examples.
If an INSERT and a SELECT are done simultaneously on a MySQL table, which one will go first?
Example: suppose the "users" table row count is 0.
Then these two queries are run at the same time (assume it's the same milli/microsecond):
INSERT into users (id) values (1)
and
SELECT COUNT(*) from users
Will the last query return 0 or 1?
It depends on whether your users table is MyISAM or InnoDB.
If it's MyISAM, one statement or the other takes a lock on the table, and there's little you can do to control that, short of locking tables yourself.
If it's InnoDB, it's transaction-based. The multi-versioning architecture allows concurrent access to the table, and the SELECT will see the count of rows as of the instant its transaction started. If there's an INSERT going on simultaneously, the SELECT will see 0 rows. In fact you could even see 0 rows by a SELECT executed some seconds later, if the transaction for the INSERT has not committed yet.
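A rough two-session illustration of that consistent-read behaviour under InnoDB's default REPEATABLE READ level (the session labels are just for reading):
-- session A
START TRANSACTION;
SELECT COUNT(*) FROM users;   -- returns 0; the read snapshot is established here

-- session B
INSERT INTO users (id) VALUES (1);
COMMIT;

-- session A, again
SELECT COUNT(*) FROM users;   -- still returns 0, same snapshot
COMMIT;
SELECT COUNT(*) FROM users;   -- now returns 1, new snapshot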
There's no way for the two transactions to start truly simultaneously. Transactions are guaranteed to have some order.
It depends on which statement is executed first. If the INSERT runs first, the SELECT will return 1; if the SELECT runs first, it will return 0. Even if you execute them on a machine with multiple physical cores, due to the locking mechanism they will never execute at exactly the same time stamp.