MySQL version 5.7
Engine : InnoDB
I have a table called "md_waiting_slot_count" and it has the following columns:
cell | popupDate | userId | creationTime
Now I have the following query:
insert into md_waiting_slot_count
select cell, '2017-08-31' as pd, 'abnc' as ui, '2017-08-26 15:55:51'
from (
    select sum(slotcount) as tt, cell
    from (
        select 0 as slotcount, cell_str as cell, 'master'
        from cell
        where cell_str in ("Gujarat_Jamnagar_Jamnagar_00-18_Male", "Gujarat_Jamnagar_Jamnagar_19-22_Male")
        union all
        select slotcount, cell, wting
        from (
            select count(*) as slotcount, cell as cell, 'waiting' as wting
            from md_waiting_slot_count
            where cell in (
                select cell_str as cell
                from cell
                where cell_str in ("Gujarat_Jamnagar_Jamnagar_00-18_Male", "Gujarat_Jamnagar_Jamnagar_19-22_Male")
            )
            and popupDate = '2017-08-31'
            and creationTime > DATE_SUB(NOW(), INTERVAL 20 MINUTE)
            group by cell
        ) as t1
        union all
        select filledslotcount as slotcount, id as cell, 'final'
        from md_slot_count
        where id in (
            select cell_str as cell
            from cell
            where cell_str in ("Gujarat_Jamnagar_Jamnagar_00-18_Male", "Gujarat_Jamnagar_Jamnagar_19-22_Male")
        )
        and popupSlotDate = '2017-08-31'
    ) t
    group by cell
    having tt < 4
) as ft
order by cell, pd, ui
on duplicate key update creationTime = '2017-08-26 15:55:51';
Two other tables are also used here; their columns are as follows:
md_slot_count: id | popupDate | state | district | taluka | ageGroup | gender | filledSlotCount
cell: cell_str | state | district | taluka | ageGroup | gender
This INSERT ... SELECT statement causes a deadlock after 3-4 successful runs.
Help me with this.
How do I see the "last deadlock log" in MySQL?
I want to do something like this
Transaction 1 --> evaluate above query --> insert row
Transaction 2 --> evaluate above query --> insert row
When the second transaction evaluates the query, it has to take into account the data inserted by the previous transaction. I want to allow at most 4 transactions to insert a row, no more than that. So a row should be inserted only if the evaluated query allows the insert.
With parallel requests, if query evaluation and insertion are two separate steps and the data from previous transactions is not considered, then more than 4 transactions can come in and insert data.
So the ultimate goal is:
If one transaction begins, reads the data, and fulfils the condition, it inserts the data, and meanwhile no one else may insert; once the first transaction completes, the second transaction must consider only the updated data. So it is all-or-nothing for each transaction, and other transactions have to wait. I cannot achieve this with concurrent requests, because they all read together, see the old data, and so all of them are able to add rows to the table.
That is why I put the whole thing into one single query.
You want to insert into md_waiting_slot_count data that is calculated from md_waiting_slot_count itself, so a deadlock is unavoidable. Try creating a temporary table containing your values and then inserting the values from that temporary table.
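To see the most recent deadlock, look at the LATEST DETECTED DEADLOCK section in the output of SHOW ENGINE INNODB STATUS. For the temporary-table approach, a minimal sketch could look like the following; the temporary table name is made up, the big derived table is elided for brevity, and the column order assumes the list given in the question:
-- inspect the most recently detected deadlock
SHOW ENGINE INNODB STATUS;
-- 1) materialise the candidate rows outside the target table
CREATE TEMPORARY TABLE tmp_waiting_slot AS
SELECT cell, '2017-08-31' AS pd, 'abnc' AS ui, '2017-08-26 15:55:51' AS ct
FROM ( /* same derived table with GROUP BY cell HAVING tt < 4 as above */ ) AS ft;
-- 2) copy from the temporary table into the real table
INSERT INTO md_waiting_slot_count (cell, popupDate, userId, creationTime)
SELECT cell, pd, ui, ct FROM tmp_waiting_slot
ON DUPLICATE KEY UPDATE creationTime = '2017-08-26 15:55:51';
DROP TEMPORARY TABLE tmp_waiting_slot;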
I had a query like this:
CREATE TRIGGER `tambah_riwayatobat` AFTER INSERT ON `obat`
FOR EACH ROW insert into riwayat_obat(nama, keterangan, distributor,tanggal)
(select new.nama, 'Masuk', d.nama ,now()
From distributor d
join obat ON new.id_distributor = d.id_distributor)
I try to insert data with a trigger, and part of the data is fetched with a constraint, but why do I get duplicate entries?
Output:
For example, if I insert data into obat the 1st time, 1 row is inserted into riwayat_obat too.
If I insert data into obat a 2nd time, 2 rows with the same data are inserted into riwayat_obat.
If I insert data into obat a 3rd time, 3 rows with the same data are inserted into riwayat_obat.
I'm not certain exactly what's happening, but it's a result of the join in your trigger code. You’re joining obat to distributor, but your join condition makes no mention of obat so you're getting some sort of cross-product where on the second and subsequent INSERT your SELECT subquery is selecting more than one row.
You shouldn't (and don't need to) use the join, since all the data you need from obat is already in the pseudorecord NEW. The following code should work much better:
CREATE TRIGGER `tambah_riwayatobat`
AFTER INSERT ON `obat`
FOR EACH ROW
INSERT INTO riwayat_obat
(nama, keterangan, distributor, tanggal)
(SELECT NEW.nama, 'Masuk', d.nama, now()
FROM distributor d
WHERE new.id_distributor = d.id_distributor
LIMIT 1);
The LIMIT clause will ensure that the SELECT selects only one row, so the INSERT inserts only one row; if distributor.id_distributor is a primary key the LIMIT clause is unnecessary.
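A quick way to sanity-check the fixed trigger, assuming obat has at least the columns nama and id_distributor and that a distributor with id 1 exists (the values below are made up):
INSERT INTO obat (nama, id_distributor) VALUES ('Paracetamol', 1);
-- each insert into obat should now add exactly one row to riwayat_obat,
-- no matter how many rows obat already contains
SELECT COUNT(*) FROM riwayat_obat WHERE nama = 'Paracetamol';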
Consider two tables that have timestamp and data columns. I need to construct an SQL statement that does the following:
Insert data (unique timestamp and data column) in one table if timestamp value is not present in the table ("insert my data in table 1 for timestamp="12:00 1999-01-01" only if that timestamp is not present in table 1...)
Otherwise, insert very same data in different table without any checks, and overwrite if necessary (... otherwise insert same set of fields in table 2).
How could I possibly achieve this in SQL? I could do it using a client, but that is way slower. I use MySQL.
Run the query for your 2nd bullet first, i.e. insert data into table 2 if the timestamp is already present in table 1:
insert into table2 (data, timestamp)
select 'myData', '12:00 1999-01-01'
from table1
where exists (
select 1 from table1
where timestamp = '12:00 1999-01-01'
)
limit 1
Then run the query for your 1st bullet, i.e. insert into table1 only if the data doesn't already exist:
insert into table1 (data, timestamp)
select 'myData', '12:00 1999-01-01'
from table1
where not exists (
select 1 from table1
where timestamp = '12:00 1999-01-01'
)
limit 1
Running both these queries will always insert only 1 row into 1 table, because if the row exists in table1, the not exists condition of the 2nd query will be false, and if it doesn't exist in table1, then the exists condition of the 1st query will be false.
You may want to consider creating a unique constraint on table1 to automatically prevent duplicates so you can use insert ignore for your inserts into table1
alter table table1 add constraint myIndex unique (timestamp);
insert ignore into table1 (data,timestamp) values ('myData','12:00 1999-01-01');
A regular INSERT statement can insert records into one table only. You have 2 options:
Code the logic within the application
Create a stored procedure within mysql and code the application logic there
No matter which route you choose, I would
Add a unique index on the timestamp column in both tables.
Attempt to insert the data into the 1st table. If the insert succeeds, everything is OK. If the timestamp exists, you will get an error (or a warning, depending on the MySQL configuration). Your code handles the error (in MySQL, see DECLARE ... HANDLER ...).
Insert the data into the 2nd table using an INSERT INTO ... ON DUPLICATE KEY UPDATE ... statement, which will insert the data if the timestamp does not exist, or update the record if it does.
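A minimal sketch of such a procedure, assuming a unique index on timestamp in both tables; the procedure name, parameter names, and column types are made up:
DELIMITER $$
CREATE PROCEDURE insert_routed(IN p_data VARCHAR(255), IN p_ts DATETIME)
BEGIN
    -- if the insert into table1 hits the unique timestamp index (error 1062),
    -- this handler writes the same data to table2 instead
    DECLARE CONTINUE HANDLER FOR 1062
        INSERT INTO table2 (data, timestamp) VALUES (p_data, p_ts)
        ON DUPLICATE KEY UPDATE data = p_data;
    INSERT INTO table1 (data, timestamp) VALUES (p_data, p_ts);
END$$
DELIMITER ;
-- usage:
CALL insert_routed('myData', '1999-01-01 12:00:00');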
What I'm trying to do is write a stored procedure that will query a view, process each row, and make one or more inserts into a table for each row pulled from the view. Everything seems fine, except that, arbitrarily, mid-way through the process, the server seems to hang on the insert command. I have no idea if there's some memory limit on cursor result sets, or what could be happening. Relevant parts of the SP and a few clarifying comments are posted below.
CREATE PROCEDURE `Cache_Network_Observations` ()
BEGIN
-- Declare all variables
/* This cursor is hitting the view which should be returning a number of rows on the scale of ~5M+ records
*/
DECLARE cursor1 CURSOR FOR
SELECT * FROM usanpn2.vw_Network_Observation;
CREATE TABLE Cached_Network_Observation_Temp (observation_id int, name varchar(100), id int);
OPEN cursor1;
load_loop: loop
FETCH cursor1 INTO observation_id, id1, name1, id2, name2, id3, name3, gid1, gname1, gid2, gname2, gid3, gname3;
IF id1 IS NOT NULL THEN
INSERT INTO usanpn2.Cached_Network_Observation_Temp values (observation_id, name1, id1);
END IF;
-- some additional logic here, essentially just the same as the above if statement
END LOOP;
CLOSE cursor1;
END
That being the SP, when I actually run it, everything goes off without a hitch until the process runs and runs and runs. Taking a look at the active query report, I am seeing this:
| 1076 | root | localhost | mydb | Query | 3253 | update | INSERT INTO usanpn2.Cached_Network_Observation values ( NAME_CONST('observation_id',2137912), NAME_ |
I'm not positive where the NAME_CONST function is coming from or what it has to do with anything. I've tried this multiple times; the observation_id variable / row in the view varies each time, so it doesn't seem to be anything tied to the record.
TIA!
I don't see a NOT FOUND handler for your fetch loop. There's no "exit" condition.
DECLARE done INT DEFAULT FALSE;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
Immediately following the fetch, test the done flag, and exit the loop when it's true.
IF done THEN
LEAVE load_loop;
END IF;
Without that, I think you have yourself a classic infinite loop.
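Putting those two pieces into the procedure, a sketch of the loop could look like this (MySQL requires cursor declarations before handler declarations; the column variables are assumed to be declared as in the original):
DECLARE done INT DEFAULT FALSE;
-- ... DECLARE observation_id, id1, name1, ... here ...
DECLARE cursor1 CURSOR FOR SELECT * FROM usanpn2.vw_Network_Observation;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cursor1;
load_loop: LOOP
    FETCH cursor1 INTO observation_id, id1, name1, id2, name2, id3, name3,
                       gid1, gname1, gid2, gname2, gid3, gname3;
    IF done THEN
        LEAVE load_loop;
    END IF;
    IF id1 IS NOT NULL THEN
        INSERT INTO usanpn2.Cached_Network_Observation_Temp VALUES (observation_id, name1, id1);
    END IF;
END LOOP;
CLOSE cursor1;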
The statement shown in the SHOW FULL PROCESSLIST output is inserting to a different table. (There's no _Temp at the end of the tablename.)
But why on earth do you need a cursor loop to process this row-by-agonizing-row?
If you need a table loaded, just load the flipping table, and be done with it.
Replace all of that "declare cursor", "open cursor", fetch loop, exit handler, individual insert statement nonsense with a single statement that does what you need done:
INSERT INTO Cached_Network_Observation_Temp (observation_id, `name`, id)
SELECT s.observation_id, s.name1 AS `name`, s.id1 AS id
FROM usanpn2.vw_Network_Observation s
WHERE s.id1 IS NOT NULL
That is going to be way more efficient. And it won't clog up the binary logs with a bloatload of unnecessary INSERT statements. (This also has me wanting to back up to a bigger picture, and understand why this table is even needed. This also has me wondering if vw_Network_Observation is a view, and whether the overhead of materializing a derived table is warranted. The predicate in that outer query isn't getting pushed down into the view definition. MySQL processes views much differently than other RDBMSs do.)
EDIT
If the next part of the procedure that is commented out is checking whether id2 is not null to conditionally insert id2,name2 to the _Temp table, that can be done in the same way.
Or, the multiple queries can be combined with UNION ALL operator.
INSERT INTO Cached_Network_Observation_Temp (observation_id, `name`, id)
SELECT s1.observation_id, s1.name1 AS `name`, s1.id1 AS id
FROM usanpn2.vw_Network_Observation s1
WHERE s1.id1 IS NOT NULL
UNION ALL
SELECT s2.observation_id, s2.name2 AS `name`, s2.id2 AS id
FROM usanpn2.vw_Network_Observation s2
WHERE s2.id2 IS NOT NULL
... etc.
FOLLOWUP
If we need to generate multiple rows out of a single row, and the number of rows isn't unreasonably large, I'd be tempted to test something like this, processing id1, id2, id3 and id4 in one fell swoop, using a CROSS JOIN of the row source (s) and an artificially generated set of four rows.
That would generate four rows per row from the row source (s), and we can use conditional expressions to return id1, id2, etc.
As an example, something like this:
SELECT s.observation_id
, CASE n.i
WHEN 1 THEN s.id1
WHEN 2 THEN s.id2
WHEN 3 THEN s.id3
WHEN 4 THEN s.id4
END AS `id`
, CASE n.i
WHEN 1 THEN s.name1
WHEN 2 THEN s.name2
WHEN 3 THEN s.name3
WHEN 4 THEN s.name4
END AS `name`
FROM usanpn2.vw_Network_Observation s
CROSS
JOIN ( SELECT 1 AS i UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4) n
HAVING `id` IS NOT NULL
We use a predicate in the HAVING clause rather than the WHERE clause because the value generated for the id column in the resultset isn't available when the rows are accessed. The predicates in the HAVING clause are applied nearly last in the execution plan, after the rows are accessed, just before the rows are returned. (I think a "filesort" operation to satisfy an ORDER BY, and the LIMIT clause are applied after the HAVING.)
If the number of rows to be processed is "very large", then we may get better performance processing rows in several reasonably sized batches. If we do a batch size of two, processing two rows per INSERT, that effectively halves the number of INSERTs we need to run. With 4 rows per batch, we cut that in half again. Once we are up to a couple of dozen rows per batch, we've significantly reduced the number of individual INSERT statements we need to run.
As the batches get progressively larger, our performance gains become much smaller. Until the batches become unwieldy ("too large") and we start thrashing to disk. There's a performance "sweet spot" in there between the two extremes (processing one row at a time vs processing ALL of the rows in one batch).
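To make the batching idea concrete, here is a sketch of a multi-row INSERT; the values shown are placeholders, and in the cursor-based version they would come from the fetched variables, accumulated before each INSERT:
INSERT INTO Cached_Network_Observation_Temp (observation_id, `name`, id)
VALUES
    (1, 'name A', 11),
    (2, 'name B', 12),
    (3, 'name C', 13),
    (4, 'name D', 14);
-- one statement inserts four rows, so only a quarter as many INSERTs are needed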
I am currently using triggers on a table to copy the last row into a designated table based on a WHERE condition and ORDER BY. Using one trigger works fine and copies to the respective table, but both triggers are running, causing duplicates of a previous row to appear in the table I don't want them inserted into. (SQL 2008 Management Studio). I have specific tables to send this row to based on the part number. Here is the structure:
ALTER TRIGGER NewT3650 ON JD_Passdata
FOR INSERT
AS
INSERT T3_650_TestData (SerialNumber, Partnumber, etc)
SELECT TOP 1 SerialNumber, Partnumber, etc
FROM JD_Passdata
WHERE partnumber = 'T3_650'
ORDER BY passdata_ndx DESC
ALTER TRIGGER NewT4450 ON JD_Passdata
FOR INSERT
AS
INSERT T4_450_TestData (SerialNumber, Partnumber, etc)
SELECT TOP 1 SerialNumber, Partnumber, etc
FROM JD_Passdata
WHERE partnumber = 'T4_450'
ORDER BY passdata_ndx DESC
Original PassData Table:
201244999, T4_450
201245001, T3_650
201245002, T3_650
201245003, T3_650
Returns Results for table 1
201245001, T3_650
201245002, T3_650
201245003, T3_650
Returns Results for table 2
201244999, T4_450
201244999, T4_450
201244999, T4_450
I would like this to be an OR condition or a UNION that takes only the last row and enters it into the correct table, removing the additional trigger if possible. Otherwise a check for duplicates and an update may do it too. Also, the database is going to get quite large, so doing a DESC sort on every entry may get slow. A method to remove the ORDER BY would be a consideration as well.
Any suggestions would be greatly appreciated... THX
I have a solution that works for both triggers, but it may be causing a problem when a "NULL" result occurs. It does get rid of the ORDER BY and may be faster. What this does is use the unique last-row index number as the row location and check the part number to determine an insert into the new table. Also, when I add a third trigger using this same format for a different table, the initial move to the main table (JD_PassData) fails. Not sure why yet.
Here is the new code:
ALTER TRIGGER NewT3650 ON JD_Passdata
AFTER INSERT
AS
INSERT T3_650_TestData (SerialNumber, Partnumber, etc)
SELECT SerialNumber, Partnumber, etc
FROM JD_Passdata
WHERE passdata_ndx=(SELECT MAX(passdata_ndx) FROM JD_PassData) AND PartNumber = 'T3_650'
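A single trigger that reads from the inserted pseudo-table instead of re-querying JD_Passdata would avoid both the ORDER BY and the duplication, since it only ever sees the rows of the current insert. A sketch, assuming the column list is just SerialNumber and Partnumber (add the remaining columns as needed):
CREATE TRIGGER RoutePassData ON JD_Passdata
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO T3_650_TestData (SerialNumber, Partnumber)
    SELECT SerialNumber, Partnumber FROM inserted WHERE Partnumber = 'T3_650';
    INSERT INTO T4_450_TestData (SerialNumber, Partnumber)
    SELECT SerialNumber, Partnumber FROM inserted WHERE Partnumber = 'T4_450';
END;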
How can I store only 10 rows in a MySQL table? The older rows should be deleted when a new row is added but only once the table has 10 rows.
Please help me
You could achieve this with an AFTER INSERT trigger that deletes the row with the minimum date, e.g. DELETE FROM myTable WHERE myTimestamp = (SELECT MIN(myTimestamp) FROM myTable), but that could in theory delete multiple rows, depending on the granularity of your timestamps.
You could have an incrementing sequence, and always just delete the min of that sequence.
The question is why you'd want to do this though? It's a slightly unusual requirement.
A basic example (not validated/executed, I don't have MySQL on this particular machine) would look something like this:
CREATE TRIGGER CycleOldPasswords AFTER INSERT ON UserPasswords FOR EACH ROW
BEGIN
    DECLARE mycount INT;
    SELECT COUNT(*) INTO mycount FROM UserPasswords WHERE UserId = NEW.UserId;
    IF mycount >= 10 THEN
        -- note: MySQL does not let a trigger modify its own table, so in practice
        -- this DELETE would have to run outside the trigger
        DELETE FROM UserPasswords
        WHERE UserId = NEW.UserId
          AND `Timestamp` = (SELECT MIN(upa.`Timestamp`) FROM UserPasswords upa WHERE upa.UserId = NEW.UserId);
    END IF;
END;
You can retrieve the last inserted id when your first row is inserted and store it in a variable. Once 10 rows have been inserted, delete the rows having an id lower than the id of that first inserted record. Please try it.
First of all, insert all values using your insert query, and then run this query; it keeps the 10 newest rows and deletes the rest (you must have an id or time column to order by):
delete from table_name where id not in
    (select id from (select id from table_name order by id desc limit 10) as newest);