Deleting the data after copying it into another database - MySQL

I have a stored procedure; its contents are as follows:
-- --------------------------------------------------------------------------------
-- Routine DDL
-- Note: comments before and after the routine body will not be stored by the server
-- --------------------------------------------------------------------------------
DELIMITER $$
CREATE DEFINER=`MailMe`@`%` PROCEDURE `sp_archivev3`()
BEGIN
INSERT INTO
send.sgev3_archive(a_bi,
b_vc,
c_int,
d_int,
e_vc,
f_vc,
g_vc,
h_vc,
i_dt,
j_vc,
k_vc,
l_vc,
m_dt,
n_vch,
o_bit)
SELECT a_bi,
b_vc,
c_int,
d_int,
e_vc,
f_vc,
g_vc,
h_vc,
i_dt,
j_vc,
k_vc,
l_vc,
m_dt,
n_vch,
o_bit
FROM send.sgev3
WHERE m_dt BETWEEN '2014-06-09' AND CURDATE();
END$$
DELIMITER ;
Since my query inserts records into send.sgev3_archive from send.sgev3, I want to do one more thing: delete the records from send.sgev3 after they have been selected and inserted into send.sgev3_archive. Should I write the DELETE query right below the SELECT query in my code above? I just want to confirm, as I don't want to mess up my real data and accidentally delete any records before they have been copied. Please advise.

Yes, exactly. Include a DELETE statement such as:
DELETE FROM send.sgev3
WHERE m_dt BETWEEN '2014-06-09' AND CURDATE();
To be sure that the INSERT completes before the DELETE runs, wrap the INSERT and DELETE in a transaction block:
START TRANSACTION;
INSERT INTO send.sgev3_archive ...
SELECT ... FROM send.sgev3 ...;
DELETE FROM send.sgev3 ...;
COMMIT;
You can also handle error conditions in your procedure and ROLLBACK the entire transaction by using an exit handler in the stored procedure. The posts below already show a way to do this. Take a look:
How can I use transactions in my MySQL stored procedure?
MySQL Rollback in transaction
EDIT:
Why is a transaction necessary? Can't I just proceed the way I have mentioned in my question?
Instead of explaining why, let me show you an example (it closely resembles your scenario).
Let's say you have a table named parent declared as
create table parent(id int not null auto_increment primary key,
`name` varchar(10),city varchar(10));
Insert some records into it:
insert into parent(`name`,city) values('sfsdfd','sdfsdfdf'),('sfsdfd','sdfsdfdf'),('sfsdfd',null);
Now, you have another table named child, defined as below (notice the last column has a NOT NULL constraint):
create table child(id int not null auto_increment primary key,
`name` varchar(10),city varchar(10) not null);
Now execute both of the statements below (what you are currently doing):
insert into child(`name`,city) select `name`,city from parent;
delete from parent;
Result: the INSERT will fail due to the NOT NULL constraint in the child table, but the DELETE will succeed.
To avoid exactly this scenario you need a transaction in place, so that if the INSERT fails you don't run the DELETE at all.
Pseudo code for how you would handle this in a transaction:
start transaction
insert into child(`name`,city) select * from parent;
if(ERROR)
rollback
exit from stored proc
else
commit
delete from parent;
Side note: exiting from the stored proc can be implemented using LEAVE.
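Putting the pieces together with the tables from the question, here is a minimal sketch of how the archiving procedure could look with a transaction and an exit handler (the procedure name sp_archivev3_tx and the handler-based rollback are illustrative, following the pattern in the linked posts; adapt before using on real data):
DELIMITER $$
CREATE PROCEDURE `sp_archivev3_tx`()
BEGIN
-- on any SQL error, roll back everything and leave the procedure
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
ROLLBACK;
END;
START TRANSACTION;
INSERT INTO send.sgev3_archive (a_bi, b_vc, c_int, d_int, e_vc, f_vc, g_vc,
h_vc, i_dt, j_vc, k_vc, l_vc, m_dt, n_vch, o_bit)
SELECT a_bi, b_vc, c_int, d_int, e_vc, f_vc, g_vc,
h_vc, i_dt, j_vc, k_vc, l_vc, m_dt, n_vch, o_bit
FROM send.sgev3
WHERE m_dt BETWEEN '2014-06-09' AND CURDATE();
-- only reached if the INSERT succeeded
DELETE FROM send.sgev3
WHERE m_dt BETWEEN '2014-06-09' AND CURDATE();
COMMIT;
END$$
DELIMITER ;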

Related

Cannot update value on insert if salary is greater than

I have a problem creating a trigger for a basic table that will check on insert if one of the values inserted is bigger than 3000 and replace it with 0. It throws this error:
Can't update table 'staff' in stored function/trigger because it is already used by statement which invoked this stored function/trigger
The structure of the table is very simple:
CREATE TABLE `staff` (
`ID` int(11) NOT NULL,
`NAZWISKO` varchar(50) DEFAULT NULL,
`PLACA` float DEFAULT NULL
)
And the trigger for it looks like this:
BEGIN
IF new.placa >= 3000 THEN
UPDATE staff SET new.placa = 0;
END IF;
END
I don't fully understand what occurs here; I suspect some recursion, but I am quite new to the topic of triggers and I have a lab coming up, so I want to be prepared for it.
MySQL disallows triggers from doing UPDATE/INSERT/DELETE against the same table for which the trigger executed, because there is too great a chance of causing an infinite loop. That is, in an UPDATE trigger, if you could UPDATE the same table, that would cause the UPDATE trigger to execute again, which would UPDATE the same table, and so on.
But I guess you only want to change the value of placa on the same row being handled by the trigger. If so, just SET it:
BEGIN
IF new.placa >= 3000 THEN
SET new.placa = 0;
END IF;
END
Remember that you must use a BEFORE trigger when changing column values.
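For completeness, a sketch of the full trigger DDL (the trigger name is an assumption; add a similar BEFORE UPDATE trigger if updates should be capped as well):
DELIMITER $$
CREATE TRIGGER trg_staff_placa_cap
BEFORE INSERT ON staff
FOR EACH ROW
BEGIN
IF NEW.PLACA >= 3000 THEN
SET NEW.PLACA = 0;
END IF;
END$$
DELIMITER ;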

MySQL 100% get last insert id LAST_INSERT_ID();

MySQL has the LAST_INSERT_ID() function, which returns the ID generated by the last INSERT.
But this is not safe: if I run a query, then call LAST_INSERT_ID(), and another query is executed between the two, I can get the wrong ID.
This can happen with multiple threads using the same connection, or when using pconnect (a persistent connection shared by multiple users).
Is there a method for getting the ID that I want that is 100% safe?
Thanks
A stored procedure may help in this case:
Create table test (id int AUTO_INCREMENT primary key, name varchar(50));
The stored procedure:
delimiter $$
DROP PROCEDURE IF EXISTS InserData$$
CREATE PROCEDURE InserData(IN _Name VARCHAR(50))
BEGIN
START TRANSACTION;
INSERT INTO test(name) VALUES (_Name);
SELECT LAST_INSERT_ID() AS InsertID;
COMMIT;
END$$
DELIMITER ;
Call the stored procedure using
CALL InserData('TESTER')
Give it a try; there is a transaction statement in place, but the transaction alone cannot guarantee the value in a multi-threaded environment.
The link Mysql thread safety of last_insert_id explains that it works on a per-connection basis.
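To illustrate the per-connection behaviour, a minimal sketch (the generated id of 7 is only an assumption for the example):
-- connection A
INSERT INTO test(name) VALUES ('alice'); -- suppose this row gets id 7
-- connection B may run its own inserts here; they do not affect connection A
SELECT LAST_INSERT_ID(); -- still returns 7 on connection A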
Is using SELECT Max(ID) FROM table safer than using SELECT last_insert_id(), where they run as 2 separate queries?
According to your question, the table must have a primary key, so you can get the last record from MAX(ID).

MySQL Contiguous Sequential Rows Field even on delete and insert

I need a sequential number for the rows in a table, and I need to ensure that it is always sequential with no gaps on insert. When a row is deleted I can leave the gap, but on insert I must fill the gaps with new rows. The reason for this is that a different system must line up one-for-one with the row records, yet the database can be manipulated by others, both on the SQL end and via an application. I am thinking a trigger will let me handle the "something changed" part, but how do I actually determine whether I have gaps and assign the sequence number on insert? Even if I have to maintain the deleted sequences in a separate table and manage them, that is fine; I am required to line up one-for-one with this other system no matter how the table gets manipulated.
An auto-increment field will not work: when a row gets deleted, the next insert will still use the next auto-increment value. I would need an "insert at ..", or perhaps keep the row, add an IsDeleted field, and make the table read-only or disallow further inserts/deletes, but how would I do that?
Perhaps when a row is inserted I could set the sequence number to a gap if one is found, or to the end if not.
Does somebody have experience doing this kind of thing?
I know there is a lot here. I tried to document it rather well inside the code and here and there. It uses stored procedures; you can naturally pull the code out and not use that method. It uses a main table that houses the next available incrementors, safe InnoDB intention locks for concurrency, and a reuse table with stored procs to support it.
It does not in any way use the table myTable; that table is shown for your own imagination, based on the comments under your question. The summary of those is that you know you will have gaps upon DELETE and you want some orderly fashion to reuse those slots, those sequence numbers. So, when you DELETE a row, use the stored procs accordingly to record that number.
For the purposes of testing, your sectionType = 'devices'.
And best of all it is tested!
Schema:
create table myTable
( -- your main table, the one you cherish
`id` int auto_increment primary key, -- ignore this
`seqNum` int not null, -- FOCUS ON THIS
`others` varchar(100) not null
) ENGINE=InnoDB;
create table reuseMe
( -- table for sequence numbers to reuse
`seqNum` int not null primary key, -- FOCUS ON THIS
`reused` int not null -- 0 upon entry, 1 when used up (reused)
-- the primary key enforces uniqueness
) ENGINE=InnoDB;
CREATE TABLE `sequences` (
-- table of sequence numbers system-wide
-- this is the table that allocates the incrementors to you
`id` int NOT NULL AUTO_INCREMENT,
`sectionType` varchar(200) NOT NULL,
`nextSequence` int NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `sectionType` (`sectionType`)
) ENGINE=InnoDB;
INSERT sequences(sectionType,nextSequence) values ('devices',1); -- this is the focus
INSERT sequences(sectionType,nextSequence) values ('plutoSerialNum',1); -- not this
INSERT sequences(sectionType,nextSequence) values ('nextOtherThing',1); -- not this
-- the other ones are conceptuals for multi-use of a sequence table
Stored Proc: uspGetNextSequence
DROP PROCEDURE IF EXISTS uspGetNextSequence;
DELIMITER $$
CREATE PROCEDURE uspGetNextSequence(p_sectionType varchar(200))
BEGIN
-- a stored proc to manage next sequence numbers handed to you.
-- driven by the simple concept of a name. So we call it a section type.
-- uses SAFE INNODB Intention Locks to support concurrency
DECLARE valToUse INT;
START TRANSACTION;
SELECT nextSequence into valToUse from sequences where sectionType=p_sectionType FOR UPDATE;
IF valToUse is null THEN
SET valToUse=-1;
END IF;
UPDATE sequences set nextSequence=nextSequence+1 where sectionType=p_sectionType;
COMMIT; -- get it and release INTENTION LOCK ASAP
SELECT valToUse as yourSeqNum; -- return as a 1 column, 1 row resultset
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspGetNextSequence('devices'); -- your section is 'devices'
After you call uspGetNextSequence() it is your RESPONSIBILITY to ensure that that sequence number
is either added into myTable (by confirming it) or, if the insert fails, added to
the reuse table with a call to uspAddToReuseList(). Not all inserts succeed; focus on this part.
With this code you cannot "put" the number back into the sequences table, because of
concurrency, other users, and the range having already moved past it. So, simply, if the insert fails,
put the number into reuseMe via uspAddToReuseList().
Stored Proc: uspAddToReuseList:
DROP PROCEDURE IF EXISTS uspAddToReuseList;
DELIMITER $$
CREATE PROCEDURE uspAddToReuseList(p_reuseNum INT)
BEGIN
-- a stored proc to insert a sequence num into the reuse list
-- marks it available for reuse (a status column called `reused`)
INSERT reuseMe(seqNum,reused) SELECT p_reuseNum,0; -- 0 means it is avail, 1 not
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspAddToReuseList(701); -- 701 needs to be reused
Stored Proc: uspGetOneToReuse:
DROP PROCEDURE IF EXISTS uspGetOneToReuse;
DELIMITER $$
CREATE PROCEDURE uspGetOneToReuse()
BEGIN
-- a stored proc to get an available sequence num for reuse
-- a return of -1 means there aren't any
-- the slot will be marked as reused, the row will remain
DECLARE retNum int; -- the seq number to return, to reuse, -1 means there isn't one
START TRANSACTION;
-- it is important that 0 or 1 rows hit the following condition
-- also note that FOR UPDATE is the innodb Intention Lock
-- The lock is for concurrency (multiple users at once)
SELECT seqNum INTO retNum
FROM reuseMe WHERE reused=0 ORDER BY seqNum LIMIT 1 FOR UPDATE;
IF retNum is null THEN
SET retNum=-1;
ELSE
UPDATE reuseMe SET reused=1 WHERE seqNum=retNum; -- slot used
END IF;
COMMIT; -- release INTENTION LOCK ASAP
SELECT retNum as yoursToReuse; -- >0 or -1 means there is none
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspGetOneToReuse();
Stored Proc: uspCleanReuseList:
DROP PROCEDURE IF EXISTS uspCleanReuseList;
DELIMITER $$
CREATE PROCEDURE uspCleanReuseList()
BEGIN
-- a stored proc to remove rows that have been successfully reused
DELETE FROM reuseMe where reused=1;
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspCleanReuseList();
Stored Proc: uspOoopsResetToAvail:
DROP PROCEDURE IF EXISTS uspOoopsResetToAvail;
DELIMITER $$
CREATE PROCEDURE uspOoopsResetToAvail(p_reuseNum INT)
BEGIN
-- a stored proc to deal with a reuse attempt (sent back to you)
-- that you need to reset the number as still available,
-- perhaps because of a failed INSERT when trying to reuse it
UPDATE reuseMe SET reused=0 WHERE seqNum=p_reuseNum;
END$$
DELIMITER ;
-- ****************************************************************************************
-- test:
call uspOoopsResetToAvail(701);
Workflow ideas:
Let GNS mean a call to uspGetNextSequence().
Let RS mean Reuse Sequence, via a call to uspGetOneToReuse().
When a new INSERT is desired, call RS:
A. If RS returns -1, nothing is available to reuse, so call GNS, which returns N. If you can successfully INSERT with myTable.seqNum=N and confirm it, you are done. If you cannot successfully INSERT it, then call uspAddToReuseList(N).
B. If RS returns > 0, note that that slot now has reuseMe.reused=1, a good thing to remember, so it is assumed to be in the process of being successfully reused. Let's call that sequence number N. If you can successfully INSERT with myTable.seqNum=N and confirm it, you are done. If you cannot successfully INSERT it, then call uspOoopsResetToAvail(N).
When you deem it safe to call uspCleanReuseList(), do so. Adding a DATETIME to the reuseMe table might be a good idea, denoting when the row from myTable was originally deleted, causing the reuseMe row to get its original INSERT. A combined client-side sketch of this workflow follows below.
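A client-side sketch of that workflow (the literal 123 stands in for whichever N the procedures return; reading the one-row resultsets back is left to the calling application):
-- step 1: try to reuse a slot
CALL uspGetOneToReuse(); -- returns yoursToReuse: a reusable seqNum, or -1 if none
-- step 2: if yoursToReuse = -1, get a brand-new number instead
CALL uspGetNextSequence('devices'); -- returns yourSeqNum
-- step 3: attempt the insert with whichever number N you obtained
INSERT INTO myTable (seqNum, others) VALUES (123, 'some data');
-- step 4: only if the INSERT fails:
-- if N came from uspGetNextSequence: CALL uspAddToReuseList(123);
-- if N came from uspGetOneToReuse: CALL uspOoopsResetToAvail(123);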

Concurrent MYSQL procedure calls returning different results with an unrelated SELECT statement

I'm experiencing some very strange transactional behaviour in my MySQL application.
I've managed to reduce the problem down to a small isolated test case, the code for which I’ve included below:
-- Setup a new environment
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
DROP DATABASE IF EXISTS `testDB`;
CREATE DATABASE `testDB`;
USE `testDB`;
-- Create a table I want two procedure calls to interact with
CREATE TABLE `tbl_test` (
`id` INT(10) UNSIGNED NOT NULL
, PRIMARY KEY (`id`)
);
-- A second table purely to demonstrate the issue
CREATE TABLE `tbl_test2` (
`id` INT(10) UNSIGNED NOT NULL
);
DELIMITER $$
DROP PROCEDURE IF EXISTS `sp_test` $$
CREATE PROCEDURE `sp_test` ()
BEGIN
START TRANSACTION;
-- CRAZY LINE
SELECT * FROM `tbl_test2`;
-- Insert ignore so both calls don’t try to insert the same row
INSERT IGNORE INTO `tbl_test` (`id`) VALUES (1);
-- Sleep added to make it possible to run concurrently manually
SELECT SLEEP(1) INTO @rubbish;
-- The result I am interested in
SELECT COUNT(*) FROM `tbl_test`;
COMMIT;
END $$
DELIMITER ;
Steps to Reproduce:
Run the above script to create a test database, two tables, and a stored procedure.
In two separate connections, as near to simultaneously as possible, run the stored procedure (you can increase the SLEEP time if you need longer):
USE `testDB`;
CALL sp_test ();
The Problem
When executed concurrently over two separate connections the SELECT COUNT(*) FROM `tbl_test`; statement returns different values for the two calls.
When I follow the steps above, I get back 1 from the first of the two procedure calls and 0 from the second.
My understanding of transactional behaviour and table locking is that when the first call reaches the INSERT statement it will take a lock. The second procedure call will reach the same line but must then wait until the transaction from the first call has been committed. Increasing the sleep time reinforces this idea, as the second call takes twice as long to complete. If this is the case, however, then the second procedure call should pick up the insert from the first call and both results should be equal to 1.
TL;DR
I'm expecting both to equal 1
Note that I am using READ_COMMITTED as my transaction isolation level.
I've tested this using MySQL Server and MariaDB.
Further Weirdness
So at this point I assumed my understanding was incorrect. However, I then noticed that by removing the line SELECT * FROM `tbl_test2`; the results suddenly produced the expected values!
I've been experimenting with the script but essentially, including a SELECT statement to any table within the database before the INSERT line causes unanticipated results. I have absolutely no idea why this is the case.
Questions
Is my understanding of the expected transactional behaviour correct?
Why on earth does the SELECT statement to an unrelated table cause the transactional locking to fail?
If anyone can shed some light on this I would be very grateful!

Modify the table from its own trigger in MySQL - phpMyAdmin

I'm using MySQL in phpMyAdmin, where I have to delete an entry from tableA if I insert a row with the same primary key. I thought of doing it in a BEFORE INSERT trigger on tableA.
For example,
if tableA contains
1 Hai Hello
where 1 is the primary key,
and I now insert a row 1 Bye Hello, then the BEFORE INSERT trigger should delete the old entry and the new (second) row will be inserted. But MySQL has a restriction against updating a table inside a trigger defined on that same table.
It gives the error
#1442 - Can't update table 'tableA' in stored
function/trigger because it is already used by statement which invoked
this stored function/trigger.
So I changed my approach: I call a procedure from the BEFORE INSERT trigger of tableA, and in that procedure I do the task I had intended to do in the trigger. But unfortunately I'm getting the same error.
In the BEFORE INSERT trigger I simply call the procedure as:
CALL proce1(new.Reg_No);
In the procedure I have done this:
DECLARE toup integer;
select count(*) into toup from tableA where Reg_No=reg;/*Here Reg_No is primary key */
if toup > 0 then
delete from tableA where Reg_No=reg;
end if;
I need some other idea to achieve this. Help me.
I don't like to use triggers much because they are sometimes hard to manage, and they can also degrade performance. I am not against triggers, as they can be handy in some cases.
In your case, have you thought of using REPLACE ...., INSERT INTO .... ON DUPLICATE KEY UPDATE, or even INSERT IGNORE INTO ....?
REPLACE .... will delete the existing record if one is found and insert the new one with the same ID.
INSERT INTO .... ON DUPLICATE KEY UPDATE will allow you to overwrite existing fields when a duplicate is found.
INSERT IGNORE INTO .... will allow you to skip the newly inserted row if one already exists. A quick sketch of all three follows below.
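A minimal sketch of the three options against a simplified tableA (the column names col1 and col2 are assumptions standing in for the real columns):
-- assumed simplified structure: tableA(Reg_No INT PRIMARY KEY, col1 VARCHAR(10), col2 VARCHAR(10))
REPLACE INTO tableA (Reg_No, col1, col2) VALUES (1, 'Bye', 'Hello'); -- deletes old row 1, inserts the new one
INSERT INTO tableA (Reg_No, col1, col2) VALUES (1, 'Bye', 'Hello')
ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2); -- updates the existing row 1 in place
INSERT IGNORE INTO tableA (Reg_No, col1, col2) VALUES (1, 'Bye', 'Hello'); -- silently skips if row 1 already exists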
Since you mentioned in the comments that you are importing records from a file, try the LOAD DATA INFILE approach, which allows you to REPLACE rows on duplicate keys:
LOAD DATA LOCAL INFILE 'x3export.txt'
REPLACE INTO TABLE x3export
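A slightly more complete form, assuming a tab-delimited file (the field and line terminators here are assumptions; adjust them to the actual export format):
LOAD DATA LOCAL INFILE 'x3export.txt'
REPLACE INTO TABLE x3export
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';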