I'm having an issue where an UPDATE statement should (to my knowledge) update 5 rows: I select 5 rows into a temp table and use an INNER JOIN against that temp table in the UPDATE statement.
However, when the UPDATE runs, it updates every row that could have been selected into the temp table, not just the rows actually joined from the temp table itself.
I'm using FOR UPDATE in the selection statement to lock the rows, since I expect multiple queries to hit this table at the same time (note: removing it does not change the faulty behaviour).
I've generalised the entire code base and it still shows the same effect. I've been on this for the past few days and I'm sure it's just something silly I must be doing.
Code description
TABLE `data`.`data_table` - stores the data and records whether it has been picked up by the external program.
Stored procedure `admin`.`func_fill_table` - debug code to populate the above table.
Stored procedure `data`.`func_get_data` - the actual code, designed to retrieve a batch of records, mark them as picked up, and return them to an external application.
Basic Setup Code
DROP TABLE IF EXISTS `data`.`data_table`;
DROP PROCEDURE IF EXISTS `admin`.`func_fill_table`;
DROP PROCEDURE IF EXISTS `data`.`func_get_data`;
DROP SCHEMA IF EXISTS `data`;
DROP SCHEMA IF EXISTS `admin`;
CREATE SCHEMA `admin`;
CREATE SCHEMA `data`;
CREATE TABLE `data`.`data_table` (
`identification_field_1` char(36) NOT NULL,
`identification_field_2` char(36) NOT NULL,
`identification_field_3` int(11) NOT NULL,
`information_field_1` int(11) NOT NULL,
`utc_action_time` datetime NOT NULL,
`utc_actioned_time` datetime DEFAULT NULL,
PRIMARY KEY (`identification_field_1`,`identification_field_2`,`identification_field_3`),
KEY `NC_IDX_data_table_action_time` (`utc_action_time`)
);
Procedure Creation
DELIMITER //
CREATE PROCEDURE `admin`.`func_fill_table`(
IN records int
)
BEGIN
IF records < 1
THEN SET records = 50;
END IF;
SET @processed = 0;
SET @action_time = NULL;
WHILE @processed < records
DO
SET @action_time = DATE_ADD(NOW(), INTERVAL FLOOR(RAND()*(45)-10) MINUTE); # time shortened for temp testing
SET @if_1 = UUID();
SET @if_2 = UUID();
INSERT INTO data.data_table(
identification_field_1
,identification_field_2
,identification_field_3
,information_field_1
,utc_action_time
,utc_actioned_time)
VALUES (
@if_1
,@if_2
,FLOOR(RAND()*5000+1)
,FLOOR(RAND()*5000+1)
,@action_time
,NULL);
SET @processed = @processed + 1;
END WHILE;
END
//
CREATE PROCEDURE `data`.`func_get_data`(
IN batch int
)
BEGIN
IF batch < 1
THEN SET batch = 1; /*Minimum Batch Size of 1 */
END IF;
DROP TABLE IF EXISTS `data_set`;
CREATE TEMPORARY TABLE `data_set`
SELECT
`identification_field_1` as `identification_field_1_local`
,`identification_field_2` as `identification_field_2_local`
,`identification_field_3` as `identification_field_3_local`
FROM `data`.`data_table`
LIMIT 0; /* Create a temp table using the same data format as the table but insert no data*/
SET SESSION sql_select_limit = batch;
INSERT INTO `data_set` (
`identification_field_1_local`
,`identification_field_2_local`
,`identification_field_3_local`)
SELECT
`identification_field_1`
,`identification_field_2`
,`identification_field_3`
FROM `data`.`data_table`
WHERE
`utc_actioned_time` IS NULL
AND `utc_action_time` < NOW()
FOR UPDATE; #Select out the rows to process (up to batch size (eg 5)) and lock those rows
UPDATE
`data`.`data_table` `dt`
INNER JOIN
`data_set` `ds`
ON (`ds`.`identification_field_1_local` = `dt`.`identification_field_1`
AND `ds`.`identification_field_2_local` = `dt`.`identification_field_2`
AND `ds`.`identification_field_3_local` = `dt`.`identification_field_3`)
SET `dt`.`utc_actioned_time` = NOW();
# Update the table to say these rows are being processed
select ROW_COUNT(),batch;
#Debug output for rows altered (should be maxed by batch number)
SELECT * FROM
`data`.`data_table` `dt`
INNER JOIN
`data_set` `ds`
ON (`ds`.`identification_field_1_local` = `dt`.`identification_field_1`
AND `ds`.`identification_field_2_local` = `dt`.`identification_field_2`
AND `ds`.`identification_field_3_local` = `dt`.`identification_field_3`);
# Debug output of the rows that should have been modified
SELECT
`identification_field_1_local`
,`identification_field_2_local`
,`identification_field_3_local`
FROM
`data_set`; /* Output data to external system*/
/* Commit the in process field and allow other processes to access thoese rows again */
END
//
Run Code
call `admin`.`func_fill_table`(5000);
call `data`.`func_get_data`(5);
You are misusing the sql_select_limit setting:
The maximum number of rows to return from SELECT statements.
It only applies to plain SELECT statements (to limit results sent to the client), not to INSERT ... SELECT .... It is intended as a safeguard to prevent users from being accidentally flooded with millions of results, not as another LIMIT mechanism.
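A quick way to see the difference (a sketch; the table names `demo_table` and `demo_copy` are illustrative, not from your schema):

```sql
-- sql_select_limit caps rows *returned to the client* only
SET SESSION sql_select_limit = 5;

SELECT id FROM demo_table;        -- client receives at most 5 rows

INSERT INTO demo_copy (id)
SELECT id FROM demo_table;        -- copies ALL matching rows; the setting is ignored
```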
While in general you cannot use a variable for LIMIT, you can do it in a stored procedure (MySQL 5.5+):
The LIMIT clause can be used to constrain the number of rows returned by the SELECT statement. LIMIT takes one or two numeric arguments, which must both be nonnegative integer constants, with these exceptions: [...]
Within stored programs, LIMIT parameters can be specified using integer-valued routine parameters or local variables.
So in your case, you can simply use
...
FROM `data`.`data_table`
WHERE `utc_actioned_time` IS NULL AND `utc_action_time` < NOW()
LIMIT batch
FOR UPDATE;
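Applied to the procedure above, the full INSERT ... SELECT would then read (a sketch of the fix against your posted schema, not tested against your data):

```sql
INSERT INTO `data_set` (
 `identification_field_1_local`
,`identification_field_2_local`
,`identification_field_3_local`)
SELECT
 `identification_field_1`
,`identification_field_2`
,`identification_field_3`
FROM `data`.`data_table`
WHERE `utc_actioned_time` IS NULL
  AND `utc_action_time` < NOW()
LIMIT batch      -- legal here because batch is a routine parameter
FOR UPDATE;
```

The `SET SESSION sql_select_limit = batch;` line can then be removed entirely.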
Related
When I call this stored procedure it shows error: unknown column...
BEGIN
if (
`LastRow.Transaction`=4 and `LastRow.Xre`>1)
then
SELECT
sleep(2);
END if;
end
Please note that sleep(2) is just a placeholder to demonstrate doing something when the condition is true. What would be the proper way to accomplish a test based on the value of a specific record? In the above example the table (actually a View) has only one row.
Q: What would be the proper way to accomplish a test based on value of a specific record?
If you mean based on values in columns stored in one row of a table: it seems like we need a query that references the table and retrieves the values stored in that row. Then we have those values available in the procedure.
As an example
BEGIN
-- local procedure variables, specify appropriate datatypes
DECLARE lr_transaction BIGINT DEFAULT NULL;
DECLARE lr_xre BIGINT DEFAULT NULL;
-- retrieve values from columns into local procedure variables
SELECT `LastRow`.`Transaction`
, `LastRow`.`Xre`
INTO lr_transaction
, lr_xre
FROM `LastRow`
WHERE someconditions
ORDER BY someexpressions
LIMIT 1
;
IF ( lr_transaction = 4 AND lr_xre > 1 ) THEN
-- do something
END IF;
END$$
That's an example of how we can retrieve a row from a table and do some check. We could also do the check in SQL and just return a boolean:
BEGIN
-- local procedure variables, specify appropriate datatypes
DECLARE lb_check TINYINT(1) UNSIGNED DEFAULT 0;
-- retrieve values from columns into local procedure variables
SELECT IF(`LastRow`.`Transaction` = 4 AND `LastRow`.`Xre` > 1,1,0)
INTO lb_check
FROM `LastRow`
WHERE someconditions
ORDER BY someexpressions
LIMIT 1
;
IF ( lb_check ) THEN
-- do something
END IF;
END$$
This stored procedure that I'm working on errors out sometimes. I am getting a "Result consisted of more than one row" error, but only for certain JOB_ID_INPUT values. I understand what causes this error, so I have tried to be careful to make sure that my return values are scalar where they should be. It's tough to see into the stored procedure, so I'm not sure where the error is generated. Since the error is thrown conditionally, it has me thinking memory could be an issue, or cursor reuse. I don't work with cursors that often, so I'm not sure. Thank you to anyone who helps.
DROP PROCEDURE IF EXISTS export_job_candidates;
DELIMITER $$
CREATE PROCEDURE export_job_candidates (IN JOB_ID_INPUT INT(11))
BEGIN
DECLARE candidate_count INT(11) DEFAULT 0;
DECLARE candidate_id INT(11) DEFAULT 0;
# these are the ib variables
DECLARE _overall_score DECIMAL(5, 2) DEFAULT 0.0;
# declare the cursor that will be needed for this SP
DECLARE curs CURSOR FOR SELECT user_id FROM job_application WHERE job_id = JOB_ID_INPUT;
# this table stores all of the data that will be returned from the various tables that will be joined together to build the final export
CREATE TEMPORARY TABLE IF NOT EXISTS candidate_stats_temp_table (
overall_score_ib DECIMAL(5, 2) DEFAULT 0.0
) engine = memory;
SELECT COUNT(job_application.id) INTO candidate_count FROM job_application WHERE job_id = JOB_ID_INPUT;
OPEN curs;
# loop controlling the insert of data into the temp table that is retuned by this function
insert_loop: LOOP
# end the loop if there is no more computation that needs to be done
IF candidate_count = 0 THEN
LEAVE insert_loop;
END IF;
FETCH curs INTO candidate_id;
# get the ib data that may exist for this user
SELECT
tests.overall_score
INTO
_overall_score
FROM
tests
WHERE
user_id = candidate_id;
#build the insert for the table that is being constructed via this loop
INSERT INTO candidate_stats_temp_table (
overall_score_ib
) VALUES (
_overall_score
);
SET candidate_count = candidate_count - 1;
END LOOP;
CLOSE curs;
SELECT * FROM candidate_stats_temp_table WHERE 1;
END $$
DELIMITER ;
The WHERE 1 (as pointed out by @cdonner) definitely doesn't look right, but I'm pretty sure this error happens because one of your SELECT ... INTO commands returns more than one row.
This one should be OK because it's an aggregate without a GROUP BY, which always returns one row:
SELECT COUNT(job_application.id) INTO candidate_count
FROM job_application WHERE job_id = JOB_ID_INPUT;
So it's probably this one:
# get the ib data that may exist for this user
SELECT
tests.overall_score
INTO
_overall_score
FROM
tests
WHERE
user_id = candidate_id;
Try to figure out if it's possible for this query to return more than one row, and if so, how do you work around it. One way might be to MAX the overall score:
SELECT MAX(tests.overall_score) INTO _overall_score
FROM tests
WHERE user_id = candidate_id;
I think you want to use
LIMIT 1
in your select, not
WHERE 1
Aside from using this safety net, you should understand your data to figure out why you are getting multiple results. Without seeing the data, it is difficult for me to take a guess.
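To understand the data, a query like the following (a sketch against the `tests` table named in the question) lists the candidates that have more than one row, which is exactly what makes a bare SELECT ... INTO fail:

```sql
SELECT user_id, COUNT(*) AS rows_per_user
FROM tests
GROUP BY user_id
HAVING COUNT(*) > 1;
```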
I am new to the area of MySQL functions and I can't seem to get this working properly.
Basically, every time I run a SELECT query against this one particular table, the next incremented value should be returned.
Here is my table:
CREATE TABLE testdb.id_generator(
invoice_id INT(11) NOT NULL,
PRIMARY KEY (invoice_id)
)
ENGINE = INNODB
AVG_ROW_LENGTH = 16384
CHARACTER SET latin1
COLLATE latin1_swedish_ci;
Here is the function:
CREATE
FUNCTION testdb.f_id_test()
RETURNS INT(11)
BEGIN
DECLARE v_val INT;
DECLARE c_select CURSOR FOR SELECT invoice_id
FROM
id_generator;
OPEN c_select;
FETCH c_select INTO v_val;
CLOSE c_select;
UPDATE id_generator
SET
invoice_id = invoice_id + 1;
RETURN v_val;
END
Whenever I try to run the Query
SELECT f_id_test()
FROM
id_generator
it says "Can't update table 'id_generator' in stored function/trigger because it is already used by statement which invoked this stored function/trigger." What am I doing wrong here?
When you run your statement, try it without the "FROM id_generator", essentially just:
SELECT f_id_test()
Your function already pulls its data from the id_generator table (so the FROM clause adds nothing), and the error is saying that your SELECT statement is using the same table that the function is trying to update.
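If the goal is a hand-rolled sequence, the usual MySQL idiom sidesteps the restriction entirely by doing the increment in the invoking statement rather than inside a function (a sketch, assuming the single-row id_generator table from the question):

```sql
-- atomically increment and remember the new value for this connection
UPDATE id_generator SET invoice_id = LAST_INSERT_ID(invoice_id + 1);

-- read it back; connection-local, so safe with concurrent clients
SELECT LAST_INSERT_ID();
```

The `LAST_INSERT_ID(expr)` form stores the expression's value as the value to be returned by the next `LAST_INSERT_ID()` call on the same connection.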
I'm writing a stored procedure to update a table:
UPDATE st SET somedate = NOW();
The client of the SP must know the exact date and time generated by the NOW function.
There are two options:
1) the client passes an input parameter (called _now) to the SP giving it the current date and time
UPDATE st SET somedate = _now;
2) the SP returns back the NOW's output to the client into an out parameter
UPDATE st SET somedate = NOW();
SELECT somedate FROM st INTO _now;
What do you think is the best option?
Are other options possible?
SET _now = NOW();
UPDATE st SET somedate = _now;
-- return _now to the client, e.g. via an OUT parameter
I would do something like this:
drop table if exists users;
create table users
(
user_id int unsigned not null auto_increment primary key,
username varchar(32) unique not null,
created_date datetime not null
)
engine=innodb;
delimiter ;
drop procedure if exists insert_user;
delimiter #
create procedure insert_user
(
in uname varchar(32)
)
proc_main:begin
declare id int unsigned default null;
declare created datetime default null;
set created = now();
insert into users (username, created_date) values (uname, created);
set id = last_insert_id();
-- use id elsewhere maybe...
select id as user_id, created as created_date;
end proc_main #
delimiter ;
call insert_user('f00');
call insert_user('bar');
select * from users;
I suspect that both approaches are wrong.
client of the SP must know the exact date and time
Why? I suspect you really mean that the client must be able to identify the records affected by a transaction, but using a timestamp to do that will not be accurate. And it's not just a transaction spanning more than one second that is the problem: two such operations may occur within the same second.
If you've got a set of records which you need to identify as belonging to some group, then that must be expressed in the schema; the timestamp of the most recent transaction is obviously not reliable, even assuming you never have further updates on the table.
Add another column or another table and generate a surrogate key to describe the transaction.
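A minimal sketch of that idea (the `st_batch` table and `batch_id` column are illustrative additions, not part of the original schema):

```sql
-- a surrogate key per transaction/batch
CREATE TABLE st_batch (
  batch_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  created_at DATETIME NOT NULL
);

-- inside the procedure: create a batch, then stamp the rows with it
INSERT INTO st_batch (created_at) VALUES (NOW());
SET @batch = LAST_INSERT_ID();
UPDATE st SET somedate = NOW(), batch_id = @batch;

-- the client can now identify the affected rows exactly
SELECT * FROM st WHERE batch_id = @batch;
```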
C.
I have a trigger in which I want to have a variable that holds an INT I get from a SELECT, so I can use it in two IF statements instead of calling the SELECT twice. How do you declare/use variables in MySQL triggers?
You can declare local variables in MySQL triggers, with the DECLARE syntax.
Here's an example:
DROP TABLE IF EXISTS foo;
CREATE TABLE FOO (
i SERIAL PRIMARY KEY
);
DELIMITER //
DROP TRIGGER IF EXISTS bar //
CREATE TRIGGER bar AFTER INSERT ON foo
FOR EACH ROW BEGIN
DECLARE x INT;
SET x = NEW.i;
SET @a = x; -- set a user variable, visible outside the trigger
END//
DELIMITER ;
SET @a = 0;
SELECT @a; -- returns 0
INSERT INTO foo () VALUES ();
SELECT @a; -- returns 1, the value it got during the trigger
When you assign a value to a variable, you must ensure that the query returns only a single value, not a set of rows or a set of columns. For instance, if your query returns a single value in practice, it's okay; but as soon as it returns more than one row, you get "ERROR 1242: Subquery returns more than 1 row".
You can use LIMIT or MAX() to make sure that the local variable is set to a single value.
CREATE TRIGGER bar AFTER INSERT ON foo
FOR EACH ROW BEGIN
DECLARE x INT;
SET x = (SELECT age FROM users WHERE name = 'Bill');
-- ERROR 1242 if more than one row with 'Bill'
END//
CREATE TRIGGER bar AFTER INSERT ON foo
FOR EACH ROW BEGIN
DECLARE x INT;
SET x = (SELECT MAX(age) FROM users WHERE name = 'Bill');
-- OK even when more than one row with 'Bill'
END//
CREATE TRIGGER clearcamcdr AFTER INSERT ON `asteriskcdrdb`.`cdr`
FOR EACH ROW
BEGIN
SET @INC = (SELECT sip_inc FROM trunks LIMIT 1);
IF NEW.billsec > 1 AND NEW.channel LIKE @INC
AND NEW.dstchannel NOT LIKE ""
THEN
insert into `asteriskcdrdb`.`filtre` (id_appel,date_appel,source,destinataire,duree,sens,commentaire,suivi)
values (NEW.id,NEW.calldate,NEW.src,NEW.dstchannel,NEW.billsec,"entrant","","");
END IF;
END$$
Don't try this @ home
CREATE TRIGGER `category_before_ins_tr` BEFORE INSERT ON `category`
FOR EACH ROW
BEGIN
SET @tableId = (SELECT id FROM dummy LIMIT 1);
END;
I'm posting this solution because I had a hard time finding what I needed. This post got me close enough (+1 for that, thank you), and here is the final solution for rearranging column data before insert when the data matches a test.
Note: this is from a legacy project I inherited where:
The Unique Key is a composite of rridprefix + rrid
Before I took over there was no constraint preventing duplicate unique keys
We needed to combine two tables (one full of duplicates) into the main table which now has the constraint on the composite key (so merging fails because the gaining table won't allow the duplicates from the unclean table)
on duplicate key is less than ideal because the columns are too numerous and may change
Anyway, here is the trigger that puts any duplicate keys into a legacy column while allowing us to store the legacy, bad data (and not trip the gaining table's composite unique key).
BEGIN
-- prevent duplicate composite keys when merging in archive to main
SET @EXIST_COMPOSITE_KEY = (SELECT count(*) FROM patientrecords where rridprefix = NEW.rridprefix and rrid = NEW.rrid);
-- if the composite key to be introduced during merge exists, rearrange the data for insert
IF @EXIST_COMPOSITE_KEY > 0
THEN
-- set the incoming column data this way (if composite key exists)
-- the legacy duplicate rrid field will help us keep the bad data
SET NEW.legacyduperrid = NEW.rrid;
-- allow the following block to set the new rrid appropriately
SET NEW.rrid = null;
END IF;
-- legacy code tried to set the rrid (race condition); now the db does it
SET NEW.rrid = (
SELECT if(NEW.rrid is null and NEW.legacyduperrid is null, IFNULL(MAX(rrid), 0) + 1, NEW.rrid)
FROM patientrecords
WHERE rridprefix = NEW.rridprefix
);
END
Or you can just include the SELECT statement in the SQL that invokes the trigger, so it's passed in as one of the columns in the trigger row(s). As long as you're certain it will infallibly return only one row (hence one value). (And, of course, it must not return a value that interacts with the logic in the trigger, but that's true in any case.)
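For example (a sketch; the table and column names here are illustrative), the inserting statement can supply the looked-up value as a column, and the trigger then reads it from NEW instead of running its own SELECT:

```sql
-- the trigger body can use NEW.max_age directly
INSERT INTO foo (name, max_age)
SELECT 'Bill', (SELECT MAX(age) FROM users WHERE name = 'Bill');
```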
As far as I understood your question, you can simply declare your variable with DECLARE and then use a SELECT ... INTO your variable. Note that in MySQL the DECLARE must come immediately after BEGIN, not before it.
The code would look like this:
BEGIN
DECLARE YourVar varchar(50);
SELECT ID INTO YourVar FROM table
WHERE ...;