I put 10,000 avatar photos on a server, and for each row inserted into the 'studenttable' table I would like the 'photo' column to be set to the URL of my photo folder concatenated with the id of the inserted student.
However, the CONCAT function returns a NULL value with the basic trigger shown below.
First, here is the table in question:
CREATE TABLE `studenttable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) NOT NULL,
`gender` enum('Male','Female','Other') NOT NULL,
`email` varchar(100) DEFAULT NULL,
`birthDate` date DEFAULT NULL,
`photo` varchar(535) DEFAULT NULL,
`mark` double DEFAULT NULL,
`comment` varchar(535) DEFAULT NULL,
PRIMARY KEY (`id`)
)
and here is the basic trigger I created:
DELIMITER $$
create trigger IMAGE_LienApi
before insert on studenttable
for each row
begin
set NEW.photo = CONCAT('https://url-of-folder-with-my-images/',NEW.id,'.png');
end$$
DELIMITER ;
For information, the images are referenced in this way:
number.png
So when I insert a new student with this trigger, the photo column is always set to NULL.
The problem must come from NEW.id, because when I replace this value with a string, it works.
I also tried
NEW.photo = 'https://url-of-folder-with-my-images/' + CONVERT(VARCHAR(5),NEW.id),'.png';
but it did not work either.
Thank you in advance for your help, and if someone could explain why the CONCAT does not work, that would be great!
CONCAT() returns NULL if any of its arguments are NULL.
In a BEFORE INSERT trigger, the NEW.id value is NULL. It hasn't been generated yet.
But in an AFTER INSERT trigger, it's too late to change the NEW.photo column of your row. You can't change columns in an AFTER trigger.
You can't write a BEFORE INSERT trigger that concatenates an auto-increment value with other columns. The best you can do is let the INSERT complete and then do an UPDATE to set your photo column, as in the sketch below.
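For example, a minimal sketch of that INSERT-then-UPDATE approach, using the table and folder URL from the question (both statements must run on the same connection, since LAST_INSERT_ID() is per-connection; the sample values are made up):
INSERT INTO studenttable (name, gender) VALUES ('Alice', 'Female');
-- LAST_INSERT_ID() returns the id generated by the INSERT above on this connection
UPDATE studenttable
SET photo = CONCAT('https://url-of-folder-with-my-images/', id, '.png')
WHERE id = LAST_INSERT_ID();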
The alternative is to forget about using the built-in AUTO_INCREMENT to generate id values; instead, generate them some other way and specify the value in your INSERT statement.
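For instance, a sketch with an explicitly supplied id (the values are made up). Note that with an explicit id, NEW.id is already known in a BEFORE INSERT trigger, so the original trigger would then work too:
-- id 123 is chosen by the application, not by AUTO_INCREMENT
INSERT INTO studenttable (id, name, gender) VALUES (123, 'Bob', 'Male');
-- the BEFORE INSERT trigger now sees NEW.id = 123 and can build the photo URL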
I have this table
CREATE TABLE `pcodes` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`product_id` int(10) unsigned NOT NULL,
`code` varchar(100) NOT NULL,
`used` int(10) unsigned NOT NULL DEFAULT '0',
`created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` datetime DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
)
and an insert command is the following:
INSERT INTO `pcodes` (`product_id`, `code`) VALUES ('1', 'test2');
The table contains random codes for each product_id. I want to get one unused code at random (LIMIT 1 is fine for the job), mark the code as used, and return it to the next layer.
So far I did this:
SELECT * FROM pcodes where product_id=1 and used=0 LIMIT 1
UPDATE pcodes SET used= 1 WHERE (id = 2);
but this does not work well when multiple threads request the first unused code. What is the optimal way to do this? I would like to avoid stored procedures.
A possible solution.
It assumes that no values other than 0 and 1 are stored in the used column.
DELIMITER $$
CREATE PROCEDURE select_one_random_row (OUT rowid BIGINT)
BEGIN
REPEAT
-- claim one unused row by tagging it with this session's connection id
-- (add AND product_id = ... to the WHERE if codes are claimed per product)
UPDATE pcodes SET used = CONNECTION_ID() WHERE used = 0 LIMIT 1;
-- read back the row this connection just claimed
SELECT id INTO rowid FROM pcodes WHERE used = CONNECTION_ID();
UNTIL rowid IS NOT NULL END REPEAT;
-- mark the claimed row as definitively used
UPDATE pcodes SET used = 1 WHERE used = CONNECTION_ID();
END$$
DELIMITER ;
To prevent an indefinite loop (for example, when there are no rows with used = 0), add a counter that increments in the REPEAT cycle and breaks out of it after some reasonable number of attempts.
The code may also be converted to a FUNCTION that returns the selected rowid.
It is possible for the procedure/function to fail (for external reasons), leaving a row claimed by a CONNECTION_ID() whose connection has since died. So you also need a housekeeping procedure, executed by the Event Scheduler, that garbage-collects rows belonging to connections that no longer exist and sets their used value back to zero, returning such rows to the unused pool.
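If stored procedures must be avoided entirely, the same claim-then-mark idea can be run from the application in a single transaction. A sketch, assuming MySQL 8.0+ for SKIP LOCKED (the id 42 in the final UPDATE is a placeholder for whatever id the SELECT returned):
START TRANSACTION;
-- lock one unused row; SKIP LOCKED makes concurrent transactions skip
-- rows already locked by others instead of blocking on them
SELECT id, code FROM pcodes
WHERE product_id = 1 AND used = 0
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- mark the row the SELECT returned (placeholder id shown)
UPDATE pcodes SET used = 1 WHERE id = 42;
COMMIT;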
I am trying to create an event in MySQL.
Event definition:
create event alert_2 ON SCHEDULE EVERY 300 SECOND DO
BEGIN
DECLARE current_time DATETIME;
DECLARE attempted INT;
DECLARE completed INT;
DECLARE calc_value DECIMAL;
set @current_time = CONVERT_TZ(NOW(), @@session.time_zone, '+0:00');
select count(uniqueid) as @attempted,SUM(CASE WHEN seconds > 0 THEN 1 ELSE 0 END) as @completed from callinfo where date >= DATE_SUB(@current_time, INTERVAL 300 SECOND) AND date <= @current_time;
SET @calc_value = (ROUND((@completed/@attempted)*100,2);
IF @calc_value <= 10.00 THEN
INSERT INTO report(value1) value (@calc_value);
END IF;
END;
Problem:
The event is not being created.
Need suggestions:
Will this create any significant load on the callinfo table?
If yes, would you suggest another way to achieve the same thing?
May I create around 50 similar events? Will that create a huge load on the callinfo table?
callinfo schema:
CREATE TABLE `callinfo` (
`uniqueid` varchar(60) NOT NULL DEFAULT '',
`accountid` int(11) DEFAULT '0',
`type` tinyint(1) NOT NULL DEFAULT '0',
`callerid` varchar(120) NOT NULL,
`callednum` varchar(30) NOT NULL DEFAULT '',
`seconds` smallint(6) NOT NULL DEFAULT '0',
`trunk_id` smallint(6) NOT NULL DEFAULT '0',
`trunkip` varchar(15) NOT NULL DEFAULT '',
`callerip` varchar(15) NOT NULL DEFAULT '',
`disposition` varchar(45) NOT NULL DEFAULT '',
`date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`debit` decimal(20,6) NOT NULL DEFAULT '0.000000',
`cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`provider_id` int(11) NOT NULL DEFAULT '0',
`pricelist_id` smallint(6) NOT NULL DEFAULT '0',
`package_id` int(11) NOT NULL DEFAULT '0',
`pattern` varchar(20) NOT NULL,
`notes` varchar(80) NOT NULL,
`invoiceid` int(11) NOT NULL DEFAULT '0',
`rate_cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`reseller_id` int(11) NOT NULL DEFAULT '0',
`reseller_code` varchar(20) NOT NULL,
`reseller_code_destination` varchar(80) DEFAULT NULL,
`reseller_cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`provider_code` varchar(20) NOT NULL,
`provider_code_destination` varchar(80) NOT NULL,
`provider_cost` decimal(20,6) NOT NULL DEFAULT '0.000000',
`provider_call_cost` decimal(20,6) NOT NULL,
`call_direction` enum('outbound','inbound') NOT NULL,
`calltype` enum('STANDARD','DID','FREE','CALLINGCARD') NOT NULL DEFAULT 'STANDARD',
`profile_start_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`answer_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`bridge_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`progress_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`progress_media_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`end_stamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`billmsec` int(11) NOT NULL DEFAULT '0',
`answermsec` int(11) NOT NULL DEFAULT '0',
`waitmsec` int(11) NOT NULL DEFAULT '0',
`progress_mediamsec` int(11) NOT NULL DEFAULT '0',
`flow_billmsec` int(11) NOT NULL DEFAULT '0',
`is_recording` tinyint(1) NOT NULL DEFAULT '1' COMMENT '0 for On,1 for Off'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='callinfo';
ALTER TABLE `callinfo` ADD UNIQUE KEY `uniqueid` (`uniqueid`), ADD KEY `user_id` (`accountid`);
More information about the callinfo table:
Around 20K records per hour are inserted into the callinfo table.
Please suggest any indexing to apply to the schema to get good performance.
Some suggestions:
user-defined variables (variables with names starting with the @ character) are separate and distinct from local variables
there's no need to declare local variables that aren't referenced
use local variables in favor of user-defined variables
a column alias (identifier) that starts with the @ character needs to be escaped (or MySQL will throw a syntax error)
assigning a column alias (identifier) that looks like a user-defined variable just creates a column alias; it is not a reference to a user-defined variable
use SELECT ... INTO to assign scalar values returned from statement into local variables and/or user-defined variables
declaring datatype DECIMAL is equivalent to specifying DECIMAL(10,0)
in INSERT ... VALUES statement the keyword is VALUES not VALUE
best practice is to give local variables names that are different from column names
best practice is to qualify all column references
it's a bit odd to insert only a single column, a calculated value, into a table without some other identifying values (it's not illegal, and it may be exactly what the specification calls for; it just strikes me as a bit odd. I bring it up in light of the code as written, because it appears that the author of the code is not familiar with MySQL.)
using CONVERT_TZ is a bit odd, given that any datetime value referenced in a SQL statement will be interpreted in the current session time zone; we're kind of assuming that the date column is DATETIME datatype, but that's just a guess
to create a MySQL stored program that contains semicolons, the DELIMITER for the session needs to be changed to character(s) that don't appear in the stored program definition
Rather than address each individual problem in the stored program, I'm going to suggest a revision that does what it looks like the original code was intended to do:
DELIMITER $$
CREATE EVENT alert_2 ON SCHEDULE EVERY 300 SECOND DO
BEGIN
DECLARE ld_current_time DATETIME;
DECLARE ln_calc_value DECIMAL(20,2);
-- DECLARE li_attempted INT;
-- DECLARE li_completed INT;
SET ld_current_time = CONVERT_TZ(NOW(), @@session.time_zone, '+0:00');
SELECT ROUND( 100.0
* SUM(CASE WHEN c.seconds > 0 THEN 1 ELSE 0 END)
/ COUNT(c.uniqueid)
,2) AS calc_value
-- , COUNT(c.uniqueid) AS attempted
-- , SUM(CASE WHEN c.seconds > 0 THEN 1 ELSE 0 END) AS completed
FROM callinfo c
WHERE c.date > ld_current_time + INTERVAL -300 SECOND
AND c.date <= ld_current_time
INTO ln_calc_value
-- , li_attempted
-- , li_completed
;
IF ln_calc_value <= 10.00 THEN
INSERT INTO report ( value1 ) VALUES ( ln_calc_value );
END IF;
END$$
DELIMITER ;
For performance, we want to have an index with date as the leading column
... ON `callinfo` (`date`, ...)
Ideally (for the query in this stored program) the index with the leading column of date would be a covering index (including all of the columns that are referenced in the query), e.g.
... ON `callinfo` (`date`,`seconds`,`uniqueid`)
Q: Will this create any significant load on the callinfo table?
A: Since this runs a query against the callinfo table, it will need to obtain shared locks. With an appropriate index available, and assuming that 5 minutes of call info is a smallish set of rows, I wouldn't expect this query to contribute significantly to performance problems or contention issues. If it does cause a problem, I would expect that this query isn't the root cause of the problem; it will only exacerbate a problem that already exists.
Q: If yes, would you suggest another way to achieve the same thing?
A: It's difficult to suggest alternatives for achieving a "thing" when we haven't defined the "thing" we are attempting to achieve.
Q: May I create around 50 similar events? Will that create a huge load on the callinfo table?
A: As long as the query is efficient, is selecting a smallish set of rows via a suitable index, and runs quickly, I wouldn't expect that query to create huge load, no.
FOLLOWUP
For optimal performance, we are definitely going to want an index with leading column of date.
I'd remove the reference to uniqueid in the query; that is, replace COUNT(c.uniqueid) with SUM(1). The results are equivalent (given that uniqueid is guaranteed to be non-NULL), except in the case of no rows, where COUNT() returns 0 and SUM() returns NULL.
Since we're dividing by that expression, in the case of no rows it's the difference between "divide by zero" and "divide by null", and a divide-by-zero operation raises an error under some settings of sql_mode. If I divide by COUNT(), I want to convert a zero to NULL before doing the division:
... / NULLIF(COUNT(...),0)
or the more ansi standards compliant
... / CASE WHEN COUNT(...) = 0 THEN NULL ELSE COUNT(...) END
but we can avoid that rigmarole by using SUM(1) instead; then we don't need any special handling for the divide-by-zero case. What that really buys us is removing the reference to the uniqueid column.
Then a "covering index" for the query will require only two columns.
... ON `callinfo` (`date`,`seconds`)
(i.e. EXPLAIN will show "Using index" in the Extra column, and show "range" for access)
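Concretely, that index could be added like this (a sketch; the index name ix_callinfo_date_seconds is arbitrary):
ALTER TABLE `callinfo` ADD KEY `ix_callinfo_date_seconds` (`date`, `seconds`);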
Also, I can't quite wrap my brain around the need for CONVERT_TZ.
I am trying to write a trigger that assigns the value of the auto-incremented 'ID' primary key field, generated upon insert, to another field 'Sort_Placement', so that the two are the same after the insert.
If you are wondering why I am doing this: 'Sort_Placement' is used as a sort value in a table and can be changed, but by default the record is added to the bottom of the table.
Table Data
`ID` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`Account_Num` mediumint(8) unsigned NOT NULL,
`Product_Num` mediumint(8) unsigned NOT NULL,
`Sort_Placement` mediumint(8) unsigned DEFAULT NULL,
`Order_Qty_C` smallint(6) NOT NULL DEFAULT '0',
`Order_Qty_B` smallint(6) NOT NULL DEFAULT '0',
`Discount` decimal(6,2) NOT NULL DEFAULT '0.00',
PRIMARY KEY (`ID`),
UNIQUE KEY `ID_UNIQUE` (`ID`)
After Insert Trigger
CREATE
TRIGGER `order_guide_insert_trigger`
AFTER INSERT ON `order_guide`
FOR EACH ROW
BEGIN
IF Sort_Placement IS NULL THEN
SET Sort_Placement = NEW.ID;
END IF;
END;
I have tried a bunch of combinations using the "NEW" prefix, with no luck; for example, putting the NEW prefix before each field name.
Trying it out
INSERT INTO `order_guide` (`Account_Num`, `Product_Num`) VALUES ('5966', '3');
Insert Error
ERROR 1054: Unknown column 'Sort_Placement' in 'field list'
This seems like a bit of a hack job, but I was able to get it working using the LAST_INSERT_ID() function built into MySQL.
DELIMITER $$
CREATE TRIGGER `order_guide_insert_trigger`
BEFORE INSERT ON `order_guide`
FOR EACH ROW
BEGIN
IF NEW.Sort_Placement IS NULL THEN
-- LAST_INSERT_ID() is the id generated by the previous insert on this
-- connection, so +1 is only a guess at the id this row will receive;
-- concurrent inserts can break that assumption
SET NEW.Sort_Placement = LAST_INSERT_ID() + 1;
END IF;
END$$
DELIMITER ;
This alternative also seems to work:
DELIMITER $$
CREATE TRIGGER `order_guide_insert_trigger`
BEFORE INSERT ON `order_guide`
FOR EACH ROW
BEGIN
IF NEW.Sort_Placement IS NULL THEN
-- guesses the next AUTO_INCREMENT value from the current maximum id;
-- like the version above, this is not safe under concurrent inserts
SET NEW.Sort_Placement = (SELECT ID FROM order_guide ORDER BY ID DESC LIMIT 1) + 1;
END IF;
END$$
DELIMITER ;
I ran into a similar (yet different) requirement, where a field value in the table needed to be based on the new record's auto-increment ID. I found two solutions that worked for me.
The first option was to use an event that runs every 60 seconds and updates the records where my field was still set to its default of NULL. Not a bad solution if you don't mind a delay of up to 60 seconds (you could run it every second if the field being updated is indexed). Basically the event does this:
DELIMITER $$
CREATE EVENT `evt_fixerupper`
ON SCHEDULE EVERY 1 MINUTE
ENABLE
DO
BEGIN
-- backfill rows whose other_field was left at its default of NULL
UPDATE table_a SET table_a.other_field = CONCAT(table_a.id, '-kittens')
WHERE table_a.other_field IS NULL;
END$$
DELIMITER ;
The other option was to generate my own unique primary IDs rather than relying upon AUTO_INCREMENT. In this case I used a function (in my application) modeled after the Perl module https://metacpan.org/pod/Data::Uniqid. The generated IDs are long, but they work well, and I know the value before I insert, so I can use it to generate values for additional fields.
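To illustrate the same idea in MySQL itself rather than in application code, here is a sketch using UUID() in place of Data::Uniqid (it assumes the primary key has been changed to a string column such as CHAR(36); table_a and other_field are the hypothetical names from the event example above):
-- generate the id first, so dependent fields can be derived from it
SET @new_id = UUID();
INSERT INTO table_a (id, other_field) VALUES (@new_id, CONCAT(@new_id, '-kittens'));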
My Table looks like this:
CREATE TABLE IF NOT EXISTS `entry_title` (
`entry_title_id` int(11) NOT NULL AUTO_INCREMENT,
`entry_id` int(11) NOT NULL,
`accepted` tinyint(1) DEFAULT NULL,
`entry_title_lang` char(2) CHARACTER SET ascii NOT NULL,
`entry_title_value` varchar(255) NOT NULL,
`entry_title_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`entry_title_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
A row in the table represents a title for a content on a website.
The idea is that anyone can submit a new (hopefully improved) title.
Then the community accepts or discards the change.
If the accepted flag is NULL, the change is pending review by the community; 0 is interpreted as discarded and 1 as accepted.
The website displays the title with the most recent timestamp where the accepted flag equals 1.
While a change is pending review, no other change can be submitted until the pending one has been either accepted or declined.
Therefore I want a constraint in my database that ensures there is only one row per entry_id where the value of accepted is NULL.
I thought about using a separate field pending_review, which is either 1 or NULL, and putting a UNIQUE constraint on it in combination with entry_id.
The problem with that is that I would somehow need to unset that field when the change gets accepted or declined, and consistency at that level would call for another constraint, which kind of leads back to the same problem as the simpler approach above.
[updated]
In the standard-driven ideal world:
CHECK(NOT EXISTS(SELECT 1 FROM entry_title WHERE accepted IS NULL GROUP BY entry_id HAVING COUNT(*) > 1))
Alas, we live in an imperfect world. See this question.
So use a trigger with the same logic instead.
[update - trigger]
Something like
DELIMITER $$
CREATE TRIGGER triggerName BEFORE INSERT ON entry_title FOR EACH ROW
BEGIN
-- reject inserting a pending (accepted IS NULL) row if another pending
-- row already exists for the same entry_id
IF NEW.accepted IS NULL AND EXISTS(SELECT 1 FROM entry_title
WHERE accepted IS NULL AND entry_id = NEW.entry_id) THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'A change is already pending review';
END IF;
END$$
DELIMITER ;
and also do the same thing for BEFORE UPDATE (a sketch follows below).
Disclaimer: I did not check this.
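For completeness, a BEFORE UPDATE counterpart might look like the following. It is a sketch under the same caveat (unchecked; the trigger name is arbitrary), blocking an update that flips a row back to pending while a different row for the same entry_id is already pending:
DELIMITER $$
CREATE TRIGGER entry_title_bu BEFORE UPDATE ON entry_title FOR EACH ROW
BEGIN
-- reject turning this row into a pending change if a different row is already pending
IF NEW.accepted IS NULL AND EXISTS(SELECT 1 FROM entry_title
WHERE accepted IS NULL
AND entry_id = NEW.entry_id
AND entry_title_id <> OLD.entry_title_id) THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'A change is already pending review';
END IF;
END$$
DELIMITER ;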