Concurrency when using SELECT as value during INSERT - MySQL

I have a BEFORE INSERT trigger on a table, sameTable:
BEGIN
set new.seq = (select (max(seq) + 1) from sameTable);
END;
Question:
Is it immune to concurrency issues?
Let's say, for simplification, that I need seq to be unique without enforcing a unique index.
Using: MariaDB, InnoDB engine.
EDIT
To make clear why I can't use auto_increment, the trigger actually looks like this:
BEGIN
case
when new.seq is null then
set new.seq = (
select
(ifnull(max(seq),0) + 1)
from sameTable
where
aField = new.aField
);
else
set new.seq = new.seq;
end case;
END;
It should now be clear that seq does not behave like auto_increment, since there's aField. Sure, aField-seq could be a composite unique key, but, again for simplification, let's say I need aField-seq to be unique without enforcing a composite unique index.
I do know that using SELECT ... FOR UPDATE within a transaction,
START TRANSACTION
set val = (
select
(ifnull(max(seq),0) + 1)
from sameTable
where
aField = new.aField
for update
);
case
when seq is null then
set seq = val;
else
set seq = seq;
end case;
COMMIT;
will lock sameTable. I don't know whether it works within a trigger, though; that's another question.
My question is: if I do a single INSERT query, would it lock the table or not? And preferably, why?
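For reference, here is a minimal sketch of what a locking read inside the trigger itself might look like (the trigger name is made up, and I have not verified that a locking read on the same table is allowed inside its own trigger; whether it fully serializes concurrent inserts also depends on an index over (aField, seq) and InnoDB's gap locking):
DELIMITER $$
CREATE TRIGGER bi_sameTable BEFORE INSERT ON sameTable
FOR EACH ROW
BEGIN
-- sketch only: take a locking read on the current per-aField maximum
IF NEW.seq IS NULL THEN
SET NEW.seq = (
SELECT IFNULL(MAX(seq), 0) + 1
FROM sameTable
WHERE aField = NEW.aField
FOR UPDATE
);
END IF;
END$$
DELIMITER ;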

Related

MySQL: get a random unique integer ID

I tried to write a SQL-function that generates an unused unique ID in a range between 1000000 and 4294967295. I need numeric values, so UUID() alike is not a solution. It doesn't sound that difficult, but for some reason, the code below does not work when called within an INSERT-statement on a table as value for the primary key (not auto_increment, of course). The statement is like INSERT INTO table (id, content) VALUES ((SELECT getRandomID(0,0)), 'blabla bla');
(Since default values are not allowed in such functions, I simply pass 0 for each argument and set it to the desired value inside the function.)
Called once and separately from the INSERT or Python code, everything is fine. Called several times, something weird happens: not only the whole process but also the server might hang within REPEAT. The process is then not even possible to kill/restart; I have to reboot the machine.
It also seems to have only some random values ready for me, since the same values appear again and again after some calls, although I actually thought that the internal rand() would be a sufficient start/seed for the outer rand().
Called from Python, the loop starts to hang after some rounds, although the very first one in my tests always produces a useful, new ID and therefore should quit after the first round. Why? Well, the table is empty... so SELECT COUNT(*) ... returns 0, which actually is the signal for leaving the loop... but it doesn't.
Any ideas?
I'm running MariaDB 10.something on SLES 12.2. Here is the exported source code:
DELIMITER $$
CREATE DEFINER=`root`@`localhost` FUNCTION `getRandomID`(`rangeStart` BIGINT UNSIGNED, `rangeEnd` BIGINT UNSIGNED) RETURNS bigint(20) unsigned
READS SQL DATA
BEGIN
DECLARE rnd BIGINT unsigned;
DECLARE i BIGINT unsigned;
IF rangeStart is null OR rangeStart < 1 THEN
SET rangeStart = 1000000;
END IF;
IF rangeEnd is null OR rangeEnd < 1 THEN
SET rangeEnd = 4294967295;
END IF;
SET i = 0;
r: REPEAT
SET rnd = FLOOR(rangeStart + RAND(RAND(FLOOR(1 + rand() * 1000000000))*10) * (rangeEnd - rangeStart));
SELECT COUNT(*) INTO i FROM `table` WHERE `id` = rnd;
UNTIL i = 0 END REPEAT r;
RETURN rnd;
END$$
DELIMITER ;
A slight improvement:
SELECT COUNT(*) INTO i FROM `table` WHERE `id` = rnd;
UNTIL i = 0 END REPEAT r;
-->
UNTIL NOT EXISTS( SELECT 1 FROM `table` WHERE id = rnd ) END REPEAT r;
Don't pass any argument to RAND -- that is for establishing a repeatable sequence of random numbers.
mysql> SELECT RAND(123), RAND(123), RAND(), RAND()\G
*************************** 1. row ***************************
RAND(123): 0.9277428611440052
RAND(123): 0.9277428611440052
RAND(): 0.5645420109522921
RAND(): 0.12561983719991504
1 row in set (0.00 sec)
So simplify to
SET rnd = FLOOR(rangeStart + RAND() * (rangeEnd - rangeStart));
If you want to include rangeEnd in the possible outputs, add 1:
SET rnd = FLOOR(rangeStart + RAND() * (rangeEnd - rangeStart + 1));
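Putting the pieces together, the loop from the question would become something like this (only a sketch; the rest of the function body stays as posted):
-- unseeded RAND(), plus the +1 so that rangeEnd itself can be returned
r: REPEAT
SET rnd = FLOOR(rangeStart + RAND() * (rangeEnd - rangeStart + 1));
SELECT COUNT(*) INTO i FROM `table` WHERE `id` = rnd;
UNTIL i = 0 END REPEAT r;
RETURN rnd;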

How to check the difference in each column from OLD.* to NEW.* in a MySQL Trigger?

Since SQL does not have a FOR-EACH statement, how could we check whether there is a difference in each value from the OLD object to the NEW object in an AFTER UPDATE TRIGGER, without knowing the table columns (and table names)?
Example today:
CREATE TRIGGER `audit_events_ugly`
AFTER UPDATE ON `accounts`
FOR EACH ROW
BEGIN
DECLARE changes VARCHAR(8000);
IF OLD.user_name <> NEW.user_name THEN
SET changes = 'user_name from % to %';
END IF;
IF OLD.user_type <> NEW.user_type THEN
SET changes = CONCAT(changes, ', user_type from % to %');
END IF;
IF OLD.user_email <> NEW.user_email THEN
SET changes = CONCAT(changes, ', user_email from % to %');
END IF;
CALL reg_event(how_canI_get_tableName?, @user_id, changes);
-- and that can go on and on... differently for every table.
END;
Example as I wish it could be:
CREATE TRIGGER `audit_events_nice`
AFTER UPDATE ON `accounts`
FOR EACH ROW
BEGIN
DECLARE changes VARCHAR(8000);
DECLARE N INT DEFAULT 1;
FOREACH OLD, NEW as OldValue, NewValue
BEGIN
IF OldValue <> NewValue THEN
SET changes = CONCAT(changes, ', column N: % to %');
SET N = N + 1;
END IF;
CALL reg_event(how_canI_get_tableName?, @user_id, changes);
-- now I can paste this code in every table that is audited..
END;
Any Ideas? WHILE, FOREACH, ARRAYS...
I think you cannot do that directly in a for-loop at the trigger level.
However, you could use a script to generate the trigger code (see the sketch below). You would need to re-generate it every time you add or remove a field in the table (usually not frequently).
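For example, a generator query could read information_schema and emit one IF block per column. This is only a sketch (the 'accounts' table name comes from the example above, and the generated fragments still have to be assembled into a CREATE TRIGGER statement by the script):
-- build the per-column comparison fragments of the trigger body;
-- the NULL-safe comparison (<=>) also catches NULL-to-value changes
SELECT CONCAT(
'IF NOT (OLD.`', COLUMN_NAME, '` <=> NEW.`', COLUMN_NAME,
'`) THEN SET changes = CONCAT(changes, '', ', COLUMN_NAME,
' changed''); END IF;'
) AS trigger_fragment
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_NAME = 'accounts'
ORDER BY ORDINAL_POSITION;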

How to optimise a MySQL function to work for concurrent users

I am new to MySQL. Please can you advise on how I can modify the function below to ensure it does not throw locking errors when called by multiple users at the same time.
CREATE FUNCTION `get_val`(`P_TABLE` VARCHAR(50)) RETURNS int(11)
DETERMINISTIC
BEGIN
DECLARE pk_value INT DEFAULT 0;
DECLARE pk_found INT DEFAULT 0;
SELECT 1 INTO pk_found FROM pk_keys WHERE TABLE_NAME = P_TABLE;
IF pk_found = 1
THEN
UPDATE pk_keys SET TABLE_VALUE = (TABLE_VALUE + 1 ) WHERE TABLE_NAME = P_TABLE;
ELSE
INSERT INTO pk_keys VALUES ( P_TABLE, 1 );
END IF;
SELECT TABLE_VALUE INTO pk_value FROM pk_keys WHERE TABLE_NAME = P_TABLE;
RETURN pk_value;
END
thanks
CREATE FUNCTION `get_val`(`P_TABLE` VARCHAR(50)) RETURNS int(11)
DETERMINISTIC
MODIFIES SQL DATA
BEGIN
DECLARE pk_value INT DEFAULT 0;
IF EXISTS (SELECT 1 FROM pk_keys WHERE TABLE_NAME = P_TABLE)
THEN
SELECT TABLE_VALUE + 1 INTO pk_value FROM pk_keys WHERE TABLE_NAME = P_TABLE FOR UPDATE;
UPDATE pk_keys SET TABLE_VALUE = pk_value WHERE TABLE_NAME = P_TABLE;
ELSE
SET pk_value = 1;
INSERT INTO pk_keys VALUES ( P_TABLE, pk_value );
END IF;
RETURN pk_value;
END
Have a look at SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE locking reads:
Let us look at another example: We have an integer counter field in a
table child_codes that we use to assign a unique identifier to each
child added to table child. It is not a good idea to use either
consistent read or a shared mode read to read the present value of the
counter because two users of the database may then see the same value
for the counter, and a duplicate-key error occurs if two users attempt
to add children with the same identifier to the table.
Here, LOCK IN SHARE MODE is not a good solution because if two users
read the counter at the same time, at least one of them ends up in
deadlock when it attempts to update the counter.
To implement reading and incrementing the counter, first perform a
locking read of the counter using FOR UPDATE, and then increment the
counter. For example:
SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;
A SELECT ... FOR UPDATE reads the latest available data, setting
exclusive locks on each row it reads. Thus, it sets the same locks a
searched SQL UPDATE would set on the rows.
Also, I replaced your IF condition: EXISTS stops as soon as a row is found.
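As an aside, if pk_keys has a primary or unique key on TABLE_NAME, the read-and-increment can also be collapsed into a single atomic statement with the LAST_INSERT_ID(expr) trick, avoiding the explicit locking read altogether (a sketch, not tested against your schema):
-- atomic "insert or increment" for a named counter row;
-- LAST_INSERT_ID(expr) stores expr so it can be read back in the same session
INSERT INTO pk_keys (TABLE_NAME, TABLE_VALUE)
VALUES (P_TABLE, LAST_INSERT_ID(1))
ON DUPLICATE KEY UPDATE TABLE_VALUE = LAST_INSERT_ID(TABLE_VALUE + 1);
SELECT LAST_INSERT_ID();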

How can I simulate an array variable in MySQL?

It appears that MySQL doesn't have array variables. What should I use instead?
There seem to be two alternatives suggested: A set-type scalar and temporary tables. The question I linked to suggests the former. But is it good practice to use these instead of array variables? Alternatively, if I go with sets, what would be the set-based idiom equivalent to foreach?
Well, I've been using temporary tables instead of array variables. Not the greatest solution, but it works.
Note that you don't need to formally define their fields, just create them using a SELECT:
DROP TEMPORARY TABLE IF EXISTS my_temp_table;
CREATE TEMPORARY TABLE my_temp_table
SELECT first_name FROM people WHERE last_name = 'Smith';
(See also Create temporary table from select statement without using Create Table.)
You can achieve this in MySQL using a WHILE loop:
SET @myArrayOfValue = '2,5,2,23,6,';
WHILE (LOCATE(',', @myArrayOfValue) > 0)
DO
SET @value = ELT(1, @myArrayOfValue);
SET @myArrayOfValue= SUBSTRING(@myArrayOfValue, LOCATE(',',@myArrayOfValue) + 1);
INSERT INTO `EXEMPLE` VALUES(@value, 'hello');
END WHILE;
EDIT:
Alternatively you can do it using UNION ALL:
INSERT INTO `EXEMPLE`
(
`value`, `message`
)
(
SELECT 2 AS `value`, 'hello' AS `message`
UNION ALL
SELECT 5 AS `value`, 'hello' AS `message`
UNION ALL
SELECT 2 AS `value`, 'hello' AS `message`
UNION ALL
...
);
Try using MySQL's FIND_IN_SET() function,
e.g.
SET @c = 'xxx,yyy,zzz';
SELECT * from countries
WHERE FIND_IN_SET(countryname,@c);
Note: you don't have to SET a variable in the stored procedure if you are passing a parameter with CSV values.
Nowadays using a JSON array would be an obvious answer.
Since this is an old but still relevant question I produced a short example.
JSON functions are available since MySQL 5.7.x / MariaDB 10.2.3.
I prefer this solution over ELT() because it's really more like an array and this 'array' can be reused in the code.
But be careful: JSON is certainly much slower than using a temporary table. It's just more handy, IMO.
Here is how to use a JSON array:
SET @myjson = '["gmail.com","mail.ru","arcor.de","gmx.de","t-online.de",
"web.de","googlemail.com","freenet.de","yahoo.de","gmx.net",
"me.com","bluewin.ch","hotmail.com","hotmail.de","live.de",
"icloud.com","hotmail.co.uk","yahoo.co.jp","yandex.ru"]';
SELECT JSON_LENGTH(@myjson);
-- result: 19
SELECT JSON_VALUE(@myjson, '$[0]');
-- result: gmail.com
And here a little example to show how it works in a function/procedure:
DELIMITER //
CREATE OR REPLACE FUNCTION example() RETURNS varchar(1000) DETERMINISTIC
BEGIN
DECLARE _result varchar(1000) DEFAULT '';
DECLARE _counter INT DEFAULT 0;
DECLARE _value varchar(50);
SET @myjson = '["gmail.com","mail.ru","arcor.de","gmx.de","t-online.de",
"web.de","googlemail.com","freenet.de","yahoo.de","gmx.net",
"me.com","bluewin.ch","hotmail.com","hotmail.de","live.de",
"icloud.com","hotmail.co.uk","yahoo.co.jp","yandex.ru"]';
WHILE _counter < JSON_LENGTH(@myjson) DO
-- do whatever, e.g. add-up strings...
SET _result = CONCAT(_result, _counter, '-', JSON_VALUE(@myjson, CONCAT('$[',_counter,']')), '#');
SET _counter = _counter + 1;
END WHILE;
RETURN _result;
END //
DELIMITER ;
SELECT example();
Don't know about arrays, but there is a way to store comma-separated lists in a normal VARCHAR column.
And when you need to find something in that list, you can use the FIND_IN_SET() function.
I know that this is a bit of a late response, but I recently had to solve a similar problem and thought that this may be useful to others.
Background
Consider a table called 'mytable', with an id column and a systemid column.
The problem was to keep only the latest 3 records and delete any older records whose systemid=1 (there could be many other records in the table with other systemid values).
It would be good if you could do this simply using the statement
DELETE FROM mytable WHERE id IN (SELECT id FROM `mytable` WHERE systemid=1 ORDER BY id DESC LIMIT 3)
However, this is not yet supported in MySQL, and if you try it you will get an error like
...doesn't yet support 'LIMIT & IN/ALL/SOME subquery'
So a workaround is needed whereby an array of values is passed to the IN selector using a variable. However, as variables need to be single values, I would need to simulate an array. The trick is to create the array as a comma-separated list of values (a string) and assign it to the variable as follows:
SET @myvar = (SELECT GROUP_CONCAT(id SEPARATOR ',') AS myval FROM (SELECT * FROM `mytable` WHERE systemid=1 ORDER BY id DESC LIMIT 3 ) A GROUP BY A.systemid);
The result stored in @myvar is
5,6,7
Next, the FIND_IN_SET selector is used to select from the simulated array
SELECT * FROM mytable WHERE FIND_IN_SET(id,@myvar);
The combined final result is as follows:
SET @myvar = (SELECT GROUP_CONCAT(id SEPARATOR ',') AS myval FROM (SELECT * FROM `mytable` WHERE systemid=1 ORDER BY id DESC LIMIT 3 ) A GROUP BY A.systemid);
DELETE FROM mytable WHERE FIND_IN_SET(id,@myvar);
I am aware that this is a very specific case. However it can be modified to suit just about any other case where a variable needs to store an array of values.
I hope that this helps.
DELIMITER $$
CREATE DEFINER=`mysqldb`@`%` PROCEDURE `abc`()
BEGIN
BEGIN
set @value :='11,2,3,1,';
WHILE (LOCATE(',', @value) > 0) DO
SET @V_DESIGNATION = SUBSTRING(@value,1, LOCATE(',',@value)-1);
SET @value = SUBSTRING(@value, LOCATE(',',@value) + 1);
select @V_DESIGNATION;
END WHILE;
END;
END$$
DELIMITER ;
Maybe create a temporary memory table with columns (key, value) if you want associative arrays. Having a memory table is the closest thing to having arrays in MySQL.
Here’s how I did it.
First, I created a function that checks whether a Long/Integer/whatever value is in a list of values separated by commas:
CREATE DEFINER = 'root'@'localhost' FUNCTION `is_id_in_ids`(
`strIDs` VARCHAR(255),
`_id` BIGINT
)
RETURNS BIT(1)
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
COMMENT ''
BEGIN
DECLARE strLen INT DEFAULT 0;
DECLARE subStrLen INT DEFAULT 0;
DECLARE subs VARCHAR(255);
IF strIDs IS NULL THEN
SET strIDs = '';
END IF;
do_this:
LOOP
SET strLen = LENGTH(strIDs);
SET subs = SUBSTRING_INDEX(strIDs, ',', 1);
if ( CAST(subs AS UNSIGNED) = _id ) THEN
-- found
return(1);
END IF;
SET subStrLen = LENGTH(SUBSTRING_INDEX(strIDs, ',', 1));
SET strIDs = MID(strIDs, subStrLen+2, strLen);
IF strIDs IS NULL OR TRIM(strIDs) = '' THEN
LEAVE do_this;
END IF;
END LOOP do_this;
-- not found
return(0);
END;
So now you can search for an ID in a comma-separated list of IDs, like this:
select `is_id_in_ids`('1001,1002,1003',1002);
And you can use this function inside a WHERE clause, like this:
SELECT * FROM table1 WHERE `is_id_in_ids`('1001,1002,1003',table1_id);
This was the only way I found to pass an "array" parameter to a PROCEDURE.
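For comparison, the built-in FIND_IN_SET() mentioned in other answers performs the same membership test without a user-defined function:
SELECT * FROM table1 WHERE FIND_IN_SET(table1_id, '1001,1002,1003');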
I'm surprised none of the answers mention ELT/FIELD.
ELT/FIELD work very much like an array, especially if you have static data.
FIND_IN_SET also works similarly but doesn't have a built-in complementary
function, though it's easy enough to write one.
mysql> select elt(2,'AA','BB','CC');
+-----------------------+
| elt(2,'AA','BB','CC') |
+-----------------------+
| BB |
+-----------------------+
1 row in set (0.00 sec)
mysql> select field('BB','AA','BB','CC');
+----------------------------+
| field('BB','AA','BB','CC') |
+----------------------------+
| 2 |
+----------------------------+
1 row in set (0.00 sec)
mysql> select find_in_set('BB','AA,BB,CC');
+------------------------------+
| find_in_set('BB','AA,BB,CC') |
+------------------------------+
| 2 |
+------------------------------+
1 row in set (0.00 sec)
mysql> SELECT SUBSTRING_INDEX(SUBSTRING_INDEX('AA,BB,CC',',',2),',',-1);
+-----------------------------------------------------------+
| SUBSTRING_INDEX(SUBSTRING_INDEX('AA,BB,CC',',',2),',',-1) |
+-----------------------------------------------------------+
| BB |
+-----------------------------------------------------------+
1 row in set (0.01 sec)
Is an array variable really necessary?
I ask because I originally landed here wanting to add an array as a MySQL table variable. I was relatively new to database design and trying to think of how I'd do it in a typical programming language fashion.
But databases are different. I thought I wanted an array as a variable, but it turns out that's just not a common MySQL database practice.
Standard Practice
The alternative solution to arrays is to add an additional table, and then reference your original table with a foreign key.
As an example, let's imagine an application that keeps track of all the items every person in a household wants to buy at the store.
The commands for creating the table I originally envisioned would have looked something like this:
#doesn't work
CREATE TABLE Person(
name VARCHAR(50) PRIMARY KEY
buy_list ARRAY
);
I think I envisioned buy_list to be a comma-separated string of items or something like that.
But MySQL doesn't have an array type field, so I really needed something like this:
CREATE TABLE Person(
name VARCHAR(50) PRIMARY KEY
);
CREATE TABLE BuyList(
person VARCHAR(50),
item VARCHAR(50),
PRIMARY KEY (person, item),
CONSTRAINT fk_person FOREIGN KEY (person) REFERENCES Person(name)
);
Here we define a constraint named fk_person. It says that the 'person' field in BuyList is a foreign key. In other words, it's a primary key in another table, specifically the 'name' field in the Person table, which is what REFERENCES denotes.
We also defined the combination of person and item to be the primary key, but technically that's not necessary.
Finally, if you want to get all the items on a person's list, you can run this query:
SELECT item FROM BuyList WHERE person='John';
This gives you all the items on John's list. No arrays necessary!
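A quick sketch of how the two tables would be used (the item values are made up for illustration):
-- populate the normalized tables, then read back one person's "array"
INSERT INTO Person (name) VALUES ('John');
INSERT INTO BuyList (person, item) VALUES ('John', 'milk'), ('John', 'eggs');
SELECT item FROM BuyList WHERE person = 'John';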
This is my solution to use a variable containing a list of elements.
You can use it in simple queries (no need to use store procedures or create tables).
I found the trick of using the JSON_TABLE function elsewhere on the site (it works in MySQL 8; I don't know whether it works in other versions).
set @x = '1,2,3,4' ;
select c.NAME
from colors c
where
c.COD in (
select *
from json_table(
concat('[',@x,']'),
'$[*]' columns (id int path '$') ) t ) ;
Also, you may need to handle the case where one or more variables are set to an empty string.
In that case I added another trick (the query does not return an error even if x, y, or both x and y are empty strings):
set @x = '' ;
set @y = 'yellow' ;
select c.NAME
from colors c
where
if(@y = '', 1 = 1, c.NAME = @y)
and if(@x = '', 1, c.COD) in (
select *
from json_table(
concat('[',if(@x = '', 1, @x),']'),
'$[*]' columns (id int path '$') ) t) ;
This works fine for a list of values:
SET @myArrayOfValue = '2,5,2,23,6,';
WHILE (LOCATE(',', @myArrayOfValue) > 0)
DO
SET @value = ELT(1, @myArrayOfValue);
SET @STR = SUBSTRING(@myArrayOfValue, 1, LOCATE(',',@myArrayOfValue)-1);
SET @myArrayOfValue = SUBSTRING(@myArrayOfValue, LOCATE(',', @myArrayOfValue) + 1);
INSERT INTO `Demo` VALUES(@STR, 'hello');
END WHILE;
Both versions using sets didn't work for me (tested with MySQL 5.5). The function ELT() returns the whole set. Considering that the WHILE statement is only available in a PROCEDURE context, I added it to my solution:
DROP PROCEDURE IF EXISTS __main__;
DELIMITER $
CREATE PROCEDURE __main__()
BEGIN
SET @myArrayOfValue = '2,5,2,23,6,';
WHILE (LOCATE(',', @myArrayOfValue) > 0)
DO
SET @value = LEFT(@myArrayOfValue, LOCATE(',',@myArrayOfValue) - 1);
SET @myArrayOfValue = SUBSTRING(@myArrayOfValue, LOCATE(',',@myArrayOfValue) + 1);
END WHILE;
END;
$
DELIMITER ;
CALL __main__;
To be honest, I don't think this is good practice. Even if it's really necessary, it is barely readable and quite slow.
Isn't the point of arrays to be efficient? If you're just iterating through values, I think a cursor on a temporary (or permanent) table makes more sense than seeking commas, no? It's also cleaner. Look up "mysql DECLARE CURSOR".
For random access, use a temporary table with a numerically indexed primary key. Unfortunately, the fastest access you'll get is a hash table, not true random access.
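A minimal cursor sketch over the my_temp_table example from the temporary-table answer above (the procedure and variable names are made up):
DELIMITER $$
CREATE PROCEDURE iterate_values()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE v_name VARCHAR(255);
DECLARE cur CURSOR FOR SELECT first_name FROM my_temp_table;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur;
read_loop: LOOP
FETCH cur INTO v_name;
IF done THEN
LEAVE read_loop;
END IF;
-- do whatever is needed with v_name here
END LOOP;
CLOSE cur;
END$$
DELIMITER ;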
Another way to approach the same problem.
Hope it's helpful:
DELIMITER $$
CREATE PROCEDURE ARR(v_value VARCHAR(100))
BEGIN
DECLARE v_tam VARCHAR(100);
DECLARE v_pos VARCHAR(100);
CREATE TEMPORARY TABLE IF NOT EXISTS split (split VARCHAR(50));
SET v_tam = (SELECT (LENGTH(v_value) - LENGTH(REPLACE(v_value,',',''))));
SET v_pos = 1;
WHILE (v_tam >= v_pos)
DO
INSERT INTO split
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(v_value,',',v_pos),',', -1);
SET v_pos = v_pos + 1;
END WHILE;
SELECT * FROM split;
DROP TEMPORARY TABLE split;
END$$
CALL ARR('1006212,1006404,1003404,1006505,444,');
If we have one table like that
mysql> select * from user_mail;
+--------------+------+
| email        | user |
+--------------+------+
| email1@gmail |    1 |
| email2@gmail |    2 |
+--------------+------+
and the array table:
mysql> select * from user_mail_array;
+--------------+------+-------------+
| email        | user | preferences |
+--------------+------+-------------+
| email1@gmail |    1 |           1 |
| email1@gmail |    1 |           2 |
| email1@gmail |    1 |           3 |
| email1@gmail |    1 |           4 |
| email2@gmail |    2 |           5 |
| email2@gmail |    2 |           6 |
+--------------+------+-------------+
We can select the rows of the second table as one array with the GROUP_CONCAT function:
mysql> SELECT t1.*, GROUP_CONCAT(t2.preferences) AS preferences
FROM user_mail t1,user_mail_array t2
where t1.email=t2.email and t1.user=t2.user
GROUP BY t1.email,t1.user;
+--------------+------+-------------+
| email        | user | preferences |
+--------------+------+-------------+
| email1@gmail |    1 | 1,3,2,4     |
| email2@gmail |    2 | 5,6         |
+--------------+------+-------------+
In MySQL versions after 5.7.x, you can use the JSON type to store an array, and you can get a value out of the array by key.
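A tiny sketch of that (the table and column names are made up):
-- store a JSON array in a column and read one element back
CREATE TABLE t (id INT PRIMARY KEY, tags JSON);
INSERT INTO t VALUES (1, '["red","green","blue"]');
SELECT JSON_UNQUOTE(JSON_EXTRACT(tags, '$[1]')) FROM t WHERE id = 1;
-- result: green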
Inspired by the function ELT(index number, string1, string2, string3, …), I think the following works as an array example:
set @i := 1;
while @i <= 3
do
insert into table(val) values (ELT(@i ,'val1','val2','val3'...));
set @i = @i + 1;
end while;
Hope it helps.
Here is an example for MySQL for looping through a comma delimited string.
DECLARE v_delimited_string_access_index INT;
DECLARE v_delimited_string_access_value VARCHAR(255);
DECLARE v_can_still_find_values_in_delimited_string BOOLEAN;
SET v_can_still_find_values_in_delimited_string = true;
SET v_delimited_string_access_index = 0;
WHILE (v_can_still_find_values_in_delimited_string) DO
SET v_delimited_string_access_value = get_from_delimiter_split_string(in_array, ',', v_delimited_string_access_index); -- get value from string
SET v_delimited_string_access_index = v_delimited_string_access_index + 1;
IF (v_delimited_string_access_value = '') THEN
SET v_can_still_find_values_in_delimited_string = false; -- no value at this index, stop looping
ELSE
-- DO WHAT YOU WANT WITH v_delimited_string_access_value HERE
END IF;
END WHILE;
This uses the get_from_delimiter_split_string function defined here: https://stackoverflow.com/a/59666211/3068233
I think I can improve on this answer. Try this:
The parameter 'Pranks' is a CSV, i.e. '1,2,3,4,...' etc.
CREATE PROCEDURE AddRanks(
IN Pranks TEXT
)
BEGIN
DECLARE VCounter INTEGER;
DECLARE VStringToAdd VARCHAR(50);
SET VCounter = 0;
START TRANSACTION;
REPEAT
SET VStringToAdd = (SELECT TRIM(SUBSTRING_INDEX(Pranks, ',', 1)));
SET Pranks = (SELECT RIGHT(Pranks, TRIM(LENGTH(Pranks) - LENGTH(SUBSTRING_INDEX(Pranks, ',', 1))-1)));
INSERT INTO tbl_rank_names(rank)
VALUES(VStringToAdd);
SET VCounter = VCounter + 1;
UNTIL (Pranks = '')
END REPEAT;
SELECT VCounter AS 'Records added';
COMMIT;
END;
This method makes the searched string of CSV values progressively shorter with each iteration of the loop, which I believe would be better for optimization.
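A call would look something like this (the rank names are just placeholders):
CALL AddRanks('Private,Corporal,Sergeant,Major');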
I would try something like this for multiple collections. I'm a MySQL beginner. Sorry about the function names, couldn't decide on what names would be best.
delimiter //
drop procedure init_
//
create procedure init_()
begin
CREATE TEMPORARY TABLE if not exists
val_store(
realm varchar(30)
, id varchar(30)
, val varchar(255)
, primary key ( realm , id )
);
end;
//
drop function if exists get_
//
create function get_( p_realm varchar(30) , p_id varchar(30) )
returns varchar(255)
reads sql data
begin
declare ret_val varchar(255);
declare continue handler for 1146 set ret_val = null;
select val into ret_val from val_store where realm = p_realm and id = p_id;
return ret_val;
end;
//
drop procedure if exists set_
//
create procedure set_( p_realm varchar(30) , p_id varchar(30) , p_val varchar(255) )
begin
call init_();
insert into val_store (realm,id,val) values (p_realm , p_id , p_val) on duplicate key update val = p_val;
end;
//
drop procedure if exists remove_
//
create procedure remove_( p_realm varchar(30) , p_id varchar(30) )
begin
call init_();
delete from val_store where realm = p_realm and id = p_id;
end;
//
drop procedure if exists erase_
//
create procedure erase_( p_realm varchar(30) )
begin
call init_();
delete from val_store where realm = p_realm;
end;
//
call set_('my_array_table_name','my_key','my_value');
select get_('my_array_table_name','my_key');
Rather than saving data as an array or in one row only, you should make a different row for every value received. This will make it much simpler to understand than putting it all together.
Have you tried using PHP's serialize()?
That allows you to store the contents of a variable's array in a string PHP understands and is safe for the database (assuming you've escaped it first).
$array = array(
1 => 'some data',
2 => 'some more'
);
//Assuming you're already connected to the database
$sql = sprintf("INSERT INTO `yourTable` (`rowID`, `rowContent`) VALUES (NULL, '%s')"
, mysql_real_escape_string(serialize($array), $dbConnection));
mysql_query($sql, $dbConnection) or die(mysql_error());
You can also do the exact same without a numbered array
$array2 = array(
'something' => 'something else'
);
or
$array3 = array(
'somethingNew'
);

Finding min and max value of the table in a constant time

I have a table which contains relatively large data, so the statements below take too long:
SELECT MIN(column) FROM table WHERE ...
SELECT MAX(column) FROM table WHERE ...
I tried indexing the column, but the performance still does not meet my needs.
I also thought of caching the min and max values in another table by using a trigger or an event.
But my MySQL version is 5.0.51a, which requires the SUPER privilege for triggers and does not support events.
It is IMPOSSIBLE for me to have SUPER privilege or to upgrade MySQL.
(If possible, then no need to ask!)
How to solve this problem just inside MySQL?
That is, without the help of the OS.
If your column is indexed, you should find min(column) almost instantly, because that is the first value MySQL will find.
Same goes for max(column) on an indexed column.
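Note that for the queries in the question the index has to match the WHERE clause as well; a sketch, where filter_col stands in for whatever the WHERE condition uses:
-- with a composite index, MIN()/MAX() over the filtered rows can be read
-- straight from the index; EXPLAIN then typically shows "Select tables optimized away"
ALTER TABLE `table` ADD INDEX idx_filter_column (`filter_col`, `column`);
EXPLAIN SELECT MIN(`column`) FROM `table` WHERE `filter_col` = 42;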
If you cannot add an index for some reason, the following triggers will cache the MIN and MAX values in a separate table.
Note that TRUE = 1 and FALSE = 0.
DELIMITER $$
CREATE TRIGGER ai_table1_each AFTER INSERT ON table1 FOR EACH ROW
BEGIN
UPDATE db_info i
SET i.minimum = LEAST(i.minimum, NEW.col)
,i.maximum = GREATEST(i.maximum, NEW.col)
,i.min_count = (i.min_count * (new.col < i.minimum))
+ (i.minimum = new.col) + (i.minimum < new.col)
,i.max_count = (i.max_count * (new.col > i.maximum))
+ (i.maximum = new.col) + (new.col > i.maximum)
WHERE i.tablename = 'table1';
END $$
CREATE TRIGGER ad_table1_each AFTER DELETE ON table1 FOR EACH ROW
BEGIN
DECLARE new_min_count INTEGER;
DECLARE new_max_count INTEGER;
UPDATE db_info i
SET i.min_count = i.min_count - (i.minimum = old.col)
,i.max_count = i.max_count - (i.maximum = old.col)
WHERE i.tablename = 'table1';
SELECT i.min_count, i.max_count INTO new_min_count, new_max_count
FROM db_info i
WHERE i.tablename = 'table1';
IF new_max_count = 0 THEN
UPDATE db_info i
CROSS JOIN (SELECT MAX(col) as new_max FROM table1) m
SET i.max_count = 1
,i.maximum = m.new_max
WHERE i.tablename = 'table1';
END IF;
IF new_min_count = 0 THEN
UPDATE db_info i
CROSS JOIN (SELECT MIN(col) as new_min FROM table1) m
SET i.min_count = 1
,i.minimum = m.new_min
WHERE i.tablename = 'table1';
END IF;
END $$
DELIMITER ;
The after update trigger will be some mix of the insert and delete triggers.