Alternative to using Prepared Statement in Trigger with MySQL

I'm trying to create a MySQL BEFORE INSERT trigger with the following code, which would do what I want if I could find a way to execute the prepared statement generated by the trigger.
Are there any alternative ways to execute prepared statements from within triggers? Thanks
BEGIN
SET @CrntRcrd = (SELECT AUTO_INCREMENT FROM information_schema.TABLES
WHERE TABLE_SCHEMA=DATABASE()
AND TABLE_NAME='core_Test');
SET @PrevRcrd = @CrntRcrd-1;
IF (NEW.ID IS NULL) THEN
SET NEW.ID = @CrntRcrd;
END IF;
SET @PrevHash = (SELECT Hash FROM core_Test WHERE Record=@PrevRcrd);
SET @ClmNms = (SELECT CONCAT('NEW.',GROUP_CONCAT(column_name
ORDER BY ORDINAL_POSITION SEPARATOR ',NEW.'),'')
FROM information_schema.columns
WHERE table_schema = DATABASE()
AND table_name = 'core_Test');
SET @Query = CONCAT("SET @Query2 = CONCAT_WS(',','",@PrevHash,"','", @CrntRcrd, "',", @ClmNms, ");");
PREPARE stmt1 FROM @Query;
EXECUTE stmt1;
DEALLOCATE PREPARE stmt1;
SET NEW.Hash = @Query2;
END
UPDATE / CLARIFICATION: The data will be stored in a table as below.
+------------+-----+------+----------------+
| Record (AI)| ID | Data | HASH |
+------------+-----+------+----------------+
| 1 | 1 | ASDF | =DHFBGKJSDFHBG | (Hash Col 1)
| 2 | 2 | NULL | =UEGFRYJKSDFHB | (Hash Col 1 + Col 2)
| 3 | 1 | VBNM | =VKJSZDFVHBFJH | (Hash Col 2 + Col 3)
| 4 | 4 | TYUI | =KDJFGNJBHMNVB | (Hash Col 3 + Col 4)
| 5 | 5 | ZXCV | =SDKVBCVJHBJHB | (Hash Col 4 + Col 5)
+------------+-----+------+----------------+
On each INSERT the table will generate a Hash value for that row by appending the previous row's Hash value to a CONCAT() of the entire new row, then re-hashing the entire string. This creates a running chain of Hash values for auditing purposes / use in another part of the application.
My constraint is that this has to be done before the INSERT, as rows cannot be updated afterwards.
UPDATE: I'm currently using the following code until I can find a way to pass the column names to CONCAT dynamically:
BEGIN
SET @Record = (
SELECT AUTO_INCREMENT FROM information_schema.TABLES
WHERE TABLE_SCHEMA=DATABASE()
AND TABLE_NAME='core_Test' #<--- UPDATE TABLE_NAME HERE
);
SET @PrevRecrd = @Record-1;
IF (new.ID IS NULL) THEN
SET new.ID = @Record;
END IF;
SET @PrevHash = (
SELECT Hash FROM core_Test #<--- UPDATE TABLE_NAME HERE
WHERE Record=@PrevRecrd
);
SET new.Hash = SHA1(CONCAT_WS(',',@PrevHash, @Record,
/* --- UPDATE TABLE COLUMN NAMES HERE (EXCLUDE "new.Record" AND "new.Hash") --- */
new.ID, new.Name, new.Data
));
END

The short answer is that you can't use dynamic SQL in a TRIGGER.
I'm confused by the query of the auto_increment value, and assigning a value to the ID column. I don't understand why you need to set the value of the ID column. Isn't that the column that is defined as the AUTO_INCREMENT? The database will handle the assignment.
It's also not clear that your query is guaranteed to return unique values, especially when concurrent inserts are run. (I've not tested, so it might work.)
But the code is peculiar.
It looks as if what you're trying to accomplish is to get the value of a column from the most recently inserted row. I think there are some restrictions on querying the same table the trigger is defined on. (I know for sure there is in Oracle; MySQL may be more liberal.)
If I needed to do something like that, I would try something like this:
SELECT @prev_hash := t.hash AS prev_hash
FROM core_Test t
ORDER BY t.ID DESC LIMIT 1;
SET NEW.hash = @prev_hash;
But again, I'm not sure this will work (I would need to test). If it works on a simple case, that's not proof that it works all the time, in the case of concurrent inserts, in the case of an extended insert, et al.
I wrote the query the way I did so that it can make use of an index on the ID column, to do a reverse scan operation. If it doesn't use the index, I would try rewriting that query (probably as a JOIN) to get the best possible performance:
SELECT @prev_hash := t.hash AS prev_hash
FROM ( SELECT r.ID FROM core_Test r ORDER BY r.ID DESC LIMIT 1 ) s
JOIN core_Test t
ON t.ID = s.ID
Excerpt from MySQL 5.1 Reference Manual E.1 Restrictions on Stored Programs
SQL prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) can be
used in stored procedures, but not stored functions or
triggers. Thus, stored functions and triggers cannot use
dynamic SQL (where you construct statements as strings and then
execute them).
[sic]
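To make the restriction concrete, here is a minimal sketch (the procedure and trigger names are made up for illustration). The same PREPARE that is legal in a stored procedure should cause the CREATE TRIGGER itself to be rejected with error 1336, "Dynamic SQL is not allowed in stored function or trigger":
DELIMITER //
-- PREPARE is legal inside a stored procedure...
CREATE PROCEDURE demo_dynamic_ok()
BEGIN
SET @q = 'SELECT 1';
PREPARE s FROM @q;
EXECUTE s;
DEALLOCATE PREPARE s;
END//
-- ...but the equivalent trigger body is refused at creation time (error 1336)
CREATE TRIGGER demo_dynamic_fails BEFORE INSERT ON core_Test
FOR EACH ROW
BEGIN
SET @q = 'SELECT 1';
PREPARE s FROM @q;
END//
DELIMITER ;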

Related

Update multiple fields in mysql by publishing single message to kafka

I have a requirement to update the status to 'De Active' in the MySQL table 'Table1' for the last 10 days' records through Kafka Connect.
How would I achieve publishing one record to a Kafka topic, given that MySQL does not provide a way to perform the SELECT and UPDATE in a single query?
DEMO example.
-- create stored procedure (once)
CREATE PROCEDURE execute_many_queries (queries_text TEXT)
BEGIN
REPEAT
SET @sql := SUBSTRING_INDEX(queries_text, ';', 1);
SET queries_text := TRIM(LEADING ';' FROM TRIM(LEADING @sql FROM queries_text));
PREPARE stmt FROM @sql;
EXECUTE stmt;
DROP PREPARE stmt;
UNTIL queries_text = '' END REPEAT;
END
-- create testing table
CREATE TABLE test (id INT, val INT);
-- execute 3 queries by 1 statement
CALL execute_many_queries ('INSERT INTO test VALUES (1,11), (2,22); UPDATE test SET val = 222 WHERE id = 2; SELECT * FROM test;');
id | val
-: | --:
1 | 11
2 | 222
-- execute more 2 queries by 1 statement
CALL execute_many_queries ('UPDATE test SET val = 111 WHERE id = 1; SELECT * FROM test;');
id | val
-: | --:
1 | 111
2 | 222
db<>fiddle here
Use with caution! There are no checks in the SP - the queries must be error-free. And SQL injection is possible.

MySQL duplicate data removal with loop

I have a table called Positions which has data like this:
Id PositionId
1 'a'
2 'a '
3 'b '
4 'b'
Some of them have trailing spaces, so my idea is to remove those duplicates. This is not the actual table, just an example; the real table has much more data.
So I created a procedure to iterate over the PositionIds and compare them; if they match when trimmed, remove one of them:
CREATE PROCEDURE remove_double_positions()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE current VARCHAR(255);
DECLARE previous VARCHAR(255) DEFAULT NULL;
DECLARE positionCur CURSOR FOR SELECT PositionId FROM Positions ORDER BY PositionId;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN positionCur;
clean_duplicates: LOOP
FETCH positionCur INTO current;
IF done THEN
LEAVE clean_duplicates;
END IF;
IF previous LIKE current THEN
DELETE FROM Positions WHERE PositionId = current;
END IF;
SET previous = current;
END LOOP clean_duplicates;
CLOSE positionCur;
END
For some reason it reports that 2 rows were affected but actually deletes all 4 of them, and I don't know why. Could you help me?
From the manual https://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html#operator_like under the like operator - Per the SQL standard, LIKE performs matching on a per-character basis, thus it can produce results different from the = comparison operator:...In particular, trailing spaces are significant, which is not true for CHAR or VARCHAR comparisons performed with the = operator:
mysql> SELECT 'a' = 'a ', 'a' LIKE 'a ';
+------------+---------------+
| 'a' = 'a ' | 'a' LIKE 'a ' |
+------------+---------------+
| 1 | 0 |
+------------+---------------+
1 row in set (0.00 sec)
This is true when = or LIKE is used in a WHERE or CASE clause.
Your procedure would work as desired if you amended the delete part to:
IF trim(previous) = trim(current) THEN
DELETE FROM Positions WHERE PositionId like current;
END IF;
Just another solution, without a cursor or procedure. I've checked it on Oracle. Hope it helps.
DELETE FROM positions
WHERE id IN ( SELECT t1.id
FROM positions t1,
positions t2
WHERE t1.positionId = TRIM(t2.positionId)
AND t1.positionId != t2.positionId
);
UPDATE
There are some crazy things going on with MySQL: a problem with the trailing blank at the end of a string, and error 1093.
Now my solution, checked with MySQL 5.5.9:
CREATE TABLE positions (
id INT NOT NULL,
positionid VARCHAR(2) NOT NULL
);
INSERT INTO positions VALUES
( 1, 'a'),
( 2, 'a '),
( 3, 'b'),
( 4, 'b ');
DELETE FROM positions
WHERE id IN ( SELECT t3.id FROM
(SELECT t2.id
FROM positions t1,
positions t2
WHERE t1.positionid = t2.positionid
AND LENGTH(t1.positionid) = 1
AND length(t2.positionid) = 2
) t3
);
mysql> SELECT * from positions;
+----+------------+
| id | positionid |
+----+------------+
| 1 | a |
| 3 | b |
+----+------------+
2 rows in set (0.00 sec)
mysql>
This "double" from delete SQL will fix this error 1093
Hope this helps.

Remove all zero dates from MySQL database across all Tables

I have plenty of tables in MySQL that contain zero dates in DATETIME columns: 0000-00-00 00:00:00.
Using some sort of admin setting, is it possible to disable zero dates and replace all zeros with a static value, say 1-1-1900?
EDIT:
I am working on database migration which involves migrating more than 100 MySQL tables to SQL Server.
Can I avoid executing scripts on each table manually by setting up
database mode?
To change existings values you could use a query like this:
UPDATE tablename SET date_column = '1900-01-01' WHERE date_column = '0000-00-00';
If you want to automate the UPDATE query you can use a prepared statement:
SET @sql_update=CONCAT_WS(' ', 'UPDATE', CONCAT(_schema, '.', _table),
'SET', _column, '=', '\'1900-01-01\'',
'WHERE', _column, '=', '\'0000-00-00\'');
PREPARE stmt FROM @sql_update;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
And you can loop through all colums in all tables on the current schema that are declared as date:
SELECT
table_schema,
table_name,
column_name
FROM
information_schema.columns
WHERE
table_schema=DATABASE() AND data_type LIKE 'date%'
To loop through all columns you could use a stored procedure:
DELIMITER //
CREATE PROCEDURE update_all_tables() BEGIN
DECLARE done BOOLEAN DEFAULT FALSE;
DECLARE _schema VARCHAR(255);
DECLARE _table VARCHAR(255);
DECLARE _column VARCHAR(255);
DECLARE cur CURSOR FOR SELECT
CONCAT('`', REPLACE(table_schema, '`', '``'), '`'),
CONCAT('`', REPLACE(table_name, '`', '``'), '`'),
CONCAT('`', REPLACE(column_name, '`', '``'), '`')
FROM
information_schema.columns
WHERE
table_schema=DATABASE() AND data_type LIKE 'date%';
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := TRUE;
OPEN cur;
columnsLoop: LOOP
FETCH cur INTO _schema, _table, _column;
IF done THEN
LEAVE columnsLoop;
END IF;
SET @sql_update=CONCAT_WS(' ', 'UPDATE', CONCAT(_schema, '.', _table),
'SET', _column, '=', '\'1900-01-01\'',
'WHERE', _column, '=', '\'0000-00-00\'');
PREPARE stmt FROM @sql_update;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END LOOP columnsLoop;
CLOSE cur;
END//
DELIMITER ;
Please see an example here.
This is an old question but was running into a similar problem except I was trying to set the 0000-00-00 to NULL.
Was trying to query this
UPDATE tablename SET date_column = NULL WHERE date_column = '0000-00-00';
and was getting the following error :
Incorrect date value: '0000-00-00' for column 'date_column' at row 1
Turns out the following query without '' around the 0000-00-00 worked !
UPDATE tablename SET date_column = NULL WHERE date_column = 0000-00-00;
You can change existing values running that query
update your_table
set date_column = '1900-01-01'
where date_column = '0000-00-00'
And you can change the definition of your table to a specfic default value or null like this
ALTER TABLE your_table
CHANGE date_column date_column date NOT NULL DEFAULT '1900-01-01'
You have two options.
Option One - In the programming language of your choice (you can even do this with Stored Procedures):
Loop through your INFORMATION_SCHEMA, probably COLUMNS and build a query to get back the tables you need to affect, i.e.
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME='date' AND TABLE_SCHEMA='<YOUR DB NAME>'
or maybe even better
SELECT TABLE_NAME,COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME in ('timestamp','date','datetime')
AND TABLE_SCHEMA='<YOUR DB NAME>'
Store results and then loop through them. Each loop, create a new query. In MySQL that would be a Stored Procedure with Prepared Statements, AKA:
SET @string = CONCAT("UPDATE ", @table_name, " SET ", @column_name, "='1900-01-01' WHERE ", @column_name, "='0000-00-00 00:00:00'");
PREPARE stmt FROM @string;
EXECUTE stmt;
That wouldn't be too tough to write up.
Option Two - Another example, while certainly more low tech, may be no less effective. After doing a mysqldump and before doing your export, you can do a simple search-replace in the file. Vim or any other text editor would do this quite expertly and would allow you to replace 0000-00-00 00:00:00 with 1-1-1900. Because you are almost definitely not going to find situations where you DON'T want that to be replaced, this could be the easiest option for you. Just throwing it out there!
In my opinion, you could generate all updates the simplest way:
select
concat('UPDATE ',TABLE_NAME,' SET ',COLUMN_NAME,'=NULL WHERE ',COLUMN_NAME,'=0;')
from information_schema.COLUMNS
where TABLE_SCHEMA = 'DATABASE_NAME' and DATA_TYPE in ('datetime', 'date', 'time');
Just replace DATABASE_NAME to your DB name, and execute all updates.
Alter your Table as
ALTER TABLE `test_table`
CHANGE COLUMN `created_dt` `created_dt` date NOT NULL DEFAULT '1900-01-01';
but before altering the table you need to update the existing values, as juergen d said:
update test_table
set created_dt= '1900-01-01'
where created_dt= '0000-00-00'
You can update your table by filtering where dates are equal to 0, and you can define a default value for the column.
Preface: you might want to check the concept of ETL in data warehousing; there are tools which do simple conversions like this for you, even open-source ones like Kettle/Pentaho.
But this one is easy when you use any programming language capable of composing SQL queries. I have made an example in Perl, but PHP or Java would also do the job:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
my $user='geheim';
my $pass='secret';
my $dbh = DBI->connect( "dbi:mysql:host=localhost:database=to_convert:port=3306", $user, $pass ) or die $DBI::errstr;
# Prints out all the statements needed, might be checked before executed
my @tables = @{ $dbh->selectall_arrayref("show tables") };
foreach my $tableh ( @tables){
my $tabname = $tableh->[0];
my $sth=$dbh->prepare("explain $tabname");
$sth->execute();
while (my $colinfo = $sth->fetchrow_hashref){
if ($colinfo->{'Type'} =~ /date/i && $colinfo->{'Null'} =~ /yes/i){
print ("update \`$tabname\` set \`" . $colinfo->{'Field'} . "\` = '1990-01-01' where \`" . $colinfo->{'Field'} . "\` IS NULL; \n");
print ("alter table \`$tabname\` change column \`" . $colinfo->{'Field'} . "\` \`" . $colinfo->{'Field'} . "\` " . $colinfo->{'Type'} . " not null default '1990-01-01'; \n");
}
}
}
This does not change anything, but when the database has tables like:
localmysql [localhost]> explain dt;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
| a | date | YES | | NULL | |
+-------+------+------+-----+---------+-------+
1 row in set (0.00 sec)
localmysql [localhost]> explain tst
-> ;
+-------+----------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| atime | datetime | YES | | NULL | |
+-------+----------+------+-----+---------+-------+
2 rows in set (0.00 sec)
it produces the Statements:
update `dt` set `a` = '1990-01-01' where `a` IS NULL;
alter table `dt` change column `a` `a` date not null default '1990-01-01';
update `tst` set `atime` = '1990-01-01' where `atime` IS NULL;
alter table `tst` change column `atime` `atime` datetime not null default '1990-01-01';
This list can then be reviewed and executed as Statements.
Hope that Helps!
As this is for migration, I would suggest that you simply wrap your tables in views that do the conversion as you export the data. I have used the concept below moving data from MySQL to Postgres, which has the same problem.
Each table should be proxied by something like this;
CREATE VIEW migration_mytable AS
SELECT field1, field2,
CASE field3
WHEN '0000-00-00 00:00:00'
THEN '1900-01-01 00:00:00'
ELSE field3
END AS field3
FROM mytable;
You should be able to write a script which generates these views for you from the catalog, in case you have a great many tables to take care of.
You should then be able to import the data into your SQL Server table (using a bridge like this), simply running a query like:
INSERT INTO sqlserver.mytable SELECT * FROM mysql.migration_mytable;

How can I simulate an array variable in MySQL?

It appears that MySQL doesn't have array variables. What should I use instead?
There seem to be two alternatives suggested: A set-type scalar and temporary tables. The question I linked to suggests the former. But is it good practice to use these instead of array variables? Alternatively, if I go with sets, what would be the set-based idiom equivalent to foreach?
Well, I've been using temporary tables instead of array variables. Not the greatest solution, but it works.
Note that you don't need to formally define their fields, just create them using a SELECT:
DROP TEMPORARY TABLE IF EXISTS my_temp_table;
CREATE TEMPORARY TABLE my_temp_table
SELECT first_name FROM people WHERE last_name = 'Smith';
(See also Create temporary table from select statement without using Create Table.)
You can achieve this in MySQL using WHILE loop:
SET @myArrayOfValue = '2,5,2,23,6,';
WHILE (LOCATE(',', @myArrayOfValue) > 0)
DO
SET @value = ELT(1, @myArrayOfValue);
SET @myArrayOfValue= SUBSTRING(@myArrayOfValue, LOCATE(',',@myArrayOfValue) + 1);
INSERT INTO `EXEMPLE` VALUES(@value, 'hello');
END WHILE;
EDIT:
Alternatively you can do it using UNION ALL:
INSERT INTO `EXEMPLE`
(
`value`, `message`
)
(
SELECT 2 AS `value`, 'hello' AS `message`
UNION ALL
SELECT 5 AS `value`, 'hello' AS `message`
UNION ALL
SELECT 2 AS `value`, 'hello' AS `message`
UNION ALL
...
);
Try using FIND_IN_SET() function of MySql
e.g.
SET @c = 'xxx,yyy,zzz';
SELECT * from countries
WHERE FIND_IN_SET(countryname,@c);
Note: You don't have to SET variable in StoredProcedure if you are passing parameter with CSV values.
Nowadays using a JSON array would be an obvious answer.
Since this is an old but still relevant question I produced a short example.
JSON functions are available since MySQL 5.7.x / MariaDB 10.2.3.
I prefer this solution over ELT() because it's really more like an array, and this 'array' can be reused in the code.
But be careful: JSON is certainly much slower than using a temporary table. It's just more handy, IMO.
Here is how to use a JSON array:
SET @myjson = '["gmail.com","mail.ru","arcor.de","gmx.de","t-online.de",
"web.de","googlemail.com","freenet.de","yahoo.de","gmx.net",
"me.com","bluewin.ch","hotmail.com","hotmail.de","live.de",
"icloud.com","hotmail.co.uk","yahoo.co.jp","yandex.ru"]';
SELECT JSON_LENGTH(@myjson);
-- result: 19
SELECT JSON_VALUE(@myjson, '$[0]');
-- result: gmail.com
And here a little example to show how it works in a function/procedure:
DELIMITER //
CREATE OR REPLACE FUNCTION example() RETURNS varchar(1000) DETERMINISTIC
BEGIN
DECLARE _result varchar(1000) DEFAULT '';
DECLARE _counter INT DEFAULT 0;
DECLARE _value varchar(50);
SET @myjson = '["gmail.com","mail.ru","arcor.de","gmx.de","t-online.de",
"web.de","googlemail.com","freenet.de","yahoo.de","gmx.net",
"me.com","bluewin.ch","hotmail.com","hotmail.de","live.de",
"icloud.com","hotmail.co.uk","yahoo.co.jp","yandex.ru"]';
WHILE _counter < JSON_LENGTH(@myjson) DO
-- do whatever, e.g. add-up strings...
SET _result = CONCAT(_result, _counter, '-', JSON_VALUE(@myjson, CONCAT('$[',_counter,']')), '#');
SET _counter = _counter + 1;
END WHILE;
RETURN _result;
END //
DELIMITER ;
SELECT example();
I don't know about arrays, but there is a way to store comma-separated lists in a normal VARCHAR column.
And when you need to find something in that list you can use the FIND_IN_SET() function.
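A minimal sketch of that idea (the table and column names here are made up for illustration):
-- a comma-separated list stored in a plain VARCHAR column
CREATE TABLE tags_demo (id INT, tag_list VARCHAR(255));
INSERT INTO tags_demo VALUES (1, 'red,green,blue'), (2, 'green,yellow');
-- FIND_IN_SET returns the 1-based position of the element, or 0 if absent
SELECT id FROM tags_demo WHERE FIND_IN_SET('green', tag_list) > 0;  -- matches both rows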
I know that this is a bit of a late response, but I recently had to solve a similar problem and thought that this may be useful to others.
Background
Consider the table below called 'mytable':
The problem was to keep only the latest 3 records and delete any older records whose systemid=1 (there could be many other records in the table with other systemid values).
It would be good if you could do this simply using the statement
DELETE FROM mytable WHERE id IN (SELECT id FROM `mytable` WHERE systemid=1 ORDER BY id DESC LIMIT 3)
However this is not yet supported in MySQL and if you try this then you will get an error like
...doesn't yet support 'LIMIT & IN/ALL/SOME subquery'
So a workaround is needed whereby an array of values is passed to the IN selector via a variable. However, as variables need to be single values, I had to simulate an array. The trick is to create the array as a comma-separated list of values (a string) and assign it to the variable as follows:
SET @myvar = (SELECT GROUP_CONCAT(id SEPARATOR ',') AS myval FROM (SELECT * FROM `mytable` WHERE systemid=1 ORDER BY id DESC LIMIT 3 ) A GROUP BY A.systemid);
The result stored in @myvar is
5,6,7
Next, the FIND_IN_SET selector is used to select from the simulated array
SELECT * FROM mytable WHERE FIND_IN_SET(id,@myvar);
The combined final result is as follows:
SET @myvar = (SELECT GROUP_CONCAT(id SEPARATOR ',') AS myval FROM (SELECT * FROM `mytable` WHERE systemid=1 ORDER BY id DESC LIMIT 3 ) A GROUP BY A.systemid);
DELETE FROM mytable WHERE FIND_IN_SET(id,@myvar);
I am aware that this is a very specific case. However it can be modified to suit just about any other case where a variable needs to store an array of values.
I hope that this helps.
DELIMITER $$
CREATE DEFINER=`mysqldb`@`%` PROCEDURE `abc`()
BEGIN
BEGIN
set @value :='11,2,3,1,';
WHILE (LOCATE(',', @value) > 0) DO
SET @V_DESIGNATION = SUBSTRING(@value,1, LOCATE(',',@value)-1);
SET @value = SUBSTRING(@value, LOCATE(',',@value) + 1);
select @V_DESIGNATION;
END WHILE;
END;
END$$
DELIMITER ;
Maybe create a temporary memory table with columns (key, value) if you want associative arrays. Having a memory table is the closest thing to having arrays in mysql
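For instance, a hedged sketch of that idea (the table and column names are illustrative only):
-- an "associative array" simulated as a MEMORY temporary table
CREATE TEMPORARY TABLE assoc (
k VARCHAR(64) PRIMARY KEY,
v VARCHAR(255)
) ENGINE=MEMORY;
INSERT INTO assoc VALUES ('color', 'red'), ('size', 'XL');
SELECT v FROM assoc WHERE k = 'color';  -- the "array lookup"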
Here’s how I did it.
First, I created a function that checks whether a Long/Integer/whatever value is in a list of values separated by commas:
CREATE DEFINER = 'root'@'localhost' FUNCTION `is_id_in_ids`(
`strIDs` VARCHAR(255),
`_id` BIGINT
)
RETURNS BIT(1)
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
COMMENT ''
BEGIN
DECLARE strLen INT DEFAULT 0;
DECLARE subStrLen INT DEFAULT 0;
DECLARE subs VARCHAR(255);
IF strIDs IS NULL THEN
SET strIDs = '';
END IF;
do_this:
LOOP
SET strLen = LENGTH(strIDs);
SET subs = SUBSTRING_INDEX(strIDs, ',', 1);
if ( CAST(subs AS UNSIGNED) = _id ) THEN
-- found
return(1);
END IF;
SET subStrLen = LENGTH(SUBSTRING_INDEX(strIDs, ',', 1));
SET strIDs = MID(strIDs, subStrLen+2, strLen);
IF strIDs IS NULL OR TRIM(strIDs) = '' THEN
LEAVE do_this;
END IF;
END LOOP do_this;
-- not found
return(0);
END;
So now you can search for an ID in a comma-separated list of IDs, like this:
select `is_id_in_ids`('1001,1002,1003',1002);
And you can use this function inside a WHERE clause, like this:
SELECT * FROM table1 WHERE `is_id_in_ids`('1001,1002,1003',table1_id);
This was the only way I found to pass an "array" parameter to a PROCEDURE.
I'm surprised none of the answers mention ELT/FIELD.
ELT/FIELD works very similar to an array especially if you have static data.
FIND_IN_SET also works similarly but doesn't have a built-in complementary function; it's easy enough to write one, though.
mysql> select elt(2,'AA','BB','CC');
+-----------------------+
| elt(2,'AA','BB','CC') |
+-----------------------+
| BB |
+-----------------------+
1 row in set (0.00 sec)
mysql> select field('BB','AA','BB','CC');
+----------------------------+
| field('BB','AA','BB','CC') |
+----------------------------+
| 2 |
+----------------------------+
1 row in set (0.00 sec)
mysql> select find_in_set('BB','AA,BB,CC');
+------------------------------+
| find_in_set('BB','AA,BB,CC') |
+------------------------------+
| 2 |
+------------------------------+
1 row in set (0.00 sec)
mysql> SELECT SUBSTRING_INDEX(SUBSTRING_INDEX('AA,BB,CC',',',2),',',-1);
+-----------------------------------------------------------+
| SUBSTRING_INDEX(SUBSTRING_INDEX('AA,BB,CC',',',2),',',-1) |
+-----------------------------------------------------------+
| BB |
+-----------------------------------------------------------+
1 row in set (0.01 sec)
Is an array variable really necessary?
I ask because I originally landed here wanting to add an array as a MySQL table variable. I was relatively new to database design and trying to think of how I'd do it in a typical programming language fashion.
But databases are different. I thought I wanted an array as a variable, but it turns out that's just not a common MySQL database practice.
Standard Practice
The alternative solution to arrays is to add an additional table, and then reference your original table with a foreign key.
As an example, let's imagine an application that keeps track of all the items every person in a household wants to buy at the store.
The commands for creating the table I originally envisioned would have looked something like this:
#doesn't work
CREATE TABLE Person(
name VARCHAR(50) PRIMARY KEY
buy_list ARRAY
);
I think I envisioned buy_list to be a comma-separated string of items or something like that.
But MySQL doesn't have an array type field, so I really needed something like this:
CREATE TABLE Person(
name VARCHAR(50) PRIMARY KEY
);
CREATE TABLE BuyList(
person VARCHAR(50),
item VARCHAR(50),
PRIMARY KEY (person, item),
CONSTRAINT fk_person FOREIGN KEY (person) REFERENCES Person(name)
);
Here we define a constraint named fk_person. It says that the 'person' field in BuyList is a foreign key. In other words, it's a primary key in another table, specifically the 'name' field in the Person table, which is what REFERENCES denotes.
We also defined the combination of person and item to be the primary key, but technically that's not necessary.
Finally, if you want to get all the items on a person's list, you can run this query:
SELECT item FROM BuyList WHERE person='John';
This gives you all the items on John's list. No arrays necessary!
This is my solution to use a variable containing a list of elements.
You can use it in simple queries (no need to use store procedures or create tables).
I found elsewhere on the site the trick of using the JSON_TABLE function (it works in MySQL 8; I don't know if it works in other versions).
set @x = '1,2,3,4' ;
select c.NAME
from colors c
where
c.COD in (
select *
from json_table(
concat('[',@x,']'),
'$[*]' columns (id int path '$') ) t ) ;
Also, you may need to manage the case of one or more variables set to empty_string.
In this case I added another trick (the query does not return error even if x, y, or both x and y are empty strings):
set @x = '' ;
set @y = 'yellow' ;
select c.NAME
from colors c
where
if(@y = '', 1 = 1, c.NAME = @y)
and if(@x = '', 1, c.COD) in (
select *
from json_table(
concat('[',if(@x = '', 1, @x),']'),
'$[*]' columns (id int path '$') ) t) ;
This works fine for list of values:
SET @myArrayOfValue = '2,5,2,23,6,';
WHILE (LOCATE(',', @myArrayOfValue) > 0)
DO
SET @value = ELT(1, @myArrayOfValue);
SET @STR = SUBSTRING(@myArrayOfValue, 1, LOCATE(',',@myArrayOfValue)-1);
SET @myArrayOfValue = SUBSTRING(@myArrayOfValue, LOCATE(',', @myArrayOfValue) + 1);
INSERT INTO `Demo` VALUES(@STR, 'hello');
END WHILE;
Both versions using sets didn't work for me (tested with MySQL 5.5). The function ELT() returns the whole string, not the first element. Since the WHILE statement is only available in a PROCEDURE context, I added one to my solution:
DROP PROCEDURE IF EXISTS __main__;
DELIMITER $
CREATE PROCEDURE __main__()
BEGIN
SET @myArrayOfValue = '2,5,2,23,6,';
WHILE (LOCATE(',', @myArrayOfValue) > 0)
DO
SET @value = LEFT(@myArrayOfValue, LOCATE(',',@myArrayOfValue) - 1);
SET @myArrayOfValue = SUBSTRING(@myArrayOfValue, LOCATE(',',@myArrayOfValue) + 1);
END WHILE;
END;
$
DELIMITER ;
CALL __main__;
To be honest, I don't think this is good practice. Even if it's really necessary, it is barely readable and quite slow.
Isn't the point of arrays to be efficient? If you're just iterating through values, I think a cursor on a temporary (or permanent) table makes more sense than seeking commas, no? It's also cleaner. Look up "MySQL DECLARE CURSOR"; a sketch follows below.
For random access, use a temporary table with a numerically indexed primary key. Unfortunately the fastest access you'll get is a hash table, not true random access.
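A minimal cursor sketch along those lines (the my_temp table and its val column are assumed for illustration):
DELIMITER //
CREATE PROCEDURE iterate_vals()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE v INT;
DECLARE cur CURSOR FOR SELECT val FROM my_temp;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur;
read_loop: LOOP
FETCH cur INTO v;
IF done THEN
LEAVE read_loop;
END IF;
-- work with v here
END LOOP read_loop;
CLOSE cur;
END//
DELIMITER ;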
Another way to look at the same problem.
Hope it's helpful:
DELIMITER $$
CREATE PROCEDURE ARR(v_value VARCHAR(100))
BEGIN
DECLARE v_tam VARCHAR(100);
DECLARE v_pos VARCHAR(100);
CREATE TEMPORARY TABLE IF NOT EXISTS split (split VARCHAR(50));
SET v_tam = (SELECT (LENGTH(v_value) - LENGTH(REPLACE(v_value,',',''))));
SET v_pos = 1;
WHILE (v_tam >= v_pos)
DO
INSERT INTO split
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(v_value,',',v_pos),',', -1);
SET v_pos = v_pos + 1;
END WHILE;
SELECT * FROM split;
DROP TEMPORARY TABLE split;
END$$
CALL ARR('1006212,1006404,1003404,1006505,444,');
If we have one table like that
mysql> select * from user_mail;
+--------------+------+
| email        | user |
+--------------+------+
| email1@gmail |    1 |
| email2@gmail |    2 |
+--------------+------+
and the array table:
mysql> select * from user_mail_array;
+--------------+------+-------------+
| email        | user | preferences |
+--------------+------+-------------+
| email1@gmail |    1 |           1 |
| email1@gmail |    1 |           2 |
| email1@gmail |    1 |           3 |
| email1@gmail |    1 |           4 |
| email2@gmail |    2 |           5 |
| email2@gmail |    2 |           6 |
+--------------+------+-------------+
We can select the rows of the second table as one array with the GROUP_CONCAT function:
mysql> SELECT t1.*, GROUP_CONCAT(t2.preferences) AS preferences
FROM user_mail t1,user_mail_array t2
where t1.email=t2.email and t1.user=t2.user
GROUP BY t1.email,t1.user;
+--------------+------+-------------+
| email        | user | preferences |
+--------------+------+-------------+
| email1@gmail |    1 | 1,3,2,4     |
| email2@gmail |    2 | 5,6         |
+--------------+------+-------------+
In MySQL versions after 5.7.x, you can use the JSON type to store an array, and you can get a value out of the array by key via MySQL's JSON functions.
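A short hedged sketch of that approach (the table name is made up for illustration; JSON_EXTRACT has been available since MySQL 5.7):
CREATE TABLE json_demo (id INT, vals JSON);
INSERT INTO json_demo VALUES (1, '[10, 20, 30]');
-- index into the array with a JSON path
SELECT JSON_EXTRACT(vals, '$[1]') FROM json_demo;  -- 20
-- or the shorthand operator
SELECT vals->'$[1]' FROM json_demo;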
Inspired by the function ELT(index number, string1, string2, string3, ...), I think the following works as an array example:
set @i := 1;
while @i <= 3
do
insert into table(val) values (ELT(@i ,'val1','val2','val3'...));
set @i = @i + 1;
end while;
Hope it helps.
Here is an example for MySQL for looping through a comma delimited string.
DECLARE v_delimited_string_access_index INT;
DECLARE v_delimited_string_access_value VARCHAR(255);
DECLARE v_can_still_find_values_in_delimited_string BOOLEAN;
SET v_can_still_find_values_in_delimited_string = true;
SET v_delimited_string_access_index = 0;
WHILE (v_can_still_find_values_in_delimited_string) DO
SET v_delimited_string_access_value = get_from_delimiter_split_string(in_array, ',', v_delimited_string_access_index); -- get value from string
SET v_delimited_string_access_index = v_delimited_string_access_index + 1;
IF (v_delimited_string_access_value = '') THEN
SET v_can_still_find_values_in_delimited_string = false; -- no value at this index, stop looping
ELSE
-- DO WHAT YOU WANT WITH v_delimited_string_access_value HERE
END IF;
END WHILE;
this uses the get_from_delimiter_split_string function defined here: https://stackoverflow.com/a/59666211/3068233
I think I can improve on this answer. Try this:
The parameter 'Pranks' is a CSV, i.e. '1,2,3,4.....etc'
CREATE PROCEDURE AddRanks(
IN Pranks TEXT
)
BEGIN
DECLARE VCounter INTEGER;
DECLARE VStringToAdd VARCHAR(50);
SET VCounter = 0;
START TRANSACTION;
REPEAT
SET VStringToAdd = (SELECT TRIM(SUBSTRING_INDEX(Pranks, ',', 1)));
SET Pranks = (SELECT RIGHT(Pranks, TRIM(LENGTH(Pranks) - LENGTH(SUBSTRING_INDEX(Pranks, ',', 1))-1)));
INSERT INTO tbl_rank_names(rank)
VALUES(VStringToAdd);
SET VCounter = VCounter + 1;
UNTIL (Pranks = '')
END REPEAT;
SELECT VCounter AS 'Records added';
COMMIT;
END;
This method makes the searched string of CSV values progressively shorter with each iteration of the loop, which I believe would be better for optimization.
I would try something like this for multiple collections. I'm a MySQL beginner. Sorry about the function names, couldn't decide on what names would be best.
delimiter //
drop procedure if exists init_
//
create procedure init_()
begin
CREATE TEMPORARY TABLE if not exists
val_store(
realm varchar(30)
, id varchar(30)
, val varchar(255)
, primary key ( realm , id )
);
end;
//
drop function if exists get_
//
create function get_( p_realm varchar(30) , p_id varchar(30) )
returns varchar(255)
reads sql data
begin
declare ret_val varchar(255);
declare continue handler for 1146 set ret_val = null;
select val into ret_val from val_store where realm = p_realm and id = p_id;
return ret_val;
end;
//
drop procedure if exists set_
//
create procedure set_( p_realm varchar(30) , p_id varchar(30) , p_val varchar(255) )
begin
call init_();
insert into val_store (realm,id,val) values (p_realm , p_id , p_val) on duplicate key update val = p_val;
end;
//
drop procedure if exists remove_
//
create procedure remove_( p_realm varchar(30) , p_id varchar(30) )
begin
call init_();
delete from val_store where realm = p_realm and id = p_id;
end;
//
drop procedure if exists erase_
//
create procedure erase_( p_realm varchar(30) )
begin
call init_();
delete from val_store where realm = p_realm;
end;
//
call set_('my_array_table_name','my_key','my_value');
select get_('my_array_table_name','my_key');
Rather than saving the data as an array or in one row only, you should create a different row for every value received. This will make it much simpler to understand than putting everything together.
Have you tried using PHP's serialize()?
That allows you to store the contents of a variable's array in a string PHP understands and is safe for the database (assuming you've escaped it first).
$array = array(
1 => 'some data',
2 => 'some more'
);
//Assuming you're already connected to the database
$sql = sprintf("INSERT INTO `yourTable` (`rowID`, `rowContent`) VALUES (NULL, '%s')"
, mysql_real_escape_string(serialize($array), $dbConnection));
mysql_query($sql, $dbConnection) or die(mysql_error());
You can also do the exact same without a numbered array
$array2 = array(
'something' => 'something else'
);
or
$array3 = array(
'somethingNew'
);

How do I delete blank rows in Mysql?

I do have a table with more than 100000 data elements, but there are almost 350 blank rows within. How do I delete this blank rows using phpmyadmin? Manually deleting is a tedious task.
The general answer is:
DELETE FROM table_name WHERE some_column = '';
or
DELETE FROM table_name WHERE some_column IS NULL;
See: http://dev.mysql.com/doc/refman/5.0/en/delete.html
More info when you post your tables!~
Also, be sure to do:
SELECT * FROM table_name WHERE some_column = '';
before you delete, so you can see which rows you are deleting! I think in phpMyAdmin you can even just do the select and then "select all" and delete, but I'm not sure. This would be pretty fast, and very safe.
I am doing the MySQL operations at the command prompt in Windows. The basic queries:
DELETE FROM table_name WHERE column='';
and
DELETE FROM table_name WHERE column='NULL';
don't work. I don't know whether they work in the phpMyAdmin SQL command builder. Anyway:
DELETE FROM table_name WHERE column IS NULL;
works fine.
I have a PHP script that automatically removes empty rows based on column data types.
That allows me to define "emptiness" differently for different column types.
e.g.
table
first_name (varchar) | last_name (varchar) | some_qty ( int ) | other_qty (decimal)
DELETE FROM `table` WHERE
(`first_name` IS NULL OR `first_name` = '')
AND
(`last_name` IS NULL OR `last_name` = '')
AND
(`some_qty` IS NULL OR `some_qty` = 0)
AND
(`other_qty` IS NULL OR `other_qty` = 0)
Since "0" values are meaningless in my system, I count them as empty. But I found out that if you do (first_name = 0) then you will always get true, because strings always == 0 in MySQL. So I tailor the definition of "empty" to the data type.
This procedure will delete any row in which all columns are NULL, ignoring the primary key column that may be set as an ID. I hope it helps you.
DELIMITER //
CREATE PROCEDURE DeleteRowsAllColNull(IN tbl VARCHAR(64))
BEGIN
SET @tbl = tbl;
SET SESSION group_concat_max_len = 1000000;
SELECT CONCAT('DELETE FROM `',@tbl,'` WHERE ',(REPLACE(group_concat(concat('`',COLUMN_NAME, '` is NULL')),',',' AND ')),';') FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name = @tbl AND COLUMN_KEY NOT LIKE 'PRI' into @delete_all;
PREPARE delete_all FROM @delete_all;
EXECUTE delete_all;
DEALLOCATE PREPARE delete_all;
END //
DELIMITER ;
Execute the procedure like this.
CALL DeleteRowsAllColNull('your table');
I know this has already been answered and has got a tick, but I wrote a small function for doing this, and thought it might be useful to other people.
I call my function with an array so that I can use the same function for different tables.
$tableArray=array("Address", "Email", "Phone"); //This is the column names
$this->deleteBlankLines("tableName",$tableArray);
and here is the function which takes the array and builds the delete string
private function deleteBlankLines($tablename,$columnArray){
$Where="";
foreach($columnArray as $line):
$Where.="(`".$line."`=''||`".$line."` IS NULL) && ";
endforeach;
$Where = rtrim($Where, '&& ');
$query="DELETE FROM `{$tablename}` WHERE ".$Where;
$stmt = $this->db->prepare($query);
$stmt->execute();
}
You can use this function for multiple tables. You just need to send in a different table name and array and it will work.
My function will check for a whole row of empty columns or NULL columns at the same time. If you don't need it to check for NULL then you can remove that part.