How to use DELIMITER in MariaDB using the .NET MySQL connector?

I'm using MariaDB 10.2.12 and connecting using the .NET MySQL connector. The following trigger works fine in MySQL Workbench:
DELIMITER //
CREATE TRIGGER update_last_modified
BEFORE UPDATE ON users
FOR EACH ROW
BEGIN
DECLARE miscdataWithDate JSON;
IF JSON_CONTAINS_PATH(NEW.miscdata, 'all', '$.v1.lastModified2') THEN
SET NEW.miscdata = JSON_REPLACE(NEW.miscdata, '$.v1.lastModified2', UTC_TIMESTAMP());
ELSE
SET miscdataWithDate = JSON_SET('{"v1": {}}', '$.v1.lastModified2', UTC_TIMESTAMP());
SET NEW.miscdata = JSON_MERGE(NEW.miscdata, miscdataWithDate);
END IF;
END; //
DELIMITER ;
To run the command from C#/.NET, I used the following. I tried it with and without the final semicolon, in case the library is adding a semicolon:
using (var cmd = new MySqlCommand(@"CREATE TRIGGER update_last_modified
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
DECLARE miscdataWithDate JSON;
IF JSON_CONTAINS_PATH(NEW.miscdata, 'all', '$.v1.lastModified') THEN
SET NEW.miscdata = JSON_REPLACE(NEW.miscdata, '$.v1.lastModified', UTC_TIMESTAMP());
ELSE
SET miscdataWithDate = JSON_SET('{""v1"": {}}', '$.v1.lastModified', UTC_TIMESTAMP());
SET NEW.miscdata = JSON_MERGE(NEW.miscdata, miscdataWithDate);
END IF;
END; //
DELIMITER ;", connection))
{
await cmd.ExecuteNonQueryAsync().ConfigureAwait(false);
}
The error occurs when the trigger is defined (not when it fires):
Unhandled Exception: System.AggregateException: One or more errors occurred. (You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '//
DELIMITER' at line 1) ---> MySql.Data.MySqlClient.MySqlException: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '//
DELIMITER' at line 1
If I simplify the query so it doesn't need DELIMITER set, it works. But even a very simple trigger with a custom delimiter fails.

When searching for how other people have successfully used delimiters with MySQL/MariaDB from .NET, I found the following article: https://dev.mysql.com/doc/connector-net/en/connector-net-tutorials-mysqlscript-delimiter.html
The example given uses MySqlScript instead of MySqlCommand, and so I believe that MySqlCommand simply doesn't support delimiters. Here is the updated code, which works fine:
MySqlScript script = new MySqlScript(connection, @"CREATE TRIGGER update_last_modified
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
DECLARE miscdataWithDate JSON;
IF JSON_CONTAINS_PATH(NEW.miscdata, 'all', '$.v1.lastModified') THEN
SET NEW.miscdata = JSON_REPLACE(NEW.miscdata, '$.v1.lastModified', UTC_TIMESTAMP());
ELSE
SET miscdataWithDate = JSON_SET('{""v1"": {}}', '$.v1.lastModified', UTC_TIMESTAMP());
SET NEW.miscdata = JSON_MERGE(NEW.miscdata, miscdataWithDate);
END IF;
END; //");
script.Delimiter = "//";
await script.ExecuteAsync().ConfigureAwait(false);
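Worth noting: DELIMITER is a client-side directive that the mysql command-line client and Workbench use to decide where a statement ends; the server never sees it. Since CREATE TRIGGER is a single statement, it should (as far as I can tell, untested here) also be possible to send it through a plain MySqlCommand by dropping the DELIMITER lines and the trailing //:
using (var cmd = new MySqlCommand(@"CREATE TRIGGER update_last_modified
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    -- Same body as above; the internal semicolons are fine because the
    -- whole statement is sent to the server as one command.
    DECLARE miscdataWithDate JSON;
    IF JSON_CONTAINS_PATH(NEW.miscdata, 'all', '$.v1.lastModified') THEN
        SET NEW.miscdata = JSON_REPLACE(NEW.miscdata, '$.v1.lastModified', UTC_TIMESTAMP());
    ELSE
        SET miscdataWithDate = JSON_SET('{""v1"": {}}', '$.v1.lastModified', UTC_TIMESTAMP());
        SET NEW.miscdata = JSON_MERGE(NEW.miscdata, miscdataWithDate);
    END IF;
END", connection))
{
    await cmd.ExecuteNonQueryAsync().ConfigureAwait(false);
}
Either way, MySqlScript remains the right tool when a script genuinely contains several statements separated by a custom delimiter.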

Related

Execute MySQL script in Grails app

I have a MySQL script that I want to execute in a controller while my Grails 3.0.9 application is running. I've tried it this way:
import groovy.sql.Sql
import grails.util.Holders
def void clearDatabase() {
String sqlFilePath = 'path/to/file/clear_database.sql'
String sqlString = new File(sqlFilePath).text
def sql = Sql.newInstance(Holders.config.dataSource.url, Holders.config.dataSource.username, Holders.config.dataSource.password, Holders.config.dataSource.driverClassName)
sql.execute(sqlString)
}
This is what my clear_database.sql file looks like:
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE table_a;
TRUNCATE table_b;
TRUNCATE table_c;
SET FOREIGN_KEY_CHECKS = 1;
This is the error message I get:
WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Warning Code: 1064, SQLState: 42000
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'TRUNCATE table_a;
Is my MySQL syntax wrong or am I missing something else?
Edit:
When I run the script manually it works, so I think the script is correct but the way I execute it is not.
The problem was that sql.execute(sqlString) expects a GString and I had given it a plain String, so it added quotes and the result was invalid MySQL syntax, as described here.
This is how it works for me:
def sql = Sql.newInstance(Holders.config.dataSource.url, Holders.config.dataSource.username, Holders.config.dataSource.password, Holders.config.dataSource.driverClassName)
sql.execute "SET FOREIGN_KEY_CHECKS = 0;"
sql.execute "truncate ${Sql.expand("table_a")}"
sql.execute "truncate ${Sql.expand("table_b")}"
sql.execute "truncate ${Sql.expand("table_c")}"
sql.execute "SET FOREIGN_KEY_CHECKS = 1;"
There is no clear_database.sql file needed anymore.
Not sure, but could you try the following as your clear_database.sql?
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE TABLE table_a;
TRUNCATE TABLE table_b;
TRUNCATE TABLE table_c;
SET FOREIGN_KEY_CHECKS = 1;
Good luck!!

Delphi 2010: UniDAC vs Indy - multithread safety handling method

I am developing an Indy-based application.
The server has several Indy TCP Server components,
so it runs several threads and works with a MySQL database.
I have run into one problem:
exceptions from the MySQL DB inside those threads.
When several threads hit the same DB table, I get errors like the following:
UniQuery_Mgr: Duplicate field name 'id'
UniQuery_Mgr: Field 'grp_id' not found // of course, the grp_id field really does exist
Assertion failure (C:\Program Files (x86)\unidac539src\Source\CRVio.pas, line 255)
Commands out of sync; You can't run this command now
ReceiveHeader: Net packets out of order: received[0], expected[1]
UniQuery_Mgr: Cannot perform this operation on a closed dataset
What should I do? UniQuery_Mgr is a TUniQuery component,
and my query-handling code normally looks like this:
Code 1
sql := 'SELECT * FROM data_writed;';//for example
UniQuery_Mgr.SQL.Clear;
UniQuery_Mgr.SQL.Add(sql);
UniQuery_Mgr.ExecSQL;
Code 2
try
sql := 'SELECT * FROM gamegrp_mgr;';
UniQuery_Mgr.SQL.Clear;
UniQuery_Mgr.SQL.Add(sql);
UniQuery_Mgr.ExecSQL;
if UniQuery_Mgr.RecordCount > 0 then
begin
MAX_GAME_GROUP_COUNT := UniQuery_Mgr.RecordCount + 1;
UniQuery_Mgr.First;
i := 1;
while not UniQuery_Mgr.Eof do
begin
Game_Group_ID[i] := UniQuery_Mgr.FieldByName('grp_id').AsInteger;
Game_Game_ID[i] := UniQuery_Mgr.FieldByName('game_id').AsInteger;
UniQuery_Mgr.Next;
Inc(i);
end;
end;
except
on E : Exception do
begin
EGAMEMSG := Format('GAME group read error: <%s> # %s',[ E.ToString, DateTimeToStr(now)]);
Exit;
end;
end;
Code 3
try
sql := 'UPDATE data_writed SET write_gamegrp = ' + QuotedStr('0') + ';';
UniQuery_Mgr.SQL.Clear;
UniQuery_Mgr.SQL.Add(sql);
UniQuery_Mgr.ExecSQL;
except
on E : Exception do
begin
EGAMEMSG := Format('data updating error: <%s> # %s',[ E.ToString, DateTimeToStr(now)]);
Exit;
end;
end;
Is my handling of the DB components bad? Is there another, thread-safe, method?

MySQL batch-file call flush tables for export

I use a batch file to copy a database from server1 to server2.
Step 1: call a stored procedure that runs FLUSH TABLES table1, table2, ..., table1000 FOR EXPORT;
Step 2: copy the .ibd and .cfg files to a temp directory and archive them
Step 3: UNLOCK TABLES;
The problem is in the first step: the .cfg files are created and then removed, even though UNLOCK TABLES has not been called. Why? The .cfg files are created and immediately disappear, so I do not have time to copy them.
.bat file command:
mysql -u %db_user% -p%db_password% %db_name% --default-character-set=utf8 < stored_proc_flush_tables.sql
file stored_proc_flush_tables.sql:
DROP PROCEDURE IF EXISTS stored_proc_flush_tables;
DELIMITER //
CREATE PROCEDURE stored_proc_flush_tables
(
)
BEGIN
DECLARE t_name BLOB;
DECLARE tmp_query BLOB;
DECLARE done_tables INT DEFAULT 0;
DECLARE cursor_tables CURSOR FOR
SELECT table_name FROM information_schema.tables WHERE table_schema=DB_NAME;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done_tables = 1;
SET @table_name = '';
SET @tmp_query = '';
OPEN cursor_tables;
tables_loop: LOOP
FETCH cursor_tables INTO t_name;
IF done_tables = 1 THEN
LEAVE tables_loop;
END IF;
SET @tmp_query = CONCAT_WS('', @tmp_query, ',', t_name);
END LOOP;
CLOSE cursor_tables;
SET @tmp_query = TRIM(LEADING ',' FROM @tmp_query);
SET @tmp_query = CONCAT_WS('', 'FLUSH TABLES', ' ', @tmp_query, ' ', 'FOR EXPORT');
PREPARE stmt FROM @tmp_query;
EXECUTE stmt;
END //
DELIMITER ;
call stored_proc_flush_tables();
The .cfg files are created and immediately disappear; I do not have time to copy them.
The problem is that you end the mysql session that runs FLUSH TABLES ... FOR EXPORT
before you try to copy the files.
When the mysql session/connection ends, all locks are released and the *.cfg files, which are considered temporary, are deleted.
So you need a program that runs FLUSH ... FOR EXPORT, keeps the session
open while it copies the files, and only after that releases the table locks (or ends the session).
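For illustration, here is a rough sketch of that approach in C# (chosen only because that is the connector used in the main question above); the connection string, table names and directory paths are made-up placeholders, and the program would have to run somewhere it can read the server's data directory:
using System.IO;
using MySql.Data.MySqlClient;

// Rough sketch only: one session holds the export locks for the whole copy.
// Connection string, table names and paths below are placeholders.
using (var connection = new MySqlConnection("server=server1;database=mydb;uid=user;pwd=secret"))
{
    connection.Open();

    // Take the export locks; the .cfg files exist only while this session holds them.
    using (var flush = new MySqlCommand("FLUSH TABLES table1, table2 FOR EXPORT", connection))
        flush.ExecuteNonQuery();

    // Copy the .ibd and .cfg files while the locks are still held.
    foreach (var pattern in new[] { "*.cfg", "*.ibd" })
        foreach (var file in Directory.GetFiles(@"C:\mysql\data\mydb", pattern))
            File.Copy(file, Path.Combine(@"C:\backup", Path.GetFileName(file)), overwrite: true);

    // Release the locks only after the copy has finished
    // (closing the connection would also release them and delete the .cfg files).
    using (var unlock = new MySqlCommand("UNLOCK TABLES", connection))
        unlock.ExecuteNonQuery();
}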

MySQL function behaves differently on 2 servers which are essentially the same

Alright, here is a hard one...
I have a development server with MySQL 5.1.73 on which I wrote a function to normalize a string for searching purposes.
When I moved the function to the production environment (same MySQL version, same major OS version, CentOS 6.5 with the newest patches, same major kernel version, etc.), the function stopped working.
Here is the function
DELIMITER $$
CREATE DEFINER=`user`@`%` FUNCTION `normalize`(str VARCHAR(255)) RETURNS varchar(255) CHARSET utf8
BEGIN
DECLARE normstring VARCHAR(255);
DECLARE i INT;
SET i = 0;
SET normstring = '';
SET str = lower(str);
loop1: WHILE i < length(str) DO
CASE substring(str,i,1)
WHEN 'ä' THEN SET normstring = concat(normstring,'ae');
WHEN 'ö' THEN SET normstring = concat(normstring,'oe');
WHEN 'ü' THEN SET normstring = concat(normstring,'ue');
WHEN 'ß' THEN SET normstring = concat(normstring,'ss');
WHEN '/' THEN SET i = i + 1; ITERATE loop1;
WHEN '.' THEN SET i = i + 1; ITERATE loop1;
WHEN '-' THEN SET i = i + 1; ITERATE loop1;
WHEN '(' THEN SET i = i + 1; ITERATE loop1;
WHEN ')' THEN SET i = i + 1; ITERATE loop1;
WHEN ' ' THEN SET i = i + 1; ITERATE loop1;
WHEN '\'' THEN SET i = i + 1; ITERATE loop1;
WHEN '\\' THEN SET i = i + 1; ITERATE loop1;
ELSE SET normstring = concat(normstring,substring(str,i,1));
END CASE;
SET i = i + 1;
END WHILE;
RETURN normstring;
END$$
DELIMITER ;
On the development server this converts 'Mönßtär' to 'moensstaer', but on the production server it converts it to 'mönßtä'
Changing
SET i = 0; and WHILE i < length(str)
to
SET i = 1; and WHILE i <= length(str)
corrects the missing last character, so the result is 'mönßtär', but one server should not start counting at 0 while the other starts at 1, right?
And the production server leaves all special characters untouched.
I have compared all global variables, not only those explicitly set in my.cnf, and except for timezone, password and symlink settings they are equal (yes, I should correct those differences, but that should have nothing to do with my problem, right?)
Are there some compile-time settings which can influence this behaviour, or some external libraries that MySQL uses?
I'll probably have to find a workaround for the problem - I plan to normalize in the application rather than the database - the function is too slow in large queries anyway - but it would have been nice to convert the existing data in the database. But I'm really curious as to what causes such strange behaviour.
Character-Set settings on both servers (from the running environment):
character_set_client........................ utf8
character_set_connection.................... utf8
character_set_database...................... utf8
character_set_filesystem.................... binary
character_set_results....................... utf8
character_set_server........................ utf8
character_set_system........................ utf8
collation_connection........................ utf8_unicode_ci
collation_database.......................... utf8_unicode_ci
collation_server............................ utf8_unicode_ci
I believe you have a different character set between your local environment and production. Look into these articles on how to detect it (How do I find out the default server character set in mysql?) and how to change it (Change server variable character_set_server).

MySQL Exception: Execute SQL Transaction

I am trying to commit a SQL transaction to MySQL but I cannot get past a MySQLSyntaxErrorException.
The code I am using is:
implicit connection =>
SQL("""
start transaction;
insert into projects(id_user, name, description) values({idUser}, {name}, {description});
set @last_id = last_insert_id();
insert into assigned(id_user, id_project) values({idUser}, @last_id);
commit;
""")
.on('idUser -> idUser,
'name -> project.name,
'description -> project.description
).execute()
The exception I get:
[MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'insert into projects(id_user, name, description) values(1, 'First inserted proje' at line 1]
I am starting to think that I can't execute such statements at all with Anorm.
You cannot use transactions that way. You have to understand that Anorm is simply a wrapper around the existing JDBC libraries. By default, when using withConnection and SQL:
DB.withConnection { conn =>
SQL("...
}
Your query is transformed into a PreparedStatement, meaning the ; characters cause errors.
Thus, if you want to use a transaction, you have to use Anorm's mechanism for that:
DB.withTransaction { conn =>
SQL("...
}