We noticed that some of the queries we were running in our Spring/Java environment were coming back with truncated columns. The problem was that group_concat_max_len was set too small.
I tried modifying our database definition .sql file to include a SET SESSION statement:
DROP DATABASE IF EXISTS acmedb;
CREATE DATABASE acmedb;
USE acmedb;
SET SESSION group_concat_max_len = 6999;
CREATE TABLE...
However, this does not take effect after a database reload. I have to run the statement through a JdbcTemplate execute() call for it to propagate. While that fixes the problem, I was wondering if anyone can tell me why executing it via the SQL script does nothing.
EDIT
In another attempt to fix the problem, I tried dropping the following line into our DAO init() method:
this.jdbcTemplateObject.execute("SET SESSION group_concat_max_len = 6999 ");
This fixes the problem... sometimes. I think the session eventually expires and the change is lost. What are the rules for MySQL SET SESSION in terms of how long the setting lasts? I could run this statement before every query, but that seems like a lot of unnecessary overhead.
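If keeping a session-level setting alive turns out to be too fragile, the global counterpart may be an option. A minimal sketch, assuming an account with sufficient privileges (SUPER or SYSTEM_VARIABLES_ADMIN), run once from any MySQL client:

SET GLOBAL group_concat_max_len = 6999;   -- every NEW connection inherits this value
SET PERSIST group_concat_max_len = 6999;  -- MySQL 8.0+ only: also survives a server restart

Connections that are already open (including pooled ones) keep their old session value until they reconnect.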
Related
I've been trying to change the innodb_lock_wait_timeout variable in MySQL because the default value of 50 seconds is not suitable for my Django application.
I've used the following commands:
set innodb_lock_wait_timeout=2;
show variables like 'innodb_lock_wait_timeout';
Although the show variables command confirms that my changes have been made, the timeout is still 50 seconds when my Django app is trying to acquire a lock on a locked record.
This is the code snippet that I'm using to lock a particular record using Django:
form = Form.objects.select_for_update().filter(id = form_details[FORM_ID]).first()
I've tried restarting the MySQL service and even restarting my whole system, all to no avail.
MySQL variables have two scopes: session and global.
A plain 'SET x = y' sets only the session variable x. Django opens its own connection, and therefore its own session, so it is unaffected.
Use:
set global innodb_lock_wait_timeout=2;
show global variables like 'innodb_lock_wait_timeout';
Global variables are copied into session variables at the start of each connection. If you want the change to affect Django only, you'll need to set the session variable manually in your code at the beginning of the connection.
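A quick way to see the copy-at-connect behaviour from the mysql command line (a sketch; the timeout value is just for illustration):

SET GLOBAL innodb_lock_wait_timeout = 2;
SHOW SESSION VARIABLES LIKE 'innodb_lock_wait_timeout';  -- still 50 in this already-open connection
-- reconnect, then:
SHOW SESSION VARIABLES LIKE 'innodb_lock_wait_timeout';  -- now 2, copied from the global at connect time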
I have a table with a few variables (let's say tbl_param). I have to query tbl_param every time I need to use one of them. I have a case where I need to use those variables inside triggers, so I have to query tbl_param each time those triggers are executed. But what if I am going to use the same variable for every user that connects to the db? It would be logical to set them only once, since they would not change often (only when the variable in question gets updated in tbl_param). I know I can set session variables, but that would not solve the problem, as they would be accessible only for the duration of one connection. So when a new connection is made, I would need to query tbl_param again. Can I define, for instance, a new system variable that gets loaded when the server boots up and that I could update as tbl_param gets updated? Or is there another alternative?
There are system variables that can be defined in the my.cnf (or my.ini) file, but this requires file permissions on that file. On my local server (Ubuntu 20.04.2) it is /etc/mysql/mariadb.conf.d/50-server.cnf. This would not work on a remote server, though, because you don't have access to the system files in /etc.
I found an alternative that serves the purpose you have in mind: SET a session variable (wait for it; I know session variables are not visible in other sessions), but initialize its value from a table. That is, always initialize the session variable at connection startup from the table (and update the table as required).
For example, in a PHP application (an MIS, say) that needs to disable MySQL triggers: to disable a trigger on a table for specific record(s), instead of dropping the triggers, inserting the records, and then recreating the triggers, just rewrite each trigger with a minor change so it disables itself based on a session variable.
Your MIS then always initializes that session variable to a value fetched from the table, and the triggers skip or run their logic based on that value, as in the sketch below.
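A minimal sketch of that idea. The table and column names other than tbl_param are invented for illustration, and the tbl_param columns (param_name, param_value) are an assumption about your schema:

DELIMITER //
CREATE TRIGGER trg_invoice_bi BEFORE INSERT ON invoice
FOR EACH ROW
BEGIN
  -- run the trigger logic only when the session flag is not set
  IF @skip_triggers IS NULL OR @skip_triggers = 0 THEN
    SET NEW.created_at = NOW();
  END IF;
END//
DELIMITER ;

-- once per connection, the application initializes the flag from tbl_param:
SET @skip_triggers = (SELECT param_value FROM tbl_param WHERE param_name = 'skip_triggers');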
I have a mysql view that requires several session variables to be set before querying it.
I have tried many ways to set that session variable but have been unable to.
Neither setting it on the database connection, nor using transactions, nor executing a SET @variable := value statement in the SQL just before the query has worked for me.
The reason I need this is that I'm using this workaround for a view with GROUP BY:
http://www.percona.com/blog/2010/05/19/a-workaround-for-the-performance-problems-of-temptable-views/
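Roughly, that workaround pushes a parameter into the view through a stored function that returns a session variable (a simplified sketch; the table, view, and function names here are invented):

CREATE FUNCTION param_customer_id() RETURNS INT DETERMINISTIC
  RETURN @param_customer_id;

CREATE VIEW v_order_totals AS
SELECT customer_id, SUM(amount) AS total
FROM orders
WHERE customer_id = param_customer_id()
GROUP BY customer_id;

-- every session has to set the variable before selecting from the view:
SET @param_customer_id := 42;
SELECT * FROM v_order_totals;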
Do you know how I can set a session variable every time I open a connection in Pentaho? I need this for CDE and Saiku.
Thanks!
I am working on a Perl script that executes multiple SQL INSERT statements.
As I am inserting 5000 rows in one INSERT, I have to increase max_allowed_packet.
When I run the script for the first time it gives a "packet size bigger than max_allowed_packet" error, but when it runs again it doesn't.
I have set autocommit=0 and execute a commit after I run $dbh->do("SET global max_allowed_packet=134217728") or $logger->error("Error : $DBI::errstr");
Do I specify this when I am connecting to the database?
Also, it would be great if you could tell me an alternative to multiple INSERT statements.
P.S.: I know I can make changes in the config files, but I want to do it dynamically, and I also know about prepare and execute statements.
I think what you want is to resolve this problem while keeping your server running, because if you changed the variable's value in the config file you would have to restart the MySQL server for the change to go live.
Now, it is clear from $dbh->do("SET global max_allowed_packet=134217728") or $logger->error("Error : $DBI::errstr"); that this is a dynamic variable, so it can be changed at runtime.
What you have to do is go to your MySQL server console and run the following command:
SET GLOBAL max_allowed_packet=134217728;
Now you are done updating the value of the variable. You can check the value with the following query:
SHOW VARIABLES LIKE 'max%';
It will show you all the variables whose names start with 'max', along with their values.
Now you're done. This is to the best of my knowledge, and I hope it solves the issue.
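As far as I understand, this is also why the script fails on the first run and works on the next: SET GLOBAL only affects connections opened after it runs, and the session copy of max_allowed_packet is read-only. A quick check from the console (a sketch):

SET GLOBAL max_allowed_packet = 134217728;   -- 128 MB; needs a privileged account
SELECT @@global.max_allowed_packet;          -- the new value, picked up by new connections
SELECT @@session.max_allowed_packet;         -- already-open connections keep the value from connect time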
I want to implement a batch MySQL script to do something in a database. The thing is that, for each master id I have, I want to insert 4 tuples. These tuples should be added in a transaction, which means that if one of the 4 tuples fails, the transaction should be rolled back. I then need some catching mechanism to detect that a query has failed. I CAN ONLY USE PURE MYSQL, neither PHP nor Perl etc. I cannot even create a stored procedure to do this. In Microsoft SQL Server there is the @@ERROR variable, which solved my problem, but in MySQL we do not have any system variable showing the error code.
How can I do that?
Cheers,
This is an ugly workaround, but it worked for me when I was trying to import a batch of SQL queries and wrap the entire thing within a transaction, so that I could roll back if any of the SQL queries errored.
Since the batch was massive, an SQL procedure with a condition handler was not an option either.
You have to do this manually, so it really isn't a solution unless you are batching:
First, make sure your entire batch is stored in an SQL file. The SQL file should only contain the batch queries, and no transaction control queries.
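For example, file.sql might contain nothing but plain data statements; the table and column names below are invented for illustration:

INSERT INTO detail (master_id, seq, val) VALUES (1, 1, 'a');
INSERT INTO detail (master_id, seq, val) VALUES (1, 2, 'b');
INSERT INTO detail (master_id, seq, val) VALUES (1, 3, 'c');
INSERT INTO detail (master_id, seq, val) VALUES (1, 4, 'd');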
Then start up a MySQL command line client and type in transaction commands manually:
mysql> SET AUTOCOMMIT = 0;
mysql> START TRANSACTION;
Then tell the command line client to run the batch file:
mysql> SOURCE path/to/file.sql
After that you can simply manually COMMIT; or ROLLBACK; depending on how happy you are with the result of your queries.
This is such a kludge, though. Anyone have a better approach?