I come across this in sqlx docs:
On most databases, statements will actually be prepared behind the scenes whenever a query is executed. However, you may also explicitly prepare statements for reuse elsewhere with sqlx.DB.Prepare():
However, I can't find any evidence that databases actually prepare every query.
So is it true, and should I call Prepare() manually?
MySQL and PostgreSQL definitely don't prepare every query. You can execute queries directly, without doing a prepare & execute sequence.
The Go code in the sqlx library probably does this, but it's elective, done to simplify the Go interface when you pass args.
You don't need to use the Prepare() func manually, unless you want to reuse the prepared query, executing it multiple times with different args.
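For what it's worth, the prepare-once/execute-many pattern that Prepare() exposes can be sketched like this. This uses Python's stdlib sqlite3 purely as a stand-in (sqlite3 caches the compiled statement when the same SQL string is reused, which is the implicit version of what sqlx's db.Prepare() does explicitly); the table and data are made up for illustration:

```python
import sqlite3

# Prepare-once / execute-many: reusing one parameterized statement for
# several executions, instead of parsing a fresh SQL string each time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

insert_sql = "INSERT INTO users (id, email) VALUES (?, ?)"
for row in [(1, "a@x.com"), (2, "b@x.com"), (3, "c@x.com")]:
    # Same SQL string each time -> the cached compiled statement is reused.
    conn.execute(insert_sql, row)

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 3
```

The payoff is exactly the one described above: it only matters when the same statement runs many times; a one-off query gains nothing from an explicit prepare.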
Related
I have multiple databases with the same structure but different unrelated names, and I want to run this query on all of them:
SELECT * FROM <dbname>.`cms_users` WHERE `email` LIKE '%admin.something%'
I searched a bit in information_schema and found a table called SCHEMATA that contains the database names, which is ultimately what I need to run the query above. I can do it manually by substituting each database name into the query myself, but I want to do it automatically using a MySQL loop. However, I'm not sure how to do this, or how to concatenate the database name into the query and run it.
My pseudo code for this would be as follows:
array dbnames= select `SCHEMA_NAME` from `information_schema`.`SCHEMATA`;
loop start on dbnames
SELECT * FROM dbnames[index].`cms_users` WHERE `email` LIKE '%admin.bilsi%';
loop end
Any help to put this or a better logic into mysql syntax? Thanks.
SQL Server and other RDBMS products allow you to do scripting in the console: you can use anything you can use in stored procedures. MySQL is unfortunately much more limited, and does not allow flow-control constructs outside of stored procedures and functions. That means no loops, no if-statements, no cursors. You can use variables, but only the ones that start with @.
Furthermore, if you do a loop, you'll be sending multiple result-sets back to the client. If you're just running queries from a console, this is fine. If the results are something you intend for a program to use, this may not be desirable (it may not be desirable in either case).
If you are doing this in a one-off sort of way, get a list of databases manually, and then use copy and paste to build a query using UNION ALL, like so:
SELECT * FROM `first_db`.`cms_users` WHERE `email` LIKE '%admin.bilsi%'
UNION ALL
SELECT * FROM `second_db`.`cms_users` WHERE `email` LIKE '%admin.bilsi%'
UNION ALL
SELECT * FROM `third_db`.`cms_users` WHERE `email` LIKE '%admin.bilsi%';
If you expect the number of databases to be changing and you don't want to have to update your query, or you are sending it from a program, you can use dynamic SQL. This means building a query in a string variable and then submitting it using MySQL's prepared statement functionality.
On the console, you can use something like this (see: http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html):
SELECT GROUP_CONCAT(CONCAT("SELECT * FROM `", SCHEMA_NAME, "`.`cms_users` WHERE `email` LIKE '%admin.bilsi%'") SEPARATOR ' UNION ALL ')
INTO @stmt_sql
FROM INFORMATION_SCHEMA.SCHEMATA
WHERE SCHEMA_NAME NOT IN ('mysql', 'test', 'tmp', 'information_schema', 'sys', 'performance_schema');
PREPARE stmt FROM @stmt_sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
This generates the query I showed above using information from the INFORMATION_SCHEMA pseudo-database, namely, the list of databases by name (which MySQL incorrectly calls schemas). The rest is just the boilerplate code needed to prepare and execute a prepared statement, as per the linked documentation.
There are other ways, but they are even more tedious and won't buy you much.
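If you are building the statement from application code rather than the console, the same dynamic-SQL trick is just string assembly over the schema list. A minimal sketch in Python (schema names hard-coded here for illustration; in practice they would come from a query against INFORMATION_SCHEMA.SCHEMATA):

```python
# Build one UNION ALL query over every database, mirroring what the
# GROUP_CONCAT statement above produces server-side.
schemas = ["first_db", "second_db", "third_db"]  # hypothetical names

parts = [
    f"SELECT * FROM `{name}`.`cms_users` WHERE `email` LIKE '%admin.bilsi%'"
    for name in schemas
]
query = " UNION ALL ".join(parts)
print(query)
```

The result is a single statement, so the client gets back one result set instead of one per database.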
My query includes MySQL variable declarations such as:
SET @var1 = 0, @var2 = 0;
These variables are used in the SELECT query,
which works fantastically in phpMyAdmin.
But if I write it as a query in Yii, it doesn't work:
it throws an exception and does not execute. But if I remove
SET @var1 = 0, @var2 = 0;
then the query executes, but with no data fetched from the DB, because it requires the SET variables to fetch the result.
How do I declare the SET variables of MySQL in Yii? Is there any way out?
As long as you reuse the same CDbCommand, you can issue multiple queries to the DB using the same connection. That will do what you need (and is what phpMyAdmin does).
Your problem is that you're doing two queries on different connections to the DB, so your @vars aren't lasting between connections.
If you have set statements, you're probably writing something that is a little more procedural than a single sql statement is designed to deliver.
I would look at writing a stored procedure to do the job (http://forums.mysql.com/read.php?98,358569). Although they are a bit old school, they will probably do what you want quite effectively.
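To see why the connection matters, here is an analogy sketched with Python's stdlib sqlite3 (sqlite has no @vars, so a TEMP table stands in for per-connection state; MySQL user variables behave the same way, scoped to one connection):

```python
import sqlite3

# Two connections to the same shared in-memory database.
conn1 = sqlite3.connect("file::memory:?cache=shared", uri=True)
conn2 = sqlite3.connect("file::memory:?cache=shared", uri=True)

# Per-connection state, set on conn1 only (like SET @var1 = 0).
conn1.execute("CREATE TEMP TABLE session_vars (name TEXT, value INTEGER)")
conn1.execute("INSERT INTO session_vars VALUES ('var1', 0)")

# Same connection: the state is visible.
ok = conn1.execute("SELECT value FROM session_vars").fetchone()

# Different connection: the state simply isn't there.
failed = False
try:
    conn2.execute("SELECT value FROM session_vars")
except sqlite3.OperationalError:
    failed = True

print("conn1 sees it:", ok, "| conn2 sees it:", not failed)
```

This is exactly what happens when the SET and the SELECT go out on two different DB connections: the second connection never saw the SET.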
For our system we are using multiple databases with the same structure. For example when we have 1000 customers, there will be 1000 databases. We've chosen to give each customers his own database, so we can delete all his data at once without any hassle.
Now I have to update the database structure several times a year. So I began to write a stored procedure which loops through all schemas. But I got stuck with executing a dynamic USE statement.
My code is as follows:
DECLARE V_SCHEMA VARCHAR(100);
SET V_SCHEMA = 'SomeSchemaName';
SET @QUERYSTRING = CONCAT('USE ', V_SCHEMA);
PREPARE S FROM @QUERYSTRING;
EXECUTE S;
DEALLOCATE PREPARE S;
When I execute this code I get an error which says Error Code: 1295. This command is not supported in the prepared statement protocol yet. So I assume that I cannot change the active database in a procedure.
I have searched the internet, but the only thing I found was creating a string of each alter query and prepare/execute/deallocate it. I hope there is a better solution for this. I could write a shell script that loops through the schemas and executes a SQL file on them, but I prefer a stored procedure that takes care of this.
Does anyone know how to make this work?
Thank you for your help!
EDIT: I use the latest stable version of MySQL 5.6
If there are some known databases, then try to write a CASE.
Otherwise, do not execute USE statement using prepared statements; instead, build other statements (SELECT, INSERT, UPDATE, ...) with full name - <database name> + '.' + <object name>, and execute them using prepared statements.
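The workaround can be sketched as follows: generate one fully qualified statement per schema and execute each one, instead of switching the active database. A Python illustration (the schema names and the ALTER statement are hypothetical; in the real procedure the names come from INFORMATION_SCHEMA.SCHEMATA):

```python
# Qualify every object with its schema instead of issuing a dynamic USE.
schemas = ["customer_001", "customer_002"]
ddl = "ALTER TABLE `{schema}`.`cms_users` ADD COLUMN `last_login` DATETIME NULL"

statements = [ddl.format(schema=s) for s in schemas]
for stmt in statements:
    # In the stored procedure, each of these would go through
    # SET @sql = ...; PREPARE s FROM @sql; EXECUTE s; DEALLOCATE PREPARE s;
    print(stmt)
```

Since `<database>.<object>` names work in prepared statements where USE does not, this sidesteps error 1295 entirely.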
If you put your structure changes into a stored procedure in a temporary schema, you can do this within a Workbench SQL window.
You can build your iteration script using a query against the information_schema, e.g.
SELECT GROUP_CONCAT(CONCAT('USE ',schema_name,'; CALL tmp.Upgrade')
SEPARATOR ';\n') AS BldCode
FROM information_schema.schemata
WHERE schema_name NOT IN
('information_schema', 'performance_schema', 'mysql', 'sakila', 'world', 'tmp')
Since you cannot execute this as a prepared statement, you can copy the SQL result into a new SQL window, and run that.
Please note that the structure changes stored procedure would need to operate on the current schema, rather than specifying schemas.
I'm not a DB expert, but I've inherited some of the responsibility for a fairly large production MySQL DB from a guy who seems to have been somewhat to severely incompetent but with the occasional flares of brilliance. Most issues I've been able to sort out myself, but this one has me stumped and I haven't seen anything addressing it anywhere:
Is there any sane reason to have a stored procedure that is nothing more than a wrapper for a prepared statement?
Something along these lines:
CREATE PROCEDURE foo_search (IN searchParam1 int, IN searchParam2 varchar(255))
BEGIN
SET @local1 = searchParam1;
SET @local2 = searchParam2;
PREPARE stmt FROM
'...'; -- Fairly complex nested SELECT statement
EXECUTE stmt USING @local1, @local2;
END
And that's it. My understanding of prepared statements is that their benefit lies in sanitizing input (already handled by the PHP framework we use), and reducing communication back-and-forth (compromised by being within a stored proc).
Is this pure and simple pointless insanity as it appears, or am I missing something?
Mike listed two good reasons. Here are a couple more:
Reduces the amount of network traffic as CALL foo_search(searchParam1, searchParam2) will be less data than sending the entire SQL statement each time.
Preparing the statement on the server may improve performance. After benchmarking several different methods, we found that for complex SQL statements, preparing the statement only once, on the server, performed the best. Here's an example:
CREATE PROCEDURE foo_search (IN searchParam1 int, IN searchParam2 varchar(255))
BEGIN
SET @local1 = searchParam1;
SET @local2 = searchParam2;
IF ISNULL(@foo_search_prepared) THEN
SET @foo_search_prepared = TRUE;
PREPARE stmt FROM
'...'; -- Fairly complex nested SELECT statement
END IF;
EXECUTE stmt USING @local1, @local2;
END
There could be.
Perhaps the guy was using a DB connection library that doesn't support prepared statements, so he just wanted to get an equivalent effect.
Perhaps he wanted to call that stored procedure from a trigger.
I have many tens of thousands of rows of data that need to be inserted into a MySQL InnoDB table from a remote client. The client (Excel VBA over MySQL ODBC connector via ADO) can either generate a CSV and perform a LOAD DATA LOCAL INFILE, or else can prepare an enormous INSERT INTO ... VALUES (...), (...), ... statement and execute that.
The former requires some rather ugly hacks to overcome Excel's inability to output Unicode CSV natively (it only writes CSV in the system locale's default codepage, which in many cases is a single-byte character set and therefore quite limited); but the MySQL documentation suggests it could be 20 times faster than the latter approach (why?), which also "feels" as though it might be less stable due to the extremely long SQL command.
I have not yet been able to benchmark the two approaches, but I would be very interested to hear thoughts on likely performance/stability issues.
I'm thinking maybe a hybrid solution would work well here... As in...
First create a prepared statement for performance
PREPARE stmt1 FROM 'INSERT INTO table (column1, column2, ...) VALUES (?, ?, ...)';
Observe that the ? marks are actual syntax - you use a question mark wherever you intend to eventually use a value parsed from the CSV file.
Write a procedure or function that opens the .CSV file and enters into a loop that reads the contents one row at a time (one record at a time), storing the values of the parsed columns in separate variables.
Then, within this loop, just after reading a record into local variables, you set the values in the prepared statement to your current record in local variables, as in...
SET @a = 3;
SET @b = 4;
There should be the same number of SET statements as there are columns in the CSV file. If not, you have missed something. The order is extremely important: the variables listed in the EXECUTE ... USING clause must match, position for position, the ? marks in the prepared INSERT statement.
After setting all the parameters for the prepared statement, you then execute it.
EXECUTE stmt1 USING @a, @b;
This then is the end of the loop. Just after exiting the loop (after reaching end of file of the CSV), you must release the prepared statement, as in...
DEALLOCATE PREPARE stmt1;
Important things to keep in mind are ...
Make sure you prepare the INSERT statement before entering into the loop reading records, and make sure you DEALLOCATE the statement after exiting the loop.
Prepared statements allow the database to pre-compile and optimize the statement one time, then execute it multiple times with changing parameter values. This should result in a nice performance increase.
I am not certain about MySQL, but some databases also let you specify a number of rows to cache before a prepared statement actually executes across the network. If this is possible with MySQL, it lets you tell the database that, although you call execute on the statement for every row read from the CSV, it should batch up the statements until the specified number of rows is reached and only then execute across the network. In this way performance is greatly increased, as the database may batch up 5 or 10 INSERTs and execute them in a single round trip over the network instead of one per row.
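The row-by-row vs. batched trade-off is easy to demonstrate. A sketch using Python's stdlib sqlite3 as a stand-in for MySQL (the table and data are invented for illustration): executemany() reuses one prepared INSERT for every row, and wrapping the whole batch in a single transaction avoids a commit per row, which are the same reasons the batched approaches win in MySQL.

```python
import sqlite3

# Fabricated sample data standing in for the parsed CSV rows.
rows = [(i, f"user{i}@example.com") for i in range(10_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cms_users (id INTEGER, email TEXT)")

with conn:  # one transaction for the whole batch, not one per row
    # executemany() prepares the INSERT once and binds each row to it.
    conn.executemany("INSERT INTO cms_users VALUES (?, ?)", rows)

n = conn.execute("SELECT COUNT(*) FROM cms_users").fetchone()[0]
print(n)  # 10000
```

The same idea applies to the ADO client in the question: fewer, larger units of work over the wire beat tens of thousands of individual statements.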
Hope this helps and is relevant. Good Luck!
Rodney