Use same mysqli prepared statement for different queries?

During some testing, a little question popped up. When I code database updates, I usually do so via callbacks written in PHP, to which I simply pass a given mysqli connection object as a function argument. Executing, for example, three queries across the same single connection proved to be much faster than closing and reopening the DB connection for each query in the sequence. This also works easily with SQL transactions; the connection can be passed along to callbacks without any issues.
My question is: can you also do this with prepared statement objects? That is, assuming we have successfully established a $conn object representing the mysqli connection, is something like this legit?
function select_users( $users_id, $stmt ) {
    $sql = "SELECT username FROM users WHERE ID = ?";
    mysqli_stmt_prepare( $stmt, $sql );
    mysqli_stmt_bind_param( $stmt, "i", $users_id );
    mysqli_stmt_execute( $stmt );
    return mysqli_stmt_get_result( $stmt );
}

function select_labels( $artist, $stmt ) {
    $sql = "SELECT label FROM labels WHERE artist = ?";
    mysqli_stmt_prepare( $stmt, $sql );
    mysqli_stmt_bind_param( $stmt, "s", $artist );
    mysqli_stmt_execute( $stmt );
    return mysqli_stmt_get_result( $stmt );
}
$stmt = mysqli_stmt_init( $conn );
$users = select_users( 1, $stmt );
$rappers = select_labels( "rapperxyz", $stmt );
Or is that bad practice, and should you rather use:
$stmt_users = mysqli_stmt_init( $conn );
$stmt_rappers = mysqli_stmt_init( $conn );
$users = select_users( 1, $stmt_users );
$rappers = select_labels( "rapperxyz", $stmt_rappers );
During testing, I noticed that the approach of passing a single statement object along to callbacks works for server calls where I run about four not-too-complicated DB queries via four corresponding callbacks in a row.
However, when I do a server call with about ten different queries, I sometimes (yes, only sometimes, for pretty much the same data across executions, which seems like weird behavior to me) get the error "Commands out of sync; you can't run this command now", along with other errors I've never seen before, such as the number of variables not matching the number of parameters, although they match perfectly when I check them. The only fix I found after some research was indeed to use a different statement object for each callback. So I just wondered: should you actually ALWAYS use ONE prepared statement object for ONE query, which you then may execute N times in a row?

Yes.
The "commands out of sync" error is because MySQL protocol is not like http. You can't send requests any time you want. There is state on the server-side (i.e. mysqld) that is expecting a certain sequence of requests. This is what's known as a stateful protocol.
Compare with a protocol like ftp. You can do an ls in an ftp client, but the list of files you get back depends on the current working directory. If you were sharing that ftp client connection among multiple functions in your app, you don't know that another function hasn't changed the working directory. So you can't be sure the file list you get from ls represents the directory you thought you were in.
In MySQL too, there's state on the server-side. You can only have one transaction open at a time. You can only have one query executing at a time. The MySQL client does not allow you to execute a new query where there are still rows to be fetched from an in-progress query. See Commands out of sync in the MySQL doc on common errors.
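To illustrate with the question's own tables, here's a hypothetical sketch of how this error can surface in mysqli when a result is left unbuffered (MYSQLI_USE_RESULT) and not fully fetched:
$result = mysqli_query( $conn, "SELECT username FROM users", MYSQLI_USE_RESULT );
$row = mysqli_fetch_row( $result ); // only the first row has been fetched...
// ...the server still holds pending rows for this query, so issuing a new
// query now fails with "Commands out of sync; you can't run this command now"
mysqli_query( $conn, "SELECT label FROM labels" );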
So if you pass your statement handle around to callback functions, how can any one of those functions know that it's safe to execute the statement?
IMO, the only safe way to use a statement is to use it immediately.
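To make that concrete, here's one possible refactor of the question's first callback (a sketch, not the only way to structure it): pass only the connection around, and let each callback prepare, execute, fully fetch, and close its own statement before returning.
function select_users( $conn, $users_id ) {
    $stmt = mysqli_prepare( $conn, "SELECT username FROM users WHERE ID = ?" );
    mysqli_stmt_bind_param( $stmt, "i", $users_id );
    mysqli_stmt_execute( $stmt );
    $rows = mysqli_fetch_all( mysqli_stmt_get_result( $stmt ), MYSQLI_ASSOC );
    mysqli_stmt_close( $stmt ); // nothing left in flight on the connection
    return $rows;
}
$users = select_users( $conn, 1 );
Because the statement never outlives the function that prepared it, no callback can step on another's in-progress statement.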

Related

Pass custom variables in MySQL connection

I am setting up a MySQL connection (in my case PDO, but it shouldn't matter) in a REST API.
The REST API uses internal authentication (username / password). There are multiple user groups accessing the REST API, e.g. customers, IT, backend, customer service. They all use the same MySQL connection in the end, because they also use the same endpoints most of the time.
In the MySQL database I would like to save the user who is responsible for a change to a data set.
I would like to implement this on the MySQL layer through a trigger, so I have to pass the user information from the REST API to this trigger somehow. There are some MySQL calls like CURRENT_USER() or status that let you query meta-information. My idea was to somehow pass additional information in the connection string to MySQL, so that I don't have to use different database users but am still able to retrieve this information from within the trigger.
I have done some research and don't think it is possible, but since it would facilitate my task a lot, I still wanted to ask on SO whether someone knows a solution to my problem.
I would set a session variable on connect.
Thanks to the comment from @Álvaro González for reminding me about running a command on PDO init.
The suggestion of adding data to a temp table isn't necessary. It's just as good to set one or more session variables, assuming you need just a few scalar values.
$pdo = new PDO($dsn, $user, $password, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::MYSQL_ATTR_INIT_COMMAND => "SET @myvar = 'myvalue', @myothervar = 'othervalue'"
]);
It's also possible to set session variables at any time after connecting, with a call to $pdo->exec():
$pdo->exec("SET @thirdvar = 1234");
You can read session variables in your SQL queries:
$stmt = $pdo->query("SELECT @myvar, @myothervar");
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    print_r($row);
}
You can also read session variables in triggers:
CREATE TRIGGER mytrig BEFORE INSERT ON mytable
FOR EACH ROW
SET NEW.somecolumn = @myvar;

Perl/MySQL Relationship Query

I have the following Perl code that will eventually be a webpage:
my($dbh) = DBI->connect("DBI:mysql:host=dbsrv;database=database","my_sqlu","my_sqlp") or die "Canny Connect";
my($sql) = "SELECT * FROM hardware where srv_name = \"$srv_name\"";
my($sth) = $dbh->prepare($sql);
$sth->execute();
$sth->bind_col( 1, \my($db_id));
$sth->bind_col( 2, \my($db_srv_name));
$sth->bind_col( 5, \my($db_site));
$sth->fetchrow();
$sth->finish ();
my($sql) = "SELECT sites.\`site_code\`, sites.\`long_name\` FROM \`hardware\` JOIN \`sites\` ON \`sites\`.id=\`hardware\`.\`site\` where \`hardware\`.\`id\`=\'$db_id\'";
my($sth) = $dbh->prepare($sql);
$sth->execute();
$sth->bind_col( 1, \my($db_site_code));
$sth->bind_col( 2, \my($db_long_name));
$sth->fetchrow();
$sth->finish ();
$dbh->disconnect;
print "$db_site_code<br>$db_long_name";
The query above does work, but what I'm trying to find out is: is there any way I can run one SQL query and get db_site_code and db_long_name from the sites table without running the second query? The hardware table has a foreign key (site) referencing id in the sites table.
When you read anything about relational DBs, they all say joins are by far the most efficient way to get data out of your database, but I just can't see how this is any quicker than just running two SELECT queries. Surely what I've done above takes longer than "select from hardware where srv_name = $srv_name" followed by "select from sites where id = db_site_id"? Any comments are greatly appreciated.
Here's an example of how to do this with placeholders as well as a combined query. If I understand your DB correctly, you can just omit the first query and add the server name instead of the ID in the second query. I might be mistaken there, but my example will still be of value for the Perl suggestions.
use strict;
use warnings;
use DBI;
# Create DB connection
my $dbh = DBI->connect("DBI:mysql:host=dbsrv;database=database","my_sqlu","my_sqlp")
or die "Cannot connect to database";
# Create the statement handle
my $sth = $dbh->prepare(<<'SQLQUERY') or die $dbh->errstr;
SELECT s.site_code, s.long_name
FROM hardware h
JOIN sites s ON s.id=h.site
WHERE h.srv_name=?
SQLQUERY
$sth->execute('Server Name'); # There's the parameter
my $res = $sth->fetchrow_hashref; # $res now has a hash ref with the first row
print "$res->{'site_code'}<br>$res->{'long_name'}";
There were a few issues with your code I'd like to point out to you:
You should always use strict and use warnings. They make your life easier!
You can leave the parens ( and ) out with my. Saves you keystrokes and makes your code more readable.
You can (but do not have to, this is preference!) leave out the parens after method calls that do not have arguments. Decide this for yourself.
As was already pointed out, always use placeholders with DBI. They are very simple. Now you don't have to escape the " with backslashes. Instead, just use ?.
Once you've combined your query, you can put it in a heredoc (<<'SQLQUERY'). It's a string that lasts from the next line to the delimiter (SQLQUERY). That way, your query is easier to read.
You can use one of the ref-fetchrow-methods to get all your result's columns into one hash. I used $sth->fetchrow_hashref because I find it most convenient. You've got the complete row and all the columns are named hash keys.
If called in a small scope (like a short sub), you don't need to finish a statement handle. It will be finished and destroyed by Perl automatically once it goes out of scope.
Another thing about performance: if this is run only occasionally, don't worry about it. You can profile your queries with DBI::Profile to see which way is faster, but you should only do that if you really need to.
In my experience, especially with very large queries and a very busy database, two or three queries are often a lot better than a single big one, because they do not tie up the server's resources. But again, that is something you need to profile and benchmark (if the need arises).
Aside from @tadman's recommendation to use placeholders, I'd tag this as an SQL question as well, but your solution is to simply use
srv_name = \"$srv_name\"
in your second where clause in place of the id lookup, so that your statement is:
"SELECT sites.\`site_code\`, sites.\`long_name\` FROM \`hardware\` JOIN \`sites\` ON \`sites\`.id=\`hardware\`.\`site\` where \`hardware\`.\`srv_name\`=\"$srv_name\"";
I strongly second @tadman's suggestion though -- use prepared statements and/or placeholders whenever possible.

how to compress data using perl dbi with mysql

I am using a Perl DBI connection to pull data remotely, and I am looking to compress the data if possible.
Is there a way to compress the data over the network with Perl DBI for MySQL?
Here is my snippet for getting data:
my $sth = $dbh->prepare("SELECT UUID(), '$node', 1, 2, 3, 4, vts FROM $tblist");
$sth->execute();
while (my($uid, $hostnm,$1,$2,$3,$upd,$vts) = $sth->fetchrow_array() ) {
print $gzip_fh "rec^A$uid^Ehost^A$hostnm^E1^A$1^E2^A$2^E3^A$3^E4^A$upd^Evts^A$vts^D";
}
$sth->finish;
Your best option is to use prepared statements; that way, once you have compiled the statement once, you only have to send the arguments over the wire each time you run the query.
The example below shows how to perform a simple prepare. If you are going to run the same query over and over again, keep your $sth around and keep calling $sth->execute() on it with new variables. This cuts down on network traffic because you're not sending the query text each time you run $sth->execute($var); you're just passing an identifier for the prepared statement and the variables that go into the ? placeholders.
$sth = $dbh->prepare("select * from table where column=?");
$sth->execute($var);

Why do our queries get stuck on the state "Writing to net" in MySql?

We have a lot of queries
select * from tbl_message
that get stuck in the state "Writing to net". The table has 98k rows.
The thing is... we aren't even executing any query like that from our application, so I guess the question is:
What might be generating the query?
...and why does it get stuck in the state "Writing to net"?
I feel stupid asking this question, but I'm 99.99% sure that our application is not executing a query like that against our database... We are, however, executing a couple of queries against that table using WHERE clauses:
SELECT Count(*) as StrCount FROM tbl_message WHERE m_to=1960412 AND m_restid=948
SELECT Count(m_id) AS NrUnreadMail FROM tbl_message WHERE m_to=2019422 AND m_restid=440 AND m_read=1
SELECT * FROM tbl_message WHERE m_to=2036390 AND m_restid=994 ORDER BY m_id DESC
I have searched our application several times for select * from tbl_message but haven't found anything... But still, the query log on our MySQL server is full of select * from tbl_message queries.
Since applications don't magically generate queries on their own, it's rather likely that there's a mistake somewhere in your application that's causing this. Here are a few suggestions you can use to track it down. I'm guessing that you're using PHP, since you're using MySQL, so I'll use that for my examples.
Try adding comments in front of all your queries in the application, like this:
$sqlSelect = "/* file.php, class::method() */";
$sqlSelect .= "SELECT * FROM foo ";
$sqlSelect .= "WHERE criteria";
The comment will show up in your query log. If you're using some kind of database API wrapper, you could potentially add these messages automatically:
function query($sql)
{
    $backtrace = debug_backtrace();
    // The function that executed the query
    $prev = $backtrace[1];
    $newSql = sprintf("/* %s */ ", $prev["function"]);
    $newSql .= $sql;
    mysql_query($newSql) or handle_error();
}
In case you're not using a wrapper, but rather executing the queries directly, you could use the runkit extension and the function runkit_function_rename to rename mysql_query (or whatever you're using) and intercept the queries.
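A rough sketch of that interception idea (assuming the runkit extension is installed and runkit.internal_override is enabled, which is required for renaming built-in functions):
runkit_function_rename('mysql_query', 'mysql_query_original');
function mysql_query($sql, $link = null)
{
    // Log every query together with its immediate caller, then pass it through
    $backtrace = debug_backtrace();
    $caller = isset($backtrace[1]['function']) ? $backtrace[1]['function'] : 'global scope';
    error_log("$caller: $sql");
    return $link === null ? mysql_query_original($sql) : mysql_query_original($sql, $link);
}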
There are (at least) two data-retrieval modes for MySQL. With the C API, you call either mysql_store_result() or mysql_use_result().
mysql_store_result() returns once all result data has been transferred from the MySQL server to your process's memory, i.e. no data has to be transferred for further calls to mysql_fetch_row().
With mysql_use_result(), however, each record has to be fetched individually if and when mysql_fetch_row() is called. If your application does some computing that takes longer than the period specified in net_write_timeout between two calls to mysql_fetch_row(), the MySQL server considers your connection to be timed out.
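If it helps to see the same distinction from PHP, here's a hedged sketch using mysqli, where the default buffered mode corresponds to mysql_store_result() and MYSQLI_USE_RESULT corresponds to mysql_use_result():
$result = $mysqli->query("SELECT * FROM tbl_message", MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
    // Rows are streamed from the server one fetch at a time. If the work done
    // here takes longer than net_write_timeout, the server may give up on the
    // connection while the query still shows as "Writing to net"
    process_row($row); // hypothetical processing function
}
$result->free(); // finish the transfer before running anything else on this connection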
Temporarily enable the query log by putting
log=
into your my.cnf file, restart mysql and watch the query log for those mystery queries (you don't have to give the log a name, it'll assume one from the host value).

Why does my INSERT sometimes fail with "no such field"?

I've been using the following snippet in developments for years. Now, all of a sudden, I get a "DB Error: no such field" warning:
$process = "process";
$create = $connection->query
(
"INSERT INTO summery (process) VALUES($process)"
);
if (DB::isError($create)) die($create->getMessage($create));
but it's fine if I use numerics
$process = "12345";
$create = $connection->query
(
"INSERT INTO summery (process) VALUES($process)"
);
if (DB::isError($create)) die($create->getMessage($create));
or write the value directly into the expression
$create = $connection->query(
    "INSERT INTO summery (process) VALUES('process')"
);
if (DB::isError($create)) die($create->getMessage($create));
I'm really confused ... any suggestions?
It's always better to use prepared queries and parameter placeholders. Like this in Perl DBI:
my $process=1234;
my $ins_process = $dbh->prepare("INSERT INTO summary (process) values(?)");
$ins_process->execute($process);
For best performance, prepare all your often-used queries right after opening the database connection. Many database engines will store them on the server for the duration of the session, much like small temporary stored procedures.
It's also very good for security. Writing the value into the insert string yourself means that you must get the escaping right in every SQL statement. Using a prepare-and-execute style means that only one place (execute) needs to know about escaping, if escaping is even necessary.
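Since the question itself uses PEAR DB rather than Perl DBI, the same idea there might look like this (a sketch assuming PEAR DB's query() accepts a parameter array to fill the ? placeholder and handle quoting):
$process = "process";
$create = $connection->query(
    "INSERT INTO summery (process) VALUES (?)",
    array($process)
);
if (DB::isError($create)) die($create->getMessage());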
Ditto what Zan Lynx said about placeholders. But you may still be wondering why your code failed.
It appears that you forgot a crucial detail from the previous code that worked for you for years: quotes.
This (tested) code works fine:
my $thing = 'abcde';
my $sth = $dbh->prepare("INSERT INTO table1 (id,field1)
VALUES (3,'$thing')");
$sth->execute;
But this next code (lacking the quotation marks in the VALUES clause, just as your first example does) produces the error you report, because VALUES (3,$thing) resolves to VALUES (3,abcde), which makes your SQL server look for a field called abcde, and there is no field by that name.
my $thing = 'abcde';
my $sth = $dbh->prepare("INSERT INTO table1 (id,field1)
VALUES (3,$thing)");
$sth->execute;
All of this assumes that your first example is not a direct quote of the code that failed as you describe, and is therefore not what you intended. It resolves to:
"INSERT INTO summery (process) VALUES(process)"
which, as mentioned above, makes your SQL server read the item in the VALUES set as another field name. As given, this actually runs on MySQL without complaint and fills the field called 'process' with NULL, because that's what the field called 'process' contained when MySQL looked there for a value while creating the new record.
I do use this style for quick throwaway hacks involving known, secure data (e.g. a value supplied within the program itself). But for anything involving data that comes from outside the program, or that might possibly contain characters other than [0-9a-zA-Z], it will save you grief to use placeholders.