As we know, MySQL supports multiple-statement queries, i.e., we can execute two or more statements separated by semicolons with only one function call. This can be done using, for example, the PHP function mysqli_multi_query().
Now I have a question. I want to execute the following two statements in one call, but the call does not return for a very long time. I wonder whether these two statements can deadlock each other.
If so, how should I resolve it?
update users set user_openid='' where user_openid='12345';
update users set user_openid='23456', user_fakeid='34567' where user_login='cifer';
Could an SQL injection be performed to find data about a MySQL database where the table in the original query doesn't exist?
For example, consider this query in PHP:
mysql_query("INSERT INTO users (id, email_address) VALUES (NULL, '$email');");
If the table users didn't exist, could an injection still be performed? The main issue I can see is that the entire query operation will throw an error if I try to inject something like the following:
'); SHOW TABLES; --
This is more for my education than anything in practical use. Any ideas?
This won't do SQL injection even if the table does exist. mysql_query() doesn't perform multiple queries, so injecting a ; to start a second query will not work. Most other MySQL APIs are similar -- you have to use something like mysqli_multi_query() to be able to perform multiple queries in a single call.
But if the first query fails for some reason, none of the following queries will be executed. So in your example, if there's no user table, you can't inject into it.
This is not very interesting, though. SQL injection is generally done against queries that work properly except for failing to protect against injection, so there's no likely situation where you'd inject into a query that accesses a nonexistent table.
I have one SQL function, let's say theFunction(item_id). It takes an item id and computes one value as its return. I read one table from the DB, and I am supposed to compute a new value to append for each row by this function, given the item_id particular to that row. Which design block would do this for me with the following SQL (if not wrong)?
select theFunction(item_id);
I assume that the block gives me item_id of each row as a variable.
You can use another table input step, and have it accept fields from previous steps and execute for every row (both config options are at the bottom of the step's window).
Beware that this is a rather slow implementation. Each query is executed separately and as such each row requires a round trip to the database.
Alternatively, you can use the Execute Row SQL Script step. I believe it allows you to pass all SQL statements in a single trip to the database.
An SQL function is probably much more efficient to run in the database for all rows at once, instead of making a separate call into the database from PDI for each row to execute the function. So if performance is at all a relevant concern, I'd suggest a whole different strategy:
Write your rows to a table in the database. End your transformation here.
On the job level, first execute your transformation from above, then execute the function in an "Execute SQL script..." component, giving it an SQL command somewhat like "UPDATE my_temp_table SET target_col = theFunction(item_id)" (see the sketch below).
Continue your job with the remaining steps in a new transformation, starting from that table as input.
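The job-level SQL step could then contain something like this minimal sketch, reusing the placeholder names my_temp_table, target_col, and theFunction from above:

-- One statement updates every row in a single round trip.
UPDATE my_temp_table
SET target_col = theFunction(item_id);

-- Optional sanity check before the follow-up transformation reads the table:
SELECT COUNT(*) AS missing FROM my_temp_table WHERE target_col IS NULL;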
This of course presupposes that you don't have too many other threads going on, but if your transformation is simple and linear -- or at least if it can be made single-linear at this particular step -- it may be possible to split it up into two parts before and after this SQL call.
So I'm building a C program that connects to a MySQL database. Everything worked perfectly. Then, to save on the number of queries, I decided that I would like to execute 10 statements at a time. I set the CLIENT_MULTI_STATEMENTS flag in the connection, and separated my statements by semicolons.
When I execute the first batch of 10 statements, it succeeds and mysql_real_query() returns a 0.
When I try the second batch though, it returns a "1" and doesn't work. Nowhere can I find what this "1" error code means, so I was hoping someone may have run into this problem before.
Please note that these are all UPDATE statements, and so I have no need for result sets, it's just several straight-up calls to mysql_real_query().
It's not clear from the documentation whether the errors this function can cause are returned or not, but you should be able to obtain the actual error message using mysql_error().
My guess is that you still have to loop through the result sets whether you're interested in them or not.
Are these prepared statements? If that's the case then you can't use CLIENT_MULTI_STATEMENTS.
Also, note (from http://dev.mysql.com/doc/refman/5.5/en/c-api-multiple-queries.html) that:
After handling the result from the first statement, it is necessary to check whether more results exist and process them in turn if so. To support multiple-result processing, the C API includes the mysql_more_results() and mysql_next_result() functions. These functions are used at the end of a loop that iterates as long as more results are available. Failure to process the result this way may result in a dropped connection to the server.
You have to walk over all the results, regardless of whether or not you care about the values.
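As a rough sketch in C (assuming an already-connected MYSQL *conn opened with CLIENT_MULTI_STATEMENTS), the drain loop the manual describes looks something like this:

/* Drain every result of a multi-statement batch; skipping this leaves
   the connection out of sync, and the next mysql_real_query() fails. */
#include <stdio.h>
#include <string.h>
#include <mysql/mysql.h>

int run_batch(MYSQL *conn, const char *sql)
{
    if (mysql_real_query(conn, sql, strlen(sql)) != 0) {
        fprintf(stderr, "query failed: %s\n", mysql_error(conn));
        return -1;
    }
    do {
        MYSQL_RES *res = mysql_store_result(conn);
        if (res != NULL)
            mysql_free_result(res);              /* statement returned rows */
        else if (mysql_field_count(conn) != 0) { /* rows expected, but lost */
            fprintf(stderr, "error: %s\n", mysql_error(conn));
            return -1;
        }
        /* mysql_next_result(): 0 = another result, -1 = done, >0 = error */
    } while (mysql_next_result(conn) == 0);
    return 0;
}

For plain UPDATEs, mysql_store_result() returns NULL and mysql_field_count() is 0, so the loop simply steps through the statement statuses.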
I have a complicated SELECT query that filters on a time range, and I want this time range (start and end dates) to be specifiable using user-supplied parameters. So I can use a stored procedure to do this, and the return is a multiple-row result set. The problem I'm having is how to deal with this result set afterwards. I can't do something like:
SELECT * FROM (CALL stored_procedure(start_time, end_time))
even though the stored procedure is just a SELECT that takes parameters. Server-side prepared statements also don't work (and they're not persistent either).
Some suggest using temporary tables; the reason that's not an ideal solution is that 1) I don't want to specify the table schema, and it seems that you have to, and 2) the lifetime of the temporary table only needs to span one invocation of the query; it doesn't need to persist beyond that.
So to recap, I want something like a persistent prepared statement server-side, whose return is a result set that MySQL can manipulate as if it was a subquery. Any ideas? Thanks.
By the way, I'm using MySQL 5.0. I know it's a pretty old version, but this feature doesn't seem to exist in any more recent version. I'm not sure whether SELECT-ing from a stored procedure is possible in other SQL engines; switching is not an option at the moment, but I'd like to know whether it's possible anyway, in case we decide to switch in the future.
Selecting from functions is possible in other engines. For instance, Oracle allows you to write a function that returns a table of a user-defined type. You can define result sets in the function and fill them by using queries, or even a combination of selects and code. Eventually, the result set can be returned from the function, and you can continue to query it using:
select * from table(FunctionToBeCalls(parameters));
The only disadvantage, is that this result set is not indexed, so it might be slow if the function is used within a complex query.
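For illustration, a minimal sketch of what such an Oracle function might look like (all names here are made up; a pipelined table function is one common form):

-- Schema-level types describing the result shape.
CREATE TYPE item_row AS OBJECT (item_id NUMBER, item_value NUMBER);
/
CREATE TYPE item_table AS TABLE OF item_row;
/
-- A pipelined function that filters on a user-supplied time range.
CREATE OR REPLACE FUNCTION FunctionToBeCalls(p_start DATE, p_end DATE)
  RETURN item_table PIPELINED IS
BEGIN
  FOR r IN (SELECT item_id, item_value
            FROM items
            WHERE created_at BETWEEN p_start AND p_end) LOOP
    PIPE ROW (item_row(r.item_id, r.item_value));
  END LOOP;
  RETURN;
END;
/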
In MySQL nothing like this is possible. There is no way to use a result set from a procedure directly in a select query. You can return single values from a function, and you can use OUT or INOUT parameters on your procedure to return values.
But returning entire result sets is not possible. Filling a temporary table within your procedure is the closest you will get.
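A minimal sketch of that workaround (table and column names here are made up; this works on MySQL 5.0):

DELIMITER //
CREATE PROCEDURE report_range(IN p_start DATETIME, IN p_end DATETIME)
BEGIN
  DROP TEMPORARY TABLE IF EXISTS tmp_report;
  CREATE TEMPORARY TABLE tmp_report
    SELECT * FROM events WHERE created_at BETWEEN p_start AND p_end;
END //
DELIMITER ;

CALL report_range('2011-01-01', '2011-02-01');
-- tmp_report lives for the rest of the session, so it can be joined
-- or filtered like any subquery result:
SELECT * FROM tmp_report WHERE item_count > 0;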
Is there any way to debug a SQL Server 2008 query?
Yes. You can use the T-SQL debugger:
http://technet.microsoft.com/en-us/library/cc646008.aspx
Here is how you step through a query: http://technet.microsoft.com/en-us/library/cc646018.aspx
What exactly do you mean by debug?
Are you seeing incorrect data?
Are you getting unexpected duplicate rows?
What I usually do is start with a known set of data, usually one or two rows if possible, and comment out all joins and where conditions.
Introduce each additional element in your query one at a time starting with joins.
At each step, you should know how many records you are expecting.
As soon as you introduce something like a join or where condition that does not match your prediction, you know you have found the problem statement.
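For example (hypothetical tables), start from a known row and reintroduce one piece at a time, re-checking the row count after each step:

SELECT c.CustomerID
FROM Customers c
-- JOIN Orders o     ON o.CustomerID = c.CustomerID   -- step 2: add back
-- JOIN OrderItems i ON i.OrderID    = o.OrderID      -- step 3: add back
WHERE c.CustomerID = 42;
-- Reintroduce WHERE conditions the same way once the joins behave.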
If it is a stored procedure with variables and such, you can always PRINT the values of your variables at different points.
If you want to only execute to a particular point in a stored procedure, then you may RETURN at any time and halt processing.
If you have temp tables that must get destroyed between executions while debugging your procedure, a common trick I use is to create a label like:
cleanup:
then at whatever point I want to bail, I can goto cleanup (I know goto is horrible, but it works great when debugging sprocs).
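A made-up sketch pulling the tricks above together: PRINT for tracing, plus an early bail-out that still drops the temp table:

CREATE PROCEDURE DebugDemo
AS
BEGIN
    CREATE TABLE #work (id INT);
    INSERT INTO #work (id) VALUES (1), (2);
    PRINT 'rows staged: ' + CAST(@@ROWCOUNT AS VARCHAR(10));

    IF NOT EXISTS (SELECT 1 FROM #work)
        GOTO cleanup;   -- bail early if staging produced nothing

    -- ... the rest of the procedure under test ...

cleanup:
    DROP TABLE #work;
END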
Yes, you need to set a breakpoint.
Frankly I find the debugging to be virtually useless. I often don't want to see the variables, but rather the records I would be inserting to a table or updating or deleting.
What I do when I have a complex sp to debug is this.
First I create a test variable. I set it = 1 when I want to test. This ensures that all actions in the transaction are rolled back at the end (you don't want to actually change the database until you are sure the proc is doing what you want) by making the commit statement require the test variable to be set to 0.
At the end of the proc I generally have an if test = 1 begin ... end block, and between the begin and end I put the select statements for all the things I want to see the values of. This might include any table variables or temp tables, the records in a particular table after the insert, the records I deleted, or whatever else I feel I need to see. If I am testing multiple times, I might comment out references to tables that I know are right and concentrate only on the ones I've changed this go-round.
Now I can see what the effect of my proc is, and the changes are rolled back in case they weren't right. To actually commit the changes (and not see the intermediate steps), I simply change the value of the test variable.
If I use dynamic SQL, I also have a debug variable that, instead of executing the dynamic SQL, simply prints it out to the screen. I find all this far more useful in debugging a complex script than breakpoints that show me the value of variables.
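A minimal sketch of this pattern, with made-up procedure and table names:

CREATE PROCEDURE UpdatePrices @test BIT = 1, @debug BIT = 0
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'UPDATE Products SET Price = Price * 1.1 WHERE Discontinued = 0';

    IF @debug = 1
    BEGIN
        PRINT @sql;        -- show the dynamic SQL instead of running it
        RETURN;
    END

    BEGIN TRANSACTION;
    EXEC sp_executesql @sql;

    IF @test = 1
    BEGIN
        -- inspect the intermediate results, then undo everything
        SELECT TOP 20 ProductID, Price FROM Products ORDER BY ProductID;
        ROLLBACK TRANSACTION;
    END
    ELSE
        COMMIT TRANSACTION;   -- commits only when @test = 0
END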