So I'm building a C program that connects to a MySQL database. Everything worked perfectly. Then, to cut down on the number of queries, I decided I would like to execute 10 statements at a time. I set the CLIENT_MULTI_STATEMENTS flag in the connection, and separated my statements by semicolons.
When I execute the first batch of 10 statements, it succeeds and mysql_real_query() returns 0.
When I try the second batch, though, it returns 1 and doesn't work. Nowhere can I find what this return value of 1 means, so I was hoping someone may have run into this problem before.
Please note that these are all UPDATE statements, and so I have no need for result sets, it's just several straight-up calls to mysql_real_query().
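For reference, the connection setup looks something like this (a minimal sketch; host, credentials, and database name are placeholders):
#include <mysql.h>
#include <stdio.h>

MYSQL *connect_multi(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return NULL;
    /* CLIENT_MULTI_STATEMENTS allows semicolon-separated batches */
    if (mysql_real_connect(conn, "localhost", "user", "password",
                           "mydb", 0, NULL, CLIENT_MULTI_STATEMENTS) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        mysql_close(conn);
        return NULL;
    }
    return conn;
}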
It's not clear from the documentation whether this function returns a meaningful error code or just a nonzero value on failure, but it should be possible to obtain the actual error using mysql_error().
My guess is that you still have to loop through the result sets whether you're interested in them or not.
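For example, something like this would surface the real error (a sketch; conn is assumed to be your open connection handle):
#include <mysql.h>
#include <stdio.h>
#include <string.h>

/* Run one batch and print the real error code and message on failure */
int run_batch(MYSQL *conn, const char *batch)
{
    if (mysql_real_query(conn, batch, strlen(batch)) != 0) {
        fprintf(stderr, "query failed: error %u: %s\n",
                mysql_errno(conn), mysql_error(conn));
        return -1;
    }
    return 0;
}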
Are these prepared statements? If so, note that prepared statements cannot use CLIENT_MULTI_STATEMENTS.
Also, note (from http://dev.mysql.com/doc/refman/5.5/en/c-api-multiple-queries.html) that:
After handling the result from the first statement, it is necessary to check whether more results exist and process them in turn if so. To support multiple-result processing, the C API includes the mysql_more_results() and mysql_next_result() functions. These functions are used at the end of a loop that iterates as long as more results are available. Failure to process the result this way may result in a dropped connection to the server.
You have to walk over all the results, regardless of whether or not you care about the values.
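In C, that loop might look something like this (a sketch based on the manual page quoted above; conn is assumed to be a connection opened with CLIENT_MULTI_STATEMENTS):
#include <mysql.h>
#include <stdio.h>

/* Drain every result of a multi-statement batch, discarding row data */
int drain_results(MYSQL *conn)
{
    int status;
    do {
        MYSQL_RES *res = mysql_store_result(conn);
        if (res != NULL) {
            mysql_free_result(res);       /* statement returned rows; free them unread */
        } else if (mysql_field_count(conn) != 0) {
            /* rows were expected but retrieval failed */
            fprintf(stderr, "could not fetch result: %s\n", mysql_error(conn));
            return -1;
        }
        /* else: an UPDATE-style statement with no result set */
        status = mysql_next_result(conn); /* 0 = more results, -1 = done, >0 = error */
        if (status > 0) {
            fprintf(stderr, "next result failed: %s\n", mysql_error(conn));
            return -1;
        }
    } while (status == 0);
    return 0;
}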
I want to fetch some data from my SQL server in R. The way I'm doing this is,
rs=dbSendQuery(con,"myquery")
data=fetch(rs,n=-1)
this works perfectly for a small table. However, for a bigger table, the fetch command says,
Warning message:
In fetch(ms, n = -1) : error while fetching rows
The problem still remains even if I restrict my rows (n=10). So, I'm not sure if it's a timeout problem or what.
What might be the case?
data shows,
[1] creator ratio
<0 rows> (or 0-length row.names)
There are a couple of points I want to mention that can help the OP identify and fix the problem.
1) Do not use fetch. Instead use dbFetch. The R help page says:
fetch() is provided for compatibility with older DBI clients - for all new code you are strongly encouraged to use dbFetch()
2) Execute your query from the Query Editor in SQL Server Management Studio and check its performance. Fine-tune the indexes on the tables the query uses. Once you are ready and happy with it, try it from R.
3) If the query selects many columns, it would be good to first try selecting just one or two columns.
4) I hope you are freeing resources and closing the connection later in your code. It can be done like this:
# Free all resources
dbClearResult(rs)
# Close connection
dbDisconnect(con)
When I enable server-side prepared statements via the useServerPrepStmts JDBC flag, result set update operations fail after the first request for a given query with:
Result Set not updatable.This result set must come from a statement that was created with a result set type of ResultSet.CONCUR_UPDATABLE, the query must select only one table, can not use functions and must select all primary keys from that table
When I disable server-side prepared statements, result set update operations work flawlessly.
Since the query involves only one table, has a primary key, returns a single row, and uses no functions, what must be happening is that the prepared statement is created with ResultSet.CONCUR_READ_ONLY and then cached server-side. Subsequent requests for the same query draw the prepared statement from the cache, and even though the client sends ResultSet.CONCUR_UPDATABLE for rs.updateRow(), concurrency is still set to ResultSet.CONCUR_READ_ONLY on the server.
If I am correct in above assumption, how does one override the server-side cache in this case? (everything else is fine with prepared statement caching, just result set row operations are affected).
Linux (CentOS 5.7) with:
mysql-connector-java 5.1.33
mysql 5.6.20
EDIT
(not relevant, as it turned out) I notice that the first query, which always succeeds, has this in the query log: SET SQL_SELECT_LIMIT=1, and all subsequent queries fail with this: SET SQL_SELECT_LIMIT=DEFAULT. Not sure if this is the cause of the problem or just a side effect. Guess I'll try to manually setFetchSize on the client and see if that makes a difference...
The workaround is to append FOR UPDATE to the otherwise ResultSet.CONCUR_READ_ONLY SELECT statement on the client, using ResultSet.CONCUR_UPDATABLE concurrency for the new prepared statement. This allows for server statement caching while still being able to modify a JDBC ResultSet.
Side note:
the SELECT ... FOR UPDATE statement itself does not appear to be eligible for caching; i.e., the query log shows Prepare and Execute lines on every request.
As we know, MySQL supports multiple-statement queries; that is, we can execute two or more statements separated by semicolons with only one function call. This can be done using, for example, the PHP function mysqli_multi_query().
Now I have a question. I want to execute the following two statements in one call, but the call does not return for a very long time. I wonder whether these two statements will cause deadlocks.
If so, how should I resolve it?
update users set user_openid='' where user_openid='12345';
update users set user_openid='23456', user_fakeid='34567' where user_login='cifer';
My program restores a MySQL database from an SQL file. If I wanted to display the progress of SQL execution in my program, I would need to know the number of SQL statements in the file. How can I do this for MySQL? (The queries may include MySQL-specific multi-row INSERT statements.)
I could use either MySQL command line tools or the Python API. You're welcome to post solutions for other DBMS too.
The simple (and easy) way: add PRINT statements to your SQL script file, displaying progress messages.
The advantage (apart from avoiding the obvious 'it's hard to parse multi-statement constructs' problem) is that you get precise control over the progress. For example, some statements might take much longer to run than others, so you would need to weight them.
I wouldn't think of progress in terms of the number of statements executed. What I do is print out feedback that specific tasks have been started and completed, such as 'Synchronising table blah', 'Updating stored procedure X', etc.
The naive solution is to count the number of semicolons in the file (or any other character used as the delimiter in the file).
It usually works pretty well, except when the data you are inserting contains many semicolons, and then you have to start dealing with actual parsing of the SQL, which is a headache.
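Here is that naive approach sketched in C, counting semicolons while skipping quoted strings (the file name is a placeholder; comments and custom DELIMITER directives are not handled, so treat the count as an estimate):
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("dump.sql", "r");   /* placeholder file name */
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    int c, prev = 0;
    char quote = 0;                     /* 0 = outside strings, else the opening quote char */
    long statements = 0;
    while ((c = fgetc(f)) != EOF) {
        if (quote) {
            if (c == quote && prev != '\\')
                quote = 0;              /* unescaped matching quote ends the string */
        } else if (c == '\'' || c == '"') {
            quote = (char)c;
        } else if (c == ';') {
            statements++;               /* semicolon at top level: count one statement */
        }
        /* treat a doubled backslash as consumed so it cannot escape the next char */
        prev = (prev == '\\' && c == '\\') ? 0 : c;
    }
    printf("approximately %ld statements\n", statements);
    fclose(f);
    return 0;
}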
Is there any way to debug a SQL Server 2008 query?
Yes. You can use the T-SQL debugger:
http://technet.microsoft.com/en-us/library/cc646008.aspx
Here is how you step through a query: http://technet.microsoft.com/en-us/library/cc646018.aspx
What exactly do you mean by debug?
Are you seeing incorrect data?
Are you getting unexpected duplicate rows?
What I usually do is start with a known set of data, usually one or two rows if possible, and comment out all joins and where conditions.
Introduce each additional element in your query one at a time starting with joins.
At each step, you should know how many records you are expecting.
As soon as you introduce something like a join or where condition that does not match your prediction, you know you have found the problem statement.
If it is a stored procedure with variables and such, you can always PRINT the values of your variables at different points.
If you want to only execute to a particular point in a stored procedure, then you may RETURN at any time and halt processing.
If you have temp tables that must get destroyed between executions while debugging your procedure, a common trick I use is to create a label like:
cleanup:
then at whatever point I want to bail out, I can GOTO cleanup (I know GOTO is horrible, but it works great when debugging sprocs).
Yes, you need to set a breakpoint.
Frankly, I find the debugger to be virtually useless. I often don't want to see the variables, but rather the records I would be inserting into a table, updating, or deleting.
Here is what I do when I have a complex stored procedure to debug.
First I create a test variable and set it to 1 when I want to test. By making the COMMIT statement require the test variable to be 0, I ensure that all actions in the transaction are rolled back at the end (you don't want to actually change the database until you are sure the proc is doing what you want).
At the end of the proc I generally have an IF @test = 1 BEGIN ... END block, and between the BEGIN and END I put SELECT statements for all the things I want to see the values of. This might include any table variables or temp tables, the records in a particular table after the insert, the records I deleted, or whatever else I feel I need to see. If I am testing multiple times, I might comment out references to tables that I know are right and concentrate only on the ones I've changed on that go-around.
Now I can see the effect of my proc, and the changes are rolled back in case they weren't right. To actually commit the changes (and not see the intermediate steps), I simply change the value of the test variable.
If I use dynamic SQL, I also have a debug variable that, instead of executing the dynamic SQL, simply prints it out to the screen. I find all this far more useful for debugging a complex script than breakpoints that show me the value of variables.