How to close a result set in RMySQL? - mysql

I used RMySQL to import a database. Sometimes when I try to close the connection, I receive the following error:
Error in mysqlCloseConnection(conn, ...) :
connection has pending rows (close open results set first)
I have no way of fixing this other than restarting the computer. Is there anything I can do to solve this? Thanks!

We can use the method dbClearResult.
Example:
dbClearResult(dbListResults(conn)[[1]])
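If you no longer have a handle on the result object, you can clear every result set still pending on the connection before disconnecting. A minimal sketch, assuming an RMySQL connection con (the connection details here are hypothetical):
library(RMySQL)
con <- dbConnect(MySQL(), dbname = "test")   # hypothetical connection
# ... a query was sent and only partially fetched ...

# Clear every result set still open on this connection
for (rs in dbListResults(con)) {
  dbClearResult(rs)
}
dbDisconnect(con)   # now succeeds without the "pending rows" error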

As Multiplexer noted, you are probably doing it wrong by leaving parts of the result set behind.
DBI and the accessor packages like RMySQL have documentation that is a little challenging at times. I try to remind myself to use dbGetQuery(), which grabs the whole result set at once. Here is a short snippet from the CRANberries code:
sql <- paste("select count(*) from packages ",
             "where package='", curPkg, "' ",
             "and version='", curVer, "';", sep="")
nb <- dbGetQuery(dbcon, sql)
After this I can close without worries (or do other operations).

As explained in previous answers, you get this error because RMySQL didn't return all the results of the query.
I had this problem when the result had more than 500 rows, using:
my_result <- fetch( dbSendQuery(con, query))
Looking at the documentation for fetch, I found that you can specify the number of records retrieved:
n = maximum number of records to retrieve per fetch. Use n = -1 or n = Inf to retrieve all pending records.
Solutions:
1- set the number of records to infinity:
my_result <- fetch( dbSendQuery(con, query), n=Inf)
2- use dbGetQuery:
my_result <- dbGetQuery(con, query)
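Either way, with the dbSendQuery route it is still good practice to clear the result set explicitly once everything is fetched. A minimal sketch, assuming con and query from above:
res <- dbSendQuery(con, query)
my_result <- fetch(res, n = Inf)   # retrieve all pending records
dbClearResult(res)                 # release the result set
dbDisconnect(con)                  # connection now closes cleanly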

rs <- dbSendQuery(dbcon, sql)   # note: dbSendQuery, not dbGetQuery
data <- dbFetch(rs)
dbClearResult(rs)
The last line removed the following error when continuing to query:
Error in .local(conn, statement, ...) :
connection with pending rows, close resultSet before continuing

You need to close the result set before closing the connection.
If you try to close the connection before closing a result set that has pending rows, it can sometimes hang the machine.
I don't know much about RMySQL, but try to close the result set first.
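In DBI terms the teardown order is simply this (a two-line sketch assuming a pending result res on connection con):
dbClearResult(res)   # 1. close the result set first
dbDisconnect(con)    # 2. then close the connection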

You have to keep track of your result sets yourself. The example below shows how to close/clear a result and how to get the number of rows affected. To solve your problem, apply the last line of the example (dbClearResult) to the variable holding the result of whatever statement or query you sent. :)
statementRes <- DBI::dbSendStatement(conn = db,
  "CREATE TABLE IF NOT EXISTS great_dupa_test (
     taxonomy_id INTEGER NOT NULL,
     scientific_name TEXT);")
DBI::dbGetRowsAffected(statementRes)
DBI::dbClearResult(statementRes)

This worked for me in R! Just run dbConnect() again; I simply reconnected to the database.

Related

RODBC ERROR: Could not SQLExecDirect in mysql

I have been trying to write an R script to query Impala database. Here is the query to the database:
select columnA, max(columnB) from databaseA.tableA where columnC in (select distinct(columnC) from databaseB.tableB ) group by columnA order by columnA
When I run this query manually (read: outside the Rscript via impala-shell), I am able to get the table contents. However, when the same is tried via the R script, I get the following error:
[1] "HY000 140 [Cloudera][ImpalaODBC] (140) Unsupported query."
[2] "[RODBC] ERROR: Could not SQLExecDirect 'select columnA, max(columnB) from databaseA.tableA where columnC in (select distinct(columnC) from databaseB.tableB ) group by columnA order by columnA'
closing unused RODBC handle 1
Why does the query fail when tried via R, and how do I fix this? Thanks in advance :)
Edit 1:
The connection script looks as below:
library("RODBC");
connection <- odbcConnect("Impala");
query <- "select columnA, max(columnB) from databaseA.tableA where columnC in (select distinct(columnC) from databaseB.tableB ) group by columnA order by columnA";
data <- sqlQuery(connection,query);
You need to install the relevant drivers; please look at the following link.
I had the same issue; all I had to do was update the ODBC drivers.
Also, if you can, update your odbcConnect call with the username and password:
connection <- odbcConnect("Impala");
to
connection <- odbcConnect("Impala", uid="root", pwd="password")
The RODBC package is quirky: if no row is updated/deleted by the query execution, it will throw an error.
So before using sqlDelete to delete rows, or sqlUpdate to update values, first check that at least one row will be deleted/updated by querying COUNT(*) (see the sketch below).
I've had no problem after implementing this check, on Oracle 12c.
An alternative would be to use a staging table for the new batch of data and use sqlQuery to execute a MERGE command. RODBC won't complain if zero rows are merged.
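A minimal sketch of that COUNT(*) guard with RODBC (the DSN, table, and column names are made up for illustration):
library(RODBC)
con <- odbcConnect("MY_DSN", uid = "user", pwd = "password")

# Only attempt the delete if at least one row matches
cnt <- sqlQuery(con, "SELECT COUNT(*) FROM orders WHERE status = 'stale'")
if (cnt[1, 1] > 0) {
  sqlQuery(con, "DELETE FROM orders WHERE status = 'stale'")
}
odbcClose(con)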
This might also be due to an error in your SQL query itself. For example, I got this error when I missed the 'in' in the following generalized statement:
stringstuff <- someDT$columnyouwanttouse
somestring <- toString(sprintf("'%s'", stringstuff))
RESULTS <- sqlQuery(con, paste0("select
    fling as flam
    and toot in (", somestring, ")
    limit 30
;"))
I got the error you did when I left out the 'in', so double check your syntax.
This error message can arise if the table doesn't exist in the database.
A few sensible checks:
Check for typos in the table name in your query (a quick check is sketched after this list)
See if you can run the same query on the same database via another SQL client
Talk to your database administrator to confirm that the table does exist
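For the first two checks, you can list the tables the connection actually sees. A short sketch using RODBC's sqlTables (the DSN and table name are taken from the question and are assumptions):
library(RODBC)
con <- odbcConnect("Impala")

# List tables visible on this connection and look for the one you query
tabs <- sqlTables(con)
"tableA" %in% tabs$TABLE_NAME
odbcClose(con)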
Re-installing the RODBC package did the trick for me!
I had a similar problem. After uninstalling R version 4.2.1 and installing R version 4.1.3, the problem was solved.

Writing Lengthy SQL queries in R

I am researching how to read in data from a server directly to a data frame in R. In the past I have written SQL queries that were over 50 lines long (with all the selects and joins). Any advice on how to write long queries in R? Is there some way to write the query elsewhere in R, then paste it into the "sqlQuery" part of the code?
Keep long SQL queries in .sql files and read them in using readLines + paste with collapse='\n'
my_query <- paste(readLines('your_query.sql'), collapse='\n')
results <- sqlQuery(con, my_query)
You can paste any SQL query into R as is and then simply replace the newlines + spaces with a single space. For instance:
## Connect to DB
library(RODBC)
con <- odbcConnect('MY_DB')

## Paste the query as is (you can have as many spaces and lines as you want)
query <-
"
SELECT [Serial Number]
      ,[Series]
      ,[Customer Name]
      ,[Zip_Code]
FROM [dbo].[some_db]
where [Is current] = 'Yes' and
      [Serial Number] LIKE '5%' and
      [Series] = '20'
order by [Serial Number]
"

## Simply replace the newlines + spaces with a single space and you're good to go
res <- sqlQuery(con, gsub("\\n\\s+", " ", query))
close(con)
The approach with separate .sql files (for most SQL or NoSQL engines) can be troublesome if you prefer to edit code in one file.
If you use RStudio (or another tool where code folding can be customized), you can simplify things using parentheses. I prefer using {...} and folding the code.
For example:
query <- {'
SELECT
    user_id,
    format(date, "%Y-%m") month,
    product_group,
    product,
    sum(amount_net) income,
    count(*) number
FROM People
WHERE
    date > "2015-01-01" and
    country = "Canada"
GROUP BY 1,2,3;'}
Folding a query can even be done within a function call (folding a long argument), or in other situations where the code grows to an inconvenient size.
I had this issue trying to run a 17-line SQL query through RODBC and tried #arvi1000's solution, but no matter what I did it would produce an error message and not execute more than one line of the .sql file. I tried variations of the value for collapse and different ways of reading in the file, and spent 90 minutes trying to get it to work. I suspect RODBC might behave differently with multi-line queries on different platforms, or with different versions of MySQL or ODBC settings.
Anyway, the following loop arrangement may not be as elegant, but it works and is possibly more robust:
channel <- odbcConnect("mysql_odbc", uid = "username", pwd = "password")
sqlString <- readLines("your_query.sql")
for (i in 1:length(sqlString)) {
  print(noquote(sqlString[i]))
  sqlQuery(channel, as.name(sqlString[i]))
}
In my script, all lines except the last were doing joins, creating temporary tables, etc.; only the last line had a SELECT statement and produced output. The .sql file was tidy, with only one query per line and no comments or newline characters within a query. This loop seems to run all the code, but the output is possibly lost in scope somewhere, so the one SELECT statement needs to be repeated outside the loop (sketched below).
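To capture that final result, a sketch of the extra step (reusing channel and sqlString from above; it assumes the SELECT is the last line of the file):
# Re-run the final SELECT outside the loop and keep its output
lastLine <- sqlString[length(sqlString)]
result <- sqlQuery(channel, lastLine)
head(result)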

Wordnet MySQL statement doesn't complete

I'm using the Wordnet SQL database from here: http://wnsqlbuilder.sourceforge.net
It's all built fine and users with appropriate privileges have been set.
I'm trying to find synonyms of words and have tried to use the two example statements at the bottom of this page: http://wnsqlbuilder.sourceforge.net/sql-links.html
SELECT synsetid,dest.lemma,SUBSTRING(src.definition FROM 1 FOR 60) FROM wordsXsensesXsynsets AS src INNER JOIN wordsXsensesXsynsets AS dest USING(synsetid) WHERE src.lemma = 'option' AND dest.lemma <> 'option'
SELECT synsetid,lemma,SUBSTRING(definition FROM 1 FOR 60) FROM wordsXsensesXsynsets WHERE synsetid IN ( SELECT synsetid FROM wordsXsensesXsynsets WHERE lemma = 'option') AND lemma <> 'option' ORDER BY synsetid
However, they never complete, at least not in any reasonable amount of time, and I have had to cancel all of the queries. All other queries seem to work fine, and when I break up the second SQL example, I can get the individual parts to work and complete in reasonable time (about 0.40 seconds).
When I try to run the full statement, however, the MySQL command-line client just hangs.
Is there a problem with this syntax? What is causing it to take so long?
EDIT:
Output of "EXPLAIN SELECT ..."
Output of "EXPLAIN EXTENDED ...; SHOW WARNINGS;"
I did more digging into the various statements used and found the problem was in the IN clause.
MySQL re-executes the subquery for every single row of the outer table. This is the cause of the hang, as it had to run through hundreds of thousands of records.
My remedy was to split the command into two separate database calls: first getting the synsets, and then dynamically creating a bound SQL string to look for the words in those synsets.
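A sketch of that two-step approach from R with RMySQL (assuming a DBI connection con; table and column names come from the question, and the paste-based quoting is illustrative only):
# Step 1: fetch the synset ids for the word
ids <- dbGetQuery(con,
  "SELECT synsetid FROM wordsXsensesXsynsets WHERE lemma = 'option'")

# Step 2: build an explicit IN list and fetch the synonyms
idList <- paste(ids$synsetid, collapse = ", ")
sql <- paste0(
  "SELECT synsetid, lemma, SUBSTRING(definition FROM 1 FOR 60) ",
  "FROM wordsXsensesXsynsets ",
  "WHERE synsetid IN (", idList, ") AND lemma <> 'option' ",
  "ORDER BY synsetid")
synonyms <- dbGetQuery(con, sql)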

Erlang mysql example

Just wondering if anyone could give a working example of using the erlang-mysql module (http://code.google.com/p/erlang-mysql-driver/).
I am new to Erlang and I am trying to replace some old scripts with a few Erlang batch processes. I am able to connect to the DB and even complete a query, but I am not sure how to use the results. Here is what I currently have:
-include("../include/mysql.hrl").
...
mysql:start_link(p1, "IP-ADDRESS", "erlang", "PASSWORD", "DATABASE"),
Result1 = mysql:fetch(p1, <<"SELECT * FROM users">>),
io:format("Result1: ~p~n", [Result1]),
...
I also have a prepared statement that I use to get just one row (if it exists), and it would be helpful to know how to access those results as well.
This is described in the source code of mysql.erl:
Your result will be {data, MySQLRes}.
FieldInfo = mysql:get_result_field_info(MysqlRes), where FieldInfo is a list of {Table, Field, Length, Name} tuples.
AllRows = mysql:get_result_rows(MysqlRes), where AllRows is a list of lists, each representing a row.
You should check the number of rows in the result and then act on it, e.g.:
RowLen = erlang:length(Row),
if
    RowLen > 0 ->
        {success};
    true ->
        {failed, "Row is null"}
end.
After trying to use the ODBC module that comes with Erlang/OTP and running into problems, I recommend the mysql/otp driver. I replaced ODBC with it in just a few hours and it works fine.
They have good documentation so I will not add examples here.

Why do our queries get stuck on the state "Writing to net" in MySql?

We have a lot of queries
select * from tbl_message
that get stuck on the state "Writing to net". The table has 98k rows.
The thing is... we aren't even executing any query like that from our application, so I guess the question is:
What might be generating the query?
...and why does it get stuck in the state "writing to net"?
I feel stupid asking this question, but I'm 99.99% sure that our application is not executing a query like that against our database... we are, however, executing a couple of queries against that table using WHERE clauses:
SELECT Count(*) as StrCount FROM tbl_message WHERE m_to=1960412 AND m_restid=948
SELECT Count(m_id) AS NrUnreadMail FROM tbl_message WHERE m_to=2019422 AND m_restid=440 AND m_read=1
SELECT * FROM tbl_message WHERE m_to=2036390 AND m_restid=994 ORDER BY m_id DESC
I have searched our application several times for select * from tbl_message but haven't found anything... but still, the query log on our MySQL server is full of Select * from tbl_message queries.
Since applications don't magically generate queries as they like, it's rather likely that there's a mistake somewhere in your application that's causing this. Here are a few suggestions you can use to track it down. I'm guessing that you're using PHP, since you're using MySQL, so I'll use that for my examples.
Try adding comments in front of all your queries in the application, like this:
$sqlSelect = "/* file.php, class::method() */";
$sqlSelect .= "SELECT * FROM foo ";
$sqlSelect .= "WHERE criteria";
The comment will show up in your query log. If you're using some kind of database API wrapper, you could potentially add these messages automatically:
function query($sql)
{
    $backtrace = debug_backtrace();
    // The function that executed the query
    $prev = $backtrace[1];
    $newSql = sprintf("/* %s */ ", $prev["function"]);
    $newSql .= $sql;
    mysql_query($newSql) or handle_error();
}
In case you're not using a wrapper, but rather executing the queries directly, you could use the runkit extension and the function runkit_function_rename to rename mysql_query (or whatever you're using) and intercept the queries.
There are (at least) two data retrieval modes for MySQL. With the C API you call either mysql_store_result() or mysql_use_result().
mysql_store_result() returns when all result data is transferred from the MySQL server to your process' memory, i.e. no data has to be transferred for further calls to mysql_fetch_row().
However, by using mysql_use_result(), each record has to be fetched individually if and when mysql_fetch_row() is called. If your application does some computing that takes longer than the time period specified in net_write_timeout between two calls to mysql_fetch_row(), the MySQL server considers your connection to be timed out.
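The same distinction shows up in R's DBI, which earlier answers in this thread use: dbGetQuery() buffers the whole result like mysql_store_result(), while dbSendQuery() plus chunked dbFetch() streams rows like mysql_use_result(), so slow processing between fetches can run into server-side timeouts such as net_write_timeout. A sketch, assuming a connection con:
# Store-style: everything is transferred at once; the connection is free afterwards
all_rows <- dbGetQuery(con, "SELECT * FROM tbl_message")

# Use-style: rows are pulled in chunks; slow work between fetches risks timeouts
res <- dbSendQuery(con, "SELECT * FROM tbl_message")
while (!dbHasCompleted(res)) {
  chunk <- dbFetch(res, n = 1000)
  # ... process chunk ...
}
dbClearResult(res)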
Temporarily enable the query log by putting
log=
into your my.cnf file, restart MySQL, and watch the query log for those mystery queries (you don't have to give the log a name; it'll assume one from the host value).