PetaPoco Should I use MultipleActiveResultSets=True? - sql-server-2008

From time to time we receive the following database connection error from PetaPoco in an ASP.NET MVC 4 app:
There is already an open DataReader associated with this Command which must be closed first.
System.Data; at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)...
It seems like this happens as load on the system increases.
Some suggestions we found as we researched were:
Do a PetaPoco Fetch instead of a Query
Add MultipleActiveResultSets=True to our connection string
Can someone with PetaPoco experience verify that these suggestions would help?
Any other suggestions to avoid the Exception would be appreciated.
Update 06/10/2013: We changed the Query to a Fetch and have seen some improvement; however, we still sometimes see the error.
Does anyone know what drawbacks changing the connection string to MultipleActiveResultSets=True might have?

Be sure that you are creating the PetaPoco DB per request (not as a static instance).
See: how to create a DAL using petapoco
Update 06/10/2013: All Fetch methods call the Query method (see the source), so changing one for the other has no effect on the error.
The drawbacks are listed on MSDN and include warnings about:
Statement Interleaving
Session Cache
Thread Safety
Connection Pooling
Parallel Execution
I have tried it personally and didn't hit any drawbacks (it depends on your app), but it didn't get rid of the errors either.
The only thing you can do to remove the error is to trace your request code to find where a statement is executed twice on the same connection, and then use a separate DB connection in that function.
Also, you can catch the error, create a new db connection, and retry with that new one.
Sorry, but there's no magic bullet here.
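The catch-and-retry idea is language-agnostic. Here is a minimal sketch in Python with sqlite3 standing in for the real PetaPoco/SqlConnection setup (which isn't shown in the question), just to illustrate the shape of the fallback:

```python
import sqlite3

def fetch_with_retry(db_path, sql, params=()):
    """Run a query; if the first attempt fails, retry once on a fresh connection."""
    try:
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            conn.close()
    except sqlite3.Error:
        # Fall back to a brand-new connection, as suggested above.
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            conn.close()

rows = fetch_with_retry(":memory:", "SELECT 1 + 1")
print(rows)  # [(2,)]
```

In a real app the fallback connection should come from the same factory that builds your per-request Database instance, not from a shared static.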

Related

How to address Entity Framework Open DataReader Issue

After getting this error:
MySqlException: There is already an open DataReader associated with
this Connection which must be closed first.
I was unable to request or get result sets because I was querying while EF was still lazy loading other things I had previously requested.
I found many possible solutions to address this issue, which I have shared as an answer below.
If you don't specify the loading type in the EF configuration, EF uses lazy loading by default.
There are various ways to overcome the 'connection is already open' issue:
Add MARS to your EF connection string, but please also read about it before jumping in.
Use a using statement, but note that you then need to create a new entity/context object each time, since it is disposed at the end of the block.
Materialize your result into a concrete local type; in my case I converted it with ToList(), which addressed my issue and let me request a new result set from the context.
I have a base class that provides me with the context object, which is why I didn't use a using statement to create a new context every time I wanted to query it.
Feel free to edit any mistakes; I'm still learning about EF and its behavior.
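The ToList() fix works because it materializes the first result set before a second query runs on the same connection. A minimal sketch of the same idea in Python with sqlite3 (used here purely for illustration; sqlite3 is more forgiving about interleaved readers than EF's default, but the pattern is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, title TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, "first"), (2, "second")])

# Materialize the first result set up front (the ToList() idea):
posts = list(conn.execute("SELECT id, title FROM posts"))

# The connection is now free for follow-up queries while we iterate.
for post_id, title in posts:
    count = conn.execute("SELECT COUNT(*) FROM posts WHERE id = ?",
                         (post_id,)).fetchone()[0]
```

Without the `list(...)` call, the second query would run while the first reader was still open, which is exactly what triggers the EF error above.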

Handling the error "timeout expired; max pool size was reached" in VS 2005

Sorry for this mainstream post. After digging into this problem for a long time, I still haven't really found a solution; many causes could trigger it.
So I'm bringing up this post to ask for some understanding.
I'm manipulating a database: populating records into arrays and lists, comparing them, and then storing them back in the database. Throughout that process I use many queries via ExecuteScalar and MySqlCommand, resulting in a lot of new MySqlCommand objects in module.vb.
I'm using VB.NET and XAMPP MySQL as the data server, and I have 2 databases: they're the same but differ in size.
The problem is: when I test against the 2nd database (the smaller one in record count), my program works well without any delay or timeout error, but when I change the data source to my 1st database (with 4900 records), the timeout-expired error pops up immediately, causing my VS 2005 to stop responding.
As per my research on Google, I've found a few explanations for this error:
VS 2005 still has a bug, namely this timeout expiry; the solution is to upgrade the VS version.
The error is raised by attempting to open the same connection on the same server twice.
This error message doesn't mean exactly what it says: it tells you the pool is full, but in reality connection slots are still available.
I'm using too many MySqlCommand variables; module.vb holds as many as 50 MySqlCommand variables!
My personal opinion: I can't apply the first solution; my program would hit a lot of errors if I upgraded it to VS 2010 or higher.
As for the second solution, I don't really understand what it means. I think it's because I'm trying to open the same connection (for example CMD_open1.ExecuteReader) again while that same connection is already open and hasn't been closed.
In my program I've already made sure that every time I use CMD.ExecuteReader, ExecuteNonQuery, or ExecuteScalar, I add CMDname.Connection.Dispose() to close the connection properly before opening another new one.
So my questions now are:
Are all my personal speculations correct? If not, please tell me the correct ones.
Based on my problem, I personally think the cause is that I'm using too many MySqlCommand objects, even though I close the connection every time I use one. Is that correct? What is the right explanation for this?
What is the proper solution I could apply to fix my problem?
Thanks for reading; I'm really eager for an answer to this problem.
Here's a screenshot of my module.vb, which contains a lot of MySqlCommand variables:
Looks like you need some serious refactoring.
Databases-related objects are expensive. The more you instantiate, the more you pay in performance penalties. They are also scarce. You will run out eventually (sooner rather than later). You need to be very judicious in your use of them and be sure to clean them up appropriately when you're finished.
Many programmers put all of the database-related objects into (essentially) a single function which handles all the cleanup, etc.:
Private Function RunQuery(ByVal procName As String) As DataTable
    Using l_connection As New MySqlConnection("connection info")
        Using l_command As New MySqlCommand(procName, l_connection)
            l_command.CommandType = CommandType.StoredProcedure
            ' ... add parameters, etc ...
            l_connection.Open()
            Dim l_results As New DataTable()
            l_results.Load(l_command.ExecuteReader())
            Return l_results
        End Using
    End Using
End Function
The Using blocks ensure that the database objects are cleaned up properly after each call (calling the appropriate Close and Dispose functions).
But it sounds like you've already got some good advice about what the potential issues might be. The list you provided is pretty much what I would recommend. It's up to you to test and debug to figure out which one of those is causing the problem.
Also, mainstream support for Visual Studio 2005 ended in 2011. If you can't be bothered to upgrade your tools to fix known bugs (because it'll be too much work for you), then there's not much more to say on the subject other than, "Don't complain when you run into bugs."

Sequel DB Connection PoolTimeout Error

I have been unable to determine where the following Sequel::PoolTimeout error, raised in a Ruby script I have written, is coming from:
Sequel::PoolTimeout: Sequel::PoolTimeout
hold at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:100
hold at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:93
synchronize at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/database/connecting.rb:234
execute at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:258
execute at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:793
fetch_rows at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:671
each at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:143
single_record at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:583
single_value at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:591
get at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:250
empty? at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:153
scrap at /Users/username/projectname/processers/get_category.rb:46
each at org/jruby/RubyArray.java:1617
each_with_index at org/jruby/RubyEnumerable.java:920
scrap at /Users/username/projectname/processers/get_category.rb:44
scrap at /Users/username/projectname/processers/get_category.rb:32
I have tried this with both MRI and JRuby with exactly the same results.
As per the instructions on the Sequel gem here, I have attempted to raise the pool_timeout limit as follows:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD&max_connections=10&pool_timeout=120")
It seems as though max_connections and pool_timeout may not be recognized; however, I don't see any other way to pass these args into the connection.
The actual code that is in question here is:
if DB[:products].where(url: url.to_s).empty?
I have seen the code work just fine for a little while, but without fail it eventually errors, either right away or after a couple of minutes, with no reproducible pattern to when it occurs. I am starting to suspect a MySQL config issue, or something causing the localhost DBMS to have prolonged delays, although, again, I cannot reproduce a visible timeout with manual queries.
Any ideas as to why the timeout keeps happening or, more particularly, how to resolve it, either by feeding Sequel the proper settings (perhaps I have a malformed arg list) or by modifying MySQL's /etc/my.cnf for such a scenario?
The Sequel jdbc adapter passes the connection string directly to JDBC; it doesn't parse out embedded options. You need to do:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD", :max_connections=>10, :pool_timeout=>120)

Sporadic MySQL connection errors for 5-10 seconds

I have a small portal website, with 7-8000 visitors/day, I do all the SQL queries and coding.
I find that sometimes the website can't connect to the database: for 5-10 seconds it shows the message 'Cannot connect to mysql...'. This is a mysql_connect error; after about 15 seconds everything returns to normal for a few hours. No login changes, no hosting problems. I put mysql_close() in the footer of the website, but the issue still occurs.
What could be the cause of this kind of error? Where should I look to find the problems and solve them? Could it be too many connections to the page?
I can only provide you a few general tips but I hope they help anyway:
You say you get cannot connect to mysql... and nothing else. That sounds like you have some code at "database_conection.php" that explicitly prints such message on connection error. PHP default errors tend to be more verbose and include error codes, file names, line number...
Wherever you call mysql_connect() you can enhance error handling quite easily:
Test the return value of the function.
On error, call mysql_error() to obtain the exact error message with all the details.
Log all the details you need to identify the piece of code that triggered the error. For instance, debug_backtrace() can tell you the precise function call chain.
PHP offers several error handling directives to fine tune what to do on error. Have a look at display_errors, log_errors and error_reporting.
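The check-the-return-value, get-the-exact-error, log-the-call-chain steps above follow the same pattern in any language. A sketch in Python, with sqlite3 and the standard logging module standing in for mysql_connect()/mysql_error() (the function name and DSN here are illustrative, not from the original question):

```python
import logging
import sqlite3
import traceback

logging.basicConfig(level=logging.ERROR)

def connect(dsn):
    """Open a connection; on failure, log the exact error and the call chain."""
    try:
        # Step 1: the connect call either succeeds or raises.
        return sqlite3.connect(dsn)
    except sqlite3.Error as exc:
        # Steps 2-3: the equivalent of mysql_error() plus debug_backtrace().
        logging.error("DB connection failed: %s\n%s", exc, traceback.format_exc())
        return None
```

With this in place, a sporadic failure leaves behind the exact error text and the code path that triggered it, instead of a bare "cannot connect" message.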
I'm not sure how you expect mysql_close() to help: this function requires a working connection and simply closes it. Furthermore, I suppose your footer is close to the end of the script, where the connection would be closed automatically anyway.
If you are using a shared hosting account, your site will not be the only user of the MySQL server. If it's simply timing out due to high load, it isn't necessarily your site's fault.

Use single Elmah.axd for multiple applications with single DB log

We have a single SQL Log for storing errors from multiple applications. We have disabled the elmah.axd page for each one of our applications and would like to have a new application that specifically displays errors from all of the apps that report errors to the common SQL log.
As of now, even though the application for all errors is using the common SQL log, it only displays errors from the current application. Has anyone done this before? What within the elmah code might need to be tweaked?
I assume by "SQL Log" you mean MSSQL Server... If so, probably the easiest way of accomplishing what you want would be to edit the stored procedures created in the SQL Server database that holds your errors.
To get the error list, the ELMAH dll calls the ELMAH_GetErrorsXML proc with the application name as a parameter, then the proc filters the return with a WHERE [Application] = @Application clause.
Just remove the WHERE clause from the ELMAH_GetErrorsXML proc, and all errors should be returned regardless of application.
To get a single error record properly, you'll have to do the same with the ELMAH_GetErrorXML proc, as it also filters by application.
This, of course, will affect any application retrieving errors out of this particular database, but I assume in your case you'll only ever have the one, so this should be good.
CAVEAT: I have not tried this, so I can't guarantee the results...
It's not a problem to override the default Elmah handler factory so that it filters Elmah logs by application. I wrote a sample app that shows how to do it with MySql: http://diagnettoolkit.codeplex.com/releases/view/103931. You can also check a post on my blog where I explain how it works.
Yes, it works easily. However, you can't see the app name in Elmah/Default.aspx. I haven't found whether it's configurable to display one more column.