Sporadic MySQL connection errors for 5-10 seconds

I have a small portal website with 7,000-8,000 visitors/day; I write all the SQL queries and code myself.
I find that sometimes the website can't connect to the database: for 5-10 seconds it shows the message 'Cannot connect to mysql...'. This is a mysql_connect() error; after about 15 seconds everything returns to normal for a few hours. Nothing about the login has changed, and there are no known hosting problems. I put mysql_close() in the footer of the website, but the issue still occurs.
What could be the cause of this kind of error? Where should I look to track down and solve the problem? Could it be too many connections to the page?

I can only offer a few general tips, but I hope they help anyway:
You say you get cannot connect to mysql... and nothing else. That sounds like you have some code at "database_conection.php" that explicitly prints such a message on connection error. PHP's default errors tend to be more verbose and include error codes, file names, line numbers...
Wherever you call mysql_connect() you can enhance error handling quite easily:
Test the return value of the function.
On error, call mysql_error() to obtain the exact error message with all the details.
Log all the details you need to identify the piece of code that triggered the error. For instance, debug_backtrace() can tell you the precise function call chain.
PHP offers several error-handling directives to fine-tune what to do on error. Have a look at display_errors, log_errors and error_reporting.
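Putting those tips together, here is a minimal sketch of what the connection code could look like (the hostname, credentials, and log path are placeholders, not taken from your site):

$link = mysql_connect('localhost', 'db_user', 'db_pass');
if ($link === false) {
    // mysql_error() returns the exact error message of the last call.
    $details = date('c') . ' connect failed: ' . mysql_error() . "\n";
    // debug_backtrace() records the call chain that led to this point.
    $details .= print_r(debug_backtrace(), true);
    // error_log() with type 3 appends the message to the given file.
    error_log($details, 3, '/var/log/myapp/mysql_errors.log');
    die('Cannot connect to mysql...');
}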
I'm not sure how you expect mysql_close() to help: this function requires a working connection and simply closes it. Furthermore, I suppose your footer is near the end of the script, where the connection will be closed automatically anyway.
If you are using a shared hosting account, your site is not the only user of the MySQL server. If the server is simply timing out due to high load, it isn't necessarily your site's fault.

Related

How to resolve Microsoft Access Error 3043

My company uses a shared MS Access database, with a back end stored on a server and a front end copied onto users' desktops.
Recently, our IT department moved us to a new server without giving us any notice, and now our database keeps crashing.
Every 20-40 minutes, users get an error message that says:
Error 3043 Your network access was interrupted. To continue, close the database, and then open it again.
If they close and reopen, it does work. However, I'd like to stop this from happening, since it typically happens while they are in the middle of something and they have to redo everything.
I've already spoken with our IT consultants; they see no issue with our server/network, nor do they know anything about Access, so they are no help.
Does anyone have any experience with this or have any ideas that may help me repair my database?
Thanks in advance.
Here are some thoughts:
It sounds very much like (short) network interruptions. MS Access doesn't like these at all; in particular, it doesn't recover from a broken connection (even a very short one) until you restart the frontend.
Network interruptions during write operations on Access backends are the prime cause of backend database corruption. Consider yourself lucky if you haven't experienced that yet. But you should back up and Compact & Repair the backend often(!); see the sketch at the end of this answer.
You can prevent backend corruption by moving the backend to a server database, e.g. SQL Server Express (free). Errors will still occur ("ODBC call failed" instead of error 3043), but they will only affect the frontends.
You can probably work around all errors by changing the frontend from bound forms to unbound forms. This is a major undertaking.
I don't think there is anything you can do with the backend to prevent the errors.
If this database has value to your company, and IT says there is no problem, I suggest you escalate the problem to someone who can make IT look closer into the issue.
(How to do so would be a separate question, perhaps on SuperUser.)
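For the frequent backup and Compact & Repair mentioned above, a hedged sketch of a scheduled script (all paths are placeholders, the MSACCESS.EXE location varies by Office version, and it must run while nobody has the backend open):

rem Keep a copy first, then compact the backend in place.
copy "\\server\share\backend.accdb" "\\server\share\backup\backend_backup.accdb"
"C:\Program Files\Microsoft Office\root\Office16\MSACCESS.EXE" "\\server\share\backend.accdb" /compact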

Is it possible to trap the "Access is in an inconsistent state" error?

I have an Access 2013 database split across a network that is mainly used via Citrix. I keep getting the error message that the database is in an inconsistent state, and I don't know why. I created a query to capture the user name and machine ID in an AutoExec macro, so I can go back and ask users what happened. What I'd like to know is whether it's possible to tell which user first encountered this error. Can I trap the error somehow and know which user "caused" it? I have a feeling that this error happens before the AutoExec macro fires, but I live in hope.
What I am hoping to be able to do is get with the Citrix team and see if they have a corresponding error or something in their logs.
... sadly they are all sharing the same front end. It's only being used for read-only lookup purposes. I wanted each user to have their own copy but IT disagreed with me.
The only way it could work reliably is if the accdb file itself is marked as read-only, and that would probably leave your application useless.
I've been through this with a client running a huge Citrix setup (40,000+ employees) for a high-priority application. IT had, for good reason, a strict view on security, and though quite cooperative, they were of little help.
However, I got it solved with a VB script. It worked on the first attempt, and so well that I wrote up a description here:
Deploy and update a Microsoft Access application in a Citrix environment
The great thing is that you probably won't need IT to do anything for you.

Sequel DB Connection PoolTimeout Error

I have been unable to determine the cause of the following Sequel::PoolTimeout error in a Ruby script I have written:
Sequel::PoolTimeout: Sequel::PoolTimeout
hold at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:100
hold at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:93
synchronize at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/database/connecting.rb:234
execute at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:258
execute at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:793
fetch_rows at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:671
each at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:143
single_record at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:583
single_value at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:591
get at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:250
empty? at /Users/username/.rvm/gems/jruby-1.7.4@all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:153
scrap at /Users/username/projectname/processers/get_category.rb:46
each at org/jruby/RubyArray.java:1617
each_with_index at org/jruby/RubyEnumerable.java:920
scrap at /Users/username/projectname/processers/get_category.rb:44
scrap at /Users/username/projectname/processers/get_category.rb:32
I have tried this with both MRI and JRuby with exactly the same results.
As per the instructions on the Sequel gem here, I have attempted to raise the pool_timeout limit as follows:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD&max_connections=10&pool_timeout=120")
It seems as though max_connections and pool_timeout may not be recognized; however, I don't see any other way to pass these arguments in with the connection.
The actual code that is in question here is:
if DB[:products].where(url: url.to_s).empty?
I have seen the code work just fine for a little while, but it invariably fails, either right away or after a couple of minutes, with no reproducible pattern as to when it occurs. I am starting to suspect a MySQL configuration issue, or something causing the localhost DBMS to have prolonged delays, although, again, I cannot reproduce a visible timeout with manual queries.
Any ideas as to why the timeout keeps happening or, more particularly, how to resolve it, either by feeding Sequel the proper settings (perhaps I have a malformed argument list) or by modifying MySQL's /etc/my.cnf for such a scenario?
The Sequel jdbc adapter passes the connection string directly to JDBC; it doesn't parse out embedded options. You need to pass them as separate options instead:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD", :max_connections=>10, :pool_timeout=>120)

PetaPoco: Should I use MultipleActiveResultSets=True?

From time to time we receive the following database connection error from PetaPoco in an ASP.NET MVC 4 app:
There is already an open DataReader associated with this Command which must be closed first.;
System.Data; at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)...
It seems like this happens as the load on the system increases.
Some suggestions we found as we researched were:
Do a PetaPoco Fetch instead of a Query
Add MultipleActiveResultSets=True to our connection string
Can someone with PetaPoco experience verify that these suggestions would help?
Any other suggestions to avoid the Exception would be appreciated.
Update 06/10/2013: We changed the Query to a Fetch and we have seen some improvement; however, we still sometimes see the error.
Does anyone know what drawbacks changing the connection string to MultipleActiveResultSets=True might have?
Be sure that you are creating the PetaPoco DB per request (not as a static).
See: how to create a DAL using petapoco
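A minimal sketch of the per-request idea (the POCO, table, and connection-string name are illustrative, not from the original post):

using System.Collections.Generic;

// Illustrative POCO mapped to an assumed Products table.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductRepository
{
    public static List<Product> GetAll()
    {
        // A fresh PetaPoco.Database per request/call, disposed when done,
        // instead of a static instance shared across concurrent requests.
        using (var db = new PetaPoco.Database("DefaultConnection"))
        {
            return db.Fetch<Product>("SELECT Id, Name FROM Products");
        }
    }
}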
Update 06/10/2013: All the Fetch methods call the Query method (see the source), so changing one for the other has no effect on the error.
The drawbacks are listed on MSDN and include warnings about:
Statement Interleaving
Session Cache
Thread Safety
Connection Pooling
Parallel Execution
I have tried it personally and didn't notice any drawbacks (it depends on your app), but it didn't get rid of the errors either.
The only thing you can do to remove the error is to follow your request code, find where a statement is executed while another reader is still open, and use another DB connection in that function.
Also, you can catch the error, create a new DB connection, and retry with that new one, as sketched below.
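A hedged sketch of that catch-and-retry idea, reusing the illustrative repository above (this SqlClient error typically surfaces as an InvalidOperationException, but verify against your own logs):

public static List<Product> GetAllWithRetry()
{
    try
    {
        return ProductRepository.GetAll();
    }
    catch (System.InvalidOperationException)
    {
        // "There is already an open DataReader..." was thrown;
        // retry once on a brand-new connection as suggested above.
        using (var db = new PetaPoco.Database("DefaultConnection"))
        {
            return db.Fetch<Product>("SELECT Id, Name FROM Products");
        }
    }
}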
Sorry, but there is no magic bullet here.

Nodejs + db-mysql Segmentation Fault

I don't know what causes it, but I have a Node app that keeps crashing. The console says Segmentation Fault, and it looks like it happens when two MySQL objects are instantiated (using the db-mysql module), which becomes very common when 10+ users are using my site (I won't post the link to the app because I'm afraid the load would crash it; if it would be useful, I'll post it).
Do you guys have any clue? My packages are up to date. Do you have a better package to use with MySQL (assuming that's where the problem lies)? Do you also encounter segfault issues using Node.js? (I guess not, because stability is one of the main advantages of Node.)
I [think] I was definitely doing something wrong: creating a new MySQL object and connecting to the DB every time I had a request. Instead, I stored the MySQL object and now run a single query for each... query. Working fine so far.
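For reference, a minimal sketch of that pattern with db-mysql (credentials, port, and table name are placeholders): one Database object is created and connected at startup, then reused for every request.

var mysql = require('db-mysql');
var http = require('http');

// One shared connection, opened once at startup.
var db = new mysql.Database({
    hostname: 'localhost',
    user: 'app_user',
    password: 'app_pass',
    database: 'app_db'
});

db.connect(function (error) {
    if (error) {
        return console.log('CONNECTION ERROR: ' + error);
    }
    // Only start serving requests once the single connection is up.
    http.createServer(function (req, res) {
        // One query per request on the already-open connection,
        // instead of a new Database object per request.
        db.query()
            .select('*')
            .from('products')
            .execute(function (error, rows) {
                if (error) {
                    res.writeHead(500);
                    return res.end('query error');
                }
                res.writeHead(200, {'Content-Type': 'application/json'});
                res.end(JSON.stringify(rows));
            });
    }).listen(8080);
});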