I have been unable to determine the cause of the following Sequel::PoolTimeout error in a Ruby script I have written:
Sequel::PoolTimeout: Sequel::PoolTimeout
hold at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:100
hold at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:93
synchronize at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/database/connecting.rb:234
execute at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:258
execute at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:793
fetch_rows at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:671
each at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:143
single_record at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:583
single_value at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:591
get at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:250
empty? at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:153
scrap at /Users/username/projectname/processers/get_category.rb:46
each at org/jruby/RubyArray.java:1617
each_with_index at org/jruby/RubyEnumerable.java:920
scrap at /Users/username/projectname/processers/get_category.rb:44
scrap at /Users/username/projectname/processers/get_category.rb:32
I have tried this with both MRI and JRuby with exactly the same results.
As per the instructions for the Sequel gem here, I have attempted to raise the pool_timeout limit as follows:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD&max_connections=10&pool_timeout=120")
It seems as though max_connections and pool_timeout may not be recognized; however, I don't see any other way to pass these arguments into the connection.
The actual code that is in question here is:
if DB[:products].where(url: url.to_s).empty?
I have seen the code work fine for a little while, but without fail it breaks, either right away or after a couple of minutes, with no reproducible pattern as to when it occurs. I am starting to suspect that this is a MySQL config issue, or something causing the localhost DBMS to have prolonged delays, although, again, I cannot reproduce any visible timeout with manual queries.
Any ideas as to why the timeout keeps happening or, more particularly, how to resolve it, either by feeding Sequel the proper settings (perhaps I have a malformed argument list) or by modifying MySQL's /etc/my.cnf for such a scenario?
The Sequel jdbc adapter passes the connection string directly to JDBC; it doesn't parse out embedded options. You need to do:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD", :max_connections=>10, :pool_timeout=>120)
While I am just learning MySQL and MySQL Workbench, and have perhaps done something boneheaded, I cannot find a reference to this.
Suddenly, no matter what line of code or what query I run, it outputs the same response, even if I query tables unrelated to that response. The database connection tests fine. I have run the USE command. I have tried searching for this and found nothing close to my situation. It was running just fine; I did not change the database, and I was just running some very basic SELECT queries.
Any ideas?
Did you check your installation?
I don't know what causes it, but I have a Node app that keeps crashing. The console says Segmentation Fault, and it looks like it happens when two MySQL objects are instantiated (using the db-mysql module), which becomes very common when 10+ users are using my site (I'm not posting the link to the app because I'm afraid the load would crash it ;) but if it would be useful, I'll post it).
Do you guys have any clue? My packages are up to date. Do you have a better package to use with MySQL (assuming that's where the problem lies)? Do you also encounter segfault issues using Node.js (I guess not, because stability is one of the main advantages of Node)?
I think I was definitely doing something wrong: creating a new MySQL object and connecting to the DB every time I had a request. Instead, I stored the MySQL object and ran a single query for each... query. Working fine so far.
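For what it's worth, the pattern ends up looking roughly like the sketch below. It uses the generic mysql npm package rather than db-mysql, and the credentials and query are placeholders, so treat it as an illustration of the connection-reuse idea rather than my exact code:

// Sketch using the generic "mysql" npm package (not db-mysql); names are placeholders.
var mysql = require('mysql');

// Create one pool at startup and reuse it for every request
// instead of instantiating a new MySQL object per request.
var pool = mysql.createPool({
  host: 'localhost',
  user: 'appuser',
  password: 'secret',
  database: 'appdb',
  connectionLimit: 10
});

function findUser(id, callback) {
  // Each call borrows a connection from the shared pool.
  pool.query('SELECT * FROM users WHERE id = ?', [id], callback);
}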
To optimize system performance, we are storing a few static tables in RAM, copies of which also exist on the hard drive as MyISAM tables. Now, as we all know, when the server restarts, all data in RAM is lost. To avoid that, we created an init file that has 4 SQL statements.
Please note that each SQL statement is on a separate line, ended with a semi-colon (;), and there are no comments anywhere, so from my limited knowledge I believe I have avoided some basic mistakes. However, when I restart MySQL manually from the command line to test it, I see that the memory tables are empty. There are no issues with the init file itself, because when I execute it manually from the command line, the data gets populated without any issues.
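For reference, the setup is roughly of the following shape; the file path and table names here are placeholders rather than the real ones.

In my.cnf:

[mysqld]
init-file=/path/to/initfile.sql

And in the init file, one INSERT ... SELECT per line to repopulate each MEMORY table from its MyISAM copy:

INSERT INTO mem_table_1 SELECT * FROM disk_table_1;
INSERT INTO mem_table_2 SELECT * FROM disk_table_2;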
Any help in terms of resolving this will be much appreciated!
Thanks!
Udayan
Something is not right here.
Just to check, I tried restarting my local mysql server using /etc/init.d/mysql restart, and it came up running as the mysql user (not root).
So, we will need the following to try to figure this out, because I am just about positive that the problem is either that the file has the wrong permissions or that it is in a location the user running mysqld does not have access to.
What version of Linux are you running?
What is the version of mySQL that you have installed?
Is 'init-file=' in the right section of my.cnf?
What is the output of 'ps -ef | grep mysqld'?
What is the output of 'ls -lrt /tmp/initfile.sql'?
What did you mean by 'There are no issues with the initfile itself, because when I execute the initfile manually from the command line, the data gets populated without any issues.'?
I cannot help but think that it is a permissions problem. So the fourth and fifth answers are the ones that I am most interested in.
You should add all of these answers to your question - so that people have everything they need to help you solve your problem.
I appreciate all your suggestions, but I have figured out what the issues were.
In the SQL file, I needed to specify which database the init-file should populate.
I had trailing semi-colons in the SQL statements -- apparently that is not a good idea.
Once I made these two small changes, everything started working fine.
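So the working init file ended up looking roughly like this (database and table names are placeholders), with the target database named explicitly and no trailing semicolons:

USE my_database
INSERT INTO mem_table_1 SELECT * FROM disk_table_1
INSERT INTO mem_table_2 SELECT * FROM disk_table_2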
Again, thanks for the pointers!
Udayan
I have a small portal website with 7,000-8,000 visitors/day, and I do all the SQL queries and coding.
I find that sometimes the website can't connect to the database; instead, for 5-10 seconds it shows the message 'Cannot connect to mysql...'. This is a mysql_connect error; after about 15 seconds everything returns to normal for a few hours. No login changes, no hosting problems. I put mysql_close() in the footer of the website, but the issue still occurs.
What could be the cause of this kind of error? Where should I look to find the problems and solve them? Could it be too many connections to the page?
I can only offer a few general tips, but I hope they help anyway:
You say you get 'Cannot connect to mysql...' and nothing else. That sounds like you have some code at "database_conection.php" that explicitly prints such a message on connection error. PHP's default errors tend to be more verbose and include error codes, file names, line numbers...
Wherever you call mysql_connect() you can enhance error handling quite easily (see the sketch after this list):
Test the return value of the function.
On error, call mysql_error() to obtain the exact error message with all the details.
Log all the details you need to identify the piece of code that triggered the error. For instance, debug_backtrace() can tell you the precise function call chain.
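A minimal sketch of that kind of connect helper (host, credentials, and the user-facing message are placeholders):

<?php
// Hypothetical helper; host and credentials are placeholders.
function db_connect() {
    $link = mysql_connect('localhost', 'db_user', 'db_pass');
    if ($link === false) {
        // mysql_error() gives the exact reason; debug_backtrace() shows the call chain.
        $details = mysql_error();
        $trace   = print_r(debug_backtrace(), true);
        error_log("MySQL connect failed: $details\n$trace");
        die('Cannot connect to mysql...'); // keep the user-facing message generic
    }
    return $link;
}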
PHP offers several error handling directives to fine tune what to do on error. Have a look at display_errors, log_errors and error_reporting.
I'm not sure how you expect mysql_close() to help, but this function requires a working connection and it simply closes it. Furthermore, I suppose your footer is close to the end of the script, where the connection will be closed automatically anyway.
If you are using a shared hosting account, your site will not be the only user of the MySQL server. If it's simply timing out due to high load, it isn't necessarily your site's fault.
Recently, I switched to Python 3 (3.1 on a FreeBSD system), and I would like to work with MySQL databases.
First I tried to use pymysql3-0.4, but it failed when I used SUM in my query with this error:
, TypeError("Cannot convert b'46691486' to Decimal",))
Then I tried oursql-0.9.2, but it seems it has no unix socket support (the documentation says otherwise, but it doesn't recognize the socket protocol).
Lastly, I decided to give mypysql-0.5.5 a chance, but the installation failed.
Could you recommend a properly working MySQL driver for Python 3, or at least solve one of these problems? I would be very grateful.
The oursql documentation is a little tricky. :$ There is a list of the Connection's parameters, but it doesn't contain the unix_socket parameter. If I set that and the protocol parameter, the whole thing just works fine :)
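For anyone else hitting this, the connect call ends up looking roughly like the following; the socket path, credentials, and query are placeholders, and oursql uses ?-style parameters:

import oursql

# unix_socket is accepted even though the Connection docs don't list it.
conn = oursql.connect(
    user='dbuser', passwd='secret', db='mydb',
    unix_socket='/tmp/mysql.sock')  # placeholder socket path

curs = conn.cursor()
curs.execute('SELECT SUM(amount) FROM payments WHERE user_id = ?', (42,))
print(curs.fetchone()[0])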
If someone has trouble with inserting (getting a _statment charset AttributeError), see https://bugs.launchpad.net/oursql/+bug/669184: change the lines in oursql.c to the code in the report and rebuild it. (It will be fixed in 0.9.3.)