I'm writing a web app using Snap 0.6 and the Snaplet-hdbc infrastructure. In the backend, I'm using HDBC-mysql to connect to MySQL. But when running the app, it gets a "Commands out of sync; you can't run this command now" error from MySQL. I'm using withTransaction' for each query. After some googling, it seems that MySQL doesn't support multiple simultaneous queries on a single connection. But how do I avoid that with HDBC?
After some investigation, I found the solution: don't run SELECT statements inside withTransaction (or follow them with commit), and don't use query' for SELECTs. As far as I can tell, HDBC-mysql is built on the mysqlclient C library, and mysqlclient won't let you issue a new command while the result set of the previous query is still unfetched. Because of Haskell's laziness, if you run a SELECT inside withTransaction, the result set isn't actually fetched until your code demands it, so when withTransaction calls commit, MySQL reports the "Commands out of sync" error. As for query', it presumably returns the number of rows selected, but the selected rows stay buffered inside mysqlclient, which causes the same problem.
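A minimal sketch of that workaround using HDBC's plain API (the connection details and the table and column names are made up for illustration): fetch SELECT results strictly with quickQuery' so nothing lazy is left on the wire, and keep withTransaction for writes only.

import Database.HDBC
import Database.HDBC.MySQL

main :: IO ()
main = do
  -- connection details below are placeholders
  conn <- connectMySQL defaultMySQLConnectInfo { mysqlDatabase = "test" }
  -- quickQuery' is the strict variant: it fetches the entire result set
  -- before returning, so no half-read result blocks the next command
  rows <- quickQuery' conn "SELECT id, name FROM users" []
  print (length rows)
  -- keep transactions for write statements only
  _ <- withTransaction conn $ \c ->
         run c "UPDATE users SET name = ? WHERE id = ?"
               [toSql "bob", toSql (1 :: Int)]
  disconnect conn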
While I am just learning MySQL and MySQL Workbench, and may have done something boneheaded, I cannot find a reference to this.
Suddenly, no matter what line of code or what query I run, it outputs the same response, even if I query tables unrelated to that response. The database connection tests fine. I have run the use command. I have tried to google my way out of this and found nothing close to my situation. Everything was running just fine; I did not change the database, and I was only running some very basic SELECT queries.
Any ideas?
Did you check your installation?
I have encountered a strange behavior in MySQL, using DBI in Perl.
At the end of a Perl program, I issue a MySQL UPDATE command to a table. The command is executed using $dbh->execute(); and AutoCommit is turned on.
After the execute, the program issues $dbh->disconnect(); and exits.
The Perl program runs as part of a script. Immediately after the Perl program has stopped, another script executes. This script looks at the table that was updated, and this is where things become confusing to me.
Sometimes script 2 reads the old data in the table; sometimes it sees what was just updated. I cannot understand how the initial Perl program can do the $dbh->execute(); and yet the MySQL table seems to be updated several seconds later.
Any insight would be helpful! Cheers in advance.
Turns out the problem was never with either MySQL or Perl.
The problem was that the two scripts were being run by a crontab job. Unless told otherwise, cron does not run jobs under the bash shell.
See https://askubuntu.com/questions/117978/script-doesnt-run-via-crontab-but-works-fine-standalone for more information.
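For reference, cron defaults to /bin/sh; a crontab can request bash explicitly at the top of the file. A sketch (the schedule and script paths are placeholders):

SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
# run the updater, then the reader, as one bash job (paths are placeholders)
*/5 * * * * /home/user/update_table.pl && /home/user/read_table.sh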
I have been unable to determine the cause of the following Sequel::PoolTimeout error in a Ruby script I have written:
Sequel::PoolTimeout: Sequel::PoolTimeout
hold at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:100
hold at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/connection_pool/threaded.rb:93
synchronize at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/database/connecting.rb:234
execute at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:258
execute at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:793
fetch_rows at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/adapters/jdbc.rb:671
each at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:143
single_record at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:583
single_value at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:591
get at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:250
empty? at /Users/username/.rvm/gems/jruby-1.7.4#all/gems/sequel-4.2.0/lib/sequel/dataset/actions.rb:153
scrap at /Users/username/projectname/processers/get_category.rb:46
each at org/jruby/RubyArray.java:1617
each_with_index at org/jruby/RubyEnumerable.java:920
scrap at /Users/username/projectname/processers/get_category.rb:44
scrap at /Users/username/projectname/processers/get_category.rb:32
I have tried this with both MRI and JRuby with exactly the same results.
As per the instructions for the Sequel gem here, I have attempted to raise the pool_timeout limit as follows:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD&max_connections=10&pool_timeout=120")
It seems as though max_connections and pool_timeout may not be recognized; however, I don't see any other way to pass these arguments in with the connection.
The actual code that is in question here is:
if DB[:products].where(url: url.to_s).empty?
I have seen the code work just fine for a little while, but without fail it eventually errors out, either right away or after a couple of minutes, with no reproducible pattern as to when. I am starting to suspect a MySQL configuration issue, or something causing the localhost DBMS to stall for prolonged periods, although, again, I cannot reproduce any visible timeout with manual queries.
Any ideas as to why the timeout keeps happening or, more particularly, how to resolve it, either by feeding Sequel the proper settings (perhaps I have a malformed argument list) or by modifying MySQL's /etc/my.cnf for such a scenario?
The Sequel jdbc adapter passes the connection string directly to JDBC; it doesn't parse out embedded options. You need to do:
DB = Sequel.connect("jdbc:mysql://localhost/project_db?user=USERNAME&password=PASSWD", :max_connections=>10, :pool_timeout=>120)
I'm planning to debug a Joomla site by logging each query and its execution time to a database table. I have more than 10 models which run different queries. I'm pretty sure that all the queries go through a single place/class before executing, but I have no idea where or what that place/class is.
My issue is: is there a central place I can edit to log each database query and its execution time? I mean something like editing a core file just to log every SQL query and its execution time.
How can I get it done?
Have you considered using Joomla's built-in System Debug?
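(System Debug is toggled under Global Configuration > System > Debug System; it persists as the $debug property of configuration.php in the Joomla root, roughly like this:)

// excerpt from configuration.php; the value is what the toggle writes
public $debug = '1';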
Rather than trying to do this programmatically with brute force, it seems it would be far easier and less intrusive to use a proper SQL benchmarking tool such as the MySQL Benchmark Suite. Another possible non-brute-force option might be Toad World.
If you wanted to stay away from third-party tools, a slow query log might be the place to start.
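The slow query log can even be enabled at runtime without touching my.cnf; the threshold and file path here are illustrative:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- illustrative path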
If you really want to do it via Joomla (a hack):
Go to Joomla's database driver; for 3.3 that is libraries/joomla/database/driver.php.
Remove the setDebug function (in case some component sets it to 0).
At the start of the file, change the $debug = false; default so that $this->debug is always true.
Now, every query gets logged together with profile information.
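A gentler variant of the same idea, with no core edits, is to switch debugging on at runtime and read the recorded queries back through the driver's public API. A sketch, assuming a Joomla 3.x environment (newer 3.x drivers also expose getTimings() for per-query timestamps, but verify that against your version):

$db = JFactory::getDbo();
$db->setDebug(true);   // ask the driver to record each query

// ... run the code under test ...

foreach ($db->getLog() as $i => $sql) {
    echo $i . ': ' . $sql . "\n";   // each SQL string the driver executed
}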
I don't know what causes it, but I have a Node app that keeps crashing. The console says "Segmentation fault", and it looks like it happens when two MySQL objects are instantiated (using the db-mysql module), which becomes very common when 10+ users are on my site (I'm not posting the link to the app because I'm afraid the load would crash it ;) but I'll post it if it would be useful).
Do you guys have any clue? My packages are up to date. Do you have a better package to use with MySQL (assuming that's where the problem lies)? Do you also encounter segfault issues using Node.js? (I guess not, because stability is one of the main advantages of Node.)
It turns out I was definitely doing something wrong: creating a new MySQL object and connecting to the DB on every request. Instead, I now store one MySQL object and run a single query for each... query. Working fine so far.
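For anyone landing here, a sketch of that shared-object approach using the widely used mysql package rather than db-mysql (the driver swap, credentials, and query are assumptions):

var mysql = require('mysql');

// create the pool once at startup, not per request (credentials are placeholders)
var pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'mydb',
  connectionLimit: 10   // cap concurrent connections instead of spawning freely
});

// each request borrows a connection from the pool and returns it automatically
function getUser(id, callback) {
  pool.query('SELECT * FROM users WHERE id = ?', [id], callback);
}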