asynchronous queries with MySQL/Perl DBI - mysql

It is possible that my question title is misleading, but here goes --
I am trying out a prototype app which involves three MySQL/Perl Dancer powered web apps.
The user goes to app A which serves up a Google maps base layer. On document ready, app A makes three jQuery ajax calls -- two to app B like so
http://app_B/points.json
http://app_B/polys.json
and one to app C
http://app_C/polys.json
Apps B and C query the MySQL database via DBI, and serve up json packets of points and polys that are rendered in the user's browser.
All three apps are proxied through Apache to Perl Starman running via plackup started like so
$ plackup -E production -s Starman -w 10 -p 5000 path/to/app_A/app.pl
$ plackup -E production -s Starman -w 10 -p 5001 path/to/app_B/app.pl
$ plackup -E production -s Starman -w 10 -p 5002 path/to/app_C/app.pl
From time to time, I start getting errors back from the apps called via Ajax. The initial symptoms were
{"error":"Warning caught during route
execution: DBD::mysql::st fetchall_arrayref
failed: fetch() without execute() at
<path/to/app_B/app.pm> line 79.\n"}
The offending lines are
71> my $sql = qq{
72> ..
73>
74>
75> };
76>
77> my $sth = $dbh->prepare($sql);
78> $sth->execute();
79> my $res = $sth->fetchall_arrayref({});
This is bizarre... how can execute() not take place above? Perl doesn't have a habit of jumping over lines, does it? So, I turned on DBI_TRACE
$ DBI_TRACE=2=logs/dbi.log plackup -E production -p 5001 -s Starman -w 10 -a bin/app.pl
And, following is what stood out to me as the potential culprit in the log file
> Handle is not in asynchronous mode error 2000 recorded: Handle is
> not in asynchronous mode
> !! ERROR: 2000 CLEARED by call to fetch method
What is going on? Basically, as is, app A is non-functional because the other apps don't return data "reliably" -- I put that in quotes because they do work correctly occasionally, so I know I don't have any logic or syntax errors in my code. I have some kind of intrinsic plumbing errors.
I did find the following on DBD::mysql about ASYNCHRONOUS_QUERIES and am wondering if this is the cause of, and the solution to, my problem. Essentially, if I want async queries, I have to add {async => 1} to my $dbh->prepare(). Except, I am not sure if I want async true or false. I tried it, and it doesn't seem to help.
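For reference, the async usage I tried looks roughly like this (a sketch only; the DSN, credentials and query here are made up, not my actual code):
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:database=gis;host=localhost',   # hypothetical DSN
                       'user', 'password', { RaiseError => 1 });

# Ask the server to run the query asynchronously...
my $sth = $dbh->prepare('SELECT id, lat, lon FROM points', { async => 1 });
$sth->execute();

# ...do other work here, then wait for the result before fetching.
1 until $sth->mysql_async_ready;    # busy-waits for brevity
$sth->mysql_async_result;           # must be called before any fetch
my $res = $sth->fetchall_arrayref({});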
I would love to learn what is going on here, and what is the right way to solve this.

How are you managing your database handles? If you are opening a connection before Starman forks your code, then multiple children may be trying to share one database handle, which confuses MySQL. You can solve this problem by always running DBI->connect in the methods that talk to the database, but that can be inefficient. Many people switch over to some sort of connection pool, but I have no direct experience with any of them.
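A minimal sketch of a middle ground is to connect once per worker process, rather than once per request or once before the fork (the DSN, credentials and helper name below are placeholders, not your code):
use strict;
use warnings;
use DBI;

my ($dbh, $owner_pid);

sub get_dbh {
    # Reconnect if there is no handle yet, if it has gone away, or if we are
    # in a forked child that inherited the parent's connection.
    if (!$dbh || $owner_pid != $$ || !$dbh->ping) {
        $dbh = DBI->connect(
            'dbi:mysql:database=gis;host=localhost',   # hypothetical DSN
            'user', 'password',
            { RaiseError => 1, AutoCommit => 1 },
        );
        $owner_pid = $$;
    }
    return $dbh;
}
Since these are Dancer apps, Dancer::Plugin::Database reportedly does this per-process bookkeeping for you, so it may be worth a look.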

Related

Perl gives "gzip: stdout: Broken pipe" error when opening gzipped files, but only if connecting to a DB

Consider the following program, running on a Linux machine, which opens a gzipped input file:
#!/usr/bin/env perl
open (my $fileHandle, "-|", "/bin/zcat $ARGV[0]");
my $ff = <$fileHandle>;
close($fileHandle);
That works as expected (it does nothing, but prints no error):
$ bar.pl file.gz
$
Now, if I use the same code but previously connect to a MySQL database, gzip will complain (you can run the code directly, this is an open DB and the credentials will work):
#!/usr/bin/env perl
use DBI;
use strict;
use warnings;
my $dsn = "DBI:mysql:database=hg19;host=genome-mysql.cse.ucsc.edu";
my $db = DBI->connect($dsn, 'genomep', 'password');
my $dbResults = $db->prepare("show tables");
my $ret = $dbResults->execute();
$dbResults->finish();
$db->disconnect();
open (my $fileHandle, "-|", "/bin/zcat $ARGV[0]");
my $ff = <$fileHandle>;
close($fileHandle);
Running the above gives:
$ foo.pl file.gz
gzip: stdout: Broken pipe
This is obviously part of a much more complicated program, but I've managed to trim it down to this silly snippet that reproduces the issue.
What's going on? Why does connecting to a DB affect how gzip behaves? Note that everything seems to work (in the actual program, I do something useful with the gzipped data) but why am I getting that error message?
It turns out this behavior is specific to (slightly) older versions of Perl and/or DBI. On the machines where it failed, I have:
Ubuntu
Perl 5, version 22, subversion 1 (v5.22.1) built for x86_64-linux-gnu-thread-multi
DBI 1.634
DBD 4.033
gzip 1.6
However, on another two machines it did work. These had:
Ubuntu
Perl 5, version 26, subversion 1 (v5.26.1) built for x86_64-linux-gnu-thread-multi
DBI 1.640
DBD 4.033
gzip 1.6
And
Arch Linux
Perl 5, version 30, subversion 0 (v5.30.0) built for x86_64-linux-thread-multi
DBI 1.642
DBD 4.050
gzip 1.10
At least here, it appears that the MySQL libraries (probably) are masking (ignoring) SIGPIPE, and that's what you're seeing. Comparing strace outputs, I see a line like this in the MySQL run:
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[PIPE], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7f78bdf16840}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
And it turns out you can duplicate the behavior easily w/o MySQL:
$SIG{PIPE} = 'IGNORE';
open (my $fileHandle, "-|", "/bin/zcat $ARGV[0]");
my $ff = <$fileHandle>;
close($fileHandle);
Or, alternatively, you can make the message go away, even after connecting to MySQL, by resetting the signal to the default handler, i.e. setting it to 'DEFAULT' instead of 'IGNORE'.
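A sketch of that workaround, reusing the connection details from the snippet above:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

my $dsn = "DBI:mysql:database=hg19;host=genome-mysql.cse.ucsc.edu";
my $db  = DBI->connect($dsn, 'genomep', 'password');
$db->disconnect();

# The MySQL client library left SIGPIPE ignored; restore the default so the
# zcat child simply dies on a broken pipe instead of reporting a write error.
$SIG{PIPE} = 'DEFAULT';

open (my $fileHandle, "-|", "/bin/zcat $ARGV[0]");
my $ff = <$fileHandle>;
close($fileHandle);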
This is, by the way, documented behavior of the MySQL library:
To avoid aborting the program when a connection terminates, MySQL blocks SIGPIPE on the first call to mysql_library_init(), mysql_init(), or mysql_connect().
(It may also depend on the gzip version; maybe some versions of gzip set up signal handlers on init.)
Ultimately, what you're seeing is that if gzip gets a SIGPIPE, it just exits. If it gets an error back from write (because SIGPIPE is ignored), it prints an error message.
Most probably the following is happening:
gzip tries to write to the pipe, but the program on your side does not read up to EOF and then closes the pipe. gzip then receives a SIGPIPE and dies with this error message. Can you confirm that this is taking place?
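If that is the case, an alternative sketch is to drain the pipe before closing it, so zcat never writes into a closed pipe in the first place:
open (my $fileHandle, "-|", "/bin/zcat $ARGV[0]");
my $ff = <$fileHandle>;    # keep the first line, as before
1 while <$fileHandle>;     # read and discard the rest of the stream
close($fileHandle);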

MySQL login-path issues with clustercheck script used in xinetd

# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
It is kind of strange that the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of using the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status was showing signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could verify the binary's location with the Linux which command.
It has been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and the correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud#gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
then edit the script /usr/bin/mysqlclustercheck and, in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
replace the bare mysql with the exact path of the mysql client command you found in the previous step.
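With the location found in the example above, that line would become:
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \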
I also see that you are not passing MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password in order to connect to the MySQL server.
So normally, you should execute the script in the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Where haproxy/haproxyMySQLpass is the MySQL connection user/pass for HAProxy monitoring user.
Additionally, you should specify them to your script's xinetd settings like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting is because you are trying to write something from a script run by xinetd. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (13 in POSIX).
Finally, I had issues with this script using SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps.

Communicating with interactive processes via Ruby popen

I've been messing around with IO#popen and different programs, and having some trouble with interactive processes.
Here's a stripped down version of the script:
def test(command, string)
IO.popen(command, 'a+') do |pipe|
puts "Prompt: #{pipe.read(5)}" # Just to show whether data is read in
pipe.puts string
end
end
I'm seeing various behavior with a few different interactive processes, and trying to understand why.
$ test('pt-kill --user user --ask-pass --print', 'password')
=> This successfully reads in the prompt, and the password is successfully written
to the script. Works as desired. (This is a perl script from Percona)
$ test('telnet', 'quit')
=> Blocks indefinitely trying to read the prompt. In the process of hacking around,
found that calling 'pipe.close_write' prior to the read would allow the read to
complete. Why?
$ test('mysql -u user -p -e "SELECT 1 FROM DUAL"', 'password')
=> Echoes full prompt to the screen, but is still blocking on the first read.
Adding a 'pipe.close_write' does nothing.
I've been trying to understand the differences, but am at a loss. Anyone have an explanation?

How can I view live MySQL queries?

How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | OFF                        |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any db). Grep or otherwise examine /var/run/mysqld/mysqld.log
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably considerably slow down everything on the server though, with adding an extra INSERT on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file would remove a lot of possibilities for flexible display, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile would do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u The MySQL user you want to execute the command as
-p Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i The interval in seconds.
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
There is a possible downside: fast queries might not show up if they run between the polling intervals you set up. For example, my interval is set at one second; if a query takes 0.02 seconds and starts and finishes between two polls, you won't see it.
Use this option preferably when you quickly want to check on running queries without having to set up a listener or anything else.
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overheads. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just like with any SQL query.
e.g. Shows all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
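If you would rather poll this from a script (for example with Perl/DBI, in keeping with the rest of this page), a minimal sketch might look like the following; the DSN and credentials are made up:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

# Hypothetical DSN/credentials; adjust for your server.
my $dbh = DBI->connect('dbi:mysql:database=information_schema;host=localhost',
                       'monitor', 'secret', { RaiseError => 1 });

my $sth = $dbh->prepare(q{
    SELECT ID, USER, TIME, INFO
      FROM INFORMATION_SCHEMA.PROCESSLIST
     WHERE COMMAND != 'Sleep'
});

while (1) {
    $sth->execute();
    while (my $row = $sth->fetchrow_hashref) {
        printf "%-8s %-10s %5ss  %s\n",
            $row->{ID}, $row->{USER}, $row->{TIME}, $row->{INFO} // '';
    }
    sleep 1;    # poll once per second, much like mysqladmin -i 1
}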
strace
The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you may format strace's output by piping it (just add | between these two one-liners) into the following command:
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching any configuration files.
Obviously this won't replace the standard way of enabling logs, which is described further below (and which involves reloading the SQL server).
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when the execution enters mysql_parse */
{
printf("Query: %s\n", copyinstr(arg1));
}
Save above script to a file (like watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are the steps, useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.
then restart your MySQL/MariaDB by (prefix with sudo if necessary):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
After you finish, set general_log back to 0 (so you can use it in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.
I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it a best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
while [ 1 -le 1 ]
do
mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope maybe this helps someone.
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup on a Linux Ubuntu machine I have come across. Crazy to see all the queries live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
FROM http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySql workbench to open a command line, enter the password.
Run the following to turn on general logging, which will record all queries that are run:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location.
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;
namespace LiveLogs.ConsoleApp
{
class Program
{
static void Main(string[] args)
{
// Console sizing can cause exceptions if you are using a
// small monitor. Change as required.
Console.SetWindowSize(152, 58);
Console.BufferHeight = 1500;
string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
Console.Title = string.Format("Live Logs {0}", filePath);
var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);
// Move to the end of the stream so we do not read in existing
// log text, only watch for new text.
fileStream.Position = fileStream.Length;
StreamReader streamReader;
// Commented lines are for duplicating the log output as it's written to
// allow verification via a diff that the contents are the same and all
// is being output.
// var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
// var sw = new StreamWriter(fsWrite);
int rowNum = 0;
while (true)
{
streamReader = new StreamReader(fileStream);
string line;
string rowStr;
while (streamReader.Peek() != -1)
{
rowNum++;
line = streamReader.ReadLine();
rowStr = rowNum.ToString();
string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
Console.WriteLine(output);
// sw.WriteLine(output);
}
// sw.Flush();
Thread.Sleep(500);
}
}
}
}
In addition to previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool) which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed (for either client or app).
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, there is a good open-source tool, Percona Monitoring and Management.
However, it is a server-based system, and it is not entirely trivial to launch.
It also has a live demo system you can try.

MySQL driver segfaulting under mod_perl - where to look for issue

I have a webapp that segfaults when the database in restarted and it tries to use the old connections. Running it under gdb --args apache -X leads to the following output:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1212868928 (LWP 16098)]
0xb7471c20 in mysql_send_query () from /usr/lib/libmysqlclient.so.15
I've checked that the drivers and database are all up to date (DBD::mysql 4.0008, MySQL 5.0.32-Debian_7etch6-log).
Annoyingly I can't reproduce this with a trivial script:
use DBI;
use Test::More tests => 2;
my $dbh = DBI->connect( "dbi:mysql:test", 'root' );
sub test_db {
my ($number) = $dbh->selectrow_array("select 1 ");
return $number;
}
is test_db, 1, "connected to db";
warn "restart db now";
getc;
is test_db, 1, "connected to db";
Which gives the following:
ok 1 - connected to db
restart db now at dbd-mysql-test.pl line 23.
DBD::mysql::db selectrow_array failed: MySQL server has gone away at dbd-mysql-test.pl line 17.
not ok 2 - connected to db
# Failed test 'connected to db'
# at dbd-mysql-test.pl line 26.
# got: undef
# expected: '1'
This behaves correctly, telling me why the request failed.
What stumps me is that it is segfaulting, which it shouldn't do. As it only appears to happen when the whole app is running (which uses DBIx::Class) it is hard to reduce it to a test case.
Where should I start to look to debug this? Has anyone else seen this?
UPDATE: further prodding showed that it being under mod_perl was a red herring. Having reduced it to a simple test script I've now posted to the DBI mailing list. Thanks for your answers.
What this probably means is that there's a difference between your mod_perl environment and the one you were testing via your script. Some things to check (a quick way to compare the first two is sketched after this list):
Was your mod_perl compiled with the same version of Perl?
Are the @INC paths the same for both?
Are you using threads in your mod_perl setup? I don't believe DBD::mysql is completely thread-safe.
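A minimal sketch for that comparison: dump the interpreter version and module search path from the shell and again from inside mod_perl (e.g. via a registry script), then diff the two outputs.
#!/usr/bin/env perl
use strict;
use warnings;

# Print the Perl version and the module search path.
print "perl $]\n";
print "$_\n" for @INC;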
I've seen this problem, but I'm not sure it had the same cause as yours. Are you by chance using a certain module for sending mails (forgot the name, sorry) from your application? When we had the problem in a project, after days of debugging we found that this mail module was doing strange things with open file descriptors, then forked off another process which called the console tool sendmail, which again did strange things with file descriptors. I guess one of the file descriptors it messed around with was the connection to the database, but I'm still not sure about that. The problem disappeared when we switched to another module for sending mails. Maybe it's worth a look for you too.
If you're getting a segfault, do you have a core file created? If not, check ulimit -c. If that returns 0, your system won't create core files and you'll have to change that. If you do have a core file, you can use gdb or similar tools to debug it. It's not particularly fun, but it's possible. The start of the command will look something like:
gdb /usr/bin/httpd core
There are plenty of tutorials for debugging core files scattered about the Web.
Update: Just found a reference for ensuring you get core dumps from mod_perl. That should help.
This is a known problem in old DBD::mysql. Upgrade it (4.008 is not up to date).
There's a simple test script attached to https://rt.cpan.org/Public/Bug/Display.html?id=37027
that will trigger this bug.