How to optimize this script performing INSERTS into a database? - mysql

I have already completed a script that inserts data from log files into a MySQL table and then moves each file into another directory, until no files are left. There are around 51 files and it takes around 9 seconds to finish the execution. So my question is: is there a better way to speed up the execution process?
The code is:
our $DIR="/home/aimanhalim/LOG";
our $FILENAME_REGEX = "server_performance_";
# mariaDB config hash
our %db_config = ( "username"=>"root", "password"=> "", "db"=>"Top_Data", "ip" => "127.0.0.1", "port" => "3306");
main();
exit;
sub main()
{
my $start = time();
print "Searching file $FILENAME_REGEX in $DIR...\n";
opendir (my $dr , $DIR) or die "<ERROR> Cannot open dir: $DIR \n";
while( my $file = readdir $dr )
{
print "file in $DIR: [$file]\n";
next if (($file eq ".") || ($file eq "..") || ($file eq "DONE"));
#Opening The File in the directory
open(my $file_hndlr, "<$DIR/$file");
#Making Variables.
my $line_count = 0;
my %data = ();
my $dataRef = \%data;
my $move = "$DIR/$file";
print "$file\n";
while (<$file_hndlr>)
{
my $line = $_;
chomp($line);
print "line[$line_count] - [$line]\n";
if($line_count == 0)
{
# get load average from line 0
($dataRef) = get_load_average($line,$dataRef);
print Dumper($dataRef);
}
elsif ($line_count == 2)
{
($dataRef) = get_Cpu($line,$dataRef);
print Dumper($dataRef);
}
$line_count++;
}
#insert db
my ($result) = insert_record($dataRef,\%db_config,$file);
my $Done_File="/home/aimanhalim/LOG/DONE";
sub insert_record(){
my($data,$db_config,$file)=@_;
my $result = -1; # -1 fail; 0 - succ
# connect to db
# connect to MySQL database
my $dsn = "DBI:mysql:database=".$db_config->{'db'}.";host=".$db_config->{'ip'}.";port=".$db_config->{'port'};
my $username = $db_config->{'username'};
my $password = $db_config->{'password'};
my %attr = (PrintError=>0,RaiseError=>1 );
my $dbh = DBI->connect($dsn,$username,$password,\%attr) or die $DBI::errstr;
print "We Have Successfully Connected To The Database \n";
$stmt->execute(@param_bind);
# *** this line is the insert data statement (snipped) ***
$stmt->finish();
print "The Data Has Been Inserted Successfully\n";
$result = 0;
return($result);
# commit
$dbh->commit();
# return succ / if fail rollback and return fail
$dbh->disconnect();
}
exit;
Edited:
So this is pretty much my code, with some snipping here and there.
I tried to put the insert_record call below the #insert db comment, but I don't think that changed anything.

You are connecting to the database for every file that you want to insert (if I read your code correctly; there also seems to be a closing curly brace missing, so it won't actually compile as posted). Opening new database connections is (comparatively) slow.
Open the connection once, before inserting the first file, and re-use it for subsequent inserts into the database. Close the connection after your last file has been inserted. This should give you a noticeable speed-up.
(Depending on the amount of data, 9 seconds might actually not be too bad; but since there is no information on that, it's hard to say.)
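A minimal sketch of that restructuring is shown below. The table name, column names, and the parse_file() helper are placeholders, since the actual INSERT statement and parsing code were snipped from the question; only the "connect once, prepare once, execute per file" shape is the point here.
use strict;
use warnings;
use DBI;

# Connect once, before the file loop, instead of once per file.
my $dsn = "DBI:mysql:database=$db_config{db};host=$db_config{ip};port=$db_config{port}";
my $dbh = DBI->connect( $dsn, $db_config{username}, $db_config{password},
    { PrintError => 0, RaiseError => 1 } )
    or die $DBI::errstr;

# Prepare the INSERT once as well; only the bound values change per file.
# Table and column names below are placeholders, not the OP's actual schema.
my $sth = $dbh->prepare(
    'INSERT INTO server_performance (filename, load_avg, cpu_idle) VALUES (?, ?, ?)'
);

opendir my $dr, $DIR or die "<ERROR> Cannot open dir: $DIR\n";
while ( my $file = readdir $dr ) {
    next if $file eq '.' or $file eq '..' or $file eq 'DONE';
    my $dataRef = parse_file("$DIR/$file");   # stands in for the get_load_average/get_Cpu parsing
    $sth->execute( $file, $dataRef->{load_avg}, $dataRef->{cpu_idle} );
}
closedir $dr;

$dbh->disconnect;   # close the connection only after the last file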

Related

write into a csv file in multiple cells

I am coding in Perl. How can I write multiple variables into a CSV file, putting each one in a separate cell on the same line?
This is a part of my code:
#!/usr/bin/perl
use feature qw(say);
use strict;
use warnings;
use constant BUFSIZE => 6;
my $year += 1900;
my $input_file = 'path\ZONE0.txt';
my $outputfile = 'path\outputfile.csv';
open (my $BIN, "<:raw", $input_file) or die "can't open the file $input_file: $!";
my $buffer;
open(FH, '>>', $outputfile) or die $!;
while (1) {
my $bytes_read = sysread $BIN, $buffer, BUFSIZE;
die "Could not read file $input_file: $!" if !defined $bytes_read;
last if $bytes_read <= 0;
my @decimal= map { unpack "C", $_ } split //, $buffer;
my $start= $decimal[0];
my $DevType = $decimal[1];
my @hexDevType = sprintf("0x%x", $DevType);
my @DevUID =($decimal[5], $decimal[4], $decimal[3], $decimal[2]);
my @hexDevUID = map { sprintf("0x%x",$_) } @DevUID;
print FH $start, ' ' , print FH $DevType,' ', @hexDevUID , "\n";
}
close $BIN;
This results in putting all the variables next to each other in one cell, which is not what I want. Can you help me separate the variables?
CSV files don't have cells. I suspect you're opening the file in a spreadsheet program.
The secret of a CSV file is that the values are separated by commas. So you need to put commas between any values that you want to appear in separate cells in your spreadsheet.
It looks like your data is in @hexDevUID. The simplest way is to turn that into a comma-separated string using join():
join(',', @hexDevUID)
But the more robust approach will be to use Text::CSV_XS.
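If you go that route, a minimal sketch could look like the following. It assumes the output file is opened to a lexical handle and reuses the variable names from the snippet above.
use Text::CSV_XS;

# One writer object, created once; eol adds the newline after each printed row.
my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" });

open my $out, '>>', $outputfile or die "Can't open $outputfile: $!";

# Inside the read loop: print the fields as one row. Text::CSV_XS inserts the
# commas and quotes any value that itself contains a comma.
$csv->print($out, [ $start, $DevType, @hexDevUID ])
    or die "CSV write failed: " . $csv->error_diag;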
Below is the OP's code, modified so that it does not use any CSV modules for the output.
Error-handling code has been added for read errors and for reading fewer bytes than are needed for further processing.
use strict;
use warnings;
use feature 'say';
use constant BUFSIZE => 6;
my($buffer,$bytes_read);
my $infile = shift || 'path\ZONE0.txt';
my $outfile = 'path\outputfile.csv';
open my $in, '<:raw', $infile
or die "Can't open $infile: $!";
open my $out, '+>>', $outfile
or die "Can't open $outfile: $!";
do {
$bytes_read = sysread $in, $buffer, BUFSIZE;
die "Error: read from $infile: $!" unless defined $bytes_read;
error_handler($bytes_read) unless $bytes_read == 6;
my @decimal = map { ord } split //, $buffer;
my($start,$DevType) = @decimal[0,1];
my @hexDevUID = map { sprintf("0x%02x",$_) } @decimal[5,4,3,2];
say $out join(',',($start,$DevType,@hexDevUID));
} while ( $bytes_read );
sub error_handler {
my $bytes = shift;
close $out;
close $in;
say "
Error: called error_handler(\$read_bytes)
Action: Emergency file closure to preserve data
Cause: Read insufficient $bytes bytes
" unless $bytes == 0;
exit $bytes ? 1 : 0;
}
The loop can be rewritten with the use of unpack as follows:
do {
$bytes_read = sysread $in, $buffer, BUFSIZE;
die "Error: read from $infile: $!" unless defined $bytes_read;
error_handler($bytes_read) unless $bytes_read == 6;
my($start,$DevType,@devUID) = unpack('CCC4',$buffer);
my @hexDevUID = reverse map { sprintf "0x%02x", $_ } @devUID;
say $out join(',',($start,$DevType,@hexDevUID));
} while ( $bytes_read );

Compare 2 CSV Huge CSV Files and print the differences to another csv file using perl

I have 2 CSV files with multiple fields (approx. 30 fields) and of huge size (approx. 4 GB).
File1:
EmployeeName,Age,Salary,Address
Vinoth,12,2548.245,"140,North Street,India"
Vivek,40,2548.245,"140,North Street,India"
Karthick,10,10.245,"140,North Street,India"
File2:
EmployeeName,Age,Salary,Address
Vinoth,12,2548.245,"140,North Street,USA"
Karthick,10,10.245,"140,North Street,India"
Vivek,40,2548.245,"140,North Street,India"
I want to compare these 2 files and report the differences into another CSV file. In the above example, the details for employees Vivek and Karthick are present on different row numbers, but the record data is the same, so they should be considered a match. Employee Vinoth's record should be considered a mismatch, since the address differs.
The output diff.csv file should contain the mismatched records from File1 and File2, as below.
Diff.csv
EmployeeName,Age,Salary,Address
F1, Vinoth,12,2548.245,"140,North Street,India"
F2, Vinoth,12,2548.245,"140,North Street,USA"
I've written the code so far as below. After this, I'm confused about which option to choose: a binary search or some other, more efficient way to do this. Could you please help me?
My approach:
1. Load File2 into memory as a hash of hashes.
2. Read File1 line by line and match it against the hash of hashes in memory.
use strict;
use warnings;
use Text::CSV_XS;
use Getopt::Long;
use Data::Dumper;
use Text::CSV::Hashify;
use List::BinarySearch qw( :all );
# Get Command Line Parameters
my %opts = ();
GetOptions( \%opts, "file1=s", "file2=s", )
or die("Error in command line arguments\n");
if ( !defined $opts{'file1'} ) {
die "CSV file --file1 not specified.\n";
}
if ( !defined $opts{'file2'} ) {
die "CSV file --file2 not specified.\n";
}
my $file1 = $opts{'file1'};
my $file2 = $opts{'file2'};
my $file3 = 'diff.csv';
print $file2 . "\n";
my $csv1 =
Text::CSV_XS->new(
{ binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
my $csv2 =
Text::CSV_XS->new(
{ binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
my $csvout =
Text::CSV_XS->new(
{ binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
open( my $fh1, '<:encoding(utf8)', $file1 )
or die "Cannot not open '$file1' $!.\n";
open( my $fh2, '<:encoding(utf8)', $file2 )
or die "Cannot not open '$file2' $!.\n";
open( my $fh3, '>:encoding(utf8)', $file3 )
or die "Cannot not open '$file3' $!.\n";
binmode( STDOUT, ":utf8" );
my $f1line = undef;
my $f2line = undef;
my $header1 = undef;
my $f1empty = 'false';
my $f2empty = 'false';
my $reccount = 0;
my $hash_ref = hashify( "$file2", 'EmployeeName' );
if ( $f1empty eq 'false' ) {
$f1line = $csv1->getline($fh1);
}
while (1) {
if ( $f1empty eq 'false' ) {
$f1line = $csv1->getline($fh1);
}
if ( !defined $f1line ) {
$f1empty = 'true';
}
if ( $f1empty eq 'true' ) {
last;
}
else {
## Read each line from File1 and match it with the File 2 which is loaded as hashes of hashes in perl. Need help here.
}
}
print "End of Program" . "\n";
Storing data of this magnitude in a database is the most appropriate approach to a task of this kind. At a minimum SQLite is recommended, but other databases (MariaDB, MySQL, PostgreSQL) will work quite well.
The following code demonstrates how the desired output can be achieved without special modules, but it does not take into account possibly messed-up input data. The script will report records as different even if the difference is just one extra space.
The default output goes to the console window unless you specify the output option.
NOTE: The whole of file #1 is read into memory; please be patient, processing big files can take a while.
use strict;
use warnings;
use feature 'say';
use Getopt::Long qw(GetOptions);
use Pod::Usage;
use Data::Dumper;    # needed for the --debug dump below
my %opt;
my @args = (
'file1|f1=s',
'file2|f2=s',
'output|o=s',
'debug|d',
'help|?',
'man|m'
);
GetOptions( \%opt, @args ) or pod2usage(2);
print Dumper(\%opt) if $opt{debug};
pod2usage(1) if $opt{help};
pod2usage(-exitval => 0, -verbose => 2) if $opt{man};
pod2usage(1) unless $opt{file1};
pod2usage(1) unless $opt{file2};
unlink $opt{output} if defined $opt{output} and -f $opt{output};
compare($opt{file1},$opt{file2});
sub compare {
my $fname1 = shift;
my $fname2 = shift;
my $hfile1 = file2hash($fname1);
open my $fh, '<:encoding(utf8)', $fname2
or die "Couldn't open $fname2";
while(<$fh>) {
chomp;
next unless /^(.*?),(.*)$/;
my($key,$data) = ($1, $2);
if( !defined $hfile1->{$key} ) {
my $msg = "$fname1 $key is missing";
say_msg($msg);
} elsif( $data ne $hfile1->{$key} ) {
my $msg = "$fname1 $key,$hfile1->{$key}\n$fname2 $_";
say_msg($msg);
}
}
}
sub say_msg {
my $msg = shift;
if( $opt{output} ) {
open my $fh, '>>:encoding(utf8)', $opt{output}
or die "Couldn't to open $opt{output}";
say $fh $msg;
close $fh;
} else {
say $msg;
}
}
sub file2hash {
my $fname = shift;
my %hash;
open my $fh, '<:encoding(utf8)', $fname
or die "Couldn't open $fname";
while(<$fh>) {
chomp;
next unless /^(.*?),(.*)$/;
$hash{$1} = $2;
}
close $fh;
return \%hash;
}
__END__
=head1 NAME
comp_cvs - compares two CSV files and stores the differences
=head1 SYNOPSIS
comp_cvs.pl -f1 file1.cvs -f2 file2.cvs -o diff.txt
Options:
-f1,--file1 input CSV filename #1
-f2,--file2 input CSV filename #2
-o,--output output filename
-d,--debug output debug information
-?,--help brief help message
-m,--man full documentation
=head1 OPTIONS
=over 4
=item B<-f1,--file1>
Input CSV filename #1
=item B<-f2,--file2>
Input CSV filename #2
=item B<-o,--output>
Output filename
=item B<-d,--debug>
Print debug information.
=item B<-?,--help>
Print a brief help message and exits.
=item B<--man>
Prints the manual page and exits.
=back
=head1 DESCRIPTION
B<This program> accepts B<input> and processes it to B<output> with the purpose of achieving some goal.
=head1 EXIT STATUS
The section describes B<EXIT STATUS> codes of the program
=head1 ENVIRONMENT
The section describes B<ENVIRONMENT VARIABLES> utilized in the program
=head1 FILES
The section describes B<FILES> which used for program's configuration
=head1 EXAMPLES
The section demonstrates some B<EXAMPLES> of the code
=head1 REPORTING BUGS
The section provides information on how to report bugs
=head1 AUTHOR
The section describing the author and his contact information
=head1 ACKNOWLEDGMENT
The section giving credit to people in some way related to the code
=head1 SEE ALSO
The section describing related information - reference to other programs, blogs, website, ...
=head1 HISTORY
The section gives historical information related to the code of the program
=head1 COPYRIGHT
Copyright information related to the code
=cut
Output for test files
file1.cvs Vinoth,12,2548.245,"140,North Street,India"
file2.cvs Vinoth,12,2548.245,"140,North Street,USA"
#!/usr/bin/env perl
use Data::Dumper;
use Digest::MD5;
use 5.01800;
use warnings;
my %POS;
my %chars;
open my $FILEA,'<',q{FileA.txt}
or die "Can't open 'FileA.txt' for reading! $!";
open my $FILEB,'<',q{FileB.txt}
or die "Can't open 'FileB.txt' for reading! $!";
open my $OnlyInA,'>',q{OnlyInA.txt}
or die "Can't open 'OnlyInA.txt' for writing! $!";
open my $InBoth,'>',q{InBoth.txt}
or die "Can't open 'InBoth.txt' for writing! $!";
open my $OnlyInB,'>',q{OnlyInB.txt}
or die "Can't open 'OnlyInB.txt' for writing! $!";
<$FILEA>,
$POS{FILEA}=tell $FILEA;
<$FILEB>,
$POS{FILEB}=tell $FILEB;
warn Data::Dumper->Dump([\%POS],[qw(*POS)]),' ';
{ # Scan for first character of the records involved
while (<$FILEA>) {
$chars{substr($_,0,1)}++;
};
while (<$FILEB>) {
$chars{substr($_,0,1)}--;
};
# So what characters do we need to deal with?
warn Data::Dumper->Dump([\%chars],[qw(*chars)]),' ';
};
my @chars=sort keys %chars;
{
my %_h;
# For each of the characters in our character set
for my $char (@chars) {
warn Data::Dumper->Dump([\$char],[qw(*char)]),' ';
# Beginning of data sections
seek $FILEA,$POS{FILEA},0;
seek $FILEB,$POS{FILEB},0;
%_h=();
my $pos=tell $FILEA;
while (<$FILEA>) {
next
unless (substr($_,0,1) eq $char);
# for each record save the lengthAndMD5 as the key and its start as the value
$_h{lengthAndMD5(\$_)}=$pos;
$pos=tell $FILEA;
};
my $_s;
while (<$FILEB>) {
next
unless (substr($_,0,1) eq $char);
if (exists $_h{$_s=lengthAndMD5(\$_)}) { # It's a duplicate
print {$InBoth} $_;
delete $_h{$_s};
}
else { # (Not in FILEA) It's only in FILEB
print {$OnlyInB} $_;
}
};
# only in FILEA
warn Data::Dumper->Dump([\%_h],[qw(*_h)]),' ';
for my $key (keys %_h) { # Only in FILEA
seek $FILEA,delete $_h{$key},0;
print {$OnlyInA} scalar <$FILEA>;
};
# Should be empty
warn Data::Dumper->Dump([\%_h],[qw(*_h)]),' ';
};
};
close $OnlyInB
or die "Could NOT close 'OnlyInB.txt' after writing! $!";
close $InBoth
or die "Could NOT close 'InBoth.txt' after writing! $!";
close $OnlyInA
or die "Could NOT close 'OnlyInA.txt' after writing! $!";
close $FILEB
or die "Could NOT close 'FileB.txt' after reading! $!";
close $FILEA
or die "Could NOT close 'FileA.txt' after reading! $!";
exit;
sub lengthAndMD5 {
return sprintf("%8.8lx-%32.32s",length(${$_[0]}),Digest::MD5::md5_hex(${$_[0]}));
};
__END__

Symfony2 add query logging in Symfony and in csv file

I need to add query logging in Symfony, writing to a CSV file. Sometimes the database can be busy or unavailable, but I still want to log all queries.
This log should be in csv format with columns:
-url
-datasource name
-SQL content
-parameters
-username
-start time of query execution (accuracy in ms)
-end time of query execution (accuracy in ms)
Is there any guidance on how I can do this, or what can be done for it?
Maybe a custom function in which I can create the CSV file with the currently logged-in user's info, together with the URL and the time taken to execute the query for that particular URL?
In Symfony, if you have a common SQL function which runs whenever a query executes, then this may be helpful for you.
This is what I have implemented in my case. I hope it helps.
public function execute($parameters = null)
{
if ($this->executed) {
return $this;
}
$executionStartTime = microtime(true);
$stmt = $this->execute($parameters);
$this->result = $stmt->fetchAll();
// Close cursor to allow query caching.
$stmt->closeCursor();
$this->executed = true;
$executionEndTime = microtime(true);
//below code is added to logging for SQL queries to web server in csv format
$execution_time = $executionEndTime - $executionStartTime;
$getpageParameter = $parameters;
$getusername = $this->getUser()->getUsername();
$url = $_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
$parms = json_encode($getpageParameter);
$data = array(
date ("Y-m-d H:i:s")."|".$url."|".$getusername."|".$parms."|".$executionStartTime."|".$executionEndTime,
);
if(!file_exists('/../../app/logs/querylog.csv')){
$column = array(
"DATE & TIME|URL|USERNAME|PARAMETERS|START TIME|END TIME"
);
$fp = fopen('/../../app/logs/querylog.csv', 'a+');
foreach ( $column as $line ) {
$val = explode("|", $line);
fputcsv($fp, $val);
}
fclose($fp);
}
$fp = fopen('/../../app/logs/querylog.csv', 'a+');
foreach ( $data as $line ) {
$val = explode("|", $line);
fputcsv($fp, $val);
}
fclose($fp);
return $this;
}

Perl Import large .csv to MySQL, don't repeat data

I am trying to import several .csv files into a MySQL database. The script below works, except that it only imports the first row of my CSV data into the database; both my tables end up with exactly one entry.
Any help would be appreciated.
Thank you
#!/usr/bin/perl
use DBI;
use DBD::mysql;
use strict;
use warnings;
# MySQL CONFIG VARIABLES
my $host = "localhost";
my $user = "someuser";
my $pw = "somepassword";
my $database = "test";
my $dsn = "DBI:mysql:database=" . $database . ";host=" . $host;
my $dbh = DBI->connect($dsn, $user, $pw)
or die "Can't connect to the DB: $DBI::errstr\n";
print "Connected to DB!\n";
# enter the file name that you want import
my $filename = "/home/jonathan/dep/csv/linux_datetime_test_4.26.13_.csv";
open FILE, "<", $filename or die $!;
$_ = <FILE>;
$_ = <FILE>;
while (<FILE>) {
my @f = split(/,/,$_);
if (length($f[4]) < 10) {
print "No Weight\n";
}
else {
#insert the data into the db
print "insert into datetime_stamp\n";
}
my $sql = "INSERT INTO datetime_stamp (subject, date, time, weight)
VALUES('$f[1]', '$f[2]', '$f[3]', '$f[4]')";
print "$sql\n";
my $query = $dbh->do($sql);
my $sql = "INSERT INTO subj_weight (subject, weight) VALUES('$f[1]', '$f[2]')";
my $query = $dbh->do($sql);
close(FILE);
}
As has been commented, you close the input file after reading the first data entry, and so only populate your database with a single record.
However there are a few problems with your code you may want to consider:
You should set autoflush on the STDOUT file handle if you are printing diagnostics as the program runs. Otherwise perl won't print the output until either it has a buffer full of text to print or the file handle is closed when the program exits. That means you may not see the messages you have coded until long after the event
You should use Text::CSV to parse CSV data instead of relying on split
You can interpolate variables into a double-quoted string. That avoids the use of several concatenation operators and makes the intention clearer
Your open is near-perfect - an unusual thing - because you correctly use the three-parameter form of open as well as testing whether it succeeded and putting $! in the die string. However you should also always use a lexical file handle as well instead of the old-fashioned global ones
You don't chomp the lines you read from the input, so the last field will have a trailing newline. Using Text::CSV avoids the need for this
You use indices 1 through 4 of the data split from the input record. Perl indices start at zero, so that means you are dropping the first field. Is that correct?
Similarly you are inserting fields 1 and 2, which appear to be subject and date, into fields called subject and weight. It seems unlikely that this can be right
You should prepare your SQL statements, use placeholders, and provide the actual data in an execute call
You seem to diagnose the data read from the file ("No Weight") but insert the data into the database anyway. This may be correct but it seems unlikely
Here is a version of your program that includes these amendments. I hope it is of use to you.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Text::CSV;
use IO::Handle;
STDOUT->autoflush;
# MySQL config variables
my $host = "localhost";
my $user = "someuser";
my $pw = "somepassword";
my $database = "test";
my $dsn = "DBI:mysql:database=$database;host=$host";
my $dbh = DBI->connect($dsn, $user, $pw)
or die "Can't connect to the DB: $DBI::errstr\n";
print "Connected to DB!\n";
my $filename = "/home/jonathan/dep/csv/linux_datetime_test_4.26.13_.csv";
open my $fh, '<', $filename
or die qq{Unable to open "$filename" for input: $!};
my $csv = Text::CSV->new;
$csv->getline($fh) for 1, 2; # Drop header lines
my $insert_datetime_stamp = $dbh->prepare( 'INSERT INTO datetime_stamp (subject, date, time, weight) VALUES(?, ?, ?, ?)' );
my $insert_subj_weight = $dbh->prepare( 'INSERT INTO subj_weight (subject, weight) VALUES(?, ?)' );
while (my $row = $csv->getline($fh)) {
if (length($row->[4]) < 10) {
print qq{Invalid weight: "$row->[4]"\n};
}
else {
#insert the data into the db
print "insert into datetime_stamp\n";
$insert_datetime_stamp->execute(@$row[1..4]);
$insert_subj_weight->execute(@$row[1,4]);
}
}

Autokill select MySQL queries perl script

I use the Perl script below to kill all SELECT queries run by users, but I want to make an exception for the root user, since root's queries are either important or for backup purposes.
Below is my code
#!/usr/bin/perl -w
use DBI;
use strict;
$| = 1;
#------------------------------------------------------------------------------
# !!! Configure check time and timeout. These are both in seconds.
my $check = 15; # check processes every $check seconds
my $slow_time = 60; # stop processes that run for >= $slow_time seconds
# !!! Configure log file - All slow queries also get logged to this file
my $logfile = "./check_mysql_query.log"; # log slow queries to this file
# !!! Configure the database connection parameters
my $db_string = "dbi:mysql:mysql"; # DBI resource to connect to
my $db_user = "root"; # DBI username to connect as
my $db_pass = "password"; # DBI password to connect with
# !!! Configure path to sendmail program
my $sendmail_bin = "/usr/sbin/sendmail";
#
#------------------------------------------------------------------------------
my ($dbh,$sth,$sth2,$thread,$state,$time,$query,$explain);
print "connecting\n";
my $opt = {
'RaiseError'=>0,
'PrintError'=>0
};
$dbh = DBI->connect($db_string,$db_user,$db_pass,$opt);
unless ($dbh) {
print "Error: Unable to connect to database: $DBI::errstr\n";
exit 1;
}
$SIG{'TERM'} = sub {
print "caught sig TERM!\nexiting!\n";
$dbh->disconnect;
exit 1;
};
print "preparing\n";
unless ($sth = $dbh->prepare("show full processlist")) {
print "error preparing query: $DBI::errstr\nexiting!\n";
$dbh->disconnect;
exit 1;
}
print "initialized.. starting loop\n";
while(1) {
unless ($sth->execute) {
print "statement execute failed: ".$sth->errstr."\nexiting!\n";
last;
}
while(my @tmp = $sth->fetchrow) {
$thread = $tmp[0];
$state = $tmp[4];
$time = $tmp[5];
$query = $tmp[7];
if ($state eq "Query" && $query =~ /^SELECT/ && $time >= $slow_time) {
print "killing slow query thread=$thread state=$state time=$time\n";
$dbh->do("kill $thread");
unless (log_query($logfile,$query)) {
print "log_query failed! exiting!\n";
last;
}
}
}
sleep($check);
}
$sth->finish;
$dbh->disconnect;
exit 1;
sub log_query {
my ($file,$query) = @_;
unless (open(O,">>".$file)) {
print "error opening log file '$file': $!\n";
return undef;
}
print O $query."\n-----\n";
close(O);
return 1;
}
Script credit: http://code.google.com/p/mysql-killquery/
I removed most of the script code for simplicity.
I gather from my reading that your second column will be the user. So, assuming root logs into the database as the user 'root', you can probably do this:
while(my @tmp = $sth->fetchrow) {
next if $tmp[1] eq 'root';
...
}
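For reference, here is a minimal sketch of the fetch loop with that check folded in. The index comments assume the usual SHOW FULL PROCESSLIST column order (Id, User, Host, db, Command, Time, State, Info); everything else reuses the variables from the script above.
while ( my @tmp = $sth->fetchrow ) {
    # 0=Id, 1=User, 4=Command, 5=Time, 7=Info (the query text)
    my ( $thread, $user, $command, $time, $query ) = @tmp[ 0, 1, 4, 5, 7 ];

    next if $user eq 'root';        # never kill root's connections (backups etc.)
    next unless defined $query;     # Info is NULL for idle connections

    if ( $command eq 'Query' && $query =~ /^SELECT/ && $time >= $slow_time ) {
        print "killing slow query thread=$thread user=$user time=$time\n";
        $dbh->do("KILL $thread");
        log_query( $logfile, $query ) or last;
    }
}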