I need to store the data fetched from a MySQL database, so I used this code:
while (@row = $statement->fetchrow_array)
{
    print "@row\n"; # ------ printing data
}
foreach $key (@row)
{
    print $key."\n"; # ------ empty data
}
In the foreach loop the @row data is empty. How do I solve this?
UPDATE: It actually should be like this:
while (my @row = $statement->fetchrow_array) {
    # print "@row\n";
    foreach my $key (@row) {
        my $query = "ALTER TABLE tablename DROP FOREIGN KEY $key;";
        $statement = $connection->prepare($query);
        $statement->execute()
            or die "SQL Error: $DBI::errstr\n";
    }
}
Well, it should be like this:
while (my @row = $statement->fetchrow_array) {
    foreach my $key (@row) {
        print $key."\n";
    }
}
Otherwise the whole result set will be consumed by the first loop.
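A database-free sketch of that consumption, with a toy iterator standing in for the statement handle:

```perl
use strict;
use warnings;

# Toy stand-in for $statement->fetchrow_array: a cursor can only be
# walked once, so after the first while loop drains it, a second loop
# sees nothing. (Illustration only; no real database involved.)
my @pending  = ( [ 1, 'a' ], [ 2, 'b' ] );
my $fetchrow = sub { @{ shift(@pending) // [] } };

my $first = 0;
while ( my @row = $fetchrow->() ) { $first++ }    # consumes both rows
my $second = 0;
while ( my @row = $fetchrow->() ) { $second++ }   # cursor is exhausted
```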
As a sidenote, fetchrow_array returns a row as an array of field values, not keys. To get the keys as well, you should use fetchrow_hashref:
while (my $row = $statement->fetchrow_hashref) {
    while (my ($key, $value) = each %$row) {
        print "$key => $value\n";
    }
}
UPDATE: from your comments it looks like you actually need an array of column names. Here's one way to do it:
my $row = $statement->fetchrow_hashref;
$statement->finish;
foreach my $key (keys %$row) {
    # ... dropping $key from the db as in your original example
}
But in fact, there are more convenient ways of doing what you want. First, there's a single method for preparing and extracting a single row: selectrow_hashref. Second, if you just want foreign key information, why not use the specific DBI method for that, foreign_key_info? For example:
my $sth = $dbh->foreign_key_info( undef, undef, undef,
                                  undef, undef, $table_name );
# fetchall_hashref needs a key column; FK_NAME is one of the
# columns foreign_key_info returns
my $rows = $sth->fetchall_hashref('FK_NAME');
while (my ($k, $v) = each %$rows) {
    # process $k/$v
}
The following code checks for duplicates in a CSV file where the TO column is "USD". I need your help to figure out how to compare the resulting duplicate values: if the duplicates have the same value, as in the case below, Perl should not give any warning. The Perl file name is Source; just change the directory and run it.
#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;

my %data;
my %dupes;
my @rows;

my $csv = Text::CSV->new()
    or die "Cannot use CSV: " . Text::CSV->error_diag();
open my $fh, "<", 'D:\Longview\ENCDEVD740\DataServers\ENCDEVD740\lvaf\inbound\data\enc_meroll_fxrate_soa_load.csv'
    or die "Cannot open CSV: $!";
while ( my $row = $csv->getline( $fh ) ) {
    # insert row into row list
    push @rows, $row;
    # join the unique keys with the
    # perl 'multidimensional array emulation'
    # subscript character
    my $key = join( $;, @{$row}[0,1] );
    # if it was just one field, just use
    # my $key = $row->[$keyfieldindex];
    # if you were checking for full line duplicates (header lines):
    # my $key = join($;, @$row);
    # if %data has an entry for the record, add it to dupes
    #print "@{$row}\n ";
    if (exists $data{$key}) { # duplicate
        # if it isn't already duplicated
        # add this row and the original
        if (not exists $dupes{$key}) {
            push @{$dupes{$key}}, $data{$key};
        }
        # add the duplicate row
        push @{$dupes{$key}}, $row;
    } else {
        $data{ $key } = $row;
    }
}
$csv->eof or $csv->error_diag();
close $fh;

# print out duplicates:
warn "Duplicate Values:\n";
warn "-----------------\n";
foreach my $key (keys %dupes) {
    my @keys = split(/$;/, $key);
    if (($keys[1] ne 'USD') or ($keys[0] eq 'FROMCURRENCY')) {
        #print "Rejecting record since duplicate records are for out-of-scope currencies\n";
        #print "\$keys[0] = $keys[0]\n";
        #print "\$keys[1] = $keys[1]\n";
        next;
    }
    else {
        print "Key: @keys\n";
        foreach my $dupe (@{$dupes{$key}}) {
            print "\tData: @$dupe\n";
        }
    }
}
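The `$;` "multidimensional hash emulation" trick the loop relies on can be seen in isolation. Perl translates a multi-key subscript like `$h{'AED','USD'}` into `$h{join($;, 'AED','USD')}`, so joining fields with `$;` builds the same composite key:

```perl
use strict;
use warnings;

# Join two fields with $; (SUBSEP) to form one composite hash key,
# and split on it to recover the original fields.
my @fields = ('AED', 'USD');
my $key    = join( $;, @fields );

my %seen;
$seen{'AED','USD'} = 1;            # multi-key subscript...
my $same = exists $seen{$key};     # ...hits the same entry as the joined key
my @back = split( /$;/, $key );    # split recovers the original fields
```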
Sample data:
FROMCURRENCY,TOCURRENCY,RATE
AED,USD,0.272257011
ANG,USD,0.557584544
ARS,USD,0.01421147
AUD,USD,0.68635
AED,USD,0.272257011
ANG,USD,0.557584544
ARS,USD,0.01421147
Different Values for duplicates
As @Håkon wrote, it seems all your duplicates are in fact the same rate, so they should not be considered duplicates. However, you could store the rate in a hash keyed by the from and to currencies. That way you don't need to check for duplicates on every iteration and can rely on the uniqueness of hash keys.
It's great that you use a proper CSV parser, but here's an example using a single hash to keep track of duplicates by just splitting on ",", since the data seems reliable.
#!/usr/bin/env perl
use warnings;
use strict;

my $result = {};
my $format = "%-4s | %-4s | %s\n";

while ( my $line = <DATA> ) {
    chomp $line;
    my ( $from, $to, $rate ) = split( /,/, $line );
    $result->{$from}->{$to}->{$rate} = 1;
}

printf( $format, "FROM", "TO", "RATES" );
printf( "%s\n", "-" x 40 );

foreach my $from ( keys %$result ) {
    foreach my $to ( keys %{ $result->{$from} } ) {
        my @rates = keys %{ $result->{$from}->{$to} };
        next if @rates < 2;
        printf( $format, $from, $to, join( ", ", @rates ) );
    }
}
__DATA__
AED,USD,0.272257011
ANG,USD,0.557584545
ANG,USD,1.557584545
ARS,USD,0.01421147
ARS,USD,0.01421147
ARS,USD,0.01421147
AUD,USD,0.68635
AUD,USD,1.68635
AUD,USD,2.68635
I changed the test data to contain duplicates with the same rate and with different rates, and the result prints:
FROM | TO | RATES
----------------------------------------
ANG | USD | 1.557584545, 0.557584545
AUD | USD | 1.68635, 0.68635, 2.68635
I'm having trouble processing the returned results from a DB SQL Mapper into a recognizable json encoded array.
function apiCheckSupplyId() {
    /* refer to the model Xrefs */
    $supply_id = $this->f3->get('GET.supply_id');
    $xref = new Xrefs($this->tongpodb);
    $supply = $xref->getBySupplyId( $supply_id );
    if ( count( $supply ) == 0 ) {
        $this->logger->write('no xref found for supply_id=' . $supply_id);
        $supply = array( array('id' => 0) );
        echo json_encode( $supply );
    } else {
        $json = array();
        foreach ($supply as $row) {
            $item = array();
            foreach ($row as $key => $value) {
                $item[$key] = $value;
            }
            array_push($json, $item);
        }
        $this->logger->write('xref found for supply_id=' . $supply_id . json_encode( $json ));
        echo json_encode( $json );
    }
}
This is the method I am using but it seems very clunky to me. Is there a better way?
Assuming the getBySupplyId returns an array of Xref mappers, you could simplify the whole thing like this:
function apiCheckSupplyId() {
    $supply_id = $this->f3->get('GET.supply_id');
    $xref = new Xrefs($this->tongpodb);
    $xrefs = $xref->getBySupplyId($supply_id);
    echo json_encode(array_map([$xref, 'cast'], $xrefs));
    $this->logger->write(sprintf('%d xrefs found for supply_id=%d', count($xrefs), $supply_id));
}
Explanation:
The $xrefs variable contains an array of mappers. Each mapper being an object, you have to cast it to an array before encoding it to JSON. This can be done in one line by mapping the $xref->cast() method to each record: array_map([$xref,'cast'],$xrefs).
If you're not confident with that syntax, you can loop through each record and cast it:
$cast = [];
foreach ($xrefs as $x)
    $cast[] = $x->cast();
echo json_encode($cast);
The result is the same.
The advantage of using cast() over just reading each value (as you're doing in your original script) is that it includes virtual fields as well.
I'm trying to write a program to fetch a big MySQL table, rename some fields and write it to JSON. Here is what I have for now:
use strict;
use JSON;
use DBI;
# here goes some statement preparations and db initialization
my $rowcache;
my $max_rows = 1000;
my $LIMIT_PER_FILE = 100000;
while ( my $res = shift( @$rowcache )
        || shift( @{ $rowcache = $sth->fetchall_arrayref( undef, $max_rows ) } ) ) {
    if ( $cnt % $LIMIT_PER_FILE == 0 ) {
        if ( $f ) {
            print "CLOSE $fname\n";
            close $f;
        }
        $filenum++;
        $fname = "$BASEDIR/export-$filenum.json";
        print "OPEN $fname\n";
        open $f, ">", $fname or die "Cannot open $fname: $!";
    }
    $res->{some_field} = $res->{another_field};
    delete $res->{another_field};
    print $f $json->encode( $res ) . "\n";
    $cnt++;
}
I used the database row caching technique from
Speeding up the DBI
and everything seems good.
The only problem I have for now is that on $res->{some_field} = $res->{another_field}, the interpreter complains that $res is not a HASH reference.
Could anybody please point me to my mistake?
If you want fetchall_arrayref to return an array of hashrefs, the first parameter should be a hashref. Otherwise an array of arrayrefs is returned, resulting in the "Not a HASH reference" error. So, to get full rows as hashrefs, simply pass an empty hashref:
$rowcache = $sth->fetchall_arrayref({}, $max_rows);
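As a sketch of how the corrected rows then behave, here is the rename step on stand-in data (an arrayref of hashrefs, the shape fetchall_arrayref({}) returns; the field names are illustrative, not from a real table):

```perl
use strict;
use warnings;

# Stand-in for what $sth->fetchall_arrayref({}, $max_rows) returns:
# an arrayref of hashrefs, one per row.
my $rowcache = [
    { another_field => 'a', id => 1 },
    { another_field => 'b', id => 2 },
];

for my $res (@$rowcache) {
    # delete returns the removed value, so the rename is a single step
    $res->{some_field} = delete $res->{another_field};
}
```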
I'm trying to add an argument to the end of a command line, run that search through a MySQL database, and then list the results or say that nothing was found. I'm trying to do it by saving the query data as both arrays and hashes (these are exercises; I'm extremely new at Perl and scripting, and trying to learn). However, I can't figure out how to do the same thing with a hash. I want the SQL query to complete and then write the output to a hash, so as not to rely on the while loop. Any guidance would be appreciated.
#!/usr/bin/perl -w
use warnings;
use DBI;
use Getopt::Std;

&function1;
&function2;
if ($arrayvalue != 0) {
    print "No values found for '$search'" . "\n";
}

sub function1 {
    getopt('s');
    $dbh = DBI->connect("dbi:mysql:dbname=database", "root", "password")
        or die $DBI::errstr;
    $search = $opt_s;
    $sql = $dbh->selectall_arrayref("SELECT Player from Players_Sport where Sport like '$search'")
        or die $DBI::errstr;
    @array = map { $_->[0] } @$sql;
    $dbh->disconnect
        or warn "Disconnection failed: $DBI::errstr\n";
}

sub function2 {
    print join("\n", @array, "\n");
    if (scalar(@array) == 0) {
        $arrayvalue = -1;
    }
    else {
        $arrayvalue = 0;
    }
}
Please see and read the DBI documentation on selectall_hashref. It returns a reference to a hash of hash references, keyed by the values of $key_field.
Use Syntax:
$dbh->selectall_hashref($statement, $key_field [, \%attr] [, @bind_values])
So here is an example of what/how it would be returned:
my $dbh = DBI->connect($dsn, $user, $pw) or die $DBI::errstr;
my $href = $dbh->selectall_hashref(q/SELECT col1, col2, col3
FROM table/, q/col1/);
Your returned structure would look like:
{
    value1 => {
        col1 => 'value1',
        col2 => 'value2',
        col3 => 'value3'
    }
}
So you could do something as follows for accessing your hash references:
my $href = $dbh->selectall_hashref( q/SELECT Player FROM
Players_Sport/, q/Player/ );
# $_ is the value of Player
print "$_\n" for (keys %$href);
You can access each hash record individually by simply doing as so:
$href->{$_}->{Player}
Cribbing from the documentation (the key field must be a column in the result set, so 'Player' here, and the \%attr slot comes before any bind values):
$sql = $dbh->selectall_hashref("SELECT Player from Players_Sport where Sport like ?", 'Player', undef, $sport_like_value);
my %hash_of_sql = %{$sql};
Disclaimer: first time I've used DBI.
I have a MySQL table with a lot of indexed fields (f1, f2, f3, etc) that are used to generate WHERE clauses by long-running processes that iterate over chunks of the database performing various cleaning and testing operations.
The current version of this code works something like this:
sub get_list_of_ids {
    my ($value1, $value2, $value3...) = @_;
    my $stmt = 'SELECT * FROM files WHERE 1';
    my @args;
    if (defined($value1)) {
        $stmt .= ' AND f1 = ?';
        push(@args, $value1);
    }
    # Repeat for all the different fields and values
    my $select_sth = $dbh->prepare($stmt) or die $dbh->errstr;
    $select_sth->execute(@args) or die $select_sth->errstr;
    my @result;
    while (my $array = $select_sth->fetch) {
        push(@result, $$array[0]);
    }
    return \@result;
}
sub function_A {
    my ($value1, $value2, $value3...) = @_;
    my $id_aref = get_list_of_ids($value1, $value2, $value3...);
    foreach my $id (@$id_aref) {
        # Do something with $id
        # And something else with $id
    }
}
sub function_B {
    my ($value1, $value2, $value3...) = @_;
    my $id_aref = get_list_of_ids($value1, $value2, $value3...);
    foreach my $id (@$id_aref) {
        # Do something different with $id
        # Maybe even delete the row
    }
}
Anyway, I'm about to dump an awful lot more rows into the database, and am well aware that the code above won't scale up. I can think of several ways to fix it based on other languages. What is the best way to handle it in Perl?
Key points to note are that the logic in get_list_of_ids() is too long to replicate in each function; and that the operations on the selected rows are very varied.
Thanks in advance.
I presume by "scale up" you mean in maintenance terms rather than performance.
The key change to your code is to pass in your arguments as column/value pairs rather than a list of values with an assumed set of columns. This will allow your code to handle any new columns you might add.
$dbh->selectcol_arrayref is both convenient and a bit faster, being written in C.
If you turn on RaiseError in your connect call, DBI will throw an exception on errors rather than your having to write or die ... all the time. You should do that.
Finally, since we're writing SQL from possibly untrusted user input, I've taken care to escape the column name.
sub get_ids {
    my %search = @_;
    my $sql = 'SELECT id FROM files';
    if( keys %search ) {
        $sql .= " WHERE ";
        $sql .= join " AND ", map { "$_ = ?" }
                              map { $dbh->quote_identifier($_) }
                              keys %search;
    }
    return $dbh->selectcol_arrayref($sql, undef, values %search);
}

my $ids = get_ids( foo => 42, bar => 23 );
If you expect get_ids to return a huge list, too much to keep in memory, then instead of pulling out the whole array and storing it in memory you can return the statement handle and iterate with that.
sub get_ids {
    my %search = @_;
    my $sql = 'SELECT id FROM files';
    if( keys %search ) {
        $sql .= " WHERE ";
        $sql .= join " AND ", map { "$_ = ?" }
                              map { $dbh->quote_identifier($_) }
                              keys %search;
    }
    my $sth = $dbh->prepare($sql);
    $sth->execute(values %search);
    return $sth;
}

my $sth = get_ids( foo => 42, bar => 23 );
while( my ($id) = $sth->fetchrow_array ) {
    ...
}
You can combine both approaches by returning a list of IDs in array context, or a statement handle in scalar.
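A minimal, database-free sketch of the wantarray dispatch that makes this dual interface possible:

```perl
use strict;
use warnings;

# Return a plain list in list context, a reference (standing in for a
# statement handle) in scalar context, just as get_ids does above.
sub results {
    my @data = (1, 2, 3);
    return wantarray ? @data : \@data;
}

my @list   = results();   # list context: gets the values themselves
my $scalar = results();   # scalar context: gets a single reference
```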
sub get_ids {
    my %search = @_;
    my $sql = 'SELECT id FROM files';
    if( keys %search ) {
        $sql .= " WHERE ";
        $sql .= join " AND ", map { "$_ = ?" }
                              map { $dbh->quote_identifier($_) }
                              keys %search;
    }
    # Convenient for small lists.
    if( wantarray ) {
        my $ids = $dbh->selectcol_arrayref($sql, undef, values %search);
        return @$ids;
    }
    # Efficient for large ones.
    else {
        my $sth = $dbh->prepare($sql);
        $sth->execute(values %search);
        return $sth;
    }
}

my $sth = get_ids( foo => 42, bar => 23 );
while( my ($id) = $sth->fetchrow_array ) {
    ...
}

my @ids = get_ids( baz => 99 );
Eventually you will want to stop hand-coding SQL and use an Object Relational Mapper (ORM) such as DBIx::Class. One of the major advantages of an ORM is that it is very flexible and can do the above for you. DBIx::Class can return a simple list of results or a very powerful iterator. The iterator is lazy; it will not perform the query until you start fetching rows, allowing you to refine the query as needed without complicating your fetch routine.
my $ids = get_ids( foo => 23, bar => 42 );
$ids->rows(20)->all; # equivalent to adding LIMIT 20