I really need your help understanding the following Perl example code:
#!/usr/bin/perl
# Hashtest
use strict;
use DBI;
use DBIx::Log4perl;
use Data::Dumper;
use utf8;
if (my $dbh = DBIx::Log4perl->connect("DBI:mysql:myDB", "myUser", "myPassword", {
    RaiseError        => 1,
    PrintError        => 1,
    AutoCommit        => 0,
    mysql_enable_utf8 => 1
})) {
    my $data = undef;
    my $sql_query = <<EndOfSQL;
SELECT 1
EndOfSQL
    my $out = $dbh->prepare($sql_query);
    $out->execute() or exit(0);
    my $row = $out->fetchrow_hashref();
    $out->finish();
    # Debugging
    print Dumper($row);
    $dbh->disconnect;
    exit(0);
}
1;
If I run this code on two machines, I get different results.
Result on machine 1 (the result I need, with an integer value):
arties@p51s:~$ perl hashTest.pl
Log4perl: Seems like no initialization happened. Forgot to call init()?
$VAR1 = {
'1' => 1
};
Result on machine 2 (the result that causes trouble because of the string value):
arties@core3:~$ perl hashTest.pl
Log4perl: Seems like no initialization happened. Forgot to call init()?
$VAR1 = {
'1' => '1'
};
As you can see, on machine 1 the value from MySQL is interpreted as an integer and on machine 2 as a string.
I need the integer value on both machines. And it is not possible to modify the hash afterwards, because the original code has too many values that would have to be changed...
Both machines use DBI 1.642 and DBIx::Log4perl 0.26.
The only difference is the Perl version: machine 1 (v5.26.1) vs. machine 2 (v5.14.2).
So the big question is: how can I make sure I always get the integer in the resulting hash?
Update 10.10.2019:
To show the problem better, I have extended the above example:
...
use Data::Dumper;
use JSON; # <-- Inserted
use utf8;
...
...
print Dumper($row);
# JSON Output
print JSON::to_json($row)."\n"; # <-- Inserted
$dbh->disconnect;
...
Now the output on machine 1, with the JSON output on the last line:
arties@p51s:~$ perl hashTest.pl
Log4perl: Seems like no initialization happened. Forgot to call init()?
$VAR1 = {
'1' => 1
};
{"1":1}
Now the output on machine 2, with the JSON output on the last line:
arties@core3:~$ perl hashTest.pl
$VAR1 = {
'1' => '1'
};
{"1":"1"}
You can see that both Data::Dumper AND JSON show the same behavior. And as I wrote before, +0 is not an option because the original hash is much more complex.
Both machines use JSON 4.02.
@Nick P: That's it, the solution is in the question you linked, Why does DBI implicitly change integers to strings? The DBD::mysql version was different on both systems! So I upgraded machine 2 from version 4.020 to version 4.050, and now both systems give the same result. And integers are integers ;-)
So the result on both machines is now:
$VAR1 = {
'1' => 1
};
{"1":1}
Thank you!
Related
I am working with the Text::CSV library of Perl to import data from a CSV file, using the functional interface. The data is stored in an array of hashes, and the problem is that when the script tries to access those elements/keys, they are uninitialized (or undefined).
Using Data::Dumper, it is possible to see that the array and the hashes are not empty; in fact, they are correctly filled with the data from the CSV file.
With this small piece of code, I get the following output:
my $array = csv(
    in      => $csv_file,
    headers => 'auto');
foreach my $mem (@{$array}) {
    print Dumper $mem;
    foreach (keys $mem) {
        print $mem{$_};
    }
}
Last part of the output:
$VAR1 = {
'Column' => '16',
'Width' => '13',
'Type' => 'RAM',
'Depth' => '4096'
};
Use of uninitialized value in print at ** line 81.
Use of uninitialized value in print at ** line 81.
Use of uninitialized value in print at ** line 81.
Use of uninitialized value in print at ** line 81.
This happens with all the elements of the array. Is this problem related to the encoding, or am I just accessing the elements in an incorrect way?
$mem is a reference to a hash, but you keep trying to use it directly as a hash. Change your code to:
foreach (keys %$mem) {
    print $mem->{$_};
}
There is a slight complication in that in some versions of perl, 'keys $mem' was allowed directly as an experimental feature, which later got removed. In any case, adding
use warnings;
use strict;
would likely have given you some helpful clues as to what was happening.
When I run your code on my version of Perl (5.24), I get this error:
Experimental keys on scalar is now forbidden at ... line ...
This points to the line:
foreach (keys $mem) {
You should dereference the hash ref:
use warnings;
use strict;
use Data::Dumper;
use Text::CSV qw( csv );
my $csv_file = "data.csv";
my $array = csv(
    in      => $csv_file,
    headers => 'auto');
foreach my $mem (@{$array}) {
    print Dumper($mem);
    foreach (keys %{ $mem }) {
        print $mem->{$_}, "\n";
    }
}
I have pored over this site (and others) trying to glean the answer for this but have been unsuccessful.
use Text::CSV;
my $csv = Text::CSV->new ( { binary => 1, auto_diag => 1 } );
$line = q(data="a=1,b=2",c=3);
my $csvParse = $csv->parse($line);
my @fields = $csv->fields();
for my $field (@fields) {
    print "FIELD ==> $field\n";
}
Here's the output:
# CSV_XS ERROR: 2034 - EIF - Loose unescaped quote # rec 0 pos 6 field 1
FIELD ==>
I am expecting 2 array elements:
data="a=1,b=2"
c=3
What am I missing?
You may get away with using Text::ParseWords. Since you are not using real CSV, it may be fine. Example:
use strict;
use warnings;
use Data::Dumper;
use Text::ParseWords;
my $line = q(data="a=1,b=2",c=3);
my @fields = quotewords(',', 1, $line);
print Dumper \@fields;
This will print
$VAR1 = [
'data="a=1,b=2"',
'c=3'
];
As you requested. You may want to test further on your data.
Your input data isn't "standard" CSV, at least not the kind that Text::CSV expects and not the kind that things like Excel produce. An entire field has to be quoted or not at all. The "standard" encoding of that would be "data=""a=1,b=2""",c=3 (which you can see by asking Text::CSV to print your expected data using say).
If you pass the allow_loose_quotes option to the Text::CSV constructor, it won't error on your input, but it won't consider the quotes to be "protecting" the comma, so you will get three fields, namely data="a=1, b=2" and c=3.
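A minimal sketch of that option, using the input from the question (note that the quotes stay in the field values and the embedded comma still splits the field):
use strict;
use warnings;
use Text::CSV;
my $csv = Text::CSV->new({ binary => 1, allow_loose_quotes => 1, auto_diag => 1 });
my $line = q(data="a=1,b=2",c=3);
$csv->parse($line) or die "parse failed: " . $csv->error_diag;
# Expected three fields: data="a=1 / b=2" / c=3
print "FIELD ==> $_\n" for $csv->fields;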
I'm trying to parse some JSON data from the Fandom wikia API. When I browse to my marvel.fandom.com/api request, I get the following JSON output: {"batchcomplete":"","query":{"pages":{"45910":{"pageid":45910,"ns":0,"title":"Uncanny X-Men Vol 1 171"}}}}
Nothing too fancy to begin with, and running it through an online JSON parser gives the following output:
{
   "batchcomplete":"",
   "query":{
      "pages":{
         "45910":{
            "pageid":45910,
            "ns":0,
            "title":"Uncanny X-Men Vol 1 171"
         }
      }
   }
}
which seems to be ok as far as I can see
I want to get the pageid for several other requests but I can't seem to get the same output through Perl.
The script:
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use JSON;
use Data::Dumper;
my $url = "https://marvel.fandom.com/api.php?action=query&titles=Uncanny%20X-Men%20Vol%201%20171&format=json";
my $json = getprint( $url);
die "Could not get $url!" unless defined $json;
my $decoded_json = decode_json($json);
print Dumper($decoded_json);
but this gives the following error:
Could not get https://marvel.fandom.com/api.php?action=query&titles=Uncanny%20X-Men%20Vol%201%20171&format=json! at ./marvelScraper.pl line 11.
When I change the get to getprint for some extra info, I get this:
500 Can't connect to marvel.fandom.com:443
<URL:https://marvel.fandom.com/api.php?action=query&titles=Uncanny%20X-Men%20Vol%201%20171&format=json>
malformed JSON string, neither tag, array, object, number, string or atom, at character offset 0 (before "(end of string)") at ./script.pl line 13.
I tried this on another computer and still get the same errors.
The versions of LWP::Simple and LWP::Protocol::https:
/usr/bin/perl -MLWP::Simple -E'say $LWP::Simple::VERSION'
6.15
/usr/bin/perl -MLWP::Protocol::https -E'say $LWP::Protocol::https::VERSION'
6.09
Apparently it has something to do with Bash on Ubuntu on Windows, since on Ubuntu 18.04 I get the following response (with the same script):
JSON text must be an object or array (but found number, string, true, false or null, use allow_nonref to allow this) at ./test.pl line 13.
{"batchcomplete":"","query":{"pages":{"45910":{"pageid":45910,"ns":0,"title":"Uncanny X-Men Vol 1 171"}}}}
Actually, the very same script works from my Bash on Ubuntu on Windows with the get() command instead of the getprint() you used after editing your question.
orabig@Windows:~/DEV$ ./so.pl
$VAR1 = {
          'query' => {
                       'pages' => {
                                    '45910' => {
                                                 'pageid' => 45910,
                                                 'ns' => 0,
                                                 'title' => 'Uncanny X-Men Vol 1 171'
                                               }
                                  }
                     },
          'batchcomplete' => ''
        };
So maybe you have another issue that has nothing to do with Perl or Ubuntu.
Can you try this, for example?
curl -v 'https://marvel.fandom.com/api.php?action=query&titles=Uncanny%20X-Men%20Vol%201%20171&format=json'
Maybe you just hit the site too much, and the 500 error is just the result of some anti-leech protection?
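For reference, a minimal sketch of the get()-based variant described above (getprint() prints the body and returns the HTTP status code, while get() returns the body itself, which is what decode_json needs):
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use JSON;
use Data::Dumper;
my $url = "https://marvel.fandom.com/api.php?action=query&titles=Uncanny%20X-Men%20Vol%201%20171&format=json";
my $json = get($url);                     # returns the response body, or undef on failure
die "Could not get $url!" unless defined $json;
my $decoded_json = decode_json($json);
print Dumper($decoded_json->{query}{pages});   # the pageid lives under query -> pages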
I have a MySQL table with the following structure:
alid bigint(20),
ndip varchar(20),
ndregion varchar(20),
occ_num int(3),
Delta_Flag int(1)
After selecting data from the table, I get all the data quoted, as string values.
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;
use FindBin;
use lib $FindBin::Bin;
use Database;
my $pwd = $FindBin::Bin;
my $db = Database->new( 'mysql', "$pwd/config.ini" );
my $db1 = Database->new( 'mysql', "$pwd/config2.ini" );
my @tables = qw( AutoTT_AlarmStatus_Major1 );
for my $table ( @tables ) {
    my $query_select = "SELECT alid, ndip, ndregion, occ_num, Delta_Flag FROM $table LIMIT 1";
    my $result = $db->db_get_results( $query_select );
    print Dumper( $result );
    for my $item ( @{$result} ) {
        # Here I want to prepare, bind and insert this data
        # into another table with the same structure
    }
}
Database.pm
sub db_get_results {
    my $self = shift;
    my $qry  = shift;
    my $sth = $self->{dbh}->prepare( $qry );
    $sth->execute();
    my @return = ();
    while ( my @line = $sth->fetchrow_array ) {
        push @return, \@line;
    }
    return \@return;
}
Output:
$VAR1 = [
[
'1788353',
'10.34.38.12',
'North Central',
'1',
'1'
]
];
Why is DBI implicitly converting all integers to strings?
As @choroba notes in his answer, it's not the DBI that's doing anything with the data. It's just passing through what the driver module (DBD::mysql in your case) returned.
In the General Interface Rules & Caveats section of the DBI docs it says:
Most data is returned to the Perl script as strings. (Null values are returned as undef.) This allows arbitrary precision numeric data to be handled without loss of accuracy. Beware that Perl may not preserve the same accuracy when the string is used as a number.
I wrote that back in the days before it was common to configure perl to support 64-bit integers, and long-double floating point types were unusual. These days I recommend that drivers return values in the most 'natural' Perl type that doesn't risk data loss.
For some drivers that can be tricky to implement, especially those that support returning multiple result sets, with different numbers of columns, from a single handle, as DBD::mysql does.
I skimmed the DBD::mysql docs but didn't see any mention of this topic, so I looked at the relevant code, where I can see that the current DBD::mysql is returning numbers as numbers. There are also lots of references to recent changes in this area in the change log.
Perhaps you're using an old version of DBD::mysql and should upgrade.
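To check which DBD::mysql version is installed on a machine, a one-liner along these lines should do:
perl -MDBD::mysql -E 'say $DBD::mysql::VERSION'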
That's how the DBD driver for MySQL works. Other databases might behave differently. For example, in SQLite, numbers remain numeric:
#!/usr/bin/perl
use warnings;
use strict;
use DBI;
use Data::Dumper;
my $dbh = 'DBI'->connect('dbi:SQLite:dbname=:memory:', q(), q());
$dbh->do('CREATE TABLE t (id INT, val VARCHAR(10))');
my $insert = $dbh->prepare('INSERT INTO t VALUES (?, ?)');
$insert->execute(@$_) for [ 1, 'foo' ], [ 2, 'bar' ];
my $query = $dbh->prepare('SELECT id, val FROM t');
$query->execute;
while (my $row = $query->fetchrow_arrayref) {
    print Dumper($row);
}
__END__
$VAR1 = [
          1,
          'foo'
        ];
$VAR1 = [
          2,
          'bar'
        ];
This is nothing DBI does in general. As has already been pointed out, many database drivers (DBD::xy) in the DBI system convert numbers to strings. AFAIK it's not possible to avoid that.
What you can do is ask the statement handle for the corresponding native type, or (easier in your case) whether the column of your result set is numeric in the MySQL DB or not. Here is an example.
Given this basic database:
mysql> create table test (id INT,text VARCHAR(20));
Query OK, 0 rows affected (0.01 sec)
mysql> INSERT INTO test VALUES (1,'lalala');
Query OK, 1 row affected (0.00 sec)
you can look up whether the column is numeric or not by using the driver-specific attribute 'mysql_is_num':
Reference to an array of boolean values; TRUE indicates, that the respective column contains numeric values.
(from DBD::mysql)
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use DBI;
use DBD::mysql;
use Data::Dumper;

my $dsn = "DBI:mysql:database=test;host=localhost";
my $dbh = DBI->connect($dsn, 'user', 'pass') or die $DBI::errstr;
my $sql = "SELECT * FROM test WHERE id = ?";
my $sth = $dbh->prepare($sql);
$sth->execute(1);

my $num_fields = $sth->{'NUM_OF_FIELDS'};
my $num_mask   = $sth->{'mysql_is_num'};

my $result;
my $cnt = 0;
while (my $line = $sth->fetchrow_arrayref){
    for (my $i = 0; $i < $num_fields; $i++){
        if ($num_mask->[$i]){
            $line->[$i] += 0;    # numify the value in place
        }
    }
    # copy the row: fetchrow_arrayref reuses the same array reference on every call
    $result->[$cnt] = [ @$line ];
    $cnt++;
}
print Dumper($result);
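With the sample table above, the dump should then look roughly like this (the id column numified, the VARCHAR column still a string):
$VAR1 = [
          [
            1,
            'lalala'
          ]
        ];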
I hope this helps. As it was written in a hurry, please excuse the style. Of course I'm open to any suggestions.
I have a working SQL statement:
my $q_it = $dbh->prepare("SELECT customdata.Field_ID,
customdata.Record_ID,
customdata.StringValue
FROM customdata
WHERE customdata.Field_ID='10012' && (StringValue LIKE '1%' OR StringValue LIKE '2%' OR StringValue LIKE '9%');
");
I have written a very simple Perl script for my client to run on their server/db. I could not test it directly, but I passed my code to their DBA:
$q_it->execute();
open (MYFILE, '>>data.txt');
while (my @row = $q_it->fetchrow_array)
{
    print MYFILE $row[0].$row[1].$row[2];
}
close (MYFILE);
$q_it is just a normal SQL SELECT statement. I would assume data.txt would contain many records (rows). However, surprisingly, it returns the results in a single column, with many rows like:
100012
100012
...
100012
315941
315667
...
315633
2011-06
2011-06
...
2011-06
There are just about the correct number of rows for the "100012" values, the values starting with "31", and the date strings. Ideally, it should be:
100012 315941 2011-06
100012 315667 2011-06
100012 315633 2011-06
Could it be something I did wrong in my Perl, or is this because their MySQL database has a different structure?
Thanks for the help!
I would guess that you are looking at a previous attempt to dump the database. To get all values of the first column, followed by all values of the second etc. requires a very different program from the one you have shown.
Don't forget that you are opening the file for append, which will leave any old data at the start of the file. I would have thought an open for write would be appropriate here, as the output from failed attempts is of little value.
I would also check the status of the open using
open MYFILE, '>', 'data.txt' or die $!;
Apart from that, unless you have set $\ to a newline, you need to terminate your printed output with a newline to separate the records. It is also easier to write
print "#row\n";
rather than mention each of the fields explicitly.
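Putting those suggestions together, a minimal sketch of the output loop might look like this (the lexical filehandle and the file name are just for illustration):
open my $fh, '>', 'data.txt' or die "Cannot open data.txt: $!";
while ( my @row = $q_it->fetchrow_array ) {
    print {$fh} "@row\n";    # fields separated by spaces, one record per line
}
close $fh or die $!;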
I really like using something like this:
# Assumes Try::Tiny for try/catch and Carp for confess:
use Try::Tiny;
use Carp;

sub fetch_result_rows {
    my ($dbh, $sql, @bind_params) = @_;
    try {
        return $dbh->selectall_arrayref($sql, { RaiseError => 1, Slice => {} }, @bind_params);
    } catch {
        confess("Unable to run SQL:\n$sql\nBIND PARAMS: @{[ join(', ', @bind_params) ]}");
    };
}
The magic is in Slice, which causes the return to come back as an arrayref of hash refs. To me the performance overhead is worth it because the result looks like:
[
{ column1 => value, column2 => value, column3 => value }, # row1
{ column1 => value, column2 => value, column3 => value }, # row2
...
]
Read up in perldoc DBI for more information
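A hypothetical usage example (the table and column names are made up for illustration):
my $rows = fetch_result_rows($dbh, 'SELECT id, name FROM users WHERE active = ?', 1);
for my $row (@$rows) {
    print "$row->{id}: $row->{name}\n";
}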