I can't get this Perl code to return true integer values for integers in the table. The MySQL table columns are correctly specified as integers, yet the JSON output here wraps all query values in quotes. How can I correctly preserve data-types (esp. integers and boolean values) as specified?
use strict;
use warnings;
use DBI;
use JSON;
# $dbh is an existing DBI connection to the MySQL database (connect code omitted)
my $sql = "SELECT id, name, age FROM table";
my $data = $dbh->selectall_arrayref($sql, {Slice => {}});
my $response = encode_json($data);
print $response;
## outputs: [{"id":"1","name":"Joe Blodge","age":"42"}]
What am I doing wrong here? How can I get this to output the correctly formatted JSON:
[{"id":1,"name":"Joe Blodge","age":42}]
DBD::mysql returns all results as strings (see https://github.com/perl5-dbi/DBD-mysql/issues/253). Normally Perl doesn't care; encoding to JSON is one of the few times the difference matters. You can either use Cpanel::JSON::XS::Type to provide type declarations for your JSON structure:
use Cpanel::JSON::XS;
use Cpanel::JSON::XS::Type;
my $response = encode_json($data, json_type_arrayof({id => JSON_TYPE_INT, name => JSON_TYPE_STRING, age => JSON_TYPE_INT}));
or you can go through and numify the appropriate elements of each row before JSON encoding:
for my $row (@$data) { $row->{$_} += 0 for qw(id age) }
It is also possible to check the type (as reported by MySQL) of each returned column: if you construct and execute the query via a statement handle, the column types are available as an arrayref in $sth->{TYPE}. This is fairly involved, though, and may not be reliable.
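A rough sketch of that approach, assuming DBD::mysql reports the standard DBI type codes (it may not always, which is why the two options above are usually simpler):
use DBI qw(:sql_types);   # imports SQL_INTEGER, SQL_SMALLINT, SQL_BIGINT, SQL_TINYINT, ...

my $sth = $dbh->prepare("SELECT id, name, age FROM table");   # same query as above
$sth->execute;
my $names = $sth->{NAME};   # column names, in select-list order
my $types = $sth->{TYPE};   # DBI SQL type codes, in the same order
my %is_int = map  { ($names->[$_] => 1) }
             grep { $types->[$_] == SQL_INTEGER || $types->[$_] == SQL_SMALLINT
                 || $types->[$_] == SQL_BIGINT  || $types->[$_] == SQL_TINYINT }
             0 .. $#$types;
my $data = $sth->fetchall_arrayref({});   # arrayref of hashrefs
for my $row (@$data) {
    $row->{$_} += 0 for grep { $is_int{$_} } keys %$row;   # numify only the integer columns
}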
I execute a PostgreSQL query against a database using PDO and I get back response strings of the form:
POINT(23.7336253085595 38.0002872112492)
How can I extract the two numbers from such a string and store them in two different variables?
Here is the code I use to send the query:
include 'postgreConnect.php';
$maxGid = 1084;
for ($rowPostGis = 1; $rowPostGis <= $maxGid; $rowPostGis++) {
    $stmt = $dbconn->prepare("SELECT ST_AsText(ST_Transform(geom, 4326)) AS geom FROM part_athens_centroids WHERE gid = :rowPostGis;");
    $stmt->execute(array('rowPostGis' => $rowPostGis));
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        $geom = $row['geom'];
        echo($geom);
    }
}
I would look into http://php.net/manual/en/function.explode.php; this will convert your string into an array of strings, and then you can use http://php.net/manual/en/function.intval.php to convert each piece to an integer (though floatval() is the better fit here, since the coordinates are decimals). You may first need to crop the query result down to just the numbers; for that, use http://php.net/manual/en/function.substr.php.
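For example, a minimal sketch of that approach (the sample value is hard-coded here; in your loop it would be the fetched $geom):
$geom = "POINT(23.7336253085595 38.0002872112492)";
$coords = substr($geom, 6, -1);               // strip the leading "POINT(" and the trailing ")"
list($lonStr, $latStr) = explode(' ', $coords);
$lon = floatval($lonStr);                     // 23.7336253085595
$lat = floatval($latStr);                     // 38.0002872112492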
I need an SQL SELECT statement to retrieve 04:30 and test.zip from this string:
{"TIME":"04:30","DATE":"11\/25\/2013","FILENAME":["test.zip"]}
use this: \[(.*?)\]
it returns the value between [ and ] (here "test.zip", including its quotes)
and for 04:30 use TIME":(.*?),
it returns the quoted value after "TIME":
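If you end up applying those patterns in PHP rather than in SQL, a minimal sketch would be (note that both captures keep the surrounding double quotes, so they are trimmed off afterwards):
$str = '{"TIME":"04:30","DATE":"11/25/2013","FILENAME":["test.zip"]}';
preg_match('/\[(.*?)\]/', $str, $m);
$file = trim($m[1], '"');      // test.zip
preg_match('/TIME":(.*?),/', $str, $m);
$time = trim($m[1], '"');      // 04:30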
Can't you just decode it and use PHP? (assuming you can't change the way it's stored in the db)
<?php
$str = '{"TIME":"04:30","DATE":"11/25/2013","FILENAME":["test.zip"]}';
$o = json_decode($str);
$time = $o->TIME;
$file = $o->FILENAME[0];
var_dump($time); //"04:30"
var_dump($file); //"test.zip"
Regex replaces and the like in MySQL require a UDF (user-defined function), such as mysql-udf-regexp.
If none of the above are viable solutions (change DB structure, do it with PHP, use a MySQL UDF), you'll need to get creative. It would require a known, static format of that string, but you could replace some parts and substring others. For example:
SELECT SUBSTRING(REPLACE(`column_name`,'{"TIME":"',''),1,5) AS `time` FROM `table_name`
The file is more complex; this example assumes only one filename in the array:
SELECT REPLACE(SUBSTRING(`column_name`,LOCATE('"FILENAME":["',`column_name`)+13),'"]}','') AS `file` FROM `table_name`
Those two field selections return 04:30 and test.zip respectively. You can of course use both expressions in the same statement, rather than separately as I have here, by comma-separating them.
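For example, combined into a single statement (same placeholder table and column names as above):
SELECT
  SUBSTRING(REPLACE(`column_name`,'{"TIME":"',''),1,5) AS `time`,
  REPLACE(SUBSTRING(`column_name`,LOCATE('"FILENAME":["',`column_name`)+13),'"]}','') AS `file`
FROM `table_name`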
I have an array
my @cols = ("accountid", "balance");
and a dataset
my $rowsref=$dbh->selectall_arrayref($_[0]);
foreach my $row (@$rowsref) {
    print join(", ", map {defined $_ ? $_ : "(null)"} @$row), "\n";
}
which prints "1, 150".
I would like to get a JSON output like [{"accountid": 1, "balance": 150},{..}].
I have the JSON module loaded, but I'm unsure how to merge @cols with each $row.
Firstly, pass a Slice => {} attribute to selectall_arrayref, so that each row already comes back as a hashref keyed by the column names.
Then use one of the JSON encoding modules to encode each row.
(making the same assumptions as your code....)
Using the list of hashrefs from selectall_arrayref() as-is:
use JSON::XS;
use 5.10.0;
my $rowsref = $dbh->selectall_arrayref($_[0], { Slice => {} });
print JSON::XS::encode_json($rowsref),"\n";
Performing translation on column names from selectall_arrayref():
If the column names from the database aren't the same as your column names, then you'll need a mapping:
use JSON::XS;
use 5.10.0;
my $trans = { account => 'accountid', amount => 'balance' };
my $rowsref = $dbh->selectall_arrayref($_[0], { Slice => {} });
my $output = [];
for my $row (@$rowsref) {
    push @$output, {
        map {
            my $colname = exists($trans->{$_}) ? $trans->{$_} : $_;
            my $value   = $row->{$_};
            $colname => $value;
        } keys %$row
    };
}
print JSON::XS::encode_json($output),"\n";
For each $row of the resultset above, keys %$row gives back the column names in that row as returned from the database.
The map operation takes each of those column names and produces two scalar values: (1) $colname is either the original database column name or (if it's found in the $trans hashref) a 'translation' of that name; (2) $value is the value returned by the database for this column in this particular $row. $colname => $value returns both $colname and $value from the map as a 'flattened' pair of scalar values, so the map returns a list twice as long as the original list of column names from keys %$row.
Finally, the push @$output, { ... } creates an anonymous hash reference from that list of scalar values in key, value, key, value, ... order and adds it to the end of the $output array reference.
Blind translation from selectall_arrayref()
If (for some reason) you have a pathological aversion to querying hashrefs from your database, I guess you could do:
use JSON::XS;
use 5.10.0;
my @cols = ("accountid", "balance");
my $rowsref = $dbh->selectall_arrayref($_[0]);
my $output = [];
for my $row (@$rowsref) {
    my %row = map { $cols[$_] => $row->[$_] } (0 .. $#cols);
    push @$output, \%row;
}
print JSON::XS::encode_json($output),"\n";
That assumes the query returns exactly the columns listed in @cols, in that order, though.
"Defined or" operator:
By the way... assuming a late enough perl, you can replace this sort of thing:
defined $_ ? $_ : "(null)"
with this:
$_ // "(null)"
Your code editor (e.g. Vim) might not syntax-highlight it correctly if it's not up to date with Perl (e.g. it might treat it as an m// construct).
Note that PostgreSQL can also generate JSON itself. If that is an option for you, then the Perl JSON module is redundant.
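For example, a sketch assuming PostgreSQL 9.3 or later and a hypothetical accounts table holding those two columns:
-- returns a single JSON value like [{"accountid":1,"balance":150}, ...]
SELECT json_agg(t) FROM (SELECT accountid, balance FROM accounts) AS t;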
I'm looking for help understanding why decode_json returns a scalar instead of a hash. I'm still learning Perl, so a description or some reference literature would be great.
So the questions:
Why does decode_json return a scalar? (Or is it not a scalar?)
Is there a better way for me to work with the data?
Here is my code:
use strict;
use warnings;
use JSON qw(decode_json);
use LWP::UserAgent;
my $url = "http://api.bf4stats.com/api/playerInfo?plat=xbox&name=Ttylz00&output=json";
my $ua = LWP::UserAgent->new;
my $data = $ua->get($url);
my $json;
if($data->is_success){
$json = decode_json($data->decoded_content);
}
&sData($json);
sub sData {
my $data = $_[0];
my $kdr = int($data->{stats}->{extra}->{kdr}*100)/100;
printf "\nName: %s\nRank: %s, %s\nKDR: %s\n", $data->{player}->{name},
$data->{player}->{rank}->{nr}, $data->{player}->{rank}->{name},
$kdr;
}
In Perl, a function can only really return a scalar or a list.
Since hashes can be initialized or assigned from lists (e.g. %foo = (a => 1, b => 2)), I guess you're asking why decode_json returns something like { a => 1, b => 2 } (a reference to an anonymous hash) rather than (a => 1, b => 2) (a list that can be copied into a hash).
I can think of a few good reasons for this:
in Perl, an array or hash always contains scalars. So in something like { "a": { "b": 3 } }, the { "b": 3 } part has to be a scalar; and for consistency, it makes sense for the whole thing to be a scalar in the same way.
if the hash is quite large (many keys at top-level), it's pointless and expensive to iterate over all the elements to convert it into a list, and then build a new hash from that list.
in JSON, the top-level element can be either an object (= Perl hash) or an array (= Perl array). If decode_json returned a list in the former case, it's not clear what it would return in the latter case. After decoding the JSON string, how could you examine the result to know what to do with it? (And it wouldn't be safe to write %foo = decode_json(...) unless you already knew that you had a hash.) So decode_json's behavior works better for general-purpose library code that has to handle data without already knowing much about it.
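A minimal sketch of that last point, using made-up JSON strings: ref() tells you which kind of reference decode_json handed back.
use JSON qw(decode_json);

my $obj = decode_json('{"a":{"b":3}}');   # top-level JSON object -> hash reference
my $arr = decode_json('[1,2,3]');         # top-level JSON array  -> array reference

print ref($obj), "\n";      # HASH
print ref($arr), "\n";      # ARRAY
print $obj->{a}{b}, "\n";   # 3
print $arr->[0], "\n";      # 1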
I need to generate an XML file from database records, and I get the error "out of memory". Here's the script I am using; I found it on Google, but it's not quite what I need, and it also exhausts the server's allocated memory. It's a start, though.
#!/usr/bin/perl
use warnings;
use strict;
use XML::Simple;
use DBI;
my $dbh = DBI->connect('DBI:mysql:db_name;host=host_address','db_user','db_pass')
or die DBI->errstr;
# Get an array of hashes
my $recs = $dbh->selectall_arrayref('SELECT * FROM my_table',{ Columns => {} });
# Convert to XML where each hash element becomes an XML element
my $xml = XMLout( {record => $recs}, NoAttr => 1 );
print $xml;
$dbh->disconnect;
This script only prints the records because I tested it with a WHERE clause for a single row id.
First of all, I couldn't manage to make it save the output to a file.xml.
Second, I need somehow to split the job into multiple smaller jobs and then put the XML file together in one piece.
I have no idea how to achieve either of these.
Constraint: No access to change server settings.
These are the problem lines:
my $recs = $dbh->selectall_arrayref('SELECT * FROM my_table',{ Columns => {} });
This reads the whole table into memory, representing every single row as an array of values.
my $xml = XMLout( {record => $recs}, NoAttr => 1 );
This is probably an even larger structure: it is the whole XML document built as a single string in one go.
The lowest memory-use solution needs to involve loading the table one item at a time, and printing that item out immediately. In DBI, it is possible to make a query so that you fetch one row at a time in a loop.
You will need to play with this before the result looks like your intended output (I haven't tried to match your XML::Simple output; I'm leaving that to you):
print "<records>\n";
my $sth = $dbh->prepare('SELECT * FROM my_table');
$sth->execute;
while ( my $row = $sth->fetchrow_arrayref ) {
# Convert db row to XML row
print XMLout( {row => $row}, NoAttr => 1 ),"\n";
}
print "</records>\n";
Perl can use open( my $fh, $mode, $filename ) to open a file and print $fh $string to write to it, or you could simply redirect your script's output to a file, e.g. perl myscript.pl > table.xml
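A minimal sketch of the file-handle route, assuming you want the output in a file called table.xml:
open( my $out, '>', 'table.xml' ) or die "Cannot open table.xml: $!";
print {$out} "<records>\n";
# ... inside the fetch loop, print each row's XML to $out instead of STDOUT ...
print {$out} "</records>\n";
close $out or die "Cannot close table.xml: $!";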
It's the SELECT * with no constraints that will be killing your memory. Add some constraint to your query, e.g. a date or id range, and use a loop to execute the query and produce your output in chunks. That way you won't need to load the whole table into memory before you get started on the output.
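A sketch of that chunked approach, assuming my_table has an integer primary key named id (the column name and chunk size are illustrative only):
use XML::Simple;

my $chunk = 1000;
my ($min, $max) = $dbh->selectrow_array('SELECT MIN(id), MAX(id) FROM my_table');
print "<records>\n";
for ( my $start = $min; $start <= $max; $start += $chunk ) {
    my $rows = $dbh->selectall_arrayref(
        'SELECT * FROM my_table WHERE id >= ? AND id < ?',
        { Slice => {} }, $start, $start + $chunk,
    );
    # RootName => undef suppresses XML::Simple's wrapper element, so the
    # manually printed <records> tags enclose every chunk
    print XMLout( { record => $rows }, NoAttr => 1, RootName => undef ) if @$rows;
}
print "</records>\n";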