perl: Creating PDF Avery labels on Linux?

What would be the best and most flexible way to create formatted PDFs of Avery labels on a Linux machine with Perl?
The labels need to include images and will have formatting similar to spanned rows and columns in an HTML table. An example would be several rows of text on the left-hand side and an image on the right-hand side that spans the text.
These are my thoughts but if you have additional ideas please let me know.
perl to PDF with PDF::API2
perl to PS with ??? -> PS to PDF with ???
perl to HTML w/ CSS formatting -> HTML to PDF with wkhtmltopdf
Has anybody done this, and do you have any pointers, examples or links that may be of assistance?
Thank You,
~Donavon

They are all viable options.
I found wkhtmltopdf to be too resource intensive and slow. If you do want to go down that route, there are existing HTML label templates that can be found with a quick Google search.
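If you did go the wkhtmltopdf route, the conversion step itself is just an external call; a minimal sketch (the file names are placeholders):

# Render a prepared HTML label sheet to PDF with the wkhtmltopdf CLI
system('wkhtmltopdf', 'labels.html', 'labels.pdf') == 0
    or die "wkhtmltopdf failed: $?";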
PDF::API2 performs very well, and I run it on a server system with no problems. Here's an example script I use for laying out elements in a grid format:
#!/usr/bin/env perl
use strict 'vars';
use FindBin;
use PDF::API2;

# Min usage, expects blank.pdf to exist in same location
render_grid_pdf(
    labels        => ['One', 'Two', 'Three', 'Four'],
    cell_width    => 200,
    cell_height   => 50,
    no_of_columns => 2,
    no_of_rows    => 2,
);

# Advanced usage
render_grid_pdf(
    labels        => ['One', 'Two', 'Three', 'Four'],
    cell_width    => 200,
    cell_height   => 50,
    no_of_columns => 2,
    no_of_rows    => 2,
    font_name     => "Helvetica-Bold",
    font_size     => 12,
    template      => "blank.pdf",
    save_as       => "my_labels.pdf",
    # Manually set coordinates to start printing
    page_offset_x => 20,   # Acts as a left margin
    page_offset_y => 600,
);

sub render_grid_pdf {
    my %args = @_;

    # Print data
    my $labels = $args{labels} || die "Labels required";

    # Template, outfile and labels
    my $template = $args{template} || "$FindBin::Bin/blank.pdf";
    my $save_as  = $args{save_as}  || "$FindBin::Bin/out.pdf";

    # Layout properties
    my $no_of_columns = $args{no_of_columns} || die "Number of columns required";
    my $no_of_rows    = $args{no_of_rows}    || die "Number of rows required";
    my $cell_width    = $args{cell_width}    || die "Cell width required";
    my $cell_height   = $args{cell_height}   || die "Cell height required";
    my $font_name     = $args{font_name}     || "Helvetica-Bold";
    my $font_size     = $args{font_size}     || 12;

    # Note: PDF::API2 uses Cartesian coordinates, 0,0 being
    # bottom-left. These offsets are used to set the print
    # reference to top-left to make things easier to manage
    my $page_offset_x = $args{page_offset_x} || 0;
    my $page_offset_y = $args{page_offset_y} || $no_of_rows * $cell_height;

    # Open an existing PDF file as a template
    my $pdf = PDF::API2->open("$template");

    # Add a built-in font to the PDF
    my $font = $pdf->corefont($font_name);
    my $page = $pdf->openpage(1);

    # Add some text to the page
    my $text = $page->text();
    $text->font($font, $font_size);

    # Print out labels
    my $current_label = 0;
    OUTERLOOP: for (my $row = 0; $row < $no_of_rows; $row++) {
        for (my $column = 0; $column < $no_of_columns; $column++) {

            # Calculate label x, y positions
            my $label_y = $page_offset_y - $row * $cell_height;
            my $label_x = $page_offset_x + $column * $cell_width;

            # Print label
            $text->translate( $label_x, $label_y );
            $text->text( $labels->[$current_label] );

            # Increment labels index
            $current_label++;

            # Exit condition
            if ( $current_label >= scalar @{$labels} ) {
                last OUTERLOOP;
            }
        }
    }

    # Save the PDF
    $pdf->saveas($save_as);
}
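The question also mentions an image on the right-hand side that spans the rows of text. As a rough, untested sketch of how that could be added inside the label loop above (the file name photo.jpg, the image width and the exact coordinates are assumptions and would need tuning for a real Avery layout):

# Place a JPEG against the right edge of the current label cell
my $gfx = $page->gfx();
my $img = $pdf->image_jpeg("$FindBin::Bin/photo.jpg");
my $img_width = 60;
$gfx->image(
    $img,
    $label_x + $cell_width - $img_width,    # right edge of the cell
    $label_y - $cell_height + $font_size,   # roughly the bottom of the cell
    $img_width,
    $cell_height,                           # span the full cell height
);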

Great you have found an answer you like.
Another option, which may or may not have suited you, would be to prepare the sheet that will be printed as labels as an OpenOffice/LibreOffice document, with pictures, layout, non-variant text and so on (and you can do all your test runs through OpenOffice/LibreOffice).
Then:
use OpenOffice::OODoc;
then: read your data from a database
then:
my $document = odfDocument( file => "$outputFilename",
create => "text",
template_path => $myTemplateDir );
then:
for (my $r = 0; $r < $NumOfTableRows; $r++ ) {
    for (my $c = 0; $c < $NumOfTableCols; $c++) {
        :
        $document->cellValue($theTableName, $r, $c, $someText);
        # test: was it written properly?
        my $writtenTest = $document->cellValue($theTableName, $r, $c);
        chomp $writtenTest;
        if ($someText ne $writtenTest) {
            :
        }
    }
}
then:
$document->save($outputFilename );
# save (convert to) a pdf
# -f format;
# -n no start new listener; use existing
# -T timeout to connect to its *OWN* listener (only);
# -e exportFilterOptions
`unoconv -f pdf -n -T 60 -e PageRange=1-2 $outputFilename `;
# I remove the open/libreoffice doc, else clutter and confusion
`rm $outputFilename `;
As a quick overview of the practical issues:
name the layout tables
place your nice, correct OpenOffice/LibreOffice doc in, say, "/usr/local/lib/site_perl/myNewTemplateDir/". You will need this path (I think it is the default, but I pass it anyway as $myTemplateDir).
gotcha: the module's routines that wait for OpenOffice/LibreOffice to start (for the unoconv converter to start) do NOT work - unoconv will still not work after they say it will. I create dummy PDFs until one actually works - it exists and has a non-zero size.
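A minimal sketch of that dummy-run loop, assuming the generated PDF shares the basename of $outputFilename (the file names, number of attempts and sleep interval here are arbitrary):

(my $pdfFilename = $outputFilename) =~ s/\.odt$/.pdf/;
for my $attempt (1 .. 10) {
    `unoconv -f pdf -n -T 60 -e PageRange=1-2 $outputFilename`;
    # accept the result only if the PDF exists and has a non-zero size
    last if -e $pdfFilename && -s $pdfFilename;
    sleep 3;    # give the listener a little more time to come up
}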


perl: Finding mean and variance of large numbers without overflow

I am using a subroutine (stats) to calculate statistics for a list of numbers. These numbers may be big enough to lose precision if stored as normal Perl numbers. I receive such numbers as JSON formatted strings. To decode these strings without losing precision, I use a JSON::PP object with allow_nonref and allow_bignum activated. I send the list of such decoded numbers to the stats subroutine (see the code shown below), which calculates some statistics. These statistics are then encoded to JSON and saved to a file.
Most of the time the process seems to work correctly, but for some inputs (see the code for examples) the calculated mean and variance are either clearly wrong, or are encoded as JSON strings by the encoder, or both. I suspect this is due to the interaction of the Math::BigInt and Math::BigFloat objects created by the JSON decode with List::Util::sum0. I am trying to figure out what causes this and a way to avoid/fix it, preferably without resorting to big non-core modules. I am willing to accept an imprecise calculation of mean and variance, but not entirely inaccurate results or numerical results encoded as strings in JSON.
A script (stats.pl) to demonstrate the problem:
use strict;
use warnings;
use Data::Dumper;
$Data::Dumper::Varname = "DUMPED_RAWDATA";
use JSON::PP;
use List::Util;

my $JSON = JSON::PP->new->allow_bignum->utf8->pretty->canonical;

sub stats {
    #TODO fix bug about negative variance. AVOID OVERFLOW
    #TODO use GMP, XS?
    # @_ has decoded numbers (called RAWDATA here)
    my $n    = scalar @_;
    my $sum  = List::Util::sum0(@_);
    my $mean = $sum / $n;
    my $var  = List::Util::sum0( map { $_**2 } @_ ) / $n - $mean**2;
    my $s = {
        n        => $n,
        sum      => $sum,
        max      => List::Util::max(@_),
        min      => List::Util::min(@_),
        mean     => $mean,
        variance => $var
    };
    # DUMP STATE IF SOME ERROR OCCURS
    print Dumper( \@_ ),
      $JSON->encode( { json_encoded_stats => $s, json_encoded_rawdata => \@_ } )
      if ( '"' eq substr( $JSON->encode($var), 0, 1 )    #VARIANCE ENCODED AS STRING
        or '"' eq substr( $JSON->encode($mean), 0, 1 )   #MEAN ENCODED AS STRING
        or $var < 0 );                                   #VARIANCE IS NEGATIVE!
    $s;
}

my @test = (
    [
        qw( 919300112739897344 919305709216464896 919305709216464896 985592115567603712 959299136196456448)
    ],
    [qw(479655558 429035600 3281034608 3281034608 2606592908 3490045576)],
    [ qw(914426431563644928) x 3142 ]
);

for (@test) {
    print "---\n";
    stats( map { $JSON->decode($_) } @$_ );
}
Below is the curtailed output of perl stats.pl with problems indicated as <---.
---
$DUMPED_RAWDATA1 = [
'919300112739897344',
'919305709216464896',
'919305709216464896',
'985592115567603712',
'959299136196456448'
];
{
"json_encoded_rawdata" : [
919300112739897344,
919305709216464896,
919305709216464896,
985592115567603712,
959299136196456448
],
"json_encoded_stats" : {
"max" : 985592115567603712,
"mean" : "9.40560556587377e+17", <--- ENCODED AS STRING
"min" : 919300112739897344,
"n" : 5,
"sum" : 4702802782936887296,
"variance" : 7.46903843214008e+32
}
}
---
$DUMPED_RAWDATA1 = [
479655558,
429035600,
3281034608,
3281034608,
2606592908,
3490045576
];
{
"json_encoded_rawdata" : [
479655558,
429035600,
3281034608,
3281034608,
2606592908,
3490045576
],
"json_encoded_stats" : {
"max" : 3490045576,
"mean" : 2261233143,
"min" : 429035600,
"n" : 6,
"sum" : 13567398858,
"variance" : "-1.36775568782523e+18" <--- NEGATIVE VARIANCE, STRING ENCODED
}
}
---
$DUMPED_RAWDATA1 = [
'914426431563644928',
.
.
.
<snip 3140 identical lines>
'914426431563644928'
];
{
"json_encoded_rawdata" : [
914426431563644928,
.
.
.
<snip 3140 identical lines>
914426431563644928
],
"json_encoded_stats" : {
"max" : 914426431563644928,
"mean" : "9.14426431563676e+17", <--- STRING ENCODED
"min" : 914426431563644928,
"n" : 3142,
"sum" : 2.87312784797307e+21,
"variance" : -9.75463826617761e+22 <--- NEGATIVE VARIANCE
}
}
None of your inputs are big enough to require JSON::PP to create Math::BigInt objects on a system with 64-bit ints, so it doesn't.
You could do something like the following at the start of your sub.
@_ = map { Math::BigInt->new($_) } @_;   # Or ::BigFloat?
Alternatively,
my $zero_B = Math::BigInt->new(0);

sub stats {
    my $n = @_;
    my $sum_B  = sum($zero_B, @_);
    my $mean_B = $sum_B / $n;
    my $var_B  = sum( map { Math::BigInt->new($_) ** 2 } @_ ) / $n - $mean_B ** 2;
    my ($min, $max) = minmax(@_);
    return {
        n        => $n,
        sum      => $sum_B,
        max      => $max,
        min      => $min,
        mean     => $mean_B,
        variance => $var_B,
    };
}
All together:
use strict;
use warnings;

use Data::Dumper    qw( Dumper );
use JSON::PP        qw( );
use List::MoreUtils qw( minmax );
use List::Util      qw( sum );
use Math::BigInt    qw( );

my $zero_B = Math::BigInt->new(0);

my $JSON = JSON::PP->new->allow_bignum->utf8->pretty->canonical;

sub stats {
    my $n = @_;
    my $sum_B  = sum($zero_B, @_);
    my $mean_B = $sum_B / $n;
    my $var_B  = sum( map { Math::BigInt->new($_) ** 2 } @_ ) / $n - $mean_B ** 2;
    my ($min, $max) = minmax(@_);
    return {
        n        => $n,
        sum      => $sum_B,
        max      => $max,
        min      => $min,
        mean     => $mean_B,
        variance => $var_B,
    };
}

my @test = (
    [qw( 919300112739897344 919305709216464896 919305709216464896 985592115567603712 959299136196456448 )],
    [qw( 479655558 429035600 3281034608 3281034608 2606592908 3490045576 )],
    [ qw( 914426431563644928 ) x 3142 ]
);

for (@test) {
    print "---\n";
    my $s = stats( map { $JSON->decode($_) } @$_ );
    if (
           $JSON->encode($s->{variance}) =~ /"/   # VARIANCE ENCODED AS STRING
        || $JSON->encode($s->{mean}) =~ /"/       # MEAN ENCODED AS STRING
        || $s->{variance} < 0                     # VARIANCE IS NEGATIVE!
    ) {
        local $Data::Dumper::Varname = "DUMPED_RAWDATA";
        print Dumper($_);
        print $JSON->encode({
            json_encoded_rawdata => $_,
            json_encoded_stats   => $s,
        });
    } else {
        print "ok\n";
    }
}
Notes:
Both approaches will work even if the objects are already Math::* objects.
I indicated the vars that are guaranteed to contain a Math::Big* object using a _B suffix, for clarity.
I moved the testing code to the test harness.
I used minmax because it's more efficient than calling min and max separately.
I imported the subs from the modules to avoid having to use their full names.
No need to force something in scalar context into scalar context.
@ikegami's answer works correctly, but it is too slow for me as this subroutine is called many times in my program's inner loop. I think that is the cost of ensuring that all numbers are converted to arbitrary precision ones. I ended up using the following implementation, which avoids converting all numbers to the arbitrary precision type.
sub stats {
    my $n    = scalar @_;
    my $sum  = List::Util::sum0(@_);
    my $mean = $sum / $n;
    my $var  = List::Util::sum0( map { ( $_ - $mean )**2 } @_ ) / $n;
    $mean += 0;
    $var  += 0;    # TO ENSURE THAT THEY ARE ENCODED AS NUMBERS IN JSON
    {
        n        => $n,
        sum      => $sum,
        max      => List::Util::max(@_),
        min      => List::Util::min(@_),
        mean     => $mean,
        variance => $var,
    };
}
I changed the method of calculating variance to ensure that negative results are avoided (as suggested by @Robert). It may sacrifice precision in $sum (and everything that depends on $sum) due to floating point addition of large integers, but it completes the job in an acceptable execution time.
The unintended JSON encoding of numbers as strings is explained at https://metacpan.org/pod/JSON::PP#simple-scalars. This problem is solved by using the method suggested there to force encoding as numbers:
JSON::PP will encode undefined scalars as JSON null values, scalars
that have last been used in a string context before encoding as JSON
strings, and anything else as number value
You can force the type to be a JSON number by numifying it:
my $x = "3";   # some variable containing a string
$x += 0;       # numify it, ensuring it will be dumped as a number
$x *= 1;       # same thing, the choice is yours
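A minimal demonstration of that effect (a standalone sketch, not part of the original script):

use JSON::PP;
my $x = "3";                              # scalar last used as a string
print JSON::PP->new->encode([$x]), "\n";  # ["3"]  -> encoded as a string
$x += 0;                                  # numify it
print JSON::PP->new->encode([$x]), "\n";  # [3]    -> encoded as a number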

decode_json and return first key in hash

JSON string input: https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=MSFT&apikey=demo
I am trying to return just the first key (current day) in the hash but have been unable to do so. My code looks like the following
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use Data::Dumper;
use JSON;
my $html = get("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=AMD&apikey=CMDPTEHVYH7W5VSZ");
my $decoded = decode_json($html);
my ($open) = $decoded->{'Time Series (Daily)'}->[0]->{'1. open'};
I keep getting "Not an ARRAY reference" which I researched and got more confused.
I can access what I want directly with the below code but I want to access just the first result or the current day:
my ($open) = $decoded->{'Time Series (Daily)'}{'2017-12-20'}{'1. open'};
Also if I do something like this:
my ($open) = $decoded->{'Time Series (Daily)'};
print Dumper($open);
The output is as follows:
$VAR1 = {
'2017-09-07' => {
'1. open' => '12.8400',
'5. volume' => '35467788',
'2. high' => '12.9400',
'4. close' => '12.6300',
'3. low' => '12.6000'
},
'2017-11-15' => {
'3. low' => '10.7700',
'4. close' => '11.0700',
'2. high' => '11.1300',
'5. volume' => '33326871',
'1. open' => '11.0100'
},
'2017-11-30' => {
'1. open' => '10.8700',
'2. high' => '11.0300',
'5. volume' => '43101899',
'3. low' => '10.7600',
'4. close' => '10.8900'
},
Thank you in advance for any help you can provide a noob.
Problem 1: { denotes the start of a JSON object, which gets decoded into a hash. Trying to dereference it as an array is going to fail.
Problem 2: Like Perl hashes, JSON objects are unordered, so talking about the
"first key" makes no sense. Perhaps you want the most recent date?
use List::Util qw( maxstr );
my $time_series_daily = $decoded->{'Time Series (Daily)'};
my $latest_date = maxstr keys %$time_series_daily;
my $open = $time_series_daily->{$latest_date}{'1. open'};
You are picking among hashref keys, not array (sequential container) elements. Since hashes are inherently unordered you can't index into that list but need to sort keys as needed.
With the exact format you show this works
my $top = (sort { $b cmp $a } keys %{ $decoded->{'Time Series (Daily)'} } )[0];
say $decoded->{'Time Series (Daily)'}{$top}{'1. open'};
It gets the list of keys, inverse-sorts them (alphabetically), and takes the first element of that list.
If your date-time format may vary then you'll need to parse it for sorting.
If you will only ever want the most recent one, this is inefficient since it sorts the whole list. In that case use a more specific tool to extract only the "largest" element, like
use List::Util qw(reduce);
my $top = reduce { $a gt $b ? $a : $b }
keys %{ $decoded->{'Time Series (Daily)'} };
But then in your case this can be done simply with maxstr from the same List::Util module, as shown in ikegami's answer. On the other hand, if the datetime format doesn't lend itself to the direct lexicographical comparison used by maxstr, then reduce allows the use of custom comparisons.
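For example, with a hypothetical day-month-year key format (not the YYYY-MM-DD keys of this feed, which compare fine as strings), the keys could be parsed with the core Time::Piece module before comparing:

use List::Util qw(reduce);
use Time::Piece ();

# Pick the key holding the latest date by parsing each key (format assumed)
my $top = reduce {
    Time::Piece->strptime($a, '%d-%m-%Y')->epoch
      > Time::Piece->strptime($b, '%d-%m-%Y')->epoch ? $a : $b
} keys %{ $decoded->{'Time Series (Daily)'} };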

Symfony3 : How to do a massive import from a CSV file as fast as possible?

I have a .csv file with more than 690 000 rows.
I found a solution to import data that works very well but it's a little bit slow... (around 100 records every 3 seconds = 63 hours !!).
How can I improve my code to make it faster ?
I do the import via a console command.
Also, I would like to import only prescribers that aren't already in the database (to save time). To complicate things, no field is really unique (except for id).
Two prescribers can have the same lastname, firstname, live in the same city and have the same RPPS and professional codes, but it's the combination of these 6 fields which makes them unique!
That's why I check every field before creating a new one.
<?php

namespace AppBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Console\Helper\ProgressBar;
use AppBundle\Entity\Prescriber;

class PrescribersImportCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this
            // the name of the command (the part after "bin/console")
            ->setName('import:prescribers')
            ->setDescription('Import prescribers from .csv file')
        ;
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // Show when the script is launched
        $now = new \DateTime();
        $output->writeln('<comment>Start : ' . $now->format('d-m-Y G:i:s') . ' ---</comment>');

        // Import CSV on DB via Doctrine ORM
        $this->import($input, $output);

        // Show when the script is over
        $now = new \DateTime();
        $output->writeln('<comment>End : ' . $now->format('d-m-Y G:i:s') . ' ---</comment>');
    }

    protected function import(InputInterface $input, OutputInterface $output)
    {
        $em = $this->getContainer()->get('doctrine')->getManager();

        // Turn off Doctrine's default query logging to save memory
        $em->getConnection()->getConfiguration()->setSQLLogger(null);

        // Get PHP array of data from CSV
        $data = $this->getData();

        // Start progress
        $size = count($data);
        $progress = new ProgressBar($output, $size);
        $progress->start();

        // Process each row of data
        $batchSize = 100; // frequency for persisting the data
        $i = 1;           // current index of records

        foreach ($data as $row) {
            $p = $em->getRepository('AppBundle:Prescriber')->findOneBy(array(
                'rpps'       => $row['rpps'],
                'lastname'   => $row['nom'],
                'firstname'  => $row['prenom'],
                'profCode'   => $row['code_prof'],
                'postalCode' => $row['code_postal'],
                'city'       => $row['ville'],
            ));

            // If the prescriber does not exist we create one
            if (!is_object($p)) {
                $p = new Prescriber();
                $p->setRpps($row['rpps']);
                $p->setLastname($row['nom']);
                $p->setFirstname($row['prenom']);
                $p->setProfCode($row['code_prof']);
                $p->setPostalCode($row['code_postal']);
                $p->setCity($row['ville']);
                $em->persist($p);
            }

            // Flush every 100 persisted prescribers
            if (($i % $batchSize) === 0) {
                $em->flush();
                $em->clear();    // Detaches all objects from Doctrine!

                // Advance the progress display on the console
                $progress->advance($batchSize);
                $progress->display();
            }
            $i++;
        }

        // Flush and clear the remaining queue
        $em->flush();
        $em->clear();

        // End the progress bar process
        $progress->finish();
    }

    protected function getData()
    {
        // Getting the CSV from filesystem
        $fileName = 'web/docs/prescripteurs.csv';

        // Using service for converting CSV to PHP Array
        $converter = $this->getContainer()->get('app.csvtoarray_converter');
        $data = $converter->convert($fileName);

        return $data;
    }
}
EDIT
According to @Jake N's answer, here is the final code.
It's very much faster! 10 minutes to import 653 727 / 693 230 rows (39 503 duplicate items!)
1) Add two columns to my table: created_at and updated_at
2) Add a single UNIQUE index covering every column of my table (except id and the date columns), created with phpMyAdmin, to prevent duplicate items.
3) Add ON DUPLICATE KEY UPDATE in my query, to update just the updated_at column.
foreach ($data as $row) {
    $sql = "INSERT INTO prescripteurs (rpps, nom, prenom, code_prof, code_postal, ville)
            VALUES (:rpps, :nom, :prenom, :codeprof, :cp, :ville)
            ON DUPLICATE KEY UPDATE updated_at = NOW()";
    $stmt = $em->getConnection()->prepare($sql);
    $r = $stmt->execute(array(
        'rpps'     => $row['rpps'],
        'nom'      => $row['nom'],
        'prenom'   => $row['prenom'],
        'codeprof' => $row['code_prof'],
        'cp'       => $row['code_postal'],
        'ville'    => $row['ville'],
    ));

    if (!$r) {
        $progress->clear();
        $output->writeln('<comment>An error occurred.</comment>');
        $progress->display();
    } elseif (($i % $batchSize) === 0) {
        $progress->advance($batchSize);
        $progress->display();
    }
    $i++;
}

// Ending the progress bar process
$progress->finish();
1. Don't use Doctrine
Try not to use Doctrine if you can: it eats memory and, as you have found, is slow. Try to use just raw SQL for the import with simple INSERT statements:
$sql = <<<SQL
INSERT INTO `category` (`label`, `code`, `is_hidden`) VALUES ('Hello', 'World', '1');
SQL;
$stmt = $this->getDoctrine()->getManager()->getConnection()->prepare($sql);
$stmt->execute();
Or you can prepare the statement with values:
$sql = <<<SQL
INSERT INTO `category` (`label`, `code`, `is_hidden`) VALUES (:label, :code, :hidden);
SQL;
$stmt = $this->getDoctrine()->getManager()->getConnection()->prepare($sql);
$stmt->execute(['label' => 'Hello', 'code' => 'World', 'hidden' => 1]);
Untested code, but it should get you started as this is how I have done it before.
2. Index
Also, for your checks, have you got an index on all those fields? So that the lookup is as quick as possible.

Importing .csv file in X-Cart 5 from console

I want to set up a cron job and do scheduled imports from a particular .csv file that I will upload/update via ftp.
I wonder if there is an easy way to set up a product import for X-Cart 5 using linux console command?
There is no default way to do the import via the Linux console, but you can create a simple console script and run it via cron.
Example code (only a concept, not a solution for your exact case):
#!/usr/bin/env php
<?php

if ('cli' != PHP_SAPI) {
    exit(1);
}

require_once __DIR__ . DIRECTORY_SEPARATOR . 'top.inc.php';

XLite::getInstance()->run(true);

// Initialize importer
// See all possible options in classes/XLite/Logic/Import/Importer.php __construct()
$importer = new \XLite\Logic\Import\Importer(
    array(
        'warningsAccepted'   => true,
        'delimiter'          => ',',
        'ignoreFileChecking' => true,
        'files'              => array(
            '/full/path/to/xcart/var/import/products.csv',
            '/full/path/to/xcart/var/import/categories.csv'
        )
    )
);

// Verification step
while ($importer->getStep()->valid()) {
    $importer->getStep()->current()->process();
    $importer->getStep()->next();
}

// Check warnings & errors after verification and save them to a log file
if ($importer->hasWarnings()) {
    $warnings = \XLite\Core\Database::getRepo('XLite\Model\ImportLog')
        ->findBy(array('type' => \XLite\Model\ImportLog::TYPE_WARNING));
    \XLite\Logger::logCustom('import_warnings', var_export($warnings, true));

    // Clear warning messages
    \XLite\Core\Database::getRepo('XLite\Model\ImportLog')
        ->deleteByType(\XLite\Model\ImportLog::TYPE_WARNING);
}

if ($importer->hasErrors()) {
    $errors = \XLite\Core\Database::getRepo('XLite\Model\ImportLog')
        ->findBy(array('type' => \XLite\Model\ImportLog::TYPE_ERROR));
    \XLite\Logger::logCustom('import_errors', var_export($errors, true));
}

// Import/process quick data for products/resize images
// This loop won't be executed if ($importer->hasWarnings() == true && warningsAccepted == false)
// or ($importer->hasErrors() == true)
while ($importer->isNextStepAllowed()) {
    $importer->getOptions()->step = $importer->getOptions()->step + 1;
    $importer->getOptions()->position = 0;

    while ($importer->getStep()->valid()) {
        $importer->getStep()->current()->process();
        $importer->getStep()->next();
    }
}
Also, you can use a scheduled task in X-Cart 5. To use it you should create your own module with a class that extends the abstract class classes/XLite/Core/Task/Base/Periodic.php
You can find example code in the file classes/XLite/Module/CDev/XMLSitemap/Core/Task/GenerateSitemap.php
Run tasks registered in X-Cart 5: php console.php --target=cron

Convert MongoDB Controller items into JSON in perl?

use MongoDB;
my $content = $db->items->find('something');
my $json;
while ($content->next) {
    # push the data into $json
}
or there is this way:
my @variables = $content->all;
foreach (@variables) {
    # push the data into $json
}
Is there any way I can directly push the data into a JSON string
and then pass it to Mojolicious?
$self->render(json => $json);
Wrote a small Mojo test script for it. Use
$collection->find->all
to get a list of all documents, not an iterator. Here's the test:
#!/usr/bin/env perl
use Mojolicious::Lite;
use MongoDB;

# --- MongoDB preparation ---

my $conn    = MongoDB::Connection->new;  # create a MongoDB connection
my $test_db = $conn->test;               # a MongoDB database
my $tests   = $test_db->tests;           # a test collection

# insert test objects
$tests->remove();
$tests->insert({name => 'foo', age => 42});
$tests->insert({name => 'bar', age => 17});

# --- Mojolicious::Lite web app ---

# access the tests collection in a mojoy way
helper mongo_tests => sub {$tests};

# list all tests as JSON
get '/' => sub {
    my $self = shift;
    $self->render(json => [$self->mongo_tests->find->all]);
};

# --- web app test ---

use Test::More tests => 6;
use Test::Mojo;

my $tester = Test::Mojo->new;
$tester->get_ok('/')->status_is(200);

my $json = $tester->tx->res->json;
is $json->[0]{name}, 'foo', 'right name';
is $json->[0]{age},  42,    'right age';
is $json->[1]{name}, 'bar', 'right name';
is $json->[1]{age},  17,    'right age';
Output:
1..6
ok 1 - get /
ok 2 - 200 OK
ok 3 - right name
ok 4 - right age
ok 5 - right name
ok 6 - right age
Note: I couldn't use is_deeply to test the $json data structure because MongoDB adds OIDs. You'll see them when you dump $json.
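If you did want an is_deeply check, one option (a sketch, not part of the test above; remember to bump the Test::More plan accordingly) is to strip the generated _id fields first:

# Remove the OIDs MongoDB added, then compare the remaining structure
delete $_->{_id} for @$json;
is_deeply $json, [
    {name => 'foo', age => 42},
    {name => 'bar', age => 17},
], 'right documents';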