I'm new to XML::Twig, and I'm trying to parse a PubMed XML 2.0 esummary file to place into a MySQL database. I've gotten this far:
#!/bin/perl -w
use strict;
use DBI;
use XML::Twig;
my $uid = "";
my $title = "";
my $sortpubdate = "";
my $sortfirstauthor = "";
my $dbh = DBI->connect ("DBI:mysql:medline:localhost:80",
"root", "mysql");
my $t= new XML::Twig( twig_roots => { 'DocumentSummary' => $uid => \&submit },
twig_handlers => { 'DocumentSummary/Title' => $title, 'DocumentSummary/SortPubDate' => $sortpubdate, 'DocumentSummary/SortFirstAuthor' => $sortfirstauthor});
$t->parsefile('20112.xml');
$dbh->disconnect();
exit;
sub submit
{ my $insert= $dbh->prepare( "INSERT INTO medline_citation (uid, title, sortpubdate, sortfirstauthor) VALUES (?, ?, ?, ?);");
$insert->bind_param( 1, $uid);
$insert->bind_param( 2, $title);
$insert->bind_param( 3, $sortpubdate);
$insert->bind_param( 4, $sortfirstauthor);
$insert->execute();
$t->purge;
}
But Perl seems to stall for some reason. Am I doing this right? I'm trying to use twig_roots to decrease the amount of parsing since I'm only interested in a few fields (these are large files).
Here is an example of the XML:
<DocumentSummary uid="22641317">
<PubDate>2012 Jun 1</PubDate>
<EPubDate></EPubDate>
<Source>Clin J Oncol Nurs</Source>
<Authors>
<Author>
<Name>Park SH</Name>
<AuthType>
Author
</AuthType>
<ClusterID>0</ClusterID>
</Author>
<Author>
<Name>Knobf MT</Name>
<AuthType>
Author
</AuthType>
<ClusterID>0</ClusterID>
</Author>
<Author>
<Name>Sutton KM</Name>
<AuthType>
Author
</AuthType>
<ClusterID>0</ClusterID>
</Author>
</Authors>
<LastAuthor>Sutton KM</LastAuthor>
<Title>Etiology, assessment, and management of aromatase inhibitor-related musculoskeletal symptoms.</Title>
<SortTitle>etiology assessment and management of aromatase inhibitor related musculoskeletal symptoms </SortTitle>
<Volume>16</Volume>
<Issue>3</Issue>
<Pages>260-6</Pages>
<Lang>
<string>eng</string>
</Lang>
<NlmUniqueID>9705336</NlmUniqueID>
<ISSN>1092-1095</ISSN>
<ESSN>1538-067X</ESSN>
<PubType>
<flag>Journal Article</flag>
</PubType>
<RecordStatus>
PubMed - in process
</RecordStatus>
<PubStatus>4</PubStatus>
<ArticleIds>
<ArticleId>
<IdType>pii</IdType>
<IdTypeN>4</IdTypeN>
<Value>N1750TW804546361</Value>
</ArticleId>
<ArticleId>
<IdType>doi</IdType>
<IdTypeN>3</IdTypeN>
<Value>10.1188/12.CJON.260-266</Value>
</ArticleId>
<ArticleId>
<IdType>pubmed</IdType>
<IdTypeN>1</IdTypeN>
<Value>22641317</Value>
</ArticleId>
<ArticleId>
<IdType>rid</IdType>
<IdTypeN>8</IdTypeN>
<Value>22641317</Value>
</ArticleId>
<ArticleId>
<IdType>eid</IdType>
<IdTypeN>8</IdTypeN>
<Value>22641317</Value>
</ArticleId>
</ArticleIds>
<History>
<PubMedPubDate>
<PubStatus>entrez</PubStatus>
<Date>2012/05/30 06:00</Date>
</PubMedPubDate>
<PubMedPubDate>
<PubStatus>pubmed</PubStatus>
<Date>2012/05/30 06:00</Date>
</PubMedPubDate>
<PubMedPubDate>
<PubStatus>medline</PubStatus>
<Date>2012/05/30 06:00</Date>
</PubMedPubDate>
</History>
<References>
</References>
<Attributes>
<flag>Has Abstract</flag>
</Attributes>
<PmcRefCount>0</PmcRefCount>
<FullJournalName>Clinical journal of oncology nursing</FullJournalName>
<ELocationID></ELocationID>
<ViewCount>0</ViewCount>
<DocType>citation</DocType>
<SrcContribList>
</SrcContribList>
<BookTitle></BookTitle>
<Medium></Medium>
<Edition></Edition>
<PublisherLocation></PublisherLocation>
<PublisherName></PublisherName>
<SrcDate></SrcDate>
<ReportNumber></ReportNumber>
<AvailableFromURL></AvailableFromURL>
<LocationLabel></LocationLabel>
<DocContribList>
</DocContribList>
<DocDate></DocDate>
<BookName></BookName>
<Chapter></Chapter>
<SortPubDate>2012/06/01 00:00</SortPubDate>
<SortFirstAuthor>Park SH</SortFirstAuthor>
</DocumentSummary>
Thanks!
The way I'd do this is to have a single handler, for the DocumentSummary, that feeds the DB, then purges the record. There is no need to get fancier than this.
Also, I find DBIx::Simple, well, simpler to use than raw DBI; it takes care of preparing and caching statements for me:
#!/bin/perl
use strict;
use warnings;
use DBIx::Simple;
use XML::Twig;
my $db = DBIx::Simple->connect ("dbi:SQLite:dbname=t.db"); # replace by your DSN
my $t= XML::Twig->new( twig_roots => { DocumentSummary => \&submit },)
->parsefile('20112.xml');
$db->disconnect();
exit;
sub submit
{ my( $t, $summary)= @_;
my $insert= $db->query( "INSERT INTO medline_citation (uid, title, sortpubdate, sortfirstauthor) VALUES (?, ?, ?, ?);",
$summary->att( 'uid'),
map { $summary->field( $_) } ( qw( Title SortPubDate SortFirstAuthor))
);
$t->purge;
}
If you're wondering about map { $summary->field( $_) } ( qw( Title SortPubDate SortFirstAuthor)), it's just a fancier (and IMHO more maintainable) way to write $summary->field( 'Title'), $summary->field( 'SortPubDate'), $summary->field( 'SortFirstAuthor' )
Your handler syntax is wrong. See the documentation for examples, and the corrected sketch after them:
my $twig=XML::Twig->new(
twig_handlers =>
{ title => sub { $_->set_tag( 'h2') }, # change title tags to h2
para => sub { $_->set_tag( 'p') }, # change para to p
hidden => sub { $_->delete; }, # remove hidden elements
list => \&my_list_process, # process list elements
div => sub { $_[0]->flush; }, # output and free memory
},
pretty_print => 'indented', # output will be nicely formatted
empty_tags => 'html', # outputs <empty_tag />
);
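Applied to the original program, the intent might be expressed like this (a sketch, untested: each handler maps a tag or path to a code reference, captures the field text into a lexical, and submit then reads those lexicals plus the uid attribute):
my ($title, $sortpubdate, $sortfirstauthor);
my $t = XML::Twig->new(
    twig_roots    => { 'DocumentSummary' => \&submit },
    twig_handlers => {
        'DocumentSummary/Title'           => sub { $title           = $_->text },
        'DocumentSummary/SortPubDate'     => sub { $sortpubdate     = $_->text },
        'DocumentSummary/SortFirstAuthor' => sub { $sortfirstauthor = $_->text },
    },
);
sub submit {
    my ($t, $summary) = @_;
    my $uid = $summary->att('uid');  # insert $uid, $title, etc. into the DB here
    $t->purge;
}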
JSON string input: https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=MSFT&apikey=demo
I am trying to return just the first key (current day) in the hash but have been unable to do so. My code looks like the following
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use Data::Dumper;
use JSON;
my $html = get("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=AMD&apikey=CMDPTEHVYH7W5VSZ");
my $decoded = decode_json($html);
my ($open) = $decoded->{'Time Series (Daily)'}->[0]->{'1. open'};
I keep getting "Not an ARRAY reference" which I researched and got more confused.
I can access what I want directly with the below code but I want to access just the first result or the current day:
my ($open) = $decoded->{'Time Series (Daily)'}{'2017-12-20'}{'1. open'};
Also if I do something like this:
my ($open) = $decoded->{'Time Series (Daily)'};
print Dumper($open);
The output is as follows:
$VAR1 = {
'2017-09-07' => {
'1. open' => '12.8400',
'5. volume' => '35467788',
'2. high' => '12.9400',
'4. close' => '12.6300',
'3. low' => '12.6000'
},
'2017-11-15' => {
'3. low' => '10.7700',
'4. close' => '11.0700',
'2. high' => '11.1300',
'5. volume' => '33326871',
'1. open' => '11.0100'
},
'2017-11-30' => {
'1. open' => '10.8700',
'2. high' => '11.0300',
'5. volume' => '43101899',
'3. low' => '10.7600',
'4. close' => '10.8900'
},
Thank you in advance for any help you can provide a noob.
Problem 1: { denotes the start of a JSON object, which gets decoded into a hash. Trying to dereference an array is going to fail.
Problem 2: Like Perl hashes, JSON objects are unordered, so talking about the
"first key" makes no sense. Perhaps you want the most recent date?
use List::Util qw( maxstr );
my $time_series_daily = $decoded->{'Time Series (Daily)'};
my $latest_date = maxstr keys %$time_series_daily;
my $open = $time_series_daily->{$latest_date}{'1. open'};
You are picking among hashref keys, not array (sequential container) elements. Since hashes are inherently unordered, you can't index into that list; you need to sort the keys as needed.
With the exact format you show, this works:
my $top = (sort { $b cmp $a } keys %{ $decoded->{'Time Series (Daily)'} } )[0];
say $decoded->{'Time Series (Daily)'}{$top}{'1. open'};
It gets the list of keys, sorts them in descending (lexicographic) order, and takes the first element of that list.
If your date-time format may vary then you'll need to parse it for sorting.
If you only ever want the most recent one, this is inefficient since it sorts the whole list. In that case, use a more specific tool to extract only the "largest" element, like
use List::Util qw(reduce);
my $top = reduce { $a gt $b ? $a : $b }
keys %{ $decoded->{'Time Series (Daily)'} };
But in your case this can be done simply with maxstr from the same List::Util module, as shown in ikegami's answer. On the other hand, if the datetime format doesn't lend itself to the direct lexicographic comparison used by maxstr, reduce allows the use of custom comparisons.
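For instance, with hypothetical day-first keys like "20-12-2017" (which don't compare lexicographically), a sketch of a custom comparison could rearrange the parts before comparing; the to_sortable helper here is made up for illustration:
use List::Util qw(reduce);
# Rewrite each "DD-MM-YYYY" key as "YYYYMMDD" so that plain string
# comparison orders the keys by date.
sub to_sortable { my ($d, $m, $y) = split /-/, $_[0]; return "$y$m$d" }
my $top = reduce { to_sortable($a) gt to_sortable($b) ? $a : $b }
          keys %{ $decoded->{'Time Series (Daily)'} };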
I have a .csv file with more than 690 000 rows.
I found a solution to import data that works very well but it's a little bit slow... (around 100 records every 3 seconds = 63 hours !!).
How can I improve my code to make it faster ?
I do the import via a console command.
Also, I would like to import only prescribers that aren't already in the database (to save time). To complicate things, no field is really unique (except for id).
Two prescribers can have the same last name and first name, live in the same city, and have the same RPPS and professional codes. But it's the combination of these 6 fields that makes them unique!
That's why I check every field before creating a new one.
<?php
namespace AppBundle\Command;
use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Console\Helper\ProgressBar;
use AppBundle\Entity\Prescriber;
class PrescribersImportCommand extends ContainerAwareCommand
{
protected function configure()
{
$this
// the name of the command (the part after "bin/console")
->setName('import:prescribers')
->setDescription('Import prescribers from .csv file')
;
}
protected function execute(InputInterface $input, OutputInterface $output)
{
// Show when the script is launched
$now = new \DateTime();
$output->writeln('<comment>Start : ' . $now->format('d-m-Y G:i:s') . ' ---</comment>');
// Import CSV on DB via Doctrine ORM
$this->import($input, $output);
// Show when the script is over
$now = new \DateTime();
$output->writeln('<comment>End : ' . $now->format('d-m-Y G:i:s') . ' ---</comment>');
}
protected function import(InputInterface $input, OutputInterface $output)
{
$em = $this->getContainer()->get('doctrine')->getManager();
// Turning off doctrine default logs queries for saving memory
$em->getConnection()->getConfiguration()->setSQLLogger(null);
// Get php array of data from CSV
$data = $this->getData();
// Start progress
$size = count($data);
$progress = new ProgressBar($output, $size);
$progress->start();
// Processing on each row of data
$batchSize = 100; # frequency for persisting the data
$i = 1; # current index of records
foreach($data as $row) {
$p = $em->getRepository('AppBundle:Prescriber')->findOneBy(array(
'rpps' => $row['rpps'],
'lastname' => $row['nom'],
'firstname' => $row['prenom'],
'profCode' => $row['code_prof'],
'postalCode' => $row['code_postal'],
'city' => $row['ville'],
));
# If the prescriber does not exist we create one
if(!is_object($p)){
$p = new Prescriber();
$p->setRpps($row['rpps']);
$p->setLastname($row['nom']);
$p->setFirstname($row['prenom']);
$p->setProfCode($row['code_prof']);
$p->setPostalCode($row['code_postal']);
$p->setCity($row['ville']);
$em->persist($p);
}
# flush each 100 prescribers persisted
if (($i % $batchSize) === 0) {
$em->flush();
$em->clear(); // Detaches all objects from Doctrine!
// Advancing for progress display on console
$progress->advance($batchSize);
$progress->display();
}
$i++;
}
// Flushing and clear data on queue
$em->flush();
$em->clear();
// Ending the progress bar process
$progress->finish();
}
protected function getData()
{
// Getting the CSV from filesystem
$fileName = 'web/docs/prescripteurs.csv';
// Using service for converting CSV to PHP Array
$converter = $this->getContainer()->get('app.csvtoarray_converter');
$data = $converter->convert($fileName);
return $data;
}
}
EDIT
According to Jake N's answer, here is the final code.
It's much, much faster! 10 minutes to import 653 727 / 693 230 rows (39 503 duplicate items!)
1) Add two columns in my table : created_at and updated_at
2) Add a single UNIQUE index covering every column of my table (except id and the date columns), created with phpMyAdmin, to prevent duplicate items (a SQL sketch follows below).
3) Add ON DUPLICATE KEY UPDATE in my query, to update just the updated_at column.
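For reference, step 2 might look like this in plain SQL (a sketch; the table and column names come from the query below, and the index name uniq_prescriber is arbitrary):
ALTER TABLE prescripteurs
  ADD UNIQUE INDEX uniq_prescriber (rpps, nom, prenom, code_prof, code_postal, ville);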
foreach($data as $row) {
$sql = "INSERT INTO prescripteurs (rpps, nom, prenom, code_prof, code_postal, ville)
VALUES(:rpps, :nom, :prenom, :codeprof, :cp, :ville)
ON DUPLICATE KEY UPDATE updated_at = NOW()";
$stmt = $em->getConnection()->prepare($sql);
$r = $stmt->execute(array(
'rpps' => $row['rpps'],
'nom' => $row['nom'],
'prenom' => $row['prenom'],
'codeprof' => $row['code_prof'],
'cp' => $row['code_postal'],
'ville' => $row['ville'],
));
if (!$r) {
$progress->clear();
$output->writeln('<comment>An error occured.</comment>');
$progress->display();
} elseif (($i % $batchSize) === 0) {
$progress->advance($batchSize);
$progress->display();
}
$i++;
}
// Ending the progress bar process
$progress->finish();
1. Don't use Doctrine
Try not to use Doctrine if you can; it eats memory and, as you have found, is slow. Use raw SQL for the import with simple INSERT statements:
$sql = <<<SQL
INSERT INTO `category` (`label`, `code`, `is_hidden`) VALUES ('Hello', 'World', '1');
SQL;
$stmt = $this->getDoctrine()->getManager()->getConnection()->prepare($sql);
$stmt->execute();
Or you can prepare the statement with values:
$sql = <<<SQL
INSERT INTO `category` (`label`, `code`, `is_hidden`) VALUES (:label, :code, :hidden);
SQL;
$stmt = $this->getDoctrine()->getManager()->getConnection()->prepare($sql);
$stmt->execute(['label' => 'Hello', 'code' => 'World', 'hidden' => 1]);
Untested code, but it should get you started as this is how I have done it before.
2. Index
Also, for your checks, have you got an index on all those fields? So that the lookup is as quick as possible.
I'm working on a bioinformatics project that requires me to read genomic data (nothing too fancy, just think of it as strings) from various organisms and insert it into a database. Each read belongs to one organism, and can contain from 5,000 to 50,000 genes, which I need to process and analyze prior to storage.
The script currently doing this is written in Perl and, after all calculations, stores the results in a hash like this:
$new{$id}{gene_name} = $id;
$new{$id}{gene_database_source} = $gene_database_source;
$new{$id}{product} = $product;
$new{$id}{sequence} = $sequence;
$new{$id}{seqlength} = $seqlength;
$new{$id}{digest} = $digest;
$new{$id}{mw} = $mw;
$new{$id}{iep} = $iep;
$new{$id}{tms} = $tms;
After all genes are read, the insertions are made by looping through the hash inside an eval {} statement.
eval {
foreach my $id (keys %new) {
my $rs = $schema->resultset('Genes')->create(
{
gene_name => $new{$id}{gene_name},
gene_product => $new{$id}{product},
sequence => $new{$id}{sequence},
gene_protein_length => $new{$id}{seqlength},
digest => $new{$id}{digest},
gene_isoelectric_point => $new{$id}{iep},
gene_molecular_weight => $new{$id}{mw},
gene_tmd_count => $new{$id}{tms},
gene_species => $species,
species_code => $spc,
user_id => $tdruserid,
gene_database_source => $new{$id}{gene_database_source}
}
);
}
};
While this "works", it has at least two problems I'd like to solve:
The eval statement is intended to "failsafe" the insertions: if one of the insertions fails, the eval dies and no insertion is done. This is clearly not how eval works. I'm pretty sure all insertions made until the failure point will be done, and there's no rollback whatsoever.
The script needs to loop twice through very large datasets (once while reading and creating the hashes, and again when reading the hashes and performing the insertions). This makes the process's performance rather poor.
Instead of creating the hashes, I'd been thinking of using the new method of DBIx::Class ($schema->new({..stuff..})) and then doing a massive insert transaction. That would solve the double iteration, and the eval would either work (or not) on a single transaction, which would give the expected behaviour of < either all insertions or none > ... Is there a way to do this?
You can create your massive transaction by using a TxnScopeGuard in DBIC. In the most basic form, that would be as follows.
eval { # or try from Try::Tiny
my $guard = $schema->txn_scope_guard;
foreach my $id ( keys %new ) {
my $rs = $schema->resultset('Genes')->create(
{
gene_name => $new{$id}{gene_name},
gene_product => $new{$id}{product},
sequence => $new{$id}{sequence},
gene_protein_length => $new{$id}{seqlength},
digest => $new{$id}{digest},
gene_isoelectric_point => $new{$id}{iep},
gene_molecular_weight => $new{$id}{mw},
gene_tmd_count => $new{$id}{tms},
gene_species => $species,
species_code => $spc,
user_id => $tdruserid,
gene_database_source => $new{$id}{gene_database_source}
}
);
}
$guard->commit;
}
You create a scope guard object, and when you're done setting up your transaction, you commit it. If the object goes out of scope, e.g. because something died, it will roll back the transaction automatically.
The eval can catch the die, and your program will not crash. You had that part correct, but you're also right that your code will not undo previous inserts. Note that Try::Tiny's try provides nicer syntax, but it's not needed here.
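For illustration, a minimal sketch of the same pattern with Try::Tiny (the catch block is an assumption about how you'd want to report failure):
use Try::Tiny;
# Same idea as the eval above: the guard still rolls the transaction
# back automatically if anything dies before the commit.
try {
    my $guard = $schema->txn_scope_guard;
    # ... the create() calls from above go here ...
    $guard->commit;
}
catch {
    warn "Transaction rolled back: $_";
};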
A transaction in this case means that the inserts only take effect together: nothing becomes permanent until the commit, and a failure rolls back everything done so far.
Note that this will still issue one INSERT statement per row!
If you instead want to create larger INSERT statements, like the following, you need populate, not new.
INSERT INTO foo (bar, baz) VALUES
(1, 1),
(2, 2),
(3, 3),
...
The populate method lets you pass in an array reference with multiple rows at one time. This is supposed to be way faster than inserting one at a time.
$schema->resultset("Artist")->populate([
[ qw( artistid name ) ],
[ 100, 'A Formally Unknown Singer' ],
[ 101, 'A singer that jumped the shark two albums ago' ],
[ 102, 'An actually cool singer' ],
]);
Translated to your loop, that would be as follows. Note that the documentation claims that it's faster if you run it in void context.
eval {
$schema->resultset('Genes')->populate(
[
[
qw(
gene_name gene_product sequence
gene_protein_length digest gene_isoelectric_point
gene_molecular_weight gene_tmd_count gene_species
species_code user_id gene_database_source
)
],
map {
[
$new{$_}{gene_name}, $new{$_}{product},
$new{$_}{sequence}, $new{$_}{seqlength},
$new{$_}{digest}, $new{$_}{iep},
$new{$_}{mw}, $new{$_}{tms},
$species, $spc,
$tdruserid, $new{$_}{gene_database_source},
]
} keys %new
],
);
}
Done like this, the scope guard is not needed. However, I would advise you not to do more than 1000 rows per statement. Processing in chunks might be a good idea for performance reasons. In that case, you'd loop over the keys 1000 at a time. List::MoreUtils has a nice natatime function for that.
use List::MoreUtils 'natatime';
eval {
my $guard = $schema->txn_scope_guard;
my $it = natatime 1_000, keys %new;
while ( my @keys = $it->() ) {
$schema->resultset('Genes')->populate(
[
[
qw(
gene_name gene_product sequence
gene_protein_length digest gene_isoelectric_point
gene_molecular_weight gene_tmd_count gene_species
species_code user_id gene_database_source
)
],
map {
[
$new{$_}{gene_name}, $new{$_}{product},
$new{$_}{sequence}, $new{$_}{seqlength},
$new{$_}{digest}, $new{$_}{iep},
$new{$_}{mw}, $new{$_}{tms},
$species, $spc,
$tdruserid, $new{$_}{gene_database_source},
]
} @keys
],
);
}
$guard->commit;
}
Now it will do 1000 rows per insertion, and run all those queries in one big transaction. If one of them fails, none will be done.
The script needs to loop twice through very large datasets (once while reading and creating the hashes, and again when reading the hashes and performing the insertions). This makes the process's performance rather poor.
You're not showing how you create the data, besides this assignment.
$new{$id}{gene_name} = $id;
$new{$id}{gene_database_source} = $gene_database_source;
$new{$id}{product} = $product;
If that's all there is to it, nothing is stopping you from using the approach I've shown above directly where you're processing the data the first time and building the hash. The following code is incomplete, because you're not telling us where the data is coming from, but you should get the gist.
eval {
my $guard = $schema->txn_scope_guard;
# we use this to collect rows to process
my @rows;
# this is where your data comes in
while ( my $foo = <DATA> ) {
# here you process the data and come up with your variables
my ( $id, $gene_database_source, $product, $sequence, $seqlength,
$digest, $mw, $iep, $tms );
# collect the row so we can insert it later
# (values ordered to match the column list passed to populate below)
push(
    @rows,
    [
        $id, $product, $sequence, $seqlength,
        $digest, $iep, $mw, $tms,
        $species, $spc, $tdruserid, $gene_database_source,
    ]
);
# only insert if we reached the limit
if ( scalar @rows == 1000 ) {
$schema->resultset('Genes')->populate(
[
[
qw(
gene_name gene_product sequence
gene_protein_length digest gene_isoelectric_point
gene_molecular_weight gene_tmd_count gene_species
species_code user_id gene_database_source
)
],
@rows,
],
);
# empty the list of values
@rows = ();
}
}
# insert any rows left over that didn't fill a complete chunk
$schema->resultset('Genes')->populate(
    [
        [
            qw(
                gene_name gene_product sequence
                gene_protein_length digest gene_isoelectric_point
                gene_molecular_weight gene_tmd_count gene_species
                species_code user_id gene_database_source
            )
        ],
        @rows,
    ],
) if @rows;
$guard->commit;
}
Essentially we collect up to 1000 rows directly as array references while we process them; when we've reached the limit, we pass them to the database and reset the row array. A final populate after the loop picks up any leftover rows. Again, all of this is wrapped in a transaction, so it will only be committed if all the inserts are fine.
There is more information on transactions in DBIC in the cookbook.
Please note that I have not tested any of this code.
I have received arguments in my Mason handler that look like the following:
$data = {
'cacheParams' => 0,
'requests' => {
'locationId' => 1,
'uniqueId' => [
'ABC',
'DEF',
'XYZ'
]
}
};
I am able to access the requests by using $data['requests']. How do I access the values stored in requests, i.e. locationId and uniqueId ? I need to use these values to form another JSON in the following way:
my $input = {
stateID => 44,
locationId => requests.locationId,
uniqueId => requests.uniqueId
.
.
.
}
Since $data is a hash reference, you access it with $data->{'requests'} rather than $data['requests']. That value is itself a hash reference, so you can get at its keys like this:
my $locationId = $data->{'requests'}{'locationId'};
my $uniqueId   = $data->{'requests'}{'uniqueId'};
or
my $requests   = $data->{'requests'};
my $locationId = $requests->{'locationId'};
my $uniqueId   = $requests->{'uniqueId'};
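From there, building the new structure is just plugging those values in; a sketch, assuming the JSON module for encoding (stateID is the literal value from your example):
use JSON;
my $input = {
    stateID    => 44,
    locationId => $data->{'requests'}{'locationId'},
    uniqueId   => $data->{'requests'}{'uniqueId'},   # remains an array reference
};
my $json = encode_json($input);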
I'd like to translate a Perl web site into several languages. I searched for and tried many ideas, but I think the best one is to save all translations inside the MySQL database. But I ran into a problem...
When a sentence extracted from the database contains a variable (scalar), it prints on the web page with the variable uninterpolated:
You have $number new messages.
Is there a proper way to substitute the actual value of $number back in?
Thank you for your help !
You can store printf format strings in your database and pass values to them.
printf allows you to specify the position of each argument, so you only need to know where "$number" sits in the list of parameters.
For example
#!/usr/bin/perl
use strict;
my $Details = {
'Name' => 'Mr Satch',
'Age' => '31',
'LocationEn' => 'England',
'LocationFr' => 'Angleterre',
'NewMessages' => 20,
'OldMessages' => 120,
};
my $English = q(
Hello %1$s, I see you are %2$s years old and from %3$s
How are you today?
You have %5$i new messages and %6$i old messages
Have a nice day
);
my $French = q{
Bonjour %1$s, je vous vois d'%4$s et âgés de %2$s ans.
Comment allez-vous aujourd'hui?
Vous avez %5$i nouveaux messages et %6$i anciens messages.
Bonne journée.
};
printf($English, @$Details{qw/Name Age LocationEn LocationFr NewMessages OldMessages/});
printf($French, @$Details{qw/Name Age LocationEn LocationFr NewMessages OldMessages/});
This would be a nightmare to maintain so an alternative might be to include an argument list in the database:
#!/usr/bin/perl
use strict;
sub fetch_row {
return {
'Format' => 'You have %i new messages and %i old messages' . "\n",
'Arguments' => 'NewMessages OldMessages',
}
}
sub PrintMessage {
my ($info,$row) = @_;
printf($row->{'Format'},@$info{split(/ +/,$row->{'Arguments'})});
}
my $Details = {
'Name' => 'Mr Satch',
'Age' => '31',
'LocationEn' => 'England',
'LocationFr' => 'Angleterre',
'NewMessages' => 20,
'OldMessages' => 120,
};
my $row = fetch_row();
PrintMessage($Details,$row);
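With the $Details data above, that call prints: You have 20 new messages and 120 old messages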