use MongoDB;
my $content = $db->items->find('something');
my $json;
while ($content->next) {
    # push the data into $json somehow
}
or, if there is a way,
my @variables = $content->all;
foreach (@variables) {
    # push the data into $json
}
Is there a way to directly convert the data into a JSON string and hand it to Mojolicious like this?
$self->render(json => $json);
Wrote a small Mojo test script for it. Use
$collection->find->all
to get a list of all pages, not an iterator. Here's the test:
#!/usr/bin/env perl
use Mojolicious::Lite;
use MongoDB;
# --- MongoDB preparation ---
my $conn = MongoDB::Connection->new; # create a MongoDB connection
my $test_db = $conn->test; # a MongoDB database
my $tests = $test_db->tests; # a test collection
# insert test objects
$tests->remove();
$tests->insert({name => 'foo', age => 42});
$tests->insert({name => 'bar', age => 17});
# --- Mojolicious::Lite web app ---
# access the tests collection in a mojoy way
helper mongo_tests => sub {$tests};
# list all tests as JSON
get '/' => sub {
    my $self = shift;
    $self->render(json => [$self->mongo_tests->find->all]);
};
# --- web app test ---
use Test::More tests => 6;
use Test::Mojo;
my $tester = Test::Mojo->new;
$tester->get_ok('/')->status_is(200);
my $json = $tester->tx->res->json;
is $json->[0]{name}, 'foo', 'right name';
is $json->[0]{age}, 42, 'right age';
is $json->[1]{name}, 'bar', 'right name';
is $json->[1]{age}, 17, 'right age';
Output:
1..6
ok 1 - get /
ok 2 - 200 OK
ok 3 - right name
ok 4 - right age
ok 5 - right name
ok 6 - right age
Note: I couldn't use is_deeply to test the $json data structure because MongoDB adds OIDs. You'll see them when you dump $json.
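If you want a structure that is_deeply can compare (or just cleaner JSON), one option is to strip the OIDs before rendering. A minimal sketch along the lines of the route above; the '/plain' route name is made up:
get '/plain' => sub {
    my $self = shift;
    my @docs = $self->mongo_tests->find->all;
    delete $_->{_id} for @docs;    # strip the MongoDB-generated OIDs (illustration, not from the original test)
    $self->render(json => \@docs);
};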
I'm working on some document enhancements and example code snippets for Ruby's JSON class. I'm puzzled by this option to JSON.parse:
create_additions: If set to false, the Parser doesn't create additions even if a matching class and ::create_id was found. This option defaults to false.
Could someone please provide example code for using this?
Consider this:
require 'json'

class Range
  def to_json(*a)
    {
      'json_class' => self.class.name,
      'data' => [ first, last, exclude_end? ]
    }.to_json(*a)
  end

  def self.json_create(o)
    new(*o['data'])
  end
end
foo = 1 .. 2
Generating JSON:
JSON.generate(foo) # => "{\"json_class\":\"Range\",\"data\":[1,2,false]}"
JSON.generate(foo, { create_additions: false }) # => "{\"json_class\":\"Range\",\"data\":[1,2,false]}"
JSON.generate(foo, { create_additions: true }) # => "{\"json_class\":\"Range\",\"data\":[1,2,false]}"
Parsing the generated JSON:
JSON.parse( JSON.generate(foo) ) # => {"json_class"=>"Range", "data"=>[1, 2, false]}
JSON.parse( JSON.generate(foo), { create_additions: false } ) # => {"json_class"=>"Range", "data"=>[1, 2, false]}
JSON.parse( JSON.generate(foo), { create_additions: true } ) # => 1..2
"2.4.3. JSON.parse and JSON.load" demonstrates a potential bug in JSON that affected create_additions. From there it was a simple thing, just some lines testing the result of toggling the state.
Why they had to close the security hole is for you to research as it involves the specification for JSON serialized data and it being a data-exchange standard, and an example in the JSON docs needs to cover that.
The example is right there in the documentation: https://ruby-doc.org/stdlib-2.6.3/libdoc/json/rdoc/JSON.html#module-JSON-label-Extended+rendering+and+loading+of+Ruby+objects.
The main difference in this respect between parse and load is that the former defaults to not creating additions, while the latter defaults to creating them.
Extended rendering and loading of Ruby objects
JSON provides optional additions that allow serializing and deserializing Ruby classes without losing their type.
# without additions
require "json"
json = JSON.generate({range: 1..3, regex: /test/})
# => '{"range":"1..3","regex":"(?-mix:test)"}'
JSON.parse(json)
# => {"range"=>"1..3", "regex"=>"(?-mix:test)"}
# with additions
require "json/add/range"
require "json/add/regexp"
json = JSON.generate({range: 1..3, regex: /test/})
# => '{"range":{"json_class":"Range","a":[1,3,false]},"regex":{"json_class":"Regexp","o":0,"s":"test"}}'
JSON.parse(json)
# => {"range"=>{"json_class"=>"Range", "a"=>[1, 3, false]}, "regex"=>{"json_class"=>"Regexp", "o"=>0, "s"=>"test"}}
JSON.load(json)
# => {"range"=>1..3, "regex"=>/test/}
See #load for details.
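To make that default difference concrete, here's a small check in the same style as the docs example above (a sketch; it assumes json/add/range is loaded):
require 'json'
require 'json/add/range'

json = JSON.generate({range: 1..3})
# => '{"range":{"json_class":"Range","a":[1,3,false]}}'
JSON.parse(json)                          # additions off by default
# => {"range"=>{"json_class"=>"Range", "a"=>[1, 3, false]}}
JSON.parse(json, create_additions: true)  # opting in explicitly
# => {"range"=>1..3}
JSON.load(json)                           # additions on by default
# => {"range"=>1..3}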
I've created a basic client and server that pass a string, which I've now changed to JSON. But the JSON string is only parsable before it gets sent through TCP. After it's sent, the string version is identical (after a chomp), but on the server side it no longer parses as JSON correctly. Here is some of my code (with other bits trimmed).
Some of the client code
require 'json'
require 'socket'
foo = {'a' => 1, 'b' => 2, 'c' => 3}
puts foo.to_s + "......."
json = foo.to_json
puts foo['b'] # => outputs the correct '2' answer
client = TCPSocket.open('localhost', 2000)
client.puts json
client.close
Some of the server code
require 'socket'
require 'json'
server = TCPServer.open(2000)
while true
  client = server.accept # Accept client
  response = client.gets
  print response
  response = response.chomp
  response.to_json
  puts response['b'] # => outputs 'b'
end
The output 'b' should be '2' instead. How do I fix this?
Thanks
In your server you wrote response.to_json. This converts the string to JSON and then throws the result away. And I don't like the .chomp, either.
Try
response = client.gets
hash = JSON.parse(response)
Now hash is a Ruby Hash object with your data in it, and hash['b'] should work correctly.
The problem is that .to_json does not parse JSON inside a string and replace the string with the result. It converts the string itself into a valid JSON value (a quoted string).
require 'json'
string = "abc"
puts string
puts string.to_json
This will output:
abc
"abc"
The method is added to the String class by the JSON generator and it uses it internally to generate the JSON document.
But why does your response['b'] return "b"?
Because Ruby strings have a method called [] that can be used to:
Return a substring: "abc"[0,2] => "ab"
Return a single character from index: "abc"[1] => "b"
Return a substring if the string contains it: "abc"["bc"] => "bc", "abc"["fg"] => nil
Return a regexp match: "abc"[/^a([a-z])c/, 1] => "b"
and possibly some other ways I can't think of right now.
So this happens because your response is a string that has the character "b" in it:
response = "something with a b"
puts response["b"]
# outputs b
puts response["x"]
# outputs a blank line because response does not contain "x".
Instead of .to_json your code has to call JSON.parse or JSON.load:
data = JSON.parse(response)
puts data['b']
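Putting it together, a minimal corrected round trip might look like this (a sketch, run as two separate scripts; host and port as in the question):
# client.rb
require 'json'
require 'socket'

foo = {'a' => 1, 'b' => 2, 'c' => 3}
client = TCPSocket.open('localhost', 2000)
client.puts foo.to_json   # serialize once, then send
client.close

# server.rb
require 'json'
require 'socket'

server = TCPServer.open(2000)
loop do
  conn = server.accept
  data = JSON.parse(conn.gets)  # parse the received line back into a Hash
  puts data['b']                # => 2
  conn.close
end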
How can I populate %select options in the Haml view with JSON parameters I have in the Sinatra controller?
In the sinatra controller I have:
response = JSON.parse(curl_resp)
nestedData = response["data"][0]
nestedData.each do |c|
  names = c["attributes"]["names"]
end
return haml :newPage, :locals => {:name => example: name in names}
and this the %select options in newPage.haml view:
%select{:name => "select names"}
  %option{:value => "id1"} #{locals[:name]}.[0]
  %option{:value => "id2"} #{locals[:name]}.[1]
  %option{:value => "id3"} #{locals[:name]}.[2]
  %option{:value => "id4"} #{locals[:name]}.[3]
this is a sample JSON I get from curl:
{"data":[
{"id":"id1","attributes":{"name":"gnu"}},
{"id":"id2","attributes":{"name":"Alice"}},
{"id":"id3","attributes":{"name":"testsubject"}},
{"id":"id4","attributes":{"name":"testissuer"}}
]}
If your requirement is to iterate over the entire dataset and display <option> tags, you can use something like:
# app.rb
get '/' do
  # This is obtained from JSON.parse-ing the incoming data. I've used the
  # JSON value directly.
  @json = {
    data: [
      { id: "id1", attributes: { name: "gnu" } },
      { id: "id2", attributes: { name: "Alice" } },
      { id: "id3", attributes: { name: "testsubject" } },
      { id: "id4", attributes: { name: "testissuer" } }
    ]
  }
  haml :index
end
And in the view:
/ index.haml
%select
  - @json[:data].each do |data_item|
    %option{ value: data_item[:id] }
      = data_item[:attributes][:name]
This way, you don't have to hard-code the number of option tags in the template, which would make it more complicated.
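If you'd rather pass the data as locals, as the question does, the same loop works. A sketch (the items local and the symbolize_names option are my choices, not from the question):
# app.rb
get '/' do
  parsed = JSON.parse(curl_resp, symbolize_names: true)  # curl_resp as in the question
  haml :index, locals: { items: parsed[:data] }
end

/ index.haml
%select
  - items.each do |data_item|
    %option{ value: data_item[:id] }= data_item[:attributes][:name]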
I have a Perl hash that is obtained from parsing JSON. The JSON could be anything a user-defined API could generate. The goal is to obtain a date/time string and determine whether that date/time is out of bounds according to a user-defined threshold. The only issue I have is that Perl seems a bit cumbersome when dealing with hash key/subkey iteration. How can I look through all the keys and determine if a key or subkey exists anywhere in the hash?
I have read many threads on Stack Overflow, but nothing that exactly meets my needs. I only started Perl last week, so I may be missing something... Let me know if that's the case.
Below is the "relevant" code/subs. For all code see: https://gitlab.com/Jedimaster0/check_http_freshness
use warnings;
use strict;
use LWP::UserAgent;
use Getopt::Std;
use JSON::Parse 'parse_json';
use JSON::Parse 'assert_valid_json';
use DateTime;
use DateTime::Format::Strptime;
# Verify the content-type of the response is JSON
eval {
    assert_valid_json($response->content);
};
if ($@) {
    print "[ERROR] Response isn't valid JSON. Please verify source data. \n$@";
    exit EXIT_UNKNOWN;
} else {
    # Convert the JSON data into a Perl hashref
    $jsonDecoded = parse_json($response->content);
    if ($verbose) { print "[SUCCESS] JSON FOUND -> ", $response->content, "\n"; }
    if (defined $jsonDecoded->{$opts{K}}) {
        if ($verbose) { print "[SUCCESS] JSON KEY FOUND -> ", $opts{K}, ": ", $jsonDecoded->{$opts{K}}, "\n"; }
        NAGIOS_STATUS(DATETIME_DIFFERENCE(DATETIME_LOOKUP($opts{F}, $jsonDecoded->{$opts{K}})));
    } else {
        print "[ERROR] Retrieved JSON does not contain any data for the specified key: $opts{K}\n";
        exit EXIT_UNKNOWN;
    }
}
sub DATETIME_LOOKUP {
    my $dateFormat   = $_[0];
    my $dateFromJSON = $_[1];
    my $strp = DateTime::Format::Strptime->new(
        pattern   => $dateFormat,
        time_zone => $opts{z},
        on_error  => sub { print "[ERROR] INVALID TIME FORMAT: $dateFormat OR TIME ZONE: $opts{z} \n$_[1] \n"; HELP_MESSAGE(); exit EXIT_UNKNOWN; },
    );
    my $dt = $strp->parse_datetime($dateFromJSON);
    if (defined $dt) {
        if ($verbose) { print "[SUCCESS] Time formatted using -> $dateFormat\n", "[SUCCESS] JSON date converted -> $dt $opts{z}\n"; }
        return $dt;
    } else {
        print "[ERROR] DATE VARIABLE IS NOT DEFINED. Pattern or timezone incorrect.";
        exit EXIT_UNKNOWN;
    }
}

# Subtract JSON date/time from now and return delta
sub DATETIME_DIFFERENCE {
    my $dateInitial = $_[0];
    my $deltaDate;
    # Convert to UTC for standardization of computations and it's just easier to read when everything matches.
    $dateInitial->set_time_zone('UTC');
    $deltaDate = $dateNowUTC->delta_ms($dateInitial);
    if ($verbose) { print "[SUCCESS] (NOW) $dateNowUTC UTC - (JSON DATE) $dateInitial ", $dateInitial->time_zone->short_name_for_datetime($dateInitial), " = ", $deltaDate->in_units($opts{u}), " $opts{u} \n"; }
    return $deltaDate->in_units($opts{u});
}
Sample Data
{
    "localDate":"Wednesday 23rd November 2016 11:03:37 PM",
    "utcDate":"Wednesday 23rd November 2016 11:03:37 PM",
    "format":"l jS F Y h:i:s A",
    "returnType":"json",
    "timestamp":1479942217,
    "timezone":"UTC",
    "daylightSavingTime":false,
    "url":"http:\/\/www.convert-unix-time.com?t=1479942217",
    "subkey":{
        "altTimestamp":1479942217,
        "altSubkey":{
            "thirdTimestamp":1479942217
        }
    }
}
[SOLVED]
I have used the answer that @HåkonHægland provided. Here are the code changes below. Using the flatten module, I can use any input string that matches the JSON keys. I still have some work to do, but you can see the issue is resolved. Thanks @HåkonHægland.
use warnings;
use strict;
use Data::Dumper;
use LWP::UserAgent;
use Getopt::Std;
use JSON::Parse 'parse_json';
use JSON::Parse 'assert_valid_json';
use Hash::Flatten qw(:all);
use DateTime;
use DateTime::Format::Strptime;
# Verify the content-type of the response is JSON
eval {
    assert_valid_json($response->content);
};
if ($@) {
    print "[ERROR] Response isn't valid JSON. Please verify source data. \n$@";
    exit EXIT_UNKNOWN;
} else {
    # Convert the JSON data into a Perl hashref, then flatten it
    my $jsonDecoded = parse_json($response->content);
    my $flatHash = flatten($jsonDecoded);
    if ($verbose) { print "[SUCCESS] JSON FOUND -> ", Dumper($flatHash), "\n"; }
    if (defined $flatHash->{$opts{K}}) {
        if ($verbose) { print "[SUCCESS] JSON KEY FOUND -> ", $opts{K}, ": ", $flatHash->{$opts{K}}, "\n"; }
        NAGIOS_STATUS(DATETIME_DIFFERENCE(DATETIME_LOOKUP($opts{F}, $flatHash->{$opts{K}})));
    } else {
        print "[ERROR] Retrieved JSON does not contain any data for the specified key: $opts{K}\n";
        exit EXIT_UNKNOWN;
    }
}
Example:
./check_http_freshness.pl -U http://bastion.mimir-tech.org/json.html -K result.creation_date -v
[SUCCESS] JSON FOUND -> $VAR1 = {
    'timestamp' => '20161122T200649',
    'result.data_version' => 'data_20161122T200649_data_news_topics',
    'result.source_version' => 'kg_release_20160509_r33',
    'result.seed_version' => 'seed_20161016',
    'success' => 1,
    'result.creation_date' => '20161122T200649',
    'result.data_id' => 'data_news_topics',
    'result.data_tgz_name' => 'data_news_topics_20161122T200649.tgz',
    'result.source_data_version' => 'seed_vtv: data_20161016T102932_seed_vtv',
    'result.data_digest' => '6b5bf1c2202d6f3983d62c275f689d51'
};
[SUCCESS] JSON KEY FOUND -> result.creation_date: 20161122T200649
[SUCCESS] Time formatted using -> %Y%m%dT%H%M%S
[SUCCESS] JSON date converted -> 2016-11-22T20:06:49 UTC
[SUCCESS] (NOW) 2016-11-26T19:02:15 UTC - (JSON DATE) 2016-11-22T20:06:49 UTC = 94 hours
[CRITICAL] Delta hours (94) is >= (24) hours. Data is stale.
You could try using Hash::Flatten. For example:
use Hash::Flatten qw(flatten);
my $json_decoded = parse_json($json_str);
my $flat = flatten( $json_decoded );
say "found" if grep /(?:^|\.)\Q$key\E(?:\.?|$)/, keys %$flat;
You can use Data::Visitor::Callback to traverse the data structure. It lets you define callbacks for different kinds of data types inside your structure. Since we're only looking at a hash it's relatively simple.
The following program has a predefined list of keys to find (those would be user input in your case). I converted your example JSON to a Perl hashref and included it in the code because the conversion is not relevant. The program visits every hashref in this data structure (including the top level) and runs the callback.
Callbacks in Perl are code references. These can be created in two ways. We're doing the anonymous subroutine (sometimes called lambda function in other languages). The callback gets passed two arguments: the visitor object and the current data substructure.
We'll iterate over all the keys we want to find and simply check if they exist in the current data substructure. If we see one, we count its existence in the %seen hash. Using a hash to store things we have seen is a common idiom in Perl.
We're using a postfix if here, which is convenient and easy to read. %seen is a hash, so we access the value behind the $key with $seen{$key}, while $data is a hash reference, so we use the dereferencing operator -> to access the value behind $key with $data->{$key}.
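A tiny illustration of that difference:
my %seen = ( foo => 1 );    # a plain hash
my $data = { foo => 1 };    # a hash reference
print $seen{foo};           # plain hash: no arrow
print $data->{foo};         # hashref: dereference with ->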
The callback needs us to return the $data again so it continues. The last line is just there, it's not important.
I've used Data::Printer to output the %seen hash because it's convenient. You can also use Data::Dumper if you want. In production, you will not need that.
use strict;
use warnings;
use Data::Printer;
use Data::Visitor::Callback;
my $from_json = {
    "localDate"          => "Wednesday 23rd November 2016 11:03:37 PM",
    "utcDate"            => "Wednesday 23rd November 2016 11:03:37 PM",
    "format"             => "l jS F Y h:i:s A",
    "returnType"         => "json",
    "timestamp"          => 1479942217,
    "timezone"           => "UTC",
    "daylightSavingTime" => 0,    # this was false, I used 0 because that's a non-true value
    "url"                => "http:\/\/www.convert-unix-time.com?t=1479942217",
    "subkey"             => {
        "altTimestamp" => 1479942217,
        "altSubkey"    => {
            "thirdTimestamp" => 1479942217
        }
    }
};
my @keys_to_find = qw(timestamp altTimestamp thirdTimestamp missingTimestamp);
my %seen;
my $visitor = Data::Visitor::Callback->new(
    hash => sub {
        my ( $visitor, $data ) = @_;
        foreach my $key (@keys_to_find) {
            $seen{$key}++ if exists $data->{$key};
        }
        return $data;
    },
);
$visitor->visit($from_json);
p %seen;
The program outputs the following. Note this is not a Perl data structure. Data::Printer is not a serializer, it's a tool to make data human readable in a convenient way.
{
    altTimestamp     1,
    thirdTimestamp   1,
    timestamp        1
}
Since you also wanted to constrain the input, here's an example of how to do that. The following program is a modification of the one above. It lets you give a set of different constraints for every required key.
I've done that by using a dispatch table. Essentially, that's a hash that contains code references. Kind of like the callbacks we use for the Visitor.
The constraints I've included are doing some things with dates. An easy way to work with dates in Perl is the core module Time::Piece. There are lots of questions around here about various date things where Time::Piece is the answer.
I've only done one constraint per key, but you could easily include several checks in those code refs, or make a list of code refs and put them in an array ref (keys => [ sub(), sub(), sub() ]) and then iterate that later, as sketched below.
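A sketch of that array-ref variant (hypothetical; not part of the program below):
use Time::Seconds;    # for ONE_DAY, as in the program below

my %constraints = (
    timestamp => [
        sub { $_[0] =~ /^\d+$/ },          # looks like an epoch value
        sub { $_[0] > time - ONE_DAY },    # not older than one day
    ],
);

# a key passes only if every coderef in its list returns true
my $value  = 1479942217;
my $passed = !grep { !$_->($value) } @{ $constraints{timestamp} };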
In the visitor callback we are now also keeping track of the keys that have %passed the constraints check. We're calling the coderef with $coderef->($arg). If a constraint check returns a true value, it gets noted in the hash.
use strict;
use warnings;
use Data::Printer;
use Data::Visitor::Callback;
use Time::Piece;
use Time::Seconds; # for ONE_DAY
my $from_json = { ... }; # same as above
# prepare one of the constraints
# where I'm from, Christmas eve is considered Christmas
my $christmas = Time::Piece->strptime('24 Dec 2016', '%d %b %Y');
# set up the constraints per required key
my %constraints = (
    timestamp => sub {
        my ($epoch) = @_;
        # not older than one day
        return $epoch < time && $epoch > time - ONE_DAY;
    },
    altTimestamp => sub {
        my ($epoch) = @_;
        # epoch value should be an even number
        return $epoch % 2 == 0;
    },
    thirdTimestamp => sub {
        my ($epoch) = @_;
        # before Christmas 2016
        return $epoch < $christmas;
    },
);
my %seen;
my %passed;
my $visitor = Data::Visitor::Callback->new(
    hash => sub {
        my ( $visitor, $data ) = @_;
        foreach my $key (keys %constraints) {
            if ( exists $data->{$key} ) {
                $seen{$key}++;
                $passed{$key}++ if $constraints{$key}->( $data->{$key} );
            }
        }
        return $data;
    },
);
$visitor->visit($from_json);
p %passed;
The output this time is:
{
    thirdTimestamp   1,
    timestamp        1
}
If you want to learn more about the dispatch tables, take a look at chapter two of the book Higher Order Perl by Mark Jason Dominus which is legally available for free here.
Which would be the best and most flexible process for creating formatted PDFs of Avery labels on a Linux machine with Perl?
The labels need to include images and will have formatting similar to spanned rows and columns in an HTML table. An example would be several rows of text on the left-hand side and an image on the right-hand side that spans the text.
These are my thoughts but if you have additional ideas please let me know.
perl to PDF with PDF::API2
perl to PS with ??? -> PS to PDF with ???
perl to HTML w/ CSS formatting -> HTML to PDF with wkhtmltopdf
Has anybody done this and have any pointers, examples or links that may be of assistance?
Thank You,
~Donavon
They are all viable options.
I found wkhtmltopdf to be too resource intensive and slow. If you do want to go down that route there are existing html templates already which can be found by a quick google search.
PDF::API2 performs very well, and I run it on a server system with no problems. Here's an example script I use for laying out elements in a grid format:
#!/usr/bin/env perl
use strict 'vars';
use FindBin;
use PDF::API2;

# Min usage, expects blank.pdf to exist in same location
render_grid_pdf(
    labels        => ['One', 'Two', 'Three', 'Four'],
    cell_width    => 200,
    cell_height   => 50,
    no_of_columns => 2,
    no_of_rows    => 2,
);

# Advanced usage
render_grid_pdf(
    labels        => ['One', 'Two', 'Three', 'Four'],
    cell_width    => 200,
    cell_height   => 50,
    no_of_columns => 2,
    no_of_rows    => 2,
    font_name     => "Helvetica-Bold",
    font_size     => 12,
    template      => "blank.pdf",
    save_as       => "my_labels.pdf",
    # Manually set coordinates to start printing
    page_offset_x => 20,    # Acts as a left margin
    page_offset_y => 600,
);

sub render_grid_pdf {
    my %args = @_;

    # Print data
    my $labels = $args{labels} || die "Labels required";

    # Template, outfile and labels
    my $template = $args{template} || "$FindBin::Bin/blank.pdf";
    my $save_as  = $args{save_as}  || "$FindBin::Bin/out.pdf";

    # Layout properties
    my $no_of_columns = $args{no_of_columns} || die "Number of columns required";
    my $no_of_rows    = $args{no_of_rows}    || die "Number of rows required";
    my $cell_width    = $args{cell_width}    || die "Cell width required";
    my $cell_height   = $args{cell_height}   || die "Cell height required";
    my $font_name     = $args{font_name}     || "Helvetica-Bold";
    my $font_size     = $args{font_size}     || 12;

    # Note: PDF::API2 uses Cartesian coordinates, (0,0) being bottom-left.
    # These offsets are used to set the print reference to top-left to make
    # things easier to manage
    my $page_offset_x = $args{page_offset_x} || 0;
    my $page_offset_y = $args{page_offset_y} || $no_of_rows * $cell_height;

    # Open an existing PDF file as a template
    my $pdf = PDF::API2->open("$template");

    # Add a built-in font to the PDF
    my $font = $pdf->corefont($font_name);
    my $page = $pdf->openpage(1);

    # Add some text to the page
    my $text = $page->text();
    $text->font($font, $font_size);

    # Print out labels
    my $current_label = 0;
    OUTERLOOP: for (my $row = 0; $row < $no_of_rows; $row++) {
        for (my $column = 0; $column < $no_of_columns; $column++) {

            # Calculate label x, y positions
            my $label_y = $page_offset_y - $row * $cell_height;
            my $label_x = $page_offset_x + $column * $cell_width;

            # Print label
            $text->translate($label_x, $label_y);
            $text->text($labels->[$current_label]);

            # Increment labels index
            $current_label++;

            # Exit condition
            if ($current_label >= scalar @{$labels}) {
                last OUTERLOOP;
            }
        }
    }

    # Save the PDF
    $pdf->saveas($save_as);
}
Great that you have found an answer you like.
Another option, which may or may not have suited you, would be to prepare the sheet that will be printed as labels as an OpenOffice/LibreOffice document, with pictures, layout, and non-variant text (and you can do all your test runs through OpenOffice/LibreOffice).
Then:
use OpenOffice::OODoc;
then: read your data from a database
then:
my $document = odfDocument(
    file          => "$outputFilename",
    create        => "text",
    template_path => $myTemplateDir );
then:
for (my $r = 0; $r < $NumOfTableRows; $r++ ) {
    for (my $c = 0; $c < $NumOfTableCols; $c++) {
        :
        $document->cellValue($theTableName, $r, $c, $someText);
        # test: was it written properly?
        my $writtenTest = $document->cellValue($theTableName, $r, $c);
        chomp $writtenTest;
        if ($someText ne $writtenTest) {
            :
        }
    }
}
then:
$document->save($outputFilename );
# save (convert to) a pdf
# -f format;
# -n no start new listener; use existing
# -T timeout to connect to its *OWN* listener (only);
# -e exportFilterOptions
`unoconv -f pdf -n -T 60 -e PageRange=1-2 $outputFilename `;
# I remove the open/libreoffice doc, else clutter and confusion
`rm $outputFilename `;
As a quick overview of the practical issues:
name the layout tables
place your nice, correct OpenOffice/LibreOffice doc in, say, "/usr/local/lib/site_perl/myNewTemplateDir/". You will need this directory (I think it is the default, but I pass it anyway as $myTemplateDir)
gotcha: the module's routines that wait for OpenOffice/LibreOffice to start (for the unoconv listener to start) do NOT work; unoconv will still fail after they say it is ready. I create dummy PDFs until one actually works, i.e. exists and has a non-zero size (see the sketch below).
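A sketch of that retry loop, with filenames assumed:
(my $pdfName = $outputFilename) =~ s/\.\w+$/.pdf/;
for my $try (1 .. 10) {
    system("unoconv -f pdf -n -T 60 $outputFilename");
    last if -s $pdfName;    # stop once the PDF exists and is non-empty
    sleep 2;                # give the listener a moment, then try again
}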