JSON::XS under mod_perl fails with POST requests

I am using the default install of Apache and mod_perl on Ubuntu 16.04.1 LTS. I have reproduced this with the default JSON::XS, and also after updating to the latest from CPAN (JSON-XS-3.02).
The code below works in all cases if I am not using mod_perl.
The script and HTML below work when using Perl via mod_cgi with both POST and GET requests.
If, however, I am using mod_perl and I use a POST (as in the HTML provided), it fails: "Hello" does not print, and I get the following error in my Apache log file:
Usage: JSON::XS::new(klass).
If I pass the same parameter(s) via a GET method, the script works fine.
test2.pl
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use JSON::XS;
my $q = new CGI();
print $q->header(-type => 'text/plain');
my $action = $q->param('a');
my $json_str = '{"foo":"bar"}';
my $pscalar = JSON::XS->new->utf8->decode($json_str);
print "Hello";
exit 1;
HTML to call the above (named test2.pl on the server)
<html>
<body>
<form action="test2.pl" method="POST">
<input type="text" name="a"/>
<button type="submit">Submit</button>
</form>
</body>
</html>

OK, so this was a rather wild goose chase: analyzing Apache core dumps and stack traces, fixing bugs that weren't really there... Long story short:
I was trying to add an include directory to my perl by using
PerlSwitches -I/usr/local/lib/site_perl/my_new_directory
As part of that I added
PerlOptions +Parent
so that I would get a new interpreter for each virtual host, making my -I effective for only one virtual host at a time.
I had added those flags before I enabled mod_perl, so when I enabled mod_perl, it just never worked.
By removing the PerlOptions +Parent things started working as expected.
As a side note, it appears +Parent makes things wonky in general.
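For context, a minimal sketch of the kind of mod_perl configuration involved (the host, paths, and directory block are placeholders, not the original poster's exact config):
# in the Apache/mod_perl configuration (e.g. a vhost file) -- paths are hypothetical
PerlSwitches -I/usr/local/lib/site_perl/my_new_directory
# PerlOptions +Parent    <-- removing this line is what fixed the POST/JSON::XS failure
<VirtualHost *:80>
    ServerName example.com
    <Directory /var/www/html>
        SetHandler perl-script
        PerlResponseHandler ModPerl::Registry
        Options +ExecCGI
        PerlOptions +ParseHeaders
    </Directory>
</VirtualHost>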

Related

Extracting the outputs/results from an executed .pexe file

My goal is to convert a C++ program into a .pexe file in order to execute it later on a remote computer. The .pexe file will contain some mathematical formulas or functions to be calculated on a remote computer, so I'll basically be using the computational power of the remote computer. For all this I'll be using the nacl_sdk with the Pepper library, and I would be grateful if someone could clarify some things for me:
Is it possible to save the outputs of the executed .pexe file on the remote computer into a file? If so, how, and which file formats are supported?
Is it possible to send the outputs of the executed .pexe file on the remote computer automatically to the host computer? If so, how?
Do I have to install anything for that to work on the remote computer?
Any suggestion will be appreciated.
From what I've tried, it seems like you can't capture the stuff that your pexe writes to stdout - it just goes to the stdout of the browser. (It took me hours to realize that it does go somewhere - I followed a bad tutorial that had me believe the pexe's stdout was going to be posted to the JavaScript side, and I was wondering why it "did nothing".)
I'm currently working on porting my stuff to .pexe as well, and it turned out to be quite simple, but that has to do with the way I write my programs:
I write my (C++) programs such that all code-parts read inputs only from an std::istream object and write their outputs to some std::ostream object. Then I just pass std::cin and std::cout to the top-level call and can use the program interactively in the shell. But then I can easily swap out the top-level call to use an std::ifstream and std::ofstream to use the program for batch-processing (without pipes from cat and redirecting to files, which can be troublesome under some circumstances).
Since I write my programs like that, I can just implement the message handler like
class foo : public pp::Instance {
  // ... ctor, dtor, ...
  virtual void HandleMessage(const pp::Var& msg) override {
    std::stringstream i, o;
    i << msg.AsString();
    toplevelCall(i, o);
    PostMessage(o.str());
  }
};
so the data I get from the browser is put into a stringstream, which the rest of the code can use for inputs. It gets another stringstream where the rest of the code can write its outputs to. And then I just send that output back to the browser. (Downside is you have to wait for the program to finish before you get to see the result - you could derive a class from ostream and have the << operator post to the browser directly... nacl should come with a class that does that - I don't know if it actually does...)
On the HTML/JS side, you can then have a textarea and a pre (which I like to call stdin and stdout ;-) ) and a button which posts the content of the textarea to the pexe - and an event handler that writes the messages from the pexe to the pre, like this:
<embed id='pnacl' type='application/x-pnacl' src='manifest.nmf' width='0' height='0'/>
<textarea id="stdin">Type your input here...</textarea>
<pre id='stdout' width='80' height='25'></pre>
<script>
var pnacl = document.getElementById('pnacl');
var stdout = document.getElementById('stdout');
var stdin = document.getElementById('stdin');
pnacl.addEventListener('message', function(ev){stdout.textContent += ev.data;});
</script>
<button onclick="pnacl.postMessage(stdin.value);">Submit</button>
Congratulations! Your program now runs in the browser!
I am not through with porting my compilers, but it seems like this would even work for stuff that uses flex & bison (you only have to copy FlexLexer.h to the include directory of the pnacl sdk and ignore the warnings about the "register" storage class specifier :-) ).
Are you using the .pexe in a browser? That's the usual case.
I recommend using nacl_io to emulate POSIX in the browser (also look at file_io). This will allow you to save files locally and retrieve them, in any format you fancy.
To send the output use the browser's usual capabilities such as XMLHttpRequest. You need PNaCl to talk to JavaScript for this, you may want to look at some of the examples.
A regular web server will do, it really depends on what you're doing.

Understanding JSON-RPC in Perl

I am trying to understand the concept of JSON-RPC and its Perl implementation. Though I can find a lot of examples for Python/Java, I find surprisingly few examples for it in Perl.
I am following this example but am not sure it is complete. The example I had in mind was to add 2 integers. Now I have a very basic HTML page set up, like so:
<html>
<body>
<input type="text" name="num1"><br>
<input type="text" name="num2"><br>
<button>Add</button>
</body>
</html>
Next, based on the example above, I have 3 files:
test1.pl
# Daemon version
use JSON::RPC::Server::Daemon;
# see documentation at:
# https://metacpan.org/pod/distribution/JSON-RPC/lib/JSON/RPC/Legacy.pm
my $server = JSON::RPC::Server::Daemon->new(LocalPort => 8080);
$server->dispatch({'/test' => 'myApp'});
$server->handle();
test2.pl
#!/usr/bin/perl
use JSON::RPC::Client;
my $client = new JSON::RPC::Client;
my $uri = 'http://localhost:8080/test';
my $obj = {
    method => 'sum', # or 'MyApp.sum'
    params => [10, 20],
};
my $res = $client->call( $uri, $obj );
if ($res) {
    if ($res->is_error) {
        print "Error : ", $res->error_message;
    } else {
        print $res->result;
    }
} else {
    print $client->status_line;
}
myApp.pl
package myApp;

# Optionally, for the :Public and :Private attributes:
use base qw(JSON::RPC::Procedure);

sub sum : Public(a:num, b:num) {
    my ($s, $obj) = @_;
    return $obj->{a} + $obj->{b};
}
1;
While I understand what these files individually do, I am at a complete loss when it comes to combining them and making them work together.
My questions are as follows:
Does the button in the HTML page come inside a <form> tag (like we would normally do in a CGI-based program)? If yes, what file does that call? If no, then how do I pass the values to be added?
What is the order of execution of the 3 Perl files? Which one calls which one? How is the flow of execution?
When I tried to run the Perl files from the CLI, i.e. using ./test2.pl, I got the following error: Error 301 Moved Permanently. What moved permanently? Which file was it trying to access? I tried running the files from within /var/www/html and /var/www/html/test.
Some help in understanding the nuances of this would really be appreciated. Thanks in advance.
Does the button in the HTML page come inside a <form> tag (like we would normally do in a CGI-based program)? If yes, what file does that call? If no, then how do I pass the values to be added?
HTML has nothing at all to do with JSON-RPC. While the RPC call is done via an HTTP POST request, if you're doing that from the browser, you'll need to use XMLHttpRequest (i.e. AJAX). Unlike an HTML form post, the Content-Type: header will need to be something specific to JSON-RPC (e.g. application/json or similar), and you'll need to encode your form data via JSON.stringify and correctly construct the JSON-RPC "envelope", including the id, jsonrpc, method and params properties.
Rather than doing this by hand you might use a purpose-built JSON-RPC JavaScript client like the jQuery-JSONRPC plugin (there are many others) -- although the protocol is so simple that implementations usually are less than 20 lines of code.
From the jQuery-JSONRPC documentation, you'd set up the connection like this:
$.jsonRPC.setup({
  endPoint: '/ENDPOINT-ROUTE-GOES-HERE'
});
and you'd call the server-side method like this:
$.jsonRPC.request('sum', {
  params: [YOURNUMBERINPUTELEMENT1.value, YOURNUMBERINPUT2.value],
  success: function(result) {
    /* Do something with the result here */
  },
  error: function(result) {
    /* Result is an RPC 2.0 compatible response object */
  }
});
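For reference (my addition, not from the original answer), the request and response envelopes that end up on the wire for the sum example would look roughly like this, assuming the server speaks JSON-RPC 2.0; the legacy JSON::RPC modules use a slightly older envelope, but the idea is the same:
Request:  {"jsonrpc": "2.0", "id": 1, "method": "sum", "params": [10, 20]}
Response: {"jsonrpc": "2.0", "id": 1, "result": 30}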
What is the order of execution of the 3 Perl files? Which one calls which one? How is the flow of execution?
You'll likely only need test2.pl for testing. It's an example implementation of a JSON-RPC client. You likely want your client to run in your web browser (as described above). The client JavaScript will make an HTTP POST request to wherever test1.pl is serving content (e.g. http://localhost:8080).
Or, if you want to keep your code as HTML<-->CGI, then you'll need to make JSON-RPC client calls from within your Perl CGI server-side code (which seems silly if it's on the same machine).
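As a rough sketch of that second option (my illustration, not part of the original example; it assumes the daemon from test1.pl is running on localhost:8080 and that the form fields are named num1 and num2 as in the HTML above):
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use JSON::RPC::Client;

my $q = CGI->new;
print $q->header(-type => 'text/plain');

# Read the two numbers posted by the HTML form.
my $num1 = $q->param('num1') || 0;
my $num2 = $q->param('num2') || 0;

# Forward them to the JSON-RPC daemon started by test1.pl.
my $client = JSON::RPC::Client->new;
my $res = $client->call(
    'http://localhost:8080/test',
    { method => 'sum', params => [ $num1 + 0, $num2 + 0 ] },
);

if ($res) {
    print $res->is_error ? "Error: " . $res->error_message : $res->result;
} else {
    print $client->status_line;
}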
When test1.pl calls dispatch, the MyApp module will be loaded.
Then, when test1.pl calls handle, the sum function in the MyApp package will be called.
The JSON::RPC::Server module takes care of marshalling from JSON-RPC to perl datastructures and back again around the call to handle. die()ing in sum should result in a JSON-RPC exception being transmitted to the calling client, rather than death of the test1.pl script.
When I tried to run the Perl files from the CLI, i.e. using ./test2.pl, I got the following error: Error 301 Moved Permanently. What moved permanently? Which file was it trying to access? I tried running the files from within /var/www/html and /var/www/html/test.
This largely depends on the configuration of your machine. There's nothing obvious (in your code) to suggest that a 301 Moved Permanently would be issued in response to a valid JSON-RPC request.

Retrieving HTTP URLs using Perl scripting

I'm trying to save a whole web page on my system as a .html file and then parse that file to find some tags and use them.
I'm able to save/parse http://<url>, but not able to save/parse https://<url>. I'm using Perl.
I'm using the following code to save HTTP and it works fine but doesn't work for HTTPS:
use strict;
use warnings;
use LWP::Simple qw($ua get);
use LWP::UserAgent;
use LWP::Protocol::https;
use HTTP::Cookies;
sub main
{
    my $ua = LWP::UserAgent->new();
    my $cookies = HTTP::Cookies->new(
        file     => "cookies.txt",
        autosave => 1,
    );
    $ua->cookie_jar($cookies);
    $ua->agent("Google Chrome/30");
    #$ua->ssl_opts( SSL_ca_file => 'cert.pfx' );
    $ua->proxy('http','http://proxy.com');
    my $response = $ua->get('http://google.com');
    #$ua->credentials($response, "", "usrname", "password");
    unless($response->is_success) {
        print "Error: " . $response->status_line;
    }
    # Let's save the output.
    my $save = "save.html";
    unless(open SAVE, '>' . $save) {
        die "\nCannot create save file '$save'\n";
    }
    # Without this line, we may get a
    # 'wide characters in print' warning.
    binmode(SAVE, ":utf8");
    print SAVE $response->decoded_content;
    close SAVE;
    print "Saved ",
        length($response->decoded_content),
        " bytes of data to '$save'.";
}
main();
Is it possible to parse an HTTPS page?
Always worth checking the documentation for the modules that you're using...
You're using modules from libwww-perl. That includes a cookbook. And in that cookbook, there is a section about HTTPS, which says:
URLs with https scheme are accessed in exactly the same way as with
http scheme, provided that an SSL interface module for LWP has been
properly installed (see the README.SSL file found in the libwww-perl
distribution for more details). If no SSL interface is installed for
LWP to use, then you will get "501 Protocol scheme 'https' is not
supported" errors when accessing such URLs.
The README.SSL file says this:
As of libwww-perl v6.02 you need to install the LWP::Protocol::https
module from its own separate distribution to enable support for
https://... URLs for LWP::UserAgent.
So you just need to install LWP::Protocol::https.
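In other words, once LWP::Protocol::https and its dependencies are installed, an https:// URL is fetched exactly like an http:// one. A minimal sketch (the URL is a placeholder):
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;   # LWP loads LWP::Protocol::https on demand for https URLs

my $ua = LWP::UserAgent->new;
my $response = $ua->get('https://example.com/');   # placeholder URL

die "Error: " . $response->status_line unless $response->is_success;
print $response->decoded_content;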
You need to have Crypt::SSLeay (https://metacpan.org/module/Crypt::SSLeay) for https links.
It provides SSL support for LWP.
Bit me in the ass with a project of my own.

Download files with Perl

I have updated my code to look like this. When I run it, though, it says it cannot find the specified link. Also, what is a good way to test that it is indeed connecting to the page?
#!/usr/bin/perl -w
use strict;
use LWP;
use WWW::Mechanize;
my $mech = WWW::Mechanize->new();
my $browser = LWP::UserAgent->new;
$browser->credentials(
    'Apache/2.2.3 (CentOS):80',
    'datawww2.wxc.com',
    '************' => '*************'
);
my $response = $browser->get(
    'http://datawww2.wxc.com/kml/echo/MESH_Max_180min/'
);
$mech->follow_link( n => 8);
(Original Post)
What is the best way to download small files with Perl?
I looked on CPAN and found lwp-download, but it seems to only download from a link you already have. I have a page with links that change every thirty minutes, with the date and time in the name, so they are never the same. Is there a built-in function I can use? Everyone on Google keeps saying to use Wget, but I was kind of wanting to stick with Perl if possible, just to help me learn it better while I program with it.
Also, there is a user name and password to log into the site. I already know how to access the site using Perl, but I thought that might change what I can use to download with.
As stated in a comment in your other question: here
You can use the same method to retrieve .csv files as .html, or any other text-based file for that matter.
#!/usr/bin/perl -w
use strict;
use LWP::Simple;
my $csv = get("http://www.spc.noaa.gov/climo/reports/last3hours_hail.csv")
or die "Could not fetch NWS CSV page.";
To log in, you may need to use WWW::Mechanize to fill out the web form (look at $mech->get(), $mech->submit_form(), and $mech->follow_link()), as in the sketch below.
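A hedged sketch of that Mechanize flow (the URL comes from your code above, but the form number, field names, and credentials are hypothetical; inspect the real login page to find the right ones):
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();

# If the site uses HTTP basic auth instead of a login form, call
# $mech->credentials('your_user', 'your_pass') before the get().
$mech->get('http://datawww2.wxc.com/kml/echo/MESH_Max_180min/');

# If the site presents an HTML login form, fill it in and submit it.
# The form number and field names below are hypothetical.
$mech->submit_form(
    form_number => 1,
    fields      => { username => 'your_user', password => 'your_pass' },
);

# Follow the 8th link on the resulting page, as in your original code.
$mech->follow_link( n => 8 );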
Basically, you need to fetch the page, parse it to get the URL, and then download the file.
Personally, I'd use HTML::TreeBuilder::XPath, write a quick XPath expression to go straight to the correct href attribute node, and then plug that into LWP.
use HTML::TreeBuilder::XPath;
my $tree = HTML::TreeBuilder::XPath->new;
$tree->parse({put page content here});
foreach ($tree->findnodes({put xpath expression here})) {
    {download the file}
}
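Putting those pieces together, here is a hedged, self-contained sketch; the XPath expression, the .kml filter, and the choice of the last link are assumptions about the page layout and should be adjusted to the real HTML (it also ignores the login question, which WWW::Mechanize or $ua->credentials would handle):
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple qw(get getstore);
use HTML::TreeBuilder::XPath;

# Directory listing page taken from the question; adjust as needed.
my $base = 'http://datawww2.wxc.com/kml/echo/MESH_Max_180min/';
my $html = get($base) or die "Could not fetch index page";

my $tree = HTML::TreeBuilder::XPath->new;
$tree->parse($html);
$tree->eof;

# Assumed XPath: take every href on the page and keep the .kml ones.
my @links = grep { /\.kml$/i } $tree->findvalues('//a/@href');
die "No matching links found" unless @links;

# Download the last matching link, relative to the base URL.
my $file = $links[-1];
getstore($base . $file, $file) == 200 or die "Download of $file failed";
print "Saved $file\n";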

How to get phpinfo() variables from php programmatically?

I am attempting to get a dependable (consistent across requests) list of "hidden" constants in PHP (as in, the client side won't know about them in most cases without hacking).
Some of the things I am interested in are the following:
./configure options.
I would also like the very first System value in phpinfo.
The loaded PHP modules (as shown in the Apache section)
The build date of PHP.
Registered PHP streams
Registered stream socket transports
Registered stream filters
How can I get either just a portion of the phpinfo() output or these values as a regular string? Note that it doesn't matter if there is markup included, but I don't want to parse the phpinfo() output, as that just seems really slow and surely there is a better way...
Here you go:
ini_get_all() or get_loaded_extensions() were the closest I could find
php_uname()
apache_get_modules()
phpversion() was the closest I could find
stream_get_wrappers()
stream_get_transports()
stream_get_filters()
See also get_defined_constants() and some more.
As Chacha102 mentioned, you can also use output control functions and parse the phpinfo():
ob_start();
phpinfo();
$variable = ob_get_contents();
ob_get_clean();
Due to the use of ob_get_clean() it won't mess up other output buffering levels you may be using.
Most of the stuff available from phpinfo() can be found in constants. Try looking through:
print_r(get_defined_constants());
Or the functions on this page: http://us.php.net/manual/en/ref.info.php. There are tons of functions to get information about specific extensions.
The following functions might be worth looking at:
ini_get() http://us.php.net/manual/en/function.ini-get.php
getenv() http://us.php.net/manual/en/function.getenv.php
get_cfg_var() http://us.php.net/manual/en/function.get-cfg-var.php
Maybe I am a bit late, but basically if you call the PHP binary programmatically from a shell, e.g.
php -i
then you can parse all the information required.