Display output in HTML format with Perl

I have a hash with some information (keys and values) in a Perl file. I want to display it in HTML output, and each displayed (key, value) pair should link to something. When I click the link, some related information should be shown.
Can anyone suggest how I can do that? Is this similar to creating a CGI file and using CGI.pm? I will update this question with more detail later.

Yes, you can use the excellent CGI module to render HTML content for you, even if you are not processing CGI forms (i.e. use the module only for output, rather than also for input processing):
use CGI;

my $q = CGI->new;

# %hash is assumed to already hold your data
my @html_list = map {
    $q->li($_ . ": " . $hash{$_})
} keys %hash;

print $q->ul(@html_list);
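Since each displayed pair should link to something, here is a minimal sketch of the same idea with anchors; the detail.pl target script and the example data are assumptions, not part of the question:
use strict;
use warnings;
use CGI;

my $q = CGI->new;
my %hash = (apple => 3, pear => 5);    # example data

# Each list item links to a (hypothetical) detail script for its key
my @html_list = map {
    $q->li($q->a({ -href => "detail.pl?key=$_" }, "$_: $hash{$_}"))
} sort keys %hash;

print $q->header,
      $q->start_html('My data'),
      $q->ul(@html_list),
      $q->end_html;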

Depending on the data you're trying to display, something like HTML::Table may be useful if you want to display it in tabular format and don't want the drudgery of assembling the appropriate HTML yourself.
For instance, you could do something like:
use HTML::Table;

my $table = HTML::Table->new(-columns => 2);
for my $key (sort keys %hash) {
    $table->addRow($key, $hash{$key});
}
$table->print;
Also, there is a free Beginning Perl book available online, which has a chapter devoted to CGI scripts, along with a lot of other useful information.
If this is more than just a simple one-off script, you might also wish to consider using one of the many Perl web frameworks like Dancer, Catalyst, Mojo etc.

Related

How to pass back html and logic information after an ajax call with CI

I have a CI and jQuery based project. I've got a site searching my db. It consists of a jQuery UI accordion. One section contains input fields for an advanced search, and the other section is used to display an HTML table with results.
The search parameters from the first section are sent to the server using an AJAX POST. This is crunched by the server, and either an HTML-styled error message or an HTML table with results (and later some other stuff, such as how many results were found, how much time was consumed, etc.) is returned.
Back on the client, jQuery must be able to distinguish between the two. Best would be to be able to transmit another variable, 'search_success'. If 'search_success' is false, the error is prepended to section one above the input fields. Otherwise the HTML block is displayed in section two and jQuery opens section two.
Right now I'm returning plain HTML with a 0 or 1 prepended. This first char is chopped off by jQuery and used to distinguish between the two possible results. This is kind of ugly.
After reading this post about sending an array using JSON, I thought about addressing this problem with JSON.
I intended to build something like:
echo json_encode(array('search_success' => $search_success, 'html' => $html));
This would allow for nice structuring of the data. The problem now is, my 'html' is not a simple PHP variable but a view:
<?php
$template = array('table_open' => '<table id="table" data-url="'.base_url().'">');
$this->table->set_template($template);
$this->table->set_heading($table_header);
echo $this->table->generate($table);
?>
This view could also get a lot more complicated. Of course, I could abandon the CI MVC pattern and store the whole HTML in a PHP string, which I could transform to JSON with the above code. However, this would defeat the purpose of storing the whole HTML part in a view.
Is there a way to wrap my whole view in json without relinquishing my view architecture?
Or what approach would be more suitable to the problem?
Thanks, singultus
To bring this topic to an end, the answer is simple:
$json['html'] = $this->load->view('myfile', '', true); // 3rd param 'true'!
$json['other_stuff'] = $other_stuff;
echo json_encode($json);
See here, at the very end. This approach allows for a nicely structured response from the server.
All credit to @koala_dev!
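For completeness, the client side can then branch on the flag. A sketch, assuming hypothetical element IDs and controller URL:
// A sketch of the client side; element IDs and the POST URL are assumptions.
$.post('index.php/search/advanced', $('#search-form').serialize(), function (res) {
    if (res.search_success) {
        $('#results').html(res.html);                      // fill section two
        $('#accordion').accordion('option', 'active', 1);  // and open it
    } else {
        $('#search-section').prepend(res.html);            // error above the inputs
    }
}, 'json');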

Perl - Add and Modify HTML with pQuery

I'm just a hobbyist Perl programmer learning pQuery and using a local HTML file to aid the process. Here is what I have so far:
use strict;
use warnings;
use pQuery;
my $filename = 'learn.html';
my $file = pQuery($filename);
my $metadesc = pQuery("meta", $file)->eq(2);
my $title = $file->find('title');
my $h1 = $file->find('h1')->find('a');
my $h2 = $file->find('h2')->eq(0);
$title->html('New Title');
$h1->html('New Heading');
$h2->html('New Sub-Heading');
However, I've hit a bit of a wall and can't quite work out what to do next. What I'd like to do:
Modify the "Content" attribute of $metadesc;
Add a p inside a div immediately after $h2;
If it were jQuery, I would say 1. use the .attr() method to update the attributes of $metadesc and 2. use the insertAfter method.
But as the module says, it's under construction and "This module is still being written. The documented methods all work as documented (but may not be completed ports of their jQuery counterparts yet)." So those methods may not be implemented yet.
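If those methods are indeed missing, one fallback is to drop down to HTML::TreeBuilder (the module pQuery builds its DOM on), whose API is stable. A sketch, where the meta name and the new text are assumptions:
use strict;
use warnings;
use HTML::TreeBuilder;
use HTML::Element;

my $tree = HTML::TreeBuilder->new_from_file('learn.html');

# 1. Update the "content" attribute of the meta description
my $meta = $tree->look_down(_tag => 'meta', name => 'description');
$meta->attr(content => 'New description') if $meta;

# 2. Insert a div containing a p immediately after the first h2
if (my $h2 = $tree->look_down(_tag => 'h2')) {
    my $div = HTML::Element->new('div');
    my $p   = HTML::Element->new('p');
    $p->push_content('New paragraph text');
    $div->push_content($p);
    $h2->postinsert($div);
}

print $tree->as_HTML;
$tree->delete;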

Passing variables to a Perl script from HTML

I am trying to call a Perl script from my HTML page. The way I am trying to do it is to call the URL of the Perl script located on the server.
Here is the piece of code:
HTML:
var fname = "Bob";
var url='http://xxx.com:30000/cgi-bin/abc.pl?title=fname';
window.open(url,"_self");
The way I am trying to retrieve it in Perl is:
Perl:
print "$ARGV[0]\n";
Now, I have 3 questions:
1. I think this is the correct way to pass the variables, but I am not able to print the argument in Perl.
2. If I want to pass another variable, lname, how do I append it to the URL?
3. My window.open should open the output in the same window, since it uses the parameter _self. Still, it doesn't.
Could anybody point out the problems?
Thanks,
Buzz
No, @ARGV contains command-line arguments and will be empty here.
You need the CGI module
use warnings;
use strict;
use CGI;

my $query = CGI->new;
print $query->header('text/plain');   # headers must be sent before any output
print $query->param('title');
Edit:
Take a look at dan1111's answer on how to generate HTML and display it in the browser.
In addition to what Matteo said, a simple print statement is not enough to send some output to the browser.
Please see a recent answer I wrote giving a sample CGI script with output.
In regard to your other issues:
Variables are appended to a URL separated with &:
var url='http://xxx.com:30000/cgi-bin/abc.pl?title=fname&description=blah';
Based on this question, perhaps you should try window.location.href = url; instead (though that doesn't explain why your code isn't working).
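Note also that, as written, the URL sends the literal text fname rather than the variable's value; a sketch of building the URL from the JavaScript variables (lname here is hypothetical):
var fname = "Bob";
var lname = "Smith";
// Concatenate and URI-encode the values instead of hard-coding them
var url = 'http://xxx.com:30000/cgi-bin/abc.pl'
        + '?title=' + encodeURIComponent(fname)
        + '&description=' + encodeURIComponent(lname);
window.location.href = url;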
There are two different environments, and each passes variables in its own way. The command line passes arguments through @ARGV, and the browser (via the web server) passes variables through the environment, %ENV (e.g. $ENV{QUERY_STRING}). It doesn't matter what language you use; those are the channels you will have to employ.
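A minimal sketch of the two channels; the raw QUERY_STRING parsing is only an illustration, since CGI.pm does this (plus URL decoding) for you:
use strict;
use warnings;

if (@ARGV) {
    # Invoked from the command line: perl abc.pl Bob
    print "Command-line argument: $ARGV[0]\n";
}
elsif (defined $ENV{QUERY_STRING}) {
    # Invoked as a CGI script: abc.pl?title=Bob
    my %param = map { split /=/, $_, 2 } split /&/, $ENV{QUERY_STRING};
    print "Content-type: text/plain\n\n";
    print "title = ", $param{title} // '', "\n";
}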

Perl::Mechanize: running a simple crawler with a loop [multiple queries]

I'm currently ironing out a way to parse the data of a page: http://www.foundationfinder.ch/
I'd love to do it in Perl. Well, I am just musing about which is the best way to do the job.
I guess I am in front of a nice learning curve. ;) This task will give me some nice Perl lessons. At the moment it goes a bit over my head. ;-)
So here is a sample page:
... and since I thought I could find all 790 result pages within a certain range between Id=0 and Id=100000, I thought that I could go the way of a loop:
http://www.foundationfinder.ch/ShowDetails.php?Id=11233&InterfaceLanguage=&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=927&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=949&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=20011&InterfaceLanguage=1&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=10579&InterfaceLanguage=1&Type=Html
I thought I could go the Perl way, but I am not very sure: I was trying to use LWP::UserAgent on the same URLs [see below] with different query arguments, and I am wondering if LWP::UserAgent provides a way for us to loop through the query arguments? I am not sure that LWP::UserAgent has a method for us to do that. Well, I sometimes hear that it is easier to use Mechanize. But is it really easier!?
BTW, if I were going the PHP way, I could do it with cURL, couldn't I!?
Here is my approach: I tried to figure it out, and I dug deeper into the manpages and howtos. We can have a loop constructing the URLs and use cURL repeatedly.
As noted above, here we have some result pages:
http://www.foundationfinder.ch/ShowDetails.php?Id=11233&InterfaceLanguage=&Type=Html
http://www.foundationfinder.ch/ShowDetails.php?Id=927&InterfaceLanguage=1&Type=Html
Alternatively, we can add a request_prepare handler that computes and adds the query arguments before we send out the request.
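For illustration, a sketch of that request_prepare idea, where the handler rewrites each outgoing request's query string (the Id sequence here is an assumption):
use strict;
use warnings;
use LWP::UserAgent;

my $ua      = LWP::UserAgent->new;
my $next_id = 0;

# Rewrite the query string of every request just before it is sent
$ua->add_handler(request_prepare => sub {
    my ($request, $ua, $handler) = @_;
    $request->uri->query_form(
        Id                => $next_id++,
        InterfaceLanguage => 1,
        Type              => 'Html',
    );
    return;
});

for (1 .. 5) {
    my $res = $ua->get('http://www.foundationfinder.ch/ShowDetails.php');
    print $res->code, ' ', $res->request->uri, "\n";
}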
Again, here is what is aimed at: I want to parse the data, and afterwards I want to store it in a local MySQL database.
Should I define an extern_uid!?
And go like this:
for my $i (0..10000) {
    $ua->get('http://www.foundationfinder.ch/ShowDetails.php?Id=', id => 21, extern_uid => $i);
    # process reply
}
Well, but now I get stuck. I need help. Can I do the job like this!?
regards
zero
Don't do it like this. Use Live HTTP Headers (a Firefox plugin) or an equivalent to see what the JavaScript does behind the scenes while you select what you need from there to get to that page (with the table).
To get the data from the table, use HTML::TableExtract, or HTML::TreeBuilder::XPath if you want to use XPath.
If you do want to iterate over the queries, just build the URL in a variable:
my $url = 'http://www.foundationfinder.ch/ShowDetails.php?Id=' . $q . '&InterfaceLanguage=&Type=Html';
and increment $q as you go; make sure the page is valid before trying to load it with get.
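Putting it together, a sketch of the loop with WWW::Mechanize and HTML::TableExtract; the Id range and the row handling are assumptions:
use strict;
use warnings;
use WWW::Mechanize;
use HTML::TableExtract;

my $mech = WWW::Mechanize->new(autocheck => 0);   # don't die on 404s

for my $q (0 .. 100_000) {
    my $url = 'http://www.foundationfinder.ch/ShowDetails.php?Id=' . $q
            . '&InterfaceLanguage=&Type=Html';
    my $res = $mech->get($url);
    next unless $res->is_success;                 # skip Ids that do not exist

    my $te = HTML::TableExtract->new;
    $te->parse($mech->content);
    for my $table ($te->tables) {
        for my $row ($table->rows) {
            # process each row here, e.g. store it in MySQL via DBI
            print join("\t", map { $_ // '' } @$row), "\n";
        }
    }
}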

Saving HTML tables to a Database

I am trying to scrape an HTML table and save its data in a database. What strategies/solutions have you found to be helpful in approaching this problem?
I'm most comfortable with Java and PHP, but really a solution in any language would be helpful.
EDIT: For more detail, the UTA (Salt Lake's Bus system) provides bus schedules on its website. Each schedule appears in a table that has stations in the header and times of departure in the rows. I would like to go through the schedules and save the information in the table in a form that I can then query.
Here's the starting point for the schedules
It all depends on how well-formed the HTML you want to scrape is. If it's valid XHTML, you can simply use some XPath queries on it to get whatever you want.
Example of xpath in php: http://blogoscoped.com/archive/2004_06_23_index.html#108802750834787821
A helper class to scrape a table into an array: http://www.tgreer.com/class_http_php.html
There is a nice book about this topic: Spidering Hacks by Kevin Hemenway and Tara Calishain.
I've found that scripting languages are generally better suited for doing such tasks. I personally prefer Python, but PHP will work as well. Chopping, mincing and parsing strings in Java is just too much work.
I have tried screen-scraping before, but I found it to be very brittle, especially with dynamically-generated code.
I found a third-party DOM-parser and used it to navigate the source code with Regex-like matching patterns in order to find the data I needed.
I suggest trying to find out if the owners of the site have a published API (often web services) for retrieving data from their system. If not, then good luck to you.
If what you want is the table in CSV form, then you can use the following Python. For example, imagine you want to scrape forex quotes in CSV form from some site like fxoanda:
from BeautifulSoup import BeautifulSoup
import urllib

# Build the OANDA fxhistory query URL piece by piece
date_s = '&date1=01/01/08'
date_f = '&date=11/10/08'
fx_url = 'http://www.oanda.com/convert/fxhistory?date_fmt=us'
fx_url_end = '&lang=en&margin_fixed=0&format=CSV&redirected=1'
cur1, cur2 = 'USD', 'AUD'
fx_url = fx_url + date_f + date_s + '&exch=' + cur1 + '&exch2=' + cur1
fx_url = fx_url + '&expr=' + cur2 + '&expr2=' + cur2 + fx_url_end

# Fetch the page and pull the CSV data out of the <pre> block
data = urllib.urlopen(fx_url).read()
soup = BeautifulSoup(data)
data = str(soup.findAll('pre', limit=1))
data = data.replace('[<pre>', '').replace('</pre>]', '')

# Write the CSV to disk (edit the path first)
file_location = '/Users/location_edit_this/'
file_name = file_location + 'usd_aus.csv'
csv_file = open(file_name, "w")
csv_file.write(data)
csv_file.close()
Once you have it in this form, you can convert the data to any format you like.
At the risk of starting a shitstorm here on SO, I'd suggest that if the format of the table never changes, you could just about get away with using regular expressions to parse and capture the content you need.
pianohacker overlooked the HTML::TableExtract module, which was designed for exactly this sort of thing. You'd still need LWP to retrieve the table.
This would be by far the easiest with Perl, and the following CPAN modules:
http://metacpan.org/pod/HTML::Parser
http://metacpan.org/pod/LWP
http://metacpan.org/pod/DBD::mysql
http://metacpan.org/pod/DBI
CPAN being the main distribution mechanism for Perl modules, and accessible by running the following shell command, for example:
# cpan HTML::Parser
If you're on Windows, things will be more interesting, but you can still do it: http://www.perlmonks.org/?node_id=583586
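To sketch how those pieces could fit together for the schedule tables, using LWP::Simple and HTML::TableExtract (as suggested above) with DBI; the URL, the column headers, and the database schema below are placeholders, not the real UTA values:
use strict;
use warnings;
use LWP::Simple qw(get);
use HTML::TableExtract;
use DBI;

# Hypothetical schedule page; substitute the real UTA URL
my $html = get('http://example.com/uta-schedule.html')
    or die "Could not fetch the schedule page";

# Pick out the schedule table by its (assumed) column headers
my $te = HTML::TableExtract->new(headers => ['Station', 'Departure']);
$te->parse($html);

my $dbh = DBI->connect('dbi:mysql:database=uta', 'user', 'password',
                       { RaiseError => 1 });
my $sth = $dbh->prepare(
    'INSERT INTO schedule (station, departure) VALUES (?, ?)');

for my $table ($te->tables) {
    for my $row ($table->rows) {
        $sth->execute(@$row);   # one row per (station, departure) pair
    }
}
$dbh->disconnect;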