write into a csv file in multiple cells - csv

I am coding in Perl. How can I write multiple variables into a CSV file and put each one in a separate cell on the same line?
This is a part of my code:
#!/usr/bin/perl
use feature qw(say);
use strict;
use warnings;
use constant BUFSIZE => 6;
my $year += 1900;
my $input_file = 'path\ZONE0.txt';
my $outputfile = 'path\outputfile.csv';
open (my $BIN, "<:raw", $input_file) or die "can't open the file $input_file: $!";
my $buffer;
open(FH, '>>', $outputfile) or die $!;
while (1) {
    my $bytes_read = sysread $BIN, $buffer, BUFSIZE;
    die "Could not read file $input_file: $!" if !defined $bytes_read;
    last if $bytes_read <= 0;
    my @decimal = map { unpack "C", $_ } split //, $buffer;
    my $start   = $decimal[0];
    my $DevType = $decimal[1];
    my @hexDevType = sprintf("0x%x", $DevType);
    my @DevUID = ($decimal[5], $decimal[4], $decimal[3], $decimal[2]);
    my @hexDevUID = map { sprintf("0x%x", $_) } @DevUID;
    print FH $start, ' ', print FH $DevType, ' ', @hexDevUID, "\n";
}
close $BIN;
This results in putting all the variables next to each other in one cell, which is not what I want. Can you help me separate the variables?

CSV files don't have cells. I suspect you're opening the file in a spreadsheet program.
The secret of a CSV file is that the values are separated by commas. So you need to put commas between any values that you want to appear in separate cells in your spreadsheet.
It looks like your data is in @hexDevUID. The simplest way is to turn that into a comma-separated string using join():
join(',', @hexDevUID)
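You can then print all of the values in one go; for example, a sketch based on the variables already in your loop:
print FH join(',', $start, $DevType, @hexDevUID), "\n";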
But the more robust approach will be to use Text::CSV_XS.
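For instance, a minimal sketch of writing one record per line with Text::CSV_XS (the file name and row values below are placeholders, not taken from the question):
use strict;
use warnings;
use Text::CSV_XS;

# Minimal sketch: file name and row values are placeholders.
my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" })
    or die Text::CSV_XS->error_diag;

open my $fh, '>>', 'outputfile.csv' or die "Can't open outputfile.csv: $!";

my @row = (2, 1, '0xaa', '0xbb', '0xcc', '0xdd');
$csv->print($fh, \@row);    # handles separators and quoting for you

close $fh or die $!;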

Below is the OP's code, modified so that it does not use any CSV modules for output.
Error handling has been added for read errors and for reading an insufficient number of bytes.
use strict;
use warnings;
use feature 'say';
use constant BUFSIZE => 6;
my($buffer,$bytes_read);
my $infile = shift || 'path\ZONE0.txt';
my $outfile = 'path\outputfile.csv';
open my $in, '<:raw', $infile
or die "Can't open $infile: $!";
open my $out, '+>>', $outfile
or die "Can't open $outfile: $!";
do {
    $bytes_read = sysread $in, $buffer, BUFSIZE;
    die "Error: read from $infile: $!" unless defined $bytes_read;
    error_handler($bytes_read) unless $bytes_read == 6;
    my @decimal = map { ord } split //, $buffer;
    my ($start, $DevType) = @decimal[0,1];
    my @hexDevUID = map { sprintf("0x%02x", $_) } @decimal[5,4,3,2];
    say $out join(',', ($start, $DevType, @hexDevUID));
} while ( $bytes_read );

sub error_handler {
    my $bytes = shift;
    close $out;
    close $in;
    say "
Error: called error_handler(\$read_bytes)
Action: Emergency file closure to preserve data
Cause: Read insufficient $bytes bytes
" unless $bytes == 0;
    exit $bytes ? 1 : 0;
}
The loop can be rewritten using unpack as follows:
do {
    $bytes_read = sysread $in, $buffer, BUFSIZE;
    die "Error: read from $infile: $!" unless defined $bytes_read;
    error_handler($bytes_read) unless $bytes_read == 6;
    my ($start, $DevType, @devUID) = unpack('CCC4', $buffer);
    my @hexDevUID = reverse map { sprintf "0x%02x", $_ } @devUID;
    say $out join(',', ($start, $DevType, @hexDevUID));
} while ( $bytes_read );
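As a quick illustration of what the 'CCC4' template does, here is a self-contained check with an invented 6-byte buffer (the byte values are made up for the example):
use strict;
use warnings;

# Hypothetical 6-byte record: start byte, device type, then 4 UID bytes.
my $buffer = "\x01\x02\xDD\xCC\xBB\xAA";

# 'C' = one unsigned byte, 'C4' = four unsigned bytes.
my ($start, $DevType, @devUID) = unpack 'CCC4', $buffer;

print join(',', $start, $DevType,
    reverse map { sprintf '0x%02x', $_ } @devUID), "\n";
# prints: 1,2,0xaa,0xbb,0xcc,0xdd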

Related

How to read a csv using Perl?

I want to read a CSV using Perl, excluding the first row. Further, the col2 and col3 values need to be stored in another file, and the row that was read must be deleted.
Edit: The following code has worked. I just want the deletion part.
use strict;
use warnings;

my ($field1, $field2, $field3, $line);
my $file = 'D:\Patching_test\ptch_file.csv';

open( my $data, '<', $file ) or die;

while ( $line = <$data> ) {
    next if $. == 1;
    ( $field1, $field2, $field3 ) = split ',', $line;
    print "$field1 : $field2 : $field3 ";

    my $filename = 'D:\DB_Patch.properties';
    unlink $filename;

    open( my $sh, '>', $filename )
        or die "Could not open file '$filename' $!";
    print $sh "Patch_id=$field2\n";
    print $sh "Patch_Name=$field3";
    close($sh);
    close($data);
    exit 0;
}
The OP's problem is poorly presented for processing:
no input data sample provided
no desired output data presented
no example of the modified input file after processing
Based on the problem description, the following code is provided:
use strict;
use warnings;
use feature 'say';

my $input  = 'D:\Patching_test\ptch_file.csv';
my $output = 'D:\DB_Patch.properties';
my $temp   = 'D:\script_temp.dat';

open my $in, '<', $input
    or die "Couldn't open $input";
open my $out, '>', $output
    or die "Couldn't open $output";
open my $tmp, '>', $temp
    or die "Couldn't open $temp";

while ( <$in> ) {
    if ( $. == 1 ) {
        say $tmp $_;
    } else {
        my ($patch_id, $patch_name) = (split ',')[1,2];
        say $out "Patch_id=$patch_id";
        say $out "Patch_Name=$patch_name";
    }
}

close $in;
close $out;
close $tmp;

rename $temp, $input;

exit 0;
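Note that split ',' will misparse any field that itself contains a comma inside quotes. If that can happen in the input, a hedged alternative is to let a CSV parser do the splitting; the sketch below reuses the file names from above and assumes Text::CSV is available:
use strict;
use warnings;
use feature 'say';
use Text::CSV;

my $input  = 'D:\Patching_test\ptch_file.csv';
my $output = 'D:\DB_Patch.properties';

my $csv = Text::CSV->new({ binary => 1 }) or die Text::CSV->error_diag;

open my $in,  '<', $input  or die "Couldn't open $input: $!";
open my $out, '>', $output or die "Couldn't open $output: $!";

$csv->getline($in);    # discard the header row

while ( my $row = $csv->getline($in) ) {
    my ($patch_id, $patch_name) = @{$row}[1, 2];
    say $out "Patch_id=$patch_id";
    say $out "Patch_Name=$patch_name";
}

close $in;
close $out;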

How to start reading CSV from beginning again?

use Text::CSV_XS;
my $csv = Text::CSV_XS->new;
open my $fh, "test.csv" or die "test.csv: $!";
while (my $row = $csv->getline($fh)) {
    my @fields = @$row;
    if ($fields[0] eq "A1") {
        print "Found A1", "\n";
        last;
    }
}
# now start searching the CSV again
If I have gone through some of a CSV using Text::CSV_XS, how can I then start again from the beginning? Is there some way to return the pointer/window to the beginning of the file?
use Fcntl qw( SEEK_SET );
seek($fh, 0, SEEK_SET);
You could also just re-open the file.
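A minimal sketch of the rewind, using the same handle and parser object (this assumes there is no header row that would need skipping again after the rewind):
use strict;
use warnings;
use Fcntl qw( SEEK_SET );
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1 });
open my $fh, '<', 'test.csv' or die "test.csv: $!";

# First pass: stop at the row of interest.
while (my $row = $csv->getline($fh)) {
    last if $row->[0] eq "A1";
}

# Rewind the underlying handle and scan again from the top.
seek $fh, 0, SEEK_SET;
while (my $row = $csv->getline($fh)) {
    # process every record a second time
}

close $fh;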

Compare 2 huge CSV files and print the differences to another CSV file using Perl

I have 2 CSV files with multiple fields (approx. 30 fields) and a huge size (approx. 4 GB).
File1:
EmployeeName,Age,Salary,Address
Vinoth,12,2548.245,"140,North Street,India"
Vivek,40,2548.245,"140,North Street,India"
Karthick,10,10.245,"140,North Street,India"
File2:
EmployeeName,Age,Salary,Address
Vinoth,12,2548.245,"140,North Street,USA"
Karthick,10,10.245,"140,North Street,India"
Vivek,40,2548.245,"140,North Street,India"
I want to compare these 2 files and report the differences into another CSV file. In the above example, the details for employees Vivek and Karthick are present at different row numbers, but the record data is the same, so they should be considered a match. The record for employee Vinoth should be considered a mismatch since the address differs.
The output diff.csv file can contain the mismatched records from File1 and File2, as below.
Diff.csv
EmployeeName,Age,Salary,Address
F1, Vinoth,12,2548.245,"140,North Street,India"
F2, Vinoth,12,2548.245,"140,North Street,USA"
I've written the code so far as below. After this I'm confused about which option to choose, whether a binary search or any other efficient way to do this. Could you please help me?
My approach:
1. Load File2 into memory as a hash of hashes.
2. Read File1 line by line and match it against the hash of hashes in memory.
use strict;
use warnings;
use Text::CSV_XS;
use Getopt::Long;
use Data::Dumper;
use Text::CSV::Hashify;
use List::BinarySearch qw( :all );
# Get Command Line Parameters
my %opts = ();
GetOptions( \%opts, "file1=s", "file2=s", )
or die("Error in command line arguments\n");
if ( !defined $opts{'file1'} ) {
    die "CSV file --file1 not specified.\n";
}
if ( !defined $opts{'file2'} ) {
    die "CSV file --file2 not specified.\n";
}
my $file1 = $opts{'file1'};
my $file2 = $opts{'file2'};
my $file3 = 'diff.csv';
print $file2 . "\n";
my $csv1 = Text::CSV_XS->new(
    { binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
my $csv2 = Text::CSV_XS->new(
    { binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
my $csvout = Text::CSV_XS->new(
    { binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
open( my $fh1, '<:encoding(utf8)', $file1 )
    or die "Cannot open '$file1' $!.\n";
open( my $fh2, '<:encoding(utf8)', $file2 )
    or die "Cannot open '$file2' $!.\n";
open( my $fh3, '>:encoding(utf8)', $file3 )
    or die "Cannot open '$file3' $!.\n";
binmode( STDOUT, ":utf8" );
my $f1line = undef;
my $f2line = undef;
my $header1 = undef;
my $f1empty = 'false';
my $f2empty = 'false';
my $reccount = 0;
my $hash_ref = hashify( "$file2", 'EmployeeName' );
if ( $f1empty eq 'false' ) {
    $f1line = $csv1->getline($fh1);
}
while (1) {
    if ( $f1empty eq 'false' ) {
        $f1line = $csv1->getline($fh1);
    }
    if ( !defined $f1line ) {
        $f1empty = 'true';
    }
    if ( $f1empty eq 'true' ) {
        last;
    }
    else {
        ## Read each line from File1 and match it with the File 2 which is loaded as hashes of hashes in perl. Need help here.
    }
}
print "End of Program" . "\n";
Storing data of this magnitude in a database is the most appropriate approach to tasks of this kind. At a minimum SQLite is recommended, but other databases such as MariaDB, MySQL, and PostgreSQL will work quite well.
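As a rough illustration of the database route, here is a sketch using DBI with DBD::SQLite (the database file, table, and column layout are invented for the example; a real script would either load all ~30 columns or store the whole record as one text field, as below):
use strict;
use warnings;
use DBI;

# Sketch only: assumes DBD::SQLite is installed; names below are made up.
my $dbh = DBI->connect('dbi:SQLite:dbname=employees.db', '', '',
    { RaiseError => 1, AutoCommit => 0 });

$dbh->do('CREATE TABLE IF NOT EXISTS file1 (name TEXT PRIMARY KEY, rec TEXT)');
$dbh->do('CREATE TABLE IF NOT EXISTS file2 (name TEXT PRIMARY KEY, rec TEXT)');

# ... bulk-insert each CSV file here, one row per record, keyed on EmployeeName ...
$dbh->commit;

# Keys present in both tables whose record data differs.
my $sth = $dbh->prepare(q{
    SELECT f1.name, f1.rec, f2.rec
    FROM   file1 f1 JOIN file2 f2 ON f1.name = f2.name
    WHERE  f1.rec <> f2.rec
});
$sth->execute;
while ( my ($name, $rec1, $rec2) = $sth->fetchrow_array ) {
    print "F1,$name,$rec1\n";
    print "F2,$name,$rec2\n";
}

$dbh->disconnect;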
The following code demonstrates how the desired output can be achieved without special modules, but it does not take into account possibly messed-up input data. This script will report data records as different even if the difference is just one extra space.
By default the output goes to the console window, unless you specify the output option.
NOTE: The whole of file #1 is read into memory; please be patient, as processing big files can take a while.
use strict;
use warnings;
use feature 'say';
use Getopt::Long qw(GetOptions);
use Pod::Usage;
use Data::Dumper;                 # needed for the --debug dump below

my %opt;
my @args = (
    'file1|f1=s',
    'file2|f2=s',
    'output|o=s',
    'debug|d',
    'help|?',
    'man|m'
);

GetOptions( \%opt, @args ) or pod2usage(2);

print Dumper(\%opt) if $opt{debug};

pod2usage(1) if $opt{help};
pod2usage(-exitval => 0, -verbose => 2) if $opt{man};
pod2usage(1) unless $opt{file1};
pod2usage(1) unless $opt{file2};

unlink $opt{output} if defined $opt{output} and -f $opt{output};

compare($opt{file1},$opt{file2});
sub compare {
    my $fname1 = shift;
    my $fname2 = shift;

    my $hfile1 = file2hash($fname1);

    open my $fh, '<:encoding(utf8)', $fname2
        or die "Couldn't open $fname2";

    while (<$fh>) {
        chomp;
        next unless /^(.*?),(.*)$/;
        my ($key, $data) = ($1, $2);
        if ( !defined $hfile1->{$key} ) {
            my $msg = "$fname1 $key is missing";
            say_msg($msg);
        } elsif ( $data ne $hfile1->{$key} ) {
            my $msg = "$fname1 $key,$hfile1->{$key}\n$fname2 $_";
            say_msg($msg);
        }
    }
}

sub say_msg {
    my $msg = shift;

    if ( $opt{output} ) {
        open my $fh, '>>:encoding(utf8)', $opt{output}
            or die "Couldn't open $opt{output}";
        say $fh $msg;
        close $fh;
    } else {
        say $msg;
    }
}

sub file2hash {
    my $fname = shift;
    my %hash;

    open my $fh, '<:encoding(utf8)', $fname
        or die "Couldn't open $fname";

    while (<$fh>) {
        chomp;
        next unless /^(.*?),(.*)$/;
        $hash{$1} = $2;
    }
    close $fh;

    return \%hash;
}
__END__
=head1 NAME
comp_cvs - compares two CSV files and stores the differences
=head1 SYNOPSIS
comp_cvs.pl -f1 file1.cvs -f2 file2.cvs -o diff.txt
Options:
-f1,--file1 input CSV filename #1
-f2,--file2 input CSV filename #2
-o,--output output filename
-d,--debug output debug information
-?,--help brief help message
-m,--man full documentation
=head1 OPTIONS
=over 4
=item B<-f1,--file1>
Input CSV filename #1
=item B<-f2,--file2>
Input CSV filename #2
=item B<-o,--output>
Output filename
=item B<-d,--debug>
Print debug information.
=item B<-?,--help>
Prints a brief help message and exits.
=item B<--man>
Prints the manual page and exits.
=back
=head1 DESCRIPTION
B<This program> accepts B<input> and processes it to B<output> with the purpose of achieving some goal.
=head1 EXIT STATUS
This section describes the B<EXIT STATUS> codes of the program.
=head1 ENVIRONMENT
This section describes the B<ENVIRONMENT VARIABLES> used by the program.
=head1 FILES
This section describes the B<FILES> used for the program's configuration.
=head1 EXAMPLES
This section demonstrates some B<EXAMPLES> of the code.
=head1 REPORTING BUGS
This section provides information on how to report bugs.
=head1 AUTHOR
This section describes the author and their contact information.
=head1 ACKNOWLEDGMENT
This section gives credit to people related in some way to the code.
=head1 SEE ALSO
This section describes related information: references to other programs, blogs, websites, ...
=head1 HISTORY
This section gives historical information related to the code of the program.
=head1 COPYRIGHT
Copyright information related to the code.
=cut
Output for the test files:
file1.cvs Vinoth,12,2548.245,"140,North Street,India"
file2.cvs Vinoth,12,2548.245,"140,North Street,USA"
#!/usr/bin/env perl
use Data::Dumper;
use Digest::MD5;
use 5.01800;
use warnings;
my %POS;
my %chars;
open my $FILEA,'<',q{FileA.txt}
or die "Can't open 'FileA.txt' for reading! $!";
open my $FILEB,'<',q{FileB.txt}
or die "Can't open 'FileB.txt' for reading! $!";
open my $OnlyInA,'>',q{OnlyInA.txt}
or die "Can't open 'OnlyInA.txt' for writing! $!";
open my $InBoth,'>',q{InBoth.txt}
or die "Can't open 'InBoth.txt' for writing! $!";
open my $OnlyInB,'>',q{OnlyInB.txt}
or die "Can't open 'OnlyInB.txt' for writing! $!";
<$FILEA>,
$POS{FILEA}=tell $FILEA;
<$FILEB>,
$POS{FILEB}=tell $FILEB;
warn Data::Dumper->Dump([\%POS],[qw(*POS)]),' ';
{ # Scan for first character of the records involved
while (<$FILEA>) {
$chars{substr($_,0,1)}++;
};
while (<$FILEB>) {
$chars{substr($_,0,1)}--;
};
# So what characters do we need to deal with?
warn Data::Dumper->Dump([\%chars],[qw(*chars)]),' ';
};
my @chars = sort keys %chars;
{
my %_h;
# For each of the characters in our character set
for my $char (@chars) {
warn Data::Dumper->Dump([\$char],[qw(*char)]),' ';
# Beginning of data sections
seek $FILEA,$POS{FILEA},0;
seek $FILEB,$POS{FILEB},0;
%_h=();
my $pos=tell $FILEA;
while (<$FILEA>) {
next
unless (substr($_,0,1) eq $char);
# for each record save the lengthAndMD5 as the key and its start as the value
$_h{lengthAndMD5(\$_)}=$pos;
$pos=tell $FILEA;
};
my $_s;
while (<$FILEB>) {
next
unless (substr($_,0,1) eq $char);
if (exists $_h{$_s=lengthAndMD5(\$_)}) { # It's a duplicate
print {$InBoth} $_;
delete $_h{$_s};
}
else { # (Not in FILEA) It's only in FILEB
print {$OnlyInB} $_;
}
};
# only in FILEA
warn Data::Dumper->Dump([\%_h],[qw(*_h)]),' ';
for my $key (keys %_h) { # Only in FILEA
seek $FILEA,delete $_h{$key},0;
print {$OnlyInA} scalar <$FILEA>;
};
# Should be empty
warn Data::Dumper->Dump([\%_h],[qw(*_h)]),' ';
};
};
close $OnlyInB
or die "Could NOT close 'OnlyInB.txt' after writing! $!";
close $InBoth
or die "Could NOT close 'InBoth.txt' after writing! $!";
close $OnlyInA
or die "Could NOT close 'OnlyInA.txt' after writing! $!";
close $FILEB
or die "Could NOT close 'FileB.txt' after reading! $!";
close $FILEA
or die "Could NOT close 'FileA.txt' after reading! $!";
exit;
sub lengthAndMD5 {
return sprintf("%8.8lx-%32.32s",length(${$_[0]}),Digest::MD5::md5_hex(${$_[0]}));
};
__END__

HTML tag parsing script

I've written an HTML tag parsing script that I think should work but I'm getting a file not found error. Maybe I'm having a senior moment but I'm stuck. I have all of the *.html files that I want to parse in a directory called Test and I am executing the perl script from a folder called temp that has the directory Test in it. The exact error is: Error opening Test/1.html: No such file or directory.
Here's the code:
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use HTTP::Headers;
use HTML::HeadParser;
use Text::CSV;
my $csv1 = Text::CSV->new ( { binary => 1 } ) or die Text::CSV->error_diag();
$csv1->eol ("\n");
my $dfile = 'all_tags.csv';
open my $fh1, ">:encoding(utf8)", "$dfile" or die "Error opening $dfile: $!";
my $dir = 'Test';
find (\&HTML_Files, $dir);
print "directory is";
print $dir;
close $fh1 or die "Error closing $dfile: $!";
exit;
sub HTML_Files {
    Parse_HTML_Header($File::Find::name) if /\.html?$/;
}

sub Parse_HTML_Header {
    my $ifile = shift;
    open(my $fh0, '<', $ifile) or die "Error opening $ifile: $!\n";
    my $text = '';
    {
        $/ = undef;
        $text = <$fh0>;
    }
    close $fh0;
    my $h = HTTP::Headers->new;
    my $p = HTML::HeadParser->new($h);
    $p->parse($text);
    for ($h->header_field_names) {
        my @values = split ',', $h->header($_);
        if (/keywords/i) {
            $csv1->print($fh1, \@values);
        } elsif (/description/i) {
            $csv1->print($fh1, \@values);
        } elsif (/title/i) {
            $csv1->print($fh1, \@values);
        }
    }
}
It's because File::Find is doing a chdir as it runs. You should pass $_ instead of $File::Find::name. Or set no_chdir:
no_chdir
Does not chdir() to each directory as it recurses. The wanted() function will need to be aware of this, of course. In this case, $_ will be the same as $File::Find::name.
Because you are specifying a relative path, $File::Find::name is also a relative path. You can avoid this by specifying a full path to find as well. (e.g. /full/path/to/dir)
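For example, a minimal sketch of the no_chdir variant; only the find() call changes, and the rest of the script stays as it is:
# With no_chdir set, $_ and $File::Find::name hold the same (relative) path,
# so the existing wanted sub keeps working.
find( { wanted => \&HTML_Files, no_chdir => 1 }, $dir );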

LWP, HTML::TableExtract and the output with Text::CSV - how to add attributes here?

I have a little parser that parses a site with 6150 records, but I need to have the output in CSV format.
First of all see here the target site: http://192.68.214.70/km/asps/schulsuche.asp?q=a&a=50&s=1750
I need all the data, separated into the fields of:
number
schoolnumber
school-name
Adress
Street
Postal Code
phone
fax
School-type
website
Well, I have a script; I am very interested in what you think about it. Not all the fields are captured yet, and I need more of them!
#!/usr/bin/perl
use strict;
use HTML::TableExtract;
use LWP::Simple;
use Cwd;
use POSIX qw(strftime);
my $total_records = 0;
my $alpha = "x";
my $results = 50;
my $range = 0;
my $url_to_process = "http://192.68.214.70/km/asps/schulsuche.asp?q=";
my $processdir = "processing";
my $counter = 50;
my $percent = 0;
workDir();
chdir $processdir;
processURL();
print "\nPress <enter> to continue\n";
<>;
my $displaydate = strftime('%Y%m%d%H%M%S', localtime);
open my $outfile, '>', "webdata_for_$alpha\_$displaydate.txt" or die 'Unable to create file';
processData();
close $outfile;
print "Finished processing $total_records records...\n";
print "Processed data saved to $ENV{HOME}/$processdir/webdata_for_$alpha\_$displaydate.txt\n";
unlink 'processing.html';
sub processURL() {
print "\nProcessing $url_to_process$alpha&a=$results&s=$range\n";
getstore("$url_to_process$alpha&a=$results&s=$range", 'tempfile.html') or die 'Unable to get page';
while( <tempfile.html> ) {
open( FH, "$_" ) or die;
while( <FH> ) {
if( $_ =~ /^.*?(Treffer \<b\>)(\d+)( - )(\d+)(<\/b> \w+ \w+ \<b\>)(\d+).*/ ) {
$total_records = $6;
print "Total records to process is $total_records\n";
}
}
close FH;
}
unlink 'tempfile.html';
}
sub processData() {
while ( $range <= $total_records) {
my $te = HTML::TableExtract->new(headers => [qw(lfd Schul Schulname Telefon Schulart Webseite)]);
getstore("$url_to_process$alpha&a=$results&s=$range", 'processing.html') or die 'Unable to get page';
$te->parse_file('processing.html');
my ($table) = $te->tables;
foreach my $ts ($te->table_states) {
foreach my $row ($ts->rows) {
cleanup(@$row);
# Add a table column delimiter in this case ||
print $outfile join("||", @$row)."\n";
}
}
$| = 1;
print "Processed records $range to $counter";
print "\r";
$counter = $counter + 50;
$range = $range + 50;
}
}
sub cleanup() {
for ( @_ ) {
s/\s+/ /g;
}
}
sub workDir() {
# Use home directory to process data
chdir or die "$!";
if ( ! -d $processdir ) {
mkdir ("$ENV{HOME}/$processdir", 0755) or die "Cannot make directory $processdir: $!";
}
}
with the following output:
1||9752||Deutsche Schule Alamogordo USA Alamogorde - New Mexico || ||Deutschsprachige Auslandsschule||
2||9931||Deutsche Schule der Borromäerinnen Alexandrien ET Alexandrien - Ägypten || ||Begegnungsschule (Auslandsschuldienst)||
3||1940||Max-Keller-Schule, Berufsfachschule f.Musik Alt- ötting d.Berufsfachschule für Musik Altötting e.V. Kapellplatz 36 84503 Altötting ||08671/1735 08671/84363||Berufsfachschulen f. Musik|| www.max-keller-schule.de
4||0006||Max-Reger-Gymnasium Amberg Kaiser-Wilhelm-Ring 7 92224 Amberg ||09621/4718-0 09621/4718-47||Gymnasien|| www.mrg-amberg.de
With the || being the delimiter.
My problem is that I need to have more fields - I need to have the following divided - see an example:
name: Volksschule Abenberg (Grundschule)
street: Güssübelstr. 2
postal-code and town: 91183 Abenberg
fax and telephone: 09178/215 09178/905060
type of school: Volksschulen
website: home.t-online.de/home/vs-abenberg
How to add more fields? This obviously has to be done in this line here, doesn't it!?
my $te = HTML::TableExtract->new(headers => [qw(lfd Schul Schulname Telefon Schulart Webseite)]);
But how? I tried out several things, but I always got bad results.
I played around and tried another solution; here I have good CSV data, but unfortunately no spider logic...
#!/usr/bin/perl
use warnings;
use strict;
use LWP::Simple;
use HTML::TableExtract;
use Text::CSV;
my $html= get 'http://192.68.214.70/km/asps/schulsuche.asp?q=n&a=50';
$html =~ tr/\r//d; # strip the carriage returns
$html =~ s/&nbsp;/ /g; # expand the spaces
my $te = new HTML::TableExtract();
$te->parse($html);
my @cols = qw(
    rownum
    number
    name
    phone
    type
    website
);
my @fields = qw(
    rownum
    number
    name
    street
    postal
    town
    phone
    fax
    type
    website
);
my $csv = Text::CSV->new({ binary => 1 });

foreach my $ts ($te->table_states) {
    foreach my $row ($ts->rows) {
        # trim leading/trailing whitespace from base fields
        s/^\s+//, s/\s+$// for @$row;
        # load the fields into the hash using a "hash slice"
        my %h;
        @h{@cols} = @$row;
        # derive some fields from base fields, again using a hash slice
        @h{qw/name street postal town/} = split /\n+/, $h{name};
        @h{qw/phone fax/} = split /\n+/, $h{phone};
        # trim leading/trailing whitespace from derived fields
        s/^\s+//, s/\s+$// for @h{qw/name street postal town/};
        $csv->combine(@h{@fields});
        print $csv->string, "\n";
    }
}
Well, with this I tried another solution; here I have good CSV data, but unfortunately no spider logic.
How do I add the spider logic here!?
I need some help, either with the first or with the second script!
The website uses br tags to separate the sub-fields within each cell, very much like you want to divide the data. HTML::TableExtract turns these into newlines by default, but in your first program your cleanup routine throws this information away.
In your first program, add something like s/\n/||/sg; (assuming the same separator) before you flatten the rest of the whitespace.
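So, concretely, a hedged sketch of that change to the cleanup routine from the first script (keeping || as the separator):
sub cleanup {
    for ( @_ ) {
        s/\n/||/sg;    # keep the <br>-derived sub-field boundaries
        s/\s+/ /g;     # then flatten the remaining whitespace
    }
}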