Can you do hg clone without getting default path? - mercurial

BRIEF:
hg clone creates a path "default" in the clone's .hg/hgrc, set to wherever you cloned from.
Q: is it possible to disable this automatically?
DETAIL:
This is already partially answered.
In Can you prevent default-push, but allow pull? we see how to set default-push in some hgrc file, such as <repo>/.hg/hgrc or (my preference) ~/.hgrc.
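For reference, the relevant stanza looks something like this (the bogus path is just a placeholder; pointing default-push somewhere invalid is one way to make a bare "hg push" fail):
[paths]
default-push = /no/such/path/refuse-push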
In Is hg clone equivalent to hg (init→pull),
Tim Henigan says that hg clone = init; pull; hg update default; plus setting up the default path in <repo>/.hg/hgrc.
Although elsewhere we see that this is not quite true: hg clone may differ, e.g. in that it does hard-link sharing. Lacking an official statement of equivalence...
Now, disabling default-push helps a lot.
But... I have fallen into the habit of doing "hg push default", which somewhat defeats the purpose.
By the way, the reason I want to disable the default: my workflow goes master -> workspace -> staging_area -> master. I do many clones of the master, and editing [paths] default in <clone>/.hg/hgrc every time I do a clone is a pain. Doing "hg push" or "hg push default" in any of the workspaces can be bad; instead I need to push to the staging area, and only from the staging area am I allowed to push to the master.
(I have tried master <-> staging_area <-> workspace, i.e. always cloning from the staging area, but I found this confusing. Plus, the part that I haven't said yet: my project makes me delete or collapse history, which adds an additional level of confusion and error-proneness.)

Here's the post-clone hook I came up with:
[hooks]
post-clone = perl ~/bin/delete-default-from-.hgrc "$HG_PATS"
and the perl script is below.
I'd still like to find a one-liner (a rough sketch follows after the script).
#!/usr/local/bin/perl
use strict;
print "editing <repo>/.hg/hgrc to remove [paths] / default\n";
# Hand tested:
# cd ~/mips/uarch-perf-sim/psim+stuff/f; rm -rf k; hg clone ../m k
# cd ~/mips/uarch-perf-sim/psim+stuff/f; rm -rf k; hg clone ../m
my $debug = 0;
my $hg_pats = $ARGV[0];
die "Usage: delete-default-from-.hgrc HG_PATS (from post-clone hook)\n" if @ARGV != 1;
# expect HG_PATS from the post-clone hook, e.g.:
#   ['../m']
#   ['../m', 'target']
#   ['../m', '../target']
my $from;
my $target;
if ( $hg_pats =~ m/^\['([^']+)'\]$/ ) {
    $from   = $1;
    $target = $from;
    # implicit target: the basename of the source path
    $target =~ s{.*/}{};
    print "from-only: $target\n" if $debug;
} elsif ( $hg_pats =~ m/^\['([^']+)',\s+'([^']+)'\]$/ ) {
    $from   = $1;
    $target = $2;
    # explicit target: use it as-is
    print "from to: $target\n" if $debug;
} else {
    die "expected HG_PATS, got: $hg_pats\n";
}
die "clone target not found: $target\n" if ! -d $target;
my $hgrc = "$target/.hg/hgrc";
die "clone target/.hg/hgrc not found: $hgrc\n" if ! -r $hgrc;
open( my $in, '<', $hgrc ) or die "could not open $hgrc to read (edit)\n";
unlink "$hgrc.tmp";
open( my $out, '>', "$hgrc.tmp" ) or die "could not open $hgrc.tmp to write\n";
my $section = '';
my $paths_found;
my $paths_default_found;
while ( my $line = <$in> ) {
    chomp($line);
    print "line = $line\n" if $debug;
    if ( $line =~ m/^\[paths\]/ ) {
        $section = "[paths]";
        $paths_found++;
    }
    elsif ( $line =~ m/^\[/ ) {
        $section = $line;
    }
    elsif ( ($section eq "[paths]") and ($line =~ m/^\s*default\s*=/) ) {
        # rename the key rather than deleting the line, so the origin stays visible
        $line =~ s/default/default-cloned-from/;
        $paths_default_found++;
    }
    print $out $line . "\n";
}
die "[paths] section not found in $hgrc" if ! $paths_found;
die "[paths] / default not found in $hgrc" if ! $paths_default_found;
system("echo '<diff edited $hgrc>'; diff -U10 $hgrc $hgrc.tmp; echo '</diff>'");
unlink "$hgrc.orig";
die "could not unlink older $hgrc.orig" if -e "$hgrc.orig";
rename( $hgrc, "$hgrc.orig" ) or die "could not rename $hgrc to $hgrc.orig";
rename( "$hgrc.tmp", $hgrc ) or die "could not rename $hgrc.tmp to $hgrc";
system("echo '$hgrc:'; cat $hgrc");
exit(0);
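As for the one-liner: the nearest thing I can suggest is something like the following, run from inside the fresh clone (untested as a hook; it assumes a freshly cloned .hg/hgrc whose only "default =" line is the one under [paths]):
perl -i.orig -pe 's/^(\s*)default(\s*=)/${1}default-cloned-from$2/' .hg/hgrc
The -i.orig flag keeps the original hgrc as .hg/hgrc.orig, mirroring what the script above does.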


Compare 2 huge CSV files and print the differences to another CSV file using Perl

I have 2 CSV files with multiple fields (approx 30 fields) and huge size (approx 4 GB).
File1:
EmployeeName,Age,Salary,Address
Vinoth,12,2548.245,"140,North Street,India"
Vivek,40,2548.245,"140,North Street,India"
Karthick,10,10.245,"140,North Street,India"
File2:
EmployeeName,Age,Salary,Address
Vinoth,12,2548.245,"140,North Street,USA"
Karthick,10,10.245,"140,North Street,India"
Vivek,40,2548.245,"140,North Street,India"
I want to compare these 2 files and report the differences into another CSV file. In the above example, the Vivek and Karthick records are present at different row numbers, but the record data is the same, so they should be considered a match. The Vinoth record should be considered a mismatch, since there is a mismatch in the address.
The output diff.csv file can contain the mismatched records from File1 and File2, as below.
Diff.csv
EmployeeName,Age,Salary,Address
F1, Vinoth,12,2548.245,"140,North Street,India"
F2, Vinoth,12,2548.245,"140,North Street,USA"
I've written the code so far as below. After this, I'm confused which option to choose: a binary search or some other efficient way to do this. Could you please help me?
My approach:
1. Load File2 in memory as a hash of hashes.
2. Read File1 line by line and match it against the hash of hashes in memory.
use strict;
use warnings;
use Text::CSV_XS;
use Getopt::Long;
use Data::Dumper;
use Text::CSV::Hashify;
use List::BinarySearch qw( :all );
# Get command line parameters
my %opts = ();
GetOptions( \%opts, "file1=s", "file2=s", )
    or die("Error in command line arguments\n");
if ( !defined $opts{'file1'} ) {
    die "CSV file --file1 not specified.\n";
}
if ( !defined $opts{'file2'} ) {
    die "CSV file --file2 not specified.\n";
}
my $file1 = $opts{'file1'};
my $file2 = $opts{'file2'};
my $file3 = 'diff.csv';
print $file2 . "\n";
my $csv1 = Text::CSV_XS->new(
    { binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
my $csv2 = Text::CSV_XS->new(
    { binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
my $csvout = Text::CSV_XS->new(
    { binary => 1, auto_diag => 1, sep_char => ',', eol => $/ } );
open( my $fh1, '<:encoding(utf8)', $file1 )
    or die "Cannot open '$file1' $!.\n";
open( my $fh2, '<:encoding(utf8)', $file2 )
    or die "Cannot open '$file2' $!.\n";
open( my $fh3, '>:encoding(utf8)', $file3 )
    or die "Cannot open '$file3' $!.\n";
binmode( STDOUT, ":utf8" );
my $f1line   = undef;
my $f2line   = undef;
my $header1  = undef;
my $f1empty  = 'false';
my $f2empty  = 'false';
my $reccount = 0;
my $hash_ref = hashify( "$file2", 'EmployeeName' );
# presumably consumes File1's header row
if ( $f1empty eq 'false' ) {
    $f1line = $csv1->getline($fh1);
}
while (1) {
    if ( $f1empty eq 'false' ) {
        $f1line = $csv1->getline($fh1);
    }
    if ( !defined $f1line ) {
        $f1empty = 'true';
    }
    if ( $f1empty eq 'true' ) {
        last;
    }
    else {
        ## Read each line from File1 and match it against the File2 hash
        ## of hashes in memory. Need help here (see the sketch below).
    }
}
print "End of Program" . "\n";
Storing data of this magnitude in a database is the most robust approach to tasks of this kind. At minimum SQLite is recommended, but other databases (MariaDB, MySQL, PostgreSQL) will work quite well.
The following code demonstrates how the desired output can be achieved without special modules, but it does not take into account possibly messed-up input data: it will report records as different even if the difference is just one extra space.
Default output is to the console window unless you specify the output option.
NOTE: the whole of file #1 is read into memory; please be patient, processing big files can take a while.
use strict;
use warnings;
use feature 'say';
use Getopt::Long qw(GetOptions);
use Pod::Usage;
use Data::Dumper;    # needed for the --debug dump below
my %opt;
my @args = (
    'file1|f1=s',
    'file2|f2=s',
    'output|o=s',
    'debug|d',
    'help|?',
    'man|m'
);
GetOptions( \%opt, @args ) or pod2usage(2);
print Dumper(\%opt) if $opt{debug};
pod2usage(1) if $opt{help};
pod2usage(-exitval => 0, -verbose => 2) if $opt{man};
pod2usage(1) unless $opt{file1};
pod2usage(1) unless $opt{file2};
unlink $opt{output} if defined $opt{output} and -f $opt{output};
compare($opt{file1},$opt{file2});
sub compare {
    my $fname1 = shift;
    my $fname2 = shift;
    my $hfile1 = file2hash($fname1);
    open my $fh, '<:encoding(utf8)', $fname2
        or die "Couldn't open $fname2";
    while(<$fh>) {
        chomp;
        # split each line into key (first field) and the rest of the record
        next unless /^(.*?),(.*)$/;
        my($key,$data) = ($1, $2);
        if( !defined $hfile1->{$key} ) {
            my $msg = "$fname1 $key is missing";
            say_msg($msg);
        } elsif( $data ne $hfile1->{$key} ) {
            my $msg = "$fname1 $key,$hfile1->{$key}\n$fname2 $_";
            say_msg($msg);
        }
    }
}
sub say_msg {
    my $msg = shift;
    if( $opt{output} ) {
        open my $fh, '>>:encoding(utf8)', $opt{output}
            or die "Couldn't open $opt{output}";
        say $fh $msg;
        close $fh;
    } else {
        say $msg;
    }
}
sub file2hash {
    my $fname = shift;
    my %hash;
    open my $fh, '<:encoding(utf8)', $fname
        or die "Couldn't open $fname";
    while(<$fh>) {
        chomp;
        next unless /^(.*?),(.*)$/;
        $hash{$1} = $2;
    }
    close $fh;
    return \%hash;
}
__END__
=head1 NAME

comp_csv - compares two CSV files and stores the differences

=head1 SYNOPSIS

comp_csv.pl -f1 file1.csv -f2 file2.csv -o diff.txt

Options:
-f1,--file1 input CSV filename #1
-f2,--file2 input CSV filename #2
-o,--output output filename
-d,--debug output debug information
-?,--help brief help message
-m,--man full documentation

=head1 OPTIONS

=over 4

=item B<-f1,--file1>

Input CSV filename #1

=item B<-f2,--file2>

Input CSV filename #2

=item B<-o,--output>

Output filename

=item B<-d,--debug>

Print debug information.

=item B<-?,--help>

Print a brief help message and exit.

=item B<--man>

Print the manual page and exit.

=back

=head1 DESCRIPTION

B<This program> accepts B<input> and processes it to B<output> with the purpose of achieving some goal.

=head1 EXIT STATUS

This section describes the B<EXIT STATUS> codes of the program.

=head1 ENVIRONMENT

This section describes the B<ENVIRONMENT VARIABLES> used by the program.

=head1 FILES

This section describes the B<FILES> used for the program's configuration.

=head1 EXAMPLES

This section demonstrates some B<EXAMPLES> of the code.

=head1 REPORTING BUGS

This section provides information on how to report bugs.

=head1 AUTHOR

This section describes the author and contact information.

=head1 ACKNOWLEDGMENT

This section gives credit to people related to the code in some way.

=head1 SEE ALSO

This section describes related information: references to other programs, blogs, websites, ...

=head1 HISTORY

This section gives historical information related to the code of the program.

=head1 COPYRIGHT

Copyright information related to the code.

=cut
Output for the test files:
file1.csv Vinoth,12,2548.245,"140,North Street,India"
file2.csv Vinoth,12,2548.245,"140,North Street,USA"
#!/usr/bin/env perl
use Data::Dumper;
use Digest::MD5;
use 5.01800;
use warnings;
my %POS;
my %chars;
open my $FILEA,'<',q{FileA.txt}
    or die "Can't open 'FileA.txt' for reading! $!";
open my $FILEB,'<',q{FileB.txt}
    or die "Can't open 'FileB.txt' for reading! $!";
open my $OnlyInA,'>',q{OnlyInA.txt}
    or die "Can't open 'OnlyInA.txt' for writing! $!";
open my $InBoth,'>',q{InBoth.txt}
    or die "Can't open 'InBoth.txt' for writing! $!";
open my $OnlyInB,'>',q{OnlyInB.txt}
    or die "Can't open 'OnlyInB.txt' for writing! $!";
# skip the header line of each file and remember where the data starts
<$FILEA>,
    $POS{FILEA}=tell $FILEA;
<$FILEB>,
    $POS{FILEB}=tell $FILEB;
warn Data::Dumper->Dump([\%POS],[qw(*POS)]),' ';
{ # Scan for the first character of the records involved
    while (<$FILEA>) {
        $chars{substr($_,0,1)}++;
    };
    while (<$FILEB>) {
        $chars{substr($_,0,1)}--;
    };
    # So what characters do we need to deal with?
    warn Data::Dumper->Dump([\%chars],[qw(*chars)]),' ';
};
my @chars=sort keys %chars;
{
    my %_h;
    # For each of the characters in our character set
    for my $char (@chars) {
        warn Data::Dumper->Dump([\$char],[qw(*char)]),' ';
        # Rewind to the beginning of the data sections
        seek $FILEA,$POS{FILEA},0;
        seek $FILEB,$POS{FILEB},0;
        %_h=();
        my $pos=tell $FILEA;
        while (<$FILEA>) {
            next
                unless (substr($_,0,1) eq $char);
            # for each record save the lengthAndMD5 as the key and its start as the value
            $_h{lengthAndMD5(\$_)}=$pos;
        } continue {
            # track the start of the next line even when we 'next' past a record
            $pos=tell $FILEA;
        };
        my $_s;
        while (<$FILEB>) {
            next
                unless (substr($_,0,1) eq $char);
            if (exists $_h{$_s=lengthAndMD5(\$_)}) { # It's a duplicate
                print {$InBoth} $_;
                delete $_h{$_s};
            }
            else { # (Not in FILEA) It's only in FILEB
                print {$OnlyInB} $_;
            }
        };
        # whatever is left in %_h exists only in FILEA
        warn Data::Dumper->Dump([\%_h],[qw(*_h)]),' ';
        for my $key (keys %_h) { # Only in FILEA
            seek $FILEA,delete $_h{$key},0;
            print {$OnlyInA} scalar <$FILEA>;
        };
        # Should be empty now
        warn Data::Dumper->Dump([\%_h],[qw(*_h)]),' ';
    };
};
close $OnlyInB
    or die "Could NOT close 'OnlyInB.txt' after writing! $!";
close $InBoth
    or die "Could NOT close 'InBoth.txt' after writing! $!";
close $OnlyInA
    or die "Could NOT close 'OnlyInA.txt' after writing! $!";
close $FILEB
    or die "Could NOT close 'FileB.txt' after reading! $!";
close $FILEA
    or die "Could NOT close 'FileA.txt' after reading! $!";
exit;
sub lengthAndMD5 {
    # key: record length plus MD5 of the record, a cheap uniqueness proxy
    return sprintf("%8.8lx-%32.32s",length(${$_[0]}),Digest::MD5::md5_hex(${$_[0]}));
};
__END__
__END__

Check for HTTP Code in fetch_json sub / save previous output for backup in Perl

so I have to update a Perl script that goes through a JSON file, fetches keys called "items", and transforms those items into Perl output.
I'm a noob at Perl/coding in general, so please bear with me 🥺. The offset variable is set as each URL is iterated through. A curl command is passed to the terminal, the file is read into a "@lines" array, and in the end whatever JSON data is stored in $data gets decoded and transformed. The blocks below (commented # populate %manager_to_directs, # populate %user_to_management_chain, and # populate %manager_to_followers) are where fetch_json gets called and where the hash variables get the data from the decoded JSON. (Please feel free to correct me if I interpreted this code incorrectly.)
There's been a problem where $cmd doesn't account for the HTTP response every time this program is executed. I only want the results to be processed if and only if the program gets HTTP 200 (OK) or HTTP 204 (NO_CONTENT), because the program sometimes only partially refreshes our JSON endpoint (the URL in the curl command output from the terminal below), and sometimes doesn't refresh at all.
All I'm assuming is that I'd probably have to use the HTTP::Response module and somehow pull the status out of the commands being run in fetch_json, but I have no other clue where to go from there.
Would I have to update $cmd to pull the HTTP code? And if so, how would I interpret that in the fetch_json sub to exit the process if anything other than 200 or 204 is received?
Oh, and also: how would I save the previous output from the last execution in a backup file?
Any help I can get here would be highly appreciated!
See code below:
Pulling this from a test run:
curl -o filename -w "HTTP CODE: %{http_code}\n" --insecure --key <YOUR KEY> --cert <YOUR CERT> https://xxxxxxxxxx-xxxxxx-xxxx.xxx.xxxxxxxxxx.com:443/api/v1/reports/active/week > http.out
#!/usr/bin/env perl
use warnings;
use strict;
use JSON qw(decode_json);
use autodie qw(open close chmod unlink);
use File::Basename;
use File::Path qw(make_path rmtree);
use Cwd qw(abs_path);
use Data::Dumper;
use feature qw(state);
sub get_fetched_dir {
    return "$ENV{HOME}/tmp/mule_user_fetched";
}
# fetch from mulesoft server and save local copy
sub fetch_json {
    state $now = time();
    my ($url) = @_;
    my $dir = get_fetched_dir();
    if (!-e $dir) {
        make_path($dir);
        chmod 0700, $dir;
    }
    my ($offset) = $url =~ m{offset=(\d+)};
    if (!defined $offset) {
        $offset = 0;
    }
    $offset = sprintf ("%03d", $offset);
    my $filename = "$dir/offset${offset}.json";
    print "$filename\n";
    my @fields = stat $filename;
    my $size  = $fields[7];
    my $mtime = $fields[9];
    if (!$size || !$mtime || $now-$mtime > 24*60*60) {
        my $cmd = qq(curl \\
--insecure \\
--silent \\
--key $ENV{KEY} \\
--cert $ENV{CERT} \\
$url > $filename
);
        #print $cmd;
        system($cmd);
        chmod 0700, $filename;
    }
    open my $fh, "<", $filename;
    my @lines = <$fh>;
    close $fh;
    return undef if !@lines;
    my $data;
    eval {
        $data = decode_json (join('',@lines));
    };
    if ($@) {
        unlink $filename;
        print "Bad JSON detected in $filename.\n";
        print "I have deleted $filename.\n";
        print "Please re-run script.\n";
        exit(1);
    }
    return $data;
}
die "Usage:\n KEY=key_file CERT=cert_file mule_to_jira.pl\n"
    if !defined $ENV{KEY} || !defined $ENV{CERT};
print "fetching data from mulesoft\n";
# populate %manager_to_directs
my %manager_to_directs;
my %user_to_manager;
my @users;
my $url = "https://enterprise-worker-data.eip.vzbuilders.com/api/v1/reports/active/week";
while ($url && $url ne "Null") {
    my $data = fetch_json($url);
    last if !defined $data;
    $url = $data->{next};
    #print $url;
    my $items = $data->{items};
    foreach my $item (@$items) {
        my $shortId = $item->{shortId};
        my $manager = $item->{organization}{manager};
        push @users, $shortId;
        next if !$manager;
        $user_to_manager{$shortId} = $manager;
        push @{$manager_to_directs{$manager}}, $shortId;
    }
}
# populate %user_to_management_chain
# populate %manager_to_followers
my %user_to_management_chain;
my %manager_to_followers;
foreach my $user (keys %user_to_manager) {
    my $manager = $user_to_manager{$user};
    my $prev = $user;
    while ($manager && $prev ne $manager) {
        push @{$manager_to_followers{$manager}}, $user;
        push @{$user_to_management_chain{$user}}, $manager;
        $prev = $manager;
        $manager = $user_to_manager{$manager}; # manager's manager
    }
}
# write backyard.txt
open my $backyard_fh, ">", "backyard.txt";
foreach my $user (sort keys %user_to_management_chain) {
    my $chain = join ',', @{$user_to_management_chain{$user}};
    print $backyard_fh "$user:$chain\n";
}
close $backyard_fh;
# write teams.txt
open my $team_fh, ">", "teams.txt";
foreach my $user (sort @users) {
    my $followers = $manager_to_followers{$user};
    my $followers_joined = $followers ? join (',', sort @$followers) : "";
    print $team_fh "$user:$followers_joined\n";
}
close $team_fh;
my $dir = get_fetched_dir();
rmtree $dir, {safe => 1};
So, if you want to keep the web fetch and the Perl processing decoupled, you can modify the curl command so that it includes the response header in the output by adding the -i option. That means that the Perl will have to be modified to read and process the headers before getting to the body. A successful http.out will look something like this:
HTTP/1.1 200 OK
Server: somedomain.com
Date: <date retrieved>
Content-Type: application/json; charset=utf-8
Content-Length: <size of JSON>
Status: 200 OK
Maybe: More Headers
Blank: Line signals start of body
{
JSON object here
}
An unsuccessful curl will have something other than 200 OK on the first line next to the HTTP/1.1, so you can tell that something went wrong.
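A minimal sketch of that check inside fetch_json might look like this (untested; it assumes the saved file holds a single header block, a blank line, then the JSON body; note that curl -i can emit an extra interim header block, such as 100 Continue, which this does not handle):
open my $fh, "<", $filename or die "cannot open $filename: $!";
my $status_line = <$fh>;
my ($code) = $status_line =~ m{^HTTP/\S+\s+(\d{3})};
die "unexpected HTTP status: $status_line" unless defined $code && ($code == 200 || $code == 204);
while (my $line = <$fh>) {         # skip the remaining headers
    last if $line =~ /^\r?$/;      # blank line marks the start of the body
}
my $body = do { local $/; <$fh> }; # slurp the JSON body
close $fh;
my $data = decode_json($body);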
Alternatively, you can let the Perl do the actual HTTP fetch instead of relying on curl; you can use LWP::UserAgent or any of a number of other HTTP client libraries in Perl, which will give you the entire response, not just the body.
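For the LWP route, a rough sketch (untested; the ssl_opts keys are IO::Socket::SSL options standing in for the script's curl --insecure/--key/--cert flags):
use LWP::UserAgent;
use JSON qw(decode_json);
my $ua = LWP::UserAgent->new(
    ssl_opts => {
        verify_hostname => 0,            # counterpart of curl --insecure
        SSL_key_file    => $ENV{KEY},    # counterpart of curl --key
        SSL_cert_file   => $ENV{CERT},   # counterpart of curl --cert
    },
);
my $res = $ua->get($url);
if ($res->code == 200 || $res->code == 204) {
    my $data = decode_json($res->content);   # raw bytes; decode_json expects UTF-8
    # ... process $data as before ...
} else {
    die "fetch failed: " . $res->status_line . "\n";
}
As for backing up the previous output, one simple option is to copy each existing file aside before rewriting it, e.g. File::Copy's copy("teams.txt", "teams.txt.bak") guarded by an -e check.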

widgets can only be called when ZLE is active

I have been dealing with this problem for almost a month now, and I feel frustrated. Any help would be greatly appreciated.
I am trying to write a widget for my takenote command. The purpose of the widget is to feed all the markdown files in the ~/notes folder into fzf so that the user can select one of them and start editing it.
After the user types takenote and presses <tab> I expect the widget to run.
Here is the _takenote.zsh widget definition:
#compdef takenote
local file=$( find -L "$HOME/notes/" -print 2> /dev/null | fzf-tmux +m )
zle reset-prompt
compadd $file
return 1
Unfortunately, the above code doesn't work because of zle reset-prompt. If I remove it, the completion itself succeeds, but the prompt and the typed command get corrupted after the file is selected.
It appears to me that I need to call zle reset-prompt before calling compadd, but that only works when I bind the function to a key; otherwise I get the following error:
widgets can only be called when ZLE is active
I finally found a workaround for the issue. I am not satisfied with the strategy, since it is not self-contained in the widget itself, but it works. The solution involves trapping fzf-completion after it is invoked and calling zle reset-prompt.
To register the trap, add the following snippet to your .zshrc file (see Zsh menu completion causes problems after zle reset-prompt):
TMOUT=1
TRAPALRM() {
    if [[ "$WIDGET" =~ ^(complete-word|fzf-completion)$ ]]; then
        # limit the reset-prompt functionality to the `takenote` script
        if [[ "$LBUFFER" == "takenote "* ]]; then
            zle reset-prompt
        fi
    fi
}
The _takenote widget:
#compdef takenote
local file=$( find -L "$HOME/notes/" -print 2> /dev/null | fzf-tmux +m )
compadd $file
return 0
P.S.: I would still love to move the trap inside the widget and avoid registering it in the init script (.zshrc).
After two days, I finally managed to find a hint on how to achieve it thanks to the excellent fzf-tab-completion project:
https://github.com/lincheney/fzf-tab-completion/blob/c91959d81320935ae88c090fedde8dcf1ca70a6f/zsh/fzf-zsh-completion.sh#L120
So actually, all that you need to do is:
#compdef takenote
local file=$( find -L "$HOME/notes/" -print 2> /dev/null | fzf-tmux +m )
compadd $file
TRAPEXIT() {
    zle reset-prompt
}
return 0
And it finally works. Cheers!
I was getting the same error when trying to use bindkey for a widget that opens the fzf-selected file in vim. It turns out I had to open the file in function1, then have function2 call function1 and then reset-prompt, to avoid this "widgets can only be called when ZLE is active" error. Like you said, it is really frustrating; it took me almost a day to figure out!
Example code:
## use rg to get the file list
export FZF_DEFAULT_COMMAND='rg --files --hidden'
## file open (function1)
__my-fo() (
    setopt localoptions pipefail no_aliases 2> /dev/null
    local file=$(eval "${FZF_DEFAULT_COMMAND}" | FZF_DEFAULT_OPTS="--height ${FZF_TMUX_HEIGHT:-40%} --reverse $FZF_DEFAULT_OPTS --preview 'bat --color=always --line-range :500 {}'" $(__fzfcmd) -m "$@" | while read item; do
        echo -n "${(q)item}"
    done)
    local ret=$?
    if [[ -n $file ]]; then
        $EDITOR $file
    fi
    return $ret
)
## define zsh widget (function2)
__my-fo-widget(){
    __my-fo
    local ret=$?
    zle reset-prompt
    return $ret
}
zle -N __my-fo-widget
bindkey ^p __my-fo-widget

TCL: Not able to write data to a file

I am trying to write a message to an existing file.
A check is performed on certain variable values, and a message is appended to the file accordingly, but the $data value is never written to the file.
Please let me know if something is wrong with my code.
proc run_sanity {} {
    global rundir typep corner_name reflib compLib
    cd $rundir
    set filename "sanity.txt"
    set fileId [open $filename "w"]
    if {[file exists $filename]} {
        exec rm -rf $rundir/sanity.txt
    }
    exec grep operating_conditions -A 3 $compLib | grep -v default | grep -v lu | grep -v variable > $rundir/sanity.txt
    exec grep nom $compLib >> $rundir/sanity.txt
    exec grep library $compLib | grep -v slew | grep -v features >> $rundir/sanity.txt
    set driver [exec grep -c driver_waveform $compLib]
    set recovery [exec grep -c recovery $compLib]
    set removal [exec grep -c removal $compLib]
    if {$driver > 0 && $recovery > 0 && $removal > 0} {
        set data "No data found for Driver waveform, Recovery and Removal Table.\n"
        puts $fileId $data
    } else {
        set data "Driver waveform, Recovery and Removal table is present in .lib.\n"
        puts $fileId $data
    }
    close $fileId
    exec emacs $rundir/sanity.txt &
}
Thanks
Dan
Basically, your code is
set fileId [open $filename "w"]      ;# 1
if {[file exists $filename]} {       ;# 2
    exec rm -rf $rundir/sanity.txt   ;# 3
}
puts $fileId $data                   ;# 4
So:
1. You open the file for writing. The "w" mode makes sure the file is created if it does not exist, or truncated if it does.
2. You then check if the file exists. This check always passes because of the first step (unless you have a race on your filesystem, with some other process deleting the file before the check happens).
3. You delete the file. Since this supposedly happens on a filesystem with POSIX semantics (that is, on some Unix-y OS), any file descriptor opened on that now-deleted file counts as a live reference to it: while the file's entry is removed from its directory, its data is not, and reading and writing that data works just fine; it is simply inaccessible outside your process, because the data no longer has any name on the filesystem. (By the way, this trick is routinely used by Unix software to keep temporary data: a file is created and then immediately removed from the file system while an open file descriptor to it is kept.)
4. You write to the file. Writing happens OK and the data is actually stored on the file system.
Once you close your file, its data loses its last live reference, and at that very moment the data gets deleted.
So you should reconsider your approach to managing this file. So far, there's little sanity in the handling of "sanity.txt". ;-)
By the way, why do you use rm -rf (and why -r?) instead of just file delete right from Tcl? That would be a tad faster.
This code works: I moved the open of fileId inside the if branches, after the deletion, taking care of the POSIX semantics issue.
proc run_sanity {} {
    global rundir typep corner_name reflib compLib
    cd $rundir
    set Sname "sanity.txt"
    ##set fileId [open $filename "a"]
    if {[file exists $Sname]} {
        file delete $rundir/sanity.txt
    }
    after 10000 {destroy .dialog4}
    tk_dialog .dialog4 "Running Sanity:" "SANITY TEST RUNNING" info 0 OK
    exec grep operating_conditions -A 3 $compLib | grep -v default | grep -v lu | grep -v variable > $rundir/sanity.txt
    exec grep nom $compLib >> $rundir/sanity.txt
    exec echo "" >> $rundir/sanity.txt
    exec grep library $compLib | grep -v slew | grep -v features >> $rundir/sanity.txt
    exec echo "" >> $rundir/sanity.txt
    set driver [exec grep -c driver_waveform $compLib]
    set recovery [exec grep -c recovery $compLib]
    set removal [exec grep -c removal $compLib]
    cd $rundir
    if {$driver > 0 && $recovery > 0 && $removal > 0} {
        set filename "sanity.txt"
        set fileId [open $filename "a"]
        set msg "Driver waveform, Recovery and Removal table is present in .lib.\n"
        puts $fileId $msg
        close $fileId
    } else {
        set filename "sanity.txt"
        set fileId [open $filename "a"]
        set msg "No data found for Driver waveform, Recovery and Removal Table.\n"
        puts $fileId $msg
        close $fileId
    }
    exec emacs $rundir/sanity.txt &
}

Upload file using Perl CGI

I am able to create my directory but I cannot seem to place the file in the directory.
#!/usr/bin/perl
use Cwd;
use CGI;
use File::Basename;    # for fileparse(), used below
my $dir = getcwd();
print "Current Working Directory: $dir\n";
my $photoDir = "$dir/MyPhotos";
mkdir $photoDir
    or die "Cannot mkdir $photoDir: $!"
    unless -d $photoDir;
my $query = CGI->new;
my $filename = $query->param("Photo");
my $description = $query->param("description");
print "Current filename: $filename\n";
# keep only the basename plus extension, dropping any client-side path
my ( $name, $path, $extension ) = fileparse( $filename, '\..*' );
$filename = $name . $extension;
print $filename;
my $upload_filehandle = $query->upload("Photo");
open ( UPLOADFILE, ">$photoDir/$filename" )
    or die "$!";
binmode UPLOADFILE;
while ( <$upload_filehandle> ) {
    print UPLOADFILE;
}
close UPLOADFILE;
The CGI stack trace shows no errors but the log shows there is no output
LOG: 5 5020-0:0:0:0:0:0:0:1%0-9: CGI output 0 bytes.
The CGI.pm manual suggests this approach to saving uploaded files. Try this additional check and write method and see if it helps.
$lightweight_fh = $q->upload('field_name');
# undef may be returned if it's not a valid file handle
if (defined $lightweight_fh) {
    # Upgrade the handle to one compatible with IO::Handle:
    my $io_handle = $lightweight_fh->handle;
    open (OUTFILE,'>>','/usr/local/web/users/feedback');
    while ($bytesread = $io_handle->read($buffer,1024)) {
        print OUTFILE $buffer;
    }
}
Also make sure your HTML form has the required enctype, like this: <form action=... method=post enctype="multipart/form-data">
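For reference, a minimal form matching the field names the script reads might look like this (the action path is a placeholder):
<form action="/cgi-bin/upload.cgi" method="post" enctype="multipart/form-data">
  <input type="file" name="Photo">
  <input type="text" name="description">
  <input type="submit" value="Upload">
</form>
Without enctype="multipart/form-data", $query->upload("Photo") will not return a usable filehandle, which matches the "0 bytes" output in the log.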