Catch a specific warning in PHP CLI

I made a script in php-cli that triggers some warnings. I need to catch only one of them and kill the script with die() only when that one occurs:
Warning: fgets() expects parameter 1 to be resource, boolean given in file.php on line 46
How can I do it?

Your script probably looks like:
$f = fopen($filename, "r");
while ($line = fgets($f)){
...
}
Or maybe:
$f = fopen($filename, "r");
$line = fgets($f);
You can handle the error between the fopen() and fgets() calls, without using try{}catch(){}:
$f = fopen($filename, "r");
if (!$f){
die("Error while opening the file.\n");
}
$line = fgets($f);

Related

write into a csv file in multiple cells

I am coding in Perl. How can I write multiple variables into a CSV file, putting each one in a separate cell on the same line?
This is a part of my code:
#!/usr/bin/perl
use feature qw(say);
use strict;
use warnings;
use constant BUFSIZE => 6;
my $year += 1900;
my $input_file = 'path\ZONE0.txt';
my $outputfile = 'path\outputfile.csv';
open (my $BIN, "<:raw", $input_file) or die "can't open the file $input_file: $!";
my $buffer;
open(FH, '>>', $outputfile) or die $!;
while (1) {
my $bytes_read = sysread $BIN, $buffer, BUFSIZE;
die "Could not read file $input_file: $!" if !defined $bytes_read;
last if $bytes_read <= 0;
my @decimal = map { unpack "C", $_ } split //, $buffer;
my $start = $decimal[0];
my $DevType = $decimal[1];
my @hexDevType = sprintf("0x%x", $DevType);
my @DevUID = ($decimal[5], $decimal[4], $decimal[3], $decimal[2]);
my @hexDevUID = map { sprintf("0x%x",$_) } @DevUID;
print FH $start, ' ' , print FH $DevType,' ', @hexDevUID , "\n";
}
close $BIN;
This results in putting all the variables next to each other in one cell, which is not what I want. Can you help me separate the variables?
CSV files don't have cells. I suspect you're opening the file in a spreadsheet program.
The secret of a CSV file is that the values are separated by commas. So you need to put commas between any values that you want to appear in separate cells in your spreadsheet.
It looks like your data is in @hexDevUID. The simplest way is to turn that into a comma-separated string using join():
join(',', @hexDevUID)
But the more robust approach will be to use Text::CSV_XS.
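For example, a minimal sketch of writing one row with Text::CSV_XS (the values here are stand-ins for the $start, $DevType and @hexDevUID from the question):
use strict;
use warnings;
use Text::CSV_XS;

# eol => "\n" makes print() terminate each row for us
my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" })
    or die "Cannot create Text::CSV_XS object";
open my $out, '>>', 'outputfile.csv' or die "Can't open outputfile.csv: $!";

# sample values standing in for the ones unpacked from the binary file
my ($start, $DevType) = (170, 1);
my @hexDevUID = ('0x0a', '0x0b', '0x0c', '0x0d');

# print() joins the fields with commas and quotes them when needed
$csv->print($out, [ $start, $DevType, @hexDevUID ]);
close $out;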
Below is the OP's code, modified so that it does not use any CSV modules for output.
Error-handling code has been added for read errors and for an insufficient number of read bytes.
use strict;
use warnings;
use feature 'say';
use constant BUFSIZE => 6;
my($buffer,$bytes_read);
my $infile = shift || 'path\ZONE0.txt';
my $outfile = 'path\outputfile.csv';
open my $in, '<:raw', $infile
or die "Can't open $infile: $!";
open my $out, '+>>', $outfile
or die "Can't open $outfile: $!";
do {
$bytes_read = sysread $in, $buffer, BUFSIZE;
die "Error: read from $infile: $!" unless defined $bytes_read;
error_handler($bytes_read) unless $bytes_read == 6;
my @decimal = map { ord } split //, $buffer;
my($start,$DevType) = @decimal[0,1];
my @hexDevUID = map { sprintf("0x%02x",$_) } @decimal[5,4,3,2];
say $out join(',',($start,$DevType,@hexDevUID));
} while ( $bytes_read );
sub error_handler {
my $bytes = shift;
close $out;
close $in;
say "
Error: called error_handler(\$read_bytes)
Action: Emergency file closure to preserve data
Cause: Read insufficient $bytes bytes
" unless $bytes == 0;
exit $bytes ? 1 : 0;
}
The loop can be rewritten with the use of unpack as follows (the 'CCC4' template unpacks two single unsigned bytes into $start and $DevType, then four more into @devUID):
do {
$bytes_read = sysread $in, $buffer, BUFSIZE;
die "Error: read from $infile: $!" unless defined $bytes_read;
error_handler($bytes_read) unless $bytes_read == 6;
my($start,$DevType,@devUID) = unpack('CCC4',$buffer);
my @hexDevUID = reverse map { sprintf "0x%02x", $_ } @devUID;
say $out join(',',($start,$DevType,@hexDevUID));
} while ( $bytes_read );

How to start reading CSV from beginning again?

use Text::CSV_XS;
my $csv = Text::CSV_XS->new;
open my $fh, "test.csv" or die "test.csv: $!";
while (my $row = $csv->getline($fh)) {
my @fields = @$row;
if ($fields[0] eq "A1") {
print "Found A1", "\n";
last;
}
}
# now start searching the CSV again
If I have gone through some of a CSV using Text::CSV_XS, how can I then start again from the beginning? Is there some way to return the pointer/window to the beginning of the file?
use Fcntl qw( SEEK_SET );
seek($fh, 0, SEEK_SET);
You could also just re-open the file.
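Putting it together, a minimal sketch of the two-pass pattern (using the same test.csv as above):
use strict;
use warnings;
use Text::CSV_XS;
use Fcntl qw( SEEK_SET );

my $csv = Text::CSV_XS->new;
open my $fh, '<', 'test.csv' or die "test.csv: $!";

# first pass: stop at the row whose first field is A1
while (my $row = $csv->getline($fh)) {
    if ($row->[0] eq 'A1') {
        print "Found A1\n";
        last;
    }
}

# rewind the underlying filehandle to byte 0 ...
seek($fh, 0, SEEK_SET);

# ... and the next getline() starts from the first record again
while (my $row = $csv->getline($fh)) {
    # second pass over every record
}
close $fh;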

Check for HTTP Code in fetch_json sub / save previous output for backup in Perl

So I have to update a Perl script that goes through a JSON file, fetches keys called "items", and transforms these items into Perl output.
I'm a noob at Perl/coding in general, so please bear with me. The offset variable is set as each URL is iterated through. A curl command is passed to the terminal, the file is read into a @lines array, and in the end whatever JSON data is stored in $data gets decoded and transformed. The blocks below (where # populate %manager_to_directs, # populate %user_to_management_chain, and # populate %manager_to_followers are commented) are where fetch_json gets called and where the hash variables get the data from the decoded JSON. (Please feel free to correct me if I interpreted this code incorrectly.)
There's been a problem where the $cmd doesn't account for the HTTP responses every time this program is executed. I only want the results to be processed if and only if the program gets HTTP 200 (OK) or HTTP 204 (NO_CONTENT), because the program sometimes only partially refreshes our JSON endpoint (the URL in the curl command output from the terminal below), and sometimes doesn't refresh at all.
All I'm assuming is that I'd probably have to import the HTTP::Response module and somehow pull the status out of the commands being run in fetch_json, but I have no other clue where to go from there.
Would I have to update the $cmd to pull the http code? And if so, how would I interpret that in the fetch_json sub to exit the process if anything other than 200 or 204 is received?
Oh and also, how would I save the previous output from the last execution in a backup file?
Any help I can get here would be highly appreciated!
See code below:
Pulling this from a test run:
curl -o filename -w "HTTP CODE: %{http_code}\n" --insecure --key <YOUR KEY> --cert <YOUR CERT> https://xxxxxxxxxx-xxxxxx-xxxx.xxx.xxxxxxxxxx.com:443/api/v1/reports/active/week > http.out
#!/usr/bin/env perl
use warnings;
use strict;
use JSON qw(decode_json);
use autodie qw(open close chmod unlink);
use File::Basename;
use File::Path qw(make_path rmtree);
use Cwd qw(abs_path);
use Data::Dumper;
use feature qw(state);
sub get_fetched_dir {
return "$ENV{HOME}/tmp/mule_user_fetched";
}
# fetch from mulesoft server and save local copy
sub fetch_json {
state $now = time();
my ($url) = @_;
my $dir = get_fetched_dir();
if (!-e $dir) {
make_path($dir);
chmod 0700, $dir;
}
my ($offset) = $url =~ m{offset=(\d+)};
if (!defined $offset) {
$offset = 0;
}
$offset = sprintf ("%03d", $offset);
my $filename = "$dir/offset${offset}.json";
print "$filename\n";
my @fields = stat $filename;
my $size = $fields[7];
my $mtime = $fields[9];
if (!$size || !$mtime || $now-$mtime > 24*60*60) {
my $cmd = qq(curl \\
--insecure \\
--silent \\
--key $ENV{KEY} \\
--cert $ENV{CERT} \\
$url > $filename
);
#print $cmd;
system($cmd);
chmod 0700, $filename;
}
open my $fh, "<", $filename;
my @lines = <$fh>;
close $fh;
return undef if !@lines;
my $data;
eval {
$data = decode_json (join('',@lines));
};
if ($@) {
unlink $filename;
print "Bad JSON detected in $filename.\n";
print "I have deleted $filename.\n";
print "Please re-run script.\n";
exit(1);
}
return $data;
}
die "Usage:\n KEY=key_file CERT=cert_file mule_to_jira.pl\n"
if !defined $ENV{KEY} || !defined $ENV{CERT};
print "fetching data from mulesoft\n";
# populate %manager_to_directs
my %manager_to_directs;
my %user_to_manager;
my @users;
my $url = "https://enterprise-worker-data.eip.vzbuilders.com/api/v1/reports/active/week";
while ($url && $url ne "Null") {
my $data = fetch_json($url);
last if !defined $data;
$url = $data->{next};
#print $url;
my $items = $data->{items};
foreach my $item (@$items) {
my $shortId = $item->{shortId};
my $manager = $item->{organization}{manager};
push @users, $shortId;
next if !$manager;
$user_to_manager{$shortId} = $manager;
push @{$manager_to_directs{$manager}}, $shortId;
}
}
# populate %user_to_management_chain
# populate %manager_to_followers
my %user_to_management_chain;
my %manager_to_followers;
foreach my $user (keys %user_to_manager) {
my $manager = $user_to_manager{$user};
my $prev = $user;
while ($manager && $prev ne $manager) {
push @{$manager_to_followers{$manager}}, $user;
push @{$user_to_management_chain{$user}}, $manager;
$prev = $manager;
$manager = $user_to_manager{$manager}; # manager's manager
}
}
# write backyard.txt
open my $backyard_fh, ">", "backyard.txt";
foreach my $user (sort keys %user_to_management_chain) {
my $chain = join ',', @{$user_to_management_chain{$user}};
print $backyard_fh "$user:$chain\n";
}
close $backyard_fh;
# write teams.txt
open my $team_fh, ">", "teams.txt";
foreach my $user (sort @users) {
my $followers = $manager_to_followers{$user};
my $followers_joined = $followers ? join (',', sort @$followers) : "";
print $team_fh "$user:$followers_joined\n";
}
close $team_fh;
my $dir = get_fetched_dir();
rmtree $dir, {safe => 1};
So, if you want to keep the web fetch and the Perl processing decoupled, you can modify the curl command so that it includes the response header in the output by adding the -i option. That means that the Perl will have to be modified to read and process the headers before getting to the body. A successful http.out will look something like this:
HTTP/1.1 200 OK
Server: somedomain.com
Date: <date retrieved>
Content-Type: application/json; charset=utf-8
Content-Length: <size of JSON>
Status: 200 OK
Maybe: More Headers
Blank: Line signals start of body
{
JSON object here
}
An unsuccessful curl will have something other than 200 OK on the first line next to the HTTP/1.1, so you can tell that something went wrong.
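A rough sketch of how the Perl side might check that status line before trusting the body (assuming curl was run with -i so the headers precede the JSON in http.out):
use strict;
use warnings;

open my $fh, '<', 'http.out' or die "http.out: $!";

# first line looks like "HTTP/1.1 200 OK"; pull out the numeric code
my $status_line = <$fh>;
my ($code) = $status_line =~ m{^HTTP/\S+\s+(\d{3})};
die "Unexpected response: $status_line"
    unless defined $code && ($code == 200 || $code == 204);

# headers end at the first blank line; everything after it is the body
while (my $line = <$fh>) {
    last if $line =~ /^\s*$/;
}
my $body = do { local $/; <$fh> };
close $fh;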
Alternatively, you can let the Perl do the actual HTTP fetch instead of relying on curl; you can use LWP::UserAgent or any of a number of other HTTP client libraries in Perl, which will give you the entire response, not just the body.
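For illustration, a minimal sketch of the LWP::UserAgent route (the URL is a placeholder, and the ssl_opts shown mirror the curl flags in the question):
use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);

my $ua = LWP::UserAgent->new(
    ssl_opts => {
        SSL_key_file    => $ENV{KEY},   # same key/cert pair as the curl command
        SSL_cert_file   => $ENV{CERT},
        verify_hostname => 0,           # rough equivalent of curl --insecure
    },
);

my $url = 'https://example.com/api/v1/reports/active/week';  # placeholder
my $response = $ua->get($url);

# only decode the body on 200 or 204, as required
if ($response->code == 200 || $response->code == 204) {
    # a 204 carries no body, so only decode when there is content
    my $content = $response->decoded_content // '';
    my $data = length($content) ? decode_json($content) : undef;
    # ... process $data ...
} else {
    die "Fetch failed: " . $response->status_line . "\n";
}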

How to optimize this script performing INSERTS into a database?

So I already completed a script that inserts data into a MySQL table and moves each file into a directory until no files are left. There are around 51 files and it takes around 9 seconds to complete the execution. So my question is: is there a better way to speed up the execution process?
The code is:
our $DIR="/home/aimanhalim/LOG";
our $FILENAME_REGEX = "server_performance_";
# mariaDB config hash
our %db_config = ( "username"=>"root", "password"=> "", "db"=>"Top_Data", "ip" => "127.0.0.1", "port" => "3306");
main();
exit;
sub main()
{
my $start = time();
print "Searching file $FILENAME_REGEX in $DIR...\n";
opendir (my $dr , $DIR) or die "<ERROR> Cannot open dir: $DIR \n";
while( my $file = readdir $dr )
{
print "file in $DIR: [$file]\n";
next if (($file eq ".") || ($file eq "..") || ($file eq "DONE"));
#Opening The File in the directory
open(my $file_hndlr, "<$DIR/$file");
#Making Variables.
my $line_count = 0;
my %data = ();
my $dataRef = \%data;
my $move = "$DIR/$file";
print "$file\n";
while (<$file_hndlr>)
{
my $line = $_;
chomp($line);
print "line[$line_count] - [$line]\n";
if($line_count == 0)
{
# get load average from line 0
($dataRef) = get_load_average($line,$dataRef);
print Dumper($dataRef);
}
elsif ($line_count == 2)
{
($dataRef) = get_Cpu($line,$dataRef);
print Dumper($dataRef);
}
$line_count++;
}
#insert db
my ($result) = insert_record($dataRef,\%db_config,$file);
my $Done_File="/home/aimanhalim/LOG/DONE";
sub insert_record(){
my($data,$db_config,$file)=@_;
my $result = -1; # -1 fail; 0 - succ
# connect to db
# connect to MySQL database
my $dsn = "DBI:mysql:database=".$db_config->{'db'}.";host=".$db_config->{'ip'}.";port=".$db_config->{'port'};
my $username = $db_config->{'username'};
my $password = $db_config->{'password'};
my %attr = (PrintError=>0,RaiseError=>1 );
my $dbh = DBI->connect($dsn,$username,$password,\%attr) or die $DBI::errstr;
print "We Have Successfully Connected To The Database \n";
$stmt->execute(@param_bind);
****this line is insert data statement***
$stmt->finish();
print "The Data Has Been Inserted Successfully\n";
$result = 0;
return($result);
# commit
$dbh->commit();
# return succ / if fail rollback and return fail
$dbh->disconnect();
}
exit;
Edited:
So pretty much this is my code, with some snipping here and there.
I tried to put the insert_record call below the #insert db comment but I don't think that did anything.
You are connecting to the database for every file that you want to insert (if I read your code correctly; there seems to be a closing curly brace missing, so it won't actually compile). Opening new database connections is (comparatively) slow.
Open the connection once, before inserting the first file, and re-use it for subsequent inserts into the database. Close the connection after your last file has been inserted. This should give you a noticeable speed-up.
(Depending on the amount of data, 9 seconds might actually not be too bad; but since there is no information on that, it's hard to say.)
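For example, a rough sketch of that restructuring (the table and column names are made up for illustration, and the parsing is elided as in the original):
use strict;
use warnings;
use DBI;

my %db_config = ( username => 'root', password => '', db => 'Top_Data',
                  ip => '127.0.0.1', port => '3306' );

# connect once, before the file loop
my $dsn = "DBI:mysql:database=$db_config{db};host=$db_config{ip};port=$db_config{port}";
my $dbh = DBI->connect($dsn, $db_config{username}, $db_config{password},
                       { PrintError => 0, RaiseError => 1 }) or die $DBI::errstr;

# prepare the INSERT once as well and re-use the statement handle
my $sth = $dbh->prepare('INSERT INTO server_stats (filename, load_avg) VALUES (?, ?)');

foreach my $file (glob '/home/aimanhalim/LOG/server_performance_*') {
    # ... parse the file into %data as in the original script ...
    my %data = ( load_avg => 0.42 );    # placeholder value
    $sth->execute($file, $data{load_avg});
}

$sth->finish;
$dbh->disconnect;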

Remove trailing commas at the end of the string using Perl

I'm parsing a CSV file in which each line looks something like this:
10998,4499,SLC27A5,Q9Y2P5,GO:0000166,GO:0032403,GO:0005524,GO:0016874,GO:0047747,GO:0004467,GO:0015245,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
There seem to be trailing commas at the end of each line.
I want to get the first term, in this case "10998", and count the number of GO terms related to it.
So my output in this case should be,
Output:
10998,7
But instead it shows 299. I realized there are 303 commas in each line overall, and I'm not able to figure out an easy way to remove the trailing commas. Can anyone help me solve this issue?
Thanks!
My Code:
use strict;
use warnings;
open my $IN, '<', 'test.csv' or die "can't find file: $!";
open(CSV, ">GO_MF_counts_Genes.csv") or die "Error!! Cannot create the file: $!\n";
my @genes = ();
my $mf;
foreach my $line (<$IN>) {
chomp $line;
my @array = split(/,/, $line);
my @GO = splice(@array, 4);
my $GO = join(',', @GO);
$mf = count($GO);
print CSV "$array[0],$mf\n";
}
sub count {
my $go = shift @_;
my $count = my @go = split(/,/, $go);
return $count;
}
I'd use juanrpozo's solution for counting, but if you still want to go your own way, then remove the commas with a regex substitution:
$line =~ s/,+$//;
I suggest this more concise way of coding your program.
Note that the line my @data = split /,/, $line discards trailing empty fields (@data has only 11 fields with your sample data), so it will produce the same result whether or not the trailing commas are removed beforehand.
use strict;
use warnings;
open my $in, '<', 'test.csv' or die "Cannot open file for input: $!";
open my $out, '>', 'GO_MF_counts_Genes.csv' or die "Cannot open file for output: $!";
foreach my $line (<$in>) {
chomp $line;
my @data = split /,/, $line;
printf $out "%s,%d\n", $data[0], scalar grep /^GO:/, @data;
}
You can apply grep to @array:
my $mf = grep { /^GO:/ } @array;
assuming $array[0] never matches /^GO:/.
For each of your lines:
foreach my $line (<$IN>) {
my ($first_term) = ($line =~ /(\d+),/);
my @tmp = split('GO', " $line ");
my $nr_of_GOs = @tmp - 1;
print CSV "$first_term,$nr_of_GOs\n";
}