How can I prevent direct access to files using the .htaccess file?

I have a problem: I am trying to restrict direct-link access to files (e.g. www.domen.com/folder/subfolder/file.ext) so that they can only be accessed through HTML code like "<img src='/folder/subfolder/file.ext'>"...
I created a .htaccess file with the following lines:
# enable mod_rewrite
RewriteEngine On
# RewriteCond = define rule condition
# HTTP_REFERER = check from where the request originated
# ! = exclude
# ^ = start of string
# [NC] = case insensitive search
RewriteCond %{HTTP_REFERER} !^http://domen.com/folder/subfolder [NC]
# \. = a literal dot (the backslash escapes the dot's special meaning)
# () = group of alternatives
# $ = end of string
# [F] = forbidden, 403
# [L] = stop processing further rules
RewriteRule \.(gif|jpg|jpeg|png|mp4|mov|mkv|flv)$ - [F,L]
The permissions on my files are 0644, and on my folders and subfolders 0755.
The problem is this: with that code in the .htaccess file I do block direct access to the files, but at the same time I can't access them from my HTML either.
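One note on the rule as written: HTTP_REFERER carries the URL of the page that embeds the image, not the file's own folder, and many browsers and proxies send no referer at all. A sketch of the more usual hotlink-protection form, assuming the pages live under http://domen.com/ (adjust the host to yours):
RewriteEngine On
# let through requests that carry no referer at all
RewriteCond %{HTTP_REFERER} !^$
# let through any page of your own site, with or without www
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domen\.com/ [NC]
RewriteRule \.(gif|jpg|jpeg|png|mp4|mov|mkv|flv)$ - [F,L]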
<Directory platform/courses/*>
Order Allow,Deny
Allow from 1.1.1.1
Deny from All
</Directory>
<Directory>
Order Allow, Deny
Allow from All
</Directory>
I tried something like this (with IP addres taken from my cPanel) but I get this result:
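The usual result here is a 500 Internal Server Error: <Directory> sections are only allowed in the main server configuration (httpd.conf or a virtual host), never in .htaccess files. A rough per-directory equivalent that is legal in .htaccess, assuming the same Apache 2.2-style access directives as in the snippet above, is a <Files> section placed in the folder's own .htaccess:
<Files "*">
Order Deny,Allow
Deny from All
Allow from 1.1.1.1
</Files>
With Order Deny,Allow the later Allow wins for that address; note also that Order takes its keywords without a space ("Order Allow,Deny", not "Order Allow, Deny").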

This is a cut-down version of what I use, so it may have syntax errors.
The code is laid out to demonstrate the process. It can be made more robust.
The user is shown a URL with a filename using download.php.
This link can be posted anywhere.
Once the page is shown, it saves the filename, with an expiry of one hour,
in a cookie. If the cookie expires, the page has to be refreshed.
When the button is pressed, sendfile.php gets all the information
from the cookie, validates the expiry and filename, and sends the file.
download.php
This is the landing page that the user can link to.
Show the user a link like /download/document.pdf
Use .htaccess to map it to /download?name=document.pdf
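A minimal rewrite for that mapping might look like this (a sketch, assuming download.php sits in the web root; adjust to your layout):
RewriteEngine On
RewriteRule ^download/([^/]+)$ /download.php?name=$1 [L,QSA]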
<?php
// Returns the param by name; if not found, extract it from the current URL,
// so this works regardless of how the .htaccess redirection has been done.
function Get_Param ($name, $current_url)
{ // FILTER_SANITIZE_STRING is deprecated as of PHP 8.1;
// sendfile.php re-validates the name with a regex anyway.
$par = filter_var ($_GET [$name] ?? '', FILTER_SANITIZE_STRING);
if ($par)
{return $par;}
$pi = pathinfo ($current_url);
$pi = $pi['filename'] ?? '';
// no '=' in it means it is a bare filename, not a leftover query string
if (strpos ($pi, '=') === false)
{return $pi;}
else
{return '';}
}
//Save the filename in a cookie with an expiry of 1hr
function Save_Info ($file)
{ $expiry = time()+3610;
setcookie ('Download',$file, $expiry, '/');
}
//Get the filename from the param or the URL
$current_url = $_SERVER['REQUEST_URI'];
$file = Get_Param ('name', $current_url);
//Save to the cookie
Save_Info ($file);
// Show page with filename and details
// Show button with link "/sendfile"
?>
sendfile.php
This basically pretends to be the file that's being downloaded, and then dies.
<?php
// Sends the file and dies
function Send_File ($file)
{ header ('Content-Description: File Transfer');
header ('Content-Type: application/octet-stream');
header ('Content-Disposition: attachment; filename="'.basename($file).'"');
header ('Expires: 0');
header ('Cache-Control: must-revalidate');
header ('Pragma: public');
header ('Content-Length: ' . filesize($file));
flush ();
if (readfile ($file)) {
//LogIt ($file);
}
die ();
}
//Load the info from the cookie and unset it
function Load_Info ()
{ $file = $_COOKIE ['Download'] ?? '';
// expire the cookie on the browser too, so the link is single-use
setcookie ('Download', '', time()-3600, '/');
unset ($_COOKIE ['Download']);
return $file;
}
//Validate FileName
function Is_Valid ($file,$path)
{ if (!$file) {
return 'FileName is Blank';
}
$file = urldecode($file);
if (!preg_match('/^[^.][-a-z0-9_.]+[a-z]$/i', $file)) {
return 'Invalid FileName';
}
if (!file_exists($path.$file)) {
return 'File does not exist';
}
return false;
}
//Your actual path to the file
$path ='';
// Load the info from the cookie
$file = Load_Info ();
// Die if the filename is invalid or the cookie has expired
$error = Is_Valid ($file,$path);
if ($error) {
//Show the $error message and die
}
else
{Send_File ($path.$file);
}
?>

Related

Check for HTTP Code in fetch_json sub / save previous output for backup in Perl

So I have to update a Perl script that goes through a JSON file, fetches keys called “items”, and transforms those items into Perl output.
I’m a noob at Perl/coding in general, so please bear with me 🥺. The offset variable is set as each URL is iterated through. A curl command is passed to the terminal, the file is read into a "@lines" array, and in the end whatever JSON data is stored in $data gets decoded and transformed. The blocks below (where # populate %manager_to_directs, # populate %user_to_management_chain, and # populate %manager_to_followers are commented) are where fetch_json gets called and where the hash variables get the data from the decoded JSON. (Please feel free to correct me if I have interpreted this code incorrectly.)
There’s been a problem where the $cmd doesn’t account for the HTTP response every time this program is executed. I only want the results to be processed if and only if the program gets HTTP 200 (OK) or HTTP 204 (NO_CONTENT), because the program sometimes only partially refreshes our JSON endpoint (the URL in the curl command output from the terminal below), and sometimes doesn’t refresh it at all.
All I’m assuming is that I’d probably have to import the HTTP::Response module and somehow pull that out of the commands being run in fetch_json, but I have no other clue where to go from there.
Would I have to update the $cmd to pull the HTTP code? And if so, how would I interpret that in the fetch_json sub to exit the process if anything other than 200 or 204 is received?
Oh, and also: how would I save the previous output from the last execution in a backup file?
Any help I can get here would be highly appreciated!
See code below:
Pulling this from a test run:
curl -o filename -w "HTTP CODE: %{http_code}\n" --insecure --key <YOUR KEY> --cert <YOUR CERT> https://xxxxxxxxxx-xxxxxx-xxxx.xxx.xxxxxxxxxx.com:443/api/v1/reports/active/week > http.out
#!/usr/bin/env perl
use warnings;
use strict;
use JSON qw(decode_json);
use autodie qw(open close chmod unlink);
use File::Basename;
use File::Path qw(make_path rmtree);
use Cwd qw(abs_path);
use Data::Dumper;
use feature qw(state);
sub get_fetched_dir {
return "$ENV{HOME}/tmp/mule_user_fetched";
}
# fetch from mulesoft server and save local copy
sub fetch_json {
state $now = time();
my ($url) = @_;
my $dir = get_fetched_dir();
if (!-e $dir) {
make_path($dir);
chmod 0700, $dir;
}
my ($offset) = $url =~ m{offset=(\d+)};
if (!defined $offset) {
$offset = 0;
}
$offset = sprintf ("%03d", $offset);
my $filename = "$dir/offset${offset}.json";
print "$filename\n";
my @fields = stat $filename;
my $size = $fields[7];
my $mtime = $fields[9];
if (!$size || !$mtime || $now-$mtime > 24*60*60) {
my $cmd = qq(curl \\
--insecure \\
--silent \\
--key $ENV{KEY} \\
--cert $ENV{CERT} \\
$url > $filename
);
#print $cmd;
system($cmd);
chmod 0700, $filename;
}
open my $fh, "<", $filename;
my @lines = <$fh>;
close $fh;
return undef if !@lines;
my $data;
eval {
$data = decode_json (join('',@lines));
};
if ($@) {
unlink $filename;
print "Bad JSON detected in $filename.\n";
print "I have deleted $filename.\n";
print "Please re-run script.\n";
exit(1);
}
return $data;
}
die "Usage:\n KEY=key_file CERT=cert_file mule_to_jira.pl\n"
if !defined $ENV{KEY} || !defined $ENV{CERT};
print "fetching data from mulesoft\n";
# populate %manager_to_directs
my %manager_to_directs;
my %user_to_manager;
my @users;
my $url = "https://enterprise-worker-data.eip.vzbuilders.com/api/v1/reports/active/week";
while ($url && $url ne "Null") {
my $data = fetch_json($url);
last if !defined $data;
$url = $data->{next};
#print $url;
my $items = $data->{items};
foreach my $item (@$items) {
my $shortId = $item->{shortId};
my $manager = $item->{organization}{manager};
push @users, $shortId;
next if !$manager;
$user_to_manager{$shortId} = $manager;
push @{$manager_to_directs{$manager}}, $shortId;
}
}
# populate %user_to_management_chain
# populate %manager_to_followers
my %user_to_management_chain;
my %manager_to_followers;
foreach my $user (keys %user_to_manager) {
my $manager = $user_to_manager{$user};
my $prev = $user;
while ($manager && $prev ne $manager) {
push @{$manager_to_followers{$manager}}, $user;
push @{$user_to_management_chain{$user}}, $manager;
$prev = $manager;
$manager = $user_to_manager{$manager}; # manager's manager
}
}
# write backyard.txt
open my $backyard_fh, ">", "backyard.txt";
foreach my $user (sort keys %user_to_management_chain) {
my $chain = join ',', @{$user_to_management_chain{$user}};
print $backyard_fh "$user:$chain\n";
}
close $backyard_fh;
# write teams.txt
open my $team_fh, ">", "teams.txt";
foreach my $user (sort @users) {
my $followers = $manager_to_followers{$user};
my $followers_joined = $followers ? join (',', sort @$followers) : "";
print $team_fh "$user:$followers_joined\n";
}
close $team_fh;
my $dir = get_fetched_dir();
rmtree $dir, {safe => 1};
So, if you want to keep the web fetch and the Perl processing decoupled, you can modify the curl command so that it includes the response header in the output by adding the -i option. That means that the Perl will have to be modified to read and process the headers before getting to the body. A successful http.out will look something like this:
HTTP/1.1 200 OK
Server: somedomain.com
Date: <date retrieved>
Content-Type: application/json; charset=utf-8
Content-Length: <size of JSON>
Status: 200 OK
Maybe: More Headers
Blank: Line signals start of body
{
JSON object here
}
An unsuccessful curl will have something other than 200 OK on the first line next to the HTTP/1.1, so you can tell that something went wrong.
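A rough sketch of that check inside fetch_json, untested, assuming the curl command has been changed to include -i so the whole response (headers plus body) lands in $filename:
open my $fh, "<", $filename or die "open $filename: $!";
my $raw = do { local $/; <$fh> };   # slurp the whole response
close $fh;
# headers and body are separated by the first blank line
my ($headers, $body) = split /\r?\n\r?\n/, $raw, 2;
my ($code) = $headers =~ m{^HTTP/\S+\s+(\d{3})};
if (!defined $code || ($code != 200 && $code != 204)) {
die "Got HTTP " . ($code // 'unknown') . ", refusing to process $filename\n";
}
my $data = ($code == 204 || !$body) ? undef : decode_json($body);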
Alternatively, you can let the Perl do the actual HTTP fetch instead of relying on curl; you can use LWP::UserAgent or any of a number of other HTTP client libraries in Perl, which will give you the entire response, not just the body.
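For the LWP route, a minimal sketch (untested; it keeps the KEY/CERT environment variables from the script, and it needs LWP::Protocol::https installed):
use LWP::UserAgent;
use JSON qw(decode_json);
sub fetch_json_lwp {
my ($url) = @_;
my $ua = LWP::UserAgent->new(
ssl_opts => {
SSL_key_file    => $ENV{KEY},
SSL_cert_file   => $ENV{CERT},
verify_hostname => 0,   # rough equivalent of curl --insecure
},
);
my $res = $ua->get($url);
# only proceed on 200 OK or 204 No Content, as required
if ($res->code != 200 && $res->code != 204) {
die "Fetch failed: " . $res->status_line . "\n";
}
return undef if $res->code == 204;   # nothing to decode
return decode_json($res->decoded_content);
}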

yii2 cannot read JSON file in backend/web folder

I have a JS file at backend/web/js/my.js:
var conf = function() {
return ('../conf.json');
}
But it cannot read the conf.json file. I get a 403 error. What is the problem?
Thanks anyway
You should use
var conf = function() {
return ('../js/conf.json');
}
or build the proper URL using the Url helper:
<?= 'var url_base = "' . \yii\helpers\Url::base() .'";'; ?>
var conf = function() {
return ( url_base + '/js/conf.json');
}
You need to add some rules to the .htaccess file in the web root:
# if the files are in the root
RewriteRule conf.php conf.php [L]
# or
RewriteRule conf.json conf.json [L]
# if the files are in a directory under the root
RewriteRule ^js/(.*)$ js/$1 [L]
# or an images directory
RewriteRule ^images/(.*)$ images/$1 [L]
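For these pass-through rules to have any effect, they must appear before the framework's catch-all rule (yii2's default .htaccess typically ends with RewriteRule . index.php), since [L] stops rewriting at the first matching rule.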

Perl mechanize print HTML form names

I'm trying to automate Hotmail login. How can I find out what the appropriate fields are? When I print the forms I just get a bunch of hex information.
What's the correct method, and how is it used?
use WWW::Mechanize;
use LWP::UserAgent;
my $mech = WWW::Mechanize->new();
my $url = "http://hotmail.com";
$mech->get($url);
print "Forms: $mech->forms";
if ($mech->success()){
print "Successful Connection\n";
} else {
print "Not a successful connection\n"; }
This may help you:
use WWW::Mechanize;
use Data::Dumper;
my $mech = WWW::Mechanize->new();
my $url = "http://yoururl.com";
$mech->get($url);
my @forms = $mech->forms;
foreach my $form (@forms) {
my @inputfields = $form->param;
print Dumper \@inputfields;
}
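If you only need a quick look, recent versions of WWW::Mechanize also provide a dump_forms convenience method that prints a summary of every form on the page to STDOUT:
$mech->dump_forms;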
Sometimes it is useful to look at what the web site is asking in advance of coding up a reader or interface to it.
I wrote this bookmarklet that you save in your browser bookmarks; when you click it while visiting any HTML web page, it will show all the forms' actions and fields (including hidden ones, with their values) in a pop-up. Simply copy the text below, paste it into a new bookmark's location field, name it, and save.
javascript:t=%22<TABLE%20BORDER='1'%20BGCOLOR='#B5D1E8'>%22;for(i=0;i<document.forms.length;i++){t+=%22<TR><TH%20colspan='4'%20align='left'%20BGCOLOR='#336699'>%22;t+=%22<FONT%20color='#FFFFFF'>%20Form%20Name:%20%22;t+=document.forms[i].name;t+=%22</FONT></TH></TR>%22;t+=%22<TR><TH%20colspan='4'%20align='left'%20BGCOLOR='#99BADD'>%22;t+=%22<FONT%20color='#FFFFFF'>%20Form%20Action:%20%22;t+=document.forms[i].action;t+=%22</FONT></TH></TR>%22;t+=%22<TR><TH%20colspan='4'%20align='left'%20BGCOLOR='#99BADD'>%22;t+=%22<FONT%20color='#FFFFFF'>%20Form%20onSubmit:%20%22;t+=document.forms[i].onSubmit;t+=%22</FONT></TH></TR>%22;t+=%22<TR><TH>ID:</TH><TH>Element%20Name:</TH><TH>Type:</TH><TH>Value:</TH></TR>%22;for(j=0;j<document.forms[i].elements.length;j++){t+=%22<TR%20BGCOLOR='#FFFFFF'><TD%20align='right'>%22;t+=document.forms[i].elements[j].id;t+=%22</TD><TD%20align='right'>%22;t+=document.forms[i].elements[j].name;t+=%22</TD><TD%20align='left'>%20%22;t+=document.forms[i].elements[j].type;t+=%22</TD><TD%20align='left'>%20%22;if((document.forms[i].elements[j].type==%22select-one%22)%20||%20(document.forms[i].elements[j].type==%22select-multiple%22)){t_b=%22%22;for(k=0;k<document.forms[i].elements[j].options.length;k++){if(document.forms[i].elements[j].options[k].selected){t_b+=document.forms[i].elements[j].options[k].value;t_b%20+=%20%22%20/%20%22;t_b+=document.forms[i].elements[j].options[k].text;t_b+=%22%20%22;}}t+=t_b;}else%20if%20(document.forms[i].elements[j].type==%22checkbox%22){if(document.forms[i].elements[j].checked==true){t+=%22True%22;}else{t+=%22False%22;}}else%20if(document.forms[i].elements[j].type%20==%20%22radio%22){if(document.forms[i].elements[j].checked%20==%20true){t+=document.forms[i].elements[j].value%20+%20%22%20-%20CHECKED%22;}else{t+=document.forms[i].elements[j].value;}}else{t+=document.forms[i].elements[j].value;}t+=%22</TD></TR>%22;}}t+=%22</TABLE>%22;mA='menubar=yes,scrollbars=yes,resizable=yes,height=800,width=600,alwaysRaised=yes';nW=window.open(%22/empty.html%22,%22Display_Vars%22,%20mA);nW.document.write(t);
I tried to mimic the POST request that sends your login info, but the web site seems to be dynamically adding a bunch of IDs (long generated strings etc.) to the URL, and I couldn't figure out how to imitate them. So I wrote the hacky work-around below.
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Curl::Easy;
use Data::Dumper;
my $curl = WWW::Curl::Easy->new;
#this is the name and complete path to the new html file we will create
my $new_html_file = 'XXXXXXXXX';
my $password = 'XXXXXXXX';
my $login = 'XXXXXXXXX';
#escape the .
$login =~ s/\./\\./g;
my $html_to_insert = qq(<script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js"></script><script type="text/javascript">setTimeout('testme()', 3400);function testme(){document.getElementById('res_box').innerHTML = '<h3 class="auto_click_login_np">Logging in...</h3>';document.f1.passwd.value = '$password';document.f1.login.value = '$login';\$("#idSIButton9").trigger("click");}var counter = 5;setInterval('countdown()', 1000);function countdown(){document.getElementById('res_box').innerHTML = '<h3 class="auto_click_login_np">You should be logged in within ' + counter + ' seconds</h3>';counter--;}</script><h2 style="background-color:#004c00; color: #fff; padding: 4px;" id="res_box" onclick="testme()" class="auto_click_login">If you are not logged in after a few seconds, click here.</h2>);
$curl->setopt(CURLOPT_HEADER,1);
my $url = 'https://login.live.com';
$curl->setopt(CURLOPT_URL, $url);
# A filehandle, reference to a scalar or reference to a typeglob can be used here.
my $response_body;
$curl->setopt(CURLOPT_WRITEDATA, \$response_body);
open( my $fresh_html_handle, '+>', 'fresh_html_from_login_page.html');
# Starts the actual request
my $curl_return_code = $curl->perform;
# Looking at the results...
if ($curl_return_code == 0) {
print("Transfer went ok\n");
my $response_code = $curl->getinfo(CURLINFO_HTTP_CODE);
# judge result and next action based on $response_code
print $fresh_html_handle $response_body;
} else {
# Error code, type of error, error message
print("An error happened: $curl_return_code ".$curl->strerror($curl_return_code)." ".$curl->errbuf."\n");
}
close($fresh_html_handle);
# opening with ">" truncates any pre-existing edited file
open my $erase_html_handle, ">", $new_html_file or die "Hork! $!\n";
close $erase_html_handle;
#open the file with the login page html
open( FH, '<', 'fresh_html_from_login_page.html');
open( my $new_html_handle, '>>', $new_html_file);
my $tracker=0;
while( <FH> ){
if( $_ =~ /DOCTYPE/){
$tracker=1;
print $new_html_handle $_;
} elsif($_ =~ /<\/body><\/html>/){
#now add the javascript and html to automatically log the user in
print $new_html_handle "$html_to_insert\n$_";
}elsif( $tracker == 1){
print $new_html_handle $_;
}
}
close(FH);
close($new_html_handle);
my $sys_call_res = system("firefox file:///usr/bin/outlook_auto_login.html");
print "\n\nresult: $sys_call_res\n\n";

Number of times a file (e.g., PDF) was accessed on a server

I have a small website with several PDFs free for download. I use StatCounter to observe the number of page loads. It also shows me the number of PDF downloads, but it only counts downloads where a user clicks the link on my website. But what about "external" access to the PDFs (e.g., directly from a Google search)? How can I count those? Is there any way to do it with a tool such as StatCounter?
Thanks.
.htaccess (redirect *.pdf requests to download.php):
RewriteEngine On
RewriteRule \.pdf$ /download.php
download.php:
<?php
$url = $_SERVER['REQUEST_URI'];
if (!preg_match('/([a-z0-9_-]+)\.pdf$/', $url, $r) || !file_exists($r[1] . '.pdf')) {
header('HTTP/1.0 404 Not Found');
echo "File not found.";
exit(0);
}
$filename = $r[1] . '.pdf';
// [do your statistics here]
header('Content-type: application/pdf');
header("Content-Disposition: attachment; filename=\"$filename\"");
readfile($filename);
?>
You can use a log analyzer to check how many times a file was accessed. Ask your hosting provider whether they give access to the access logs and to log-analyzer software.
In PHP, it would be something like this (untested):
$db = mysqli_connect(...); // note: the old mysql_* API was removed in PHP 7
$file = $_GET['file'];
$allowed_files = [/* ... */]; // or check in a database
if (in_array($file, $allowed_files) && file_exists($file)) {
header('Content-Description: File Transfer');
header('Content-Type: application/pdf');
header('Content-Disposition: attachment; filename="'.basename($file).'"');
header('Content-Transfer-Encoding: binary');
header('Expires: 0');
header('Cache-Control: must-revalidate');
header('Pragma: public');
header('Content-Length: ' . filesize($file));
ob_clean();
flush();
// use a prepared statement so the filename cannot inject SQL
$stmt = mysqli_prepare($db, 'UPDATE files SET count = count + 1 WHERE file = ?');
mysqli_stmt_bind_param($stmt, 's', $file);
mysqli_stmt_execute($stmt);
readfile($file);
exit;
} else {
/* issue a 404, or redirect to a not-found page */
}
You would have to create a way to capture the request on the server.
If you are using PHP, the best option is probably mod_rewrite.
If you are using .NET, an HttpHandler.
You must handle the request, call StatCounter, and then send the PDF content to the user.
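The .htaccess-plus-download.php pair in the first answer above is exactly this pattern; the statistics call goes where the file is streamed back.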

nginx: location condition + secure_link

I'm having a hard time figuring out the correct configuration.
I have a URL, e.g.
http://example.com/[md5-checksum]/[num-value-1]/[num-value-2]/[file-name]
http://example.com/ac64392dba67d618ea6a76843c006708/123/56789/test.jpg
I want to make sure that the md5 checksum matches salt + num-value-2. The file name and num-value-1 should be ignored when building the md5 checksum (they are only needed for the filename header).
The following configuration does not achieve what I want:
location ~* ^/download/([a-zA-Z0-9]+)/([0-9]+)/([0-9]+)/(.*)$ {
secure_link $3;
secure_link_md5 segredo$3;
if ($secure_link = "") {
return 500;
}
set $filename $4;
add_header Content-Disposition "attachment; filename=$filename";
rewrite ^/download/([a-zA-Z0-9]+)/([0-9]+)/([0-9]+)/(.*)$ /$2/$3 break;
}
I appreciate any help!
secure_link $3
should have been
secure_link $1
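With that change, the relevant directives become:
secure_link $1;             # the checksum taken from the URL
secure_link_md5 segredo$3;  # salt + num-value-2, which must reproduce it
secure_link holds the hash supplied by the client, while secure_link_md5 is the expression nginx hashes for comparison; with $3 in both places, nginx was comparing md5("segredo" + num-value-2) against num-value-2 itself, which can never match. One caveat, per the secure_link module documentation: nginx compares against the binary MD5 encoded in base64url, not the hex digest, so the link generator has to produce something like:
echo -n 'segredo56789' | openssl md5 -binary | openssl base64 | tr +/ -_ | tr -d =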