SSI - test if a file exists - html

I'm dynamically adding SSI includes based on variables, and I would like to be able to have a default include in case a file doesn't exist, i.e.:
if /file/testthisfile.ssi exists
add /file/testthisfile.ssi
else
add /file/default.ssi
Is this possible?
Thanks!

No - I was afraid of that answer. But for anyone who might come across this question in the future, I did find a workaround for simple cases. You can edit the error message and, in my case, output an image:
<!--#config errmsg="<img src='/file/testthisfile.jpg' alt='' />" -->
So if the file doesn't exist, you can set a default.
I must underline that this will only work for simple cases, but it's a nice little workaround!

Actually, contrary to the other answers here, SSI does in fact support file existence tests. This is the syntax:
<!--#if expr="-A /private" -->
Click here to access private information.
<!--#endif -->
Support for the -A flag may need to be enabled in your Apache configuration.
The expressions used in this part of SSI have been factored out into an Apache expression module, documented here:
http://httpd.apache.org/docs/current/expr.html
but the -A flag is also available in the "legacy" SSI expression parser.

SSI does not support file detection.

I thought about this for a while, and indeed, ahgood was correct: SSI does not have a built-in file detection function, so flow control is limited.
As an aside, I did find a reference to an extended version of SSI (a VMS-based system)
http://wasd.vsm.com.au/doc/env/env_0400.html
and there were some extensions that would allow you to check for file existence in some fashion.
However, more often than not, if one were using SSI, one would probably be running in a LAMP environment, so one could take advantage of SSI's ability to run a CGI/PHP script in the include statement.
Without too much trouble, one could resort to:
<body>
<!--#include virtual="insert_intro.html" -->
<h2>Insert An Existing File</h2>
<!--#include virtual='checkFileExists.php?fn=insert_help.html&df=insert_default.html' -->
<h2>Insert a Non-Existing File</h2>
<!--#include virtual='checkFileExists.php?fn=insert_no_help.html&df=insert_default.html' -->
</body>
which uses a PHP script to do all the file checking:
<?php
$theFileName = $_GET['fn'];
$theDefault = $_GET['df'];
if ( file_exists($theFileName) === TRUE ) {
include($theFileName);
} else {
include($theDefault);
}
?>
I pass two file names, the intended file and the backup/default file; the script checks for the first and, if it is not found, uses the second.
This approach raises the question: why use SSI when PHP is available? In some cases, especially in a legacy system, there may be a big website based on SSI, and a workaround, though less elegant, would solve the problem.
PHP is not mandatory; a Perl script would also work.
Finally, I did experiment with trying to use PHP's apache_setenv, but I could not figure out how to pass environment variables between PHP, Apache, and SSI (I also tried setting $_SERVER and $_ENV variables, but without success).

Assuming you are running Apache 2.4, you can use the -F option (note the quoting).
<!--#if expr='-F "/private"' -->
Click here to access private information.
<!--#endif -->
From the docs (http://httpd.apache.org/docs/current/expr.html):
True if string is a valid file, accessible via all the server's
currently-configured access controls for that path. This uses an
internal subrequest to do the check, so use it with care - it can
impact your server's performance!
For the example to work, the Apache user will need access to the directory/file that you are testing. You may also need the following in a .htaccess or httpd.conf file:
<Directory /private>
Require all granted
</Directory>

You can do it like this:
<!--#include virtual="/file/testthisfile.ssi" onerror="/file/default.ssi" -->
Please note that the "-F" unary operator, like the "-A" unary operator, only refers to path accessibility and not to the actual existence of the resource.
Have a look here: http://httpd.apache.org/docs/2.4/expr.html (Unary operators).
Operators that perform such a test (-e, -s, -f) are not available under mod_include.

Related

Asterisk as a SIP client dynamic configuration

I am moving from Asterisk 1.x to 13.6. In the current implementation, to dynamically register/unregister Asterisk as different SIP clients I use the following trick: in sip.conf I include my custom conf file, which I update (add/remove) with "register =>..." lines, and then run "sip reload".
Do we have a better way to do this in the new Asterisk version?
As a variant, I would like to include in sip.conf not a single file but several files from a specific folder. Is that possible in Asterisk config files?
Thank you in advance!
Aside from using realtime (https://wiki.asterisk.org/wiki/display/AST/Realtime+Database+Configuration) and sorcery (https://wiki.asterisk.org/wiki/display/AST/Sorcery+Caching), you can use "exec".
I'm not sure this is the desired way to do this, but you can take advantage of the "exec" include, see: https://wiki.asterisk.org/wiki/display/AST/Using+The+include,+tryinclude+and+exec+Constructs
So Asterisk would execute a script of yours (shell, PHP, Ruby, etc.) that will output everything you need, and there's no need to add multiple "include" statements.
For this to work you should have in your asterisk.conf:
execincludes = yes
Not performant, not pretty, might have some security issues if you are not careful, but could do the job if you don't want to use any realtime or sorcery configuration.
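For illustration only (the script path and its contents below are hypothetical; the only documented pieces are the #exec line and execincludes = yes), the idea looks roughly like this:
; sip.conf - Asterisk runs the script and splices its output into the config
#exec /usr/local/bin/gen_sip_registrations.pl

#!/usr/bin/perl
# gen_sip_registrations.pl (hypothetical): print one register line per account.
use strict;
use warnings;
# In a real setup these would come from a database or some other source.
my @accounts = ('1001:secret1@sip.example.com', '1002:secret2@sip.example.com');
print "register => $_\n" for @accounts;
Note that #exec runs only when the configuration is parsed, so you still need a "sip reload" after the underlying data changes.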

Difference between name.html.erb vs name.erb

What is the difference between name.html.erb vs name.erb?
In particular, are there any reasons why name.erb could be bad to use?
I understand that name.html.erb is the convention: it indicates an HTML template and the ERB engine. But I can't find information on whether there are any reasons not to use name.html.erb and to use name.erb instead.
My new workplace asks me to use name.erb, so I want to know: might there be any problems with this?
In short, no, there won't be any problems. ERB files simply output text. In many cases the file extension is ignored by the reading app, which just reads/interprets the contained text and its syntax. As @taglia suggests, the file extensions are mostly a 'hint' for you and may also be used by the OS to select a default app to open the file with. See here for a more thorough explanation: Output Type for an ERB File
Rails convention dictates that template files include the extension of the output type and that the file name end with the .erb extension. As you mentioned, name.html.erb indicates an HTML template rendered by ERB, which allows instance variables set in your controller's index action to be passed into the template and used. Similarly, name.js.erb indicates a JavaScript template. See here under 'Conventions for Template Files': An Introduction to ERB Templating
ERB is just a templating language; it is not limited to HTML (you could have name.txt.erb or name.js.erb). Removing html from the name is just going to make your life more difficult (assuming it works at all), because you won't be able to tell what kind of file you are dealing with unless you open it.

Software error while executing CGI script

I have a CGI script for uploads, which is as follows:
#!/usr/bin/perl
use CGI;
use CGI::Carp qw(fatalsToBrowser);
my $cgi = new CGI;
my $file = $cgi->param('file');
$file=~m/^.*(\\|\/)(.*)/; # strip the remote path and keep the filename
my $name = $2;
open(LOCAL, ">/home/Desktop/$name") or die $!;
while(<$file>) {
$data .= $_;
}
print $cgi->header();
print "$file has been successfully uploaded... thank you.\n";
print $data;
The HTML file is as follows
<html>
<head>
<title>Test</title>
</head>
<body>
<form enctype="multipart/form-data" action="upload.cgi" method="post">
<input type="hidden" name="MAX_FILE_SIZE" value="30000" />
Send this file: <input name="userfile" type="file" />
<input type="submit" value="Send File" />
</form>
</body>
</html>
I am getting a weird error now:
Software error:
Is a directory at htdocs/upload.cgi line 9.
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.
TL;DR
Stop. Shelve your script right now. It is a gaping security hole just waiting to be exploited. Read the following resources:
perlsec
the CERT Perl Secure Coding Standard
(particularly the section on Input Validation and Data Sanitization)
the OWASP page on Unrestricted File Upload
the InfoSec page on Complete File Upload Vulnerabilities
the CWE page on Unrestricted Upload of File with Dangerous Type
the SANS recommendations for 8 Basic Rules to Implement Secure File Uploads.
When you have read--and understood--all of them, stop and think if you really need to let users upload files onto your server. Think long and hard. Can you really account for all of the listed vulnerabilities? If you still feel like you need to do this, consider enlisting the help of a security expert. Follow the guidelines laid out in the above resources carefully and understand that a mistake in your design could compromise your entire site.
I understand that this is just a test script, not a production application (at least, I really hope that's the case), but even so, what you are doing (and particularly how you are doing it) is a very, very bad idea. Here are a select few of the reasons why, from OWASP's page on Unrestricted File Upload:
The website can be defaced.
The web server can be compromised by uploading and executing a web-shell which can: run a command, browse the system files, browse the local resources, attack to other servers, and exploit the local vulnerabilities, and so on.
This vulnerability can make the website vulnerable to some other types of attacks such as XSS.
Local file inclusion vulnerabilities can be exploited by uploading a malicious file into the server.
More from OWASP:
Uploaded files represent a significant risk to applications. The first step in
many attacks is to get some code to the system to be attacked. Then the attack
only needs to find a way to get the code executed. Using a file upload helps
the attacker accomplish the first step.
The consequences of unrestricted file upload can vary, including complete
system takeover, an overloaded file system, forwarding attacks to backend
systems, and simple defacement.
Pretty scary stuff, huh?
The problems
Your code
Let's start by looking at some of the problems with the code you posted.
No strict, no warnings
Start putting use strict; use warnings; at the top of every Perl script you
ever write. I recently had the pleasure of fixing a CGI script that contained
a snippet something like this:
my ($match) = grep { /$usrname/ } @users;
This code was used to check that the username entered in an HTML form matched a
list of valid users. One problem: the variable $usrname was
misspelled (it should have been $username with an 'e'). Since strict
checking was off, Perl happily inserted the value of the (undeclared) global
variable $usrname, or undef. That turned the innocent-looking snippet into this monstrosity:
my ($match) = grep { // } @users;
which matches everything in the valid users list and returns the first
match. You could enter anything you wanted into the username field in the form
and the script would think you were a valid user. Since warnings were also off,
this was never caught during the development process. When you turn warnings on,
the script will still run and return a user, but you also get something like
this:
Name "main::usrname" used only once: possible typo at -e line 1.
Use of uninitialized value $usrname in regexp compilation at -e line 1.
When you also turn on strict, the script fails to compile and won't even run at
all. There are other problems with this snippet (for example, the string 'a' will match the username 'janedoe'), but strict and warnings at least alerted us to one major issue. I cannot stress this enough: always, always use strict; use warnings;
No taint mode
The first rule of web development is,
"Always sanitize user input." Repeat after me: Always sanitize user input. One more time: Always sanitize user input.
In other words, never
blindly trust user input without validating it first. Users (even those that are not malicious) are very good at entering creative values into form
fields that can break your application (or worse). If you don't restrict their creativity,
there is no limit to the damage a malicious user can do to your site (refer to the perennial #1
vulnerability on the OWASP Top 10,
injection).
Perl's taint mode can help with this. Taint mode forces you
to check all user input before using it in certain potentially dangerous operations like the
system() function. Taint mode is like the safety on a gun: it can prevent a lot of painful
accidents (although if you really want to shoot yourself in the foot, you can
always turn off the safety, like when you untaint a variable without actually removing dangerous characters).
Turn on taint mode in every CGI script you ever write. You can enable it by passing the -T flag, like this:
#!/usr/bin/perl -T
Once taint mode is enabled, your script will throw a fatal error if you try to
use tainted data in dangerous situations. Here's an example of such a dangerous situation that I found in a random script on the internet:
open(LOCAL, ">/home/Desktop/$name") or die $!;
Ok, I lied, that snippet isn't from a random script, it's from your code. In isolation, this snippet is just begging to be hit with a directory traversal attack, where a malicious user enters a relative path in order to access a file that they shouldn't have access to.
Fortunately, you've done something right here: you ensured that $name will contain no directory separators by using a regex*. This is exactly what taint mode would require you to do. The benefit of taint mode is that if you forget to sanitize your input, you will be alerted immediately with an error like this:
Insecure dependency in open while running with -T switch at foo.cgi line 5
Like strict, taint mode forces you to address problems in your code immediately by causing the program to fail, instead of allowing it to quietly limp along.
* You did something right, but you also did some things wrong (a sketch of a safer check follows this list):
Your program will die if the user passes in only a filename with no directory separators, e.g. foo
You don't remove special characters that could be interpreted by a shell, like |
You never sanitize the variable $file and yet you try to use it to read a file later in your code
You don't check if the file you're writing to already exists (see "No check for file existence" below)
You allow the user to choose the name of the file that will be stored on your server, which gives them far more control than you should be comfortable with (see "Allowing the user to set the file name" below)
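Here is that sketch: a hypothetical, safer version of the filename check that whitelists characters, untaints the value by capturing from a match, and fails politely instead of die()-ing. It only addresses the filename issues listed above, nothing else.
# Conceptually replaces the regex and "my $name = $2;" lines from the original script.
my $name;
# Take the part after the last / or \ so plain file names also work.
my ($base) = defined $file ? $file =~ m{([^/\\]+)\z} : ();
# Whitelist: must start with a word character; letters, digits, '.', '_', '-' only; max 64 chars.
if (defined $base && $base =~ /\A(\w[\w.-]{0,63})\z/) {
    $name = $1;    # capturing from the match is what untaints the value
} else {
    print $cgi->header, "Invalid file name.\n";    # fail politely instead of die()-ing
    exit;
}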
CGI::Carp fatalsToBrowser
I'll give you the benefit of the doubt on this one since you're still testing your script, but just in case you weren't aware and since I'm already talking about CGI security issues, never enable CGI::Carp's fatalsToBrowser option in a production environment. It can reveal intimate details about the inner workings of your script to attackers.
Two-argument open() and global filehandles
Two-argument open(), e.g.
open FH, ">$file"
has a host of security risks associated with it when users are allowed to specify the file path. Your script mitigates many of these by using a hard-coded directory prefix, but that in no way diminishes the fact that using two-argument open can be very dangerous. In general, you should use the three-argument form:
open my $fh, ">", $file
(which is still plenty dangerous if you allow the user to specify the file name; see "Allowing the user to set the file name" below).
Also note that instead of the global filehandle FH I switched to a lexical filehandle $fh. See CERT's page Do not use bareword filehandles for some reasons why.
No check for file existence
You don't check whether a file already exists at /home/Desktop/$name when you open it for writing. If the file already exists, you will truncate it (erase its contents) as soon as the open() call succeeds, even if you never write anything to the file. Users (malicious and otherwise) are likely to clobber each other's files, which doesn't make for a very happy user base.
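One way to avoid clobbering, as a sketch rather than a drop-in fix for the script above, is to ask the operating system to refuse to open a file that already exists:
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

# O_EXCL makes the open fail if the file already exists, so nothing gets truncated.
sysopen(my $out, "/home/Desktop/$name", O_WRONLY | O_CREAT | O_EXCL, 0644)
    or die "Cannot create /home/Desktop/$name: $!";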
No limit on file size
"But wait," you say, "I set MAX_FILE_SIZE in my HTML form!" Understand that this is merely a suggestion to the browser; attackers can easily edit HTTP requests to remove this condition. Never rely on hidden HTML fields for security. Hidden fields are plainly visible in the HTML source of your page and in the raw HTTP requests. You must limit the maximum request size on the server side to prevent users from loading massive files to your server and to help alleviate one type of denial of service attack. Set the $CGI::POST_MAX variable at the beginning of your CGI script like this:
$CGI::POST_MAX=1024 * 30; # 30KB
Or even better, find CGI.pm on your system and change the value of $POST_MAX to set it globally for all scripts that use the CGI module. That way you don't have to remember to set the variable at the beginning of every CGI script you write.
CGI doesn't match the HTML form
The POST variable you use for the file path in your HTML form, userfile, does not match the variable you look for in your CGI script, file. This is why your script is failing with the error
Is a directory
The value of $cgi->param('file') is undef, so your script tries to open the path /home/Desktop/ as a regular file.
Obsolete method for handling upload
You are using the old (and obsolete) method of handling uploads with CGI.pm where param() is used to get both the file name and a lightweight filehandle. This will not work with strict and is insecure. The upload() method was added in v2.47 (all the way back in 1999!) as a preferred replacement. Use it like this (straight out of the documentation for CGI.pm):
$lightweight_fh = $q->upload('field_name');
# undef may be returned if it's not a valid file handle
if (defined $lightweight_fh) {
# Upgrade the handle to one compatible with IO::Handle:
my $io_handle = $lightweight_fh->handle;
open (OUTFILE,'>>','/usr/local/web/users/feedback');
while ($bytesread = $io_handle->read($buffer,1024)) {
print OUTFILE $buffer;
}
}
where field_name is the name of the POST variable that holds the file name (in your case, userfile). Notice that the sample code does not set the output filename based on user input, which leads to my next point.
Allowing the user to set the file name
Never allow users to choose the file name that will be used on your server. If an attacker can upload a malicious file to a known location, it becomes significantly easier for them to exploit. Instead, generate a new, unique (to prevent clobbering), difficult-to-guess file name, preferably in a path outside your web root so users cannot access them directly with a URL.
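For example (a sketch; the directory and naming template below are made up), File::Temp can pick the server-side name for you:
use File::Temp qw(tempfile);

# A unique, randomly named file in a directory the web server does not serve.
my ($out, $stored_path) = tempfile(
    "upload_XXXXXXXX",          # the trailing X's are replaced with random characters
    DIR    => "/var/uploads",   # hypothetical directory outside the web root
    SUFFIX => ".dat",
    UNLINK => 0,                # keep the file after the script exits
);
# Record the mapping from $stored_path to the (sanitized) original name elsewhere,
# e.g. in a database, rather than trusting the user-supplied name.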
Other issues
You haven't even begun to address the following issues.
Authentication
Who is allowed to upload files using your web app? How will you ensure that only authorized users are uploading files?
Access control
Are users allowed to see the files uploaded by other users? Depending on the file content, there could be major privacy issues at stake.
Number and rate of uploads
How many files is one user allowed to upload? How many files is a user allowed to upload in a fixed period of time? If you don't restrict these, one user could easily eat up all of your server resources very quickly, even if you enforce a maximum file size.
Dangerous file types
How will you check that users are not uploading dangerous content (for example, executable PHP code) to your server? Simply checking the file extension or content type header is not enough; attackers have found some very creative methods for circumventing such checks.
"But, but, I'm only running this on my corporate intranet..."
You may be tempted to disregard these security issues if your script is not accessible from the internet. However, you still need to consider
In-office pranksters
Disgruntled coworkers
Collaborators and outside contractors who either need access to your app or who shouldn't have access
Managers who love your app so much that they decide to open it up to users on the internet without your knowledge, possibly after you've transferred to another group or left the company
"What should I do?"
Scrap your existing code. Read the resources I listed in the first paragraph, carefully. Here they are again:
perlsec
the CERT Perl Secure Coding Standard (particularly the section on Input Validation and Data Sanitization)
OWASP's Unrestricted File Upload
InfoSec's Complete File Upload Vulnerabilities
CWE's Unrestricted Upload of File with Dangerous Type
SANS recommendations for 8 Basic Rules to Implement Secure File Uploads
Consider carefully if you really need to do this. If you just need to give users a place to store files, consider using (S)FTP instead. This would certainly not eliminate all of the security risks, but it would eliminate a big one: your custom CGI code.
If after careful consideration you still think this is necessary, work through some recent Perl tutorials to make sure you can use and understand modern Perl programming conventions. Instead of CGI.pm, use a framework like Catalyst, Dancer, or Mojolicious, all of which have plugins that can handle tricky areas like user authentication and sessions so you don't have to re-invent the wheel (poorly).
Follow all of the security guidelines listed in the above resources and consider enlisting the help of an expert in web security. Tread carefully: a single mistake in your code could allow an attacker to compromise your entire site and possibly even other machines on your network. Depending on what country your company and your users are in, this could even have legal ramifications.
</soapbox>
A few suggestions that might get you closer to a solution.
use strict and use warnings (always use these).
CGI->new() instead of new CGI (not essential, but a good habit to get into).
Your file upload form input is called "userfile", but in your CGI code you call it "file". That inconsistency needs to be fixed.
You get the filename with $cgi->param('file'); that gets you the filename correctly (well, once you've fixed the error in my previous point), but you later try to treat that filename as a file handle (while (<$file>)). That's not going to work.
You should probably read the documentation about how to process a file upload field using CGI.pm.

Watch file(s) for modifications algorithm

I was simply wondering how file watching algorithms are implemented. For instance, let's say I want to apply a filter (i.e., search/replace a string) to a file every time it is modified, what technique should I use? Obviously, I could run an infinite loop that would check every file in a directory for modifications, but it might not be very efficient. Is there any way to get notified directly by the OS instead? For the sake of demonstration, let's assume a *nix OS and whatever language (C/Ruby/Python/Java/etc.).
Linux has inotify, and judging from the Wikipedia links, Windows has something similar called 'Directory Management'. Without something like inotify, you can only poll.
In Linux there is the inotify subsystem, which will alert you to file modification.
Java SE 7 will have File Change Notification as part of the NIO.2 updates.
There are wrappers to inotify that make it easy to use from high-level languages. For example, in ruby you can do the following with rb-inotify:
notifier = INotify::Notifier.new
# tell it what to watch
notifier.watch("path/to/foo.txt", :modify) {puts "foo.txt was modified!"}
notifier.watch("path/to/bar", :moved_to, :create) do |event|
puts "#{event.name} is now in path/to/bar!"
end
There's also pyinotify but I was unable to come up with an example as concise as the above.
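The same idea in Perl, as a small sketch using the Linux::Inotify2 wrapper (assuming that module is installed and you are on Linux):
#!/usr/bin/perl
use strict;
use warnings;
use Linux::Inotify2;   # CPAN wrapper around the inotify system calls

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# Run the callback every time the watched file is modified.
$inotify->watch("path/to/foo.txt", IN_MODIFY, sub {
    my $event = shift;
    print $event->fullname, " was modified!\n";
});

1 while $inotify->poll;   # block and dispatch events forever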

Best way/practice to ensure links are going to proper location when not on root of domain?

I've been wondering this for a while now: in a web app (RoR, Sinatra, PHP, anything), what is the best way to ensure that when you are creating links (either generating them with a method or writing them in by hand) they go to the proper place, whether the app is at the root of a domain or not: http://www.example.com/ or http://www.example.com/this/is/where/the/app/is/
My thought is to get the end-user to specify a document root somewhere in the config of your app and use that; however, I'm trying to think of a nice way to do it without the end-user having to configure anything.
Edit: By end-user, I mean the person setting up the application on a server.
Edit: I can use a leading '/' to always get the link relative to the domain, but the problem is: what if the app itself is not at the root but somewhere like http://www.example.com/this/is/where/the/app/is/? I want to say gen_link('/') and have it return /this/is/where/the/app/is/, or gen_link('/some/thing') and have it return /this/is/where/the/app/is/some/thing.
How about trying to set the base element in the head of your HTML layout?
First, get the URL, e.g. in the way Ilya suggests (if PHP is OK for you). After that you can use the base tag as follows:
<base href="<?= $full_site_url ?>" />
That will set the default URL for all the links and the browser will prepend it to every relative link on the page.
First of all, you need to route all your URLs through some kind of URL-rewriting function.
So you no longer write a hard-coded link like:
<a href="/some/thing">Foo</a>
But instead something like (using the question's gen_link() as a stand-in for whatever helper your framework provides):
<a href="<?php echo gen_link('/some/thing'); ?>">Foo</a>
All the web frameworks out there have a function like this. While they usually do all kinds of magic in there (to do with MVC controller paths, views, and what not), at the end of the function (conceptually) they all prepend your URL with a "root" (e.g. "/this/is/where/the/app/is/"), so as to allow you to create URLs in your application that are independent of a hard-coded base path.
RoR uses a configuration directive called "relative_url_root".
Symfony (php) uses a configuration directive also called "relative_url_root".
CakePHP uses a configuration directive called "WEBROOT_DIR".
In cases where these frameworks are running on Apache, this value is often calculated dynamically (if you haven't set it explicitly). On other web servers the environment variables are often not available or are incorrect, so this value cannot be determined consistently.
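Conceptually, such a helper is tiny: it just prepends a configured root. Here is a sketch (written in Perl purely for illustration, and borrowing the gen_link name from the question; any language works the same way):
use strict;
use warnings;

# Hypothetical: in a real app this would be read from configuration.
my $relative_url_root = '/this/is/where/the/app/is';

sub gen_link {
    my ($path) = @_;
    $path = "/$path" unless $path =~ m{^/};   # normalize to a leading slash
    return $relative_url_root eq '/' ? $path : $relative_url_root . $path;
}

print gen_link('/some/thing'), "\n";   # prints /this/is/where/the/app/is/some/thing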
Ilya's answer is a good one, but I think a simpler way to do this is just to precede all your links with a leading "/". This will ensure that they are always relative to the root of the domain:
<a href="/some/thing">Something</a> <!-- Always links to www.domain.com/some/thing -->
<a href="some/thing">Something</a> <!-- Actual destination depends on the current path -->
You can determine everything you need yourself, no need for configs.
Here’s a PHP example (let’s say index.php is your script name):
<?php
// Path to the folder the script lives in, as seen from the document root.
$folder_on_server = substr($_SERVER['PHP_SELF'], 0, strpos($_SERVER['PHP_SELF'], '/index.php'));
$server_name = $_SERVER['SERVER_NAME'];
// Append the port unless it is the default HTTP port.
if (80 != $_SERVER['SERVER_PORT']) {
    $server_name .= ':' . $_SERVER['SERVER_PORT'];
}
$full_site_url = 'http://' . $server_name . $folder_on_server;
?>
Now, you can always make a link like this:
<a href="<?= $full_site_url ?>/something">Something</a>
See also discussion in comments.