I was wondering how to keep images secure on my website. We have a site that requires login, after which the user can view thousands of different images, all named after their ID in the database.
Even though you need to log in to view the images the proper way, nothing is stopping a user from browsing through the images by typing <website-directory>/image-folder/11232.jpg or something similar.
This is not the end of the world, but it's definitely not ideal. I see that to stop this, Facebook just names the images something much more complicated and stores them in hashed folders.
Gmail does a very interesting thing; its image tags look like this:
<img src=/mail/?attid=0.1&disp=emb&view=att&th=12d7d49120a940e5>
I thought the src attribute had to contain a reference to an image? How does Gmail get around this?
This is more for educational purposes at this point, as I think this Gmail scheme might be overkill for our implementation.
Thanks for your feedback in advance,
Andrew
I thought the src attribute had to contain a reference to an image?
Gmail is referencing an image. It's just being pulled dynamically, probably based on that th=12d7d49120a940e5 string.
Try browsing to http://mail.google.com/mail/?attid=0.1&disp=emb&view=att&th=12d7d49120a940e5
Instead of it being a direct path to its location on the server's filesystem, it uses a dynamic script (the images may even be in a database, who knows).
Besides serving up an image dynamically from your webapp, it's also possible to use a webapp to dynamically authorize access to static resources that the webserver will serve -- commonly by putting the files somewhere that the webserver has access to, but not mapped to any public URI, and then using something like X-Sendfile (lighttpd, Apache with mod_xsendfile, others), X-Accel-Redirect (nginx), X-Reproxy-File (Perlbal), etc. Or with FastCGI you can configure an application in a FastCGI "authorizer" role rather than a content provider.
Any of these will let you check the image being authorized, and the user's session, and make whatever decision you need to, without tying up a process of your backend application for the entire time that the image is being sent to the client. It's not universally true, but usually a connection to the backend app represents a lot more resources being reserved than a connection to the webserver, so freeing them up ASAP is smart.
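For example, a minimal PHP sketch of the X-Sendfile approach, assuming Apache with mod_xsendfile enabled; the paths, parameter name, and login check are illustrative assumptions, not a drop-in implementation:

<?php
// image.php?img=11232 - hypothetical authorizer for a private image directory
// (requires XSendFile On / XSendFilePath in the Apache configuration)
session_start();

if (!isset($_SESSION['user_id'])) {   // whatever your login check is
    http_response_code(403);
    exit;
}

$img = basename($_GET['img'] ?? '');  // strip any path components
header('Content-Type: image/jpeg');
header('X-Sendfile: /srv/private-images/' . $img . '.jpg');
// Apache streams the file itself; the PHP process is freed immediately.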
The code that runs after this GET request is issued:
/mail/?attid=0.1&disp=emb&view=att&th=12d7d49120a940e5
outputs an image to the browser. Something doesn't have to be named with a .jpg or .png extension to be considered an image by a browser; the Content-Type response header is what matters. This is how CAPTCHA systems are able to serve up different images depending on a value in the URL. For example, this link:
http://www.google.com/recaptcha/api/image?c=03AHJ_VusfT0XgPXYUae-4RQX2qJ98iyf_N-LjX3sAwm2tv1cxWGe8pkNqGghQKBbRjM9wQpI1lFM-gJnK0Q8G3Nirwkec-nY8Jqtl9rwEvVZ2EoPlwZrmjkHT7SM32cCE8PLYXWMpEOZr5Uo6cIXz1mWFsz5Qad1iwA
serves up a CAPTCHA image.
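To make that concrete, here is a minimal PHP sketch of the idea; the script name, parameter, and storage path are assumptions:

<?php
// serve.php?id=11232 - returns image bytes from a URL with no image extension
$id = (int) ($_GET['id'] ?? 0);
$path = '/srv/private-images/' . $id . '.png';  // could just as well come from a database

if (!is_file($path)) {
    http_response_code(404);
    exit;
}
header('Content-Type: image/png');  // this header, not the URL, tells the browser it's an image
readfile($path);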
So the answer really is to just obfuscate your image names/links a bit like Facebook does so that people can't easily guess them.
Related
How exactly does one create a unique URL?
Like how Facebook does it: facebook.com/mynamehere
One way would be to create a new folder each time we get a new user, but that doesn't seem like the best approach.
You can try a program like Elgg if you are trying to build a social media site. Otherwise, a person's profile URL can be customized in a couple of ways: you can use .htaccess for rewrites, or an automated custom URL plugin (this may help: How to generate a custom URL from a html input?). As a last resort you can use your folder method, but only if absolutely required.
I think the question is: how is it done technically, so we don't need a physical file for every valid URL?
The answer is URL rewriting. In the case of Apache, you enable mod_rewrite and configure it to translate a particular URL pattern (like myfbclone.com/mynamehere to myfbclone.com/index.php?username=mynamehere). This way you only need one script file that handles all the URLs accordingly.
Different servers, like Nginx or IIS, have different means of rewriting URLs, so the exact configuration depends on your server, but the concept is usually the same.
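A minimal .htaccess sketch of that Apache rule; the username pattern and script name are assumptions to adapt:

RewriteEngine On
# Leave real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# myfbclone.com/mynamehere -> index.php?username=mynamehere
RewriteRule ^([A-Za-z0-9_.-]+)/?$ index.php?username=$1 [L,QSA]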
I have a splash page hosted at www.someserver.com and I'm looking to have one link on the page lead to a site hosted on another server, www.anotherserver.com. I probably need to keep these on distinct servers for the time being.
I'm hoping to have the whole thing appear under a single company domain, www.company.com, and to have it so that when a user clicks on this link, you see neither www.someserver.com nor www.anotherserver.com, just www.company.com.
I know that I can set up some kind of domain masking (we're using GoDaddy for our hosting), but I'm worried that clicking out to www.anotherserver.com is going to keep this from all appearing under the same domain.
Is there a way to set this up so that links from and to both of these servers appear as www.company.com?
The good news is that what you want to do is very common; it is called "proxying". I believe that GoDaddy uses Apache, so specifically you will want to look at mod_rewrite. More than likely you will simply have to provide a .htaccess file with the rewrite rules in it.
Here is how the proxying works, and where the gotchas come in:
Client hits www.company.com
www.company.com pulls content from www.anotherserver.com
www.company.com sends content back to the client.
The issues come in with the data that www.anotherserver.com sends back to the client. Enter the world of relative and absolute paths.
Let's say that the actual request is www.company.com/widget:
Client hits www.company.com/widget
www.company.com requests www.anotherserver.com/widget
www.anotherserver.com/widget returns a page but on that page it has a link to www.anotherserver.com/widget/image.jpg
www.company.com returns the content back to the client but now the client has a link to www.anotherserver.com/widget/image.jpg
You will need to make sure that your backend servers use relative paths instead of absolute paths. http://en.wikipedia.org/wiki/Path_(computing)
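As a sketch, the proxying rule itself might look like this in .htaccess, assuming the host has mod_proxy enabled (shared hosts often do not, so test before relying on it):

RewriteEngine On
# Fetch /widget/... from the other server, but keep www.company.com in the address bar
RewriteRule ^widget/(.*)$ http://www.anotherserver.com/widget/$1 [P,L]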
I was wondering what's the best way to switch a website to a temporary "under construction" page and then switch it back to the new version.
For example, on one website my customer decided to switch from Joomla to Drupal, and I had to create a subfolder for the new CMS and then move all the content to the root folder.
1) Moving all the content back to the root folder always creates some problems with file permissions, links, etc.
2) Creating a rewrite rule in .htaccess or forwarding with PHP is not a solution, because a different URL is shown, including the subfolder.
3) Many hosting services do not allow changing the root directory, so this is not an option since I don't have access to the Apache config file.
Thanks
Update: could I maybe forward only the domain (i.e. www.example.com) and leave the IP (i.e. 123.24.214.22) pointing at the root folder, so that access is different for me and for other people? Can I do this in the .htaccess file?
One thing to consider is that you don't want search engines to cache your under-construction page - and you also don't want them to drop your homepage from the search index either (hence just adding a "noindex" meta tag isn't the perfect solution).
A good way to deal with this is to do a 302 redirect (temporarily moved) from your homepage to your under-construction page - that way the search engine does not cache your homepage as an under-construction page, does not index your under-construction page (assuming it has a NOINDEX meta tag), and does not drop your homepage from the search index either.
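A minimal .htaccess sketch of that 302; the page name is an assumption, and remember the NOINDEX meta tag on the page itself:

RewriteEngine On
# Send everything except the construction page itself to it, temporarily (302)
RewriteCond %{REQUEST_URI} !^/under_construction\.html$
RewriteRule ^ /under_construction.html [R=302,L]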
One way would be the use of an include on your template page.
When you want the construction page to show, you set a redirect in the include to take all traffic to the construction page.
When you are done, you remove the redirect.
What about hijacking your index.php file?
Something simple, along the lines of
<?php
// Flip this to false to go live again.
define('SITE_OFFLINE', true);

if (SITE_OFFLINE) {
    include 'under_construction.html';
    exit;
}
// ...normal content of your index page follows...
?>
where you would naturally define SITE_OFFLINE in an appropriate place for your needs.
What I did when I used PHP for websites was to configure Apache to direct all requests to a front controller. You then have full access to all requests no matter where they point. Then, in your front controller (a PHP file, for example), you do whatever you need to do there.
I believe you need to configure pathinfo in Apache and some other settings; it has been about 3 years since I used that approach. But this approach is also good for developing your own CMS or application, so that you have full control over security.
You have to do something similar to this:
http://www.phpwact.org/pattern/front_controller
I am looking for more details; I know my configuration had more to it than that.
This is part of what I'm looking for too:
http://httpd.apache.org/docs/2.0/mod/core.html
Enabling path_info passes path information to the script, so all requests now go through a single point of entry. Let me find my configuration; I know vaguely how this works, but I'm sure it looks like a lot of hand waving.
Also, keep in mind that because all requests are going through this single PHP file, you are responsible for serving images, JavaScript, CSS, etc. So, if a request comes in for /css/default.css, it will go through your PHP script (index.php, most likely), and you'll need to determine how to handle it. Serving static files is trivial, but it is a little more work.
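Here is a minimal sketch of that static-file handling inside the front controller; the directory layout and MIME map are assumptions:

<?php
// index.php - every request is rewritten to this front controller
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

$types = array('css' => 'text/css', 'js' => 'application/javascript',
               'png' => 'image/png', 'jpg' => 'image/jpeg', 'gif' => 'image/gif');
$ext  = strtolower(pathinfo($path, PATHINFO_EXTENSION));
$file = realpath(__DIR__ . '/static' . $path);

// Serve static assets ourselves; realpath() keeps requests inside the static dir
if (isset($types[$ext]) && $file !== false && is_file($file)
        && strpos($file, realpath(__DIR__ . '/static')) === 0) {
    header('Content-Type: ' . $types[$ext]);
    readfile($file);
    exit;
}

// ...otherwise dispatch $path to the appropriate page handler...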
If you don't want to go that route, you could possibly do something with mod_rewrite so that it only looks for .html, .htm pages or however you have your site configured. For me, I don't do extensions, so that made my regex a little more difficult. I also wanted to secure access to all files. The path_info was the solution for me, but if you don't need that granularity, then writing a front controller might be a bit too much work.
Walter
It's an important security issue and I'm sure this should be possible.
A simple example:
You run a community portal. Users are registered and upload their pictures.
Your application has security rules governing when a picture is allowed to be displayed. For example, users must be mutual friends in the system in order to view each other's uploaded pictures.
Here is the problem: someone could crawl the image directories of your server, and you want to protect your users from such attacks.
If it's possible to put the binary data of an image directly into the HTML markup, you can restrict access to your image directories to the user and group your web application runs as, and deliver the image data directly in the HTML.
The only possible weakness then is the password of the user that your web app runs as.
Is there already a way to do this?
There are other (better) ways, described in other answers, to secure your files, but yes, it is possible to embed the image in your HTML.
Use the <img> tag this way:
<img src="data:image/gif;base64,xxxxxxxxxxxxx...">
where the xxxxx... part is the base64 encoding of the GIF image data.
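For example, a short PHP sketch that emits such a tag (the path is an assumption). Note that data URIs bloat the page and are not cached separately by the browser:

<?php
// Inline a private image as a base64 data URI
$bytes = file_get_contents('/srv/private-images/photo.gif');
echo '<img src="data:image/gif;base64,' . base64_encode($bytes) . '">';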
If I needed security on my images directory, I wouldn't expose the directory at all. Instead, my img src attributes would reference a page that takes a user ID and an image ID as parameters.
The page would validate that the user does indeed have access to see that picture. If everything's good, send the binary data back. Otherwise, send nothing.
for example:
<img src="imgaccess.php?userid=1111&imgid=223423" />
Also, I wouldn't use guessable IDs; instead, stick to something like base64-encoded GUIDs.
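A sketch of what imgaccess.php could look like, checking the session rather than trusting the userid parameter; the user_can_see() helper and the storage path are hypothetical:

<?php
// imgaccess.php?imgid=223423 - gatekeeper in front of a private image directory
session_start();

$imgId = preg_replace('/[^A-Za-z0-9_-]/', '', $_GET['imgid'] ?? '');

// user_can_see() is a hypothetical helper that consults your database
if (!isset($_SESSION['user_id']) || !user_can_see($_SESSION['user_id'], $imgId)) {
    http_response_code(403);  // send nothing useful back
    exit;
}
header('Content-Type: image/jpeg');
readfile('/srv/private-images/' . $imgId . '.jpg');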
I'm not sure I understand, but here goes. Instead of serving up static images that reside in an images folder, why couldn't you, using your server-side technology of choice, have the images dynamically sent down to the client? That way your server-side code gets in the mix and can allow or deny access programmatically.
<img src="/images/getImage.aspx?id=123353" />
You could move the pictures out of the document root into a private directory and deliver them through your application, which has access to that directory. Each time your app generates an image tag, it then also generates a short-lived security token which must be specified when accessing a particular image:
<img src="/app/getImage.xyz?image=12345&token=12a342e32b321" />
Chances are very slim that someone will brute-force the right token at the right time for the right image.
There are at least two ways to verify the token in "getImage":
Track all image tags in your app and store records in a database which link the randomly generated tokens and image IDs to the requesting users. The "getImage" action then checks the supplied parameters against that database.
Generate the token as a checksum (MD5, CRC, whatever) over the user ID, the image ID and maybe the current day of the year, and be sure to mix in an unguessable salt. The "getImage" action will then recompute the checksum and check it against the specified one in order to verify the user's access. This method will produce less overhead than the first one.
PHP example:
$token = md5($_SESSION['user_id'].' '.$imageID.' '.$SECRET_SALT.' '.date('z'));
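And the matching check in "getImage" might look like this; a sketch whose parameter names follow the example tag above:

<?php
session_start();
// $SECRET_SALT is defined elsewhere, as in the generation example above
$expected = md5($_SESSION['user_id'] . ' ' . $_GET['image'] . ' ' . $SECRET_SALT . ' ' . date('z'));
if (!hash_equals($expected, (string) ($_GET['token'] ?? ''))) {
    http_response_code(403);
    exit;
}
// ...load and output the approved image as usual...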
With HTML5 you could use the canvas tag and JavaScript to do this.
You could perhaps do something with either CSS or a table layout to draw a picture (probably really bad performance, resolution, portability).
Either way, there is no stopping people from taking your pics. They could take a screenshot and crop it out.
As Chris mentioned in his answer, having long picture IDs, so that the URL for each image is not easy to guess or brute-force, is important. So is disabling directory listings on your webserver.
https://www.base64-image.de/
I used this website to generate the base64 code for a given image; it provides code you can paste directly. It worked.
Title: Rotate Homepage Image (for website) - no longer works.
I am a physicist/wildlife artist with a website (created in 2002) to display and market my artwork. I have set it up with an underlying homepage image map having links to "tigers", "leopards", "birds", artist info, etc., with the overlying image changing (swapping out) every time the user navigates to/from the homepage. The links for each homepage have the same numerical coordinates and do not change locations from page to page; just the image changes. You can see my blank-page site at www.querryart.com. Note: the links below DO work.
The website was fabulous until last year. At that time my former webhost went out of business, and I changed to Jumpline.com. Since then, the commands which call canned subroutines do not work.
The routine which swaps out the image is named pid.cgi (stored in the cgi-bin).
Another one-line page-counter CGI routine I used at the end of each page called a canned program, "count.cgi", which counted visitors to that page, incremented "hits" per page, and stored them in a table displayed only to me. This was a way I could determine the popularity of various images. This CGI routine also no longer works, giving me an error message on each page.
Anyway, I am lost without these routines (particularly the first one to swap out images). Is it progress that my Cadillac website has turned into an empty wagon? Hope someone can help. I'm not a programmer.
My first guess is that you may need to change the line(s) at the top of your CGI files in order for the server to process them. For example, if using Perl, #!/usr/bin/perl is a common interpreter path, and so is #!/usr/local/bin/perl.
Oh, and have you set the permissions to 755?
For starters: http://www.querryart.com/cgi-bin/pid.cgi does not exist. You might want to make sure the file is uploaded to the correct place.
Make sure that your host supports CGI scripts.
Make sure your CGI scripts are uploaded to the correct location, according to your host's info on installing CGI scripts.
Make sure the scripts are executable (chmod 755)
Make sure the scripts call the correct interpreter (as pointed out by Steve).
From a quick check of your website, it looks like the scripts are not in the right place, because the webserver gives a 404 Not Found when I try to get /cgi-bin/pid.cgi.
Furthermore, the fact that the script takes an absolute path as a parameter (cfile=/home/querryar/httpdocs/cgi-bin/dicont.cnf) looks like a glaring security problem, allowing access to any files in your account. You should really consider a different solution.