I have a web app that stores videos. I am using a Java servlet (over HTTPS) which verifies a username and password. Once the details are verified, I redirect the user to a video stored in AWS S3. For those who don't know how S3 works, it's just a web service that stores objects (basically, think of it as storing files). It also uses HTTPS. Now obviously, to make this work, the S3 object (file) is public. I've given it a random name full of numbers and letters.
So the servlet basically looks like this:
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    if (authenticateUser(request.getParameter("Username"), request.getParameter("Password"))) {
        response.sendRedirect("https://s3.amazonaws.com/myBucket/xyz1234567.mp4");
    }
}
This is obviously simplified, but it gets the point across. Are there any very obvious security flaws here? The video tag will obviously have a source of something like https://www.mysite.com/getVideo?Username="Me"&Password="randomletters". At first blush it seems like it should be as secure as anything else, assuming I give the files sitting at AWS S3 sufficiently random names?
The obvious security flaw is that anybody could detect which URL the authentication servlet redirects to and share that URL with all their friends, thus allowing anyone to access the resource directly, without going through the authentication servlet.
Unfortunately, I don't know S3 at all, so I can't recommend any fix to the security problem.
All this mechanism provides is very limited obfuscation: using the developer tools in most modern browsers (or a proxy such as Fiddler), a user can watch the URL of the video that's being loaded and, if it's in a public S3 bucket, simply share the link.
With S3 your only real solution would be to secure the bucket and then either require the user to log in or use temporary, pre-signed URLs for access [http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html] ... though this does complicate your solution.
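For illustration, a rough sketch of what issuing such a temporary (pre-signed) link could look like with the AWS SDK for Java v1, reusing the bucket and object name from the question; treat the class layout and the expiry time as assumptions, not a drop-in implementation:

import java.net.URL;
import java.util.Date;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class VideoLinks {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Returns a URL that stops working after a few minutes; the bucket itself stays private.
    public URL presignedVideoUrl(String objectKey) {
        Date expiration = new Date(System.currentTimeMillis() + 5 * 60 * 1000); // 5 minutes
        GeneratePresignedUrlRequest presignRequest =
                new GeneratePresignedUrlRequest("myBucket", objectKey)
                        .withMethod(HttpMethod.GET)
                        .withExpiration(expiration);
        return s3.generatePresignedUrl(presignRequest);
    }
}

The servlet would then call response.sendRedirect(presignedVideoUrl("xyz1234567.mp4").toString()) after a successful login, and a shared link simply expires on its own.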
I would also mention that passing the username and password in plaintext on the link to the video asset (as in the question above) is very insecure.
Consider a system where we want to send someone a plain HTML+JS file that, when loaded in a browser, "executes" itself. (The inspiration is Portable Secret, which password-protects secrets in a file that can be shared offline, for a very convenient user experience.)
The system has lots of flaws. One of them is that the HTML file could be modified while it's sitting on disk to do anything - for instance, you could tamper with it so that when the password is supplied, it sends the secrets over the network to the attacker.
Now, we don't have this problem with most apps these days because they are signed. If you tamper with them, when the OS launches the app, it will (greatly simplifying) hash its contents, and notice that it no longer matches the signature. The signature can't be faked for the usual public-key crypto reasons, blah blah.
So, the question, finally: is there any equivalent anti-tampering standard we can use for an HTML page, stored offline?
I thought that maybe there would be something in Progressive Web Apps, perhaps putting a signature in the manifest, but I don't see anything immediately relevant. The behavior can't be anything defined in the HTML+JS file itself, obviously; it must be something the browser does automatically to check the contents. It might be acceptable if it has to do a network request to do it.
There are a few approaches you could take to try to prevent tampering with an HTML+JS file stored offline:
Sign the file: One approach is to sign the file using a private key and then include the signature in the file. When the file is loaded in the browser, the browser could verify the signature using the corresponding public key. This would prevent tampering with the file because any changes to the file would invalidate the signature.

Use a Content Security Policy: You could use a Content Security Policy (CSP) to specify which sources are allowed to be used by the HTML+JS file. This would limit tampering with the file by blocking attempts to load external resources or execute injected code.

Use a Service Worker: Another option is to use a Service Worker to cache the HTML+JS file and serve it from the cache. Changes to the file on disk would then not be reflected in the cached version served by the Service Worker.
Ultimately, it's important to note that there is no foolproof way to prevent tampering with an HTML+JS file stored offline. It's always possible for an attacker to modify the file, so it's essential to be aware of this risk and take steps to mitigate it as much as possible.
I am building my personal website using Jekyll and hosting it on GitHub Pages. I would like to have a password-protected area (just a password-protected directory, not the whole website). I have tried a few options and tricks to get .htaccess to work, but failed.
I would like to know if someone has managed to use .htaccess, or any other method, to protect a directory on GitHub Pages.
Listing solutions which did not work for me (or I failed to get them to work):
*Flohei.
*Jeremy Ricketts.
GitHub Pages (like Bitbucket Pages and GitLab Pages) only serves static pages, so the only solution is something client-side (JavaScript).
A solution could be, instead of using real authentication, to share a single secret (password) with all the authorized people and implement one of the following schemes:
Put all the private files in an unlisted subdirectory and name it with the hash of the chosen password. The index page asks for the password (with JavaScript) and builds the correct start link by calculating the hash.
See for example: https://github.com/matteobrusa/Password-protection-for-static-pages
PRO:
Very simple approach protecting a whole subdirectory tree
CONS:
possible attack: sniffing the subsequent requests reveals the name of the subdirectory
the admins on the hosting site have access to the full contents
Encrypt the page with the password and decrypt it on the fly with JavaScript.
see for example: https://github.com/robinmoisson/staticrypt
PRO: no plaintext page content is stored anywhere (decryption happens on the client side)
CONS:
it protects just a single page, and the password has to be re-entered on every refresh
an admin could change your JavaScript code to capture the password when you enter it
One option is to use Cloudflare Access to control access at the Cloudflare proxy in front of your site.
After setting up a custom domain for your GitHub Pages site using Cloudflare for DNS, you can use their Access policy rules to require authentication at the specified URL path.
This could still be bypassed by someone who knows how to get around this kind of DNS-based fronting (for example, by requesting the underlying *.github.io address directly).
https://www.cloudflare.com/products/cloudflare-access/
You can give Jekyll Auth a try, and if you run into trouble, this issue can be useful.
You can use Render to deploy your static web app. There is an npm package that encrypts your HTML files so a user cannot read them in the browser, which lets you do front-end password validation.
I have crawled the web quite a lot these days but couldn't find any accurate information on how crossdomain.xml files behave in the case of 302 redirects, especially with the sandboxes having changed significantly over the last versions!
I am relatively new to Flash... so any advice is more than appreciated!
I have been working on a project lately that uses audio streams with some sort of CDN distribution. What happens is that a common URL is requested, and then the user is dynamically redirected to the next best server available. In my case, I have no access to the server side of things (at least not anytime soon), and the only path providing an appropriate crossdomain.xml is the one performing the redirect. All the other dynamic paths serve content exclusively!
http://resource.domain.com (valid crossdomain.xml)
302 => http://dyn1.domain.com/...
302 => http://dyn2.domain.com/...
302 => http://dyn3.domain.com/...
I noticed that Flash doesn't care much if I try to load the audio stream with something like...
var req:URLRequest = new URLRequest("http://resource.domain.com");
var sound:Sound = new Sound(req); // i.e. effectively playing http://dyn3.domain.com
sound.play();
It gets both the redirecting and the streaming done well, doesn't ask for any crossdomain file, and just starts playing!
However, when I try something different, like setting some custom headers on the request and loading the file with URLStream instead, everything gets messy! The redirect still happens as expected, but all of a sudden I need another crossdomain file at the redirected location!
Is there any explanation for what's happening, and possibly a way to resolve this?
Thanks for your time!
As a side question: I noticed everything works flawlessly in the local-trusted sandbox, and the errors happen mainly, if not exclusively, in the remote sandbox. Is it possible that the local-trusted sandbox doesn't care about cross-domain policy files at all?
Summary
Add a crossdomain.xml to each CDN host, or adapt to the limited Sound functionality.
Details
SWF files that are assigned to the local-trusted sandbox can interact with any other SWF files and can load data from anywhere (remote or local).
Sound can load content from other domains that don't allow access via a cross-domain policy, with certain restrictions:
Certain operations dealing with sound are restricted. The data in a loaded sound cannot be accessed by a file in a different domain unless you implement a cross-domain policy file. Sound-related APIs that fall under this restriction are Sound.id3, SoundMixer.computeSpectrum(), SoundMixer.bufferTime, and the SoundTransform class.
Flash in general has pretty complex cross-domain policies, but in your case the bottom line is that you'll need a proper crossdomain.xml on each host except the one that serves the SWF:
3.1. If your file is served from http://resource.domain.com, it's not required to have http://resource.domain.com/crossdomain.xml, but it's really good to have one.
3.2. You will need a proper http://dyn2.domain.com/crossdomain.xml explicitly allowing your SWF's domain to access dyn2.domain.com in order to use URLStream and other APIs that provide access to the raw loaded data (see the example policy file after this list).
3.3. There's a reason for these restrictions: cookies (and other ambient user credentials). If Flash did not require a proper cross-domain policy after a redirect, one could access any domain with the user's cookies attached simply by loading their own redirector first. That would mean any SWF on the internet running in your browser could read cookie-protected data (e.g. mail.google.com).
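For reference, a minimal policy file at, say, http://dyn2.domain.com/crossdomain.xml could look roughly like this (the domain below is a placeholder for whichever host serves your SWF, and the headers entry is only there because your request sets custom headers):

<?xml version="1.0"?>
<cross-domain-policy>
    <!-- allow only the host that serves the SWF; "*" would allow any site -->
    <allow-access-from domain="www.yoursite.com"/>
    <!-- required when the loading SWF sends custom request headers -->
    <allow-http-request-headers-from domain="www.yoursite.com" headers="*"/>
</cross-domain-policy>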
I've got a slightly unusual scenario. A web app running on a local network can perform various operations on any file on the network it can access. At present the user copies/pastes the UNC path to the file into a text input and clicks submit.
The server retrieves the file, performs some operations and returns the results to the user.
I'd like to allow the user to browse for the file using the webpage - but I don't want to upload the file, just get the full path to it. Is this possible?
I'm aware there will be a couple of scenarios which are doomed to failure - e.g. browsing to a local path rather than a UNC share - but I can cover this with some validation. There will also be scenarios where the server can access a path the user can't (this is intentional), so browsing wouldn't work there.
All users will be techies who should get the point. Of course, if there were a way to limit the browse dialog to a UNC path, that would be even better but I suspect it's impossible.
Note, we already limit support to the latest versions of the main browsers and since this is just a utility feature, limited support is acceptable.
Sorry, that can't be done: browsers deliberately don't expose the full local path of a file chosen in the file-picker to the page. It's a security feature.
I was wondering how to keep images secure on my website. We have a site that requires login, and then the user can view thousands of different images, all named after their ID in the database.
Even though you need to log in to view the images the proper way... nothing is stopping a user from browsing through the images by typing <website-directory>/image-folder/11232.jpg or something.
This is not the end of the world, but it's definitely not ideal. I see that to stop this, Facebook just names the images something much more complicated and stores them in hashed folders.
Gmail does a very interesting thing; their image tags look like this:
<img src=/mail/?attid=0.1&disp=emb&view=att&th=12d7d49120a940e5>
I thought the src attribute had to contain a reference to an image... how does Gmail get around this?
This is more for educational purposes at this point, as I think this Gmail scheme might be overkill for our implementation.
Thanks for your feedback in advance,
Andrew
I thought the src attribute has to contain a reference to an image?
Gmail is referencing an image. It's just being served dynamically, probably based on that th=12d7d49120a940e5 string.
Try browsing to http://mail.google.com/mail/?attid=0.1&disp=emb&view=att&th=12d7d49120a940e5
Instead of it being a direct path to its location on the server's filesystem, it uses a dynamic script (the images may even be in a database, who knows).
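As a rough sketch of that idea in a Java servlet (the URL shape, session attribute, and storage path below are invented for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ImageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        HttpSession session = request.getSession(false);
        if (session == null || session.getAttribute("userId") == null) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);    // not logged in
            return;
        }
        String id = request.getParameter("id");                      // e.g. /image?id=11232
        if (id == null || !id.matches("\\d+")) {                     // reject path tricks
            response.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        response.setContentType("image/jpeg");
        // The files live outside the web root, so there is no direct URL to guess.
        Files.copy(Paths.get("/data/images", id + ".jpg"), response.getOutputStream());
    }
}

The <img> tags then point at /image?id=11232, and the login check runs on every single request.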
Besides serving up an image dynamically from your webapp, it's also possible to use the webapp to dynamically authorize access to static resources that the webserver will serve. This is commonly done by putting the files somewhere the webserver can access but that isn't mapped to any public URI, and then using something like X-Sendfile (lighttpd, Apache with mod_xsendfile, others), X-Accel-Redirect (nginx), X-Reproxy-File (Perlbal), etc. Or, with FastCGI, you can configure an application in the FastCGI "authorizer" role rather than as a content provider.
Any of these will let you check the image being authorized and the user's session, and make whatever decision you need to, without tying up a process of your backend application for the entire time the image is being sent to the client. It's not universally true, but usually a connection to the backend app represents far more reserved resources than a connection to the webserver, so freeing them up as soon as possible is smart.
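A sketch of the X-Accel-Redirect variant with nginx in front of a Java backend (the header name is real nginx behaviour, but the /protected-images/ internal location and the session attribute are assumptions that must match your own configuration):

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class AuthorizedImageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        HttpSession session = request.getSession(false);
        if (session == null || session.getAttribute("userId") == null) {
            response.setStatus(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        String id = request.getParameter("id");
        if (id == null || !id.matches("\\d+")) {
            response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        response.setContentType("image/jpeg");
        // The backend only authorizes; nginx sees this header and streams the file
        // itself from an "internal" location, so the servlet thread is freed at once.
        // (With Apache + mod_xsendfile you would set "X-Sendfile: /absolute/path" instead.)
        response.setHeader("X-Accel-Redirect", "/protected-images/" + id + ".jpg");
    }
}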
The code that runs after this GET request is issued:
/mail/?attid=0.1&disp=emb&view=att&th=12d7d49120a940e5
outputs an image to the browser. Something doesn't have to be named with a .jpg or .png ending to be treated as an image by a browser; what matters is the Content-Type of the response. This is how CAPTCHA systems are able to serve up different images depending on a value in the URL. For example, this link:
http://www.google.com/recaptcha/api/image?c=03AHJ_VusfT0XgPXYUae-4RQX2qJ98iyf_N-LjX3sAwm2tv1cxWGe8pkNqGghQKBbRjM9wQpI1lFM-gJnK0Q8G3Nirwkec-nY8Jqtl9rwEvVZ2EoPlwZrmjkHT7SM32cCE8PLYXWMpEOZr5Uo6cIXz1mWFsz5Qad1iwA
serves up a CAPTCHA image.
So the answer really is to just obfuscate your image names/links a bit like Facebook does so that people can't easily guess them.
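If you go that route, the names should come from a cryptographic random source rather than anything derived from the database ID; a tiny illustrative sketch:

import java.security.SecureRandom;
import java.util.Base64;

public class ImageNames {
    private static final SecureRandom RANDOM = new SecureRandom();

    // 128 bits of randomness gives names that cannot be enumerated the way
    // sequential IDs can; store the generated name alongside the numeric ID.
    public static String randomName() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes) + ".jpg";
    }
}

Anyone who obtains such a link can still pass it on, which is where the dynamic or webserver-authorized approaches above come in.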