Serving local file:/// links and AppCache - html

I'm making a webapp for members of my caving club to search through and view cave survey note PDFs. It works fine, and I got the AppCache working for the web version of it.
However, since the PDFs are quite large and slow to download, and many members have the PDFs on their local machines from the same SVN the website gets them from, it would be ideal for them to be able to use a page with links to a local SVN folder of their choosing.
The design goals:
1. The site displays links to PDF files on the local filesystem.
2. Whenever I add features to the site, users get them automatically the next time they open the page while connected to the internet.
3. After the first time they open the page, the site works offline.
Sadly web browsers don't appear to support this useful combination of design goals at once.
I can satisfy #1 by having users download a copy of the site, set their local SVN path in a JS file, and open their local copy in the browser so that file:/// links work.
I can satisfy #2 by having absolute links to JS bundles on the server.
I can satisfy #3 by using the AppCache.
I thought I could get clever by having the copy of the page on the local file system use <html manifest="https://myserver.com/myapp.appcache">, but unfortunately Chrome doesn't seem to allow a local file to use an app cache manifest hosted on a server, for no reason that's apparent to me.
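For concreteness, a rough sketch of what the locally saved copy would look like under this scheme (the server name, bundle name, and SVN path are placeholders):

<!-- index.html saved on the member's machine and opened via file:/// -->
<!-- The manifest attribute below is the part Chrome rejects for local files. -->
<html manifest="https://myserver.com/myapp.appcache">
<head>
  <script>
    // Placeholder: each member edits this to point at their own SVN checkout.
    window.LOCAL_SVN_ROOT = "file:///C:/caving/svn/survey-notes/";
  </script>
  <!-- Absolute URL so feature updates are picked up from the server (#2). -->
  <script src="https://myserver.com/app.js"></script>
</head>
<body>
  <!-- app.js builds file:/// links under LOCAL_SVN_ROOT (#1). -->
</body>
</html>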
Does anyone know of another way I could satisfy all 3 goals?
Perhaps there's some simple program/config I could give my friends that would intercept web requests to https://myserver.com/some/folder and instead serve them out of a folder on their local file system?

Andy,
I know this post is a bit old, but I came across it while looking for something else related to AppCache. My understanding is that the HTML page and the manifest must reside in the same domain for it to work. So I think you need to modify your design:
1. Create a JavaScript function that acts as a setting, letting the user enter the path to their local copy of the PDFs. Store this information in localStorage.
2. Create an HTML template page for the document links.
3. Create a JavaScript function that populates the template page with the documents and links the user enters.
This way, the users visit your application online and it uses AppCache to store itself and the JS files for offline use. To access the PDFs, the user clicks a settings button that launches a page to collect the path information and saves it in localStorage. The users can then open the template page, which will be populated with the documents they entered.
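A minimal sketch of that flow, with made-up element IDs, storage key, and file names:

<!-- settings.html: collect the local path once and remember it -->
<input id="pdf-path" placeholder="file:///C:/caving/svn/survey-notes/">
<button onclick="localStorage.setItem('pdfRoot', document.getElementById('pdf-path').value)">
  Save
</button>

<!-- documents.html: build file:/// links from the saved path -->
<ul id="doc-list"></ul>
<script>
  var root = localStorage.getItem('pdfRoot') || '';
  // The document names would come from wherever the app already lists them.
  var docs = ['entrance-survey.pdf', 'lower-streamway.pdf'];
  var list = document.getElementById('doc-list');
  docs.forEach(function (name) {
    var li = document.createElement('li');
    var a = document.createElement('a');
    a.href = root + name; // e.g. file:///C:/caving/svn/survey-notes/entrance-survey.pdf
    a.textContent = name;
    list.appendChild(li).appendChild(a);
  });
</script>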
Here is a good intro to localStorage: http://www.smashingmagazine.com/2010/10/local-storage-and-how-to-use-it/

Related

How to disable parent directory access in web file browsing without web server

I am writing a command line application that produces an index.html with links to other generated HTML files, but also some links to filesystem subdirectories. Here is an example of such a link:
<a href="Invoices/">Invoices</a>
The intention for sharing this content is for the user to zip up the directory tree and send it to other parties for review. However, some users might think to use ngrok, or use screen sharing, to share their web browser to allow other people to access their local system. With ngrok they would be running a web server and might be able to configure the web server to protect against this, but with screen sharing that would not be possible. (Consider the case where a user might leave their web browser open to the remote user and step away, not realizing that the remote user can now examine their entire filesystem.)
The problem is the "Parent Directory" links. Using those links, the other parties could navigate above the intended directory root and browse their entire filesystem.
The directories linked to can have arbitrary numbers and levels of subdirectories, so hard-coding links on custom pages would probably be prohibitively complicated.
There is no web server involved here; the files are displayed by just opening index.html in a web browser, so .htaccess is not a solution. Also, I don't want to disable navigation, I only want to limit its upper bound.
Is there a way to prevent this access?
If there is no web server involved at all, there is no way to prevent that behaviour.
Edit:
You could of course write a browser plugin that limits the access to the parent directory using JavaScript. But every client would have to install that plugin.
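For example, a content script along these lines could hide links that escape a chosen root (the root path here is a made-up example, and the user would still have to install the extension and allow it to run on file:// URLs):

// content-script.js: hide any link on a file:// directory listing that
// points outside the chosen root. ROOT is a made-up example path.
const ROOT = "file:///C:/exports/invoices-2023/";

document.querySelectorAll("a[href]").forEach((a) => {
  // Resolve relative hrefs (including "..") against the current page.
  const target = new URL(a.getAttribute("href"), location.href).href;
  if (!target.startsWith(ROOT)) {
    a.style.display = "none";
  }
});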

How can I access the URLs of all the HTML files I have uploaded to my website on 000webhost

Summary:
I am a beginner to HTML and need to work out the URLs (or how to make them) of files I have added to 000webhost. I have been given the URL of my index file, and can access it easily, but the links I have placed in it do not work, as I cannot find (or don't have) the URLs of the linked files (though I do have the code). Is there a way of finding or making a URL for each of the files I have added to my 000webhost project?
So I'm a beginner to HTML, and after making a basic website (including links to other pages I have created) I decided to try to upload it to the internet. I watched a couple of YouTube videos on how to do so and ended up using htmlsave.net. I copied and pasted my code in, changing all my links from places on my desktop to the URLs provided by the website, and everything worked. However, since I had not paid for a membership, I quickly reached my limit on how many pages I could create. Because of this, I decided to use another (free) web host that would not limit how many web pages I could add to a website.
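For example, the kind of link change I made on the first host looked roughly like this (the domain and file names here are made up):

<!-- Before: a link to a place on my desktop -->
<a href="C:\Users\me\Desktop\my-site\about.html">About</a>

<!-- After: the URL the web host provided for that page -->
<a href="https://example-host.com/about.html">About</a>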
After some research I settled on 000webhost. Everything started off smoothly: I created the project, added my files, and got my index file up and running. However, from my index page (which I could now access on the internet), I could not use the links inside it, as they were still linked to locations on my computer.
Therefore, I opened the code to edit it, but quickly realized I did not know the new URLs of all the files (excluding the index file, index.html, whose URL 000webhost had provided, as stated earlier) I had added to my 000webhost project.
So after looking around on Google and Stack Overflow, I have not been able to find a solution for working out the new URLs given to the files I have added to 000webhost.
(Apologies for any incorrect use of terminology, as stated I am extremely new to HTML)

Google Sites HTML export keeps redirecting to live site

I was trying to export a Google Site I made for a project. I used wget to spider through every page and to download the html files and linked content. When I try to open "index.html" in Chrome, it does open the local HTML file, but it redirects me to the live version immediately after.
Is there any way I could modify the HTML code so that it won't head straight to the actual website? I just want to have a local copy of it for reference, and I don't want to store it on Drive.
As the HTML file is too big to type out, I have provided it on Pastebin here.
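I'm guessing the redirect is done either with a meta refresh tag or with a short script that sets the location, something along these lines (the URL below is a placeholder, and the exact markup in a Google Sites export may differ), but I'm not sure exactly what to look for and remove:

<!-- A meta refresh that sends the browser to the live URL -->
<meta http-equiv="refresh" content="0; url=https://sites.google.com/view/your-site/">

<!-- or a JavaScript redirect somewhere in the page's scripts -->
<script>
  window.location.replace("https://sites.google.com/view/your-site/");
</script>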
You need a better question. No website works offline; or rather, it only does if you download all the files to the user's computer so the user can view it offline, and at some point they had to visit it online to get them.
Or you save it as an HTML site and hand it to them on a USB drive. That's offline to that extent, but then it's not really a website, it's an HTML file.
Otherwise, if you need a website for your school that can be used by anyone over the internet or an intranet, you have two options:
1. Create and host the website on an online server:
   a. Buy space and deploy a server yourself, or
   b. Pay a hosting provider to run the website on their web server for you.
2. Deploy a web server on one of the school's machines and access it from the other machines.
Rephrase the question for a better answer.

Generating a web site from xsn files

As we all know, the InfoPath Forms Services component residing on a SharePoint server generates a web site each time we publish an InfoPath form template to the SharePoint server.
Here is the question: how does SharePoint do that? Is there any way for us to do it programmatically via some kind of API provided by Microsoft?
In fact, what I need to do is get all the HTML, JS, CSS, etc. files and apply some operations to them, like deleting some divs or inserting some HTML code into a particular web page. I have come up with two ways to do this:
1. Generate the web pages via the SharePoint API and apply those operations at the same time.
2. Extract the web page files from the IIS server and apply those operations.
I am totally new to this kind of work. All I have in mind is that each time we right-click on a web page in the browser and choose to save it, the browser fetches some of the files needed to render the page and makes it possible for us to browse the page offline.
HTTrack, WinWSD, and tools like that seem to work fine for extracting HTML files from online web pages, but not that well with JS and CSS files.
Now I am trying to dig into the Chromium project for some kind of inspiration, although whether that will help is hard to predict.
Any kind of advice will be appreciated.
Best Regards,
Jordan
InfoPath .xsn files are just ZIP files with a different extension. You can rename the extension to .zip and extract the files. You will find a number of files that make up the form; the two main ones are the .xml and .xsl files. The .xsl contains the HTML to generate when applied to the .xml.
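For example, one quick way to see that generated HTML is to apply one of the view .xsl files to the template .xml in a browser. The file names below are assumptions about what the extracted .xsn contains, and fetching the files may require serving the extracted folder over HTTP rather than opening it via file:///:

// Sketch: apply a form view's .xsl to its .xml to get the rendered HTML.
// File names are assumptions; adjust to whatever the extracted .xsn contains.
async function renderInfoPathView(xmlUrl, xslUrl) {
  const load = async (url) =>
    new DOMParser().parseFromString(await (await fetch(url)).text(), "text/xml");

  const [xml, xsl] = await Promise.all([load(xmlUrl), load(xslUrl)]);

  const processor = new XSLTProcessor();
  processor.importStylesheet(xsl);
  // Returns a document fragment containing the HTML the form would render.
  return processor.transformToFragment(xml, document);
}

// Usage (hypothetical paths relative to the extracted .xsn contents):
renderInfoPathView("extracted/template.xml", "extracted/view1.xsl")
  .then((fragment) => document.body.appendChild(fragment));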

How do I create a link to a saved html page on my computer?

I'm working on a web application that caches HTML pages and saves them on the user's computer. I want to create a link so that the user can click on it and access the cached web page.
Following is my link to a cached page:
<a href="file:///C:/Users/xxx/yyy/bbc.html">BBC</a>
When I click on the link, nothing happens. I'm not even getting any error.
Can someone please suggest how to create a link to a cached html page?
First of all, not all browsers handle local files equally; indeed, not all computers will be running Windows or have a C: drive. Secondly, you don't have much control over a user's cache. Cached pages are usually handled by the browser automatically. You can use headers to specify how a browser ought to cache files, but the browser isn't even required to honor them. You can read the W3C recommendations on caching for more information.
It's unclear what you're trying to do here, but it sounds like it might make more sense for you to use HTML5 local storage or offline files than trying to mess around with their file system directly. The security model of most browsers is such that web apps don't interact with local files, which may be why it's not working for you with your current setup. Dive Into HTML5 has a good overview of HTML5 local storage and offline pages.
Edited based on comment below:
Most browsers' security settings won't let a page on a website access files stored locally. Only locally saved files can link to other locally saved files. Therefore, if the page with a link is on a website, your link won't work. Try creating a link to your file from another locally stored file and see if that works.
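For example, if the linking page is itself saved locally (the path below is made up), links like these should work when it is opened via file:///:

<!-- Opened as file:///C:/Users/xxx/links.html -->
<a href="file:///C:/Users/xxx/yyy/bbc.html">BBC (absolute file link)</a>

<!-- Or, if both files sit under the same folder, a relative link: -->
<a href="yyy/bbc.html">BBC (relative link)</a>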
Instead of providing the .html extension in the main page where you provide the link, you should do something like the below:
<a href="file:///C:/Users/xxx/yyy/bbc">BBC</a>