Can I use protocol-agnostic URLs in my HTML5 Application Cache Manifest?

We have a number of websites which use the same codebase to run different sites depending on the domain name used, and we're looking to use the HTML5 Application Cache to improve the performance of these sites by caching things like web fonts and other large, rarely-updated files.
Currently, we're hard-coding fully qualified HTTPS URLs, just in case one of the websites is using SSL. Our 'static' website server can handle both HTTP and HTTPS, so instead of doing this:
CACHE MANIFEST
# Cache Version 3198.729
https://static.ourdomain.co.uk/fonts/webfont1.eot
https://static.ourdomain.co.uk/fonts/webfont1.ttf
https://static.ourdomain.co.uk/fonts/webfont1.woff
We'd like to be able to do this:
CACHE MANIFEST
# Cache Version 3198.729
//static.ourdomain.co.uk/fonts/webfont1.eot
//static.ourdomain.co.uk/fonts/webfont1.ttf
//static.ourdomain.co.uk/fonts/webfont1.woff
Are we likely to run into any issues by doing this?

//static.ourdomain.co.uk/fonts/webfont1.eot is a protocol-relative (scheme-relative) URL, which is a form of relative URL. It is just as permissible as /fonts/webfont1.eot, wherever relative URLs are acceptable.
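As a quick illustration (the page URLs here are hypothetical), a protocol-relative URL simply inherits the scheme of the base URL it is resolved against:
new URL('//static.ourdomain.co.uk/fonts/webfont1.woff',
        'https://www.ourdomain.co.uk/').href;
// => "https://static.ourdomain.co.uk/fonts/webfont1.woff"
new URL('//static.ourdomain.co.uk/fonts/webfont1.woff',
        'http://www.ourdomain.co.uk/').href;
// => "http://static.ourdomain.co.uk/fonts/webfont1.woff"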

Related

get around cross-origin resource sharing on Amazon Aws

I am creating a virtual reality 360-degree video website using the krpano HTML5 player.
This was going great until testing on Safari, when I realised it didn't work. The reason for this is that Safari does not support CORS (cross-origin resource sharing) for videos going through WebGL.
To clarify, if my videos were on the same server as my application files it would work, but because I have my files hosted on Amazon S3, requests for them are cross-origin. Now I'm unsure what to do, because I have built my application on DigitalOcean, which connects to my Amazon S3 bucket, but I cannot afford to upgrade my droplet just to get the storage I need (which is around 100GB to start, and will increase to terabytes as my video collection gets bigger).
So does anyone know a way I can get around this to make it seem like the video is not coming from a different origin, or alternatively anything I can do to get past this obstacle?
Is there any way that I could set up Amazon S3 and Amazon EC2 so that they don't see each other as cross-origin?
EDIT:
I load my videos like this:
<script>
function showVideo() {
    embedpano({
        swf: "/krpano/krpano.swf",
        xml: "/krpano/videopano.xml",
        target: "pano",
        html5: "only",
    });
}
</script>
This then loads my XML file, which references the video file:
<krpano>
    <!-- add the video sources and play the video -->
    <action name="add_video_sources">
        videointerface_addsource('medium', 'https://s3-eu-west-1.amazonaws.com/myamazonbucket/Shoots/2016/06/the-first-video/videos/high.mp4|https://s3-eu-west-1.amazonaws.com/myama…ideos/high.webm');
        videointerface_play('medium');
    </action>
</krpano>
I don't know exactly how the krpano core works; I assume the JavaScript gets the URLs from the XML file and then makes a request to pull them in.
@datasage mentions in comments that CloudFront is a common solution. I don't know if this is what he was thinking of, but it certainly will work.
I described using this solution to solve a different problem, in detail, on Server Fault. In that case, the question was about integrating the main site and "/blog/*" from a different server under a single domain name, making a unified web site.
This is exactly the same thing you need, for a different reason.
Create a CloudFront distribution, setting the alternate domain name to your site's name.
Create two (or more) origins, pointing to your dynamic and static content servers.
Use one of them as the default, initially handling all possible path patterns (*, the default cache behavior), and then carve out appropriate paths to point to the other origin (e.g. /asset/* might point to the bucket, while the default behavior points to the application itself).
In this case, CloudFront is being used for something other than its primary purpose as a CDN. Instead, we're leveraging a secondary capability: using it as a reverse proxy that can selectively route requests to multiple back-ends, based on the path of the request, without the browser being aware that there are in fact multiple origins, because everything sits behind the single hostname that points to CloudFront (which, obviously, you'll need to point to CloudFront in DNS).
The caching features can be disabled if you don't yet want, need, or fully understand them. On requests to the application itself, disabling caching is easily done by selecting the option to forward all request headers to the origin, in any cache behavior that sends requests to the application. For your objects in S3, be sure you've set appropriate Cache-Control headers on the objects when you uploaded them; you can also add them after uploading, using the S3 console.
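For example (the file and key names here are illustrative), Cache-Control can be set at upload time with the AWS CLI:
aws s3 cp high.mp4 s3://myamazonbucket/videos/high.mp4 \
    --cache-control "public, max-age=86400"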
Side bonus: using CloudFront allows you to easily enable SSL for the entire site, with a free SSL certificate from AWS Certificate Manager (ACM). The certificate needs to be created in the us-east-1 region of ACM, regardless of where your bucket is, because that is the region CloudFront uses when fetching the cert from ACM. This is a provisioning detail only, and has no performance implications if your bucket is in another region.
You need to allow your host in the CORS configuration of your AWS S3 bucket.
Refer to Add CORS Configuration in Editing Bucket Permissions.
After that, every response for the S3 bucket's files will have the CORS headers set.
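A minimal sketch of such a configuration (the allowed origin is a placeholder you would replace with your own domain):
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>https://yourdomain.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>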
In case you need to serve the content via the AWS CDN, CloudFront, then follow these steps; ignore this if you serve content directly via S3:
Go to the AWS CloudFront Console.
Select your CloudFront distribution.
Go to the Behaviors tab.
Create a behavior (for the files which need to be served with CORS headers).
Enter the path pattern, and select the protocol and methods.
Select All in the Forward Headers option.
Save the behavior.
If needed, invalidate the CloudFront edge caches by running an invalidation request for the files you just allowed for CORS.
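The invalidation in the last step can be issued from the AWS CLI, for example (the distribution ID and path here are placeholders):
aws cloudfront create-invalidation \
    --distribution-id E1234EXAMPLE \
    --paths "/videos/*"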

HTML5 Load page from offline cache only when not online

I am attempting to make a webpage available offline.
I have added <html lang="en" manifest="cache.manifest"> to my page.
I have created cache.manifest with the following content:
CACHE MANIFEST
css/base.css
http://cdnjs.cloudflare.com/ajax/libs/jquery.selectboxit/3.8.0/jquery.selectBoxIt.css
http://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/themes/smoothness/jquery-ui.css
http://cdnjs.cloudflare.com/ajax/libs/entypo/2.0/entypo.woff
http://maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css
css/affixed-sidebar.css
css/bootstrap.css
css/components.css
css/dodfont.css
css/helpers.css
http://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js
http://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js
http://www.parsecdn.com/js/parse-1.3.0.min.js
http://cdnjs.cloudflare.com/ajax/libs/toastr.js/2.1.1/toastr.min.js
http://ajax.googleapis.com/ajax/libs/jqueryui/1.11.4/jquery-ui.min.js
http://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.3/moment.min.js
http://gitcdn.github.io/bootstrap-toggle/2.2.0/js/bootstrap-toggle.min.js
http://cdnjs.cloudflare.com/ajax/libs/jquery.selectboxit/3.8.0/jquery.selectBoxIt.min.js
http://google-code-prettify.googlecode.com/svn/loader/run_prettify.js?lang=js&skin=sunburst
js/components.js
js/so-cat.js
http://maxcdn.bootstrapcdn.com/font-awesome/4.2.0/fonts/fontawesome-webfont.woff?v=4.2.0
fonts/glyphicons-halflings-regular.woff
http://maxcdn.bootstrapcdn.com/font-awesome/4.2.0/fonts/fontawesome-webfont.ttf?v=4.2.0
fonts/glyphicons-halflings-regular.ttf
http://fonts.googleapis.com/css?family=Raleway:400,200
http://code.ionicframework.com/ionicons/2.0.1/css/ionicons.min.css
fonts/glyphicons-halflings-regular.woff2
When I first visit the page in Chrome, the browser will deliver the page and cache it, along with all of its resources.
I expect that when I leave the page and come back, I will be served the live version of the page, and that I will only ever see the cached version if the server is not available.
Instead, on every visit after the first, I am served the cached version of the page. I can confirm I am seeing the cached version because, if I change the HTML file and refresh the webpage, I do not see the changes. If I clear or disable the cache and refresh, I see the changes as expected.
What do I need to do to ensure that, if the server is reachable, I am always served the live version of the page and all of its resources?
What do I need to do to ensure that, if the server is reachable, I am always served the live version of the page and all of its resources?
Unfortunately, the simple answer here is: you can't. For real. That is, at least not if you're using a cache manifest. This is a well-known, serious design bug in the HTML5 appcache/offline-cache mechanism. It's essentially broken by design.
And that's why using appcache is basically no longer recommended. It's just too broken.
And that's why the Offline Web applications section of the HTML Standard now says this:
This feature is in the process of being removed from the Web platform. (This is a long process that takes many years.) Using any of the offline Web application features at this time is highly discouraged. Use service workers instead.
The only way to work around it and make clients quit using the cached contents is to completely remove your cache.manifest file from the server.
Do that, and they'll go back to fetching the current content.
The good news is that there's a much better solution for offline Web applications in the works: service workers, more specifically the Cache and CacheStorage interfaces.
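As a minimal sketch of the network-first behaviour the question asks for (the file names pre-cached here are placeholders), a service worker can try the network first and fall back to the cache only when the fetch fails:
// sw.js; register from the page with: navigator.serviceWorker.register('/sw.js');
const CACHE_NAME = 'offline-v1';

self.addEventListener('install', (event) => {
  // Pre-cache a few known files so an offline fallback exists.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/', '/css/base.css', '/js/components.js'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Online: serve the live response and refresh the cached copy.
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      })
      // Offline (or fetch failed): fall back to whatever was cached.
      .catch(() => caches.match(event.request))
  );
});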
Well...
Solution 1:
Nowadays you can use service workers, but they only work over HTTPS.
Solution 2:
You may add a comment line (starting with #) carrying a version number to the manifest file, like
# version 1.0.0
and change it every time you want to update the files in the cache.
Solution 3:
You can use the manifest in such a way that, if online, the browser gets the online data, and if not, you write some JavaScript code to get some saved data, maybe from IndexedDB or localStorage (see the sketch below).
For more info, read This Article.
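A rough sketch of Solution 3 (the endpoint and the render function are hypothetical):
function loadData() {
  if (navigator.onLine) {
    // Online: fetch fresh data and save a copy for later offline use.
    fetch('/api/data')
      .then((r) => r.json())
      .then((data) => {
        localStorage.setItem('cachedData', JSON.stringify(data));
        render(data); // render() is a placeholder for your own display logic
      });
  } else {
    // Offline: fall back to the last saved copy, if any.
    const saved = localStorage.getItem('cachedData');
    if (saved) render(JSON.parse(saved));
  }
}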

background-image full-path url to my isp ftp?

I'm hosting my own server for my website, and I would like to store all the resources on my ISP's FTP instead (like images, scripts and stuff like that), to prevent unnecessary strain on my server, and because my ISP's network speed ought to be quicker than the service they provide me. Now the fonts and the JavaScripts work fine, but when I try the following in my CSS:
background-image: url("-url-");
It does not want to display on my website, and in Chrome I get this:
XMLHttpRequest cannot load -url-. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://my-domain-name/' is therefore not allowed access.
What does this mean?
-edit-
Actually, it does seem to display, but I don't think it is supposed to. The images load up in a weird manner, so I think I might just host the files on my own server after all.
I don't know the format of your URL (I assume you are using HTTP, not FTP, to link the image) or the nature of your ISP's server, but the error suggests it does not allow hotlinking of images (i.e. cross-domain access to the files is not allowed by the server).
What is the service you are using to host the images? If it isn't intended for this purpose and you want to reduce strain on your own server, it would make more sense to use a proper CDN service like AWS (they have a free tier, I think).

HTML5 application cache - SSL and cross domain - any workaround?

From http://appcachefacts.info/:
Over SSL, all resources in the manifest must respect the same-origin policy.
The exception is Google Chrome, which doesn't follow the specification in this regard. Over SSL, Chrome will load resources from different origins so long as they are still served over SSL.
I would really like to load static assets like images, CSS and JavaScript from a CDN close to the user, and avoid serving them from my webserver just because I use HTTPS.
Is there any way we can work around those security limitations?
My goal:
Main HTML loaded from: https://mydomain.com.
Assets loaded from: https://cdn.mydomain.com (a subdomain, but not the same origin).
The appcache file I use at the moment, which does not seem to work in Safari or on iOS (iPhone):
CACHE MANIFEST
CACHE:
https://cdn.mydomain.com/main.css
https://cdn.mydomain.com/main.zepto.js
NETWORK:
/
*
Unfortunately no, sorry. Actually, according to http://en.wikipedia.org/wiki/Same_origin_policy, currently the only browser that allows cross-domain caching is Chrome, and that is only because they are willfully not adhering to the same-origin policy. If you want to make your offline site exclusively for Chrome users, you can do dual servers; otherwise you'll have to stick with one until the different browsers come up with a new policy.
If you want to get tricky, you could attempt something like using jQuery to load an HTML file from your asset server that declares the manifest there, but I doubt that will work during offline use.

How do I specify a wildcard in the HTML5 cache manifest to load all images in a directory?

I have a lot of images in a folder that are used in the application. When using the cache manifest, it would be easier, maintenance-wise, if I could specify a wildcard to load all the images or files in a certain directory to be cached.
E.g.
CACHE MANIFEST
# 2011-11-3-v0.1.8
#--------------------------------
# Pages
#--------------------------------
../index.html
../edit.html
#--------------------------------
# JavaScript
#--------------------------------
../js/jquery.js
../js/main.js
#--------------------------------
# Images
#--------------------------------
../img/*.png
Can this be done? I have tried it in a few browsers with ../img/* as well, but it doesn't seem to work.
It would be easier, but how's it going to work? The manifest file is something which is parsed and acted upon in the browser, which has no special knowledge of files on your server other than what you've told it. If the browser sees this:
../img/*.png
What is the first image the browser should request from the server? Let's start with these:
../img/1.png
../img/2.png
../img/3.png
../img/4.png
...
../img/2147483647.png
That's all the images that might exist with a numeric name, stopping semi-arbitrarily at 2³¹−1. How many of those 2 billion files exist in your img directory? Do you really want a browser making all those requests only to get 2 billion 404s? For completeness, the browser would probably also want to request all the zero-filled equivalents:
../img/01.png
../img/02.png
../img/03.png
../img/04.png
...
../img/001.png
../img/002.png
../img/003.png
../img/004.png
...
../img/0001.png
../img/0002.png
../img/0003.png
../img/0004.png
...
Now the browser's made more than 4 billion HTTP requests for files which mostly aren't there, and it's not yet even got on to letters or punctuation in constructing the possible filenames which might exist on the server. This is not a feasible way for the manifest file to work. The server is where the files in the img directory are known, so it's on the server that the list of files has to be constructed.
I don't think it works that way. You'll have to specify all of the images one by one, or have a simple PHP script loop through the directory and output the file list (with the correct text/cache-manifest header, of course); a sketch of that idea follows.
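The answer above suggests PHP; here is the same idea sketched in Node.js instead (the directory, version comment, and port are illustrative):
// manifest.js: serve a cache manifest listing every PNG in ./img
const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  const files = fs.readdirSync('./img').filter((f) => f.endsWith('.png'));
  res.writeHead(200, { 'Content-Type': 'text/cache-manifest' });
  res.end(['CACHE MANIFEST', '# v1', ...files.map((f) => 'img/' + f)].join('\n'));
}).listen(8080);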
It would be a big security issue if browsers could request folder listings; that's why Tomcat turns that capability off by default now.
The browser could, in principle, locate all matches to the wildcards referenced by the pages it caches, but this approach would still be problematic (what about images not initially used but set dynamically by JavaScript, etc., and it would require that all cached items not only be downloaded but parsed as well).
If you are trying to automate this process instead of doing it manually, use a script; I use manifestR. It will output your manifest/appcache file, and all you have to do is copy and paste. I've used it successfully and usually only have to make a few changes.
Also, I recommend using the NETWORK section with the wildcard:
NETWORK:
*
This allows requests for assets not listed in the manifest, such as data from other linked domains (via JSON, for instance), to go through to the network. I believe this is the only section where you can specify a wildcard; like the others have said here, that's for security reasons.
The cache manifest is now deprecated, and you should use HTTP headers (or their HTML meta equivalents) to control caching.
For example:
<meta http-equiv="Cache-control" content="public">
public - may be cached in public shared caches.
private - may only be cached in a private (browser) cache.
no-cache - may be cached, but must be revalidated with the server before reuse.
no-store - may not be stored in any cache.
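Note that the real HTTP response header is generally preferred over the meta tag; for example, a server might send:
Cache-Control: no-cache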