Distributing changes to an existing SWF application theory - actionscript-3

Let's say we have an online application written in AS3 and served from a server as a SWF. The first version to go out is V1.0.
Several users, some behind a proxy server, use the V1.0 application, and now there is a cached copy on the client machines and on the proxy server.
V1.1 is released and placed on the server. All users should see V1.1, but results will vary: some will be served the fresh copy, some will for a period receive their browser's cached copy, and some will be served the cached copy from the proxy server.
I can put code in V1.0 that checks a server variable to see if it's out of date. But if it is out of date, is there a way in AS3 to force it to download a fresh copy, or to apply the differences to itself?
Bear in mind that I don't have access to the proxy server and can't manually or automatically clear its cache.

Is the SWF the only thing getting cached? If so, you could use a cache buster when the page gets loaded.
Refer to your SWF like so: main.swf?timestamp=7062956829. This will cause the proxy server to think it's a different file and not serve up the cached resource.
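As a rough sketch of that idea, the container page could append the timestamp when it writes the embed tag; the element ID and dimensions below are assumptions, not something from the original page:

```typescript
// Append the current time so every page load produces a URL the proxy has not cached.
const swfUrl = "main.swf?timestamp=" + Date.now();

// "flash-container" is a hypothetical element ID; adjust to the real page structure.
document.getElementById("flash-container")!.innerHTML =
  `<embed type="application/x-shockwave-flash" src="${swfUrl}" width="800" height="600">`;
```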

You could use a wrapper preloader which requests the latest version number from the server each time the application starts, and then loads the application SWF for that version.
There are also more complete, ready-made solutions for update distribution, such as http://treetide.com/swfcontrol/

I think citizen conn provides a simple solution, although it introduces unnecessary server load.
You could use citizen conn's approach, but instead of a timestamp, just use the app's version tag:
main.swf?app_version=1.1
A different approach would be to force the refresh from the app itself using DigitalD's approach.
But since you are using this in the container HTML page, the container might be cached as well, so you need to force the reload of the container...
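A minimal sketch of the container-page side of this, assuming a hypothetical version.txt endpoint that returns the current version string and an #app placeholder element:

```typescript
// Fetch the current version (cache-busting the version check itself), then embed
// the SWF with that version in the query string so a new release means a new URL.
fetch("version.txt?cb=" + Date.now())
  .then((res) => res.text())
  .then((version) => {
    const embed = document.createElement("embed");
    embed.type = "application/x-shockwave-flash";
    embed.src = "main.swf?app_version=" + version.trim();
    document.getElementById("app")?.appendChild(embed);
  });
```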

Related

PDF in Object tag does not read latest version

I am hosting a website where a couple of PDF files, which are currently embedded in object tags, are updated weekly.
The name of these PDF-files stay the same, but the data changes.
Currently I'm using:
<object id="men" data="seasons/S2223/Men2023.pdf?" type="application/pdf" width="100%" height="750px">
<p>The file could not be read in the browser
Click here to download
</p>
</object>
When I update the PDF I'm expecting the
data="seasons/S2223/Men2023.pdf?"
to load the latest PDF, however it stays the same as before.
I added the ? at the end of the filename, which should check for the latest version, but it doesn't seem to work.
When I clear my browser's cache it's updated, but of course this isn't a suitable option for users.
All help is appreciated.
Caching, in this context, is where the browser has loaded the data from a URL in the past and still has a local copy of it. To speed things up and save bandwidth, it uses its local copy instead of asking the server for a fresh copy.
If you want the browser to fetch a fresh copy then you need to do something to make it think that the copy it has in the cache is no good.
Cache Busting Query String
You are trying to use this approach, but it isn't really suitable for your needs and your implementation is broken.
This technique is designed for resources that change infrequently and unpredictably, such as the stylesheet for a website. (Since your resources change weekly, this isn't a good option for you.)
It works by changing the URL to the resource whenever the resource changes. This means that the URL doesn't match the one the browser has cached data for. Since the browser doesn't know about the new URL it has to ask for it fresh.
Since the query you append is hardcoded (a bare ? with nothing after it), it never changes, which defeats the purpose.
Common approaches are to set the value of the query to a time stamp or a checksum of the file. (This is usually done with the website's build tool as part of the deployment process.)
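As a sketch of the checksum variant, a build or deploy step could hash the PDF and rewrite the query string in the page. This assumes a Node-based build step; the paths and the v parameter name are placeholders:

```typescript
import { createHash } from "crypto";
import { readFileSync, writeFileSync } from "fs";

// Placeholder paths -- adjust to the real project layout.
const pdfPath = "seasons/S2223/Men2023.pdf";
const htmlPath = "index.html";

// Hash the current PDF so the query string only changes when the file changes.
const checksum = createHash("md5").update(readFileSync(pdfPath)).digest("hex");

// Rewrite the object tag's data attribute to carry the checksum as a cache buster.
const html = readFileSync(htmlPath, "utf8").replace(
  /data="seasons\/S2223\/Men2023\.pdf[^"]*"/,
  `data="${pdfPath}?v=${checksum}"`
);
writeFileSync(htmlPath, html);
```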
Cache Control Headers
HTTP provides mechanisms to tell the browser when it should get a new copy. There are a variety of headers and I encourage you to read this Caching Tutorial for Web Authors and Webmasters as it covers the topic well.
Since your documents expire weekly, I think the best approach for you would be to set an Expires header on the HTTP resource for the PDF's URL.
You could programmatically set it to (for example) one hour after the time a new version is expected to be uploaded.
How you go about this would depend on the HTTP server and/or server-side programming capabilities of the host where you deploy the PDF.
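For instance, if the PDFs happened to be served by a Node/Express host (an assumption; the same idea applies to Apache, nginx or IIS through their own configuration), the header could be set like this, with nextUploadTime() standing in for whatever upload schedule you actually use:

```typescript
import express from "express";

// Hypothetical helper: next Monday 00:00 UTC, when a new PDF is expected.
function nextUploadTime(): Date {
  const now = new Date();
  const daysUntilMonday = (8 - now.getUTCDay()) % 7 || 7;
  return new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(),
                           now.getUTCDate() + daysUntilMonday));
}

const app = express();

// Serve the PDFs with an Expires header set to one hour after the next expected upload.
app.use("/seasons", express.static("seasons", {
  setHeaders: (res) => {
    const expires = new Date(nextUploadTime().getTime() + 60 * 60 * 1000);
    res.setHeader("Expires", expires.toUTCString());
  },
}));

app.listen(3000);
```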

Emptying Windows 8 AppCache

My Windows 8 Store App uses AppCache to cache web resources. For testing purposes, I need to be able to empty or invalidate the entire cache from a separate application. Simply purging my WinINET cache (e.g. from Fiddler) doesn't delete my application's files, because in Win8 each application gets an isolated cache.
I have found that I can empty the cache from within my application by creating a new cache and swapping the existing one with that, but I need to be able to do this from a separate application. Scouring MSDN, I haven't been able to come up with a method for this. Any ideas?
There are a couple of ways to do this. I think the easiest will be to use Protocol Activation.
Make a protocol similar to clearmyapp://, then have your app, on activation by that protocol, clear the cache then close itself. Then you can create a shortcut on your desktop or a script that will call clearmyapp://.
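A hedged sketch of that handler, assuming the app is a JavaScript/WinJS Store app and that the clearmyapp protocol has been declared in the package manifest; clearAppCache() is a placeholder for whatever the app already does to swap out its cache:

```typescript
// Globals provided by the WinJS/WinRT runtime in a Store app.
declare const WinJS: any;
declare const Windows: any;
// Hypothetical: the app's existing cache-swap routine.
declare function clearAppCache(): void;

WinJS.Application.onactivated = (args: any) => {
  const kinds = Windows.ApplicationModel.Activation.ActivationKind;
  if (args.detail.kind === kinds.protocol) {
    clearAppCache();  // empty the AppCache the way the app already does internally
    window.close();   // shut the app down once the cache is cleared
  }
};
WinJS.Application.start();
```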
The harder way would be to create a script that finds your app's folder in the Packages folder (in Users/Username/Local/Packages/PackageName), then finds the cache contents and deletes them.

Loading external files for a non-hosted HTML5 application

I am currently developing an HTML5 game which loads in an external resource. Currently, I am using XMLHttpRequest to read in the file, but this does not work on Chrome, resulting in:
XMLHttpRequest cannot load file:///E:/game/data.txt
Cross origin requests are only supported for HTTP.
The file is in the same directory as the HTML5 file.
Questions:
Is there any way for an HTML5 application to use XMLHttpRequest (or another method) to load an external file without requiring it to be hosted on a web server?
If I package the HTML5 code as an application on a tablet/phone which supports HTML5, would XMLHttpRequest be able to load external files?
(a) Yes and no. As a matter of security policy, XHR has traditionally been same-protocol (i.e. http://, rather than file:///) and, on top of that, same-domain as well (same subdomain, in fact: http://pages.site.com/index can't get a file from http://scripts.site.com/). Cross-domain requests are now available, but they require a web server regardless, and the server hosting the file has to accept the request specifically.
(b) So in a roundabout way, the answer is yes: some implementations might (incorrectly) allow you to grab a file through XHR even when the page is loaded from the file system rather than over HTTP (older versions of browsers), but otherwise you're going to need a web server of one kind or another. The good news is that they're dirt-simple to install. EasyPHP would be sufficient, and it's pretty much a 3-click solution; there are countless others as well, it's just the first that comes to mind for a brain-off installation, whether all you want is a file server in Apache or you plan on using PHP as well.
XMLHttpRequest would absolutely be able to get external files...
IF they're actually external (i.e. not bundled in a phone-specific cache; use the phone's built-in file-access API for that, and write a wrapper so each source is handled through the same custom interface), AND the phone currently has reception. Be prepared to handle failure conditions, for example by falling back to a default-settings object or other error handling for whatever is missing.
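A small sketch of that failure handling, with data.txt and the default-settings object as placeholders:

```typescript
// Fetch game data over HTTP with a fallback for when the request fails
// (e.g. no reception or a blocked request).
const defaultSettings = { level: 1, sound: true };

function loadGameData(url: string, onReady: (settings: object) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.onload = () => {
    try {
      onReady(JSON.parse(xhr.responseText)); // use the server copy when available
    } catch {
      onReady(defaultSettings);              // malformed response: fall back
    }
  };
  xhr.onerror = () => onReady(defaultSettings); // offline / blocked: fall back
  xhr.send();
}

loadGameData("data.txt", (settings) => console.log("starting with", settings));
```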
Also, look into Application Cache manifests. Again, this is an HTML5 feature that different versions of different phones handle differently (early-days versus more standardized formats). DO NOT USE IT DURING DEVELOPMENT, AS IT MAKES TESTING CODE/CONTENT CHANGES MISERABLY SLOW AND PAINFUL. It is useful once your product is pretty much finished, bug-free, and seconds away from launch: you tell users' browsers to cache all of the content for eternity, so they can play offline and save all kinds of bandwidth by not having to download everything the next time they play.

HTML5 - Web sql setting up offline storage

How do I set up the basic switching of offline storage modes (offline/online) in Web SQL? I know there's window.navigator.onLine in JavaScript.
I can check the mode and then go through a process...
//All GET/POST performed with AJAX
//On Startup pulldown entire accessible database into offline storage (Doesn't seem secure IMO)
//if(read) pull from offline
//if(create, update, delete and online) pull from standard db, mark changes with offline expiration flag
//if(create, update, delete and offline) perform operation on offline storage, persist with POST when next online (change flag)
I'm asking whether there is any out-of-the-box (OOB) integration for these standard tasks.
The navigator.onLine property generally isn't very useful - in a desktop browser all it does is hook into the File -> Work Offline menu. It may be more useful on an iPad; I don't know because I don't have one, and I'm guessing there's no File menu, but I would recommend you test.
A common approach to this issue is to set up two easily distinguishable files in the fallback section of your manifest. Every time you want to connect back to the server attempt to fetch the file with AJAX and, in the callback, check it to see if you got the online file or the fallback, then branch accordingly.
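A sketch of that check, assuming a cache manifest whose FALLBACK section maps a hypothetical online-check.txt to an offline-check.txt, with the two files containing the strings "online" and "offline":

```typescript
// Ask for the online file; when offline, the app cache serves the fallback instead.
function checkOnline(whenKnown: (online: boolean) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "online-check.txt?cb=" + Date.now()); // bust any intermediate cache
  xhr.onload = () => whenKnown(xhr.responseText.trim() === "online");
  xhr.onerror = () => whenKnown(false);
  xhr.send();
}

checkOnline((online) => console.log(online ? "connected" : "working offline"));
```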
You shouldn't be using Web SQL, as that spec was nixed a few months ago. You should be using localStorage instead, unless you are specifically coding for something like the iPhone; even then you don't know how long the spec will stay in WebKit.
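If you do go the localStorage route, a rough sketch of the offline queue described in the question might look like this; the /api/sync endpoint and the payload shape are assumptions:

```typescript
// Queue writes while offline and POST them when the connection comes back.
interface PendingChange { action: "create" | "update" | "delete"; payload: unknown; }

function queueChange(change: PendingChange): void {
  const pending: PendingChange[] = JSON.parse(localStorage.getItem("pending") ?? "[]");
  pending.push(change);
  localStorage.setItem("pending", JSON.stringify(pending));
}

function flushPending(): void {
  const pending: PendingChange[] = JSON.parse(localStorage.getItem("pending") ?? "[]");
  if (pending.length === 0) return;
  fetch("/api/sync", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(pending),
  }).then((res) => {
    if (res.ok) localStorage.removeItem("pending"); // clear the queue once persisted
  });
}

window.addEventListener("online", flushPending); // retry whenever connectivity returns
```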

Pros and Cons of a separate image server (e.g. images.mydomain.com)?

We have several images and PDF documents that are available via our website. These images and documents are stored in source control and are copied content on deployment. We are considering creating a separate image server to put our stock images and PDF docs on - thus significantly decreasing the bulk of our deployment package.
Does anyone have experience with this approach?
I am wondering about any "gotchas" - like XSS issues and/or browser issues delivering content from the alternate sub-domain?
Pro:
Many browsers will only allocate two sockets to downloading assets from a single host. So if index.html is downloaded from www.domain.com and it references 6 image files, 3 javascript files, and 3 CSS files (all on www.domain.com), the browser will download them 2 at a time, with the other blocking until a socket is free.
If you pull the 6 image files off onto a separate host, say images.domain.com, you get an extra two sockets dedicated to download your images. This parallelizes the asset download process so, in theory, your page could render twice as fast.
Con:
If you're using SSL, you would need to either get an additional single-host SSL certificate for images.domain.com or a wildcard SSL certificate for *.domain.com (matches any subdomain). Failure to do so will generate a warning in the browser saying the page contains mixed secure and insecure content.
With a different domain, you will also avoid sending cookie data with every request, which can improve performance.
Another thing not yet mentioned is that you can use different web servers to serve different sorts of content. For example, your static content could be served via lighttpd or nginx while still serving your dynamic content off Apache.
Pros:
- load balancing
- isolating different functionality
Cons:
- more work (when you create a page on the main site you would have to maintain the resources on the separate server)
Things like XSS are a problem of code not sanitizing input (or output, for that matter). The only issue that could arise is if you have sub-domain-specific cookies that are used for authentication, but that's really a trivial fix.
If you're serving the page over HTTPS and you serve an image from an HTTP domain, then you'll get browser security warnings popping up.
So if you do HTTPS, you'll need to buy HTTPS for your image domain as well if you don't want to annoy the hell out of your users :)
There are other ways around this, but it's not particularly in the scope of this answer - it was just a warning!