Currently deploying a CDN using VM instances + HTTPS LB. Everything was set up correctly and I'm checking whether it works as expected. When I make some test requests to CDN URLs from a browser or cURL, the content appears to be cached, but when another user requests the same URL from a different location while hitting the same edge cache, it isn't found and a new cache entry is created. Is anyone having the same problem?
sample:
URL: https://www.sample.com/url.htm
User1/Location1 (Dallas) ------------------------> DAL (not found the first time; by the second or third try it generates Cache ID DAL-XXXXXX1)
After User1 has populated the cache, User2 requests the same URL from a different location but hits the same edge cache:
URL: https://www.sample.com/url.htm
User2/Location2 (McAllen) ------------------------> DAL (not found; generates Cache ID DAL-XXXXXX2)
Why, if the edge PoP has already cached this URL, is it not serving it from cache, and why is a new cache fill generated for the same URL?
Note: I'm not using query strings on any URL.
In your example, DAL-XXXXXX1 and DAL-XXXXXX2 are separate caches. Google Cloud CDN operates multiple caches in many metropolitan areas, and content does not automatically replicate from one cache to another. You won't see cache hits from the DAL-XXXXXX2 cache on the first response served from DAL-XXXXXX2.
There's more information at cloud.google.com/cdn/docs/overview#inserting-into-cache and cloud.google.com/cdn/docs/logging#what_is_logged.
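If it helps to double-check what the edge is doing, here is a minimal sketch (Node 18+ with the built-in fetch; it assumes Cloud CDN's Age header is visible on the response and reuses the sample URL from the question) that requests the same URL twice and reports whether each response looks like a cache hit:

// Minimal sketch: the Age header is typically present only on responses
// served from cache, so two requests in a row should show miss -> hit.
(async () => {
  const url = 'https://www.sample.com/url.htm'; // sample URL from the question
  for (let i = 0; i < 2; i++) {
    const res = await fetch(url);
    const age = res.headers.get('age');
    console.log(`try ${i + 1}:`, age ? `cache hit, age ${age}s` : 'likely a cache miss');
  }
})();

Remember that both requests have to reach the same cache for the second one to be a hit, which is exactly the behaviour described above.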
One of the online documents that talks about AppCache for HTML5 indicates that the cached files get updated once an offline user reconnects. I checked the original HTML5 AppCache definition by the W3C, and I am not able to find anything that supports this statement.
Does anyone know if this is true?
Thanks in advance
MDN says the following, although if you scroll up on that page it says it's being deprecated.
If an application cache exists, the browser loads the document and its associated resources directly from the cache, without accessing the network. This speeds up the document load time.
The browser then checks to see if the cache manifest has been updated on the server.
If the cache manifest has been updated, the browser downloads a new version of the manifest and the resources listed in the manifest. This is done in the background and does not affect performance significantly.
And logic tells me that it would also depend on the app you're using, the server you're trying to connect to and any special settings it might have, how long your browser keeps its history and what it keeps, and, if you saved the page to view offline, whether or not you have all the code/images saved in the right location(s).
Example:
Imagine you saved a page to view offline, and that page has a JS event handler running a while loop that makes an AJAX request every n seconds to do something, like make a number on the page change as long as you are online. If, while the loop is running, you suddenly connect to the internet and it makes the request to the proper URL with the right arguments, then it should go through, even though the URL in your browser might say something like file:///C:/Users/you/Desktop/....
I've done this before, even though my URL was like the one above. One time I was adding Braintree's drop-in JavaScript to a website and using its API on my backend. Trying to load the page when offline = nothing. Online = it updated the spot on the page just fine when I had the required arguments and it was pointing to the right URL. If I went offline again, I could refresh the page and see the same images loaded in the <div>, but I couldn't send any data with it.
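Going back to the MDN description of the background manifest check, here is a hedged sketch using the (now deprecated) window.applicationCache API to pick up that update as soon as it finishes downloading:

// Minimal sketch: when the browser has downloaded an updated cache in the
// background, swap it in and offer to reload so the new files are used.
var appCache = window.applicationCache;

appCache.addEventListener('updateready', function () {
  if (appCache.status === appCache.UPDATEREADY) {
    appCache.swapCache(); // switch to the freshly downloaded cache
    if (confirm('A new version of this page is available. Reload now?')) {
      window.location.reload();
    }
  }
});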
Problem:
Chrome caches too much data, so when I create entries like Post and Comment in my DB, they aren't loaded, but all the existing entries are displayed.
Chrome refuses to run through my script and just displays the page from cache, therefore not showing the new entry.
I can solve this problem partly by using
location.reload(true);
But when I create a post I route back to the overview of all posts, which isn't loaded properly from my API, since the new post isn't showing.
I route back to the overview with
location.replace('../nyheder');
How do I clear the cache while routing to another page?
I rather think it is an issue with your HTTP headers, mainly the response headers. The headers will tell Chrome whether to fetch new data or use its cache.
You should use the If-None-Match and ETag headers.
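If fixing the response headers isn't possible right away, a hedged client-side workaround is to skip the HTTP cache for that one request (the '/api/nyheder' endpoint below is a placeholder, not something from the question):

// Minimal sketch: bypass the browser's HTTP cache for a single request.
async function loadPosts() {
  const response = await fetch('/api/nyheder', {
    cache: 'no-store' // don't read from or write to the HTTP cache
  });
  return response.json();
}

With proper ETag / If-None-Match handling on the server, the default cache mode would revalidate instead of serving stale data, which is the cleaner fix.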
After hosting a website on S3, how can we make changes to the text in its web pages? I deleted the older HTML files from the bucket and uploaded new files with the same names and updated text in the code, but no changes were reflected after refreshing those web pages.
Is there any other way to update web pages of a website already hosted on S3? If so, would somebody please post the steps here to make those updates? TIA.
I notice you have CloudFront in your tags, so that is most likely the issue. When you upload a file to S3, CloudFront won't know about it right away if it's an existing file. Instead, it defaults to a 24-hour TTL before it checks your origin (in this case your S3 bucket) to see whether any changes have been made and whether it needs to update the cache. There are a few ways to make it update the cache for those files:
Using files with versions in their names, and updating links. The downside is that you have to make more changes than normal to get this to work.
Invalidating the cache. This is not what Amazon recommends, but it is nonetheless a quick way to make the cache pick up new changes right away (a minimal SDK sketch follows after this list). Note that there can be charges if you do a lot of invalidations:
No additional charge for the first 1,000 paths requested for invalidation each month. Thereafter, $0.005 per path requested for invalidation
Using Behaviors:
Here is where you can assign a path (individual file, folders, etc.) and adjust certain properties. One of them is the TTL (Time To Live) of the path in question. If you make the TTL a smaller value, CloudFront will pick up changes more quickly. However, since you have an S3 origin, note that you'll have to account for the additional origin requests. Also, CloudFront will need some time to distribute these changes to all the edge servers.
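For the invalidation option mentioned above, a minimal sketch with the AWS SDK for JavaScript (v2) might look like this; the distribution ID and the path are placeholders, not values from the question:

// Minimal sketch: ask CloudFront to refetch the changed file from the S3 origin.
const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
  DistributionId: 'EDFDVBD6EXAMPLE',               // placeholder distribution ID
  InvalidationBatch: {
    CallerReference: Date.now().toString(),        // must be unique per request
    Paths: { Quantity: 1, Items: ['/index.html'] } // placeholder path
  }
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Invalidation started:', data.Invalidation.Id);
});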
Hope this helps.
In case you are not using CloudFront, but just a normal static S3 website: check whether your browser may be caching the pages.
Chrome at least does. So, updating the pages in S3 might not be visible until you clear the browser cache.
In Chrome you can remove the cache as follows:
Open settings
Search for 'cache'
and remove pictures & files.
I am creating a virtual reality 360-degree video website using the krpano HTML5 player.
This was going great until testing on Safari, when I realised it didn't work. The reason for this is that Safari does not support CORS (cross-origin resource sharing) for videos going through WebGL.
To clarify, if my videos were on the same server as my application files it would work, but because I have my files hosted on Amazon S3, they are cross-origin. Now I'm unsure what to do, because I have built my application on DigitalOcean, which connects to my Amazon S3 bucket, but I cannot afford to upgrade my droplet just to get the storage I need (which is around 100 GB to start and will increase to terabytes in the future as my video collection gets bigger).
So does anyone know a way I can get around this to make it seem like the video is not coming from a different origin, or alternatively anything I can do to get past this obstacle?
Is there any way that I could set up Amazon S3 and Amazon EC2 so that they don't see each other as cross-origin?
EDIT:
I load my videos like this:
<script>
    function showVideo() {
        embedpano({
            swf: "/krpano/krpano.swf",
            xml: "/krpano/videopano.xml",
            target: "pano",
            html5: "only",
        });
    }
</script>
This then calls my xml file which calls the video file:
<krpano>
<!-- add the video sources and play the video -->
<action name="add_video_sources">
videointerface_addsource('medium', 'https://s3-eu-west-1.amazonaws.com/myamazonbucket/Shoots/2016/06/the-first-video/videos/high.mp4|https://s3-eu-west-1.amazonaws.com/myama…ideos/high.webm');
videointerface_play('medium');
</action>
</krpano>
I don't know exactly how the krpano core works; I assume the JavaScript gets the URLs from the XML file and then makes a request to pull them in.
#datasage mentions in comments that CloudFront is a common solution. I don't know if this is what he was thinking of but it certainly will work.
I described using this solution to solve a different problem, in detail, on Server Fault. In that case, the question was about integrating the main site and "/blog/*" from a different server under a single domain name, making a unified web site.
This is exactly the same thing you need, for a different reason.
Create a CloudFront distribution, setting the alternate domain name to your site's name.
Create two (or more) origin servers pointing to your dynamic and static content origin servers.
Use one of them as default, initially handling all possible path patterns (*, the default cache behavior) and then carve out appropriate paths to point to the other origin (e.g. /asset/* might point to the bucket, while the default behavior points to the application itself).
In this case, CloudFront is not being used for its primary purpose as a CDN; instead, we're leveraging a secondary capability, using it as a reverse proxy that can selectively route requests to multiple back-ends based on the path of the request, without the browser being aware that there are in fact multiple origins, because everything sits behind the single hostname (which, obviously, you'll need to point to CloudFront in DNS).
The caching features can be disabled if you don't yet want/need/fully understand them, particularly on requests to the application itself, where disabling caching is easily done by selecting the option to forward all request headers to the origin in any cache behavior that sends requests to the application. For your objects in S3, be sure you've set appropriate Cache-Control headers on the objects when you uploaded them, or you can add them after uploading, using the S3 console.
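As a hedged sketch of that last point, this is how you might set Cache-Control at upload time with the AWS SDK for JavaScript (v2); the bucket name, key, and max-age are placeholders:

// Minimal sketch: upload an object with a Cache-Control header so CloudFront
// and browsers know how long they may cache it.
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putObject({
  Bucket: 'my-static-assets-bucket',        // placeholder bucket name
  Key: 'asset/logo.png',                    // placeholder object key
  Body: fs.createReadStream('logo.png'),
  ContentType: 'image/png',
  CacheControl: 'public, max-age=86400'     // cache for one day (placeholder value)
}, (err) => {
  if (err) console.error(err);
  else console.log('Uploaded with Cache-Control set');
});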
Side bonus: using CloudFront allows you to easily enable SSL for the entire site, with a free SSL certificate from AWS Certificate Manager (ACM). The certificate needs to be created in the us-east-1 region of ACM, regardless of where your bucket is, because that is the region CloudFront uses when fetching the cert from ACM. This is a provisioning detail only and has no performance implications if your bucket is in another region.
You need to allow your host in the CORS configuration of your S3 bucket.
Refer to Add CORS Configuration in Editing Bucket Permissions.
After that, every request you make to the S3 bucket's files will have the CORS headers set.
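If you prefer to apply the CORS configuration programmatically rather than through the console, a minimal sketch with the AWS SDK for JavaScript (v2) might look like this; the bucket name and allowed origin are placeholders:

// Minimal sketch: allow GET/HEAD requests from your site's origin on the bucket.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putBucketCors({
  Bucket: 'myamazonbucket',                        // placeholder bucket name
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ['https://www.example.com'], // your site's origin (placeholder)
      AllowedMethods: ['GET', 'HEAD'],
      AllowedHeaders: ['*'],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => {
  if (err) console.error(err);
  else console.log('CORS configuration applied');
});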
In case you need to serve the content via the AWS CDN, CloudFront, then follow these steps; ignore them if you serve content directly via S3:
Go to AWS CloudFront Console.
Select your CloudFront Distribution.
Go to Behaviors Tab.
Create a behavior (for the files which need to be served with CORS headers).
Enter Path Pattern, Select Protocol & Methods.
Select All in Forward Headers option.
Save the behavior.
If needed, invalidate the CloudFront edge caches by running an invalidation request for the files you just allowed for CORS.
I'm working on a web project where performance is a very important issue.
EDIT:
The situation:
I want to add some details about the user's workflow:
1. The user visits the welcome page of my website http://example.org/ .
2. He clicks a link in order to visit the page http://example.org/mypage .
3. The link's onclick handler is executed.
4. The handler loads data using XHR.
5. The handler creates http://example.org/mypage dynamically.
6. The handler saves mypage locally using the FileSystem API at filesystem:http://example.org/mypage. EDIT: (filesystem:http://example.org/mypage is a local resource stored in the FileSystem on the client side)
7. The handler extends the history and changes the URL in the location bar using the History API from http://example.org/ (the URL of the welcome page) to http://example.org/mypage (the page which the user wants to see).
8. The user visits another page in the meantime.
9. Later on, the user types http://example.org/mypage directly into the location bar.
10. The browser shows/loads filesystem:http://example.org/mypage (which is the locally stored version of http://example.org/mypage) instead of http://example.org/mypage. That means: the browser doesn't create a new request; it uses the locally stored copy of http://example.org/mypage.
How can I get the browser to use the locally stored version of the page instead of creating a new request? EDIT: That's what I want to do in #10 of the list above.
EDIT:
My Question:
A client-side script has already created/generated http://example.org/mypage in #2 to #7 of the list above. I don't need to create that page another time. That's why I don't want the browser to create a request for http://example.org/mypage.
This is what I want to do:
If filesystem:http://example.org/mypage has already been created (that is, if the user has already visited http://example.org/mypage):
Use filesystem:http://example.org/mypage instead of http://example.org/mypage.
Otherwise:
Send a request for http://example.org/mypage
Attempted solutions:
I can't use the FALLBACK section of the manifest file to do something like the following: EDIT: (aside from the origin restriction)
FALLBACK:
http://example.org/mypage filesystem:http://example.org/mypage
In order to get the browser to use the local version stored in the FileSystem, because FALLBACK directives are only applied if the user is offline; otherwise they are ignored. EDIT: But I want to use filesystem:http://example.org/mypage instead of http://example.org/mypage even if the user is online.
I know that I can use the Expires field in the response header of a server-generated page in order not to create a new request and to use the cached version.
But what if I create a page dynamically on the client side using JS and XHRs? EDIT: (I described that case in "The situation" above.) When I create a page on the client side there's no way to get the browser to cache that page. That's why I "cache" the page manually using the FileSystem API to store it on the client side.
In order to improve performance I'm trying to store locally any page which the user has already visited. When the user visits a page again, I show him the old, locally stored version of the page, and my script creates an XHR to find out if the page has changed in the meantime.
But how can I get the browser to use the local version of the page?
I can save the generated page locally on the client side using the FileSystem API, and I can choose a URL for the generated page to display in the browser's location bar using the History API.
When the user now visits another site and then presses the back button, I can catch the popstate event with an event handler.
And that event handler can load the dynamically created file using the FileSystem API.
But what should I do if the user doesn't use the back button and instead types the URL, which I have registered using the History API, directly into the location bar?
Then the browser wouldn't use the locally stored version of the page, the browser would create a request to load the page from the server.
Don't put dynamic data in the application cache. If you want to put dynamic data in your pages then get it from the server with AJAX, store the data in Local Storage, and populate the page with the data from storage through JavaScript (you can hook into the History API for this).
By the way, this won't work because fallback entries have to be on the same domain:
FALLBACK:
http://example.org/mypage filesystem:http://example.org/mypage
Once your page is in the Application Cache (i.e. it is locally stored) the browser will always use the version from the Application Cache until the manifest is updated or the user deletes the cache. It doesn't really matter what expiry headers you put on the page, except that if you put a long expiry and you frequently update the manifest, it's likely the Application Cache will be populated from the browser cache rather than refreshed from the server. This is why the stuff you put in the Application Cache should be static files. Get your dynamic stuff with AJAX.
You might use URLs that encode the actual link within your hierarchy, e.g. "mypage", in the anchor part of the URL, i.e. http://example.com/#mypage. Then you can use window.location.hash to obtain the string after the # and do whatever magic you want. Just make sure your root (or whatever you want in front of the #) is in AppCache.
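A minimal sketch of that hash-routing idea, assuming the generated pages are kept in localStorage instead of the FileSystem API; the element ID, storage keys, and fragment URL are placeholders:

// Minimal sketch: serve a locally stored page for http://example.com/#mypage,
// and fall back to one XHR that fetches and caches it the first time.
window.addEventListener('hashchange', loadFromHash);
window.addEventListener('load', loadFromHash);

function loadFromHash() {
  var page = window.location.hash.slice(1) || 'welcome'; // e.g. "mypage"
  var cached = localStorage.getItem('page:' + page);

  if (cached !== null) {
    // Already visited: use the locally stored copy, no request needed.
    document.getElementById('content').innerHTML = cached;
  } else {
    // First visit: fetch the fragment once and cache it for next time.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/fragments/' + page + '.html'); // hypothetical endpoint
    xhr.onload = function () {
      localStorage.setItem('page:' + page, xhr.responseText);
      document.getElementById('content').innerHTML = xhr.responseText;
    };
    xhr.send();
  }
}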