Caching of web pages and SPAs

I have read about SPAs (single page applications) and learned that their biggest advantage is that they save network traffic, because an SPA downloads all (or at least most) of its application resources when the page first loads.
But I am not clear on this: suppose in my index.jsp I have specified all my resources, and they are downloaded when index.jsp loads. My application navigation starts from index.jsp, and to navigate I submit a form which has action="user.jsp".
Now, since I have action="user.jsp", on submitting the form my web browser will send a request to the server to get user.jsp; please correct me if I am wrong. Or will it be taken from the HTTP cache? But let's say that through some Apache setting (I have read somewhere that this is possible, but I don't know how to do it) I have disabled HTTP caching of the page; then user.jsp will be downloaded from the server.
It would be much appreciated if somebody could shed some light on this. Basically, I am confused about how action="user.jsp" leading to a call to the server fits with the fact that HTTP/the browser can cache web pages.

Related

How to get the URL to fully reload each time?

The issue appears to be that the Banno framework is "remembering" the URLs. This happens in a mobile browser when the user does not close the tab or browser. When the user reopens the page, Banno remembers the URL from last time and tries to load the same URL.
What needs to happen is for Banno to fully reload the page so that we can retrieve a new URL and log the user in again.
Could it be how they treat plugins when a browser is left open? A URL that has been loaded is not good forever.
Odds are good that the situation you're encountering is described in https://stackoverflow.com/a/71267143/6680761
Essential info from that link is:
Part of keeping state of the page is keeping authentication data. The OAuth flow used to initially authenticate the user is not intended to be used on every page refresh. It's expected that the embedded web application will keep its own authentication state. How this is done is usually very specific to the language and platform used for the embedded web application. However, nearly all strategies use a cookie which is destroyed when the application closes.
The OAuth callback URL with an authorization code should be redirected away from once the code is exchanged for an access token. From that point forward your app should be using its own authentication mechanism.
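A minimal sketch of that flow, assuming a Node.js/Express backend (the token endpoint, client credentials, routes, and session setup below are placeholders for illustration, not anything Banno-specific):

```javascript
const express = require("express");
const session = require("express-session");

const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

// OAuth callback: exchange the one-time code for a token, then rely on the app's
// own session cookie and redirect away from the callback URL.
app.get("/oauth/callback", async (req, res) => {
  // TOKEN_URL, CLIENT_ID and CLIENT_SECRET are placeholders for your provider's values.
  // Uses Node 18+'s built-in fetch.
  const tokenResponse = await fetch(process.env.TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: req.query.code,
      client_id: process.env.CLIENT_ID,
      client_secret: process.env.CLIENT_SECRET,
    }),
  });
  const { access_token } = await tokenResponse.json();

  // From here on, the app uses its own authentication state, not the OAuth code.
  req.session.accessToken = access_token;
  res.redirect("/"); // leave the callback URL so the code is never replayed
});
```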

How can a web worker task keep executing even though the location URL is redirected?

I use gifjs to generate a lot of GIFs from PNG/JPG files once a user logs in successfully. At the same time, I want to change the location URL to direct the user to my website's main page. But the problem is that the web worker task stops once the URL is changed. So how can the web worker task keep executing even though the location URL is redirected?
If the browser does a full load of a new page, the short answer is that workers will be terminated. Dedicated workers definitely will be, and according to Do Shared Web Workers persist across a single page reload, link navigation, shared workers (that aren't being used by other windows/tabs) will be too.
But...
If you avoid full page loads entirely, and instead make the whole site use JavaScript for navigation and posting of forms, using the HTML History API to change the URL, then the worker will survive as the user logs in and navigates the site. The worker will only be terminated when they leave the site or force a reload in the browser.
Depending on the current setup of your site, this might mean considerable changes to both the browser-side and server-side architecture, the details of which I suspect are beyond the scope of this question.
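For illustration, a minimal sketch of that style of navigation (the element ID, worker file name, and data-spa attribute are made up for the example); because links never trigger a full page load, the worker keeps running:

```javascript
const gifWorker = new Worker("gif-worker.js"); // keeps running across "navigations"

// Fetch a page fragment and swap it into the content area, without a full page load.
async function render(url) {
  const html = await (await fetch(url)).text();
  document.querySelector("#content").innerHTML = html;
}

// Navigate: render the new content, then update the address bar via the History API.
async function navigate(url) {
  await render(url);
  history.pushState({}, "", url);
}

// Intercept clicks on links marked for in-page navigation.
document.addEventListener("click", (event) => {
  const link = event.target.closest("a[data-spa]");
  if (link) {
    event.preventDefault();
    navigate(link.getAttribute("href"));
  }
});

// Back/forward buttons: re-render only; the browser has already updated the URL.
window.addEventListener("popstate", () => render(location.pathname));
```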

Getting information from a website in Processing?

I am currently making a Processing program, part of which will need to access some information from a website. The website is an HTML file where some information is stored, which I need to access and parse. I know how to open an HTML file, but my problem is that I need to access a list which is only generated after logging in to the website. How do I do that?
This is the website, right after loading the HTML file:
http://i.imgur.com/kGIkyle.png
After a login, the website will begin to spit out data every two seconds.
I want to access the data in the ordered list, and I want to access it every two seconds in my Processing program. How do I do that?
This is the website after a login, a moment later:
http://i.imgur.com/O743fNJ.png
When you use a web browser to submit a login, you're really interacting with the server. Usually the web browser submits a POST request containing the login information (like a username and password), and the server responds with the next webpage to load.
The details of this are going to depend on the website you're interacting with. Some websites might use AJAX to submit the data and then trigger some JavaScript to run.
The point is, you're going to have to understand exactly how the underlying web server and webpage works. Then you're going to have to use the rules of those interactions to issue the appropriate requests from your Processing code.
It might be as simple as submitting the login credentials in the url itself and then just scraping the information from the webpage.
More likely, you're going to have to interact with some kind of web API and do the requests yourself. Google "Java post request" for more info.
Of course, all of this assumes that the website is open to people using it. If this website isn't yours, it could also be locked down and unavailable to you.
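As a rough illustration only (shown in browser JavaScript, with made-up URLs and form field names), the interaction described above amounts to a login POST followed by polling the page that holds the list every two seconds; from your Processing code you would reproduce the same two requests with Java's HTTP classes:

```javascript
async function poll() {
  // 1. Log in the same way the site's login form does (field names are guesses;
  //    inspect the real form to find the actual ones).
  await fetch("https://example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ username: "me", password: "secret" }),
  });

  // 2. Fetch and parse the page containing the ordered list every two seconds.
  setInterval(async () => {
    const html = await (await fetch("https://example.com/data.html")).text();
    const doc = new DOMParser().parseFromString(html, "text/html");
    const items = [...doc.querySelectorAll("ol li")].map((li) => li.textContent);
    console.log(items);
  }, 2000);
}

poll();
```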

Disable HTTPS to HTTP redirect

My site is using HTTPS only.
I allow BBCode to embed images. Users are placing images like "https://imagehoster.net/img.png", and the image hoster uses a redirect so the browser loads the image via HTTP as "http://imagehoster.net/img.png". This makes the browser show annoying mixed content warnings. Is there a way to prevent this?
Short answer: no.
Long answer:
They have no real web server listening for SSL; in fact, there is only a firewall/proxy which sends an HTTP redirect back to the browser.
You can't intercept that request, and even if you could, where would you redirect it to?
They don't provide an SSL server either because encryption takes too many resources or because it generates too much traffic, since proxies can't cache encrypted responses.
An idea to solve the problem:
Detect those links, download the images, and store a copy on your server.
Then replace the link. Maybe you only need to store a preview; if the user clicks on it, redirect to the original link in a new browser window.
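One way to sketch that idea, assuming a Node.js backend (the regex, folder, and URL path below are hypothetical): scan each post for http:// image links, save a local copy, and rewrite the link to a file served from your own HTTPS site.

```javascript
const fs = require("fs/promises");
const path = require("path");
const crypto = require("crypto");

async function rehostImages(postHtml) {
  await fs.mkdir(path.join("public", "rehosted"), { recursive: true });

  const links = postHtml.match(/http:\/\/[^\s"<]+\.(png|jpe?g|gif)/gi) || [];
  let result = postHtml;

  for (const url of links) {
    // Derive a stable local file name from the original URL.
    const name = crypto.createHash("sha1").update(url).digest("hex") + path.extname(url);
    const localPath = path.join("public", "rehosted", name);

    // Download the image once and keep a local copy (Node 18+ built-in fetch).
    const response = await fetch(url);
    await fs.writeFile(localPath, Buffer.from(await response.arrayBuffer()));

    // Point the post at our own HTTPS copy instead of the mixed-content URL.
    result = result.replaceAll(url, "/rehosted/" + name);
  }
  return result;
}
```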

How to solve this issue with the HTML5 manifest?

From my experience so far, I've concluded that the HTML5 manifest scheme is really terribly designed.
My site serves a manifest file when a user is logged in. Unfortunately, when they log out, they can still access the cached protected materials. Can anyone think of a way to fix this?
A manifest file is designed to take a website offline while still being able to navigate it. It essentially just tells the browser to download that content and keep it in its cache. If you're adding secret content to the manifest and the user goes offline, he still needs to be able to access it; otherwise, what's the point of having a special logged-in manifest file if he has to be logged in (and therefore online)?
You could add JavaScript that checks whether the user is online again and, if he is, tries to validate the "login state" and then redirects or removes the secret content from localStorage (assuming you use localStorage to store the "secret" content and JavaScript to display it, instead of a manifest file).
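A minimal sketch of that check, assuming a hypothetical /api/session endpoint and a localStorage key named secretContent:

```javascript
// When the browser comes back online, re-check the login state and wipe the cached
// secret content from localStorage if the session is no longer valid.
window.addEventListener("online", async () => {
  const response = await fetch("/api/session", { credentials: "same-origin" });
  if (!response.ok) {
    localStorage.removeItem("secretContent"); // drop the cached protected material
    location.href = "/login";                 // send the user back to the login page
  }
});
```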
Let's say the secret content is an image, you are not using a manifest file but just displaying images while the user is logged in, and it is crucial that the user can't view that image after logout. Then you would need to set the HTTP Cache-Control header to no-cache and the Expires header to some date in the past, so that a normal user would not see it any more. The problem then is that the image is downloaded every time somebody visits the website.
You need to approach the HTML5 Application Cache in a different way. It is not useful for caching server-side dynamically generated pages, especially those that require a login to reach. The Application Cache has no concept of logins, nor of securing a page from somebody with a different login or no login at all.
It is much more appropriate for an AJAX-based site, where all HTML/CSS/JavaScript is static and registered in the Application Cache, and data is instead fetched via AJAX and then used to populate pages. If you need to cache data in the application for offline use, then use one of the offline data storage mechanisms such as Local Storage/Session Storage, or IndexedDB.
You can then make your own judgement on how much data you want to cache offline, since there's no way to validate a login without making a call to the server, which is naturally inaccessible whilst offline.
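A minimal sketch of that split, with a hypothetical /api/dashboard endpoint and element ID: the static shell is listed in the Application Cache, while the data is fetched via AJAX and kept in localStorage for offline use.

```javascript
async function loadDashboard() {
  try {
    // Online: fetch fresh data and cache it for later offline use.
    const data = await (await fetch("/api/dashboard", { credentials: "same-origin" })).json();
    localStorage.setItem("dashboardData", JSON.stringify(data));
    render(data);
  } catch (err) {
    // Offline (or the request failed): fall back to the last cached copy, if any.
    const cached = localStorage.getItem("dashboardData");
    if (cached) render(JSON.parse(cached));
  }
}

function render(data) {
  document.querySelector("#content").textContent = JSON.stringify(data, null, 2);
}

loadDashboard();
```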
What if, when the user logs out or is not logged in, they get served a manifest containing only NETWORK: *?