Why does PWA cache storage usage not go down? - google-chrome

Every time I put a request into the cache storage, the usage grows (judging by the indicator):
Cache.put(request, response)
But when I delete a request from the storage, the usage does not go down:
Cache.delete(request, options)
I check the list of the cached resources and the request is not there, so it's successfully deleted, but the indicator tells another story.
What am I missing?

Cache.put and Cache.delete both return a promise. Make sure it resolves:
Cache.delete(request, options).then(result => {
// got final result
}).catch(err => {
// got error
});
Additionally, you can check whether your operations are changing the cache by opening it in the Application panel of the browser's DevTools.
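If DevTools confirms the deletion but the usage indicator still looks stale, you can also read the browser's own numbers programmatically. A minimal sketch, assuming a cache named 'my-cache' (hypothetical) and support for the StorageManager API:
async function deleteAndMeasure(request) {
  const cache = await caches.open('my-cache'); // hypothetical cache name
  const deleted = await cache.delete(request); // true if an entry was actually removed
  console.log('entry deleted:', deleted);
  // navigator.storage.estimate() reports usage/quota as the browser currently
  // accounts for them; the figures are estimates and may update lazily.
  const { usage, quota } = await navigator.storage.estimate();
  console.log(`using ${usage} of ${quota} bytes`);
}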

Related

Handling intensive server-side tasks? Do I still use async/await in the front-end?

How do I handle really intensive server-side tasks that can take multiple minutes? It's a user-facing task: the user gives me some data, and the server then works on it in the background.
I am fairly new to this, but I think my browser won't "wait" that long if I am using async/await? But if I don't use async/await, I won't know whether the task completed successfully.
Or am I missing something here?
The bigger the task, the more brittle a solution that depends on a single HTTP request/response becomes. Imagine that the connection breaks after the task has been 99% completed; the client would have to repeat the whole thing.
Instead, I suggest a pattern like the following, which spreads the work across several HTTP requests (a client-side sketch follows the steps):
The client (browser) makes a request like POST /starttask to start the task and receives a "task ID" in the response.
The task runs on the server while the client can do other things. Any results that the task computes are stored in a database under the task ID.
The client can check the task progress by making regular requests like GET /task/<taskID> and receive a progress notification (50% completed). This can be used to animate a "progress bar" on the UI.
When the task is 100% completed and has yielded a result that the client needs to know, it can retrieve that result with a request like GET /taskresult/<taskID>.
If the task result is huge, the client may want to repeat the result retrieval, perhaps with paging (GET /taskresult/<taskID>?page=1 and so on) until it has received and processed the entire result. This should not burden the server much, because it simply reads the task result from the database.
Finally, the client can delete the task result from the server database with another request like POST /taskcleanup/<taskID>.
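A client-side sketch of this flow; the endpoint names come from the steps above, while the response shapes ({ taskId }, { progress }) and the updateProgressBar() helper are assumptions for illustration:
async function runLongTask(data) {
  // Kick off the task and receive its ID.
  const startResponse = await fetch('/starttask', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  const { taskId } = await startResponse.json();

  // Poll for progress until the task reports 100%.
  let progress = 0;
  while (progress < 100) {
    await new Promise(resolve => setTimeout(resolve, 2000)); // poll every 2 s
    ({ progress } = await (await fetch(`/task/${taskId}`)).json());
    updateProgressBar(progress); // hypothetical UI hook
  }

  // Retrieve the result, then let the server clean up.
  const result = await (await fetch(`/taskresult/${taskId}`)).json();
  await fetch(`/taskcleanup/${taskId}`, { method: 'POST' });
  return result;
}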
Using async/await will work, as execution simply waits until the promise (the request to the backend) has been fulfilled. You could show some kind of loading graphic to the user, which is how other websites handle lengthy tasks.
It depends on how big the task is, but for a fairly small task (e.g. 10 seconds) we could use a 'loading' state to decide whether to display the loading graphic:
async function example() {
  setLoading(true); // e.g. a React state setter that drives the loading graphic
  try {
    const response = await axios.get('/user?ID=12345');
    console.log(response);
  } catch (error) {
    console.error(error);
  } finally {
    setLoading(false); // hide the graphic whether the request succeeded or failed
  }
}
Axios Minimal Example
I think it would be bad to keep the connection open waiting for the response for a couple of minutes.
Instead, I would recommend SignalR server side notifications (or equivalent) to notify front end about tasks updates.
Notification DTO would contain all needed information about the task.
Backend:
// Post method
void startTask(params) {
  // start backend processing
  // after completion notify
  signalRHub.notify();
}
On the front end you just need to subscribe to the notifications and add handlers for them.
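On the JavaScript side that could look like the sketch below, using the @microsoft/signalr npm client; the hub URL ('/taskHub') and event name ('taskCompleted') are assumptions that must match what the backend hub actually exposes:
import * as signalR from '@microsoft/signalr';

// Hub URL and event name are assumptions; they must match the backend hub.
const connection = new signalR.HubConnectionBuilder()
  .withUrl('/taskHub')
  .withAutomaticReconnect()
  .build();

connection.on('taskCompleted', notification => {
  // notification is the DTO with all needed information about the task
  console.log('task finished:', notification);
});

connection.start().then(() => console.log('listening for task updates'));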

Cypress: How to visit a url of a different origin?

I'm new to Cypress and have run into an issue. I have my base URL set to the domain I want to test. The issue is that to test logging in on my base URL site, I need to verify the user on another site; once I click Apply on site number 2, the page on my base URL reloads, and I can then test the rest of the site.
When I try to visit site 2 from my test I get an error:
cy.visit() failed because you are attempting to visit a URL that is of
a different origin.
The new URL is considered a different origin because the following
parts of the URL are different:
superdomain
You may only cy.visit() same-origin URLs within a single test.
I read https://docs.cypress.io/guides/guides/web-security.html#Set-chromeWebSecurity-to-false and tried setting "chromeWebSecurity": false in cypress.json, but I still get the same issue (I'm running in Chrome).
Is there something I am missing?
As a temporary but solid workaround, I was able to find this script in one of the Cypress GitHub issue threads (I don't remember where I found it, so I can't link back to it).
Add the below to your Cypress commands file:
Cypress.Commands.add('forceVisit', url => {
  cy.window().then(win => {
    return win.open(url, '_self');
  });
});
and in your tests you can call
cy.forceVisit("www.google.com")
From version 9.6.0 of Cypress, you can use cy.origin.
To use it, you must first set the "experimentalSessionAndOrigin" flag to true:
{
  "experimentalSessionAndOrigin": true
}
And here's how to use it.
cy.origin('www.example.com', () => {
  cy.visit('/')
})
cy.origin changes the base URL within its callback, so you can navigate to the external site via cy.visit('/').
You can stub the redirect from login site to base site, and assert the URL that was called.
Based on Cypress tips and tricks, here is a custom command to do the stubbing.
The login page may be using one of several methods to redirect, so besides the replace(<new-url>) stub given in the tip, I've added href = <new-url> and assign(<new-url>).
Stubbing command
Cypress.Commands.add('stubRedirect', () => {
  cy.once('window:before:load', (win) => {
    win.__location = { // set up the stub
      replace: cy.stub().as('replace'),
      assign: cy.stub().as('assign'),
      href: null,
    }
    cy.stub(win.__location, 'href').set(cy.stub().as('href'))
  })
  cy.intercept('GET', '*.html', (req) => { // catch the page as it loads
    req.continue(res => {
      res.body = res.body
        .replaceAll('window.location.replace', 'window.__location.replace')
        .replaceAll('window.location.assign', 'window.__location.assign')
        .replaceAll('window.location.href', 'window.__location.href')
    })
  }).as('index')
})
Test
it('checks that login page redirects to baseUrl', () => {
  cy.stubRedirect()
  cy.visit(<url-for-verifying-user>)
  cy.wait('@index') // waiting for the page load
  cy.get('button').contains('Apply').click() // trigger the redirect
  const alias = '@replace' // or '@assign' or '@href'
  // depending on the method used to redirect
  // if you don't know which, try each one
  cy.get(alias)
    .should('have.been.calledOnceWith', <base-url-expected-in-redirect>)
})
You can't!
But, maybe it will be possible soon. See Cypress ticket #944.
Meanwhile you can refer to my lighthearted comment in the same thread where I describe how I cope with the issue while Cypress devs are working on multi-domain support:
For everyone following this, I feel your pain! #944 (comment) really gives hope, so while we're patiently waiting, here's a workaround that I'm using to write multi-domain e2e cypress tests today. Yes, it is horrible, but I hope you will forgive me my sins. Here are the four easy steps:
Given that you can only have one cy.visit() per it, write multiple its.
Yes, your tests now depend on each other. Add cypress-fail-fast to make sure you don't even attempt to run other tests if something failed (your whole describe is a single test now, and it makes sense in this sick alternate reality).
It is very likely that you will need to pass data between your its. Remember, we're already on this crazy “wrong” path, so nothing can stop us naughty people. Just use cy.writeFile() to save your state (whatever you might need), and use cy.readFile() to restore it at the beginning of your next it (see the sketch after this list).
Sue me.
All I care about at this point is that my system has tests. If cypress adds proper support for multiple domains, fantastic! I'll refactor my tests then. Until that happens, I'd have to live with horrible non-retriable tests. Better than not having proper e2e tests, right? Right?
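For reference, the state-passing trick from step 3 could look like the sketch below; the second-site URL, the fixture path, and the shape of the saved state are all assumptions:
describe('multi-domain flow', () => {
  it('verifies the user on the external site', () => {
    cy.visit('https://site2.example.com/verify') // hypothetical second site
    // ... click Apply, etc. ...
    // save whatever the next it needs
    cy.writeFile('cypress/fixtures/state.json', { verified: true })
  })

  it('continues on the base URL', () => {
    cy.readFile('cypress/fixtures/state.json').then(state => {
      expect(state.verified).to.be.true
      cy.visit('/') // back on baseUrl
      // ... test the rest of the site ...
    })
  })
})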
You could set window.location.href manually, which triggers a page load; this works for me:
const url = 'http://localhost:8000';
cy.visit(url);
// second "visit"
cy.window().then(win => win.location.href = url);
You will also need to add "chromeWebSecurity": false to your cypress.json configuration.
Note: setting the window location to navigate won't tell Cypress to wait for the page load; you need to wait for the page to load yourself, or use a timeout on get.
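For example, a manual wait might look like this (the selector and timeout are assumptions):
cy.window().then(win => { win.location.href = url; });
// Cypress does not track this navigation, so wait for something on the
// destination page before continuing.
cy.get('#app', { timeout: 10000 }).should('be.visible');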

Chrome xhr call stays in pending status even though server sent a response back

I have a page that makes calls to 2 different endpoints on 2 different routes at around the same time. The second one returns successfully; however, the first one stays in pending status.
I have checked the server logs and the request is received by the server and a response is sent back.
For some reason, the status code of the pending one is 200 even though it says pending.
I tried to replicate this problem on multiple machines but failed. However, on the user's machine it can be replicated every single time.
The user does not have any browser extensions. (Tried in incognito and the problem still occurs.)
All calls are over HTTPS.
The page that makes the requests generally sits at ~100% CPU for a few seconds.
After waiting for a while, the user gets the "Page unresponsive" prompt.
User's Chrome version: 81.0.4044.26 / macOS Mojave. I also tested with the same versions and couldn't replicate.
I'm using axios and the following code to fetch data.
const fetchData = async () => {
  try {
    const result = await axios(url);
    // ...
  } catch (error) {
    // ...
  }
};
I couldn't figure out why this was happening or how to fix it. I would appreciate help.
Thanks
Related Topic: What does "pending" mean for request in Chrome Developer Window?

Interrupted downloads when downloading a file from Web Api (remote host closed error 0x800704CD)

I have read nearly 20 other posts about this particular error, but most seem to be issues with the code calling Response.Close or similar, which is not our case. I understand that this particular error typically means that a user browsed away from the web page or cancelled the request midway, but in our case we are getting this error without cancelling a request. I can observe the error just after a few seconds; the download simply fails in the browser (both Chrome and IE, so it's not browser-specific).
We have a Web API controller that serves a file download:
[HttpGet]
public HttpResponseMessage Download()
{
    // Enumerates a directory and returns a read-only FileStream of the download
    var stream = dataProvider.GetServerVersionAssemblyStream(configuration.DownloadDirectory, configuration.ServerVersion);
    if (stream == null)
    {
        return new HttpResponseMessage(HttpStatusCode.NotFound);
    }

    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(stream)
    };
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = $"{configuration.ServerVersion}.exe";
    response.Content.Headers.ContentType = new MediaTypeHeaderValue(MediaTypeNames.Application.Octet);
    response.Content.Headers.ContentLength = stream.Length;
    return response;
}
Is there something incorrect we are doing in our Download method, or is there something we need to tweak in IIS?
This happens sporadically. I can't observe a pattern; it works sometimes and other times it fails repeatedly.
The file download is about 150MB
The download is initiated from a hyperlink on our web page, there is no special calling code
The download is over HTTPS (HTTP is disabled)
The Web Api is hosted on Azure
It doesn't appear to be timing out; it can happen after just a second or two, so it's not hitting the default 30-second timeout values.
I also noticed I can't seem to initiate multiple simultaneous file downloads from the server, which is worrying. This needs to serve 150+ businesses with multiple simultaneous downloads, so I'm concerned there is something we need to tweak in IIS or the Web API.
I was finally able to fix our problem. For us it turned out to be a combination of two things: 1) we had several memory leaks and CPU-intensive code in our Web API that were impacting concurrent downloads, and 2) we ultimately resolved the issue by changing MinBytesPerSecond (see https://blogs.msdn.microsoft.com/benjaminperkins/2013/02/01/its-not-iis/) to a lower value, or 0 to disable it. We have not had an issue since.
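For reference, minBytesPerSecond is an attribute of the webLimits element in IIS's applicationHost.config; a value of 0 disables the minimum-throughput check:
<configuration>
  <system.applicationHost>
    <webLimits minBytesPerSecond="0" />
  </system.applicationHost>
</configuration>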

Service Worker not caching API content on first load

I've created a service-worker-enabled application that is intended to cache the response from an AJAX call so it's viewable offline. The issue I'm running into is that the service worker caches the page, but not the AJAX response, the first time the page is loaded.
If you visit http://ivesjames.github.io/pwa and switch to airplane mode after the SW toast, it shows no API content. If you go back online, load the page, and do it again, it will load the API content offline on the second load.
This is what I'm using to cache the API response (taken from the Polymer docs):
(function(global) {
  global.untappdFetchHandler = function(request) {
    // Attempt to fetch(request). This will always make a network request, and will include the
    // full request URL, including the search parameters.
    return global.fetch(request).then(function(response) {
      if (response.ok) {
        // If we got back a successful response, great!
        return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
          // First, store the response in the cache, stripping away the search parameters to
          // normalize the URL key.
          return cache.put(stripSearchParameters(request.url), response.clone()).then(function() {
            // Once that entry is written to the cache, return the response to the controlled page.
            return response;
          });
        });
      }
      // If we got back an error response, raise a new Error, which will trigger the catch().
      throw new Error('A response with an error status code was returned.');
    }).catch(function(error) {
      // This code is executed when there's either a network error or a response with an error
      // status code was returned.
      return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
        // Normalize the request URL by stripping the search parameters, and then return a
        // previously cached response as a fallback.
        return cache.match(stripSearchParameters(request.url));
      });
    });
  };
})(self);
And then I define the handler in the sw-import:
<platinum-sw-import-script href="scripts/untappd-fetch-handler.js">
<platinum-sw-fetch handler="untappdFetchHandler"
                   path="/v4/user/checkins/jimouk?client_id=(apikey)&client_secret=(clientsecret)"
                   origin="https://api.untappd.com">
</platinum-sw-fetch>
<paper-toast id="caching-complete"
             duration="6000"
             text="Caching complete! This app will work offline.">
</paper-toast>
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      base-uri="bower_components/platinum-sw/bootstrap"
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-cache default-cache-strategy="fastest"
                     cache-config-file="cache-config.json">
  </platinum-sw-cache>
</platinum-sw-register>
Is there somewhere I'm going wrong? I'm not quite sure why it works on load #2 instead of load #1.
Any help would be appreciated.
While the skip-waiting + clients-claim attributes should cause your service worker to take control as soon as possible, it's still an asynchronous process that might not kick in until after your AJAX request is made. If you want to guarantee that the service worker will be in control of the page, then you'd need to either delay your AJAX request until the service worker has taken control (following, e.g., this technique), or alternatively, you can use the reload-on-install attribute.
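A minimal sketch of the "delay the request" approach, where fetchApiData() is a hypothetical stand-in for the page's AJAX call:
// Resolve once a service worker controls the page.
function whenControlled() {
  return new Promise(resolve => {
    if (navigator.serviceWorker.controller) {
      resolve(); // already controlled (e.g. a repeat visit)
    } else {
      navigator.serviceWorker.addEventListener(
        'controllerchange', () => resolve(), { once: true });
    }
  });
}

// fetchApiData() is a hypothetical stand-in for the AJAX call.
whenControlled().then(fetchApiData);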
Equally important, though, make sure that your <platinum-sw-import-script> and <platinum-sw-fetch> elements are children of your <platinum-sw-register> element, or else they won't have the intended effect. This is called out in the documentation, but unfortunately it's just a silent failure at runtime.