Any workaround for Chrome M40 redirect bug for service workers? - google-chrome

We have images that redirect from our media server to a CDN, and I'm trying to exclude them from my service worker logic to work around the bug in Chrome 40. In Canary the same worker works just fine. I thought there was an event.default() to fall back to the standard behavior, but I don't see that in Chrome's implementation, and reading the spec it seems like the current recommendation is to just use fetch(event.request).
So my problem is: do I have to wait until 99% of our users move to Chrome 41+ before I can use service workers in this scenario, or is there some way I can opt out for certain requests?
The core of my logic is below:
worker.addEventListener('install', function(event){
  event.waitUntil(getDefaultCache().then(function(cache){
    return cache.addAll(precacheUrls);
  }));
});

worker.addEventListener('fetch', function(event){
  event.respondWith(getDefaultCache().then(function(cache){
    return cache.match(event.request).then(function(response){
      if (!response){
        return fetch(event.request.clone()).then(function(response){
          if (cacheablePatterns.some(function(pattern){
            return pattern.test(event.request.url);
          })) {
            cache.put(event.request, response.clone());
          }
          return response;
        });
      }
      return response;
    });
  }));
});

Once you're inside an event.respondWith(), you do need to issue a response or you'll incur a Network Error. You're correct that event.default() isn't currently implemented.
A general solution is to not enter the event.respondWith() if you can determine synchronously that you don't want to handle the event. A basic example is something like:
function fetchHandler(event) {
  if (event.request.url.indexOf('abc') >= 0) {
    event.respondWith(abcResponseLogic);
  } else if (event.request.url.indexOf('def') >= 0) {
    event.respondWith(defResponseLogic);
  }
}

self.addEventListener('fetch', fetchHandler);
If event.respondWith() isn't called, then this fetch handler is a no-op, and any additional registered fetch handlers get a shot at the request. Multiple fetch handlers are called in the order in which they're added via addEventListener, one at a time, until the first one calls event.respondWith().
If no fetch handlers call event.respondWith(), then the user agent makes the request exactly as it normally would if there were no service worker involvement.
The one tricky thing to take into account is that the determination as to whether to call event.respondWith() needs to be done synchronously inside each fetch handler. Anything that relies on asynchronous promise resolution can't be used to determine whether or not to call event.respondWith(). If you attempt to do something asynchronous and then call event.respondWith(), you'll end up with a race condition, and likely will see errors in the service worker console about how you can't respond to an event that was already handled.
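Applied to the original question, a sketch of that opt-out might look like the following; mediaServerPattern is a made-up placeholder for whatever identifies the requests that redirect to the CDN, and getDefaultCache() is the helper from the question:
// Placeholder pattern: adjust it to match the media-server URLs that redirect to the CDN.
var mediaServerPattern = /\/media\//;

self.addEventListener('fetch', function (event) {
  if (mediaServerPattern.test(event.request.url)) {
    // Don't call respondWith(): the browser performs the request (and follows
    // the redirect) exactly as if no service worker were involved.
    return;
  }

  event.respondWith(getDefaultCache().then(function (cache) {
    return cache.match(event.request).then(function (response) {
      return response || fetch(event.request);
    });
  }));
});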

Target closed outside of normal flow

I'm trying to automate a process with puppeteer. When I added a new feature that involves using a new tab opened in a different window, I started getting a Target closed error (stack below). I'm familiar with this error in other situations, but this time I don't have a clue as to why it's happening. The version of puppeteer I'm using is 19.0.0.
This is the error stack:
Target closed
at node_modules/puppeteer-core/src/common/Page.ts:1599:26
at onceHandler (node_modules/puppeteer-core/src/common/EventEmitter.ts:130:7)
at node_modules/puppeteer-core/lib/cjs/third_party/mitt/index.js:3:232
at Array.map (<anonymous>)
at Object.emit (node_modules/puppeteer-core/lib/cjs/third_party/mitt/index.js:3:216)
at CDPSessionImpl.emit (node_modules/puppeteer-core/src/common/EventEmitter.ts:118:18)
at CDPSessionImpl._onClosed (node_modules/puppeteer-core/src/common/Connection.ts:457:10)
at Connection.onMessage (node_modules/puppeteer-core/src/common/Connection.ts:164:17)
at WebSocket.<anonymous> (node_modules/puppeteer-core/src/common/NodeWebSocketTransport.ts:50:24)
at WebSocket.onMessage (node_modules/puppeteer-core/node_modules/ws/lib/event-target.js:199:18)
When I skip the procedure that involves the second window, the error doesn't show.
This is my cleanup method, which runs when the process has finished:
public async destroy() {
  let browserIsConnected: boolean = !!this._browser?.isConnected();
  if (this._browser && browserIsConnected) {
    for (let pg of await this._browser.pages()) {
      this.logger.debug(`Closing page ${pg.url()}`);
      await pg.close();
    }
    this.logger.debug(`Closing browser instance...`);
    await this._browser?.close();
    this.logger.log(`Closed browser connection`);
  } else {
    this.logger.log(`Browser already destroyed`);
  }
  delete this._browser;
}
I tried omitting the page.close() calls, which didn't change anything, and try/catching every library call in the method, but none of them throw. When running the code, the error is logged in parallel with this._browser?.close(), framed by the log statements above and below it. However, the stack does not relate to the function call and I don't know how to catch it. Other than this, the process runs smoothly and the browser closes successfully, but this error is making my integration tests fail. Sorry about not sharing a reproducible case, but I couldn't reproduce it without disclosing my business logic.
My question is: why is this happening? is there any way to avoid it?
I eventually figured this out: the source of the problem wasn't in the cleanup code above but in the event handling logic during the process. While waiting for the popup to show up, I was doing the following:
page.once('popup', async (newPage: Page) => {
  // capture the information inside the page, with many awaits
});
What I didn't know was that mitt, puppeteer's underlying event handling library, doesn't support asynchronous event handlers, so my handler was never properly awaited. I solved this by resolving a promise from the handler and awaiting it further down in the code:
let pagePromiseResolve: Function;
let pagePromise: Promise<Page> = new Promise(resolve => {
  pagePromiseResolve = resolve;
});
page.once('popup', newPage => pagePromiseResolve(newPage));
let newPage = await pagePromise;
// capture the information inside the page, with many awaits
I'm leaving this here in case it helps somebody, either with this specific use case or with awaiting events from a library like mitt.
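As a small generalization of the same pattern (a sketch of mine, not part of puppeteer's API), any one-off event can be turned into an awaitable promise:
// Resolve the first time `emitter` fires `eventName`; useful with emitters that,
// like mitt, don't await asynchronous handlers.
function onceAsPromise(emitter: any, eventName: string): Promise<any> {
  return new Promise(resolve => emitter.once(eventName, resolve));
}

// Usage with the question's page:
// const newPage: Page = await onceAsPromise(page, 'popup');
// capture the information inside newPage, with as many awaits as needed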

In angular 2+ (Component => Service A => ServiceB), need help to understand the flow

In Angular 2+, I have Component A, which calls Service A, where I make some changes and call Service B (which makes the HTTP calls) and get the data, which is simply passed back to Service A. Now I need to subscribe in Service A to see the data and also subscribe in Component A to display the data there?
Why do I need to subscribe in two places, which means the HTTP call is made twice (which is not good at all)?
What is the best way to fetch and store the data in Service A by subscribing, do all the manipulation there, and simply send that object back to Component A to display it? I even tried setting a variable inside the subscribe block in Service A, but when I try to log that variable outside the subscribe block, it is undefined.
Thanks for the help.
While searching for the answer, I found one way (call it a workaround), which is to use the async/await feature in Angular with HttpClient.
It basically waits at the same line of execution until you get a result (success or error), and then proceeds with the next line of execution.
for example:
async myFunction() {
  this.myResult = await this.httpClient.get(this.url).toPromise();
  console.log('No issues, it will wait till myResult is populated.');
}
Explanation:
Adding async in front of the function lets it know that execution needs to wait, and at the desired place (usually the HTTP service call, since I need to wait until I get the result) we put await. Execution then waits until the response comes back, and afterwards you can simply return the variable.
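To put that into the question's Component A => Service A => Service B flow, here is a minimal sketch under my own naming (DataServiceA, ApiServiceB, loadData, and getData are placeholders, and ApiServiceB is assumed to wrap the HttpClient call shown above):
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable()
export class ApiServiceB {
  constructor(private httpClient: HttpClient) {}

  getData(url: string): Promise<any> {
    // Service B only performs the HTTP call and hands the raw result back.
    return this.httpClient.get(url).toPromise();
  }
}

@Injectable()
export class DataServiceA {
  result: any;

  constructor(private serviceB: ApiServiceB) {}

  async loadData(url: string): Promise<any> {
    // Wait here until Service B's HTTP call resolves, then manipulate and store the data.
    const raw = await this.serviceB.getData(url);
    this.result = this.manipulate(raw);
    return this.result;
  }

  private manipulate(data: any): any {
    return data; // placeholder for whatever changes Service A makes
  }
}

// In Component A a single await replaces subscribing in two places:
// async ngOnInit() { this.data = await this.dataServiceA.loadData(this.url); }
This way the HTTP call is made once, the manipulated object lives in Service A, and Component A just awaits the finished result.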

use of $timeout with 0 milliseconds

HttpMethod.CallHttpPOSTMethod('POST', null, path).success(function (response) {
  console.log(response);
  $scope.htmlString = $sce.trustAsHtml(response.data[0]);
  $timeout(function () {
    var temp = document.getElementById('form');
    if (temp != null) {
      temp.submit();
    }
  }, 0);
});
I get an HTML string in the response of my API call, and then I add the HTML to my view page.
If I write the code outside the $timeout service it won't work, whereas it does work when written inside $timeout.
What is the difference between two ways?
How is $timeout useful here?
When you change scope values from asynchronous code, two-way binding does not pick the changes up on its own. If the asynchronous code is wrapped in one of the special services ($timeout, $scope.$apply, etc.), the binding will happen. For the current code example, I would try replacing your code with:
HttpMethod.CallHttpPOSTMethod('POST', null, path).success(function (response) {
  console.log(response);
  $scope.htmlString = $sce.trustAsHtml(response.data[0]);
  var temp = document.getElementById('form');
  if (temp != null) {
    temp.submit();
  }
  $scope.$apply();
});
I'll try to give you an answer in very simple language; I hope it helps you understand your issue.
Generally, when an HTTP request fires, it is sent to the server and the data comes back from the server; that is the general scenario we have in mind. However, due to network latency, the response may sometimes arrive with a delay.
An AngularJS application has its own lifecycle.
The root scope is created during application bootstrap by the $injector. During template linking, directive binding creates new child scopes.
While the template is being linked, watches are registered on the particular scope to identify particular changes.
In your case, when the template is linked and the directive is bound, a new watcher is registered. Due to network latency or some other reason, the response to your $http request arrives late, and by that time the scope variable has already changed; because of that, the view does not show the updated response.
Sending an $http request to a server is an asynchronous operation. When you use $timeout, your scope binding effectively waits for the delay you defined in the $timeout function. After that delay, the watch on your scope variable is executed and it updates the value if the response has arrived in time.
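A small sketch of what that means in practice, assuming the question's controller right after the response HTML has been assigned (the 'form' id comes from the question):
$scope.htmlString = $sce.trustAsHtml(response.data[0]);

// Runs immediately, before the digest cycle has re-rendered the view, so the
// form injected via the bound HTML may not exist in the DOM yet.
console.log(document.getElementById('form')); // may still be null here

// $timeout(fn, 0) schedules fn after the current digest cycle and DOM update,
// so even a 0 ms delay is enough for the injected form to be present.
$timeout(function () {
  console.log(document.getElementById('form')); // the form element should exist now
}, 0);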

Service Worker not caching API content on first load

I've created a service-worker-enabled application that is intended to cache the response from an AJAX call so it's viewable offline. The issue I'm running into is that the service worker caches the page, but not the AJAX response, the first time it's loaded.
If you visit http://ivesjames.github.io/pwa and switch to airplane mode after the SW toast, it shows no API content. If you go back online, load the page, and do it again, it will load the API content offline on the second load.
This is what I'm using to cache the API response (Taken via the Polymer docs):
(function(global) {
  global.untappdFetchHandler = function(request) {
    // Attempt to fetch(request). This will always make a network request, and will include the
    // full request URL, including the search parameters.
    return global.fetch(request).then(function(response) {
      if (response.ok) {
        // If we got back a successful response, great!
        return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
          // First, store the response in the cache, stripping away the search parameters to
          // normalize the URL key.
          return cache.put(stripSearchParameters(request.url), response.clone()).then(function() {
            // Once that entry is written to the cache, return the response to the controlled page.
            return response;
          });
        });
      }

      // If we got back an error response, raise a new Error, which will trigger the catch().
      throw new Error('A response with an error status code was returned.');
    }).catch(function(error) {
      // This code is executed when there's either a network error or a response with an error
      // status code was returned.
      return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
        // Normalize the request URL by stripping the search parameters, and then return a
        // previously cached response as a fallback.
        return cache.match(stripSearchParameters(request.url));
      });
    });
  };
})(self);
And then I define the handler in the sw-import:
<platinum-sw-import-script href="scripts/untappd-fetch-handler.js">
<platinum-sw-fetch handler="untappdFetchHandler"
                   path="/v4/user/checkins/jimouk?client_id=(apikey)&client_secret=(clientsecret)"
                   origin="https://api.untappd.com">
</platinum-sw-fetch>
<paper-toast id="caching-complete"
             duration="6000"
             text="Caching complete! This app will work offline.">
</paper-toast>
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      base-uri="bower_components/platinum-sw/bootstrap"
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-cache default-cache-strategy="fastest"
                     cache-config-file="cache-config.json">
  </platinum-sw-cache>
</platinum-sw-register>
Is there somewhere I'm going wrong? I'm not quite sure why it works on load #2 instead of load #1.
Any help would be appreciated.
While the skip-waiting + clients-claim attributes should cause your service worker to take control as soon as possible, it's still an asynchronous process that might not kick in until after your AJAX request is made. If you want to guarantee that the service worker will be in control of the page, then you'd need to either delay your AJAX request until the service worker has taken control (following, e.g., this technique), or alternatively, you can use the reload-on-install attribute.
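For the first option, a minimal framework-agnostic sketch (mine, not taken from the Polymer elements) of delaying the request until a service worker controls the page could look like this; the API URL is reconstructed from the question and its credentials are omitted:
// Resolve once a service worker controls this page, either immediately or
// after the next controllerchange event.
function whenControlled() {
  return new Promise(function (resolve) {
    if (navigator.serviceWorker.controller) {
      resolve();
      return;
    }
    navigator.serviceWorker.addEventListener('controllerchange', function onChange() {
      navigator.serviceWorker.removeEventListener('controllerchange', onChange);
      resolve();
    });
  });
}

whenControlled().then(function () {
  // The fetch handler is guaranteed to see this request, so its response can be
  // cached on the very first load. client_id/client_secret are left out here.
  var apiUrl = 'https://api.untappd.com/v4/user/checkins/jimouk';
  return fetch(apiUrl);
});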
Equally important, though, make sure that your <platinum-sw-import-script> and <platinum-sw-fetch> elements are children of your <platinum-sw-register> element, or else they won't have the intended effect. This is called out in the documentation, but unfortunately it's just a silent failure at runtime.

nodejs and the non-blocking nightmare

I'm currently developing an API using Node.js and MySQL. I'm new to this non-blocking stuff, and I have a question. I'm using Node and the MySQL module.
Say that we have a function like this:
function doQuery(sql, callback) {
  connect(); // does the Client.connect()
  client.query(sql, function(err, results, fields) {
    if (err) {
      errorLog.trace(err, __filename);
      throw err;
    } else {
      logger.trace('DATABASE ACCESS: {query: ' + sql + '} result: OK', __filename);
    }
    client.end();
    callback(results);
  });
}
Everything runs OK and the callback handles the returned values, but there's something that bothers me. My browser waits until the response is back, and I don't know whether that's because node is actually blocked during this time or not.
So, how can I know if an operation is actually blocking my node process? I thought that when you pass a callback to a function, node automatically handles it and puts the execution of this callback in the event loop's queue. But I'm not actually sure about that.
Does all this make any sense to you?
There is a difference between the browser waiting and node.js blocking.
The browser has to wait because it can't get the data back instantly. The browser will stop waiting once you send the response back. Just because the browser is waiting doesn't mean that node.js is blocking; it just means that the connection is still open.
Node.js idles while you're waiting for the callback; it does not block.
Node.js can have thousands of open connections with browser clients. This does not mean it's blocking on each one. It simply means it is idling until it has a callback to handle or a new request to handle.
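If you want to convince yourself of that, a rough sketch like the one below (reusing the question's doQuery with a throwaway SQL statement) shows the event loop still ticking while the database round-trip is pending:
var ticks = 0;
var timer = setInterval(function () {
  ticks++; // keeps incrementing while the query is in flight
}, 10);

doQuery('SELECT 1', function (results) {
  clearInterval(timer);
  // If node were blocked during the query, ticks would still be 0 here.
  console.log('query done, event loop ticked ' + ticks + ' times in the meantime');
});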