net::ERR_CONNECTION_RESET with service worker in Chrome

I have a very simple service worker to add offline support. The fetch handler looks like
self.addEventListener("fetch", function (event) {
var url = event.request.url;
event.respondWith(fetch(event.request).then(function (response) {
//var cacheResponse: Response = response.clone();
//caches.open(CURRENT_CACHES.offline).then((cache: Cache) => {
// cache.put(url, cacheResponse).catch(() => {
// // ignore error
// });
//});
return response;
}).catch(function () {
// check the cache
return getCachedContent(event.request);
}));
});
Intermittently we are seeing a net::ERR_CONNECTION_RESET error for a particular script we load into the page while online. The error is not coming from the server, as the service worker is picking up the file from the browser cache. Chrome's network tab shows that the service worker has successfully fetched the file from the disk cache, but the request from the page to the service worker shows as (failed).
Does anyone know the underlying issue causing this? Is there a problem with my service worker implementation?

This is likely due to a bug in Chrome (and potentially other browsers as well) that could result in a garbage collection event removing a reference to the response stream while it's still being read.
Its fix in Chrome is being tracked at https://bugs.chromium.org/p/chromium/issues/detail?id=934386.
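Until that fix lands, one possible mitigation (my own assumption, not something confirmed in the bug thread) is to buffer the response body inside the service worker, so the stream is fully consumed while the worker still holds a reference to it, at the cost of losing streaming for large files. A minimal sketch based on the handler above:
self.addEventListener("fetch", function (event) {
    event.respondWith(fetch(event.request).then(function (response) {
        // Read the whole body while we still hold a reference to the stream,
        // then hand the page a fresh Response built from the buffered data.
        return response.blob().then(function (body) {
            return new Response(body, {
                status: response.status,
                statusText: response.statusText,
                headers: response.headers
            });
        });
    }).catch(function () {
        // check the cache
        return getCachedContent(event.request);
    }));
});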

Related

SERVICE WORKER: The service worker navigation preload request failed with network error: net::ERR_INTERNET_DISCONNECTED in Chrome 89

I have a problem with my Service Worker.
I'm currently implementing offline functionality with an offline.html site to be shown in case of network failure. I have implemented Navigation Preloads as described here: https://developers.google.com/web/updates/2017/02/navigation-preload#activating_navigation_preload
Here is my install event listener, where I call skipWaiting() and initialize the new cache:
const version = 'v.1.2.3'
const CACHE_NAME = '::static-cache'
const urlsToCache = ['index~offline.html', 'favicon-512.png']

self.addEventListener('install', function(event) {
  self.skipWaiting()
  event.waitUntil(
    caches
      .open(version + CACHE_NAME)
      .then(function(cache) {
        return cache.addAll(urlsToCache)
      })
      .then(function() {
        console.log('WORKER: install completed')
      })
  )
})
Here is my activate event listener, where I feature-detect navigationPreload and enable it. Afterwards I check for old caches and delete them:
self.addEventListener('activate', event => {
  console.log('WORKER: activated')
  event.waitUntil(
    (async function() {
      // Feature-detect
      if (self.registration.navigationPreload) {
        // Enable navigation preloads!
        console.log('WORKER: Enable navigation preloads')
        await self.registration.navigationPreload.enable()
      }
    })().then(() =>
      caches.keys().then(function(cacheNames) {
        return Promise.all(
          cacheNames.map(function(cacheName) {
            if (cacheName !== version + CACHE_NAME) {
              console.log(cacheName + ' CACHE deleted')
              return caches.delete(cacheName)
            }
          })
        )
      })
    )
  )
})
This is my fetch event listener:
self.addEventListener('fetch', event => {
  const { request } = event
  // Always bypass for range requests, due to browser bugs
  if (request.headers.has('range')) return
  event.respondWith(
    (async function() {
      // Try to get from the cache:
      const cachedResponse = await caches.match(request)
      if (cachedResponse) return cachedResponse
      try {
        const response = await event.preloadResponse
        if (response) return response
        // Otherwise, get from the network
        return await fetch(request)
      } catch (err) {
        // If this was a navigation, show the offline page:
        if (request.mode === 'navigate') {
          console.log('Err: ', err)
          console.log('Request: ', request)
          return caches.match('index~offline.html')
        }
        // Otherwise throw
        throw err
      }
    })()
  )
})
Now my problem:
On my local machine on localhost everything just works as it should. If the network is offline, the index~offline.html page is delivered to the user. If I deploy to my test server everything works as expected as well, except for a strange error message in Chrome during normal browsing (not offline mode):
The service worker navigation preload request failed with network error: net::ERR_INTERNET_DISCONNECTED.
I logged the error and the request to get more information
Error:
DOMException: The service worker navigation preload request failed with a network error.
Request:
It's strange because somehow index.html is requested no matter which site is loaded.
Additional information: this is happening in Chrome 89; in Chrome 88 everything seems fine (I checked in BrowserStack). I just saw there was a change in PWA offline detection in Chrome 89...
https://developer.chrome.com/blog/improved-pwa-offline-detection/
Does anybody have an idea what the problem might be?
Update
I rebuilt the problem here so everybody can check it out: https://dreamy-leavitt-bd4f0e.netlify.app/
This error is directly caused by the improved PWA offline detection you linked to:
https://developer.chrome.com/blog/improved-pwa-offline-detection/
The browser fakes an offline context and tries to request the start_url of your manifest, e.g. the index.html specified in your https://dreamy-leavitt-bd4f0e.netlify.app/site.webmanifest
This is to make sure that your service worker is actually returning a valid 200 response in this situation, i.e. the valid cached response for your index~offline.html page.
The error you're asking about comes specifically from the await event.preloadResponse part, and it apparently can't be suppressed.
The await fetch call produces a similar error, but that one can be suppressed: just don't console.log in the catch section.
Hopefully Chrome won't show this error from preload responses during offline PWA detection in the future, as it's needlessly confusing.
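To illustrate the suppression point, here is a trimmed sketch of the fetch handler from the question with the logging removed; the preload warning itself still appears, but the fetch failure during Chrome's simulated-offline probe no longer adds extra console noise:
self.addEventListener('fetch', event => {
  const { request } = event
  if (request.headers.has('range')) return
  event.respondWith(
    (async function() {
      const cachedResponse = await caches.match(request)
      if (cachedResponse) return cachedResponse
      try {
        const response = await event.preloadResponse
        if (response) return response
        return await fetch(request)
      } catch (err) {
        // If this was a navigation, show the offline page.
        // No console.log here, so the simulated-offline check stays quiet.
        if (request.mode === 'navigate') {
          return caches.match('index~offline.html')
        }
        throw err
      }
    })()
  )
})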

Slack webhooks cause cls-hooked request context to orphan mysql connections

The main issue:
We have a lovely little Express app, which has been crushing it for months with no issues. We manage our DB connections by opening a connection on demand, but then caching it "per request" using the cls-hooked library. When the request ends, we release the connection so our connection pool doesn't run out. Classic. Over the course of months and many connections, we've never "leaked" connections. Until now! Enter... Slack! We are using the Slack event handler as follows:
app.use('/webhooks/slack', slackEventHandler.expressMiddleware());
and we sort of think of it like any other request; however, Slack requests seem to play weirdly with our cls-hooked usage. For example, we use node-ts and nodemon to run our app locally (e.g. you change code, the app restarts automatically). Every time the app restarts locally on our dev machines and you play with Slack events, suddenly the middleware that releases the connection thinks there is nothing in the session when it tries to release. When you then hit a normal endpoint, it works fine and essentially seems to reset Slack to working okay again. We are now scared to go to prod with our Slack integration, because we're worried our Slack "requests" are going to starve our connection pool.
Background
Relevant subset of our package.json:
{
  "@slack/events-api": "^2.3.2",
  "@slack/web-api": "^5.8.0",
  "express": "~4.16.1",
  "cls-hooked": "^4.2.2",
  "mysql2": "^2.0.0"
}
The middleware that makes the cls-hooked session
import { session } from '../db';

const context = (req, res, next) => {
  session.run(() => {
    session.bindEmitter(req);
    session.bindEmitter(res);
    next();
  });
};

export default context;
The middleware that releases our connections
export const dbReleaseMiddleware = async (req, res, next) => {
  res.on('finish', async () => {
    const conn = session.get('conn');
    if (conn) {
      incrementConnsReleased();
      await conn.release();
    }
  });
  next();
};
The code that creates the connection on demand and stores it in "session":
const poolConn = await pool.getConnection();
if (session.active) {
  session.set('conn', poolConn);
}
return poolConn;
The code that sets up the session in the first place:
export const session = clsHooked.createNamespace('our_company_name');
If you got this far, congrats. Any help appreciated!
Side note: you couldn't pay me to write a more confusing title...
Figured it out! It seems we have identified the following behavior in the Node version of Slack's API (it seems to only happen on Mac computers... sometimes).
The issue is that this is in the context of an Express app, so Slack is managing the interface between its own event handler system and the HTTP side of things with Express (e.g. returning 200, 500, or whatever). So what seems to happen is...
// you have some slack event handler
slackEventHandler.on('message', async (rawEvent: any) => {
  let i = 0;
  i = i + 1;
  // at this point, the http request has not returned 200, it is "pending" from express's POV
  await myService.someMethod();
  // ^^ while this was doing its async thing, the express request returned 200,
  // so things like res.on('finish') all fired and all your middleware happened,
  // but your event handler code is still going
});
So we ended up adding a manual call to release connections in our Slack event handlers. Weird!
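For illustration, a rough sketch of what such a manual release might look like, based on the session.get('conn') pattern shown above (the exact shape of our real handlers isn't in the post, so treat this as an assumption):
slackEventHandler.on('message', async (rawEvent) => {
  try {
    await myService.someMethod();
  } finally {
    // Express's 'finish' handler has already run by this point, so release
    // whatever connection this handler's cls-hooked session still holds.
    const conn = session.get('conn');
    if (conn) {
      incrementConnsReleased();
      await conn.release();
    }
  }
});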

IndexedDB flush to disk on Chrome

I'm facing an issue with IndexedDB on Chrome where I reload my page once the transaction returns a successful write.
The problem is that sometimes the data is not there after the reload. I can solve this by adding a timeout of about 100ms before reloading, which leads me to believe that the data is not flushed to disk every time.
Firefox has an experimental readwriteflush mode which ensures data is flushed to disk before returning a success call, but I can't seem to find a similar option for Chrome. Any suggestions?
Here's my insert code:
const data = {type: type, value: value};
const objectStore = StorageService.db.transaction(['localData'], 'readwrite').objectStore('localData');
// readwriteflush doesn't work in chrome
// const objectStore = StorageService.db.transaction(['localData'], 'readwriteflush').objectStore('localData');
const requestSet = objectStore.put(data);
requestSet.onerror = function (event) {
  alert('Error in saving data locally');
};
requestSet.onsuccess = function (event) {
  console.log('Data was successfully saved locally: ' + type);
  if (callback != undefined) {
    callback();
  }
};
The callback has location.reload = '/'; executed in it (along with some other things), so the page reloads after onsuccess has fired.
After the page reloads, I cannot see any data in my IndexedDB storage, either via code or in developer tools. This does not always happen, however; I've observed that it happens only when the data is larger than usual.
"success" fired at a request does not indicate that the transaction has committed successfully. The transaction could later fail due to a separate failed request (e.g. a conflicting add call), I/O error, or e.g. power loss.
You need to wait for the "complete" event to be fired at the transaction. Chrome flushes to disk before firing the "complete" event.
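A minimal sketch of the insert code above, adjusted to act on the transaction's "complete" event instead of the request's "success" event (variable names are carried over from the question):
const data = {type: type, value: value};
const transaction = StorageService.db.transaction(['localData'], 'readwrite');
const objectStore = transaction.objectStore('localData');
const requestSet = objectStore.put(data);
requestSet.onerror = function (event) {
  alert('Error in saving data locally');
};
transaction.oncomplete = function (event) {
  // The transaction has committed (Chrome flushes to disk before firing
  // "complete"), so it is now safe to reload the page.
  console.log('Data was successfully saved locally: ' + type);
  if (callback != undefined) {
    callback();
  }
};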

Chrome doesn't use cache after power loss?

I am creating a digital signage player that uses Chrome as its display engine. We need to be able to muddle along without too much interruption if the network goes down.
Chrome caches images fine, and I've set the "Expires" header to be a month after access. I can set the player computer offline and have the app run for days with no problem. If I reboot the machine the right way (Start->Shut Down), caching still works as expected.
The issue is that when Chrome exits abnormally - Either a crash or power loss - on reboot, Chrome ignores the cache and refuses to load images. This happens if I cut power 5 minutes after it loads the page, so content is not expiring.
My guess is that Chrome is set to ignore the cache after an abnormal exit to prevent corrupted cache from continually crashing the browser. However, this behavior is not what I need.
Does anyone know of a command line arg or flag I can set to keep this from happening?
Thanks for your help.
I tried everything I could think of to make Chrome not invalidate the local cache on system failure, and came up empty. There are a few other people who had the same question, and I didn't see an answer.
Here's what I did that made this work, and if someone else is having the same problem, it might be the workaround that you need.
I added a service worker that would cache images. The code below isn't perfect yet, but should be a starting place for someone... (FYI, I learned this 5 minutes ago, so if someone wants to give me a pointer or two on how to make this more elegant, I'm all ears.)
We cache anything that has a response type of "cors" so we cache only images coming from the remote server. Note that your images must be loaded via https for this to work.
Taken (mostly) from: https://developers.google.com/web/fundamentals/getting-started/primers/service-workers
var CACHE_NAME = 'shine_cache';
var urlsToCache = [
  '/'
];

self.addEventListener('install', function(event) {
  // Perform install steps
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});

self.addEventListener('fetch', function(event) {
  //console.log('Handling fetch event for', event.request);
  if (event.request.method == 'POST') {
    //console.log("Skipping POST");
    event.respondWith(fetch(event.request));
    return;
  }
  if (event.request.headers.get('Accept').indexOf('image') !== -1) {
    event.respondWith(
      caches.match(event.request)
        .then(function(response) {
          // Cache hit - return response
          if (response) {
            console.log("Returning from cache.", event.request);
            return response;
          }
          // IMPORTANT: Clone the request. A request is a stream and
          // can only be consumed once. Since we are consuming this
          // once by cache and once by the browser for fetch, we need
          // to clone the request.
          var fetchRequest = event.request.clone();
          return fetch(fetchRequest).then(
            function(response) {
              console.log("Have a response.", response);
              // Check if we received a valid response
              if (!response || response.status !== 200 || response.type !== 'cors') {
                return response;
              }
              // IMPORTANT: Clone the response. A response is a stream
              // and because we want the browser to consume the response
              // as well as the cache consuming the response, we need
              // to clone it so we have two streams.
              var responseToCache = response.clone();
              caches.open(CACHE_NAME)
                .then(function(cache) {
                  console.log("Caching response", event.request);
                  cache.put(event.request, responseToCache);
                });
              return response;
            }
          );
        })
    );
  }
});
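For completeness, the page also needs to register the worker; here is a minimal registration sketch (the /sw.js filename is an assumption, since the answer doesn't show this part):
// Hypothetical registration snippet; adjust the path to wherever the worker script lives.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function(registration) {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function(err) {
      console.log('Service worker registration failed:', err);
    });
}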

webrtc: failed to send arraybuffer over data channel in chrome

I want to send streaming data (as sequences of ArrayBuffer) from a Chrome extension to a Chrome App. Since the Chrome message API (including chrome.runtime.sendMessage, postMessage, ...) does not support ArrayBuffer, and JS arrays have poor performance, I have to try other methods. Eventually, I found that WebRTC over an RTCDataChannel might be a good solution in my case.
I have succeeded in sending a string over an RTCDataChannel, but when I tried to send an ArrayBuffer I got:
code: 19
message: "Failed to execute 'send' on 'RTCDataChannel': Could not send data"
name: "NetworkError"
It seems it's not a bandwidth limit problem, since it fails even when I send just one byte of data. Here is my code:
pc = new RTCPeerConnection(configuration, { optional: [ { RtpDataChannels: true } ]});
//...
var dataChannel = m.pc.createDataChannel("mydata", {reliable: true});
//...
var ab = new ArrayBuffer(8);
dataChannel.send(ab);
Tested on OS X 10.10.1 with Chrome M40 (Stable) and M42 (Canary), and on a Chromebook with M40.
I have filed a bug for WebRTC here.
I modified my code, and now everything works:
Removed the RtpDataChannels option when creating the RTCPeerConnection. (Yes, remove the RtpDataChannels option if you want a data channel; what a magic world!) See the sender-side sketch after the receiver code below.
On the receiver side: no need to call createDataChannel; instead, handle onmessage (and the other handlers) using event.channel from the pc.ondatachannel callback:
pc.ondatachannel = function(event) {
  var receiveChannel = event.channel;
  receiveChannel.onmessage = function(event) {
    console.log("Got Data Channel Message:", event.data);
  };
};
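And a sketch of the sender side with the first change applied, i.e. the snippet from the question minus the RtpDataChannels option (sending only once the channel reports open is an added precaution, not something the answer spells out):
pc = new RTCPeerConnection(configuration);
var dataChannel = pc.createDataChannel("mydata", {reliable: true});
dataChannel.onopen = function() {
  // Binary payloads now go through without the NetworkError.
  var ab = new ArrayBuffer(8);
  dataChannel.send(ab);
};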