I've created a service-worker-enabled application that is intended to cache the response from an AJAX call so it's viewable offline. The issue I'm running into is that the service worker caches the page, but not the AJAX response, the first time it's loaded.
If you visit http://ivesjames.github.io/pwa and switch to airplane mode after the SW toast, it shows no API content. If you go back online, load the page, and do it again, it will load the API content offline on the second load.
This is what I'm using to cache the API response (taken from the Polymer docs):
(function(global) {
  global.untappdFetchHandler = function(request) {
    // Attempt to fetch(request). This will always make a network request, and will include the
    // full request URL, including the search parameters.
    return global.fetch(request).then(function(response) {
      if (response.ok) {
        // If we got back a successful response, great!
        return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
          // First, store the response in the cache, stripping away the search parameters to
          // normalize the URL key.
          return cache.put(stripSearchParameters(request.url), response.clone()).then(function() {
            // Once that entry is written to the cache, return the response to the controlled page.
            return response;
          });
        });
      }

      // If we got back an error response, raise a new Error, which will trigger the catch().
      throw new Error('A response with an error status code was returned.');
    }).catch(function(error) {
      // This code is executed when there's either a network error or a response with an error
      // status code was returned.
      return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
        // Normalize the request URL by stripping the search parameters, and then return a
        // previously cached response as a fallback.
        return cache.match(stripSearchParameters(request.url));
      });
    });
  };
})(self);
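For context, the stripSearchParameters helper isn't shown in the question; in the Polymer docs it's a small URL-normalizing function along these lines (a sketch, not necessarily the exact code):
function stripSearchParameters(url) {
  // Drop the query string so cache keys don't vary by search parameters.
  var strippedUrl = new URL(url);
  strippedUrl.search = '';
  return strippedUrl.toString();
}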
And then I define the handler in the sw-import:
<platinum-sw-import-script href="scripts/untappd-fetch-handler.js"></platinum-sw-import-script>
<platinum-sw-fetch handler="untappdFetchHandler"
                   path="/v4/user/checkins/jimouk?client_id=(apikey)&client_secret=(clientsecret)"
                   origin="https://api.untappd.com">
</platinum-sw-fetch>
<paper-toast id="caching-complete"
             duration="6000"
             text="Caching complete! This app will work offline.">
</paper-toast>
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      base-uri="bower_components/platinum-sw/bootstrap"
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-cache default-cache-strategy="fastest"
                     cache-config-file="cache-config.json">
  </platinum-sw-cache>
</platinum-sw-register>
Is there somewhere I'm going wrong? I'm not quite sure why it works on load #2 instead of load #1.
Any help would be appreciated.
While the skip-waiting + clients-claim attributes should cause your service worker to take control as soon as possible, it's still an asynchronous process that might not kick in until after your AJAX request is made. If you want to guarantee that the service worker will be in control of the page, then you'd need to either delay your AJAX request until the service worker has taken control (following, e.g., this technique), or alternatively, you can use the reload-on-install attribute.
Equally important, though, make sure that your <platinum-sw-import-script> and <platinum-sw-fetch> elements are children of your <platinum-sw-register> element, or else they won't have the intended effect. This is called out in the documentation, but unfortunately it's just a silent failure at runtime.
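For the first approach, a minimal sketch of deferring the request until the page is controlled (whenControlled and loadUntappdCheckins are illustrative names, not part of Polymer):
function whenControlled() {
  // Resolve once a service worker controls this page, so that its fetch
  // handler will see the AJAX request.
  if (navigator.serviceWorker.controller) {
    return Promise.resolve();
  }
  return new Promise(function(resolve) {
    navigator.serviceWorker.addEventListener('controllerchange', function() {
      resolve();
    });
  });
}

whenControlled().then(function() {
  loadUntappdCheckins(); // hypothetical function that issues the API request
});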
Related
I have a page running in a headless Chromium instance, and I'm manipulating it via the DevTools protocol, using the Puppeteer NPM package in Node.
I'm injecting a script into the page. At some point, I want the script to call me back and send me some information (via some event exposed by the DevTools protocol or some other means).
What is the best way to do this? It'd be great if it can be done using Puppeteer, but I'm not against getting my hands dirty and listening for protocol messages by hand.
I know I can sort-of do this by manipulating the DOM and listening to DOM changes, but that doesn't sound like a good idea.
Okay, I've discovered a built-in way to do this in Puppeteer. Puppeteer defines a method called exposeFunction.
page.exposeFunction(name, puppeteerFunction)
This method defines a function with the given name on the window object of the page. The function is async on the page's side. When it's called, the puppeteerFunction you define is executed as a callback, with the same arguments. The arguments aren't JSON-serialized, but passed as JSHandles so they expose the objects themselves. Personally, I chose to JSON-serialize the values before sending them.
I've looked at the code, and it actually just works by sending console messages, just like in Pasi's answer, which the Puppeteer console hooks ignore. However, if you listen to the console directly (e.g. by piping stdout), you'll still see them, along with the regular messages.
Since the console information is actually sent over a WebSocket, it's pretty efficient. I was a bit averse to using the console because in most processes it transfers data via stdout, which has its own issues.
Example
Node
async function example() {
  const puppeteer = require("puppeteer");
  let browser = await puppeteer.launch({
    // arguments
  });
  let page = await browser.newPage();
  // Expose window.callPuppeteer on the page; the callback runs in Node.
  await page.exposeFunction("callPuppeteer", function(data) {
    console.log("Node receives some data!", data);
  });
  await page.goto("http://www.example.com/target");
}

example();
Page
Inside the page's javascript:
window.callPuppeteer(JSON.stringify({
  thisCameFromThePage: "hello!"
}));
Update: DevTools protocol support
There is DevTools protocol support for something like puppeteer.exposeFunction.
https://chromedevtools.github.io/devtools-protocol/tot/Runtime#method-addBinding
If executionContextId is empty, adds binding with the given name on
the global objects of all inspected contexts, including those created
later, bindings survive reloads. If executionContextId is specified,
adds binding only on global object of given execution context. Binding
function takes exactly one argument, this argument should be string,
in case of any other input, function throws an exception. Each binding
function call produces Runtime.bindingCalled notification.
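With Puppeteer you can reach this via a raw CDP session. A hedged sketch, to be run inside an async function (the binding name sendToNode is illustrative):
// Open a raw DevTools protocol session for the page.
const session = await page.target().createCDPSession();

// Expose window.sendToNode in the page. Each call from the page produces a
// Runtime.bindingCalled notification that we can listen for here in Node.
await session.send('Runtime.addBinding', { name: 'sendToNode' });
session.on('Runtime.bindingCalled', ({ name, payload }) => {
  if (name === 'sendToNode') {
    console.log('Node got:', JSON.parse(payload)); // payload is always a string
  }
});

// In the page: window.sendToNode(JSON.stringify({ hello: 'world' }));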
If the script sends all its data back in one call, the simplest approach would be to use page.evaluate and return a Promise from it:
const dataBack = page.evaluate(`new Promise((resolve, reject) => {
  setTimeout(() => resolve('some data'), 1000)
})`)
dataBack.then(value => { console.log('got data back', value) })
This could be generalized to sending data back twice, etc. For sending back an arbitrary stream of events, perhaps console.log would be slightly less of a hack than DOM events? At least it's super-easy to do with Puppeteer:
page.on('console', message => {
  // Note: in newer Puppeteer versions, text and args are methods
  // (message.text(), message.args()); this uses the older property form.
  if (message.text.startsWith('dataFromMyScript')) {
    message.args[1].jsonValue().then(value => console.log('got data back', value))
  }
})
page.evaluate(`setInterval(() => console.log('dataFromMyScript', {ts: Date.now()}), 1000)`)
(The example uses a magic prefix to distinguish these log messages from all others.)
We have images that redirect from our media server to a CDN, which I'm trying to exclude from my service worker logic to work around the bug in Chrome 40. In Canary the same worker works just fine. I thought there was an event.default() to fall back to the standard behavior, but I don't see that in Chrome's implementation, and reading the spec it seems the current recommendation is to just use fetch(event.request).
So the question is: do I have to wait until 99% of our users move to Chrome 41+ in order to use service workers in this scenario, or is there some way to opt out for certain requests?
The core of my logic is below:
worker.addEventListener('install', function(event) {
  event.waitUntil(getDefaultCache().then(function(cache) {
    return cache.addAll(precacheUrls);
  }));
});

worker.addEventListener('fetch', function(event) {
  event.respondWith(getDefaultCache().then(function(cache) {
    return cache.match(event.request).then(function(response) {
      if (!response) {
        return fetch(event.request.clone()).then(function(response) {
          if (cacheablePatterns.some(function(pattern) {
            return pattern.test(event.request.url);
          })) {
            cache.put(event.request, response.clone());
          }
          return response;
        });
      }
      return response;
    });
  }));
});
Once you're inside an event.respondWith(), you do need to issue a response, or you'll incur a Network Error. You're correct that event.default() isn't currently implemented.
A general solution is to not enter the event.respondWith() if you can determine synchronously that you don't want to handle the event. A basic example is something like:
function fetchHandler(event) {
  if (event.request.url.indexOf('abc') >= 0) {
    event.respondWith(abcResponseLogic);
  } else if (event.request.url.indexOf('def') >= 0) {
    event.respondWith(defResponseLogic);
  }
}

self.addEventListener('fetch', fetchHandler);
If event.respondWith() isn't called, then this fetch handler is a no-op, and any additional registered fetch handlers get a shot at the request. Multiple fetch handlers are called in the order in which they're added via addEventListener, one at a time, until the first one calls event.respondWith().
If no fetch handlers call event.respondWith(), then the user agent makes the request exactly as it normally would if there were no service worker involvement.
The one tricky thing to take into account is that the decision about whether to call event.respondWith() must be made synchronously inside each fetch handler. Anything that relies on asynchronous promise resolution can't be used to make that decision. If you attempt to do something asynchronous and then call event.respondWith(), you'll end up with a race condition, and will likely see errors in the service worker console about being unable to respond to an event that was already handled. A sketch applying this to the question's scenario follows.
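Applied to the CDN scenario from the question, it might look like the following (cdnPattern is an illustrative placeholder for whatever identifies the redirecting image URLs):
// Requests matching this pattern bypass the service worker entirely.
var cdnPattern = /^https:\/\/media\.example\.com\//; // hypothetical pattern

self.addEventListener('fetch', function(event) {
  if (cdnPattern.test(event.request.url)) {
    // No event.respondWith(): the browser performs the request exactly as it
    // would without a service worker, redirects included.
    return;
  }
  event.respondWith(
    caches.match(event.request).then(function(response) {
      return response || fetch(event.request);
    })
  );
});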
I am trying to implement a Server-Sent Events (SSE) webpage powered by Spring. My test code does the following:
The browser uses EventSource(url) to connect to the server. Spring accepts the request with the following controller code:
@RequestMapping(value = "myurl", method = RequestMethod.GET, produces = "text/event-stream")
@ResponseBody
public DeferredResult<String> subscribe() throws Exception {
    final DeferredResult<String> deferredResult = new DeferredResult<>();
    resultList.add(deferredResult);
    deferredResult.onCompletion(() -> {
        logTimer.info("deferedResult " + deferredResult + " completion");
        resultList.remove(deferredResult);
    });
    return deferredResult;
}
So mainly it puts the DeferredResult into a List and registers a completion callback so that I can remove it from the List on completion.
Now I have a timer method that periodically outputs the current timestamp to all registered "browsers" via their DeferredResults.
@Scheduled(fixedRate = 10000)
public void processQueues() {
    Date d = new Date();
    log.info("outputting to " + LoginController.resultList.size() + " connections");
    LoginController.resultList.forEach(deferredResult -> deferredResult.setResult("data: " + d.getTime() + "\n\n"));
}
The data is sent to the browser and the following client code works:
var source = new EventSource('/myurl');
source.addEventListener('message', function (e) {
  console.log(e.data);
  $("#content").append(e.data).append("<br>");
});
Now the problem:
The completion callback on the DeferredResult is called on every setResult() call in the timer thread. So for some reason the connection is closed after the setResult() call. SSE in the browser reconnects as per the spec, and then the same thing happens again. So on the client side I get polling behavior, but what I want is a kept-open request that I can push data through over the same DeferredResult again and again.
Am I missing something here? Is DeferredResult not capable of sending multiple results? I put a 10-second delay into the timer thread to see whether the request only terminates after setResult(): in the browser the request is kept open until the timer pushes the data, but then it's closed.
Thanks for any hint on this. One more note: I added async-supported to all filters/servlets in Tomcat.
Indeed, a DeferredResult can be set only once (notice that setResult returns a boolean). It completes processing with the full range of Spring MVC processing options; that is, everything you know about what happens during a Spring MVC request remains more or less the same, except that the return value is produced asynchronously.
What you need for SSE is something more focused, i.e. writing each value to the response using an HttpMessageConverter. I've created a ticket for that: https://jira.spring.io/browse/SPR-12212.
Note that Spring's SockJS support does have an SSE transport which takes care of a few extras such as cross-domain requests with cookies (important for IE). It's also used on top of a WebSocket API and WebSocket-style messaging (even if WebSocket is not available on either the client or the server side) which fully abstracts the details of HTTP long polling.
As a workaround you can also write directly to the Servlet response using an HttpMessageConverter.
I'm trying to access local storage from content scripts but even though the message passing is working, the output isn't as expected.
CONTENT SCRIPT
var varproxy = localStorage.getItem('proxy'); // gets data from options page saved to local storage
var proxy = "proxystring";
chrome.runtime.sendMessage({message: "hey"},
  function(response) {
    proxy = response.proxy;
    console.log(response.proxy);
  }
);
console.log(proxy);
BACKGROUND PAGE (For message passing)
chrome.runtime.onMessage.addListener(
  function(request, sender, sendResponse) {
    if (request.message == "hey") {
      sendResponse({proxy: varproxy});
      console.log('response sent');
    } else {
      sendResponse({});
    }
  }
);
The console logs the proxy as the value of varproxy, and also logs "response sent", but the
console.log(proxy);
logs the proxy as "proxystring"
Why isn't the value of proxy getting changed? How do I change it as required?
Message sending, like many Chrome API functions, is asynchronous. The interpreter won't wait for the response but jumps to the next line, so it can easily happen that console.log(proxy) is evaluated first, since communicating with the background page takes some time. As soon as the response is received, the value of proxy changes.
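The usual fix is to move any code that depends on the response into the callback itself. A minimal sketch (useProxy is a hypothetical stand-in for whatever needs the value):
chrome.runtime.sendMessage({message: "hey"}, function(response) {
  var proxy = response.proxy;
  console.log(proxy); // logs the value sent by the background page
  useProxy(proxy);    // hypothetical: continue only once the value is known
});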
Might I recommend you try out another implementation? What about Chrome Storage?
Then you don't need any message passing at all, because you can access chrome storage within content scripts.
For example, this is something I do in my extension's content script to grab several values from Chrome storage:
chrome.storage.sync.get({HFF_toolbar: 'yes', HFF_logging: 'yes', HFF_timer: '1 Minute'},
  function (obj) {
    toolbar_option = obj.HFF_toolbar;
    logging_option = obj.HFF_logging;
    timer_option = obj.HFF_timer;
    /* the rest of my content script, using those options */
  }
);
I personally found this approach much easier, for my purposes anyway, than message passing implementations.
Is it possible to intercept a 404 error without using a web server (i.e. when browsing an HTML file on the filesystem)?
I tried some JavaScript, using a hidden iframe that preloads the destination page, checks the result, and then triggers a custom error or redirects to the correct page.
This works fine but performs poorly.
A 404 error is an HTTP status response. So unless you are trying to retrieve this file using an HTTP request/response, you can't have a genuine 404 error. You can only mimic one in something like the way you suggest. Any "standard" way of handling a 404 error is dependent on your flavour of web server anyway...
404 is an HTTP response code, and as such it is only delivered through the HTTP protocol by servers that speak it. The file:// scheme isn't a real protocol with responses as such; it's a hack built into clients (like browsers) to enable local file support, and it's up to browsers/clients themselves whether they expose any response codes from their file:// implementation. In theory they could report them in the DOM, for example, but those would be response codes they expose to themselves, so this is rarely implemented. Most don't, and there is no standard way to get at them. You could look into browser extensions (in Firefox, for example) and see whether they support it, but that is highly nonstandard and will likely break if you put it on the web.
Why don't you want to use the server?
I don't believe that it's possible to handle a 404 error client-side, because a 404 error is server-side.
Whenever you load a webpage, you make a request to the server. Thus, when you ask for a file that's not there, it's the server that handles the error. Regular HTML/CSS/JavaScript only come into the picture when the server sends back a response to tell you that it can't find the file.
Steve
Since I was looking for this today: you can now do this without a web server by using a service worker to cache a custom 404 page and then serve it when a fetch request comes back with status 404. Following the instructions in the Google cache lab, the worker file looks as follows:
const filesToCache = [
  '/',
  '404.html'
];
const staticCacheName = 'pages-cache-v1';

self.addEventListener('install', event => {
  console.log('Attempting to install service worker and cache static assets');
  event.waitUntil(
    caches.open(staticCacheName).then(cache => {
      return cache.addAll(filesToCache);
    })
  );
});

self.addEventListener('fetch', event => {
  console.log('Fetch event for ', event.request.url);
  event.respondWith(
    caches.match(event.request).then(response => {
      if (response) {
        console.log('Found ', event.request.url, ' in cache');
        return response;
      }
      console.log('Network request for ', event.request.url);
      return fetch(event.request).then(response => {
        console.log('response.status:', response.status);
        // fetch request returned 404, serve custom 404 page
        if (response.status === 404) {
          return caches.match('404.html');
        }
        // Otherwise pass the network response through to the page.
        return response;
      });
    })
  );
});
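For completeness, the page still needs to register the worker. A minimal sketch, assuming the file is served as service-worker.js from the site root:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(registration => {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(error => {
      console.log('Service worker registration failed:', error);
    });
}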